Replication requires Meilisearch Enterprise Edition v1.37 or later and a configured network.
How replication works
When you configure shards, each shard can be assigned to one or more remotes. If a shard is assigned to multiple remotes, Meilisearch replicates its data to each of them. During a search, Meilisearch queries each shard exactly once, picking one of the available remotes for that shard (prioritizing the local, or self, remote). This avoids duplicate results.
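As a sketch of the idea (the field layout is illustrative rather than the exact Enterprise schema; remote and shard names are hypothetical), a network where shard s0 lives only on ms-00 while shard s1 is replicated to both remotes might look like:

```json
{
  "self": "ms-00",
  "leader": "ms-00",
  "remotes": {
    "ms-00": { "url": "http://ms-00:7700", "searchApiKey": "SEARCH_KEY" },
    "ms-01": { "url": "http://ms-01:7700", "searchApiKey": "SEARCH_KEY" }
  },
  "shards": {
    "s0": { "remotes": ["ms-00"] },
    "s1": { "remotes": ["ms-00", "ms-01"] }
  }
}
```

With this layout, a search handled by ms-00 resolves both shards locally, while s1 is still queried on only one of its two remotes, so no results are duplicated.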
Assign shards to multiple remotes
To replicate a shard, list multiple remotes in its configuration.

Common replication patterns
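For a three-shard index on remotes ms-00 through ms-02 (names hypothetical, schema a sketch), the patterns below differ only in each shard's remote list. For full replication, every list would contain all three remotes; an N+1 layout staggers the replicas instead:

```json
{
  "shards": {
    "s0": { "remotes": ["ms-00", "ms-01"] },
    "s1": { "remotes": ["ms-01", "ms-02"] },
    "s2": { "remotes": ["ms-02", "ms-00"] }
  }
}
```

In this staggered layout each remote holds two shards, and losing any single remote still leaves one copy of every shard.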
Full replication (every shard on every remote)
Best for small datasets where you want maximum availability and read throughput.

N+1 replication
Each shard on two remotes, spread across the cluster.

Geographic replication
Place replicas in different regions to reduce latency for geographically distributed users.

Remote availability
When a network search runs, Meilisearch builds an internal set of remotes to query: it assigns each shard to a remote, then sends one query per remote with a shard filter. This guarantees that no shard is queried twice and that results are never duplicated. The downside is that there is no automatic fallback. If the remote assigned to a shard is unreachable, that shard's results are missing from the response; Meilisearch does not yet retry using another replica that holds the same shard.

Scaling read throughput
Replication is the primary way to scale search throughput in Meilisearch. Each replica can independently handle search requests, so adding more replicas increases the total number of concurrent searches your cluster can handle. To add a new replica for an existing shard, add the new remote and use addRemotes to append it to the shard without rewriting the full assignment:
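A sketch of such a call (the payload shape is illustrative; ms-03 and s1 are hypothetical names), sent with PATCH /network:

```json
{
  "shards": {
    "s1": { "addRemotes": ["ms-03"] }
  }
}
```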
This registers a NetworkTopologyChange task that replicates the shard's documents to ms-03.
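The resulting task can be followed like any other Meilisearch task, for example by querying the tasks endpoint with the task's uid (TASK_UID is a placeholder):

```http
GET /tasks?uids=TASK_UID
```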
The leader instance
The leader is responsible for all write operations (document additions, settings changes, index management). Non-leader instances reject writes with a not_leader error.
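A write sent to a non-leader instance would then be rejected with a response along these lines (the error code comes from this page; the message and type values are illustrative):

```json
{
  "message": "This instance is not the leader. Send write operations to the leader instance.",
  "code": "not_leader",
  "type": "invalid_request"
}
```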
If the leader goes down:
- Search may be affected: if search requests are routed to the downed leader, they will fail
- Writes are blocked: no documents can be added or updated until a leader is available. Note that remote instances that are still alive continue to process previously enqueued tasks
- Manual promotion: you must designate a new leader by updating the network topology with PATCH /network and setting "leader" to another instance
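A promotion sketch, assuming the surviving instance ms-01 should become the new leader (the remote name is hypothetical); the body is sent with PATCH /network:

```json
{
  "leader": "ms-01"
}
```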
Monitoring replica health
Check the current network topology with GET /network to see which remotes are configured.

Next steps
Set up a sharded cluster
Start from scratch with a full cluster setup guide.
Manage the network
Add and remove remotes, update shard assignments.
Replication and sharding overview
Understand the concepts and feature compatibility.
Data backup
Configure snapshots and dumps for your cluster.