Replication assigns the same shard to multiple remotes in your Meilisearch network. This guide covers how to configure replication, common replication patterns, and how to scale read throughput.
Replication requires Meilisearch Enterprise Edition v1.37 or later and a configured network.

How replication works

When you configure shards, each shard can be assigned to one or more remotes. If a shard is assigned to multiple remotes, Meilisearch replicates the data to each of them. During a search, Meilisearch queries each shard exactly once, picking one of the available remotes for each shard (prioritizing the self/local remote). This avoids duplicate results.
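Once the network is configured, replicated searches need no special syntax: send a regular search request to any instance and it fans out across shards as described above. A minimal sketch, assuming an index named movies (the index name and query are illustrative):

```shell
# Send a normal search to any instance in the network; it queries
# each shard exactly once, preferring its local copy when one exists.
curl \
  -X POST 'MEILISEARCH_URL/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{ "q": "shoes" }'
```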

Assign shards to multiple remotes

To replicate a shard, list multiple remotes in its configuration:
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "shards": {
      "shard-a": { "remotes": ["ms-00", "ms-01"] },
      "shard-b": { "remotes": ["ms-01", "ms-02"] },
      "shard-c": { "remotes": ["ms-02", "ms-00"] }
    }
  }'
In this configuration, every shard exists on two remotes. If any single instance goes down, all shard data still exists on another instance.

Common replication patterns

Full replication (every shard on every remote)

Best for small datasets where you want maximum availability and read throughput:
{
  "shards": {
    "shard-a": { "remotes": ["ms-00", "ms-01", "ms-02"] }
  }
}
All three remotes hold the same data. This is effectively a read-replica setup: you get 3x the search capacity, and any two instances can go down without affecting search availability.

N+1 replication

Each shard on two remotes, spread across the cluster:
{
  "shards": {
    "shard-a": { "remotes": ["ms-00", "ms-01"] },
    "shard-b": { "remotes": ["ms-01", "ms-02"] },
    "shard-c": { "remotes": ["ms-02", "ms-00"] }
  }
}
This is the recommended pattern for most use cases. It balances data redundancy, search throughput, and storage efficiency. Each instance holds 2 shards, and losing any single instance still leaves all shards available.

Geographic replication

Place replicas in different regions to reduce latency for geographically distributed users:
{
  "shards": {
    "shard-a": { "remotes": ["us-east-01", "eu-west-01"] },
    "shard-b": { "remotes": ["us-east-02", "eu-west-02"] }
  }
}
Route search requests to the closest cluster. Both regions hold all data, so either can serve a full result set. By default, Meilisearch prioritizes its local copy of each shard and does not forward the request to a remote server. Make sure search requests are sent to the instance closest to the user so this setup stays efficient.
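For example, European clients would query the eu-west instance directly (hostname, port, index name, and key are illustrative):

```shell
# European clients query the eu-west instance directly, so both
# shard-a and shard-b are served from replicas in the same region.
curl \
  -X POST 'http://eu-west-01.example.com:7700/indexes/products/search' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer SEARCH_KEY_EU' \
  --data-binary '{ "q": "running shoes" }'
```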

Remote availability

When a network search runs, Meilisearch builds an internal set of remotes to query: it assigns each shard to a remote, then sends one query per remote with a shard filter. This guarantees that no shard is queried twice and that results are never duplicated. The downside is that there is no automatic fallback. If the remote assigned to a shard is unreachable, that shard’s results are missing from the response — Meilisearch does not yet retry using another replica that holds the same shard.

Scaling read throughput

Replication is the primary way to scale search throughput in Meilisearch. Each replica can independently handle search requests, so adding more replicas increases the total number of concurrent searches your cluster can handle. To add a new replica for an existing shard, add the new remote and use addRemotes to append it to the shard without rewriting the full assignment:
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{
    "remotes": {
      "ms-03": {
        "url": "http://ms-03.example.com:7703",
        "searchApiKey": "SEARCH_KEY_03",
        "writeApiKey": "WRITE_KEY_03"
      }
    },
    "shards": {
      "shard-a": { "addRemotes": ["ms-03"] }
    }
  }'
This triggers a NetworkTopologyChange task that replicates the shard’s documents to ms-03.
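You can follow the progress of the replication with the tasks API. The sketch below filters by task type; the exact type string networkTopologyChange is an assumption to verify against your version's task list:

```shell
# List network topology change tasks and inspect their status
curl \
  -X GET 'MEILISEARCH_URL/tasks?types=networkTopologyChange' \
  -H 'Authorization: Bearer MEILISEARCH_KEY'
```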

The leader instance

The leader is responsible for all write operations (document additions, settings changes, index management). Non-leader instances reject writes with a not_leader error. If the leader goes down:
  • Search may be affected: if search requests are routed to the downed leader, they will fail
  • Writes are blocked: no documents can be added or updated until a leader is available. Instances that remain online continue processing tasks that were already enqueued
  • Manual promotion: you must designate a new leader by updating the network topology with PATCH /network and setting "leader" to another instance
There is no automatic leader election. If your leader goes down, you must manually promote a new one. Plan for this in your deployment strategy.
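Promoting a new leader is a single PATCH /network call, sent to an instance that is still reachable (the instance name below is illustrative):

```shell
# Promote ms-01 to leader after the previous leader goes down
curl \
  -X PATCH 'MEILISEARCH_URL/network' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer MEILISEARCH_KEY' \
  --data-binary '{ "leader": "ms-01" }'
```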

Monitoring replica health

Check the current network topology to see which remotes are configured:
curl \
  -X GET 'MEILISEARCH_URL/network' \
  -H 'Authorization: Bearer MEILISEARCH_KEY'
To verify that a specific remote is responding, query it directly or use the health endpoint:
curl 'http://ms-01.example.com:7701/health'
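To probe every remote at once, a simple loop works. Hostnames and ports below are illustrative; a healthy instance typically answers {"status": "available"}:

```shell
# Probe the /health endpoint of each remote in the cluster,
# printing "unreachable" for any instance that fails to respond.
for host in ms-00.example.com:7700 ms-01.example.com:7701 ms-02.example.com:7702; do
  printf '%s: ' "$host"
  curl -sf --max-time 5 "http://$host/health" || printf 'unreachable'
  echo
done
```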

Next steps

Set up a sharded cluster

Start from scratch with a full cluster setup guide.

Manage the network

Add and remove remotes, update shard assignments.

Replication and sharding overview

Understand the concepts and feature compatibility.

Data backup

Configure snapshots and dumps for your cluster.