Multiple Elasticsearch Clusters or a Monster Cluster?
UPDATE: This article refers to our hosted Elasticsearch offering by an older name, Found. Please note that Found is now known as Elastic Cloud.
Elasticsearch provides a pretty large toolbox for composing complex cluster topologies. You can make heterogeneous clusters where beefy nodes host your hot indices and less expensive nodes host historical data, e.g. using node attributes and shard allocation filtering.
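As a rough illustration, here is what such a hot/warm split could look like with the official Python client (elasticsearch-py). The node attribute name, index names, and endpoint are made up for the example, and exact attribute keys and call signatures vary between Elasticsearch and client versions:

```python
# A minimal sketch of shard allocation filtering, assuming the elasticsearch-py
# client and nodes tagged with a custom attribute in elasticsearch.yml, e.g.:
#
#   node.attr.box_type: hot    # on the beefy nodes
#   node.attr.box_type: warm   # on the cheaper nodes
#
# (the exact attribute key depends on your Elasticsearch version).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Create this week's index on the hot nodes.
es.indices.create(
    index="logs-2015.09.21",
    body={"settings": {"index.routing.allocation.require.box_type": "hot"}},
)

# Later, move older indices over to the cheaper warm nodes;
# Elasticsearch relocates their shards in the background.
es.indices.put_settings(
    index="logs-2015.08.*",
    body={"index.routing.allocation.require.box_type": "warm"},
)
```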
While you can use these features to make the One Monster Cluster to rule them all, this post presents some arguments on why managing multiple separate clusters can actually be simpler than having a single large multi-purpose cluster, even though it means managing more nodes.
Performance Reasoning and Limiting Blast Radius
Elasticsearch is used for a lot of different things, and the different use cases can have wildly different performance characteristics and operational requirements.
While you can have a single cluster handling your app’s autocompletion, full text search, analytics, and logging, at some point the success or failure of one of these will cause grief. A bug or traffic surge can send your logging activity through the roof, overwhelming the cluster and bringing your search to a crawl, possibly making your site unusably slow exactly when it shouldn’t be.
Autocompletion workloads require having almost everything in memory and a lot of CPU to respond within the tight “instantaneous” time budget. So do e-commerce style searches with a lot of aggregations backing navigational aids. Plain “full text search” can be less demanding, as can write-only indexing of logs until someone suddenly runs a fancy aggregation over a huge timespan.
Mixing all these different workloads together makes reasoning about performance hard. Is it OK to drop some logs while your site is seeing a sudden burst of traffic? How slow can autocompletion or navigational search be while still providing an acceptable user experience? How do we ensure high priority searches remain fast when unexpected things happen? Can we scale high priority workloads separately? How would a failure cascade?
Analysing how load cascades through your system and how failures do are closely related exercises – a slow system can be indistinguishable from a dead one.
Therefore, separating different-priority workloads into completely separate clusters (and processes) makes it much simpler to contain the “blast radius” of a failure. If your autocompletion cluster fails, it shouldn’t bring down the rest of your search and analytics with it. Handling multi-tenancy is hard, and resource isolation is better left to the operating system, which has full control over the memory and CPU limits and priorities of processes.
Upgrading by Cloning
While most upgrades of Elasticsearch can be done inline through rolling restarts, major version upgrades require a full cluster restart.
If you want to do the upgrade without any downtime, you will need to clone the cluster and route traffic to the new cluster when it is ready.
If you do not index continuously, this should be a rather simple process. If you need to keep handling index requests while cloning, things get a bit trickier, and your indexing/syncing process needs to be able to index to multiple clusters. Our post on keeping Elasticsearch in sync may have some interesting pointers here.
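To illustrate, here is a minimal sketch of what such dual-writing could look like with the official Python client. The cluster endpoints, index name, and document are made up, and a real setup would want queuing and retries rather than writing synchronously to both clusters:

```python
# A hypothetical sketch of application-layer dual-writing while cloning a
# cluster for an upgrade, assuming the elasticsearch-py client; the endpoints
# and index name are invented for the example.
from elasticsearch import Elasticsearch

old_cluster = Elasticsearch("https://old-cluster.example.com:9243")
new_cluster = Elasticsearch("https://new-cluster.example.com:9243")

def index_document(doc_id, doc):
    """Write to both clusters so the clone stays in sync during the upgrade.

    In production you would typically queue and retry failed writes (e.g. via
    a message broker) instead of failing the request outright.
    """
    for cluster in (old_cluster, new_cluster):
        cluster.index(index="products", id=doc_id, body=doc)

index_document("42", {"name": "Trusty teapot", "price": 9.95})
```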
Since Elasticsearch cannot be downgraded once upgraded, it’s important to test carefully when upgrading. Upgrading through cloning is arguably the safest way to upgrade: no changes are done to the cluster you know works. Rolling back means just continuing to use the existing cluster, while resolving any issues that were discovered with the new version.
A Cluster in Every Continent
If you have users all over the world, being able to index to multiple clusters enables globally distributed replicas of your indices. This lets you do latency-based routing, where you route users’ requests to the cluster closest to them.
Extreme availability requirements can also be met, as you can fail over across continents and data centres that share absolutely nothing.
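As a sketch of what this can look like in the application layer, the following hypothetical example routes searches to a per-region cluster and falls back to the others when the closest one is unreachable. The endpoints, region names, and index are made up, and it assumes the official Python client:

```python
# A hypothetical sketch of region-based routing with failover, assuming one
# cluster per region and the elasticsearch-py client; names are invented.
from elasticsearch import ConnectionError, Elasticsearch

CLUSTERS = {
    "eu": Elasticsearch("https://eu-cluster.example.com:9243"),
    "us": Elasticsearch("https://us-cluster.example.com:9243"),
}

def search_near(user_region, query):
    """Try the user's closest cluster first, then fail over to the others."""
    regions = [user_region] + [r for r in CLUSTERS if r != user_region]
    for region in regions:
        try:
            return CLUSTERS[region].search(index="products", body={"query": query})
        except ConnectionError:
            continue  # this region is unreachable, try the next one
    raise RuntimeError("No cluster reachable in any region")

results = search_near("eu", {"match": {"name": "teapot"}})
```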
In the future, Elasticsearch is likely to be able to do “multi-DC replication” like this for you, through a “changes API”. The infrastructure necessary to achieve this is currently being worked on, but as of this writing you will need to handle this in your application layer.
Bulk (re-)Indexing
If you need to do a large initial bulk indexing, e.g. because you’re adding a new data source or need to reindex due to mapping improvements, it can make sense to create a new cluster specifically for that purpose.
Creating an index-only cluster will let your production search cluster continue serving searches without the additional burden of doing large scale indexing, and you can get the indexing done faster by running a large cluster for a few hours. When your indexing is done, use snapshot/restore to migrate the indices to the search cluster. Then delete the temporary indexing cluster.
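A rough sketch of that migration with the official Python client could look like the following. The repository name, bucket, index, and endpoints are made up, and snapshot API signatures vary between client versions:

```python
# A hypothetical sketch of moving freshly built indices from a temporary
# indexing cluster to the production search cluster via snapshot/restore,
# assuming the elasticsearch-py client and an S3 snapshot repository both
# clusters can reach; all names are invented for the example.
from elasticsearch import Elasticsearch

indexing_cluster = Elasticsearch("https://indexing-cluster.example.com:9243")
search_cluster = Elasticsearch("https://search-cluster.example.com:9243")

repo_body = {"type": "s3", "settings": {"bucket": "my-es-snapshots"}}

# Both clusters must be able to see the repository; only the indexing
# cluster should write to it.
indexing_cluster.snapshot.create_repository(repository="migration", body=repo_body)
search_cluster.snapshot.create_repository(repository="migration", body=repo_body)

# Snapshot the newly built index on the indexing cluster...
indexing_cluster.snapshot.create(
    repository="migration",
    snapshot="products-v2",
    body={"indices": "products-v2"},
    wait_for_completion=True,
)

# ...then restore it on the search cluster. Once it is green, the temporary
# indexing cluster can be deleted.
search_cluster.snapshot.restore(
    repository="migration",
    snapshot="products-v2",
    body={"indices": "products-v2"},
)
```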
Testing and Experimenting
Experimenting with new searches and aggregations on a production cluster can lead to great new insights, but if you’re not careful, it can also crash your cluster.
If you need to test new things, such as an aggregation introduced in the newest Elasticsearch version or a redesigned ranking model whose performance characteristics are still unknown, your safest option is again to create a new cluster to run your experiments. To get real insights, you need realistic test data, such as a clone of your production data. (Just make sure you also secure your development cluster!)
Multiple Clusters on Found
While there are several advantages to having multiple clusters, the downside is of course having to manage more moving parts.
We have made sure the official hosted Elasticsearch service enables you to easily manage a fleet of clusters. Creating or upgrading a cluster is done in a few clicks. We bill by the hour, making the service excellent for short-lived throwaway clusters, in addition to your continuous production workloads.
We have also made it simple to use Elasticsearch’s snapshot/restore features to copy indices between clusters, which is important when implementing the strategies discussed above. Found’s snapshot restore page lets you select a snapshot and customise which indices you want restored to your destination cluster.
Summary
While managing multiple clusters comes at a cost, and cloning a massive cluster to upgrade it can be impractical, this post has presented a few cases where you should consider separating your workloads into multiple clusters:
- Different workloads do not scale the same way, and may have different priorities.
- Performance degradation and failure can be contained.
- Upgrades can be done much more safely.
- Developing new searches on real data should not risk your production environment.
- Our Elasticsearch service makes it easier, but you can certainly do it yourself too.