Searchable snapshots
Searchable snapshots let you use snapshots to search infrequently accessed and read-only data in a very cost-effective fashion. The cold and frozen data tiers use searchable snapshots to reduce your storage and operating costs.
Searchable snapshots eliminate the need for replica shards after rolling over from the hot tier, potentially halving the local storage needed to search your data. Searchable snapshots rely on the same snapshot mechanism you already use for backups and have minimal impact on your snapshot repository storage costs.
Using searchable snapshots
Searching a searchable snapshot index is the same as searching any other index.
By default, searchable snapshot indices have no replicas. The underlying snapshot
provides resilience and the query volume is expected to be low enough that a
single shard copy will be sufficient. However, if you need to support a higher
query volume, you can add replicas by adjusting the index.number_of_replicas
index setting.
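For example, you can add a replica with the update index settings API; this is a minimal sketch, and the index name restored-my-index is illustrative:

PUT /restored-my-index/_settings
{
  "index.number_of_replicas": 1
}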
If a node fails and searchable snapshot shards need to be recovered elsewhere, there is a brief window of time while Elasticsearch allocates the shards to other nodes during which the cluster health will not be green. Searches that hit these shards may fail or return partial results until the shards are reallocated to healthy nodes.
You typically manage searchable snapshots through ILM. The searchable snapshots action automatically converts a regular index into a searchable snapshot index when it reaches the cold or frozen phase. You can also make indices in existing snapshots searchable by manually mounting them using the mount snapshot API.
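As a sketch, an ILM policy that converts indices to searchable snapshots in the cold phase might look like the following; the policy name, repository name, and min_age are assumptions you would replace with your own values:

PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "cold": {
        "min_age": "30d",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "my_repository"
          }
        }
      }
    }
  }
}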
To mount an index from a snapshot that contains multiple indices, we recommend creating a clone of the snapshot that contains only the index you want to search, and mounting the clone. You should not delete a snapshot if it has any mounted indices, so creating a clone enables you to manage the lifecycle of the backup snapshot independently of any searchable snapshots. If you use ILM to manage your searchable snapshots then it will automatically look after cloning the snapshot as needed.
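For example, assuming a multi-index snapshot named my_snapshot in a repository named my_repository (both names illustrative), you could clone just the index you want to search and then mount the clone:

PUT /_snapshot/my_repository/my_snapshot/_clone/my_snapshot_clone
{
  "indices": "my-index"
}

POST /_snapshot/my_repository/my_snapshot_clone/_mount?wait_for_completion=true
{
  "index": "my-index"
}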
You can control the allocation of the shards of searchable snapshot indices using the same mechanisms as for regular indices. For example, you could use Index-level shard allocation filtering to restrict searchable snapshot shards to a subset of your nodes.
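As a sketch, the following request pins the shards of a mounted index to nodes with a particular node attribute; the attribute name box_type and its value are assumptions about your node configuration:

PUT /restored-my-index/_settings
{
  "index.routing.allocation.require.box_type": "cold"
}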
The speed of recovery of a searchable snapshot index is limited by the repository setting max_restore_bytes_per_sec and the node setting indices.recovery.max_bytes_per_sec, just like a normal restore operation. By default, max_restore_bytes_per_sec is unlimited, but the default for indices.recovery.max_bytes_per_sec depends on the configuration of the node. See Recovery settings.
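For example, you can raise the node-level recovery rate limit dynamically with the cluster update settings API; the 100mb value below is purely illustrative:

PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}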
We recommend that you force-merge indices to a single segment per shard before taking a snapshot that will be mounted as a searchable snapshot index. Each read from a snapshot repository takes time and costs money, and the fewer segments there are the fewer reads are needed to restore the snapshot or to respond to a search.
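For example, to force-merge a hypothetical index down to a single segment per shard before taking the snapshot:

POST /my-index/_forcemerge?max_num_segments=1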
Searchable snapshots are ideal for managing a large archive of historical data. Historical information is typically searched less frequently than recent data and therefore may not need the extra search performance that replicas provide.
For more complex or time-consuming searches, you can use Async search with searchable snapshots.
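As a sketch, submitting an async search against a partially mounted index looks like any other async search request; the index name and query here are illustrative:

POST /partial-my-index/_async_search?wait_for_completion_timeout=2s
{
  "query": {
    "match": { "message": "error" }
  }
}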
Use any of the following repository types with searchable snapshots:
- AWS S3
- Google Cloud Storage
- Azure Blob Storage
- Hadoop Distributed File Store (HDFS)
- Shared filesystems such as NFS
- Read-only HTTP and HTTPS repositories
You can also use alternative implementations of these repository types, for instance MinIO, as long as they are fully compatible. Use the Repository analysis API to analyze your repository’s suitability for use with searchable snapshots.
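For example, a small analysis run against a hypothetical repository named my_repository might look like this; the parameters shown are illustrative, and larger values give a more realistic assessment at greater cost:

POST /_snapshot/my_repository/_analyze?blob_count=100&max_blob_size=10mb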
How searchable snapshots work
When an index is mounted from a snapshot, Elasticsearch allocates its shards to data nodes within the cluster. The data nodes then automatically retrieve the relevant shard data from the repository onto local storage, based on the mount options specified. If possible, searches use data from local storage. If the data is not available locally, Elasticsearch downloads the data that it needs from the snapshot repository.
If a node holding one of these shards fails, Elasticsearch automatically allocates the affected shards on another node, and that node restores the relevant shard data from the repository. No replicas are needed, and no complicated monitoring or orchestration is necessary to restore lost shards. Although searchable snapshot indices have no replicas by default, you may add replicas to these indices by adjusting index.number_of_replicas. Replicas of searchable snapshot shards are recovered by copying data from the snapshot repository, just like primaries of searchable snapshot shards. In contrast, replicas of regular indices are restored by copying data from the primary.
Mount options
To search a snapshot, you must first mount it locally as an index. Usually ILM will do this automatically, but you can also call the mount snapshot API yourself. There are two options for mounting an index from a snapshot, each with different performance characteristics and local storage footprints (see the example requests after this list):
- Fully mounted index: Fully caches the snapshotted index’s shards in the Elasticsearch cluster. ILM uses this option in the hot and cold phases. Search performance for a fully mounted index is normally comparable to a regular index, since there is minimal need to access the snapshot repository. While recovery is ongoing, search performance may be slower than with a regular index because a search may need some data that has not yet been retrieved into the local cache. If that happens, Elasticsearch will eagerly retrieve the data needed to complete the search in parallel with the ongoing recovery. On-disk data is preserved across restarts, such that the node does not need to re-download data that is already stored on the node after a restart. Indices managed by ILM are prefixed with restored- when fully mounted.
- Partially mounted index: Uses a local cache containing only recently searched parts of the snapshotted index’s data. This cache has a fixed size and is shared across shards of partially mounted indices allocated on the same data node. ILM uses this option in the frozen phase. If a search requires data that is not in the cache, Elasticsearch fetches the missing data from the snapshot repository. Searches that require these fetches are slower, but the fetched data is stored in the cache so that similar searches can be served more quickly in future. Elasticsearch will evict infrequently used data from the cache to free up space. The cache is cleared when a node is restarted. Although slower than a fully mounted index or a regular index, a partially mounted index still returns search results quickly, even for large data sets, because the layout of data in the repository is heavily optimized for search. Many searches will need to retrieve only a small subset of the total shard data before returning results. Indices managed by ILM are prefixed with partial- when partially mounted.
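When mounting manually, the storage query parameter of the mount snapshot API selects between the two options. The following sketch shows both; the repository, snapshot, and index names are assumptions, and renamed_index is optional:

POST /_snapshot/my_repository/my_snapshot/_mount?storage=full_copy
{
  "index": "my-index",
  "renamed_index": "restored-my-index"
}

POST /_snapshot/my_repository/my_snapshot/_mount?storage=shared_cache
{
  "index": "my-index",
  "renamed_index": "partial-my-index"
}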
To partially mount an index, you must have one or more nodes with a shared cache available. By default, dedicated frozen data tier nodes (nodes with the data_frozen role and no other data roles) have a shared cache sized at the greater of 90% of total disk space and total disk space minus a 100GB headroom.
Using a dedicated frozen tier is highly recommended for production use. If you
do not have a dedicated frozen tier, you must configure the
xpack.searchable.snapshot.shared_cache.size
setting to reserve space for the
cache on one or more nodes. Partially mounted indices are only allocated to
nodes that have a shared cache.
Manually mounting snapshots captured by an Index Lifecycle Management (ILM) policy can interfere with ILM’s automatic management. This may lead to issues such as data loss or complications with snapshot handling.
For optimal results, allow ILM to manage snapshots automatically.
- xpack.searchable.snapshot.shared_cache.size: (Static) Disk space reserved for the shared cache of partially mounted indices. Accepts a percentage of total disk space or an absolute byte value. Defaults to 90% of total disk space for dedicated frozen data tier nodes. Otherwise defaults to 0b.
- xpack.searchable.snapshot.shared_cache.size.max_headroom: (Static, byte value) For dedicated frozen tier nodes, the max headroom to maintain. If xpack.searchable.snapshot.shared_cache.size is not explicitly set, this setting defaults to 100GB. Otherwise it defaults to -1 (not set). You can only configure this setting if xpack.searchable.snapshot.shared_cache.size is set as a percentage.
To illustrate how these settings work in concert, let us look at two examples when using the default values of the settings on a dedicated frozen node:
- A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB is 3600 GB, leaving 400 GB headroom. The default max_headroom of 100 GB takes effect, and the result is therefore 3900 GB.
- A 400 GB disk will result in a shared cache sized at 360 GB.
You can configure the settings in elasticsearch.yml:

xpack.searchable.snapshot.shared_cache.size: 4TB

You can only configure these settings on nodes with the data_frozen role. Additionally, nodes with a shared cache can only have a single data path.
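Alternatively, as a sketch using the two settings described above, you could size the cache as a percentage and cap the headroom explicitly; the values are illustrative:

xpack.searchable.snapshot.shared_cache.size: 90%
xpack.searchable.snapshot.shared_cache.size.max_headroom: 100GB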
Elasticsearch also uses a dedicated system index named .snapshot-blob-cache
to speed up
the recoveries of searchable snapshot shards. This index is used as an additional
caching layer on top of the partially or fully mounted data and contains the
minimal required data to start the searchable snapshot shards. Elasticsearch automatically
deletes the documents that are no longer used in this index. This periodic cleanup can be tuned using the following settings (see the example request after the list):
- searchable_snapshots.blob_cache.periodic_cleanup.interval: (Dynamic) The interval at which the periodic cleanup of the .snapshot-blob-cache index is scheduled. Defaults to every hour (1h).
- searchable_snapshots.blob_cache.periodic_cleanup.retention_period: (Dynamic) The retention period to keep obsolete documents in the .snapshot-blob-cache index. Defaults to 1h.
- searchable_snapshots.blob_cache.periodic_cleanup.batch_size: (Dynamic) The number of documents that are searched for and bulk-deleted at once during the periodic cleanup of the .snapshot-blob-cache index. Defaults to 100.
- searchable_snapshots.blob_cache.periodic_cleanup.pit_keep_alive: (Dynamic) The value used for the point-in-time keep alive requests executed during the periodic cleanup of the .snapshot-blob-cache index. Defaults to 10m.
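Because these settings are dynamic, a minimal sketch of adjusting them — assuming they are updated through the cluster update settings API like other dynamic settings — might look like this, with illustrative values:

PUT /_cluster/settings
{
  "persistent": {
    "searchable_snapshots.blob_cache.periodic_cleanup.interval": "2h",
    "searchable_snapshots.blob_cache.periodic_cleanup.retention_period": "2h"
  }
}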
Reduce costs with searchable snapshots
In most cases, searchable snapshots reduce the costs of running a cluster by removing the need for replica shards and for shard data to be copied between nodes. However, if it’s particularly expensive to retrieve data from a snapshot repository in your environment, searchable snapshots may be more costly than regular indices. Ensure that the cost structure of your operating environment is compatible with searchable snapshots before using them.
Replica costs
For resiliency, a regular index requires multiple redundant copies of each shard across multiple nodes. If a node fails, Elasticsearch uses the redundancy to rebuild any lost shard copies. A searchable snapshot index doesn’t require replicas. If a node containing a searchable snapshot index fails, Elasticsearch can rebuild the lost shard cache from the snapshot repository.
Without replicas, rarely-accessed searchable snapshot indices require far fewer resources. A cold data tier that contains replica-free fully-mounted searchable snapshot indices requires half the nodes and disk space of a tier containing the same data in regular indices. The frozen tier, which contains only partially-mounted searchable snapshot indices, requires even fewer resources.
Data transfer costs
When a shard of a regular index is moved between nodes, its contents are copied from another node in your cluster. In many environments, the costs of moving data between nodes are significant, especially if running in a Cloud environment with nodes in different zones. In contrast, when mounting a searchable snapshot index or moving one of its shards, the data is always copied from the snapshot repository. This is typically much cheaper.
Most cloud providers charge significant fees for data transferred between regions and for data transferred out of their platforms. You should only mount snapshots into a cluster that is in the same region as the snapshot repository. If you wish to search data across multiple regions, configure multiple clusters and use cross-cluster search or cross-cluster replication instead of searchable snapshots.
It’s worth noting that if a searchable snapshot index has no replicas, then when the node hosting it is shut down, allocation will immediately try to relocate the index to a new node in order to maximize availability. For fully mounted indices this will result in the new node downloading the entire index snapshot from the cloud repository. Under a rolling cluster restart, this may happen multiple times for each searchable snapshot index. Temporarily disabling allocation during planned node restart will prevent this, as described in the cluster restart procedure.
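For example, before a planned node restart you might temporarily restrict allocation and re-enable it afterwards, as in the standard rolling restart procedure; this sketch uses the cluster update settings API:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

After the node rejoins, remove the restriction:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}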
Back up and restore searchable snapshots
You can use regular snapshots to back up a cluster containing searchable snapshot indices. When you restore a snapshot containing searchable snapshot indices, these indices are restored as searchable snapshot indices again.
Before you restore a snapshot containing a searchable snapshot index, you must first register the repository containing the original index snapshot. When restored, the searchable snapshot index mounts the original index snapshot from its original repository. If you wish, you can use separate repositories for regular snapshots and searchable snapshots.
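For example, assuming the original index snapshots live in an S3 bucket, you would register that repository on the restored cluster before restoring the backup; the repository name and bucket are illustrative:

PUT /_snapshot/my_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket"
  }
}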
A snapshot of a searchable snapshot index contains only a small amount of metadata which identifies its original index snapshot. It does not contain any data from the original index. The restore of a backup will fail to restore any searchable snapshot indices whose original index snapshot is unavailable.
Because searchable snapshot indices are not regular indices, it is not possible to use a source-only repository to take snapshots of searchable snapshot indices.
Reliability of searchable snapshots
The sole copy of the data in a searchable snapshot index is the underlying snapshot, stored in the repository. If you remove this snapshot, the data will be permanently lost. Although Elasticsearch may have cached some of the data onto local storage for faster searches, this cached data is incomplete and cannot be used for recovery if you remove the underlying snapshot. For example:
- You must not unregister a repository while any of the searchable snapshots it contains are mounted in Elasticsearch.
- You must not delete a snapshot if any of its indices are mounted as searchable snapshot indices. The snapshot contains the sole full copy of your data. If you delete it then the data cannot be recovered from elsewhere.
- If you mount indices from snapshots held in a repository to which a different cluster has write access then you must make sure that the other cluster does not delete these snapshots. The snapshot contains the sole full copy of your data. If you delete it then the data cannot be recovered from elsewhere.
- The data in a searchable snapshot index are cached in local storage, so if you delete the underlying searchable snapshot Elasticsearch will continue to operate normally until the first cache miss. This may be much later, for instance when a shard relocates to a different node, or when the node holding the shard restarts.
- If the repository fails or corrupts the contents of the snapshot and you cannot restore it to its previous healthy state then the data is permanently lost. The blob storage offered by all major public cloud providers typically offers very good protection against failure or corruption. If you manage your own repository storage then you are responsible for its reliability.