Rollover Index

The rollover index API rolls an alias over to a new index when the existing index is considered to be too large or too old.
The API accepts a single alias name and a list of conditions. The alias must point to a write index for a rollover request to be valid. There are two ways this can be achieved, and depending on the configuration, the alias metadata will be updated differently. The two scenarios are as follows:
- The alias only points to a single index, with is_write_index not configured (defaults to null). In this scenario, the rollover alias is added to the newly created index and removed from the original (rolled-over) index.
- The alias points to one or more indices, with is_write_index set to true on the index to be rolled over (the write index). In this scenario, the rolled-over index has its is_write_index set to false, while the newly created index becomes the write index, with is_write_index set to true; see the sketch below.
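For the second scenario, the write index is designated when the alias is added. For example (a minimal sketch with illustrative index and alias names; is_write_index on alias actions is available from 6.4 onward), using the index aliases API:

POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "logs-000001",
        "alias": "logs_write",
        "is_write_index": true
      }
    }
  ]
}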
The available conditions are:

Table 28. conditions parameters

Name     | Description
---------|------------
max_age  | The maximum age of the index
max_docs | The maximum number of documents the index should contain (documents in replica shards are not counted)
max_size | The maximum estimated size of the primary shard of the index
PUT /logs-000001
{
  "aliases": {
    "logs_write": {}
  }
}

# Add > 1000 documents to logs-000001

POST /logs_write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_size": "5gb"
  }
}
The first request creates an index called logs-000001 with the alias logs_write. If the index pointed to by logs_write was created seven or more days ago, contains 1,000 or more documents, or has a primary shard size of at least 5GB, then the rollover request creates the logs-000002 index and updates the logs_write alias to point to logs-000002.
The above request might return the following response:
{ "acknowledged": true, "shards_acknowledged": true, "old_index": "logs-000001", "new_index": "logs-000002", "rolled_over": true, "dry_run": false, "conditions": { "[max_age: 7d]": false, "[max_docs: 1000]": true, "[max_size: 5gb]": false, } }
Naming the new index

If the name of the existing index ends with a dash followed by a number (e.g. logs-000001), then the name of the new index will follow the same pattern, incrementing the number (logs-000002). The number is zero-padded to a length of 6, regardless of the length of the number in the old index name.
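For instance (hypothetical names, chosen to illustrate the padding rule), an alias pointing at an index named logs-7 would roll over to logs-000008:

PUT /logs-7
{
  "aliases": {
    "old_logs_write": {}
  }
}

# The trailing number is incremented and then zero-padded to six digits,
# so the new index is named logs-000008.
POST /old_logs_write/_rollover
{
  "conditions": {
    "max_docs": 1000
  }
}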
If the old name doesn’t match this pattern then you must specify the name for the new index as follows:
POST /my_alias/_rollover/my_new_index_name
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_size": "5gb"
  }
}
Using date math with the rollover API

It can be useful to use date math to name the rollover index according to the date that the index rolled over, e.g. logstash-2016.02.03. The rollover API supports date math, but requires the index name to end with a dash followed by a number, e.g. logstash-2016.02.03-1, which is incremented every time the index is rolled over. For instance:
# PUT /<logs-{now/d}-1> with URI encoding: PUT /%3Clogs-%7Bnow%2Fd%7D-1%3E { "aliases": { "logs_write": {} } } PUT logs_write/_doc/1 { "message": "a dummy log" } POST logs_write/_refresh # Wait for a day to pass POST /logs_write/_rollover { "conditions": { "max_docs": "1" } }
The first request creates an index named with today's date, e.g. logs-2016.02.03-1. The rollover request then rolls over to a new index also named with today's date, e.g. logs-2016.02.03-000002 if run on the same day, or logs-2016.02.04-000002 if run after a day has passed.
These indices can then be referenced as described in the date math documentation. For example, to search over indices created in the last three days, you could do the following:
# GET /<logs-{now/d}-*>,<logs-{now/d-1d}-*>,<logs-{now/d-2d}-*>/_search GET /%3Clogs-%7Bnow%2Fd%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-1d%7D-*%3E%2C%3Clogs-%7Bnow%2Fd-2d%7D-*%3E/_search
Defining the new index

The settings, mappings, and aliases for the new index are taken from any matching index templates. Additionally, you can specify settings, mappings, and aliases in the body of the request, just like the create index API. Values specified in the request override any values set in matching index templates. For example, the following rollover request overrides the index.number_of_shards setting:
PUT /logs-000001
{
  "aliases": {
    "logs_write": {}
  }
}

POST /logs_write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_size": "5gb"
  },
  "settings": {
    "index.number_of_shards": 2
  }
}
Dry run

The rollover API supports dry_run mode, where request conditions can be checked without performing the actual rollover:
PUT /logs-000001
{
  "aliases": {
    "logs_write": {}
  }
}

POST /logs_write/_rollover?dry_run
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_size": "5gb"
  }
}
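A dry run evaluates the conditions without creating a new index. The response has the same shape as a real rollover (an abridged sketch; the exact field values depend on the state of the index), with dry_run reported as true and rolled_over as false:

{
  "acknowledged": false,
  "shards_acknowledged": false,
  "old_index": "logs-000001",
  "new_index": "logs-000002",
  "rolled_over": false,
  "dry_run": true,
  "conditions": {
    "[max_age: 7d]": false,
    "[max_docs: 1000]": false,
    "[max_size: 5gb]": false
  }
}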
Wait For Active Shards

Because the rollover operation creates a new index to roll over to, the wait_for_active_shards setting on index creation applies to the rollover action as well.
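For example (a sketch reusing the logs_write alias from the examples above), to require at least two active copies of each shard in the new index before the rollover request returns, pass wait_for_active_shards as a query parameter:

POST /logs_write/_rollover?wait_for_active_shards=2
{
  "conditions": {
    "max_docs": 1000
  }
}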
Write Index Alias Behavior

When rolling over a write index that has is_write_index explicitly set to true, the rollover alias is not swapped during the rollover action. Since an alias that points to multiple indices is ambiguous about which index is the correct write index to roll over, it is not valid to roll over an alias that points to multiple indices without a designated write index. For this reason, the default behavior is to swap which index the write-oriented alias points to; this was logs_write in some of the above examples. Since setting is_write_index allows an alias to point to multiple indices while being explicit about which one is the write index that rollover should target, removing the alias from the rolled-over index is not necessary. This simplifies things by allowing a single alias to act as both the write and read alias for indices managed with rollover.

Look at the behavior of the aliases in the following example, where is_write_index is set on the rolled-over index:
PUT my_logs_index-000001
{
  "aliases": {
    "logs": { "is_write_index": true }
  }
}

PUT logs/_doc/1
{
  "message": "a dummy log"
}

POST logs/_refresh

POST /logs/_rollover
{
  "conditions": {
    "max_docs": "1"
  }
}

PUT logs/_doc/2
{
  "message": "a newer log"
}
The first request configures my_logs_index-000001 as the write index for the logs alias. After the rollover, newly indexed documents that target the logs alias are written to the new index, my_logs_index-000002:
{ "_index" : "my_logs_index-000002", "_type" : "_doc", "_id" : "2", "_version" : 1, "result" : "created", "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 }, "_seq_no" : 0, "_primary_term" : 1 }
After the rollover, the alias metadata for the two indices will have the is_write_index setting reflect each index's role, with the newly created index as the write index:
{ "my_logs_index-000002": { "aliases": { "logs": { "is_write_index": true } } }, "my_logs_index-000001": { "aliases": { "logs": { "is_write_index" : false } } } }