Get component templates Added in 5.1.0
Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
Query parameters
- local boolean
If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node will send requests for further information to each selected node.
- master_timeout string
The period to wait for a connection to the master node.
curl \
--request GET http://api.example.com/_cat/component_templates
[
{
"name": "string",
"version": "string",
"alias_count": "string",
"mapping_count": "string",
"settings_count": "string",
"metadata_count": "string",
"included_in": "string"
}
]
Get index information
Get high-level information about indices in a cluster, including backing indices for data streams.
Use this request to get the following information for each index in a cluster:
- shard count
- document count
- deleted document count
- primary store size
- total store size of all shards, including shard replicas
These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
Path parameters
- Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
- bytes string
The unit used to display byte values. Values are b, kb, mb, gb, tb, or pb.
- expand_wildcards string | array[string]
The type of index that wildcard patterns can match.
- health string
The health status used to limit returned indices. By default, the response includes indices of any health status. Values are green, GREEN, yellow, YELLOW, red, or RED.
- include_unloaded_segments boolean
If true, the response includes information from segments that are not loaded into memory.
- pri boolean
If true, the response only includes information from primary shards.
- time string
The unit used to display time values. Values are nanos, micros, ms, s, m, h, or d.
- master_timeout string
Period to wait for a connection to the master node.
curl \
--request GET http://api.example.com/_cat/indices/{index}
[
{
"health": "string",
"status": "string",
"index": "string",
"uuid": "string",
"pri": "string",
"rep": "string",
"docs.count": "string",
"docs.deleted": "string",
"creation.date": "string",
"creation.date.string": "string",
"store.size": "string",
"pri.store.size": "string",
"dataset.size": "string",
"completion.size": "string",
"pri.completion.size": "string",
"fielddata.memory_size": "string",
"pri.fielddata.memory_size": "string",
"fielddata.evictions": "string",
"pri.fielddata.evictions": "string",
"query_cache.memory_size": "string",
"pri.query_cache.memory_size": "string",
"query_cache.evictions": "string",
"pri.query_cache.evictions": "string",
"request_cache.memory_size": "string",
"pri.request_cache.memory_size": "string",
"request_cache.evictions": "string",
"pri.request_cache.evictions": "string",
"request_cache.hit_count": "string",
"pri.request_cache.hit_count": "string",
"request_cache.miss_count": "string",
"pri.request_cache.miss_count": "string",
"flush.total": "string",
"pri.flush.total": "string",
"flush.total_time": "string",
"pri.flush.total_time": "string",
"get.current": "string",
"pri.get.current": "string",
"get.time": "string",
"pri.get.time": "string",
"get.total": "string",
"pri.get.total": "string",
"get.exists_time": "string",
"pri.get.exists_time": "string",
"get.exists_total": "string",
"pri.get.exists_total": "string",
"get.missing_time": "string",
"pri.get.missing_time": "string",
"get.missing_total": "string",
"pri.get.missing_total": "string",
"indexing.delete_current": "string",
"pri.indexing.delete_current": "string",
"indexing.delete_time": "string",
"pri.indexing.delete_time": "string",
"indexing.delete_total": "string",
"pri.indexing.delete_total": "string",
"indexing.index_current": "string",
"pri.indexing.index_current": "string",
"indexing.index_time": "string",
"pri.indexing.index_time": "string",
"indexing.index_total": "string",
"pri.indexing.index_total": "string",
"indexing.index_failed": "string",
"pri.indexing.index_failed": "string",
"merges.current": "string",
"pri.merges.current": "string",
"merges.current_docs": "string",
"pri.merges.current_docs": "string",
"merges.current_size": "string",
"pri.merges.current_size": "string",
"merges.total": "string",
"pri.merges.total": "string",
"merges.total_docs": "string",
"pri.merges.total_docs": "string",
"merges.total_size": "string",
"pri.merges.total_size": "string",
"merges.total_time": "string",
"pri.merges.total_time": "string",
"refresh.total": "string",
"pri.refresh.total": "string",
"refresh.time": "string",
"pri.refresh.time": "string",
"refresh.external_total": "string",
"pri.refresh.external_total": "string",
"refresh.external_time": "string",
"pri.refresh.external_time": "string",
"refresh.listeners": "string",
"pri.refresh.listeners": "string",
"search.fetch_current": "string",
"pri.search.fetch_current": "string",
"search.fetch_time": "string",
"pri.search.fetch_time": "string",
"search.fetch_total": "string",
"pri.search.fetch_total": "string",
"search.open_contexts": "string",
"pri.search.open_contexts": "string",
"search.query_current": "string",
"pri.search.query_current": "string",
"search.query_time": "string",
"pri.search.query_time": "string",
"search.query_total": "string",
"pri.search.query_total": "string",
"search.scroll_current": "string",
"pri.search.scroll_current": "string",
"search.scroll_time": "string",
"pri.search.scroll_time": "string",
"search.scroll_total": "string",
"pri.search.scroll_total": "string",
"segments.count": "string",
"pri.segments.count": "string",
"segments.memory": "string",
"pri.segments.memory": "string",
"segments.index_writer_memory": "string",
"pri.segments.index_writer_memory": "string",
"segments.version_map_memory": "string",
"pri.segments.version_map_memory": "string",
"segments.fixed_bitset_memory": "string",
"pri.segments.fixed_bitset_memory": "string",
"warmer.current": "string",
"pri.warmer.current": "string",
"warmer.total": "string",
"pri.warmer.total": "string",
"warmer.total_time": "string",
"pri.warmer.total_time": "string",
"suggest.current": "string",
"pri.suggest.current": "string",
"suggest.time": "string",
"pri.suggest.time": "string",
"suggest.total": "string",
"pri.suggest.total": "string",
"memory.total": "string",
"pri.memory.total": "string",
"search.throttled": "string",
"bulk.total_operations": "string",
"pri.bulk.total_operations": "string",
"bulk.total_time": "string",
"pri.bulk.total_time": "string",
"bulk.total_size_in_bytes": "string",
"pri.bulk.total_size_in_bytes": "string",
"bulk.avg_time": "string",
"pri.bulk.avg_time": "string",
"bulk.avg_size_in_bytes": "string",
"pri.bulk.avg_size_in_bytes": "string"
}
]
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size dataset.size
yellow open my-index-000001 u8FNjxh8Rfy_awN11oDKYQ 1 1 1200 0 88.1kb 88.1kb 88.1kb
green open my-index-000002 nYFWZEO7TUiOjLQXBaYJpA 1 0 0 0 260b 260b 260b
Get node information Added in 1.3.0
By default, the API returns all attributes and core settings for cluster nodes.
Path parameters
- Comma-separated list of node IDs or names used to limit returned information.
Query parameters
- flat_settings boolean
If true, returns settings in flat format.
- timeout string
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET http://api.example.com/_nodes/{node_id}
{
"_nodes": {},
"cluster_name": "elasticsearch",
"nodes": {
"USpTGYaBSIKbgSUJR2Z9lg": {
"name": "node-0",
"transport_address": "192.168.17:9300",
"host": "node-0.elastic.co",
"ip": "192.168.17",
"version": "{version}",
"transport_version": 100000298,
"index_version": 100000074,
"component_versions": {
"ml_config_version": 100000162,
"transform_config_version": 100000096
},
"build_flavor": "default",
"build_type": "{build_type}",
"build_hash": "587409e",
"roles": [
"master",
"data",
"ingest"
],
"attributes": {},
"plugins": [
{
"name": "analysis-icu",
"version": "{version}",
"description": "The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU-related analysis components.",
"classname": "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin",
"has_native_controller": false
}
],
"modules": [
{
"name": "lang-painless",
"version": "{version}",
"description": "An easy, safe and fast scripting language for Elasticsearch",
"classname": "org.elasticsearch.painless.PainlessPlugin",
"has_native_controller": false
}
]
}
}
}
Create a new document in the index Added in 5.0.0
You can index a new JSON document with the /<target>/_doc/ or /<target>/_create/<_id> APIs.
Using _create guarantees that the document is indexed only if it does not already exist.
It returns a 409 response when a document with the same ID already exists in the index.
To update an existing document, you must use the /<target>/_doc/ API.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To add a document using the PUT /<target>/_create/<_id> or POST /<target>/_create/<_id> request formats, you must have the create_doc, create, index, or write index privilege.
- To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
Automatically create data streams and indices
If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.
If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.
NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.
If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.
Automatic index creation is controlled by the action.auto_create_index setting.
If it is true, any index can be created automatically.
You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns, or set it to false to turn off automatic index creation entirely.
Specify a comma-separated list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked.
When a list is specified, the default behaviour is to disallow.
NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.
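The pattern-list semantics described above can be sketched in a few lines. This is an illustrative approximation only: the helper name is hypothetical, and it assumes patterns are checked in order with the first match winning.

```python
from fnmatch import fnmatch

def auto_create_allowed(index_name: str, setting: str) -> bool:
    """Hypothetical helper approximating action.auto_create_index.

    The setting may be "true", "false", or a comma-separated pattern
    list where a leading "+" allows and a leading "-" blocks; patterns
    are assumed to be checked in order with the first match winning,
    and unmatched names are disallowed when a list is specified.
    """
    if setting == "true":
        return True
    if setting == "false":
        return False
    for raw in setting.split(","):
        raw = raw.strip()
        allowed = not raw.startswith("-")
        pattern = raw.lstrip("+-")
        if fnmatch(index_name, pattern):
            return allowed
    return False  # a list was specified, so the default is to disallow

print(auto_create_allowed("logs-2099.05", "+logs-*,-*"))  # True
print(auto_create_allowed("secret", "+logs-*,-*"))        # False
```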
Routing
By default, shard placement, or routing, is controlled by using a hash of the document's ID value.
For more explicit control, the value fed into the hash function used by the router can be specified directly on a per-operation basis using the routing parameter.
When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself.
This does come at the (very minimal) cost of an additional document parsing pass.
If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.
NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
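As a rough sketch of the routing idea: the routing value (the document ID by default, or the explicit routing parameter) is hashed and reduced modulo the number of primary shards. Elasticsearch actually uses a Murmur3 hash with additional adjustments; md5 and the pick_shard name here are stand-ins for illustration only.

```python
import hashlib

def pick_shard(routing_value: str, num_primary_shards: int) -> int:
    """Hypothetical helper illustrating the routing formula: hash the
    routing value and take it modulo the number of primary shards.
    Elasticsearch really uses a Murmur3 hash; md5 stands in here.
    """
    digest = hashlib.md5(routing_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_primary_shards

# All operations that share a routing value land on the same shard,
# which is why the primary shard count cannot change after creation.
assert pick_shard("user-42", 5) == pick_shard("user-42", 5)
```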
Distributed
The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.
Active shards
To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation.
If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs.
By default, write operations only wait for the primary shards to be active before proceeding (that is to say, wait_for_active_shards is 1).
This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards.
To alter this behavior per operation, use the wait_for_active_shards request parameter.
Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1).
Specifying a negative value or a number greater than the number of shard copies will throw an error.
For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes).
If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding.
This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data.
If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding.
This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard.
However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed, as you do not have all 4 copies of each shard active in the index.
The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.
It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts.
After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary.
The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
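The pre-write check described above can be sketched as follows. shard_check_passes is a hypothetical helper illustrating the rule, not the server's implementation:

```python
def shard_check_passes(wait_for_active_shards, active_copies: int,
                       number_of_replicas: int) -> bool:
    """'all' requires every configured copy (number_of_replicas + 1)
    to be active; an integer requires at least that many active copies.
    The check runs before the write starts, so it reduces, but does not
    eliminate, the chance of writing to fewer copies than requested.
    """
    total_copies = number_of_replicas + 1
    if wait_for_active_shards == "all":
        required = total_copies
    else:
        required = int(wait_for_active_shards)
        if required < 0 or required > total_copies:
            raise ValueError("negative or greater than the number of shard copies")
    return active_copies >= required

# The three-node example above: 3 replicas => 4 copies per shard.
print(shard_check_passes(1, active_copies=1, number_of_replicas=3))      # True
print(shard_check_passes("all", active_copies=3, number_of_replicas=3))  # False
```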
Path parameters
- The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index.
- A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format.
Query parameters
- include_source_on_error boolean
If true, the document source is included in the error message in case of parsing errors.
- pipeline string
The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
- refresh string
If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes. Values are true, false, or wait_for.
- routing string
A custom value that is used to route operations to a specific shard.
- timeout string
The period the request waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. Elasticsearch waits for at least the specified timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur.
This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs, for example because the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error.
- version number
The explicit version number for concurrency control. It must be a non-negative long number.
- version_type string
The version type. Values are internal, external, external_gte, or force.
- wait_for_active_shards number | string
The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.
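The external version types follow a simple comparison rule, sketched below with a hypothetical helper: external accepts only strictly greater versions, while external_gte also accepts an equal version. A rejected write corresponds to the 409 conflict response mentioned earlier.

```python
def version_conflict(version_type: str, new_version: int,
                     current_version: int) -> bool:
    """Hypothetical helper returning True when a write with an explicit
    version should be rejected (a version conflict). With external, the
    new version must be strictly greater than the stored one; with
    external_gte, greater or equal.
    """
    if new_version < 0:
        raise ValueError("version must be a non-negative long")
    if version_type == "external":
        return not (new_version > current_version)
    if version_type == "external_gte":
        return not (new_version >= current_version)
    raise ValueError("sketch covers only external and external_gte")

print(version_conflict("external", new_version=5, current_version=4))      # False
print(version_conflict("external", new_version=4, current_version=4))      # True
print(version_conflict("external_gte", new_version=4, current_version=4))  # False
```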
curl \
--request PUT http://api.example.com/{index}/_create/{id} \
--header "Content-Type: application/json" \
--data '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}'
{
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
{
"_id": "string",
"_index": "string",
"_primary_term": 42.0,
"result": "created",
"_seq_no": 42.0,
"_shards": {
"failed": 42.0,
"successful": 42.0,
"total": 42.0,
"failures": [
{
"index": "string",
"node": "string",
"reason": {
"type": "string",
"reason": "string",
"stack_trace": "string",
"caused_by": {},
"root_cause": [
{}
],
"suppressed": [
{}
]
},
"shard": 42.0,
"status": "string"
}
],
"skipped": 42.0
},
"_version": 42.0,
"forced_refresh": true
}
Get a document's source
Get the source of a document. For example:
GET my-index-000001/_source/1
You can use the source filtering parameters to control which parts of the _source are returned:
GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities
Path parameters
- The name of the index that contains the document.
- A unique document identifier.
Query parameters
- preference string
The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
- realtime boolean
If true, the request is real-time as opposed to near-real-time.
- refresh boolean
If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
- routing string
A custom value used to route operations to a specific shard.
- _source boolean | string | array[string]
Indicates whether to return the _source field (true or false) or lists the fields to return.
- _source_excludes string | array[string]
A comma-separated list of source fields to exclude from the response.
- _source_includes string | array[string]
A comma-separated list of source fields to include in the response.
- stored_fields string | array[string]
A comma-separated list of stored fields to return as part of a hit.
- version number
The version number for concurrency control. It must match the current version of the document for the request to succeed.
- version_type string
The version type. Values are internal, external, external_gte, or force.
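The include/exclude patterns can be pictured with the sketch below. filter_source is a hypothetical helper that applies the patterns to a flattened view of the document; it only approximates the server-side behavior, which operates on nested JSON.

```python
from fnmatch import fnmatch

def filter_source(source: dict, includes=None, excludes=None) -> dict:
    """Apply _source_includes/_source_excludes-style wildcard patterns
    (e.g. ?_source_includes=*.id&_source_excludes=entities) to a
    flattened document. Illustrative only."""
    def flatten(obj, prefix=""):
        for key, value in obj.items():
            path = f"{prefix}{key}"
            if isinstance(value, dict):
                yield from flatten(value, path + ".")
            else:
                yield path, value

    result = {}
    for path, value in flatten(source):
        if includes and not any(fnmatch(path, p) for p in includes):
            continue
        if excludes and any(fnmatch(path, p) for p in excludes):
            continue
        result[path] = value
    return result

doc = {"user": {"id": "kimchy", "age": 40}, "entities": "x", "message": "hi"}
print(filter_source(doc, includes=["*.id"]))               # {'user.id': 'kimchy'}
print(filter_source(doc, excludes=["entities", "user.*"])) # {'message': 'hi'}
```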
curl \
--request GET http://api.example.com/{index}/_source/{id}
{}
Get multiple term vectors
Get multiple term vectors with a single request.
You can specify existing documents by index and ID or provide artificial documents in the body of the request.
You can specify the index in the request body or request URI.
The response contains a docs array with all the fetched termvectors.
Each element has the structure provided by the termvectors API.
Artificial documents
You can also use mtermvectors to generate term vectors for artificial documents provided in the body of the request.
The mapping used is determined by the specified _index.
Query parameters
- ids array[string]
A comma-separated list of document IDs. You must define IDs as a parameter or set "ids" or "docs" in the request body.
- fields string | array[string]
A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.
- field_statistics boolean
If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.
- offsets boolean
If true, the response includes term offsets.
- payloads boolean
If true, the response includes term payloads.
- positions boolean
If true, the response includes term positions.
- preference string
The node or shard the operation should be performed on. It is random by default.
- realtime boolean
If true, the request is real-time as opposed to near-real-time.
- routing string
A custom value used to route operations to a specific shard.
- term_statistics boolean
If true, the response includes term frequency and document frequency.
- version number
If true, returns the document version as part of a hit.
- version_type string
The version type. Values are internal, external, external_gte, or force.
curl \
--request POST http://api.example.com/_mtermvectors \
--header "Content-Type: application/json" \
--data '{"docs":[{"_id":"2","fields":["message"],"term_statistics":true},{"_id":"1"}]}'
{
"docs": [
{
"_id": "2",
"fields": [
"message"
],
"term_statistics": true
},
{
"_id": "1"
}
]
}
{
"ids": [ "1", "2" ],
"parameters": {
"fields": [
"message"
],
"term_statistics": true
}
}
{
"docs": [
{
"_index": "my-index-000001",
"doc" : {
"message" : "test test test"
}
},
{
"_index": "my-index-000001",
"doc" : {
"message" : "Another test ..."
}
}
]
}
{
"docs": [
{
"_id": "string",
"_index": "string",
"_version": 42.0,
"took": 42.0,
"found": true,
"term_vectors": {
"additionalProperty1": {
"field_statistics": {
"doc_count": 42.0,
"sum_doc_freq": 42.0,
"sum_ttf": 42.0
},
"terms": {
"additionalProperty1": {},
"additionalProperty2": {}
}
},
"additionalProperty2": {
"field_statistics": {
"doc_count": 42.0,
"sum_doc_freq": 42.0,
"sum_ttf": 42.0
},
"terms": {
"additionalProperty1": {},
"additionalProperty2": {}
}
}
},
"error": {
"type": "string",
"reason": "string",
"stack_trace": "string",
"caused_by": {},
"root_cause": [
{}
],
"suppressed": [
{}
]
}
}
]
}
Get an enrich policy
Query parameters
- master_timeout string
Period to wait for a connection to the master node.
curl \
--request GET http://api.example.com/_enrich/policy
{
"policies": [
{
"config": {
"additionalProperty1": {
"enrich_fields": "string",
"indices": "string",
"match_field": "string",
"query": {},
"name": "string",
"elasticsearch_version": "string"
},
"additionalProperty2": {
"enrich_fields": "string",
"indices": "string",
"match_field": "string",
"query": {},
"name": "string",
"elasticsearch_version": "string"
}
}
}
]
}
EQL
Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.
Get EQL search results Added in 7.9.0
Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.
Path parameters
- The name of the index to scope the operation.
Query parameters
- allow_no_indices boolean
- allow_partial_search_results boolean
If true, returns partial results if there are shard failures. If false, returns an error with no partial results.
- allow_partial_sequence_results boolean
If true, sequence queries will return partial results in case of shard failures. If false, they will return no results at all. This flag has effect only if allow_partial_search_results is true.
- expand_wildcards string | array[string]
- keep_alive string
Period for which the search and its results are stored on the cluster.
- keep_on_completion boolean
If true, the search and its results are stored on the cluster.
- wait_for_completion_timeout string
Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.
Body Required
- EQL query you wish to run.
- case_sensitive boolean
- event_category_field string
Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- tiebreaker_field string
Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- timestamp_field string
Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- fetch_size number
- keep_alive string
A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- keep_on_completion boolean
- wait_for_completion_timeout string
A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- allow_partial_search_results boolean
Allow query execution even in case of shard failures. If true, the query will keep running and will return results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results.
- allow_partial_sequence_results boolean
This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results.
- size number
- fields object | array[object]
Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit.
- result_position string
Values are tail or head.
- runtime_mappings object
- max_samples_per_key number
By default, the response of a sample query contains up to 10 samples, with one sample per unique set of join keys. Use the size parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the max_samples_per_key parameter. Pipes are not supported for sample queries.
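The duration format used by keep_alive and wait_for_completion_timeout can be parsed as sketched below. duration_to_ms is a hypothetical helper for illustration; Elasticsearch parses these values server-side.

```python
import re

# Milliseconds per unit, for the units listed in the parameter docs.
UNIT_MS = {"nanos": 1e-6, "micros": 1e-3, "ms": 1.0, "s": 1000.0,
           "m": 60_000.0, "h": 3_600_000.0, "d": 86_400_000.0}

def duration_to_ms(value: str) -> float:
    if value == "0":   # "0" is accepted without a unit
        return 0.0
    if value == "-1":  # "-1" indicates an unspecified value
        return -1.0
    match = re.fullmatch(r"(\d+)(nanos|micros|ms|s|m|h|d)", value)
    if not match:
        raise ValueError(f"bad duration: {value!r}")
    return int(match.group(1)) * UNIT_MS[match.group(2)]

print(duration_to_ms("5m"))  # 300000.0
print(duration_to_ms("2h"))  # 7200000.0
```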
curl \
--request GET http://api.example.com/{index}/_eql/search \
--header "Content-Type: application/json" \
--data '{"query":"process where (process.name == \"cmd.exe\" and process.pid != 2013)"}'
{
"query": """
process where (process.name == "cmd.exe" and process.pid != 2013)
"""
}
{
"query": """
sequence by process.pid
[ file where file.name == "cmd.exe" and process.pid != 2013 ]
[ process where stringContains(process.executable, "regsvr32") ]
"""
}
{
"id": "string",
"is_partial": true,
"is_running": true,
"": 42.0,
"timed_out": true,
"hits": {
"total": {
"relation": "eq",
"value": 42.0
},
"events": [
{
"_index": "string",
"_id": "string",
"_source": {},
"missing": true,
"fields": {
"additionalProperty1": [
{}
],
"additionalProperty2": [
{}
]
}
}
],
"sequences": [
{
"events": [
{
"_index": "string",
"_id": "string",
"_source": {},
"missing": true,
"fields": {}
}
],
"join_keys": [
{}
]
}
]
},
"shard_failures": [
{
"index": "string",
"node": "string",
"reason": {
"type": "string",
"reason": "string",
"stack_trace": "string",
"caused_by": {},
"root_cause": [
{}
],
"suppressed": [
{}
]
},
"shard": 42.0,
"status": "string"
}
]
}
Get the dangling indices Added in 7.9.0
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Use this API to list dangling indices, which you can then import or delete.
curl \
--request GET http://api.example.com/_dangling
{
"dangling_indices": [
{
"index_name": "string",
"index_uuid": "string",
"": "string"
}
]
}
Create or update an alias
Path parameters
- Comma-separated list of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.
- Alias to update. If the alias doesn't exist, the request creates it. Index alias names support date math.
Query parameters
- master_timeout string
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
- timeout string
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Body
- filter object
An Elasticsearch Query DSL (Domain Specific Language) object that defines a query. Additional properties are allowed.
- index_routing string
- is_write_index boolean
If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn't set, the alias rejects write requests. If an index alias points to one index and is_write_index isn't set, the index automatically acts as the write index. Data stream aliases don't automatically set a write data stream, even if the alias points to one data stream.
- routing string
- search_routing string
curl \
--request POST http://api.example.com/{index}/_alias/{name} \
--header "Content-Type: application/json" \
--data '{"filter":{},"index_routing":"string","is_write_index":true,"routing":"string","search_routing":"string"}'
{
"filter": {},
"index_routing": "string",
"is_write_index": true,
"routing": "string",
"search_routing": "string"
}
{
"acknowledged": true
}
Roll over to a new index Added in 5.0.0
TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index.
The rollover API creates a new write index for the alias with is_write_index set to true.
The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index.
If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
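The incrementing convention above can be sketched in Python (a hypothetical helper for illustration only; the real logic lives inside Elasticsearch):

```python
import re

def next_rollover_name(current: str) -> str:
    """Increment the trailing number of an index name and zero-pad it to
    six digits, mirroring the rollover naming convention described above.
    (Hypothetical helper, not Elasticsearch's implementation.)"""
    match = re.fullmatch(r"(.*-)(\d+)", current)
    if match is None:
        raise ValueError("index name must end with '-' and a number")
    prefix, number = match.group(1), int(match.group(2))
    return f"{prefix}{number + 1:06d}"

print(next_rollover_name("my-index-000001"))  # my-index-000002
print(next_rollover_name("my-index-3"))       # my-index-000004
```

Note that a short suffix such as -3 still rolls over to a six-digit, zero-padded name.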
Path parameters
-
Name of the data stream or index alias to roll over.
-
Name of the index to create. Supports date math. Data streams do not support this parameter.
Query parameters
-
dry_run boolean
If true, checks whether the current index satisfies the specified conditions but does not perform a rollover. -
master_timeout string
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout string
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
wait_for_active_shards number | string
The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
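The wait_for_active_shards semantics can be made concrete with a small sketch (a hypothetical helper that only illustrates how the value resolves to a shard-copy count):

```python
def resolve_wait_for_active_shards(number_of_replicas: int, value="all") -> int:
    """Resolve a wait_for_active_shards value to the number of shard copies
    that must be active: 'all' means every copy (the primary plus all
    replicas, i.e. number_of_replicas + 1); an integer is used as given.
    Hypothetical helper illustrating the parameter's semantics only."""
    total_copies = number_of_replicas + 1
    if value == "all":
        return total_copies
    n = int(value)
    if not 0 < n <= total_copies:
        raise ValueError(f"expected 'all' or an integer in 1..{total_copies}")
    return n

print(resolve_wait_for_active_shards(1))       # 2: primary + 1 replica
print(resolve_wait_for_active_shards(2, "1"))  # 1: only the primary
```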
Body
-
aliases object
Aliases for the target index. Data streams do not support this parameter.
-
conditions object
Additional properties are allowed.
-
mappings object
Additional properties are allowed.
-
settings object
Configuration options for the index. Data streams do not support this parameter.
curl \
--request POST http://api.example.com/{alias}/_rollover/{new_index} \
--header "Content-Type: application/json" \
--data '"{\n \"conditions\": {\n \"max_age\": \"7d\",\n \"max_docs\": 1000,\n \"max_primary_shard_size\": \"50gb\",\n \"max_primary_shard_docs\": \"2000\"\n }\n}"'
{
"conditions": {
"max_age": "7d",
"max_docs": 1000,
"max_primary_shard_size": "50gb",
"max_primary_shard_docs": "2000"
}
}
{
"_shards": {},
"indices": {
"test": {
"shards": {
"0": [
{
"routing": {
"node": "zDC_RorJQCao9xf9pg3Fvw",
"state": "STARTED",
"primary": true
},
"segments": {
"_0": {
"search": true,
"version": "7.0.0",
"compound": true,
"num_docs": 1,
"committed": false,
"attributes": {},
"generation": 0,
"deleted_docs": 0,
"size_in_bytes": 3800
}
},
"num_search_segments": 1,
"num_committed_segments": 0
}
]
}
}
}
}
Get IP geolocation database configurations Added in 8.15.0
Query parameters
-
master_timeout string
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.
curl \
--request GET http://api.example.com/_ingest/ip_location/database
{
"databases": [
{
"id": "string",
"version": 42.0,
"": {
"name": "string",
"web": {},
"local": {
"type": "string"
},
"maxmind": {
"account_id": "string"
},
"ipinfo": {}
}
}
]
}
Get machine learning memory usage info Added in 8.2.0
Get information about how machine learning jobs and trained models use memory on each node, both within the JVM heap and natively, outside of the JVM.
Path parameters
-
The names of particular nodes in the cluster to target. For example, nodeId1,nodeId2 or ml:true.
Query parameters
-
master_timeout string
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout string
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request GET http://api.example.com/_ml/memory/{node_id}/_stats
{
"_nodes": {
"failures": [
{
"type": "string",
"reason": "string",
"stack_trace": "string",
"caused_by": {},
"root_cause": [
{}
],
"suppressed": [
{}
]
}
],
"total": 42.0,
"successful": 42.0,
"failed": 42.0
},
"cluster_name": "string",
"nodes": {
"additionalProperty1": {
"attributes": {
"additionalProperty1": "string",
"additionalProperty2": "string"
},
"jvm": {
"": 42.0,
"heap_max_in_bytes": 42.0,
"java_inference_in_bytes": 42.0,
"java_inference_max_in_bytes": 42.0
},
"mem": {
"": 42.0,
"adjusted_total_in_bytes": 42.0,
"total_in_bytes": 42.0,
"ml": {
"": 42.0,
"anomaly_detectors_in_bytes": 42.0,
"data_frame_analytics_in_bytes": 42.0,
"max_in_bytes": 42.0,
"native_code_overhead_in_bytes": 42.0,
"native_inference_in_bytes": 42.0
}
},
"name": "string",
"roles": [
"string"
],
"transport_address": "string",
"ephemeral_id": "string"
},
"additionalProperty2": {
"attributes": {
"additionalProperty1": "string",
"additionalProperty2": "string"
},
"jvm": {
"": 42.0,
"heap_max_in_bytes": 42.0,
"java_inference_in_bytes": 42.0,
"java_inference_max_in_bytes": 42.0
},
"mem": {
"": 42.0,
"adjusted_total_in_bytes": 42.0,
"total_in_bytes": 42.0,
"ml": {
"": 42.0,
"anomaly_detectors_in_bytes": 42.0,
"data_frame_analytics_in_bytes": 42.0,
"max_in_bytes": 42.0,
"native_code_overhead_in_bytes": 42.0,
"native_inference_in_bytes": 42.0
}
},
"name": "string",
"roles": [
"string"
],
"transport_address": "string",
"ephemeral_id": "string"
}
}
}
Get machine learning information Added in 6.3.0
Get defaults and limits used by machine learning. This endpoint is designed for user interfaces that need to fully understand machine learning configurations where some options are not specified, meaning that the defaults should be used. It also provides information about the maximum size of machine learning jobs that could run in the current cluster configuration.
curl \
--request GET http://api.example.com/_ml/info
{
"defaults": {
"anomaly_detectors": {
"": "string",
"categorization_examples_limit": 42.0,
"model_memory_limit": "string",
"model_snapshot_retention_days": 42.0,
"daily_model_snapshot_retention_after_days": 42.0
},
"datafeeds": {
"scroll_size": 42.0
}
},
"limits": {
"max_single_ml_node_processors": 42.0,
"total_ml_processors": 42.0,
"": 42.0
},
"upgrade_mode": true,
"native_code": {
"build_hash": "string",
"version": "string"
}
}
Get datafeeds configuration info Added in 5.5.0
You can get information for multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can get information for all datafeeds by using _all, by specifying * as the <feed_id>, or by omitting the <feed_id>.
This API returns a maximum of 10,000 datafeeds.
Path parameters
-
Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds.
Query parameters
-
allow_no_match boolean
Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
exclude_generated boolean
Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
curl \
--request GET http://api.example.com/_ml/datafeeds/{datafeed_id}
{
"count": 42.0,
"datafeeds": [
{
"aggregations": {},
"authorization": {
"api_key": {
"id": "string",
"name": "string"
},
"roles": [
"string"
],
"service_account": "string"
},
"chunking_config": {
"mode": "auto",
"time_span": "string"
},
"datafeed_id": "string",
"frequency": "string",
"indices": [
"string"
],
"indexes": [
"string"
],
"job_id": "string",
"max_empty_searches": 42.0,
"query_delay": "string",
"script_fields": {
"additionalProperty1": {
"script": {
"source": "string",
"id": "string",
"params": {},
"options": {}
},
"ignore_failure": true
},
"additionalProperty2": {
"script": {
"source": "string",
"id": "string",
"params": {},
"options": {}
},
"ignore_failure": true
}
},
"scroll_size": 42.0,
"delayed_data_check_config": {
"check_window": "string",
"enabled": true
},
"runtime_mappings": {
"additionalProperty1": {
"fields": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"fetch_fields": [
{}
],
"format": "string",
"input_field": "string",
"target_field": "string",
"target_index": "string",
"script": {
"source": "string",
"id": "string",
"params": {},
"options": {}
},
"type": "boolean"
},
"additionalProperty2": {
"fields": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"fetch_fields": [
{}
],
"format": "string",
"input_field": "string",
"target_field": "string",
"target_index": "string",
"script": {
"source": "string",
"id": "string",
"params": {},
"options": {}
},
"type": "boolean"
}
},
"indices_options": {
"allow_no_indices": true,
"expand_wildcards": "string",
"ignore_unavailable": true,
"ignore_throttled": true
},
"query": {}
}
]
}
Delete a datafeed Added in 5.4.0
Path parameters
-
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Query parameters
-
force boolean
Use to forcefully delete a started datafeed; this method is quicker than stopping and deleting the datafeed.
curl \
--request DELETE http://api.example.com/_ml/datafeeds/{datafeed_id}
{
"acknowledged": true
}
Add scheduled events to the calendar Added in 6.2.0
Path parameters
-
A string that uniquely identifies a calendar.
curl \
--request POST http://api.example.com/_ml/calendars/{calendar_id}/events \
--header "Content-Type: application/json" \
--data '{"events":[{"calendar_id":"string","event_id":"string","description":"string","":"string","skip_result":true,"skip_model_update":true,"force_time_shift":42.0}]}'
{
"events": [
{
"calendar_id": "string",
"event_id": "string",
"description": "string",
"": "string",
"skip_result": true,
"skip_model_update": true,
"force_time_shift": 42.0
}
]
}
{
"events": [
{
"calendar_id": "string",
"event_id": "string",
"description": "string",
"": "string",
"skip_result": true,
"skip_model_update": true,
"force_time_shift": 42.0
}
]
}
Stop data frame analytics jobs Added in 7.3.0
A data frame analytics job can be started and stopped multiple times throughout its lifecycle.
Path parameters
-
Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Query parameters
-
allow_no_match boolean
Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
-
force boolean
If true, the data frame analytics job is stopped forcefully.
-
timeout string
Controls the amount of time to wait until the data frame analytics job stops. Defaults to 20 seconds.
curl \
--request POST http://api.example.com/_ml/data_frame/analytics/{id}/_stop
{
"stopped": true
}
Evaluate ranked search results Added in 6.2.0
Evaluate the quality of ranked search results over a set of typical search queries.
Query parameters
-
allow_no_indices boolean
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
expand_wildcards string | array[string]
Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
-
search_type string
Search operation type
curl \
--request GET http://api.example.com/_rank_eval \
--header "Content-Type: application/json" \
--data '{"requests":[{"id":"string","request":{"query":{},"size":42.0},"ratings":[{"_id":"string","_index":"string","rating":42.0}],"template_id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}}}],"metric":{"":{"k":42.0,"maximum_relevance":42.0}}}'
{
"requests": [
{
"id": "string",
"request": {
"query": {},
"size": 42.0
},
"ratings": [
{
"_id": "string",
"_index": "string",
"rating": 42.0
}
],
"template_id": "string",
"params": {
"additionalProperty1": {},
"additionalProperty2": {}
}
}
],
"metric": {
"": {
"k": 42.0,
"maximum_relevance": 42.0
}
}
}
{
"metric_score": 42.0,
"details": {
"additionalProperty1": {
"metric_score": 42.0,
"unrated_docs": [
{
"_id": "string",
"_index": "string"
}
],
"hits": [
{
"hit": {
"_id": "string",
"_index": "string",
"_score": 42.0
},
"rating": 42.0
}
],
"metric_details": {
"additionalProperty1": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"additionalProperty2": {
"additionalProperty1": {},
"additionalProperty2": {}
}
}
},
"additionalProperty2": {
"metric_score": 42.0,
"unrated_docs": [
{
"_id": "string",
"_index": "string"
}
],
"hits": [
{
"hit": {
"_id": "string",
"_index": "string",
"_score": 42.0
},
"rating": 42.0
}
],
"metric_details": {
"additionalProperty1": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"additionalProperty2": {
"additionalProperty1": {},
"additionalProperty2": {}
}
}
}
},
"failures": {
"additionalProperty1": {},
"additionalProperty2": {}
}
}
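To make the response above concrete: metric_score is the value of the configured evaluation metric for each query. As a rough illustration of what such a metric computes, here is a minimal precision-at-k calculation over rated hits (a toy sketch, not Elasticsearch's implementation; the API supports several configurable metrics):

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k returned document IDs that are rated relevant.
    A toy stand-in for the metric_score the rank eval API reports."""
    top_k = ranked_ids[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc_id in top_k if doc_id in relevant_ids) / len(top_k)

# Hypothetical query results: d1 and d3 were rated relevant by a judge.
print(precision_at_k(["d1", "d2", "d3", "d4"], {"d1", "d3"}, k=4))  # 0.5
```

Documents returned by the query but absent from the ratings list are the ones reported under unrated_docs in the response.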
Get role mappings Added in 5.5.0
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The get role mappings API cannot retrieve role mappings that are defined in role mapping files.
Path parameters
-
The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. You can specify multiple mapping names as a comma-separated list. If you do not specify this parameter, the API returns information about all role mappings.
curl \
--request GET http://api.example.com/_security/role_mapping/{name}
{
"additionalProperty1": {
"enabled": true,
"metadata": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"roles": [
"string"
],
"role_templates": [
{
"format": "string",
"template": {
"source": "string",
"id": "string",
"params": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"": "painless",
"options": {
"additionalProperty1": "string",
"additionalProperty2": "string"
}
}
}
],
"rules": {
"any": [
{}
],
"all": [
{}
],
"field": {
"username": "string",
"dn": "string",
"groups": "string"
},
"except": {}
}
},
"additionalProperty2": {
"enabled": true,
"metadata": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"roles": [
"string"
],
"role_templates": [
{
"format": "string",
"template": {
"source": "string",
"id": "string",
"params": {
"additionalProperty1": {},
"additionalProperty2": {}
},
"": "painless",
"options": {
"additionalProperty1": "string",
"additionalProperty2": "string"
}
}
}
],
"rules": {
"any": [
{}
],
"all": [
{}
],
"field": {
"username": "string",
"dn": "string",
"groups": "string"
},
"except": {}
}
}
}
Enable users
Enable users in the native realm. By default, when you create users, they are enabled.
Path parameters
-
An identifier for the user.
Query parameters
-
refresh string
If true (the default), then refresh the affected shards to make this operation visible to search; if wait_for, then wait for a refresh to make this operation visible to search; if false, then do nothing with refreshes.
Values are true, false, or wait_for.
curl \
--request POST http://api.example.com/_security/user/{username}/_enable
{}
Enroll a node Added in 8.0.0
Enroll a new node to allow it to join an existing cluster with security features enabled.
The response contains all the necessary information for the joining node to bootstrap discovery and security related settings so that it can successfully join the cluster. The response contains key and certificate material that allows the caller to generate valid signed certificates for the HTTP layer of all nodes in the cluster.
curl \
--request GET http://api.example.com/_security/enroll/node
{
"http_ca_key": "string",
"http_ca_cert": "string",
"transport_ca_cert": "string",
"transport_key": "string",
"transport_cert": "string",
"nodes_addresses": [
"string"
]
}
Delete a policy Added in 7.4.0
Delete a snapshot lifecycle policy definition. This operation prevents any future snapshots from being taken but does not cancel in-progress snapshots or remove previously-taken snapshots.
Path parameters
-
The id of the snapshot lifecycle policy to remove
Query parameters
-
master_timeout string
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
timeout string
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request DELETE http://api.example.com/_slm/policy/{policy_id}
{
"acknowledged": true
}