Update By Query API

The simplest usage of _update_by_query just performs an update on every document in the index without changing the source. This is useful to pick up a new property or some other online mapping change. Here is the API:

POST twitter/_update_by_query?conflicts=proceed

That will return something like this:

{
  "took" : 147,
  "timed_out": false,
  "updated": 120,
  "deleted": 0,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "total": 120,
  "failures" : [ ]
}

_update_by_query gets a snapshot of the index when it starts and indexes what it finds using internal versioning. That means you’ll get a version conflict if the document changes between the time when the snapshot was taken and when the index request is processed. When the versions match, the document is updated and the version number is incremented.

Since internal versioning does not support the value 0 as a valid version number, documents with version equal to zero cannot be updated using _update_by_query and will fail the request.

All update and query failures cause the _update_by_query to abort and are returned in the failures of the response. The updates that have been performed still stick. In other words, the process is not rolled back, only aborted. While the first failure causes the abort, all failures that are returned by the failing bulk request are returned in the failures element; therefore it’s possible for there to be quite a few failed entities.

If you want to simply count version conflicts, and not cause the _update_by_query to abort, you can set conflicts=proceed on the url or "conflicts": "proceed" in the request body. The first example does this because it is just trying to pick up an online mapping change, and a version conflict simply means that the conflicting document was updated between the start of the _update_by_query and the time when it attempted to update the document. This is fine because that update will have picked up the online mapping update.
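
For example, a minimal sketch of the request-body form, using a match_all query so every document in the index is processed:

POST twitter/_update_by_query
{
  "conflicts": "proceed",
  "query": {
    "match_all": {}
  }
}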

Back to the API format, this will update tweets from the twitter index:

POST twitter/_update_by_query?conflicts=proceed

You can also limit _update_by_query using the Query DSL. This will update all documents from the twitter index for the user kimchy:

POST twitter/_update_by_query?conflicts=proceed
{
  "query": { 
    "term": {
      "user": "kimchy"
    }
  }
}

The query must be passed as a value to the query key, in the same way as the Search API. You can also use the q parameter, just as with the Search API.
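
For example, a sketch of the q form equivalent to the kimchy query above:

POST twitter/_update_by_query?conflicts=proceed&q=user:kimchy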

So far we’ve only been updating documents without changing their source. That is genuinely useful for things like picking up new properties but it’s only half the fun. _update_by_query supports scripts to update the document. This will increment the likes field on all of kimchy’s tweets:

POST twitter/_update_by_query
{
  "script": {
    "source": "ctx._source.likes++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}

Just as in the Update API, you can set ctx.op to change the operation that is executed:

noop

Set ctx.op = "noop" if your script decides that it doesn’t have to make any changes. That will cause _update_by_query to omit that document from its updates. This no operation will be reported in the noop counter in the response body.

delete

Set ctx.op = "delete" if your script decides that the document must be deleted. The deletion will be reported in the deleted counter in the response body.

Setting ctx.op to anything else is an error. Setting any other field in ctx is an error.
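
As an illustration, here is a hypothetical sketch that uses both operations, assuming the numeric likes field from the earlier example: tweets with negative likes are deleted, tweets with zero likes are skipped, and the rest are incremented:

POST twitter/_update_by_query
{
  "script": {
    "source": "if (ctx._source.likes < 0) { ctx.op = 'delete' } else if (ctx._source.likes == 0) { ctx.op = 'noop' } else { ctx._source.likes++ }",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}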

Note that we stopped specifying conflicts=proceed. In this case we want a version conflict to abort the process so we can handle the failure.

This API doesn’t allow you to move the documents it touches, just modify their source. This is intentional! We’ve made no provisions for removing the document from its original location.

It’s also possible to do this whole thing on multiple indices at once, just like the Search API:

POST twitter,blog/_update_by_query

If you provide routing then the routing is copied to the scroll query, limiting the process to the shards that match that routing value:

POST twitter/_update_by_query?routing=1

By default _update_by_query uses scroll batches of 1000. You can change the batch size with the scroll_size URL parameter:

POST twitter/_update_by_query?scroll_size=100

_update_by_query can also use the Ingest node feature by specifying a pipeline like this:

PUT _ingest/pipeline/set-foo
{
  "description" : "sets foo",
  "processors" : [ {
      "set" : {
        "field": "foo",
        "value": "bar"
      }
  } ]
}
POST twitter/_update_by_query?pipeline=set-foo

URL Parameters

In addition to the standard parameters like pretty, the Update By Query API also supports refresh, wait_for_completion, wait_for_active_shards, timeout, and scroll.

Sending refresh will refresh all shards in the index being updated when the request completes. This is different from the Update API’s refresh parameter, which causes just the shard that received the request to be refreshed. Also unlike the Update API, it does not support wait_for.

If the request contains wait_for_completion=false then Elasticsearch will perform some preflight checks, launch the request, and then return a task which can be used with the Tasks API to cancel or get the status of the task. Elasticsearch will also create a record of this task as a document at .tasks/task/${taskId}. This is yours to keep or remove as you see fit. When you are done with it, delete it so Elasticsearch can reclaim the space it uses.
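
For example, a sketch that launches the update as a task:

POST twitter/_update_by_query?conflicts=proceed&wait_for_completion=false

That will return something like this (the task ID is illustrative):

{
  "task": "r1A2WoRbTwKZ516z6NEs5A:36619"
}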

wait_for_active_shards controls how many copies of a shard must be active before proceeding with the request. See the wait_for_active_shards documentation for details. timeout controls how long each write request waits for unavailable shards to become available. Both work exactly how they work in the Bulk API. Because _update_by_query uses scroll search, you can also specify the scroll parameter to control how long it keeps the "search context" alive, e.g. ?scroll=10m. By default it’s 5 minutes.
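
For example, a sketch combining these parameters; the values are illustrative:

POST twitter/_update_by_query?conflicts=proceed&wait_for_active_shards=2&timeout=5m&scroll=10m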

requests_per_second can be set to any positive decimal number (1.4, 6, 1000, etc.) and throttles the rate at which _update_by_query issues batches of index operations by padding each batch with a wait time. The throttling can be disabled by setting requests_per_second to -1.
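
For example, a sketch that throttles the request to 500 per second:

POST twitter/_update_by_query?conflicts=proceed&requests_per_second=500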

The throttling is done by waiting between batches so that the scroll that _update_by_query uses internally can be given a timeout that takes the padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes will cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth". The default is -1.

Response body

The JSON response looks like this:

{
  "took" : 147,
  "timed_out": false,
  "total": 5,
  "updated": 5,
  "deleted": 0,
  "batches": 1,
  "version_conflicts": 0,
  "noops": 0,
  "retries": {
    "bulk": 0,
    "search": 0
  },
  "throttled_millis": 0,
  "requests_per_second": -1.0,
  "throttled_until_millis": 0,
  "failures" : [ ]
}

took

The number of milliseconds from start to end of the whole operation.

timed_out

This flag is set to true if any of the requests executed during the update by query timed out.

total

The number of documents that were successfully processed.

updated

The number of documents that were successfully updated.

deleted

The number of documents that were successfully deleted.

batches

The number of scroll responses pulled back by the update by query.

version_conflicts

The number of version conflicts that the update by query hit.

noops

The number of documents that were ignored because the script used for the update by query returned a noop value for ctx.op.

retries

The number of retries attempted by update by query. bulk is the number of bulk actions retried, and search is the number of search actions retried.

throttled_millis

Number of milliseconds the request slept to conform to requests_per_second.

requests_per_second

The number of requests per second effectively executed during the update by query.

throttled_until_millis

This field should always be equal to zero in an _update_by_query response. It only has meaning when using the Task API, where it indicates the next time (in milliseconds since epoch) a throttled request will be executed again in order to conform to requests_per_second.

failures

Array of failures if there were any unrecoverable errors during the process. If this is non-empty then the request aborted because of those failures. Update by query is implemented using batches. Any failure causes the entire process to abort, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the update by query from aborting on version conflicts.

Works with the Task API

You can fetch the status of all running update by query requests with the Task API:

GET _tasks?detailed=true&actions=*byquery

The response looks like:

{
  "nodes" : {
    "r1A2WoRbTwKZ516z6NEs5A" : {
      "name" : "r1A2WoR",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1:9300",
      "attributes" : {
        "testattr" : "test",
        "portsfile" : "true"
      },
      "tasks" : {
        "r1A2WoRbTwKZ516z6NEs5A:36619" : {
          "node" : "r1A2WoRbTwKZ516z6NEs5A",
          "id" : 36619,
          "type" : "transport",
          "action" : "indices:data/write/update/byquery",
          "status" : {    
            "total" : 6154,
            "updated" : 3500,
            "created" : 0,
            "deleted" : 0,
            "batches" : 4,
            "version_conflicts" : 0,
            "noops" : 0,
            "retries": {
              "bulk": 0,
              "search": 0
            },
            "throttled_millis": 0
          },
          "description" : ""
        }
      }
    }
  }
}

This object contains the actual status. It is just like the response JSON with the important addition of the total field. total is the total number of operations that the update by query expects to perform. You can estimate the progress by adding the updated, created, and deleted fields. The request will finish when their sum is equal to the total field.
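
For the status above, for example, the estimated progress is (3500 + 0 + 0) / 6154, or roughly 57%.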

With the task id you can look up the task directly. The following example retrieves information about task r1A2WoRbTwKZ516z6NEs5A:36619:

GET /_tasks/r1A2WoRbTwKZ516z6NEs5A:36619

The advantage of this API is that it integrates with wait_for_completion=false to transparently return the status of completed tasks. If the task is completed and wait_for_completion=false was set on it, then it’ll come back with a results or an error field. The cost of this feature is the document that wait_for_completion=false creates at .tasks/task/${taskId}. It is up to you to delete that document.
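
For example, a sketch that removes that record, using the .tasks/task/${taskId} path described above; the task ID is illustrative:

DELETE .tasks/task/r1A2WoRbTwKZ516z6NEs5A:36619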

Works with the Cancel Task API

Any update by query can be cancelled using the Task Cancel API:

POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel

The task ID can be found using the tasks API.

Cancellation should happen quickly but might take a few seconds. The task status API above will continue to list the update by query task until this task checks that it has been cancelled and terminates itself.

Rethrottling

The value of requests_per_second can be changed on a running update by query using the _rethrottle API:

POST _update_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1

The task ID can be found using the tasks API.

Just like when setting it on the _update_by_query API, requests_per_second can be either -1 to disable throttling or any decimal number like 1.7 or 12 to throttle to that level. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query will take effect after completing the current batch. This prevents scroll timeouts.

Slicing

Update by query supports Sliced Scroll to parallelize the updating process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.

Manual slicing

Slice an update by query manually by providing a slice id and total number of slices to each request:

POST twitter/_update_by_query
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
POST twitter/_update_by_query
{
  "slice": {
    "id": 1,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}

Which you can verify works with:

GET _refresh
POST twitter/_search?size=0&q=extra:test&filter_path=hits.total

Which results in a sensible total like this one:

{
  "hits": {
    "total": {
        "value": 120,
        "relation": "eq"
    }
  }
}
Automatic slicing

You can also let update by query automatically parallelize using Sliced Scroll to slice on _id. Use slices to specify the number of slices to use:

POST twitter/_update_by_query?refresh&slices=5
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}

Which you can also verify works with:

POST twitter/_search?size=0&q=extra:test&filter_path=hits.total

Which results in a sensible total like this one:

{
  "hits": {
    "total": {
        "value": 120,
        "relation": "eq"
    }
  }
}

Setting slices to auto will let Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source indices, it will choose the number of slices based on the index with the smallest number of shards.
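
For example, a sketch of the request above using automatic slice selection:

POST twitter/_update_by_query?refresh&slices=auto
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}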

Adding slices to _update_by_query just automates the manual process used in the section above, creating sub-requests, which means it has some quirks:

  • You can see these requests in the Tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won’t get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
  • Each sub-request gets a slightly different snapshot of the source index though these are all taken at approximately the same time.
Picking the number of slices

If slicing automatically, setting slices to auto will choose a reasonable number for most indices. If you’re slicing manually or otherwise tuning automatic slicing, use these guidelines.

Query performance is most efficient when the number of slices is equal to the number of shards in the index. If that number is large (for example, 500), choose a lower number, as too many slices will hurt performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.

Update performance scales linearly across available resources with the number of slices.

Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.

Pick up a new property

Say you created an index without dynamic mapping, filled it with data, and then added a mapping value to pick up more fields from the data:

PUT test
{
  "mappings": {
    "dynamic": false,   
    "properties": {
      "text": {"type": "text"}
    }
  }
}

POST test/_doc?refresh
{
  "text": "words words",
  "flag": "bar"
}
POST test/_doc?refresh
{
  "text": "words words",
  "flag": "foo"
}
PUT test/_mapping   
{
  "properties": {
    "text": {"type": "text"},
    "flag": {"type": "text", "analyzer": "keyword"}
  }
}

Setting dynamic to false means that new fields won’t be indexed, just stored in _source.

The PUT mapping request updates the mapping to add the new flag field. To pick up the new field you have to reindex all documents with it.

Searching for the data won’t find anything:

POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
{
  "hits" : {
    "total": {
        "value": 0,
        "relation": "eq"
    }
  }
}

But you can issue an _update_by_query request to pick up the new mapping:

POST test/_update_by_query?refresh&conflicts=proceed
POST test/_search?filter_path=hits.total
{
  "query": {
    "match": {
      "flag": "foo"
    }
  }
}
{
  "hits" : {
    "total": {
        "value": 1,
        "relation": "eq"
    }
  }
}

You can do the exact same thing when adding a field to a multi-field.
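
For example, a hypothetical sketch that adds a keyword sub-field named raw to the existing text field, then reindexes the documents so it becomes searchable:

PUT test/_mapping
{
  "properties": {
    "text": {
      "type": "text",
      "fields": {
        "raw": { "type": "keyword" }
      }
    }
  }
}
POST test/_update_by_query?refresh&conflicts=proceed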