Java API

We plan on deprecating the TransportClient in Elasticsearch 7.0 and removing it completely in 8.0. Instead, you should use the Java High Level REST Client, which executes HTTP requests rather than serialized Java requests. The migration guide describes all the steps needed to migrate.
The Java High Level REST Client currently has support for the more commonly used APIs, but there are a lot more that still need to be added. You can help us prioritise by telling us which missing APIs you need for your application by adding a comment to this issue: Java high-level REST client completeness.
Any missing APIs can always be implemented today by using the low level Java REST Client with JSON request and response bodies.
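For example, a watch could be stored through the low-level client by sending its JSON definition to the Watcher REST endpoint. The following sketch is illustrative only; it assumes a plain HTTP node on localhost:9200 and a hypothetical my-watch id, and the watch body is a minimal example rather than one taken from this guide:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

// Build a low-level REST client (host, port and scheme are assumptions for a local node).
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();

// Register a watch by sending its JSON definition to the Watcher endpoint.
Request request = new Request("PUT", "/_xpack/watcher/watch/my-watch");
request.setJsonEntity(
    "{"
    + "\"trigger\": { \"schedule\": { \"interval\": \"10m\" } },"
    + "\"actions\": { \"log_it\": { \"logging\": { \"text\": \"watch fired\" } } }"
    + "}");

Response response = restClient.performRequest(request);
String responseBody = EntityUtils.toString(response.getEntity());

restClient.close();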
X-Pack provides a Java client called WatcherClient that adds native Java support for Watcher. To obtain a WatcherClient instance, make sure you first set up the XPackClient.
Installing XPackClient

You first need to make sure the x-pack-transport-6.7.2 JAR file is in the classpath. You can extract this JAR from the downloaded X-Pack bundle.

If you use Maven to manage dependencies, add the following to the pom.xml:
<project ...>

    <repositories>
        <!-- add the elasticsearch repo -->
        <repository>
            <id>elasticsearch-releases</id>
            <url>https://artifacts.elastic.co/maven</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
        </repository>
        ...
    </repositories>
    ...

    <dependencies>
        <!-- add the x-pack jar as a dependency -->
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>x-pack-transport</artifactId>
            <version>6.7.2</version>
        </dependency>
        ...
    </dependencies>
    ...

</project>
If you use Gradle, add the dependencies to build.gradle:
repositories {
    /* ... Any other repositories ... */
    // Add the Elasticsearch Maven Repository
    maven {
        url "https://artifacts.elastic.co/maven"
    }
}

dependencies {
    // Provide the x-pack jar on the classpath for compilation and at runtime
    compile "org.elasticsearch.client:x-pack-transport:6.7.2"
    /* ... */
}
You can also download the X-Pack Transport JAR manually, directly from our Maven repository.
Obtaining the WatcherClient

To obtain an instance of the WatcherClient you first need to create the XPackClient. The XPackClient is a wrapper around the standard Java Elasticsearch Client:
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.xpack.client.PreBuiltXPackTransportClient;
import org.elasticsearch.xpack.core.XPackClient;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.watcher.client.WatcherClient;
...

TransportClient client = new PreBuiltXPackTransportClient(Settings.builder()
        .put("cluster.name", "myClusterName")
        ...
        .build())
    .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));

XPackClient xpackClient = new XPackClient(client);
WatcherClient watcherClient = xpackClient.watcher();
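The XPackClient and the WatcherClient it hands out are wrappers around the transport client, so releasing resources is a matter of closing the underlying client once you are done making Watcher calls. A minimal sketch, reusing the client variable from the snippet above:

// Shut down the transport client's threads and connections when Watcher calls are finished.
client.close();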
Put watch API

The put watch API either registers a new watch in Watcher or updates an existing one. Once registered, a new document will be added to the .watches index, representing the watch, and the watch trigger will immediately be registered with the relevant trigger engine (typically the scheduler, for the schedule trigger).
Putting a watch must be done via this API only. Do not put a watch directly to the .watches index using Elasticsearch’s Index API. When the Elasticsearch security features are enabled, make sure no write privileges are granted to anyone over the .watches index.
The following example adds a watch with the my-watch id that has the following characteristics:

- The watch schedule triggers every minute.
- The watch search input looks for any 404 HTTP responses that occurred in the last five minutes.
- The watch condition checks if any hits were found.
- When hits are found, the watch action sends an email to the administrator.
WatchSourceBuilder watchSourceBuilder = WatchSourceBuilders.watchBuilder();

// Set the trigger
watchSourceBuilder.trigger(TriggerBuilders.schedule(Schedules.cron("0 0/1 * * * ?")));

// Create the search request to use for the input
SearchRequest request = Requests.searchRequest("idx").source(searchSource()
        .query(boolQuery()
                .must(matchQuery("response", 404))
                .filter(rangeQuery("date").gt("{{ctx.trigger.scheduled_time}}"))
                .filter(rangeQuery("date").lt("{{ctx.execution_time}}"))
        ));

// Create the search input
SearchInput input = new SearchInput(new WatcherSearchTemplateRequest(new String[]{"idx"}, null,
        SearchType.DEFAULT, WatcherSearchTemplateRequest.DEFAULT_INDICES_OPTIONS,
        new BytesArray(request.source().toString())), null, null, null);

// Set the input
watchSourceBuilder.input(input);

// Set the condition
watchSourceBuilder.condition(new ScriptCondition(new Script("ctx.payload.hits.total > 1")));

// Create the email template to use for the action
EmailTemplate.Builder emailBuilder = EmailTemplate.builder();
emailBuilder.to("someone@domain.host.com");
emailBuilder.subject("404 recently encountered");
EmailAction.Builder emailActionBuilder = EmailAction.builder(emailBuilder.build());

// Add the action
watchSourceBuilder.addAction("email_someone", emailActionBuilder);

PutWatchResponse putWatchResponse = watcherClient.preparePutWatch("my-watch")
        .setSource(watchSourceBuilder)
        .get();
While the above snippet spells out all the concrete classes that make up our watch, using the available builder classes along with static imports can significantly simplify and compact your code:
PutWatchResponse putWatchResponse2 = watcherClient.preparePutWatch("my-watch")
        .setSource(watchBuilder()
                .trigger(schedule(cron("0 0/1 * * * ?")))
                .input(searchInput(new WatcherSearchTemplateRequest(new String[]{"idx"}, null,
                        SearchType.DEFAULT, WatcherSearchTemplateRequest.DEFAULT_INDICES_OPTIONS,
                        searchSource()
                                .query(boolQuery()
                                        .must(matchQuery("response", 404))
                                        .filter(rangeQuery("date").gt("{{ctx.trigger.scheduled_time}}"))
                                        .filter(rangeQuery("date").lt("{{ctx.execution_time}}"))
                                ).buildAsBytes())))
                .condition(compareCondition("ctx.payload.hits.total", CompareCondition.Op.GT, 1L))
                .addAction("email_someone", emailAction(EmailTemplate.builder()
                        .to("someone@domain.host.com")
                        .subject("404 recently encountered"))))
        .get();
- Use TriggerBuilders and Schedules classes to define the trigger
- Use InputBuilders class to define the input
- Use ConditionBuilders class to define the condition
- Use ActionBuilders to define the actions
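Either way, the response indicates whether the call registered a new watch or replaced an existing definition. A small sketch, assuming the PutWatchResponse in this version exposes the usual isCreated() and getVersion() accessors:

// true if "my-watch" did not exist before this call; false if an existing
// definition was overwritten (accessors assumed, see the note above)
boolean created = putWatchResponse.isCreated();
long version = putWatchResponse.getVersion();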
Get watch API

This API retrieves a watch by its id.

The following example gets a watch with the my-watch id:
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
You can access the watch definition through the source of the response:
XContentSource source = getWatchResponse.getSource();
The XContentSource provides methods to explore the source:
Map<String, Object> map = source.getAsMap();
Or get a specific value associated with a known key:
String host = source.getValue("input.http.request.host");
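If no watch is registered under the requested id, the response is still returned, so it can be worth checking for that before reading the source. A small sketch, assuming the response exposes an isFound() accessor mirroring the found flag of the REST API:

// Guard against a missing watch before exploring its definition.
if (getWatchResponse.isFound()) {
    Map<String, Object> watchDefinition = getWatchResponse.getSource().getAsMap();
}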
Delete watch API

The delete watch API removes a watch (identified by its id) from Watcher. Once removed, the document representing the watch in the .watches index is gone and it will never be executed again.
Please note that deleting a watch does not delete any watch execution records related to this watch from the watch history.
Deleting a watch must be done via this API only. Do not delete the watch directly from the .watches index using Elasticsearch’s DELETE Document API. If the Elasticsearch security features are enabled, make sure no write privileges are granted to anyone over the .watches index.
The following example deletes a watch with the my-watch id:
DeleteWatchResponse deleteWatchResponse = watcherClient.prepareDeleteWatch("my-watch").get();
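The response reports whether a watch with that id actually existed. A small sketch, again assuming an isFound() accessor on the response, mirroring the found flag of the REST API:

// false means there was no watch registered under "my-watch" in the first place
boolean found = deleteWatchResponse.isFound();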
Execute watch API

This API enables on-demand execution of a watch stored in the .watches index. It can be used to test a watch without executing all of its actions, or while ignoring its condition. The response contains a BytesReference that represents the record that would be written to the .watcher-history index.

The following example executes a watch with the name my-watch:
ExecuteWatchResponse executeWatchResponse = watcherClient.prepareExecuteWatch("my-watch")

        // execute the actions, ignoring the watch condition
        .setIgnoreCondition(true)

        // A map containing alternative input to use instead of the output of
        // the watch's input
        .setAlternativeInput(new HashMap<String, Object>())

        // Trigger data to use (Note that "scheduled_time" is not provided to the
        // ctx.trigger by this execution method so you may want to include it here)
        .setTriggerData(new HashMap<String, Object>())

        // Simulating the "email_admin" action while ignoring its throttle state. Use
        // "_all" to set the action execution mode to all actions
        .setActionMode("_all", ActionExecutionMode.FORCE_SIMULATE)

        // If the execution of this watch should be written to the `.watcher-history`
        // index and reflected in the persisted Watch
        .setRecordExecution(false)

        // Indicates whether the watch should execute in debug mode. In debug mode the
        // returned watch record will hold the execution vars
        .setDebug(true)

        .get();
Once the response is returned, you can explore it by getting the execution record source. The XContentSource class provides convenient methods to explore the source:
XContentSource source = executeWatchResponse.getRecordSource();
String actionId = source.getValue("result.actions.0.id");
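Other parts of the record can be read the same way, for example its overall state and whether the condition was met for this run. The field names below follow the watch record structure written to the watch history and are shown for illustration:

// Overall state of this execution record, e.g. "executed" or "execution_not_needed"
String recordState = source.getValue("state");

// Whether the watch condition evaluated to true for this run
Boolean conditionMet = source.getValue("result.condition.met");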
Ack watch API

Acknowledging a watch enables you to manually throttle execution of the watch actions. The action’s acknowledgement state is stored in the status.actions.<id>.ack.state structure.
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
State state = getWatchResponse.getStatus().actionStatus("my-action").ackStatus().state();
The action state of a newly created watch is awaits_successful_execution. When the watch runs and its condition is met, the state changes to ackable. Acknowledging the action sets the state to acked.

When an action state is set to acked, further executions of that action are throttled until its state is reset to awaits_successful_execution. This happens when the watch condition is no longer met (the condition evaluates to false).
The following snippet shows how to acknowledge an action. You specify the IDs of the watch and the action you want to acknowledge, in this example my-watch and my-action:
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch").setActionIds("my-action").get();
As a response to this request, the status of the watch and the state of the action are returned and can be obtained from the AckWatchResponse object:
WatchStatus status = ackResponse.getStatus();
ActionStatus actionStatus = status.actionStatus("my-action");
ActionStatus.AckStatus ackStatus = actionStatus.ackStatus();
ActionStatus.AckStatus.State ackState = ackStatus.state();
You can acknowledge multiple actions:
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch")
        .setActionIds("action1", "action2")
        .get();
To acknowledge all actions of a watch, specify only the watch ID:
AckWatchResponse ackResponse = watcherClient.prepareAckWatch("my-watch").get();
Activate watch API

A watch can be either active or inactive. This API enables you to activate a currently inactive watch.
The status of an inactive watch is returned with the watch definition when you call the get watch API:
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
boolean active = getWatchResponse.getStatus().state().isActive();
The following snippet shows how you can activate a watch:
ActivateWatchResponse activateResponse = watcherClient.prepareActivateWatch("my-watch", true).get();
boolean active = activateResponse.getStatus().state().isActive();
The new state of the watch is returned as part of its overall status.
Deactivate watch API

A watch can be either active or inactive. This API enables you to deactivate a currently active watch.
The status of an active watch is returned with the watch definition when you call the get watch API:
GetWatchResponse getWatchResponse = watcherClient.prepareGetWatch("my-watch").get();
boolean active = getWatchResponse.getStatus().state().isActive();
The following snippet shows how you can deactivate a watch:
ActivateWatchResponse activateResponse = watcherClient.prepareActivateWatch("my-watch", false).get();
boolean active = activateResponse.getStatus().state().isActive();
The new state of the watch is returned as part of its overall status.
Stats API

The stats API returns the current Watcher metrics. You can control what metrics this API returns using the metric parameter.

The following example queries the stats API:
WatcherStatsResponse watcherStatsResponse = watcherClient.prepareWatcherStats().get();
A successful call returns a response structure that can be accessed as shown:
WatcherBuild build = watcherStatsResponse.getBuild();

// The current size of the watcher execution queue
long executionQueueSize = watcherStatsResponse.getThreadPoolQueueSize();

// The maximum size the watch execution queue has grown to
long executionQueueMaxSize = watcherStatsResponse.getThreadPoolMaxSize();

// The total number of watches registered in the system
long totalNumberOfWatches = watcherStatsResponse.getWatchesCount();

// Watcher state (STARTING, STOPPED or STARTED)
WatcherState watcherState = watcherStatsResponse.getWatcherState();
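The reported state can be used to guard calls that require a running Watcher. A small sketch using the watcherState value from the snippet above:

// Only interact with the watch APIs once the service reports that it has started.
if (watcherState == WatcherState.STARTED) {
    // safe to put, execute or acknowledge watches as shown earlier in this chapter
}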
Service API

The Watcher service API allows you to control the lifecycle of the Watcher service. The following example starts the watcher service:
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().start().get();
The following example stops the watcher service:
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().stop().get();
The following example restarts the watcher service:
WatcherServiceResponse watcherServiceResponse = watcherClient.prepareWatchService().restart().get();
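Each of these calls is acknowledged by the cluster, so the response can be checked before assuming the service actually changed state. A small sketch, assuming the response exposes the usual isAcknowledged() accessor:

// true when the cluster accepted the start, stop or restart request
boolean acknowledged = watcherServiceResponse.isAcknowledged();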