Monitoring settings in Elasticsearch

By default, Elasticsearch monitoring features are enabled but data collection is disabled. To enable data collection, use the xpack.monitoring.collection.enabled setting.

Except where noted otherwise, these settings can be dynamically updated on a live cluster with the cluster-update-settings API.

To adjust how monitoring data is displayed in the monitoring UI, configure the xpack.monitoring settings in kibana.yml. To control how monitoring data is collected from Logstash, configure monitoring settings in logstash.yml.

For more information, see Monitor a cluster.
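For instance, a minimal hypothetical sketch of turning on data collection in elasticsearch.yml (because the setting is dynamic, it can instead be applied to a running cluster through the cluster-update-settings API):

# Hypothetical elasticsearch.yml snippet: enable collection of monitoring data on this cluster.
# The same dynamic setting can be changed at runtime via the cluster-update-settings API.
xpack.monitoring.collection.enabled: true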
General monitoring settings

- xpack.monitoring.enabled
  [7.8.0] Deprecated in 7.8.0. Basic License features should always be enabled. (Static) This deprecated setting has no effect.
Monitoring collection settings

The xpack.monitoring.collection settings control how data is collected from your Elasticsearch nodes.
- xpack.monitoring.collection.enabled
  (Dynamic) Set to true to enable the collection of monitoring data. When this setting is false (the default), Elasticsearch monitoring data is not collected and all monitoring data from other sources such as Kibana, Beats, and Logstash is ignored.

- xpack.monitoring.collection.interval
  [6.3.0] Deprecated in 6.3.0. Use xpack.monitoring.collection.enabled set to false instead. (Dynamic) Setting this to -1 to disable data collection is no longer supported beginning with 7.0.0.
  Controls how often data samples are collected. Defaults to 10s. If you modify the collection interval, set the xpack.monitoring.min_interval_seconds option in kibana.yml to the same value.

- xpack.monitoring.elasticsearch.collection.enabled
  (Dynamic) Controls whether statistics about your Elasticsearch cluster should be collected. Defaults to true. This is different from xpack.monitoring.collection.enabled, which allows you to enable or disable all monitoring collection. However, this setting simply disables the collection of Elasticsearch data while still allowing other data (e.g., Kibana, Logstash, Beats, or APM Server monitoring data) to pass through this cluster.

- xpack.monitoring.collection.cluster.stats.timeout
  (Dynamic) Timeout for collecting the cluster statistics, in time units. Defaults to 10s.

- xpack.monitoring.collection.node.stats.timeout
  (Dynamic) Timeout for collecting the node statistics, in time units. Defaults to 10s.

- xpack.monitoring.collection.indices
  (Dynamic) Controls which indices the monitoring features collect data from. Defaults to all indices. Specify the index names as a comma-separated list, for example test1,test2,test3. Names can include wildcards, for example test*. You can explicitly exclude indices by prepending -. For example, test*,-test3 monitors all indices that start with test except for test3. System indices like .security* or .kibana* always start with a . and generally should be monitored. Consider adding .* to the list of indices to ensure monitoring of system indices, for example .*,test*,-test3 (see the sketch after this list).

- xpack.monitoring.collection.index.stats.timeout
  (Dynamic) Timeout for collecting index statistics, in time units. Defaults to 10s.

- xpack.monitoring.collection.index.recovery.active_only
  (Dynamic) Controls whether or not all recoveries are collected. Set to true to collect only active recoveries. Defaults to false.

- xpack.monitoring.collection.index.recovery.timeout
  (Dynamic) Timeout for collecting the recovery information, in time units. Defaults to 10s.

- xpack.monitoring.history.duration
  (Dynamic) Retention duration beyond which the indices created by a monitoring exporter are automatically deleted, in time units. Defaults to 7d (7 days). This setting has a minimum value of 1d (1 day) to ensure that something is being monitored, and it cannot be disabled. This setting currently impacts only local-type exporters. Indices created using the http exporter are not deleted automatically.

- xpack.monitoring.exporters
  (Static) Configures where the agent stores monitoring data. By default, the agent uses a local exporter that indexes monitoring data on the cluster where it is installed. Use an HTTP exporter to send data to a separate monitoring cluster. For more information, see Local exporter settings, HTTP exporter settings, and How it works.
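For instance, a hypothetical elasticsearch.yml sketch combining the collection settings above, restricting collection to system indices plus a test* pattern while excluding test3 and shortening the retention window (values are illustrative):

# Hypothetical sketch of the collection settings described above.
xpack.monitoring.collection.enabled: true
# Include system indices (.*) and test* indices, but exclude test3.
xpack.monitoring.collection.indices: ".*,test*,-test3"
# Keep locally exported monitoring indices for 3 days instead of the 7d default.
xpack.monitoring.history.duration: 3d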
Local exporter settings

The local exporter is the default exporter used by monitoring features. As the name implies, it exports data to the local cluster, which means that little configuration is needed.

If you do not supply any exporters, the monitoring features automatically create one for you. If any exporter is provided, no default is added.

xpack.monitoring.exporters.my_local:
  type: local
- type
  The value for a local exporter must always be local and it is required.

- use_ingest
  Whether to supply a placeholder pipeline to the cluster and a pipeline processor with every bulk request. The default value is true. If disabled, the exporter does not use pipelines, which means that a future release cannot automatically upgrade bulk requests to future-proof them.

- cluster_alerts.management.enabled
  Whether to create cluster alerts for this cluster. The default value is true. To use this feature, Watcher must be enabled. If you have a basic license, cluster alerts are not displayed.

- wait_master.timeout
  Time to wait for the master node to set up the local exporter for monitoring, in time units. After that wait period, the non-master nodes warn the user about possible missing configuration. Defaults to 30s.
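As a hedged illustration, a local exporter with these optional settings spelled out might look like the following (the exporter name my_local is arbitrary, and all values shown are the defaults):

# Hypothetical sketch: a named local exporter with its optional settings made explicit.
xpack.monitoring.exporters.my_local:
  type: local
  use_ingest: true                          # default; supply a placeholder pipeline
  cluster_alerts.management.enabled: true   # default; requires Watcher
  wait_master.timeout: 30s                  # default wait for the master to set up the exporter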
HTTP exporter settings

The following lists the settings that can be supplied with the http exporter. All settings are shown relative to the name you select for your exporter:

xpack.monitoring.exporters.my_remote:
  type: http
  host: ["host:port", ...]
- type
  The value for an HTTP exporter must always be http and it is required.

- host
  Host supports multiple formats, both as an array and as a single value. Supported formats include hostname, hostname:port, http://hostname, http://hostname:port, https://hostname, and https://hostname:port. Hosts cannot be assumed. The default scheme is always http and the default port is always 9200 if not supplied as part of the host string.

  xpack.monitoring.exporters:
    example1:
      type: http
      host: "10.1.2.3"
    example2:
      type: http
      host: ["http://10.1.2.4"]
    example3:
      type: http
      host: ["10.1.2.5", "10.1.2.6"]
    example4:
      type: http
      host: ["https://10.1.2.3:9200"]
- auth.username
  The username is required if auth.secure_password or auth.password is supplied.

- auth.secure_password
  (Secure, reloadable) The password for the auth.username. Takes precedence over auth.password if it is also specified.

- auth.password
  [7.7.0] Deprecated in 7.7.0. Use auth.secure_password instead. The password for the auth.username. If auth.secure_password is also specified, this setting is ignored.
- connection.timeout
  Amount of time that the HTTP connection is supposed to wait for a socket to open for the request, in time units. The default value is 6s.

- connection.read_timeout
  Amount of time that the HTTP connection is supposed to wait for a socket to send back a response, in time units. The default value is 10 * connection.timeout (60s if neither are set).

- ssl
  Each HTTP exporter can define its own TLS/SSL settings or inherit them. See X-Pack monitoring TLS/SSL settings.

- proxy.base_path
  The base path to prefix any outgoing request, such as /base/path (e.g., bulk requests would then be sent as /base/path/_bulk). There is no default value.

- headers
  Optional headers that are added to every request, which can assist with routing requests through proxies.

  xpack.monitoring.exporters.my_remote:
    headers:
      X-My-Array: [abc, def, xyz]
      X-My-Header: abc123

  Array-based headers are sent n times, where n is the size of the array. Content-Type and Content-Length cannot be set. Any headers created by the monitoring agent override anything defined here.
- index.name.time_format
  A mechanism for changing the default date suffix for the (by default, daily) monitoring indices. The default value is yyyy.MM.dd, which is why the indices are created daily.

- use_ingest
  Whether to supply a placeholder pipeline to the monitoring cluster and a pipeline processor with every bulk request. The default value is true. If disabled, the exporter does not use pipelines, which means that a future release cannot automatically upgrade bulk requests to future-proof them.

- cluster_alerts.management.enabled
  Whether to create cluster alerts for this cluster. The default value is true. To use this feature, Watcher must be enabled. If you have a basic license, cluster alerts are not displayed.

- cluster_alerts.management.blacklist
  Prevents the creation of specific cluster alerts. It also removes any applicable watches that already exist in the current cluster.

  You can add any of the following watch identifiers to the list of blocked alerts:

  - elasticsearch_cluster_status
  - elasticsearch_version_mismatch
  - elasticsearch_nodes
  - kibana_version_mismatch
  - logstash_version_mismatch
  - xpack_license_expiration

  For example: ["elasticsearch_version_mismatch","xpack_license_expiration"] (see the sketch after this list).
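Putting the HTTP exporter options together, a hedged sketch of a remote exporter with authentication and blocked cluster alerts might look like the following (the exporter name, host, and username are placeholders; the password itself would normally be stored in the secure settings keystore as auth.secure_password):

# Hypothetical sketch of an HTTP exporter in elasticsearch.yml; names and hosts are placeholders.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  auth.username: remote_monitorer           # auth.secure_password goes in the secure settings keystore
  connection.timeout: 6s                    # default
  connection.read_timeout: 60s              # default is 10 * connection.timeout
  cluster_alerts.management.blacklist: ["elasticsearch_version_mismatch", "xpack_license_expiration"]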
X-Pack monitoring TLS/SSL settings

You can configure the following TLS/SSL settings.

- xpack.monitoring.exporters.$NAME.ssl.supported_protocols
  (Static) Supported protocols with versions. Valid protocols: SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.2, TLSv1.3. If the JVM's SSL provider supports TLSv1.3, the default is TLSv1.3,TLSv1.2,TLSv1.1. Otherwise, the default is TLSv1.2,TLSv1.1.

  Elasticsearch relies on your JDK's implementation of SSL and TLS. View Supported SSL/TLS versions by JDK version for more information.

  If xpack.security.fips_mode.enabled is true, you cannot use SSLv2Hello or SSLv3. See FIPS 140-2.
- xpack.monitoring.exporters.$NAME.ssl.verification_mode
  (Static) Controls the verification of certificates. Valid values are:

  - full, which verifies that the provided certificate is signed by a trusted authority (CA) and also verifies that the server's hostname (or IP address) matches the names identified within the certificate.
  - certificate, which verifies that the provided certificate is signed by a trusted authority (CA), but does not perform any hostname verification.
  - none, which performs no verification of the server's certificate. This mode disables many of the security benefits of SSL/TLS and should only be used after very careful consideration. It is primarily intended as a temporary diagnostic mechanism when attempting to resolve TLS errors; its use on production clusters is strongly discouraged.

  The default value is full.
- xpack.monitoring.exporters.$NAME.ssl.cipher_suites
  (Static) Supported cipher suites vary depending on which version of Java you use. For example, for version 12 the default value is TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA.

  For more information, see Oracle's Java Cryptography Architecture documentation.
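As an illustrative, hedged example of how the per-exporter ssl.* settings above compose with an HTTP exporter (the exporter name, host, and path are placeholders):

# Hypothetical sketch: TLS settings scoped to a single HTTP exporter named my_remote.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  ssl.supported_protocols: ["TLSv1.3", "TLSv1.2"]
  ssl.verification_mode: full               # default; verifies the CA signature and hostname
  ssl.certificate_authorities: ["/path/to/ca.crt"]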
X-Pack monitoring TLS/SSL key and trusted certificate settings

The following settings are used to specify a private key, certificate, and the trusted certificates that should be used when communicating over an SSL/TLS connection. A private key and certificate are optional and would be used if the server requires client authentication for PKI authentication.

PEM encoded files

When using PEM encoded files, use the following settings:
- xpack.monitoring.exporters.$NAME.ssl.key
  (Static) Path to a PEM encoded file containing the private key. If HTTP client authentication is required, it uses this file. You cannot use this setting and ssl.keystore.path at the same time.

- xpack.monitoring.exporters.$NAME.ssl.key_passphrase
  (Static) The passphrase that is used to decrypt the private key. Since the key might not be encrypted, this value is optional. You cannot use this setting and ssl.secure_key_passphrase at the same time.

- xpack.monitoring.exporters.$NAME.ssl.secure_key_passphrase
  (Secure) The passphrase that is used to decrypt the private key. Since the key might not be encrypted, this value is optional.

- xpack.monitoring.exporters.$NAME.ssl.certificate
  (Static) Specifies the path for the PEM encoded certificate (or certificate chain) that is associated with the key. This setting can be used only if ssl.key is set.

- xpack.monitoring.exporters.$NAME.ssl.certificate_authorities
  (Static) List of paths to PEM encoded certificate files that should be trusted. This setting and ssl.truststore.path cannot be used at the same time.
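A hedged sketch of the PEM variant for an HTTP exporter, with placeholder file paths (the key passphrase, if any, would normally be stored in the secure settings keystore as ssl.secure_key_passphrase):

# Hypothetical sketch: PEM encoded key, certificate, and trusted CAs for one exporter.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  ssl.key: /path/to/client.key                      # private key (PEM)
  ssl.certificate: /path/to/client.crt              # certificate or chain for that key
  ssl.certificate_authorities: ["/path/to/ca.crt"]  # CAs to trust; mutually exclusive with ssl.truststore.path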
Java keystore files

When using Java keystore files (JKS), which contain the private key, certificate, and certificates that should be trusted, use the following settings:
- xpack.monitoring.exporters.$NAME.ssl.keystore.path
  (Static) The path for the keystore file that contains a private key and certificate. It must be either a Java keystore (jks) or a PKCS#12 file. You cannot use this setting and ssl.key at the same time.

- xpack.monitoring.exporters.$NAME.ssl.keystore.password
  (Static) The password for the keystore.

- xpack.monitoring.exporters.$NAME.ssl.keystore.secure_password
  (Secure) The password for the keystore.

- xpack.monitoring.exporters.$NAME.ssl.keystore.key_password
  (Static) The password for the key in the keystore. The default is the keystore password. You cannot use this setting and ssl.keystore.secure_password at the same time.

- xpack.monitoring.exporters.$NAME.ssl.keystore.secure_key_password
  (Secure) The password for the key in the keystore. The default is the keystore password.

- xpack.monitoring.exporters.$NAME.ssl.truststore.path
  (Static) The path for the keystore that contains the certificates to trust. It must be either a Java keystore (jks) or a PKCS#12 file. You cannot use this setting and ssl.certificate_authorities at the same time.

- xpack.monitoring.exporters.$NAME.ssl.truststore.password
  (Static) The password for the truststore. You cannot use this setting and ssl.truststore.secure_password at the same time.

- xpack.monitoring.exporters.$NAME.ssl.truststore.secure_password
  (Secure) Password for the truststore.
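A hedged sketch of the keystore/truststore variant, with placeholder paths (the corresponding passwords would typically be added to the secure settings keystore as ssl.keystore.secure_password and ssl.truststore.secure_password):

# Hypothetical sketch: JKS keystore and truststore for one HTTP exporter.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  ssl.keystore.path: /path/to/client.jks      # holds the private key and certificate
  ssl.truststore.path: /path/to/trust.jks     # holds the certificates to trust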
PKCS#12 files

Elasticsearch can be configured to use PKCS#12 container files (.p12 or .pfx files) that contain the private key, certificate, and certificates that should be trusted.

PKCS#12 files are configured in the same way as Java keystore files:
- xpack.monitoring.exporters.$NAME.ssl.keystore.path
  (Static) The path for the keystore file that contains a private key and certificate. It must be either a Java keystore (jks) or a PKCS#12 file. You cannot use this setting and ssl.key at the same time.

- xpack.monitoring.exporters.$NAME.ssl.keystore.type
  (Static) The format of the keystore file. It must be either jks or PKCS12. If the keystore path ends in ".p12", ".pfx", or ".pkcs12", this setting defaults to PKCS12. Otherwise, it defaults to jks.

- xpack.monitoring.exporters.$NAME.ssl.keystore.password
  (Static) The password for the keystore.

- xpack.monitoring.exporters.$NAME.ssl.keystore.secure_password
  (Secure) The password for the keystore.

- xpack.monitoring.exporters.$NAME.ssl.keystore.key_password
  (Static) The password for the key in the keystore. The default is the keystore password. You cannot use this setting and ssl.keystore.secure_password at the same time.

- xpack.monitoring.exporters.$NAME.ssl.keystore.secure_key_password
  (Secure) The password for the key in the keystore. The default is the keystore password.

- xpack.monitoring.exporters.$NAME.ssl.truststore.path
  (Static) The path for the keystore that contains the certificates to trust. It must be either a Java keystore (jks) or a PKCS#12 file. You cannot use this setting and ssl.certificate_authorities at the same time.

- xpack.monitoring.exporters.$NAME.ssl.truststore.type
  (Static) Set this to PKCS12 to indicate that the truststore is a PKCS#12 file.

- xpack.monitoring.exporters.$NAME.ssl.truststore.password
  (Static) The password for the truststore. You cannot use this setting and ssl.truststore.secure_password at the same time.

- xpack.monitoring.exporters.$NAME.ssl.truststore.secure_password
  (Secure) Password for the truststore.
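For illustration only, a hedged PKCS#12 sketch with placeholder paths; because the keystore path ends in .p12, ssl.keystore.type defaults to PKCS12 without being set explicitly:

# Hypothetical sketch: PKCS#12 container files for one HTTP exporter.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  ssl.keystore.path: /path/to/client.p12     # keystore.type defaults to PKCS12 for a .p12 path
  ssl.truststore.path: /path/to/trust.p12
  ssl.truststore.type: PKCS12                # mark the truststore as PKCS#12 (see ssl.truststore.type above)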
PKCS#11 tokens

Elasticsearch can be configured to use a PKCS#11 token that contains the private key, certificate, and certificates that should be trusted.

PKCS#11 tokens require additional configuration on the JVM level and can be enabled via the following settings:
- xpack.monitoring.exporters.$NAME.keystore.type
  (Static) Set this to PKCS11 to indicate that the PKCS#11 token should be used as a keystore.

- xpack.monitoring.exporters.$NAME.truststore.type
  (Static) The format of the truststore file. For the Java keystore format, use jks. For PKCS#12 files, use PKCS12. For a PKCS#11 token, use PKCS11. The default is jks.
When configuring the PKCS#11 token that your JVM is configured to use as a keystore or a truststore for Elasticsearch, the PIN for the token can be configured by setting the appropriate value to ssl.truststore.password or ssl.truststore.secure_password in the context that you are configuring.

Since there can only be one PKCS#11 token configured, only one keystore and truststore will be usable for configuration in Elasticsearch. This in turn means that only one certificate can be used for TLS, both in the transport and the HTTP layer.
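To make this concrete, a hedged sketch of pointing one exporter at a PKCS#11 token (the token itself must already be configured at the JVM level; the PIN would typically be supplied via ssl.truststore.secure_password in the secure settings keystore, as described above):

# Hypothetical sketch: use a JVM-configured PKCS#11 token for one exporter's TLS material.
xpack.monitoring.exporters.my_remote:
  type: http
  host: ["https://monitoring.example.com:9200"]
  keystore.type: PKCS11        # use the PKCS#11 token as the keystore
  truststore.type: PKCS11      # and as the truststore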