Configure the Elasticsearch output
The Elasticsearch output sends events directly to Elasticsearch by using the Elasticsearch HTTP API.
Compatibility: This output works with all compatible versions of Elasticsearch. See the Elastic Support Matrix.
This example configures an Elasticsearch output called `default` in the `elastic-agent.yml` file:
outputs:
  default:
    type: elasticsearch
    hosts: [127.0.0.1:9200]
    username: elastic
    password: changeme
This example is similar to the previous one, except that it uses the recommended token-based (API key) authentication:
outputs:
  default:
    type: elasticsearch
    hosts: [127.0.0.1:9200]
    api_key: "my_api_key"
Token-based authentication is required in an Elastic Cloud Serverless environment.
The `elasticsearch` output type supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run Elastic Agent with minimal configuration.
- Commonly used settings
- Authentication settings
- Compatibility setting
- Data parsing, filtering, and manipulation settings
- HTTP settings
- Memory queue settings
- Performance tuning settings
enabled
- (boolean) Enables or disables the output. If set to `false`, the output is disabled. Default: `true`
hosts
- (list) The list of Elasticsearch nodes to connect to. The events are distributed to these nodes in round robin order. If one node becomes unreachable, the event is automatically sent to another node. Each Elasticsearch node can be defined as a `URL` or `IP:PORT`. For example: `http://192.15.3.2`, `https://es.found.io:9230`, or `192.24.3.2:9300`. If no port is specified, `9200` is used.

  Note: When a node is defined as an `IP:PORT`, the scheme and path are taken from the `protocol` and `path` settings.

  outputs:
    default:
      type: elasticsearch
      hosts: ["10.45.3.2:9220", "10.45.3.1:9230"]
      protocol: https
      path: /elasticsearch
- In this example, the Elasticsearch nodes are available at `https://10.45.3.2:9220/elasticsearch` and `https://10.45.3.1:9230/elasticsearch`. Note that Elasticsearch nodes in the Elastic Cloud Serverless environment are exposed on port 443.
protocol
- (string) The name of the protocol Elasticsearch is reachable on. The options are `http` or `https`. The default is `http`. However, if you specify a URL for `hosts`, the value of `protocol` is overridden by whatever scheme you specify in the URL.

proxy_disable
- (boolean) If set to `true`, all proxy settings, including the `HTTP_PROXY` and `HTTPS_PROXY` variables, are ignored. Default: `false`
proxy_headers
- (string) Additional headers to send to proxies during CONNECT requests.
proxy_url
- (string) The URL of the proxy to use when connecting to the Elasticsearch servers. The value may be either a complete URL or a `host[:port]`, in which case the `http` scheme is assumed. If a value is not specified through the configuration file, proxy environment variables are used. See the Go documentation for more information about the environment variables.
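As a sketch, routing the output through a proxy might look like the following; the proxy and Elasticsearch hostnames are illustrative placeholders:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://es.example.com:9200"]
    # All connections to Elasticsearch are tunneled through this proxy.
    # The http scheme would be assumed if only host:port were given.
    proxy_url: "http://proxy.example.com:3128"
```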
When sending data to a secured cluster through the `elasticsearch` output, Elastic Agent can use any of the following authentication methods:
- Basic authentication credentials
- Token-based (API key) authentication
- Public Key Infrastructure (PKI) certificates
- Kerberos
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    username: "your-username"
    password: "your-password"
password
- (string) The basic authentication password for connecting to Elasticsearch.

username
- (string) The basic authentication username for connecting to Elasticsearch. This user needs the privileges required to publish events to Elasticsearch. Note that in an Elastic Cloud Serverless environment you need to use token-based (API key) authentication.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    api_key: "KnR6yE41RrSowb0kQ0HWoA"
api_key
- (string) Instead of using a username and password, you can use API keys to secure communication with Elasticsearch. The value must be the ID of the API key and the API key joined by a colon: `id:api_key`. Token-based authentication is required in an Elastic Cloud Serverless environment.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    ssl.certificate: "/etc/pki/client/cert.pem"
    ssl.key: "/etc/pki/client/cert.key"
For a list of available settings, refer to SSL/TLS, specifically the settings under Table 7, Common configuration options and Table 8, Client configuration options.
The following encryption types are supported:
- aes128-cts-hmac-sha1-96
- aes128-cts-hmac-sha256-128
- aes256-cts-hmac-sha1-96
- aes256-cts-hmac-sha384-192
- des3-cbc-sha1-kd
- rc4-hmac
Example output config with Kerberos password-based authentication:
outputs:
  default:
    type: elasticsearch
    hosts: ["http://my-elasticsearch.elastic.co:9200"]
    kerberos.auth_type: password
    kerberos.username: "elastic"
    kerberos.password: "changeme"
    kerberos.config_path: "/etc/krb5.conf"
    kerberos.realm: "ELASTIC.CO"
The service principal name for the Elasticsearch instance is constructed from these options. Based on this configuration, the name would be `HTTP/my-elasticsearch.elastic.co@ELASTIC.CO`.
kerberos.auth_type
- (string) The type of authentication to use with Kerberos KDC:

  password
  - When specified, also set `kerberos.username` and `kerberos.password`.

  keytab
  - When specified, also set `kerberos.username` and `kerberos.keytab`. The keytab must contain the keys of the selected principal, or authentication fails.

  Default: `password`
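For comparison with the password-based example above, a keytab-based configuration might look like this sketch; the keytab path is an illustrative placeholder:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["http://my-elasticsearch.elastic.co:9200"]
    kerberos.auth_type: keytab
    kerberos.username: "elastic"
    # The keytab must contain the keys of this principal,
    # or authentication fails.
    kerberos.keytab: "/etc/security/elastic.keytab"
    kerberos.config_path: "/etc/krb5.conf"
    kerberos.realm: "ELASTIC.CO"
```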
kerberos.config_path
- (string) Path to the `krb5.conf` file. Elastic Agent uses this setting to find the Kerberos KDC to retrieve a ticket.

kerberos.enabled
- (boolean) Enables or disables the Kerberos configuration. Note: Kerberos settings are disabled if either `enabled` is set to `false` or the `kerberos` section is missing.

kerberos.enable_krb5_fast
- (boolean) If `true`, enables Kerberos FAST authentication. This may conflict with some Active Directory installations. Default: `false`

kerberos.keytab
- (string) If `kerberos.auth_type` is `keytab`, provide the path to the keytab of the selected principal.

kerberos.password
- (string) If `kerberos.auth_type` is `password`, provide a password for the selected principal.

kerberos.realm
- (string) Name of the realm where the output resides.

kerberos.username
- (string) Name of the principal used to connect to the output.
allow_older_versions
- Allow Elastic Agent to connect and send output to an Elasticsearch instance that is running an earlier version than the agent version.

  Note that this setting does not affect Elastic Agent's ability to connect to Fleet Server. Fleet Server will not accept a connection from an agent at a later major or minor version, but it will accept a connection from an agent at a later patch version. For example, an Elastic Agent at version 8.14.3 can connect to a Fleet Server on version 8.14.0, but an agent at version 8.15.0 or later is not able to connect.

  Default: `true`
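For example, to make the agent fail instead of sending to an older Elasticsearch (a sketch; the host is a placeholder):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # Refuse to send output to an Elasticsearch instance
    # running an earlier version than the agent.
    allow_older_versions: false
```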
Settings used to parse, filter, and transform data.
escape_html
- (boolean) Configures escaping of HTML in strings. Set to `true` to enable escaping. Default: `false`
pipeline
- (string) A format string value that specifies the ingest pipeline to write events to.

  outputs:
    default:
      type: elasticsearch
      hosts: ["http://localhost:9200"]
      pipeline: my_pipeline_id

  You can set the ingest pipeline dynamically by using a format string to access any event field. For example, this configuration uses a custom field, `fields.log_type`, to set the pipeline for each event:

  outputs:
    default:
      type: elasticsearch
      hosts: ["http://localhost:9200"]
      pipeline: "%{[fields.log_type]}_pipeline"
  With this configuration, all events with `log_type: normal` are sent to a pipeline named `normal_pipeline`, and all events with `log_type: critical` are sent to a pipeline named `critical_pipeline`.

  Tip: To learn how to add custom fields to events, see the `fields` option.

  See the `pipelines` setting for other ways to set the ingest pipeline dynamically.

pipelines
- An array of pipeline selector rules. Each rule specifies the ingest pipeline to use for events that match the rule. During publishing, Elastic Agent uses the first matching rule in the array. Rules can contain conditionals, format string-based fields, and name mappings. If the `pipelines` setting is missing or no rule matches, the `pipeline` setting is used.

  Rule settings:

  pipeline
  - The pipeline format string to use. If this string contains field references, such as `%{[fields.name]}`, the fields must exist, or the rule fails.

  mappings
  - A dictionary that takes the value returned by `pipeline` and maps it to a new name.

  default
  - The default string value to use if `mappings` does not find a match.

  when
  - A condition that must succeed in order to execute the current rule. All the conditions supported by processors are also supported here.
The following example sends events to a specific pipeline based on whether the `message` field contains the specified string:

outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "warning_pipeline"
        when.contains:
          message: "WARN"
      - pipeline: "error_pipeline"
        when.contains:
          message: "ERR"
The following example sets the pipeline by taking the name returned by the `pipeline` format string and mapping it to a new name that's used for the pipeline:

outputs:
  default:
    type: elasticsearch
    hosts: ["http://localhost:9200"]
    pipelines:
      - pipeline: "%{[fields.log_type]}"
        mappings:
          critical: "sev1_pipeline"
          normal: "sev2_pipeline"
        default: "sev3_pipeline"

With this configuration, all events with `log_type: critical` are sent to `sev1_pipeline`, all events with `log_type: normal` are sent to `sev2_pipeline`, and all other events are sent to `sev3_pipeline`.
Settings that modify the HTTP requests sent to Elasticsearch.
headers
- Custom HTTP headers to add to each request created by the Elasticsearch output. Example:

  outputs:
    default:
      type: elasticsearch
      headers:
        X-My-Header: Header contents

  Specify multiple header values for the same header name by separating them with a comma.
parameters
- Dictionary of HTTP parameters to pass within the URL with index operations.
path
- (string) An HTTP path prefix that is prepended to the HTTP API calls. This is useful for the cases where Elasticsearch listens behind an HTTP reverse proxy that exports the API under a custom prefix.
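As a sketch, pointing the output at Elasticsearch behind a reverse proxy that serves the API under a prefix might look like this; the host and prefix are illustrative placeholders:

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://proxy.example.com:8443"]
    # Prepended to every API call, so a bulk request
    # would go to /elasticsearch/_bulk.
    path: /elasticsearch
```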
The memory queue keeps all events in memory.
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new events can be inserted into the memory queue. Only after the signal from the output will the queue free up space for more events to be accepted.
The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`. `flush.min_events` gives a limit on the number of events that can be included in a single batch, and `flush.timeout` specifies how long the queue should wait to completely fill an event request. If the output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of `bulk_max_size` and `flush.min_events`.
`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.
In synchronous mode, an event request is always filled as soon as events are available, even if there are not enough events to fill the requested batch. This is useful when latency must be minimized. To use synchronous mode, set `flush.timeout` to 0.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to 0 or 1. In this case, batch size will be capped at half the queue capacity.

In asynchronous mode, an event request will wait up to the specified timeout to try and fill the requested batch completely. If the timeout expires, the queue returns a partial batch with all available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example `5s`.
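A minimal sketch of a synchronous-mode queue configuration, following the description above:

```yaml
queue.mem.events: 4096
# 0 disables the flush wait: events are handed to the output as soon
# as they are available, minimizing latency at the cost of smaller batches.
queue.mem.flush.timeout: 0
```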
This sample configuration forwards events to the output when there are enough events to fill the output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by `flush.min_events`), or when events have been waiting for 5s without filling the requested size:

queue.mem.events: 4096
queue.mem.flush.min_events: 512
queue.mem.flush.timeout: 5s
queue.mem.events
- The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output. Default: `3200 events`
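To illustrate the divisibility guidance: with a batch size of 1600, a queue of 4800 events drains in exactly three full batches, while a queue of 4096 would leave a partial batch of 896 events (4096 − 2 × 1600). A sketch, assuming `bulk_max_size` is the smaller of the two limits:

```yaml
# 4800 / 1600 = 3 full batches per queue cycle; no partial batches.
queue.mem.events: 4800
```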
queue.mem.flush.min_events
- `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`. Default: `1600 events`
queue.mem.flush.timeout
- (int) The maximum wait time for `queue.mem.flush.min_events` to be fulfilled. If set to 0s, events are available to the output immediately. Default: `10s`
Settings that may affect performance when sending data through the Elasticsearch output.

Use the `preset` option to automatically configure the group of performance tuning settings to optimize for `throughput`, `scale`, or `latency`, or select a `balanced` set of performance specifications.

The performance tuning `preset` values take precedence over any settings that may be defined separately. If you want to change any setting, set `preset` to `custom` and specify the performance tuning settings individually.
backoff.init
- (string) The number of seconds to wait before trying to reconnect to Elasticsearch after a network error. After waiting `backoff.init` seconds, Elastic Agent tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to `backoff.max`. After a successful connection, the backoff timer is reset. Default: `1s`
backoff.max
- (string) The maximum number of seconds to wait before attempting to connect to Elasticsearch after a network error. Default: `60s`
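Putting the two backoff settings together, a sketch of the resulting retry behavior (the host is a placeholder):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # After a network error, reconnect waits grow exponentially:
    # 1s, 2s, 4s, 8s, ... capped at 60s, and reset to 1s after
    # a successful connection.
    backoff.init: 1s
    backoff.max: 60s
```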
bulk_max_size
- (int) The maximum number of events to bulk in a single Elasticsearch bulk API index request.

  Events can be collected into batches. Elastic Agent will split batches larger than `bulk_max_size` into multiple batches. Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.

  Setting `bulk_max_size` to a value less than or equal to 0 turns off the splitting of batches. When splitting is disabled, the queue decides on the number of events to be contained in a batch.

  Default: `1600`
compression_level
- (int) The gzip compression level. Set this value to `0` to disable compression. The compression level must be in the range of `1` (best speed) to `9` (best compression). Increasing the compression level reduces network usage but increases CPU usage. Default: `1`
max_retries
- (int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Set `max_retries` to a value less than 0 to retry until all events are published. Default: `3`
preset
- Configures the full group of performance tuning settings to optimize your Elastic Agent performance when sending data to an Elasticsearch output.

  Refer to Performance tuning settings for a table showing the group of values associated with each preset, and another table showing EPS (events per second) results from testing the different preset options.

  Performance tuning preset settings:

  balanced
  - Configure the default tuning setting values for "out-of-the-box" performance.

  throughput
  - Optimize the Elasticsearch output for throughput.

  scale
  - Optimize the Elasticsearch output for scale.

  latency
  - Optimize the Elasticsearch output to reduce latency.

  custom
  - Use the `custom` option to fine-tune the performance tuning settings individually.

  Default: `balanced`
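For example, to favor raw ingest rate over latency or fan-out, a sketch (the host is a placeholder):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://myEShost:9200"]
    # Applies the whole throughput-optimized group of tuning values.
    # Individual tuning settings are only honored when preset is custom.
    preset: throughput
```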
timeout
- (string) The HTTP request timeout in seconds for the Elasticsearch request. Default: `90s`
worker
- (int) The number of workers per configured host publishing events. Example: if you have two hosts and three workers, in total six workers are started (three for each host). Default: `1`
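To illustrate the worker arithmetic from the description above (the hosts are placeholders):

```yaml
outputs:
  default:
    type: elasticsearch
    hosts: ["https://es1.example.com:9200", "https://es2.example.com:9200"]
    worker: 3  # 2 hosts x 3 workers = 6 publishing workers in total
```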