Logstash output

The Logstash output uses an internal protocol to send events directly to Logstash over TCP. Logstash provides additional parsing, transformation, and routing of data collected by Elastic Agent.
Compatibility: This output works with all compatible versions of Logstash. Refer to the Elastic Support Matrix.
This example configures a Logstash output called default in the elastic-agent.yml file:
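A minimal sketch of such an output section, assuming a single local Logstash endpoint listening on the default port 5044 (adjust the host address for your environment):

outputs:
  default:
    type: logstash
    hosts: ["127.0.0.1:5044"]  # assumed Logstash host and port for this sketch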
The hosts setting points to the Logstash server and the port (5044 in this example) where Logstash is configured to listen for incoming Elastic Agent connections; both must be reachable from Elastic Agent.
To receive the events in Logstash, you also need to create a Logstash configuration pipeline. The Logstash configuration pipeline listens for incoming Elastic Agent connections, processes received events, and then sends the events to Elasticsearch.
The following example configures a Logstash pipeline that listens on port 5044 for incoming Elastic Agent connections and routes received events to Elasticsearch:
input {
  elastic_agent {
    port => 5044
    enrich => none # don't modify the events' schema at all
    # or minimal change, add only ssl and source metadata
    # enrich => [ssl_peer_metadata, source_metadata]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    data_stream => "true"
  }
}
For more information about configuring Logstash, refer to Configuring Logstash and Elastic Agent input plugin.
Logstash output configuration settings

The logstash output supports the following settings, grouped by category. Many of these settings have sensible defaults that allow you to run Elastic Agent with minimal configuration.
Commonly used settings

enabled
(boolean) Enables or disables the output. If set to false, the output is disabled.

escape_html
(boolean) Configures escaping of HTML in strings. Set to true to enable escaping.
Default: false

hosts
(list) The list of known Logstash servers to connect to. If load balancing is disabled, but multiple hosts are configured, one host is selected randomly (there is no precedence). If one host becomes unreachable, another one is selected randomly. All entries in this list can contain a port number. If no port is specified, the default port 5044 is used.

proxy_url
(string) The URL of the SOCKS5 proxy to use when connecting to the Logstash servers. The value must be a URL with a scheme of socks5://. If the SOCKS5 proxy server requires client authentication, embed a username and password in the URL as shown in the example.
When using a proxy, hostnames are resolved on the proxy server instead of on the client. To change this behavior, set proxy_use_local_resolver.

outputs:
  default:
    type: logstash
    hosts: ["remote-host:5044"]
    proxy_url: socks5://user:password@socks5-proxy:2233

proxy_use_local_resolver
(boolean) Determines whether Logstash hostnames are resolved locally when using a proxy. If false and a proxy is used, name resolution occurs on the proxy server.
Default: false
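As an illustration, these commonly used settings might be combined in elastic-agent.yml along these lines; the host names and values below are placeholder assumptions, not recommendations:

outputs:
  default:
    type: logstash
    enabled: true                              # set to false to disable this output
    escape_html: false                         # leave HTML escaping off unless you need it
    hosts: ["logstash-a:5044", "logstash-b"]   # an entry without a port falls back to 5044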
Authentication settings

When sending data to a secured cluster through the logstash output, Elastic Agent can use SSL/TLS. For a list of available settings, refer to SSL/TLS, specifically the settings under Table 1, “Common configuration options” and Table 2, “Client configuration options”.
To use SSL/TLS, you must also configure the Elastic Agent input plugin for Logstash to use SSL/TLS. For more information, refer to Configure SSL/TLS for the Logstash output.
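As a rough sketch only (the authoritative option list is on the SSL/TLS page referenced above), a mutually authenticated TLS connection to Logstash could look like the following; all host names and file paths are placeholders:

outputs:
  default:
    type: logstash
    hosts: ["logstash-host:5044"]
    ssl.certificate_authorities: ["/path/to/ca.crt"]  # CA certificate used to verify the Logstash server
    ssl.certificate: "/path/to/client.crt"            # client certificate presented to Logstash
    ssl.key: "/path/to/client.key"                    # private key for the client certificate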
Performance tuning settings

Settings that may affect performance.

backoff.init
(string) The number of seconds to wait before trying to reconnect to Logstash after a network error. After waiting backoff.init seconds, Elastic Agent tries to reconnect. If the attempt fails, the backoff timer is increased exponentially up to backoff.max. After a successful connection, the backoff timer is reset.
Default: 1s

backoff.max
(string) The maximum number of seconds to wait before attempting to connect to Logstash after a network error.
Default: 60s

bulk_max_size
(int) The maximum number of events to bulk in a single Logstash request. Events can be collected into batches. Elastic Agent will split batches larger than bulk_max_size into multiple batches.
Specifying a larger batch size can improve performance by lowering the overhead of sending events. However, big batch sizes can also increase processing times, which might result in API errors, killed connections, timed-out publishing requests, and, ultimately, lower throughput.
Set this value to 0 to turn off batch splitting. When splitting is turned off, the queue determines the number of events contained in a batch.
Default: 2048

compression_level
(int) The gzip compression level. Set this value to 0 to disable compression. The compression level must be in the range of 1 (best speed) to 9 (best compression).
Increasing the compression level reduces network usage but increases CPU usage.
Default: 3

loadbalance
If true and multiple Logstash hosts are configured, the output load balances published events onto all Logstash hosts. If false, the output sends all events to only one host (determined at random) and switches to another host if the selected one becomes unresponsive.
Default: false
Example:
outputs:
  default:
    type: logstash
    hosts: ["localhost:5044", "localhost:5045"]
    loadbalance: true

max_retries
(int) The number of times to retry publishing an event after a publishing failure. After the specified number of retries, the events are typically dropped. Set max_retries to a value less than 0 to retry until all events are published.
Default: 3

pipelining
(int) The number of batches to send asynchronously to Logstash while waiting for an ACK from Logstash. The output becomes blocking after the specified number of batches are written. Specify 0 to disable pipelining.
Default: 2

slow_start
(boolean) If true, only a subset of events in a batch is transferred per transaction. The number of events to send increases up to bulk_max_size if no error is encountered. On error, the number of events per transaction is reduced again.
Default: false

timeout
(string) The number of seconds to wait for responses from the Logstash server before timing out.
Default: 30s

ttl
(string) Time to live for a connection to Logstash after which the connection will be reestablished. This setting is useful when Logstash hosts represent load balancers. Because connections to Logstash hosts are sticky, operating behind load balancers can lead to uneven load distribution across instances. Specify a TTL on the connection to achieve equal connection distribution across instances.
Default: 0 (turns off the feature)
The ttl option is not yet supported on an asynchronous Logstash client (one with the pipelining option set).

worker
(int) The number of workers per configured host publishing events. This is best used with load balancing mode enabled. Example: If you have two hosts and three workers, in total six workers are started (three for each host).
Default: 1
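To illustrate how a few of these settings might be combined when tuning throughput against two Logstash hosts, a sketch follows; the host names and values are illustrative assumptions, not recommendations:

outputs:
  default:
    type: logstash
    hosts: ["logstash-a:5044", "logstash-b:5044"]
    loadbalance: true        # spread events across both hosts
    worker: 2                # 2 workers per host, 4 publishing workers in total
    bulk_max_size: 4096      # larger batches lower per-event overhead
    pipelining: 2            # keep up to 2 batches in flight awaiting ACKs
    compression_level: 3     # trade CPU for reduced network usage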