We are adding more tips and best practices, so please check back soon. If you have something to add, please:
- create an issue at https://github.com/elastic/logstash/issues, or
- create a pull request with your proposed changes at https://github.com/elastic/logstash.
Also check out the Logstash discussion forum.
Command line examples often show single quotes. On Windows systems, replace each single quote (') with a double quote (").
Example
Instead of:
bin/logstash -e 'input { stdin { } } output { stdout {} }'
Use this format on Windows systems:
bin\logstash -e "input { stdin { } } output { stdout {} }"
You can manage pipelines in a Logstash instance using either local pipeline configurations or centralized pipeline management in Kibana.
After you configure Logstash to use centralized pipeline management, you can no longer specify local pipeline configurations. The pipelines.yml file and settings such as path.config and config.string are inactive when centralized pipeline management is enabled.
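For example, a local pipelines.yml entry such as the following sketch is ignored once centralized pipeline management is enabled (the pipeline ID and config path are hypothetical placeholders):
- pipeline.id: my-local-pipeline                 # hypothetical pipeline ID
  path.config: "/etc/logstash/conf.d/*.conf"     # hypothetical path; inactive under centralized management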
"How many partitions should I use per topic?"
At least the number of Logstash nodes multiplied by consumer threads per node.
Better yet, use a multiple of that number. Because increasing the number of partitions for an existing topic is extremely complicated, and partitions have very low overhead, using 5 to 10 times the number of partitions suggested by the first point is generally fine, as long as the overall partition count does not exceed 2000.
Err on the side of over-partitioning, but try not to exceed 1000 partitions in total.
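As a rough worked example, assume a hypothetical deployment of 4 Logstash nodes, each running 4 consumer threads: the minimum is 4 × 4 = 16 partitions, and applying the 5 to 10 multiplier suggests roughly 80 to 160 partitions for the topic, well within the limits described here.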
"How many consumer threads should I configure?"
Lower values tend to be more efficient and have less memory overhead. Try a value of 1, then iterate your way up. The value should generally be lower than the number of pipeline workers. Values larger than 4 rarely result in a performance improvement.
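The kafka input sketch below shows this starting point; the broker address, topic name, and group ID are placeholders, not recommendations:
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker address
    topics => ["example-topic"]              # placeholder topic name
    group_id => "logstash"                   # consumers in the same group share the topic's partitions
    consumer_threads => 1                    # start at 1 and increase gradually; keep below pipeline.workers
  }
}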
"Does Kafka Input commit offsets only after the event has been safely persisted to the PQ?"
"Does Kafa Input commit offsets only for events that have passed the pipeline fully?"
No, we can’t make that guarantee. Offsets are committed to Kafka periodically. If writes to the PQ are slow or blocked, offsets for events that haven’t safely reached the PQ can be committed.
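For reference, the persistent queue itself is enabled in logstash.yml, while the Kafka consumer commits offsets on its own periodic schedule, independent of PQ writes. A minimal sketch (the queue size is an arbitrary example value):
queue.type: persisted   # enable the persistent queue (PQ)
queue.max_bytes: 1gb    # example capacity only; offset commits are not tied to PQ writes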