Node Stats API
The node stats API retrieves runtime stats about Logstash.
curl -XGET 'localhost:9600/_node/stats/<types>'
Where <types> is optional and specifies the types of stats you want to return. By default, all stats are returned. You can limit the info that's returned by combining any of the following types in a comma-separated list:
- jvm: Gets JVM stats, including stats about threads, memory usage, garbage collectors, and uptime.
- process: Gets process stats, including stats about file descriptors, memory consumption, and CPU usage.
- events: Gets event-related statistics for the Logstash instance (regardless of how many pipelines were created and destroyed).
- pipelines: Gets runtime stats about each Logstash pipeline.
- reloads: Gets runtime stats about config reload successes and failures.
- os: Gets runtime stats about cgroups when Logstash is running in a container.
- geoip_download_manager: Gets stats for databases used with the Geoip filter plugin.
See Common Options for a list of options that can be applied to all Logstash monitoring APIs.
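For example, requesting only the jvm and process types keeps the response small. Below is a minimal Python sketch of how such a request URL can be assembled; the helper name node_stats_url is our own illustration, not part of Logstash:

```python
def node_stats_url(host="localhost", port=9600, types=None):
    """Build a node stats URL; `types` is an optional list of stat types
    (e.g. ["jvm", "process"]) joined into a comma-separated list."""
    base = f"http://{host}:{port}/_node/stats"
    return base if not types else f"{base}/{','.join(types)}"

# Equivalent to: curl -XGET 'localhost:9600/_node/stats/jvm,process'
print(node_stats_url(types=["jvm", "process"]))
```

The resulting URL can be fetched with any HTTP client; omitting types returns all stats, as described above.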
JVM stats
The following request returns a JSON document containing JVM stats:
curl -XGET 'localhost:9600/_node/stats/jvm?pretty'
Example response:
{
  "jvm" : {
    "threads" : {
      "count" : 49,
      "peak_count" : 50
    },
    "mem" : {
      "heap_used_percent" : 14,
      "heap_committed_in_bytes" : 309866496,
      "heap_max_in_bytes" : 1037959168,
      "heap_used_in_bytes" : 151686096,
      "non_heap_used_in_bytes" : 122486176,
      "non_heap_committed_in_bytes" : 133222400,
      "pools" : {
        "survivor" : {
          "peak_used_in_bytes" : 8912896,
          "used_in_bytes" : 288776,
          "peak_max_in_bytes" : 35782656,
          "max_in_bytes" : 35782656,
          "committed_in_bytes" : 8912896
        },
        "old" : {
          "peak_used_in_bytes" : 148656848,
          "used_in_bytes" : 148656848,
          "peak_max_in_bytes" : 715849728,
          "max_in_bytes" : 715849728,
          "committed_in_bytes" : 229322752
        },
        "young" : {
          "peak_used_in_bytes" : 71630848,
          "used_in_bytes" : 2740472,
          "peak_max_in_bytes" : 286326784,
          "max_in_bytes" : 286326784,
          "committed_in_bytes" : 71630848
        }
      }
    },
    "gc" : {
      "collectors" : {
        "old" : {
          "collection_time_in_millis" : 607,
          "collection_count" : 12
        },
        "young" : {
          "collection_time_in_millis" : 4904,
          "collection_count" : 1033
        }
      }
    },
    "uptime_in_millis" : 1809643
  }
}
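The reported heap_used_percent appears to follow from the other heap fields; a small sketch cross-checking it against the numbers in the example response:

```python
# Values taken from the example JVM stats response above
jvm_mem = {
    "heap_used_in_bytes": 151686096,
    "heap_max_in_bytes": 1037959168,
    "heap_used_percent": 14,
}

# heap_used_in_bytes expressed as a whole percentage of heap_max_in_bytes
percent = 100 * jvm_mem["heap_used_in_bytes"] // jvm_mem["heap_max_in_bytes"]
print(percent)  # 14, matching the reported heap_used_percent
```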
Process stats
The following request returns a JSON document containing process stats:
curl -XGET 'localhost:9600/_node/stats/process?pretty'
Example response:
{
  "process" : {
    "open_file_descriptors" : 184,
    "peak_open_file_descriptors" : 185,
    "max_file_descriptors" : 10240,
    "mem" : {
      "total_virtual_in_bytes" : 5486125056
    },
    "cpu" : {
      "total_in_millis" : 657136,
      "percent" : 2,
      "load_average" : {
        "1m" : 2.38134765625
      }
    }
  }
}
Event stats
The following request returns a JSON document containing event-related statistics for the Logstash instance:
curl -XGET 'localhost:9600/_node/stats/events?pretty'
Example response:
{
  "events" : {
    "in" : 293658,
    "filtered" : 293658,
    "out" : 293658,
    "duration_in_millis" : 2324391,
    "queue_push_duration_in_millis" : 343816
  }
}
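These counters can be combined into simple derived metrics, such as the mean processing time per emitted event. A sketch using the sample numbers above (note that duration_in_millis accumulates across pipeline workers, so this is a rough throughput indicator rather than exact wall-clock latency):

```python
# Values taken from the example event stats response above
events = {
    "out": 293658,
    "duration_in_millis": 2324391,
    "queue_push_duration_in_millis": 343816,
}

# Mean accumulated processing time per emitted event, in milliseconds
avg_ms = events["duration_in_millis"] / events["out"]
print(f"{avg_ms:.2f} ms/event")
```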
Pipeline stats
The following request returns a JSON document containing pipeline stats, including:
- the number of events that were input, filtered, or output by each pipeline
- stats for each configured filter or output stage
- info about config reload successes and failures (when config reload is enabled)
- info about the persistent queue (when persistent queues are enabled)
curl -XGET 'localhost:9600/_node/stats/pipelines?pretty'
Example response:
{
  "pipelines" : {
    "test" : {
      "events" : {
        "duration_in_millis" : 365495,
        "in" : 216485,
        "filtered" : 216485,
        "out" : 216485,
        "queue_push_duration_in_millis" : 342466
      },
      "plugins" : {
        "inputs" : [ {
          "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-1",
          "events" : {
            "out" : 216485,
            "queue_push_duration_in_millis" : 342466
          },
          "name" : "beats"
        } ],
        "filters" : [ {
          "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-2",
          "events" : {
            "duration_in_millis" : 55969,
            "in" : 216485,
            "out" : 216485
          },
          "failures" : 216485,
          "patterns_per_field" : {
            "message" : 1
          },
          "name" : "grok"
        }, {
          "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-3",
          "events" : {
            "duration_in_millis" : 3326,
            "in" : 216485,
            "out" : 216485
          },
          "name" : "geoip"
        } ],
        "outputs" : [ {
          "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-4",
          "events" : {
            "duration_in_millis" : 278557,
            "in" : 216485,
            "out" : 216485
          },
          "name" : "elasticsearch"
        } ]
      },
      "reloads" : {
        "last_error" : null,
        "successes" : 0,
        "last_success_timestamp" : null,
        "last_failure_timestamp" : null,
        "failures" : 0
      },
      "queue" : {
        "type" : "memory"
      }
    },
    "test2" : {
      "events" : {
        "duration_in_millis" : 2222229,
        "in" : 87247,
        "filtered" : 87247,
        "out" : 87247,
        "queue_push_duration_in_millis" : 1532
      },
      "plugins" : {
        "inputs" : [ {
          "id" : "d7ea8941c0fc48ac58f89c84a9da482107472b82-1",
          "events" : {
            "out" : 87247,
            "queue_push_duration_in_millis" : 1532
          },
          "name" : "twitter"
        } ],
        "filters" : [ ],
        "outputs" : [ {
          "id" : "d7ea8941c0fc48ac58f89c84a9da482107472b82-2",
          "events" : {
            "duration_in_millis" : 139545,
            "in" : 87247,
            "out" : 87247
          },
          "name" : "elasticsearch"
        } ]
      },
      "reloads" : {
        "last_error" : null,
        "successes" : 0,
        "last_success_timestamp" : null,
        "last_failure_timestamp" : null,
        "failures" : 0
      },
      "queue" : {
        "type" : "memory"
      }
    }
  }
}
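The per-plugin duration_in_millis fields make it easy to spot where a pipeline spends its time. A hedged sketch (the function name slowest_plugins is our own) that ranks filter and output plugins from a response shaped like the one above:

```python
def slowest_plugins(pipeline_stats):
    """Return (name, duration_in_millis) pairs for filters and outputs,
    slowest first. Inputs are skipped: they report queue push time, not duration."""
    plugins = pipeline_stats["plugins"]
    timed = [
        (p["name"], p["events"]["duration_in_millis"])
        for section in ("filters", "outputs")
        for p in plugins[section]
    ]
    return sorted(timed, key=lambda pair: pair[1], reverse=True)

# Trimmed from the "test" pipeline in the example response
sample = {
    "plugins": {
        "filters": [
            {"name": "grok", "events": {"duration_in_millis": 55969}},
            {"name": "geoip", "events": {"duration_in_millis": 3326}},
        ],
        "outputs": [
            {"name": "elasticsearch", "events": {"duration_in_millis": 278557}},
        ],
    }
}
print(slowest_plugins(sample))
```

In this sample the elasticsearch output dominates, followed by the grok filter, which matches what a manual reading of the example response suggests.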
You can see the stats for a specific pipeline by including the pipeline ID. In the following example, the ID of the pipeline is test:
curl -XGET 'localhost:9600/_node/stats/pipelines/test?pretty'
Example response:
{
  "test" : {
    "events" : {
      "duration_in_millis" : 365495,
      "in" : 216485,
      "filtered" : 216485,
      "out" : 216485,
      "queue_push_duration_in_millis" : 342466
    },
    "plugins" : {
      "inputs" : [ {
        "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-1",
        "events" : {
          "out" : 216485,
          "queue_push_duration_in_millis" : 342466
        },
        "name" : "beats"
      } ],
      "filters" : [ {
        "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-2",
        "events" : {
          "duration_in_millis" : 55969,
          "in" : 216485,
          "out" : 216485
        },
        "failures" : 216485,
        "patterns_per_field" : {
          "message" : 1
        },
        "name" : "grok"
      }, {
        "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-3",
        "events" : {
          "duration_in_millis" : 3326,
          "in" : 216485,
          "out" : 216485
        },
        "name" : "geoip"
      } ],
      "outputs" : [ {
        "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-4",
        "events" : {
          "duration_in_millis" : 278557,
          "in" : 216485,
          "out" : 216485
        },
        "name" : "elasticsearch"
      } ]
    },
    "reloads" : {
      "last_error" : null,
      "successes" : 0,
      "last_success_timestamp" : null,
      "last_failure_timestamp" : null,
      "failures" : 0
    },
    "queue" : {
      "type" : "memory"
    }
  }
}
Reload stats
The following request returns a JSON document that shows info about config reload successes and failures.
curl -XGET 'localhost:9600/_node/stats/reloads?pretty'
Example response:
{
  "reloads" : {
    "successes" : 0,
    "failures" : 0
  }
}
OS stats
When Logstash is running in a container, the following request returns a JSON document that contains cgroup information to give you a more accurate view of CPU load, including whether the container is being throttled.
curl -XGET 'localhost:9600/_node/stats/os?pretty'
Example response:
{
  "os" : {
    "cgroup" : {
      "cpuacct" : {
        "control_group" : "/elastic1",
        "usage_nanos" : 378477588075
      },
      "cpu" : {
        "control_group" : "/elastic1",
        "cfs_period_micros" : 1000000,
        "cfs_quota_micros" : 800000,
        "stat" : {
          "number_of_elapsed_periods" : 4157,
          "number_of_times_throttled" : 460,
          "time_throttled_nanos" : 581617440755
        }
      }
    }
  }
}
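One useful reading of these cgroup numbers is how often the container hit its CPU quota. A sketch using the sample values above:

```python
# Values taken from the example os/cgroup stats response above
stat = {
    "number_of_elapsed_periods": 4157,
    "number_of_times_throttled": 460,
}

# Fraction of CFS scheduling periods in which the container was throttled
throttled_ratio = stat["number_of_times_throttled"] / stat["number_of_elapsed_periods"]
print(f"throttled in {throttled_ratio:.1%} of periods")
```

A persistently high ratio suggests the container's CPU quota (cfs_quota_micros relative to cfs_period_micros) is constraining Logstash.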
Geoip database stats
You can monitor stats for the geoip databases used with the Geoip filter plugin.
curl -XGET 'localhost:9600/_node/stats/geoip_download_manager?pretty'
For more info, see Database Metrics in the Geoip filter plugin docs.