Using Kibana server logs
Kibana Logs is a great way to see what's going on in your application and to debug performance issues. Navigating through a large number of generated logs can be overwhelming; the following techniques can help you optimize the process.
Start by defining the problem area that you are interested in. For example, you might want to see how a particular Kibana plugin is performing, so there is no need to gather logs for all of Kibana. Or you might want to focus on a particular feature, such as requests from the Kibana server to the Elasticsearch server. Depending on your needs, you can configure Kibana to generate logs for a specific feature only.
```yaml
logging:
  appenders:
    file:
      type: file
      fileName: ./kibana.log
      layout:
        type: json

### gather all the Kibana logs into a file
logging.root:
  appenders: [file]
  level: all

### or gather a subset of the logs
logging.loggers:
  ### responses to an HTTP request
  - name: http.server.response
    level: debug
    appenders: [file]
  ### result of a query to the Elasticsearch server
  - name: elasticsearch.query
    level: debug
    appenders: [file]
  ### logs generated by my plugin
  - name: plugins.myPlugin
    level: debug
    appenders: [file]
```
Kibana's `file` appender is configured to produce logs in ECS JSON format. It's the only format that includes the meta information necessary for log correlation out-of-the-box.
The next step is to define what observability tools are available. For a better experience, set up an Observability integration provided by Elastic to debug your application with the APM UI. To debug something quickly without setting up additional tooling, you can work with the plain Kibana logs.
APM UI
Prerequisites: Kibana logs are configured to be in ECS JSON format to include tracing identifiers.
To debug Kibana with the APM UI, you must set up the APM infrastructure. You can find instructions for the setup process on the Observability integrations page.
Once you set up the APM infrastructure, you can enable the APM agent and put Kibana under load to collect APM events. To analyze the collected metrics and logs, use the APM UI as demonstrated in the docs.
Plain Kibana logs
Prerequisites: Kibana logs are configured to be in ECS JSON format to include tracing identifiers.
Open Kibana Logs and search for an operation you are interested in.
For example, suppose you want to investigate the response times for queries to the /api/telemetry/v2/clusters/_stats Kibana endpoint.
Open Kibana Logs and search for the HTTP server response for the endpoint. It looks similar to the following (some fields are omitted for brevity).
```json
{
  "message": "POST /api/telemetry/v2/clusters/_stats 200 1014ms - 43.2KB",
  "log": { "level": "DEBUG", "logger": "http.server.response" },
  "trace": { "id": "9b99131a6f66587971ef085ef97dfd07" },
  "transaction": { "id": "d0c5bbf14f5febca" }
}
```
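Because each record is plain ECS JSON, the identifiers can be pulled out with any JSON-aware tool. As a minimal sketch, here is how the entry above parses in Python:

```python
import json

# The HTTP response record shown above, as it appears on one line of kibana.log
line = (
    '{"message":"POST /api/telemetry/v2/clusters/_stats 200 1014ms - 43.2KB",'
    '"log":{"level":"DEBUG","logger":"http.server.response"},'
    '"trace":{"id":"9b99131a6f66587971ef085ef97dfd07"},'
    '"transaction":{"id":"d0c5bbf14f5febca"}}'
)

record = json.loads(line)

# The correlation identifiers live under the nested "trace" and "transaction" keys
trace_id = record["trace"]["id"]
transaction_id = record["transaction"]["id"]
```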
You are interested in the trace.id field, which is a unique identifier of a trace. The trace.id provides a way to group multiple events, like transactions, that belong together. You can search for "trace":{"id":"9b99131a6f66587971ef085ef97dfd07"} to get all the logs that belong to the same trace. This enables you to see how many Elasticsearch requests were triggered during the 9b99131a6f66587971ef085ef97dfd07 trace, what they looked like, which Elasticsearch endpoints were hit, and so on.
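The grouping step can also be scripted. The following is a minimal sketch, assuming the `file` appender above wrote one JSON record per line to `./kibana.log` (the file name and the sample trace id are taken from the examples in this page):

```python
import json
from collections import defaultdict

def group_by_trace(lines):
    """Group ECS JSON log lines by trace.id; lines without a trace are skipped."""
    traces = defaultdict(list)
    for line in lines:
        try:
            record = json.loads(line)
        except ValueError:
            continue  # skip non-JSON lines (e.g. partial writes)
        trace_id = record.get("trace", {}).get("id")
        if trace_id:
            traces[trace_id].append(record)
    return traces

# Usage sketch: list every logger and message that took part in one trace
# with open("kibana.log") as f:
#     traces = group_by_trace(f)
# for rec in traces["9b99131a6f66587971ef085ef97dfd07"]:
#     print(rec["log"]["logger"], rec["message"])
```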