Troubleshoot logs

Use this page to find possible solutions for errors you're encountering with your logs. This troubleshooting page is divided into the following sections:

  • Common onboarding issues
  • Mapping and pipeline issues

Common onboarding issues

This section provides possible solutions for errors you might encounter while onboarding your logs.

User does not have permissions to create API key

If you don’t have the required privileges to create an API key, you’ll see the following error message:

User does not have permissions to create API key.

Required cluster privileges are [`monitor`, `manage_own_api_key`] and
required index privileges are [`auto_configure`, `create_doc`] for
indices [`logs-*-*`, `metrics-*-*`], please add all required privileges
to the role of the authenticated user.

Solution

You need to either:

  • Have an administrator give you the monitor and manage_own_api_key cluster privileges and the auto_configure and create_doc indices privileges (see the role sketch after this list). Once you have these privileges, restart the onboarding flow.
  • Get an API key from an administrator and manually add the API key to the Elastic Agent configuration. See Configure the Elastic Agent for more on manually updating the configuration and adding the API key.
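
For illustration, the following sketch creates a role with the required privileges using the Elasticsearch security API. The role name logs-onboarding is a hypothetical placeholder; an administrator would create the role and assign it to your user:

POST _security/role/logs-onboarding
{
  "cluster": ["monitor", "manage_own_api_key"],
  "indices": [
    {
      "names": ["logs-*-*", "metrics-*-*"],
      "privileges": ["auto_configure", "create_doc"]
    }
  ]
}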

Failed to create API key

If you don’t have the privileges to create saved objects in Kibana, you’ll see the following error message:

Failed to create API key

Something went wrong: Unable to create observability-onboarding-state

Solution

You need an administrator to give you the Saved Objects Management Kibana privilege to generate the required observability-onboarding-state flow state. Once you have the necessary privileges, restart the onboarding flow.
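
As an illustration, an administrator could grant this privilege with the Kibana roles API. This is a sketch, assuming a hypothetical role name onboarding-saved-objects and your own Kibana URL and credentials:

curl -X PUT "https://your-kibana-url:5601/api/security/role/onboarding-saved-objects" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic \
  -d '
{
  "kibana": [
    {
      "feature": { "savedObjectsManagement": ["all"] },
      "spaces": ["*"]
    }
  ]
}'

The savedObjectsManagement feature ID corresponds to the Saved Objects Management privilege; the administrator would then assign the role to your user.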

Kibana not accessible from host

If Kibana is not accessible from the host, you’ll see the following error message after pasting the Install the Elastic Agent instructions into the host terminal:

Failed to connect to {host} port {port} after 0 ms: Connection refused

Solution

The host needs access to Kibana. Port 443 must be open, and the deployment’s Elasticsearch endpoint must be reachable. Locate your project’s endpoint from the Help menu → Connection details.

Run the following command, replacing the URL with your endpoint, and you should get an authentication error with more details on resolving your issue:

curl https://your-endpoint.elastic.cloud
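
If the endpoint is reachable, Elasticsearch typically responds with an authentication error similar to the following (the exact wording varies by version), which confirms that the connection itself works:

{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "missing authentication credentials for REST request [/]"
      }
    ],
    "type": "security_exception",
    "reason": "missing authentication credentials for REST request [/]"
  },
  "status": 401
}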

Download Elastic Agent failed

If the host was able to download the installation script but cannot connect to the public artifact repository, you’ll see the following error message:

Download Elastic Agent

Failed to download Elastic Agent, see script for error.

Solutions
  • If the combination of the Elastic Agent version and operating system architecture is not available, you’ll see the following error message:

    The requested URL returned error: 404

    To fix this, update the Elastic Agent version in the installation instructions to a published version of the Elastic Agent. You can check that the artifact exists with the command shown after this list.

  • If the Elastic Agent was fully downloaded previously, you’ll see the following error message:

    Error: cannot perform installation as Elastic Agent is already running from this directory

    To fix this, delete previous downloads and restart the onboarding.

  • You’re an Elastic Cloud Enterprise user without access to the Elastic downloads page.
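
To check that an artifact exists for your version and architecture before rerunning the script, you can request its headers from the public artifact repository. This is a sketch; 8.14.0 and linux-x86_64 are placeholders to replace with your own version and platform:

# HTTP 200 means the artifact exists; HTTP 404 means this
# version/architecture combination is not published.
curl -I "https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-8.14.0-linux-x86_64.tar.gz"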

Install Elastic Agent failed

If an Elastic Agent already exists on your host, you’ll see the following error message:

Install Elastic Agent

Failed to install Elastic Agent, see script for error.

Solution

You can uninstall the current Elastic Agent using the elastic-agent uninstall command, and run the script again.

Uninstalling the current Elastic Agent removes the entire current setup, including the existing configuration.
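
For example, on a Linux host with the default install location (the path differs on macOS and Windows):

sudo /opt/Elastic/Agent/elastic-agent uninstall

After the uninstall completes, run the installation script from the onboarding flow again.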

Waiting for Logs to be shipped… step never completes

If the Waiting for Logs to be shipped… step never completes, logs are not being shipped to Elasticsearch, and there is most likely an issue with your Elastic Agent configuration.

Solution

Inspect the Elastic Agent logs for errors. See the Debug standalone Elastic Agents documentation for more on finding errors in Elastic Agent logs.
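
A minimal sketch for checking a standalone agent on a Linux host with the default install location (paths and file names vary by platform and version):

# Check whether the agent and its components are healthy.
sudo elastic-agent status

# Search the agent's log files for errors.
sudo grep -i "error" /opt/Elastic/Agent/data/elastic-agent-*/logs/elastic-agent-*.ndjson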

Mapping and pipeline issues

This section provides possible solutions for mapping and pipeline issues you might encounter with your logs.

Keyword fields are too long

The keyword field limit is 32,766 bytes. When indexing a document, if your keyword field length exceeds this limit, you’ll see an error similar to the following:

max_bytes_length_exceeded_exception: bytes can be at most 32766 in length

Solution

Avoid this error using one of the following options:

Stop indexing the field: If you don’t need the keyword field for aggregation or search, set "index":false in the index template to stop indexing the field.
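
For example, a sketch of an index template mapping that disables indexing for a hypothetical long_field (the template name, index pattern, and field name are placeholders):

PUT _index_template/logs-custom-template
{
  "index_patterns": ["logs-custom-*"],
  "template": {
    "mappings": {
      "properties": {
        "long_field": {
          "type": "keyword",
          "index": false
        }
      }
    }
  }
}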

Convert the keyword field to a text field: To continue indexing the field while avoiding length limits, you can convert the keyword field to a text field.

Aggregations on this field would no longer be supported, but the contents would be searchable.

To convert the keyword field to a text field:

  1. Create a new index with the text field data type.
  2. Reindex from the _source field of the source index using the _reindex API, as in the sketch below.
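
A sketch of both steps, assuming a hypothetical source index my-index-000001 and a new index my-index-000002 with a text field named message:

PUT my-index-000002
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      }
    }
  }
}

POST _reindex
{
  "source": {
    "index": "my-index-000001"
  },
  "dest": {
    "index": "my-index-000002"
  }
}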

Date format mismatch

If the format of the date field in your document doesn’t match the format set in your index template, you’ll see an error similar to the following:

failed to parse field [date] of type [date] in document with id 'KGcZb3cBqhj6kAxank_x'.

Solution

Add the format of the mismatched date to your index template. You can specify multiple formats by separating them with ||; each format is tried in turn until one matches. For example:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "date": {
        "type":   "date",
        "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
      }
    }
  }
}

Refer to the date field type docs for more information.

Grok or dissect pattern mismatch

If the pattern in your grok or dissect processor doesn’t match the format of your document, you’ll see an error similar to the following:

Provided Grok patterns do not match field value...

Solution

Make sure your grok or dissect processor pattern matches your log document format.

You can build and debug grok patterns in Kibana using the Grok Debugger. Find the Grok Debugger by navigating to the Developer tools page using the navigation menu or the global search field.

From here, you can enter sample data representative of the log document you’re trying to ingest and the Grok pattern you want to apply to the data.

If you don’t see any Structured Data when you simulate the grok pattern, iterate on the pattern until you find the error.
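
You can also test a pattern outside Kibana with the simulate pipeline API. This sketch assumes a hypothetical log line and pattern; substitute your own document and grok expression:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log.level} %{GREEDYDATA:message}"]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "2025-01-01T12:00:00Z INFO Connection established"
      }
    }
  ]
}

If the pattern doesn’t match, the response contains the “Provided Grok patterns do not match field value” error instead of the parsed document.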