Log rotation

As log files are constantly written, they must be rotated and purged to prevent the logging application from filling up the disk. Because rotation is performed by an external application, Filebeat needs to know how to cooperate with it.

When reading from rotating files, make sure the paths configuration includes both the active file and all rotated files.
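
For example, if the active file is apache.log and the rotated files are apache.log.1, apache.log.2, and so on (the file names here are illustrative), a glob pattern can cover both:

filebeat.inputs:
- type: filestream
  ...
  paths:
    - /var/log/apache.log*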

By default, Filebeat is able to track files correctly with the following rotation strategies:

  • create: a new active file with a unique name is created on rotation
  • rename: rotated files are renamed

However, if the copytruncate strategy is used, you must provide additional configuration to Filebeat.

rotation.external.strategy.copytruncate

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

If the log rotating application copies the contents of the active file and then truncates the original file, use these options to help Filebeat read the rotated files correctly.

Set the option suffix_regex so Filebeat can tell active and rotated files apart. There are two supported suffix types in the input: numeric and date.

Numeric suffix

If your rotated files have an incrementing index appended to the end of the filename, e.g. the active file is apache.log and the rotated files are named apache.log.1, apache.log.2, and so on, use the following configuration:

rotation.external.strategy.copytruncate:
  suffix_regex: \.\d$

Date suffix

If the rotation date is appended to the end of the filename, e.g. the active file is apache.log and the rotated files are named apache.log-20210526, apache.log-20210527, and so on, use the following configuration:

rotation.external.strategy.copytruncate:
  suffix_regex: \-\d{8}$
  dateformat: -20060102

encoding

The file encoding to use for reading data that contains international characters. See the encoding names recommended by the W3C for use in HTML5.

Valid encodings:

  • plain: plain ASCII encoding
  • utf-8 or utf8: UTF-8 encoding
  • gbk: simplified Chinese characters
  • iso8859-6e: ISO8859-6E, Latin/Arabic
  • iso8859-6i: ISO8859-6I, Latin/Arabic
  • iso8859-8e: ISO8859-8E, Latin/Hebrew
  • iso8859-8i: ISO8859-8I, Latin/Hebrew
  • iso8859-1: ISO8859-1, Latin-1
  • iso8859-2: ISO8859-2, Latin-2
  • iso8859-3: ISO8859-3, Latin-3
  • iso8859-4: ISO8859-4, Latin-4
  • iso8859-5: ISO8859-5, Latin/Cyrillic
  • iso8859-6: ISO8859-6, Latin/Arabic
  • iso8859-7: ISO8859-7, Latin/Greek
  • iso8859-8: ISO8859-8, Latin/Hebrew
  • iso8859-9: ISO8859-9, Latin-5
  • iso8859-10: ISO8859-10, Latin-6
  • iso8859-13: ISO8859-13, Latin-7
  • iso8859-14: ISO8859-14, Latin-8
  • iso8859-15: ISO8859-15, Latin-9
  • iso8859-16: ISO8859-16, Latin-10
  • cp437: IBM CodePage 437
  • cp850: IBM CodePage 850
  • cp852: IBM CodePage 852
  • cp855: IBM CodePage 855
  • cp858: IBM CodePage 858
  • cp860: IBM CodePage 860
  • cp862: IBM CodePage 862
  • cp863: IBM CodePage 863
  • cp865: IBM CodePage 865
  • cp866: IBM CodePage 866
  • ebcdic-037: IBM CodePage 037
  • ebcdic-1140: IBM CodePage 1140
  • ebcdic-1047: IBM CodePage 1047
  • koi8r: KOI8-R, Russian (Cyrillic)
  • koi8u: KOI8-U, Ukrainian (Cyrillic)
  • macintosh: Macintosh encoding
  • macintosh-cyrillic: Macintosh Cyrillic encoding
  • windows1250: Windows1250, Central and Eastern European
  • windows1251: Windows1251, Russian, Serbian (Cyrillic)
  • windows1252: Windows1252, Legacy
  • windows1253: Windows1253, Modern Greek
  • windows1254: Windows1254, Turkish
  • windows1255: Windows1255, Hebrew
  • windows1256: Windows1256, Arabic
  • windows1257: Windows1257, Estonian, Latvian, Lithuanian
  • windows1258: Windows1258, Vietnamese
  • windows874: Windows874, ISO/IEC 8859-11, Latin/Thai
  • utf-16-bom: UTF-16 with required BOM
  • utf-16be-bom: big endian UTF-16 with required BOM
  • utf-16le-bom: little endian UTF-16 with required BOM

The plain encoding is special, because it does not validate or transform any input.
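
For example, to read a log file written in simplified Chinese:

filebeat.inputs:
- type: filestream
  ...
  encoding: gbk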

exclude_lines

A list of regular expressions to match the lines that you want Filebeat to exclude. Filebeat drops any lines that match a regular expression in the list. By default, no lines are dropped. Empty lines are ignored.

The following example configures Filebeat to drop any lines that start with DBG:

filebeat.inputs:
- type: filestream
  ...
  exclude_lines: ['^DBG']

See Regular expression support for a list of supported regexp patterns.

include_lines

A list of regular expressions to match the lines that you want Filebeat to include. Filebeat exports only the lines that match a regular expression in the list. By default, all lines are exported. Empty lines are ignored.

The following example configures Filebeat to export any lines that start with ERR or WARN:

filebeat.inputs:
- type: filestream
  ...
  include_lines: ['^ERR', '^WARN']

If both include_lines and exclude_lines are defined, Filebeat executes include_lines first and then exclude_lines. The order in which the two options appear in the config file doesn't matter: include_lines is always evaluated before exclude_lines, even if exclude_lines comes first.

The following example exports all log lines that contain sometext, except for lines that begin with DBG (debug messages):

filebeat.inputs:
- type: filestream
  ...
  include_lines: ['sometext']
  exclude_lines: ['^DBG']

See Regular expression support for a list of supported regexp patterns.

buffer_size

The size in bytes of the buffer that each harvester uses when fetching a file. The default is 16384.
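
For example, to double the buffer to 32KiB:

filebeat.inputs:
- type: filestream
  ...
  buffer_size: 32768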

message_max_bytes

The maximum number of bytes that a single log message can have. All bytes after message_max_bytes are discarded and not sent. The default is 10MB (10485760).
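
For example, to cap messages at 1MiB:

filebeat.inputs:
- type: filestream
  ...
  message_max_bytes: 1048576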

parsers

This option expects a list of parsers that the log line has to go through.

Available parsers:

  • multiline
  • ndjson
  • container

In this example, Filebeat is reading multiline messages that consist of 3 lines and are encapsulated in single-line JSON objects. The multiline message is stored under the key msg.

filebeat.inputs:
- type: filestream
  ...
  parsers:
    - ndjson:
      keys_under_root: true
      message_key: msg
    - multiline:
      type: count
      count_lines: 3

See the available parser settings in detail below.

multiline

Options that control how Filebeat deals with log messages that span multiple lines. See Multiline messages for more information about configuring multiline options.
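
For example, the following sketch configures a pattern-based multiline parser that appends lines starting with whitespace (such as stack trace continuation lines) to the preceding line. See Multiline messages for the full option reference:

filebeat.inputs:
- type: filestream
  ...
  parsers:
    - multiline:
      type: pattern
      pattern: '^\s'
      negate: false
      match: after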

ndjson

These options make it possible for Filebeat to decode logs structured as JSON messages. Filebeat processes the logs line by line, so the JSON decoding only works if there is one JSON object per message.

The decoding happens before line filtering. You can combine JSON decoding with filtering if you set the message_key option. This can be helpful in situations where the application logs are wrapped in JSON objects, like when using Docker.

Example configuration:

- ndjson:
  keys_under_root: true
  add_error_key: true
  message_key: log

keys_under_root
By default, the decoded JSON is placed under a "json" key in the output document. If you enable this setting, the keys are copied to the top level of the output document. The default is false.
overwrite_keys
If keys_under_root and this setting are enabled, the values from the decoded JSON object overwrite the fields that Filebeat normally adds (type, source, offset, etc.) in case of conflicts.
expand_keys
If this setting is enabled, Filebeat will recursively de-dot keys in the decoded JSON and expand them into a hierarchical object structure. For example, {"a.b.c": 123} would be expanded into {"a":{"b":{"c":123}}}. This setting should be enabled when the input is produced by an ECS logger.
add_error_key
If this setting is enabled, Filebeat adds "error.message" and "error.type: json" keys in case of JSON unmarshalling errors or when a message_key is defined in the configuration but cannot be used.
message_key
An optional configuration setting that specifies a JSON key on which to apply the line filtering and multiline settings. If specified, the key must be at the top level in the JSON object and the value associated with the key must be a string, otherwise no filtering or multiline aggregation will occur. See the example after this list.
document_id
An optional configuration setting that specifies the JSON key to set the document ID. If configured, the field is removed from the original JSON document and stored in @metadata._id.
ignore_decoding_error
An optional configuration setting that specifies whether JSON decoding errors should be logged or not. If set to true, errors will not be logged. The default is false.

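Because decoding happens before line filtering, setting message_key lets the line filters operate on a field inside the decoded JSON. For example (the field name and pattern are illustrative), the following drops decoded events whose log field starts with DBG:

filebeat.inputs:
- type: filestream
  ...
  parsers:
    - ndjson:
      message_key: log
  exclude_lines: ['^DBG']
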
container

Use the container parser to extract information from container log files. It parses lines into common message lines and extracts timestamps as well.

stream
Reads from the specified streams only: all, stdout or stderr. The default is all.
format
Use the given format when parsing logs: auto, docker or cri. The default is auto, which automatically detects the format. To disable autodetection, set one of the other options.

The following snippet configures Filebeat to read the stdout stream from all containers under the default Kubernetes logs path:

  paths:
    - "/var/log/containers/*.log"
  parsers:
    - container:
      stream: stdout

Common options

The following configuration options are supported by all inputs.

enabled

Use the enabled option to enable and disable inputs. By default, enabled is set to true.
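
For example, to keep an input configured but temporarily switch it off:

filebeat.inputs:
- type: filestream
  ...
  enabled: false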

tags

A list of tags that Filebeat includes in the tags field of each published event. Tags make it easy to select specific events in Kibana or apply conditional filtering in Logstash. These tags will be appended to the list of tags specified in the general configuration.

Example:

filebeat.inputs:
- type: filestream
  ...
  tags: ["json"]

fields

Optional fields that you can specify to add additional information to the output. For example, you might add fields that you can use for filtering log data. Fields can be scalar values, arrays, dictionaries, or any nested combination of these. By default, the fields that you specify here will be grouped under a fields sub-dictionary in the output document. To store the custom fields as top-level fields, set the fields_under_root option to true. If a duplicate field is declared in the general configuration, then its value will be overwritten by the value declared here.

filebeat.inputs:
- type: filestream
  ...
  fields:
    app_id: query_engine_12

fields_under_root

If this option is set to true, the custom fields are stored as top-level fields in the output document instead of being grouped under a fields sub-dictionary. If the custom field names conflict with other field names added by Filebeat, then the custom fields overwrite the other fields.
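
For example, reusing the fields example above, the following stores app_id at the top level of the output document:

filebeat.inputs:
- type: filestream
  ...
  fields:
    app_id: query_engine_12
  fields_under_root: true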

processors

A list of processors to apply to the input data.

See Processors for information about specifying processors in your config.
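
For example, the standard drop_fields processor can remove a field from every event produced by this input (the field name is illustrative):

filebeat.inputs:
- type: filestream
  ...
  processors:
    - drop_fields:
        fields: ["log.offset"]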

pipeline

The Ingest Node pipeline ID to set for the events generated by this input.

The pipeline ID can also be configured in the Elasticsearch output, but this option usually results in simpler configuration files. If the pipeline is configured both in the input and output, the option from the input is used.
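
For example (the pipeline ID is illustrative):

filebeat.inputs:
- type: filestream
  ...
  pipeline: my-pipeline-id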

keep_null

If this option is set to true, fields with null values will be published in the output document. By default, keep_null is set to false.
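
For example:

filebeat.inputs:
- type: filestream
  ...
  keep_null: true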

index

If present, this formatted string overrides the index for events from this input (for elasticsearch outputs), or sets the raw_index field of the event’s metadata (for other outputs). This string can only refer to the agent name and version and the event timestamp; for access to dynamic fields, use output.elasticsearch.index or a processor.

Example value: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}" might expand to "filebeat-myindex-2019.11.01".
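
Written as a configuration snippet, the example above becomes:

filebeat.inputs:
- type: filestream
  ...
  index: "%{[agent.name]}-myindex-%{+yyyy.MM.dd}"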

publisher_pipeline.disable_host

By default, all events contain host.name. This option can be set to true to disable the addition of this field to all events. The default value is false.
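
For example, to stop adding host.name to events generated by this input:

filebeat.inputs:
- type: filestream
  ...
  publisher_pipeline.disable_host: true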