S3 output plugin

  • Plugin version: v4.1.6
  • Released on: 2018-09-25
  • Changelog

For other versions, see the Versioned plugin docs.

Getting Help

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in GitHub. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Description

This plugin batches and uploads Logstash events to Amazon Simple Storage Service (Amazon S3).

Requirements:

  • An Amazon S3 bucket and S3 access permissions (typically access_key_id and secret_access_key)
  • The S3 PutObject permission

The S3 output creates temporary files in the OS temporary directory; you can specify where to save them using the temporary_directory option.
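For example, to stage files on a dedicated volume instead of the default temporary directory (the path below is illustrative), set temporary_directory in the output block:

    output {
      s3 {
        bucket => "your_bucket"
        temporary_directory => "/var/lib/logstash/s3_temp"   # illustrative path
      }
    }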

S3 output files are named using the following format:

ls.s3.312bc026-2f5d-49bc-ae9f-5940cf4ad9a6.2013-04-18T10.00.tag_hello.part0.txt

ls.s3

indicates the Logstash S3 plugin.

312bc026-2f5d-49bc-ae9f-5940cf4ad9a6

a new, random UUID generated per file.

2013-04-18T10.00

the timestamp of the file, at the granularity set by time_file.

tag_hello

indicates the event’s tag.

part0

the part number. If you set size_file, the plugin generates additional parts whenever the file size exceeds size_file. When a file is full it is pushed to the bucket and then deleted from the temporary directory. If a file is empty, it is simply deleted; empty files are never pushed.

Crash recovery: when restore is set to true, the plugin recovers and uploads temporary log files left over after a crash or abnormal termination.

Usage

This is an example of a Logstash config:

output {
   s3 {
     access_key_id => "crazy_key"             (optional)
     secret_access_key => "monkey_access_key" (optional)
     region => "eu-west-1"                    (optional, default = "us-east-1")
     bucket => "your_bucket"                  (required)
     size_file => 2048                        (optional) - Bytes
     time_file => 5                           (optional) - Minutes
     codec => "plain"                         (optional)
     canned_acl => "private"                  (optional. Options are "private", "public-read", "public-read-write", "authenticated-read", "aws-exec-read", "bucket-owner-read", "bucket-owner-full-control", "log-delivery-write". Defaults to "private" )
   }
}

S3 Output Configuration Options

This plugin supports the following configuration options plus the Common Options described later.

Setting                               Input type                                                        Required
access_key_id                         string                                                            No
additional_settings                   hash                                                              No
aws_credentials_file                  string                                                            No
bucket                                string                                                            Yes
canned_acl                            string, one of ["private", "public-read", "public-read-write", "authenticated-read", "aws-exec-read", "bucket-owner-read", "bucket-owner-full-control", "log-delivery-write"]  No
encoding                              string, one of ["none", "gzip"]                                   No
endpoint                              string                                                            No
prefix                                string                                                            No
proxy_uri                             string                                                            No
region                                string                                                            No
restore                               boolean                                                           No
role_arn                              string                                                            No
role_session_name                     string                                                            No
rotation_strategy                     string, one of ["size_and_time", "size", "time"]                  No
secret_access_key                     string                                                            No
server_side_encryption                boolean                                                           No
server_side_encryption_algorithm      string, one of ["AES256", "aws:kms"]                              No
session_token                         string                                                            No
signature_version                     string, one of ["v2", "v4"]                                       No
size_file                             number                                                            No
ssekms_key_id                         string                                                            No
storage_class                         string, one of ["STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA"]  No
temporary_directory                   string                                                            No
time_file                             number                                                            No
upload_queue_size                     number                                                            No
upload_workers_count                  number                                                            No
validate_credentials_on_root_bucket   boolean                                                           No

Also see Common Options for a list of options supported by all output plugins.

 

access_key_id

  • Value type is string
  • There is no default value for this setting.

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:

  1. Static configuration, using access_key_id and secret_access_key params in logstash plugin config
  2. External credentials file specified by aws_credentials_file
  3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
  5. IAM Instance Profile (available when running inside EC2)

additional_settings

  • Value type is hash
  • Default value is {}

Key-value pairs of settings and corresponding values used to parametrize the connection to S3. See full list in the AWS SDK documentation. Example:

    output {
      s3 {
        access_key_id => "1234"
        secret_access_key => "secret"
        region => "eu-west-1"
        bucket => "logstash-test"
        additional_settings => {
          "force_path_style" => true
          "follow_redirects" => false
        }
      }
    }

aws_credentials_file

  • Value type is string
  • There is no default value for this setting.

Path to a YAML file containing a hash of AWS credentials. This file is only loaded if access_key_id and secret_access_key aren’t set. The contents of the file should look like this:

    :access_key_id: "12345"
    :secret_access_key: "54321"

bucket

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

The S3 bucket name.

canned_acl

  • Value can be any of: private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control, log-delivery-write
  • Default value is "private"

The S3 canned ACL to use when putting the file. Defaults to "private".

encoding

  • Value can be any of: none, gzip
  • Default value is "none"

Specify the content encoding. Currently only "gzip" compression is supported. Defaults to "none".
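For example, to upload gzip-compressed objects instead of plain files:

    output {
      s3 {
        bucket => "your_bucket"
        encoding => "gzip"   # objects are gzip-compressed before upload
      }
    }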

endpoint

  • Value type is string
  • There is no default value for this setting.

The endpoint to connect to. By default it is constructed using the value of region. This is useful when connecting to S3 compatible services, but beware that these aren’t guaranteed to work correctly with the AWS SDK.
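As a sketch, pointing the plugin at a self-hosted S3-compatible service might look like this (the endpoint URL is illustrative):

    output {
      s3 {
        bucket => "your_bucket"
        endpoint => "https://minio.example.internal:9000"   # illustrative S3-compatible endpoint
      }
    }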

prefix

  • Value type is string
  • Default value is ""

Specify a prefix for the uploaded filename; this can simulate directories on S3. The prefix does not require a leading slash. This option supports Logstash interpolation: https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#sprintf; for example, files can be prefixed with the event date using prefix => "%{+YYYY}/%{+MM}/%{+dd}". Be warned that this can create a lot of temporary local files.
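Applied in a config, the date-based prefix looks like this; each event date becomes a separate "directory" in the bucket:

    output {
      s3 {
        bucket => "your_bucket"
        prefix => "%{+YYYY}/%{+MM}/%{+dd}"   # one S3 "directory" per event date
      }
    }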

proxy_uri

  • Value type is string
  • There is no default value for this setting.

URI of the proxy server, if one is required.

region

  • Value type is string
  • Default value is "us-east-1"

The AWS region.

restore

  • Value type is boolean
  • Default value is true

When set to true, the plugin recovers and uploads temporary log files left over after a crash or abnormal termination.

role_arn

  • Value type is string
  • There is no default value for this setting.

The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the AssumeRole API documentation for more information.
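A sketch of cross-account access using an assumed role (the ARN is illustrative):

    output {
      s3 {
        bucket => "your_bucket"
        role_arn => "arn:aws:iam::123456789012:role/logstash-writer"   # illustrative role ARN
        role_session_name => "logstash"
      }
    }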

role_session_name

  • Value type is string
  • Default value is "logstash"

Session name to use when assuming an IAM role.

rotation_strategy

  • Value can be any of: size_and_time, size, time
  • Default value is "size_and_time"

Define the strategy used to decide when to rotate the file and push it to S3. The default checks both size and time; whichever limit is reached first rotates the file.
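For example, with the default strategy the file below is rotated when it reaches 10 MiB or 5 minutes of age, whichever comes first:

    output {
      s3 {
        bucket => "your_bucket"
        rotation_strategy => "size_and_time"
        size_file => 10485760   # 10 MiB, in bytes
        time_file => 5          # minutes
      }
    }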

secret_access_key

  • Value type is string
  • There is no default value for this setting.

The AWS Secret Access Key

server_side_encryption

  • Value type is boolean
  • Default value is false

Specifies whether or not to use S3’s server side encryption. Defaults to no encryption.

server_side_encryption_algorithm

  • Value can be any of: AES256, aws:kms
  • Default value is "AES256"

Specifies what type of encryption to use when SSE is enabled.
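Putting the server-side-encryption settings together, a sketch of SSE-KMS uploads (the key ARN is illustrative):

    output {
      s3 {
        bucket => "your_bucket"
        server_side_encryption => true
        server_side_encryption_algorithm => "aws:kms"
        ssekms_key_id => "arn:aws:kms:eu-west-1:123456789012:key/11111111-2222-3333-4444-555555555555"   # illustrative key ARN
      }
    }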

session_token

  • Value type is string
  • There is no default value for this setting.

The AWS session token for temporary credentials.

signature_version

  • Value can be any of: v2, v4
  • There is no default value for this setting.

The version of the S3 signature hash to use. Normally the internal client default is used, but it can be explicitly specified here.

size_file

  • Value type is number
  • Default value is 5242880

Set the file size in bytes. When the staged file grows beyond size_file, it is rotated into two or more parts. If you use tags, a separate file is generated for each tag.

ssekms_key_id

  • Value type is string
  • There is no default value for this setting.

The key to use when server_side_encryption_algorithm is set to "aws:kms". If "aws:kms" is set but no key is specified, the default KMS key is used. See http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

storage_class

  • Value can be any of: STANDARD, REDUCED_REDUNDANCY, STANDARD_IA
  • Default value is "STANDARD"

Specifies the S3 storage class to use when uploading the file. More information about the different storage classes can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html. Defaults to STANDARD.

temporary_directory

  • Value type is string
  • Default value is "/tmp/logstash"

Set the directory where Logstash stores temporary files before sending them to S3. Defaults to a logstash subdirectory of the OS temporary directory, e.g. /tmp/logstash on Linux.

time_file

  • Value type is number
  • Default value is 15

Set the time, in minutes, after which the current file is closed and rotated. If you also define size_file, you may get several part files per time section and tag. Setting time_file to 0 keeps the file open indefinitely; beware of setting both time_file and size_file to 0, because the file will never be pushed to the bucket: currently the plugin only uploads it when Logstash restarts.

upload_queue_size

  • Value type is number
  • Default value is 4

The number of items that can be kept in the local queue before they are uploaded.

upload_workers_count

  • Value type is number
  • Default value is 4

Specify how many workers to use to upload the files to S3
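The two upload settings can be tuned together; for example, to allow more concurrent uploads and more staged files awaiting upload:

    output {
      s3 {
        bucket => "your_bucket"
        upload_workers_count => 8   # more concurrent uploads
        upload_queue_size => 8      # more staged files awaiting upload
      }
    }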

validate_credentials_on_root_bucket

  • Value type is boolean
  • Default value is true

The common use case is to define permissions on the root bucket and give Logstash full access to write its logs. In some circumstances you need finer-grained permissions on a subfolder; this option lets you disable the credential check at startup.

Common Options

The following configuration options are supported by all output plugins:

Setting         Input type   Required
codec           codec        No
enable_metric   boolean      No
id              string       No

codec

  • Value type is codec
  • Default value is "line"

The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output without needing a separate filter in your Logstash pipeline.

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 s3 outputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

output {
  s3 {
    id => "my_plugin_id"
  }
}