Use the S3 PutObject API instead of the multipart upload API.
Send the Content-MD5 header with object uploads, as required when Object Lock is enabled.
Normally, when an upload request fails, there is a high chance that the last received chunk will be swapped with a later chunk, resulting in shuffled data. This feature prevents that shuffling by using queue logic for uploads.
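As a minimal sketch, this behavior can be turned on in a classic-mode configuration, assuming the option described above is `preserve_data_ordering` and using placeholder bucket and region values:

```
[OUTPUT]
    Name                   s3
    Match                  *
    bucket                 my-bucket
    region                 us-east-1
    preserve_data_ordering true
```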
By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3.
Specify the storage class for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class.
Directory to locally buffer data before sending. The plugin uses the S3 Multipart Upload API to send data in chunks of 5 MB at a time; only a small amount of data is buffered locally at any given point in time.
Format string for keys in S3. This option supports strftime time formatters and a syntax for selecting parts of the Fluent log tag, inspired by the rewrite_tag filter. Add $TAG in the format string to insert the full log tag; add $TAG[0] to insert the first part of the tag in the S3 key. The tag is split into "parts" using the characters specified with the s3_key_format_tag_delimiters option. Add $INDEX to enable sequential indexing for file names; adding $INDEX will prevent a random string from being appended to the end of the key when $UUID is not provided. See the in-depth examples and tutorial in the documentation.
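A hedged example, using placeholder bucket, region, and tag values: with the settings below, a tag such as `logs.app-dev` is split on `.` and `-` into the parts `logs`, `app`, and `dev`, so `$TAG[0]` resolves to `logs` and `$TAG[2]` to `dev`, while the strftime formatters and `$INDEX` fill in the rest of the key.

```
[OUTPUT]
    Name                         s3
    Match                        *
    bucket                       my-bucket
    region                       us-east-1
    s3_key_format                /$TAG[2]/$TAG[0]/%Y/%m/%d/%H_%M_%S_$INDEX
    s3_key_format_tag_delimiters .-
```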
S3 Key Format Tag Delimiters
A series of characters which will be used to split the tag into "parts" for use with the s3_key_format option. See the in-depth examples and tutorial in the documentation.
Disables the behavior in which a UUID string is automatically appended to the end of the S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true.
Enable Auto Retry Requests
Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues.
Specify the format of the date. Supported formats: double, iso8601 (e.g. 2018-05-30T09:39:52.000681Z), java_sql_timestamp (e.g. 2018-05-30 09:39:52.000681, useful for AWS Athena), and epoch.
Specifies the name of the date field in output.
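A sketch of the two date options together, assuming the option names `json_date_format` and `json_date_key` and placeholder bucket and region values. With these settings, each record carries its timestamp under the `timestamp` key in ISO 8601 form (as in the iso8601 example above):

```
[OUTPUT]
    Name             s3
    Match            *
    bucket           my-bucket
    region           us-east-1
    json_date_format iso8601
    json_date_key    timestamp
```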
Upload Chunk Size (Bytes)
This plugin uses the S3 Multipart Upload API to stream data to S3, ensuring your data gets off the box as quickly as possible. This parameter configures the size of each "part" in the upload. The total_file_size option configures the size of the file you will see in S3; this option determines the size of the chunks uploaded until that size is reached. These chunks are temporarily stored in chunk_buffer_path until their size reaches upload_chunk_size, at which point the chunk is uploaded to S3. Default: 5M, Max: 50M, Min: 5M.
Optionally specify a timeout for uploads. Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file in S3 every hour. Default is 10m.
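A sketch combining the sizing and timeout options described above, with placeholder bucket, region, and size values: data is uploaded in 10M parts, a file is completed once it reaches 100M, and an upload is completed early if 60 minutes elapse first, so a new file appears in S3 at least every hour.

```
[OUTPUT]
    Name              s3
    Match             *
    bucket            my-bucket
    region            us-east-1
    total_file_size   100M
    upload_chunk_size 10M
    upload_timeout    60m
```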