# Amazon S3 Destination Plugin
Amazon S3 (Simple Storage Service) is a highly scalable and durable object storage service provided by Amazon Web Services (AWS). The S3 Destination Plugin in Calyptia Core lets you store and archive your data by sending it directly to your Amazon S3 bucket. With this plugin, you can configure your pipeline to store various data types, such as logs, metrics, traces, and events, in your S3 bucket for long-term storage or archival purposes. The S3 Destination Plugin provides a flexible and customizable way to integrate your data with your S3 bucket, allowing you to tailor your storage and archival strategies to meet your specific needs.
The following are the configuration parameters for the Amazon S3 Destination Plugin; a sample snippet follows the table.
Key | Description |
---|---|
Region | The AWS region of your S3 bucket |
Bucket | S3 Bucket Name |
Total File Size (Bytes) | Specifies the size of files in S3. Maximum size is 50GB, minimum is 1MB |
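For orientation, these fields map to the underlying Fluent Bit `s3` output parameters. A minimal sketch in classic Fluent Bit configuration syntax, assuming a hypothetical bucket `my-log-archive` in `us-east-1`:

```
[OUTPUT]
    # S3 destination with only the basics set (example values)
    name             s3
    match            *
    region           us-east-1
    bucket           my-log-archive
    # Roll a new S3 object once ~100 MB has been written (min 1M, max 50G)
    total_file_size  100M
```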
The following are the AWS authentication configuration parameters for the Amazon S3 Destination Plugin; a sample snippet follows the table.
Key | Description |
---|---|
AWS Shared Credential File | Specifies the shared credential file to use for uploads when not assuming an IAM Role ARN |
IAM Role ARN | ARN of an IAM role to assume (ex. for cross account access). |
S3 Object ACL Policy | Predefined Canned ACL policy for S3 objects |
S3 API Endpoint | Custom Endpoint for the AWS S3 API |
STS API Endpoint | Custom Endpoint for the STS API |
External ID for STS API | Specifies an external ID for the STS API; can be used with the role_arn parameter if your role requires an external ID. |
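These authentication fields correspond to the `role_arn`, `external_id`, `sts_endpoint`, `endpoint`, and `canned_acl` parameters of the underlying Fluent Bit `s3` output. A hedged sketch for cross-account access, with a placeholder role ARN, external ID, and endpoint:

```
[OUTPUT]
    name          s3
    match         *
    region        us-east-1
    bucket        my-log-archive
    # Assume an IAM role in another account (placeholder values)
    role_arn      arn:aws:iam::123456789012:role/example-s3-writer
    external_id   example-external-id
    # Optional: custom STS endpoint, e.g. for VPC or regional endpoints
    sts_endpoint  https://sts.us-east-1.amazonaws.com
    # Optional: canned ACL applied to each uploaded object
    canned_acl    bucket-owner-full-control
```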
The following are the advanced configuration parameters for the Amazon S3 Destination Plugin; a sample snippet follows the table.
Key | Description |
---|---|
Use Put Object | Use the S3 PutObject API instead of the multipart upload API |
Send Content-MD5 header | Send the Content-MD5 header with object uploads, as required when Object Lock is enabled |
Preserve Data Ordering | Normally, when an upload request fails, there is a high chance for the last received chunk to be swapped with a later chunk, resulting in data shuffling. This feature prevents this shuffling by using a queue logic for uploads. |
Log Key | By default, the whole log record will be sent to S3. If you specify a key name with this option, then only the value of that key will be sent to S3. |
Storage Class | Specify the storage class for S3 objects. If this option is not specified, objects will be stored with the default 'STANDARD' storage class. |
Store Dir | Directory in which to locally buffer data before sending. The plugin uses the S3 Multipart Upload API to send data in chunks of 5 MB at a time; only a small amount of data is locally buffered at any given point in time. |
S3 Key Format | Format string for keys in S3. This option supports strftime time formatters and a syntax for selecting parts of the Fluent log tag, inspired by the rewrite_tag filter. Add $TAG in the format string to insert the full log tag; add $TAG[0] to insert the first part of the tag in the S3 key. The tag is split into "parts" using the characters specified with the s3_key_format_tag_delimiters option. Add $INDEX to enable sequential indexing for file names. Adding $INDEX prevents a random string from being appended to the end of the key when $UUID is not provided. See the in-depth examples and tutorial in the documentation. |
S3 Key Format Tag Delimiters | A series of characters used to split the tag into "parts" for use with the s3_key_format option. See the in-depth examples and tutorial in the documentation. |
Use Static File Path | Disables the behavior where a UUID string is automatically appended to the end of the S3 key name when $UUID is not provided in s3_key_format. $UUID, time formatters, $TAG, and other dynamic key formatters all work as expected while this feature is set to true. |
Enable Auto Retry Requests | Immediately retry failed requests to AWS services once. This option does not affect the normal Fluent Bit retry mechanism with backoff. Instead, it enables an immediate retry with no delay for networking errors, which may help improve throughput when there are transient/random networking issues. |
JSON Date Format | Specify the format of the date, supported formats: double, iso8601 (e.g: 2018-05-30T09:39:52.000681Z), java_sql_timestamp (e.g: 2018-05-30 09:39:52.000681, useful for AWS Athena), and epoch. |
JSON Date Key | Specifies the name of the date field in output. |
Upload Chunk Size (Bytes) | This plugin uses the S3 Multipart Upload API to stream data to S3, ensuring your data gets off the box as quickly as possible. This parameter configures the size of each "part" in the upload. The total_file_size option configures the size of the file you will see in S3; this option determines the size of chunks uploaded until that size is reached. These chunks are temporarily stored in chunk_buffer_path until their size reaches upload_chunk_size, at which point the chunk is uploaded to S3. Default: 5M, Max: 50M, Min: 5M. |
Upload Timeout | Optionally specify a timeout for uploads. Whenever this amount of time has elapsed, Fluent Bit will complete an upload and create a new file in S3. For example, set this value to 60m and you will get a new file in S3 every hour. Default is 10m. |
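As a rough illustration of how the advanced options combine, the sketch below (using the underlying Fluent Bit parameter names; paths, sizes, and the storage class are example assumptions) buffers 5 MB parts locally, rolls a new object every 250 MB or 10 minutes, and builds keys from the tag and timestamp:

```
[OUTPUT]
    name                          s3
    match                         *
    region                        us-east-1
    bucket                        my-log-archive
    # Local buffer directory for multipart chunks
    store_dir                     /var/fluent-bit/s3
    # Upload in 5 MB parts until the 250 MB object size is reached
    upload_chunk_size             5M
    total_file_size               250M
    # Complete the current object and start a new one at least every 10 minutes
    upload_timeout                10m
    # Key layout: first tag part, then date/time, then a sequential index
    s3_key_format                 /$TAG[0]/%Y/%m/%d/%H-%M-%S-$INDEX
    s3_key_format_tag_delimiters  .
    # Preserve chunk ordering on retried uploads and pick a storage class
    preserve_data_ordering        true
    storage_class                 STANDARD_IA
    json_date_key                 date
    json_date_format              iso8601
```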
The following are the advanced networking configuration parameters for the Amazon S3 Destination Plugin; a sample snippet follows the table.
Key | Description |
---|---|
DNS Mode | Select the primary DNS connection type (TCP or UDP) |
DNS Resolver | Select the primary DNS resolver type (LEGACY or ASYNC) |
Prefer IPv4 | Prioritize IPv4 DNS results when trying to establish a connection |
Keepalive | Enable or disable Keepalive support |
Keepalive Idle Timeout | Set maximum time allowed for an idle Keepalive connection |
Max Connect Timeout | Set the maximum time allowed to establish a connection; this time includes the TLS handshake |
Max Connect Timeout Log Error | On connection timeout, specify if it should log an error. When disabled, the timeout is logged as a debug message |
Max Keepalive Recycle | Set maximum number of times a keepalive connection can be used before it is retired. |
Source Address | Specify network address to bind for data traffic |
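These networking fields map to Fluent Bit's generic `net.*` properties, which can be set alongside the S3 parameters. A sketch with example values (the source address is a placeholder):

```
[OUTPUT]
    name                           s3
    match                          *
    region                         us-east-1
    bucket                         my-log-archive
    # DNS behavior: TCP lookups with the async resolver, preferring IPv4 results
    net.dns.mode                   TCP
    net.dns.resolver               ASYNC
    net.dns.prefer_ipv4            true
    # Keepalive: reuse idle connections for up to 30 s, recycle after 2000 uses
    net.keepalive                  true
    net.keepalive_idle_timeout     30
    net.keepalive_max_recycle      2000
    # Fail connection attempts (including the TLS handshake) after 10 s, logging at debug level
    net.connect_timeout            10
    net.connect_timeout_log_error  false
    # Bind outgoing traffic to a specific local address (placeholder)
    net.source_address             10.0.0.12
```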