OpenSearch
Send logs to Amazon OpenSearch Service
The opensearch output plugin lets you ingest your records into an OpenSearch database. The following instructions assume that you have a fully operational OpenSearch service running in your environment.
Key | Description | Default |
---|---|---|
Host | IP address or hostname of the target OpenSearch instance | 127.0.0.1 |
Port | TCP port of the target OpenSearch instance | 9200 |
Path | OpenSearch accepts new data on the HTTP query path "/_bulk". But it is also possible to serve OpenSearch behind a reverse proxy on a subpath. This option defines such path on the Fluent Bit side. It simply adds a path prefix in the indexing HTTP POST URI. | Empty string |
Buffer_Size | Specify the buffer size used to read the response from the OpenSearch HTTP service. This option is useful for debugging purposes where it is required to read full responses; note that the response size grows depending on the number of records inserted. To use an unlimited amount of memory, set this value to False; otherwise the value must follow the Unit Size specification. | 4KB |
Pipeline | OpenSearch allows to set up filters called pipelines. This option defines which pipeline the database should use. For performance reasons it is strongly suggested to do parsing and filtering on the Fluent Bit side and avoid pipelines. | |
AWS_Auth | Enable AWS Sigv4 Authentication for Amazon OpenSearch Service | Off |
AWS_Region | Specify the AWS region for Amazon OpenSearch Service | |
AWS_STS_Endpoint | Specify the custom STS endpoint to be used with STS API for Amazon OpenSearch Service | |
AWS_Role_ARN | AWS IAM Role to assume to put records to your Amazon cluster | |
AWS_External_ID | External ID for the AWS IAM Role specified with aws_role_arn | |
HTTP_User | Optional username credential for access | |
HTTP_Passwd | Password for user defined in HTTP_User | |
Index | Index name | fluent-bit |
Type | Type name | _doc |
Logstash_Format | Enable Logstash format compatibility. This option takes a boolean value: True/False, On/Off | Off |
Logstash_Prefix | When Logstash_Format is enabled, the Index name is composed using a prefix and the date, e.g. if Logstash_Prefix is equal to 'mydata' your index will become 'mydata-YYYY.MM.DD'. The last string appended belongs to the date when the data is being generated. | logstash |
Logstash_DateFormat | Time format (based on strftime) to generate the second part of the Index name. | %Y.%m.%d |
Time_Key | When Logstash_Format is enabled, each record will get a new timestamp field. The Time_Key property defines the name of that field. | @timestamp |
Time_Key_Format | When Logstash_Format is enabled, this property defines the format of the timestamp. | %Y-%m-%dT%H:%M:%S |
Time_Key_Nanos | When Logstash_Format is enabled, enabling this property sends nanosecond precision timestamps. | Off |
Include_Tag_Key | When enabled, it appends the Tag name to the record. | Off |
Tag_Key | When Include_Tag_Key is enabled, this property defines the key name for the tag. | _flb-key |
Generate_ID | When enabled, generate _id for outgoing records. This prevents duplicate records when retrying. | Off |
Id_Key | If set, _id will be the value of the key from the incoming record, and the Generate_ID option is ignored. | |
Write_Operation | Operation to use to write in bulk requests. | create |
Replace_Dots | When enabled, replace field name dots with underscore. | Off |
Trace_Output | When enabled, print the OpenSearch API calls to stdout (for diagnostics only). | Off |
Trace_Error | When enabled, print the OpenSearch API calls to stdout when OpenSearch returns an error (for diagnostics only). | Off |
Current_Time_Index | Use current time for index generation instead of the message record. | Off |
Logstash_Prefix_Key | When included: the value of the given key in the record will be looked up and overwrite the Logstash_Prefix for index generation. If the key/value is not found in the record, the Logstash_Prefix option will act as a fallback. Nested keys are not supported (if desired, you can use the nest filter plugin to remove nesting). | |
Suppress_Type_Name | When enabled, the mapping type is removed and the Type option is ignored. | Off |
Workers | Enables dedicated thread(s) for this output. The default value (2) applies since version 1.8.13; for previous versions it is 0. | 2 |
The parameters index and type can be confusing if you are new to OpenSearch. If you have used a common relational database before, they can be compared to the database and table concepts. Also see the FAQ below.
The OpenSearch output plugin supports TLS/SSL. For more details about the available properties and general configuration, please refer to the TLS/SSL section.
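For example, a minimal sketch that enables TLS with certificate verification and basic authentication (all connection values below are placeholders):

```
[OUTPUT]
    Name        opensearch
    Match       *
    Host        opensearch.example.com
    Port        9200
    HTTP_User   admin
    HTTP_Passwd changeme
    tls         On
    tls.verify  On
```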
The write_operation can be any of:

Operation | Description |
---|---|
create (default) | adds new data - if the data already exists (based on its id), the op is skipped. |
index | new data is added while existing data (based on its id) is replaced (reindexed). |
update | updates existing data (based on its id). If no data is found, the op is skipped. |
upsert | known as merge or insert: if the data does not exist it is inserted, otherwise it is updated (based on its id). |

Please note, Id_Key or Generate_ID is required in update and upsert scenarios.
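For example, a minimal sketch of an upsert configuration, where my_key_id is a placeholder for a key present in your records that should be used as the document id:

```
[OUTPUT]
    Name            opensearch
    Match           *
    Host            127.0.0.1
    Port            9200
    Index           my_index
    Write_Operation upsert
    Id_Key          my_key_id
```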
In order to insert records into an OpenSearch service, you can run the plugin from the command line or through the configuration file:
The opensearch plugin can read the parameters from the command line in two ways: through the -p argument (property) or by setting them directly through the service URI. The URI format is the following:
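```
opensearch://host:port/index/type
```

Here the scheme matches the plugin name, and host, port, index and type are placeholders for your own values, as used in the examples below.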
Using the format specified, you could start Fluent Bit through:
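```
$ fluent-bit -i cpu -t cpu -o opensearch://192.168.2.3:9200/my_index/my_type -o stdout -m '*'
```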
which is similar to do:
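```
$ fluent-bit -i cpu -t cpu -o opensearch -p Host=192.168.2.3 -p Port=9200 -p Index=my_index -p Type=my_type -o stdout -m '*'
```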
In your main configuration file append the following Input & Output sections:
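```
[INPUT]
    Name  cpu
    Tag   cpu

[OUTPUT]
    Name  opensearch
    Match *
    Host  192.168.2.3
    Port  9200
    Index my_index
    Type  my_type
```

The cpu input and the Host, Index and Type values above are placeholders for illustration; adjust them to your environment.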
Some input plugins may generate messages where the field names contain dots. This opensearch plugin replaces them with an underscore, e.g. a record such as:
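```
{"cpu0.p_cpu"=>17.000000}
```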
becomes
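```
{"cpu0_p_cpu"=>17.000000}
```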
The Amazon OpenSearch Service adds an extra security layer where HTTP requests must be signed with AWS Sigv4. This plugin supports Amazon OpenSearch Service with IAM Authentication.
See the Fluent Bit documentation on AWS credentials for details on how AWS credentials are fetched.
Example configuration:
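```
[OUTPUT]
    Name       opensearch
    Match      *
    # placeholder Amazon OpenSearch Service domain endpoint
    Host       my-domain.us-west-2.es.amazonaws.com
    Port       443
    Index      my_index
    Type       my_type
    AWS_Auth   On
    AWS_Region us-west-2
    tls        On
```

The Host above is a placeholder; use your own domain endpoint and the matching AWS_Region.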
Notice that the Port is set to 443, tls is enabled, and AWS_Region is set.
Similar to Elastic Cloud, OpenSearch version 2.0 and above requires the type option to be removed by setting Suppress_Type_Name On.
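For example (the host below is a placeholder):

```
[OUTPUT]
    Name               opensearch
    Match              *
    Host               opensearch.example.com
    Port               9200
    Suppress_Type_Name On
```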
Without this you will see errors like:
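```
{"took":3,"errors":true,"items":[{"index":{"_index":"fluent-bit","_type":"_doc","_id":null,"status":400,"error":{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}}}]}
```

The exact values of fields such as took, _index and _id will vary with your setup.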