Calyptia Core - Plugins
Plugins that are part of Calyptia Core
Calyptia Core ships with a set of built-in enterprise plugins that can be enabled in the pipeline configuration if required.
S3 Input plugin
The Calyptia Core S3 input plugin ingests logs from an AWS S3 bucket.
The following are the configuration options available for this plugin:
Option Name | Description | Default Value | Required |
---|---|---|---|
aws_access_key | AWS access key | None | Yes |
aws_secret_key | AWS secret access key | None | Yes |
aws_bucket_name | AWS S3 bucket name to fetch logs from (e.g., logs) | None | Yes |
aws_bucket_region | AWS region where the bucket is located (e.g., us-east-1) | None | Yes |
logs | Glob pattern of the log files contained under the bucket (e.g., /logs/*.log) | None | Yes |
Example Deployment using this plugin
The following is the pipeline configuration required to ingest logs from the bucket named calyptia,
reading all the log files that match the path fluent-bit-logs/*.log:
[INPUT]
    Name forward
    Host 0.0.0.0
    Port 24284

[INPUT]
    Name go-s3-replay-plugin
    Tag s3-input
    aws_access_key {{ secrets.aws_access_key }}
    aws_secret_key {{ secrets.aws_secret_key }}
    aws_bucket_name calyptia
    aws_bucket_region us-east-1
    logs fluent-bit-logs/*.log

[OUTPUT]
    Name stdout
    Match *
As can be seen in the previous configuration, it also relies on secrets. Create a file
named secrets.env with the following contents:
aws_access_key=YOUR_ACCESS_KEY
aws_secret_key=YOUR_SECRET_KEY
Then deploy the pipeline with the following command:
calyptia create pipeline --aggregator {YOUR_CORE_INSTANCE_NAME} --config-file ./s3.conf --name s3 --secrets-file ./secrets.env
Monitor the status of your pipeline until it reaches the STARTED status:
watch -n1 calyptia get pipelines --aggregator {YOUR_CORE_INSTANCE_NAME}
Once it is ready, you can view the output of the pipeline using kubectl and validate that the S3 input logs have been ingested and written to standard output.
kubectl logs -n calyptia s3-pipeline-xxx
Records format
This plugin produces records with the following properties, which can be used if further filtering needs to be added (see the filter sketch after the example records below).
_raw
: string, raw text of each line of the file
file
: string, file path under the bucket.
Example records can be seen next:
[2022/07/12 22:00:14] [ info] [sp] stream processor started
[2022/07/12 22:00:14] [ info] processing file: data.json from bucket: testing
[2022/07/12 22:00:14] [ info] [output:stdout:test-output] worker #0 started
[2022/07/12 22:00:14] [ info] completed processing of file: data.json from bucket: testing
[1] test-input: [1657663214.813131462, {"_raw"=>"{"id":2,"first_name":"xxx","last_name":"xxx","email":"[email protected]","gender":"Male","ip_address":"129.53.126.xxx"}", "file"=>"data.json"}]
[2] test-input: [1657663214.813131712, {"_raw"=>"{"id":3,"first_name":"xxx","last_name":"xxx","email":"[email protected]","gender":"Female","ip_address":"189.171.157.xxx"}", "file"=>"data.json"}]
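Because each record carries the file key, a standard Fluent Bit grep filter can be added to keep only the records coming from a particular object under the bucket. The snippet below is a minimal sketch; the pattern data\.json is only an illustration and should be adjusted to your data:

[FILTER]
    Name grep
    Match s3-input
    Regex file data\.json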
Integration with s3 output
To integrate with the s3 output plugin, an example configuration is shown next:
[INPUT]
    Name forward
    Host 0.0.0.0
    Port 24284

[INPUT]
    Name go-s3-replay-plugin
    Tag s3-input
    aws_access_key {{ secrets.aws_access_key }}
    aws_secret_key {{ secrets.aws_secret_key }}
    aws_bucket_name calyptia
    aws_bucket_region us-east-1
    logs fluent-bit-logs/*.json

[OUTPUT]
    Name s3
    Match *
    aws_shared_credentials_file {{ files.creds }}
    bucket fbit-tests
    region us-east-1
    store_dir /data/storage/s3
    log_key message
    upload_timeout 1s
Please note a few details in the provided configuration for the s3 output plugin:
- The configuration parameter store_dir has to be set to /data/storage/s3
- The configuration parameter aws_shared_credentials_file has to be set to {{ files.creds }}
The shared credentials file referenced in the second point has to be named creds.conf and has to follow this format:
[default]
aws_access_key_id = xxxx
aws_secret_access_key = xxxx
Please note that, as explained in the previous section, the secrets file also has to be provided
when deploying the pipeline:
calyptia create pipeline --aggregator {NAME_OF_CALYPTIA_CORE_INSTANCE} --config-file ./s3-output.conf --name s3-pipeline --file ./creds.conf --secrets-file ./secrets.env --encrypt-files
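Once the pipeline is running, a quick way to confirm that objects are being uploaded is to list the destination bucket directly. This assumes the AWS CLI is installed locally and configured with credentials that have read access to the fbit-tests bucket used in the example:

aws s3 ls s3://fbit-tests/ --recursive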