Azure Data Explorer

Send logs to Azure Data Explorer (Kusto)

The Kusto output plugin lets you ingest your logs into an Azure Data Explorer cluster via the Queued Ingestion mechanism.

Creating a Kusto Cluster and Database

You can create an Azure Data Explorer cluster in one of the following ways:

  • Create a free-tier cluster

  • Create a fully-featured cluster

Creating an Azure Registered Application

Fluent Bit uses the application's credentials to ingest data into your cluster.

  • Register an Application

  • Add a client secret

  • Authorize the app in your database
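
To authorize the application for ingestion (the last step above), you can run a database-level management command such as the following sketch in the Azure Data Explorer query editor. The database name, client ID, and tenant ID placeholders are assumptions; substitute your own values.

.add database <database_name> ingestors ('aadapp=<client_id>;<tenant_id>') 'Calyptia Core Agent ingestion'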

Creating a Table

Fluent Bit ingests the event data into Kusto in JSON format, which by default includes three properties:

  • log - the actual event payload.

  • tag - the event tag.

  • timestamp - the event timestamp.

A table with the expected schema must exist in order for data to be ingested properly.

.create table FluentBit (log:dynamic, tag:string, timestamp:datetime)

Optional - Creating an Ingestion Mapping

By default, Kusto inserts incoming data into a table by inferring the mapped table columns from the payload properties. However, this mapping can be customized by creating a JSON ingestion mapping. The plugin can be configured to use an ingestion mapping via the ingestion_mapping_reference configuration key.
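
For example, a mapping that matches the default three-column schema shown above could be created with a command like the following sketch; the mapping name FluentBitMapping is an assumption, and it would then be referenced through ingestion_mapping_reference.

.create table FluentBit ingestion json mapping 'FluentBitMapping' '[{"column":"log","path":"$.log","datatype":"dynamic"},{"column":"tag","path":"$.tag","datatype":"string"},{"column":"timestamp","path":"$.timestamp","datatype":"datetime"}]'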

Configuration Parameters

Key | Description | Default
--- | --- | ---
tenant_id | Required - The tenant/domain ID of the AAD registered application. |
client_id | Required - The client ID of the AAD registered application. |
client_secret | Required - The client secret of the AAD registered application (App Secret). |
ingestion_endpoint | Required - The cluster's ingestion endpoint, usually in the form `https://ingest-cluster_name.region.kusto.windows.net`. |
database_name | Required - The database name. |
table_name | Required - The table name. |
ingestion_mapping_reference | Optional - The name of a JSON ingestion mapping that will be used to map the ingested payload into the table columns. |
log_key | Key name of the log content. | log
include_tag_key | If enabled, a tag is appended to the output. The key name is set by the tag_key property. | On
tag_key | The key name of the tag. If include_tag_key is false, this property is ignored. | tag
include_time_key | If enabled, a timestamp is appended to the output. The key name is set by the time_key property. | On
time_key | The key name of the timestamp. If include_time_key is false, this property is ignored. | timestamp

Configuration File

Get started quickly with this configuration file:

[OUTPUT]
    Match *
    Name azure_kusto
    Tenant_Id <app_tenant_id>
    Client_Id <app_client_id>
    Client_Secret <app_secret>
    Ingestion_Endpoint https://ingest-<cluster>.<region>.kusto.windows.net
    Database_Name <database_name>
    Table_Name <table_name>
    Ingestion_Mapping_Reference <mapping_name>
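
If you use the YAML configuration format instead of classic mode (see the YAML Configuration File page), the equivalent output definition could look like the sketch below; the placeholders are the same assumptions as in the classic example above.

pipeline:
  outputs:
    - name: azure_kusto
      match: '*'
      tenant_id: <app_tenant_id>
      client_id: <app_client_id>
      client_secret: <app_secret>
      ingestion_endpoint: https://ingest-<cluster>.<region>.kusto.windows.net
      database_name: <database_name>
      table_name: <table_name>
      ingestion_mapping_reference: <mapping_name>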

Troubleshooting

403 Forbidden

If you get a 403 Forbidden error response, make sure that:

  • You provided the correct AAD registered application credentials.

  • You authorized the application to ingest into your database or table.
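
To verify the assigned permissions, you can run a command like the following sketch against your database and confirm that the registered application appears with the Ingestor (or higher) role; the database name is a placeholder.

.show database <database_name> principals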

