
Observe

The Observe integration uses the http output plugin to flush your records into Observe.

The plugin issues a POST request with the data records in MessagePack (or JSON) format.

The following HTTP parameters are used:

Configuration Parameters

| Key | Description | Default |
| --- | --- | --- |
| host | IP address or hostname of Observe's data collection endpoint. OBSERVE_CUSTOMER is your Customer ID. | OBSERVE_CUSTOMER.collect.observeinc.com |
| port | TCP port to use when sending records to Observe. | 443 |
| tls | Specify whether to use TLS. | on |
| uri | HTTP URI for Observe's data ingest. | /v1/http/fluentbit |
| format | Data format used in the HTTP request body. | msgpack |
| header | Header that provides the Observe token needed to authorize sending data into a datastream. | Authorization Bearer ${OBSERVE_TOKEN} |
| header | Header that instructs Observe how to decode incoming payloads. | X-Observe-Decoder fluent |
| compress | Payload compression mechanism. The available option is gzip. | gzip |
| tls.ca_file | For use with Windows: path to the root certificate. | |
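
Note that header is listed twice because the output sends two separate headers; both entries are required and appear as two header lines in the output section. The ${OBSERVE_TOKEN} value shown as a default is not literal text: the classic configuration format expands ${...} references from environment variables (see Variables under Classic mode), so the token can be kept out of the configuration file. As a minimal sketch, assuming an OBSERVE_TOKEN environment variable is set for the agent process, the two header lines look like this:

    # Both headers are required; the token is read from the OBSERVE_TOKEN environment variable
    header       Authorization     Bearer ${OBSERVE_TOKEN}
    header       X-Observe-Decoder fluent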

Configuration File

In your main configuration file, append the following Output section:

[OUTPUT]
    name         http
    match        *
    host         my-observe-customer-id.collect.observeinc.com
    port         443
    tls          on

    uri          /v1/http/fluentbit

    format       msgpack
    header       Authorization     Bearer ${OBSERVE_TOKEN}
    header       X-Observe-Decoder fluent
    compress     gzip

    # For Windows: provide path to root cert
    #tls.ca_file  C:\fluent-bit\isrgrootx1.pem
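
To try this output locally before wiring up real inputs, you can pair it with a simple test input such as the dummy plugin. The snippet below is a minimal sketch: the tag value observe.test and the record contents are placeholders chosen for illustration, and the match * rule in the output section above already routes them to Observe.

[INPUT]
    # Emits the record below at a fixed rate, which is handy for end-to-end testing
    name   dummy
    tag    observe.test
    dummy  {"message": "test record for Observe"}

Once the agent is running with both sections in place, the test records should appear in the Observe datastream associated with your token.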
    