
Monitor hosts with collectd and OpenTelemetry

To monitor your infrastructure with collectd using native OpenTelemetry in Splunk Observability Cloud, install a collectd daemon on your host and connect it to your Collector instance as described in this document.

Benefits

After you configure the integration, metrics collected by collectd on your host are available in Splunk Observability Cloud.

Configuration

Install a collectd daemon on your host and connect it to an OpenTelemetry Collector with the following steps:

  1. Install and configure collectd

  2. Configure the OpenTelemetry Collector

  3. Build and run

1. Install and configure collectd

Follow these steps to install and configure the collectd daemon:

  1. Install collectd as a Debian or Yum package on your host

  2. Configure the daemon to collect free disk space metrics through collectd/metrics.conf

  3. Configure the daemon to send data over HTTP using collectd/http.conf

In this example, the host is represented by an Ubuntu 24.04 Docker image.

services:
   collectd:
      build: collectd
      container_name: collectd
      depends_on:
         - otelcollector
      volumes:
         - ./collectd/http.conf:/etc/collectd/collectd.conf.d/http.conf
         - ./collectd/metrics.conf:/etc/collectd/collectd.conf.d/metrics.conf

   # OpenTelemetry Collector
   otelcollector:
      image: quay.io/signalfx/splunk-otel-collector:latest
      container_name: otelcollector
      command: ["--config=/etc/otel-collector-config.yml", "--set=service.telemetry.logs.level=debug"]
      volumes:
         - ./otel-collector-config.yml:/etc/otel-collector-config.yml
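
The build: collectd entry above expects a Dockerfile in the collectd directory. This is an illustrative sketch, assuming an Ubuntu 24.04 base image and a standard Debian package install; verify the package name and flags against your distribution:

# Hypothetical Dockerfile for the collectd container (sketch).
FROM ubuntu:24.04

# Install the collectd Debian package non-interactively and clean up.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y collectd && \
    rm -rf /var/lib/apt/lists/*

# http.conf and metrics.conf are mounted at runtime by Docker Compose
# into /etc/collectd/collectd.conf.d/.

# Run collectd in the foreground so the container stays up.
CMD ["collectd", "-f"]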

The http and metrics configuration files look like this:

# http.conf
# The minimal configuration required to have collectd send data to an OpenTelemetry Collector
# with a collectdreceiver deployed on port 8081.

LoadPlugin write_http
<Plugin "write_http">
   <Node "collector">
      URL "http://otelcollector:8081"
      Format JSON
      VerifyPeer false
      VerifyHost false
   </Node>
</Plugin>
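
For reference, write_http with Format JSON POSTs batches of value lists shaped like the following. The field names come from collectd's JSON format; the concrete values here are illustrative:

[
   {
      "values": [41.2],
      "dstypes": ["gauge"],
      "dsnames": ["value"],
      "time": 1734724544.006,
      "interval": 3600.0,
      "host": "ea1d62c7a229",
      "plugin": "df",
      "plugin_instance": "root",
      "type": "percent_bytes",
      "type_instance": "free"
   }
]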

# metrics.conf
# An example of collectd plugin configuration reporting free disk space on the host.

<LoadPlugin df>
   Interval 3600
</LoadPlugin>
<Plugin df>
   ValuesPercentage true
</Plugin>
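
Other read plugins can be enabled the same way. For example, a hypothetical addition to metrics.conf that also reports memory usage as percentages, mirroring the df configuration (option names taken from the collectd memory plugin; verify them against your collectd version):

# Report memory usage as percentages instead of absolute bytes.
LoadPlugin memory
<Plugin memory>
   ValuesAbsolute false
   ValuesPercentage true
</Plugin>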

2. Configure the OpenTelemetry Collector

Set up your Collector instance to listen for traffic from the collectd daemon over HTTP with the CollectD receiver:

receivers:
   collectd:
      endpoint: "0.0.0.0:8081"

exporters:
   debug:
      verbosity: detailed

service:
   pipelines:
      metrics:
         receivers: [collectd]
         exporters: [debug]

Caution

Make sure to use 0.0.0.0 to expose port 8081 over the Docker network interface so that both Docker containers can interact.

3. Build and run

Run the example by starting the Docker Compose setup, which also builds the collectd container:

$> docker compose up --build

Check that the Collector is receiving metrics and logging them to stdout via the debug exporter:

$> docker logs otelcollector

A typical output is:

StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-12-20 19:55:44.006000128 +0000 UTC
Value: 38.976566
Metric #17
Descriptor:
   -> Name: percent_bytes.reserved
   -> Description:
   -> Unit:
   -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
   -> plugin: Str(df)
   -> plugin_instance: Str(etc-hosts)
   -> host: Str(ea1d62c7a229)
   -> dsname: Str(value)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-12-20 19:55:44.006000128 +0000 UTC
Value: 5.102245
   {"kind": "exporter", "data_type": "metrics", "name": "debug"}
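
Once metrics appear in the debug output, you can forward them to Splunk Observability Cloud by adding an exporter to the same pipeline. This is a sketch using the SignalFx exporter shipped with the Splunk distribution of the Collector; the realm and access token placeholders are values you supply:

receivers:
   collectd:
      endpoint: "0.0.0.0:8081"

exporters:
   debug:
      verbosity: detailed
   # Sends metrics to Splunk Observability Cloud.
   # Replace <REALM> and <ACCESS_TOKEN> with your own values.
   signalfx:
      realm: "<REALM>"
      access_token: "<ACCESS_TOKEN>"

service:
   pipelines:
      metrics:
         receivers: [collectd]
         exporters: [debug, signalfx]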

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.


  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Jan 07, 2025.