Monitor services with Telegraf and OpenTelemetry 🔗
To monitor your service with Telegraf using native OpenTelemetry in Splunk Observability Cloud, install the service’s Telegraf Input plugin, then push metrics to the Splunk OpenTelemetry Collector over OTLP.
Note
This setup is designed for Ubuntu Linux, but should work on any Debian-based Linux distribution. These instructions might not work on other operating systems (macOS, Windows).
Benefits 🔗
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata catalog.
Configuration 🔗
Follow these steps to send Telegraf metrics to the OTel Collector:
Install Telegraf
Set up your service’s Telegraf Input plugin
Set up the Telegraf OpenTelemetry Output plugin
Configure the OpenTelemetry Collector
1. Install Telegraf 🔗
Run the following commands to install Telegraf from the InfluxData repository:
curl --silent --location -O \
https://repos.influxdata.com/influxdata-archive.key \
&& echo "943666881a1b8d9b849b74caebf02d3465d6beb716510d86a39f6c8e8dac7515 influxdata-archive.key" \
| sha256sum -c - && cat influxdata-archive.key \
| gpg --dearmor \
| sudo tee /etc/apt/trusted.gpg.d/influxdata-archive.gpg > /dev/null \
&& echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive.gpg] https://repos.influxdata.com/debian stable main' \
| sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt-get update && sudo apt-get install telegraf
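To confirm that the installation succeeded, you can check the Telegraf version and the status of its systemd service. The commands below assume the default service name created by the Debian package:
telegraf --version
sudo systemctl status telegraf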
2. Set up your service’s Telegraf Input plugin 🔗
Next, set up the Telegraf Input plugin for the service you want to monitor. Available plugins include Chrony, Consul, Docker, Elasticsearch, Fluentd, GitHub, Jenkins, RabbitMQ, and SQL. Find a complete list of Input plugins at Telegraf Input plugins in GitHub.
For example, to execute commands at every interval and parse metrics from their output, use the exec Input plugin with a setup like the following:
# Read metrics from one or more commands that can output to stdout
[[inputs.exec]]
  ## Commands array
  commands = ["sh /testfolder/testscript.sh"]
  timeout = "30s"
  data_format = "influx"

  ## Environment variables
  ## Array of "key=value" pairs to pass as environment variables
  ## e.g. "KEY=value", "USERNAME=John Doe",
  ## "LD_LIBRARY_PATH=/opt/custom/lib64:/usr/local/libs"
  # environment = []

  ## Measurement name suffix
  ## Used for separating different commands
  # name_suffix = ""

  ## Ignore Error Code
  ## If set to true, a non-zero error code is not considered an error and the
  ## plugin continues to parse the output.
  # ignore_error = false
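Because the example sets data_format = "influx", the command must write valid InfluxDB line protocol to stdout. The following sketch of a hypothetical /testfolder/testscript.sh shows the expected output shape; the measurement and field names (testapp, queue_size, latency_ms) are placeholders, not part of the original example:
#!/bin/sh
# Emit one metric in InfluxDB line protocol:
# measurement,tag_key=tag_value field_key=field_value
echo "testapp,host=$(hostname) queue_size=42i,latency_ms=12.5"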
3. Set up the Telegraf OpenTelemetry Output plugin 🔗
Next, add the OTel Output plugin to your Telegraf configuration file:
# Send OpenTelemetry metrics over gRPC
[[outputs.opentelemetry]]
The configuration file usually resides in the /etc/telegraf/telegraf.d directory.
For detailed information, see Telegraf’s OpenTelemetry Output plugin documentation in GitHub.
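By default, the plugin sends metrics over gRPC to localhost:4317, which matches the Collector configuration in the next step. If your Collector listens on a different address, set the service_address option; the value shown below is the plugin default and only an assumption about your deployment:
# Send OpenTelemetry metrics over gRPC
[[outputs.opentelemetry]]
  ## Override the default endpoint if your Collector listens on another address
  # service_address = "localhost:4317"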
4. Configure the OpenTelemetry Collector 🔗
Add the following configuration to the OTel Collector to receive metrics from the Telegraf installation:
receivers:
  otlp:
    protocols:
      http:
      grpc:
  signalfx:

exporters:
  signalfx:
    access_token: "SPLUNK_TOKEN"
    realm: "us0"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [signalfx]
    metrics/internal:
      receivers: [signalfx]
      processors:
      exporters: [signalfx]
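After updating both configurations, restart Telegraf and the Collector so the changes take effect, then watch the Collector logs for incoming data. The commands below assume the default systemd unit names created by the Telegraf and Splunk OpenTelemetry Collector packages:
sudo systemctl restart telegraf
sudo systemctl restart splunk-otel-collector
sudo journalctl -u splunk-otel-collector -f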
Troubleshooting 🔗
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.