Splunk® Supported Add-ons

Splunk Add-on for Linux


Configure collectd to send data to the Splunk Add-on for Linux

The Splunk Add-on for Linux depends on data sent from collectd to the Splunk HTTP Event Collector (HEC) or a TCP input. collectd is a daemon that includes a rich set of plugins for gathering system and application performance metrics. In a typical deployment, collectd gathers data on the Linux host (the collectd client) and forwards it to a collectd server, which sends the data on to Splunk.

[Figure: data is gathered from a Linux host, sent through a collectd server, and then to Splunk.]

You can customize your collectd deployment based on your needs and environment. You can configure the collectd client and collectd server on the same Linux host, or you can configure several collectd clients to send data to a single collectd server.

Download and install collectd

Prerequisites

Review the hardware and software requirements for the Splunk Add-on for Linux. See Hardware and software requirements.

Steps

  1. Go to https://collectd.org/download.shtml to download collectd.
  2. Follow the instructions from https://collectd.org/wiki/index.php/First_steps to install collectd.

Configure collectd for Linux

You must configure collectd to collect data and send the data to Splunk. The default location for collectd.conf is /etc/collectd.conf or /etc/collectd/collectd.conf.

See the collectd manpage to learn more about collectd.conf.
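A minimal collectd.conf usually begins with a few global options before any plugin blocks. The following sketch uses illustrative values (the hostname and interval are examples, not requirements):

```apache
# Global settings (illustrative values)
Hostname   "linux-host-01"
FQDNLookup true
Interval   60
```

Each plugin is then enabled with a LoadPlugin line and, where needed, configured in a matching <Plugin ...> block, as shown in the sections that follow.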

Configure the collectd client to collect data from Linux

The following table shows the data collected, the collectd plugin that collects it, and a suggested configuration. To enable a plugin, delete the hash symbol (#) in front of its LoadPlugin line. For example, change #LoadPlugin cpu to LoadPlugin cpu to enable the cpu plugin.

CPU metrics (plugin cpu):

<Plugin cpu>
#  ReportByCpu true
#  ReportByState true
   ValuesPercentage true
</Plugin>

Memory metrics (plugin memory):

<Plugin memory>
   ValuesAbsolute true
   ValuesPercentage true
</Plugin>

Swap metrics (plugin swap):

<Plugin swap>
   ReportByDevice true
#  ReportBytes true
#  ValuesAbsolute true
   ValuesPercentage true
</Plugin>

VMEM metrics (plugin vmem):

<Plugin vmem>
   Verbose false
</Plugin>

Mountpoint usage/file system usage (plugin df):

<Plugin df>
#  Device "/dev/hda1"
#  Device "192.168.0.2:/mnt/nfs"
#  MountPoint "/home"
#  FSType "ext3"
   ReportByDevice true
#  ReportInodes false
#  ValuesAbsolute true
   ValuesPercentage true
</Plugin>

Network interface traffic (plugin interface): none. Use the default configuration.

Disk utilization (plugin disk)

System load (plugin load):

<Plugin load>
   ReportRelative true
</Plugin>

Process information (plugin processes):

<Plugin processes>
   ProcessMatch "all" "(.*)"
</Plugin>

Network protocols information (plugin protocols): none. Use the default configuration.

IRQ metrics (plugin irq)

TCP connections information (plugin tcpconns)

Thermal information (plugin thermal)

System uptime statistics (plugin uptime)
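Taken together, the plugins listed above are enabled with a block of LoadPlugin directives near the top of collectd.conf. A sketch of such a block:

```apache
# Enable the data-collection plugins used by the Splunk Add-on for Linux
LoadPlugin cpu
LoadPlugin memory
LoadPlugin swap
LoadPlugin vmem
LoadPlugin df
LoadPlugin interface
LoadPlugin disk
LoadPlugin load
LoadPlugin processes
LoadPlugin protocols
LoadPlugin irq
LoadPlugin tcpconns
LoadPlugin thermal
LoadPlugin uptime
```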

Configure the collectd client to send data to the collectd server

If you configure the collectd client and the collectd server on the same machine, you can skip this step.

See Plugin network in the collectd manpage for information on how to configure the Plugin network. See Networking introduction on the collectd Wiki for a detailed walkthrough.
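The network plugin uses Server directives on the client side and a Listen directive on the server side. The following is a minimal sketch with a hypothetical server address; 25826 is the default collectd network port. The two fragments go in the client's and the server's collectd.conf, respectively:

```apache
# On each collectd client: forward metrics to the collectd server.
LoadPlugin network
<Plugin network>
  Server "192.0.2.10" "25826"
</Plugin>

# On the collectd server: accept metrics from clients.
LoadPlugin network
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>
```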

Configure the collectd server to send data to Splunk

Plugin write_http and Plugin write_graphite submit values to Splunk. Plugin write_http sends data over HTTP, encoding the metrics as JSON, and Plugin write_graphite writes data in the Graphite format over TCP.

Configure plugin write_http

If you want to send Linux performance metrics data to Splunk in JSON format over HTTP, configure the URL, Header, and Format fields as follows:

URL
  Description: The URL to which the values are submitted. The values for the IP address, port, and token must be the same as the values you define for the HEC inputs. See Configure HEC inputs for the Splunk Add-on for Linux.
  Syntax: URL "https://<Splunk server IP>:<port number>/services/collector/raw?channel=<token value>"
  Example: URL "https://10.66.104.127:8088/services/collector/raw?channel=693E90D4-91A5-49A3-99B1-CFE8828A0711"

Header
  Description: An HTTP header to add to the request.
  Syntax: Header "Authorization: Splunk <token value>"
  Example: Header "Authorization: Splunk 693E90D4-91A5-49A3-99B1-CFE8828A0711"

Format
  Description: The data format.
  Syntax: Format "JSON"
  Example: Format "JSON"

Example

LoadPlugin write_http
<Plugin write_http>
  <Node "node-http-1">
    URL "https://10.66.104.127:8088/services/collector/raw?channel=693E90D4-91A5-49A3-99B1-CFE8828A0711"
    Header "Authorization: Splunk 693E90D4-91A5-49A3-99B1-CFE8828A0711"
    Format "JSON"
    Metrics true
    StoreRates true
  </Node>
</Plugin>

Configure plugin write_graphite

If you want to send Linux performance metrics data to Splunk in Graphite format, configure plugin write_graphite as follows:

  1. Set AlwaysAppendDS to true.
  2. Set SeparateInstances to false.
  3. Make sure the values for Host and Port are the same as the values you define for the TCP inputs. See Configure TCP inputs for the Splunk Add-on for Linux.

If dots (.) appear anywhere in the metric name (including the prefix, the EscapeCharacter value, the hostname, and the postfix), Splunk cannot recognize the key-value pairs in the data.

Example

LoadPlugin write_graphite
<Plugin write_graphite>
  <Node "node-graphite-1">
    Host "10.66.108.127"
    Port "2104"
    Protocol "tcp"
    EscapeCharacter "_"
    AlwaysAppendDS true
    SeparateInstances false
  </Node>
</Plugin>
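On a dedicated collectd server, the receiving side (plugin network) and the forwarding side (plugin write_http or write_graphite) combine in one collectd.conf. The following sketch reuses the hypothetical HEC address and token from the write_http example above; 25826 is the default collectd network port:

```apache
LoadPlugin network
LoadPlugin write_http

# Receive metrics from collectd clients.
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>

# Forward the received metrics to Splunk HEC in JSON format.
<Plugin write_http>
  <Node "node-http-1">
    URL "https://10.66.104.127:8088/services/collector/raw?channel=693E90D4-91A5-49A3-99B1-CFE8828A0711"
    Header "Authorization: Splunk 693E90D4-91A5-49A3-99B1-CFE8828A0711"
    Format "JSON"
    Metrics true
    StoreRates true
  </Node>
</Plugin>
```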

