Splunk® IT Service Intelligence

Entity Integrations Manual


Manually collect metrics from a *nix host in ITSI

You can manually set up collectd to collect metrics from a *nix host. Manually configure metrics collection for a *nix host when you meet at least one of these conditions:

  • You're installing collectd on a closed network with no internet access.
  • You already installed collectd on the host.
  • You don't have trusted URLs you can download the required packages and dependencies from.

If you also want to collect log data from a *nix host, see Manually collect logs from a *nix host in ITSI.

Prerequisites

  • *nix host: See *nix integration operating system support.
  • Dependencies: See Required *nix dependencies.
  • Administrator role: In Splunk Enterprise, you must be a user with the admin role. In Splunk Cloud, you must be a user with the sc_admin role.
  • HEC token: See Configure the HTTP Event Collector to collect entity integration data in ITSI. Alternatively, you can configure collectd to send data to a local universal forwarder instead of using HEC. For more information, see Send collectd data to a local universal forwarder.

Steps

Follow these steps to manually collect metrics from a *nix host.

1. Install collectd

Install collectd on the host. For a list of collectd install commands for each supported operating system, see collectd package sources, install commands, and locations.

2. Install the libcurl package

If you haven't already installed the libcurl package on your system, install it now. For Linux systems, the version of libcurl to install depends on the Linux OS version you're running. For more information about which version of libcurl to install, see Required *nix dependencies.
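Because step 4 below provides different plug-in builds for libcurl3 and libcurl4 on Debian and Ubuntu, it helps to confirm which libcurl shared object the host actually has. The following is a minimal sketch: the `ldconfig` output shown is a canned sample, so on a real host pipe the live output of `ldconfig -p | grep libcurl` instead.

```shell
# Canned sample of what `ldconfig -p | grep libcurl` might print on a
# Debian/Ubuntu host; replace it with the live command output on a real
# system.
sample='libcurl.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcurl.so.4'

# Extract the soname major version to decide between the libcurl3 and
# libcurl4 plug-in builds in the tables in step 4.
version="$(printf '%s\n' "$sample" | grep -oE 'libcurl\.so\.[0-9]+' | sort -u | head -n1)"
echo "$version"
```

A result of `libcurl.so.4` points you at the `deb_libcurl4` plug-in paths; `libcurl.so.3` points at the libcurl3 paths.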

3. Install the libyajl package for Docker container monitoring

If you want to monitor Docker containers on a Linux host, you must have the libyajl version 2 package on the host to configure Docker container data collection. If you don't already have the package, install it now.

Operating system and install command:
  • Debian, Ubuntu: $ apt-get install libyajl2
  • CentOS, Red Hat Enterprise Linux, Fedora: $ yum install yajl
  • SUSE, openSUSE: $ zypper install libyajl2

4. Copy custom ITSI plug-ins to collectd's plug-in directory

Copy the custom collectd plug-ins that ITSI uses for entity integrations. For information about plug-in locations, see collectd package sources, install commands, and locations. You can't monitor Docker containers on Solaris systems, or containers you deployed with an orchestration tool like Kubernetes or OpenShift.

The write_splunk collectd plug-in is a replacement for the write_http plug-in that directs metrics data to the Splunk HTTP Event Collector (HEC). Don't send data to ITSI with the write_splunk plug-in and the write_http plug-in at the same time.

The processmon.so plug-in sends process metrics for a host. If you don't configure a processmon stanza, the plug-in monitors every process and doesn't collect IO metrics. If you both blacklist and whitelist a process, the plug-in blacklists the process. The plug-in uses POSIX Extended Regular Expression syntax for the regular expression you enter to whitelist or blacklist processes, and the comm field in /proc/[pid]/stat for process names. For more information, see the Linux Programmer's Manual.
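Because processmon whitelist and blacklist entries use POSIX Extended Regular Expression syntax, you can preview a pattern with `grep -E`, which speaks the same ERE dialect. The sketch below uses made-up process names; real names come from the comm field in /proc/[pid]/stat, which Linux truncates to 15 characters, so match the truncated name.

```shell
# Preview how an ERE pattern from a processmon whitelist/blacklist entry
# matches process names. The names below are made-up examples.
names="$(mktemp)"
printf '%s\n' nginx nginx-worker splunkd my_app > "$names"

# A stanza entry like `whitelist "nginx.*"` corresponds to this pattern:
matched="$(grep -E 'nginx.*' "$names")"
echo "$matched"
rm -f "$names"
```

Here the pattern matches `nginx` and `nginx-worker` but not `splunkd` or `my_app`, which is the set of processes such a whitelist entry would monitor.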

If you're monitoring a Linux system, the plug-in locations depend on which version of libcurl you're using. See the following commands for each operating system and plug-in.

Debian/Ubuntu with libcurl3 and collectd 5.7.x or 5.8.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/write_splunk.so <plug-in_directory>
processmon
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/processmon.so <plug-in_directory>
docker
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/docker.so <plug-in_directory>

Debian/Ubuntu with libcurl4 and collectd 5.7.x or 5.8.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/deb_libcurl4/write_splunk.so <plug-in_directory>
processmon
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/deb_libcurl4/processmon.so <plug-in_directory>
docker
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/deb_libcurl4/docker.so <plug-in_directory>

Debian/Ubuntu with collectd 5.9.x, 5.10.x or 5.11.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/write_splunk.so <plug-in_directory>
processmon
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/processmon.so <plug-in_directory>
docker
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/docker.so <plug-in_directory>

RHEL, CentOS, Fedora, SUSE, or openSUSE with collectd 5.7.x or 5.8.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/write_splunk.so <plug-in_directory>
processmon
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/processmon.so <plug-in_directory>
docker
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/docker.so <plug-in_directory>

RHEL, CentOS, Fedora, SUSE, or openSUSE with collectd 5.9.x, 5.10.x, or 5.11.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/write_splunk.so <plug-in_directory>
processmon
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/processmon.so <plug-in_directory>
docker
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/docker.so <plug-in_directory>

Solaris with collectd 5.7.x or 5.8.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_7_5_8/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_7_5_8/write_splunk-solaris.so "/opt/csw/lib/collectd/write_splunk.so"

Solaris with collectd 5.9.x, 5.10.x, or 5.11.x

Collectd plug-in Install commands
write_splunk
$ wget https://<hostname>:8000/en-US/static/app/splunk_app_infrastructure/unix_agent/plugin_5_9_5_10/unix-agent.tgz
$ tar xvzf unix-agent.tgz
$ cp unix-agent/plugin_5_9_5_10/write_splunk-solaris.so "/opt/csw/lib/collectd/write_splunk.so"

5. Configure collectd.conf to send data to ITSI

To configure collectd.conf, you have to load plug-ins for every metric you want to monitor, and configure the write_splunk plug-in to send data to your Splunk platform deployment.

  1. Add a LoadPlugin for each plug-in you want to use.
    <LoadPlugin "write_splunk">
    FlushInterval 30
    </LoadPlugin>
    LoadPlugin cpu
    LoadPlugin uptime
    LoadPlugin memory
    LoadPlugin df
    LoadPlugin load
    LoadPlugin disk
    LoadPlugin interface
    LoadPlugin docker
    LoadPlugin processmon
    
  2. Add configuration stanzas for each metric you want to collect. The following stanzas are default stanzas that the data collection script configures. There is no stanza for the uptime metric. A stanza for the processmon plug-in is optional. Include a processmon stanza to specify whitelists and blacklists and report IO metrics for monitored processes. The following processmon stanza is just an example that includes settings you can configure.
    Plug-in Supported operating system Stanza
    write_splunk
    • Linux
    • Solaris
    <Plugin write_splunk>
    server "<receiving_server>"
    port "<hec_port>"
    token "<hec_token>"
    ssl true
    verifyssl false
    Dimension "entity_type:nix_host" 
    Dimension "key2:value2"
    </Plugin>
    
    • receiving_server: The IP address or hostname of the Splunk deployment you're sending data to. For a distributed deployment, the IP addresses or hostnames of the indexers. If you deploy a load balancer, the IP address or hostname of the load balancer.
    • hec_port: The HEC port.
    • hec_token: The HEC token.
    CPU
    • Linux
    • Solaris
    <Plugin cpu>
    ReportByCpu false
    ReportByState true
    ValuesPercentage true
    </Plugin>
    
    Memory
    • Linux
    • Solaris
    <Plugin memory>
    ValuesAbsolute false
    ValuesPercentage true
    </Plugin>
    
    DF
    • Linux
    • Solaris
    <Plugin df>
    FSType "ext2"
    FSType "ext3"
    FSType "ext4"
    FSType "XFS"
    FSType "rootfs"
    FSType "overlay"
    FSType "hfs"
    FSType "apfs"
    FSType "zfs"
    FSType "ufs"
    ReportByDevice true
    ValuesAbsolute false
    ValuesPercentage true
    IgnoreSelected false
    </Plugin>
    
    Load
    • Linux
    • Solaris
    <Plugin load>
    ReportRelative true
    </Plugin>
    
    Disk
    • Linux
    • Solaris
    <Plugin disk>
    Disk ""
    IgnoreSelected true
    UdevNameAttr "DEVNAME"
    </Plugin>
    
    Interface
    • Linux
    • Solaris
    <Plugin interface>
    IgnoreSelected true
    </Plugin>
    
    Docker
    • Linux
    <Plugin docker>
    dockersock "/var/run/docker.sock"
    apiversion "v1.20"
    </Plugin>
    

    By default, collectd fails if you're running more than 100 Docker containers. To monitor more than 100 Docker containers, add the ReadBufferSize parameter to the docker plug-in stanza. The maximum value is 32000.

    Process monitoring
    • Linux
    <Plugin processmon>
    ReadIo true
    whitelist "process1.*"
    whitelist "process2.*"
    blacklist "process3.*"
    </Plugin>
    
  3. Update the Hostname field with the IP address or hostname of the system that's running collectd. The Hostname must be unique to the system because ITSI uses it to identify the entity.
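Putting the two notes above together, the relevant collectd.conf fragments might look like the following sketch. The hostname and buffer size are example values, not defaults; tune ReadBufferSize up to its maximum of 32000 for hosts running large numbers of containers.

```
Hostname "web-01.example.com"

<Plugin docker>
dockersock "/var/run/docker.sock"
apiversion "v1.20"
ReadBufferSize 16384
</Plugin>
```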

6. Start the collectd service

Start collectd on Linux systems:

$ sudo service collectd restart

Start collectd on Solaris systems:

$ sudo svcadm enable cswcollectd
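Before restarting, you can sanity-check that your collectd.conf loads the plug-ins this topic configures. The sketch below checks a small canned sample config written to a temp file; point CONF at your real collectd.conf (its location varies by distribution) to check that instead.

```shell
# Sanity-check that a collectd config loads expected plug-ins. For the
# demo, a deliberately incomplete sample config is written to a temp
# file; replace CONF with the path to your real collectd.conf.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
<LoadPlugin "write_splunk">
FlushInterval 30
</LoadPlugin>
LoadPlugin cpu
LoadPlugin memory
EOF

# Collect plug-in names from both LoadPlugin forms (bare and block).
found="$(grep -oE 'LoadPlugin "?[a-zA-Z_]+' "$CONF" | sed 's/LoadPlugin "\{0,1\}//')"

result="$(
for plugin in write_splunk cpu memory df; do
    if printf '%s\n' "$found" | grep -qx "$plugin"; then
        echo "loaded: $plugin"
    else
        echo "missing: $plugin"
    fi
done
)"
echo "$result"
rm -f "$CONF"
```

The sample config omits df on purpose so both outcomes appear. If the collectd binary is on your PATH, `collectd -t` asks collectd itself to validate the active configuration.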

Example metrics collection configuration file

Here is an example collectd.conf file that includes every plug-in ITSI entity integrations use.

#
# Config file for collectd(1).
# Please read collectd.conf(5) for a list of options.
# http://collectd.org/
#

##############################################################################
# Global #
#----------------------------------------------------------------------------#
# Global settings for the daemon. #
##############################################################################

Hostname "collectd.server.sample"
FQDNLookup false
#BaseDir "/var/lib/collectd"
#PIDFile "/var/run/collectd.pid"
#PluginDir "/usr/lib64/collectd"
#TypesDB "/usr/share/collectd/types.db"

#----------------------------------------------------------------------------#
# When enabled, plugins are loaded automatically with the default options #
# when an appropriate <Plugin ...> block is encountered. #
# Disabled by default. #
#----------------------------------------------------------------------------#
#AutoLoadPlugin false

#----------------------------------------------------------------------------#
# When enabled, internal statistics are collected, using "collectd" as the #
# plugin name. #
# Disabled by default. #
#----------------------------------------------------------------------------#
#CollectInternalStats false

#----------------------------------------------------------------------------#
# Interval at which to query values. This may be overwritten on a per-plugin #
# base by using the 'Interval' option of the LoadPlugin block: #
# <LoadPlugin foo> #
# Interval 60 #
# </LoadPlugin> #
#----------------------------------------------------------------------------#
Interval 60

#MaxReadInterval 86400
#Timeout 2
#ReadThreads 5
#WriteThreads 5

# Limit the size of the write queue. Default is no limit. Setting up a limit is
# recommended for servers handling a high volume of traffic.
WriteQueueLimitHigh 1000000
WriteQueueLimitLow 800000

##############################################################################
# Logging #
#----------------------------------------------------------------------------#
# Plugins which provide logging functions should be loaded first, so log #
# messages generated when loading or configuring other plugins can be #
# accessed. #
##############################################################################

LoadPlugin syslog
LoadPlugin logfile
<LoadPlugin "write_splunk">
FlushInterval 30
</LoadPlugin>

##############################################################################
# LoadPlugin section #
#----------------------------------------------------------------------------#
# Lines beginning with a single `#' belong to plugins which have been built #
# but are disabled by default. #
# #
# Lines beginning with `##' belong to plugins which have not been built due #
# to missing dependencies or because they have been deactivated explicitly. #
##############################################################################

#LoadPlugin csv
LoadPlugin cpu
LoadPlugin uptime
LoadPlugin memory
LoadPlugin df
LoadPlugin load
LoadPlugin disk
LoadPlugin interface
LoadPlugin docker
LoadPlugin processmon

##############################################################################
# Plugin configuration #
#----------------------------------------------------------------------------#
# In this section configuration stubs for each plugin are provided. A desc- #
# ription of those options is available in the collectd.conf(5) manual page. #
##############################################################################

<Plugin logfile>
LogLevel info
File "/etc/collectd/collectd.log"
Timestamp true
PrintSeverity true
</Plugin>

<Plugin syslog>
LogLevel info
</Plugin>

<Plugin cpu>
ReportByCpu false
ReportByState true
ValuesPercentage true
</Plugin>

<Plugin memory>
ValuesAbsolute false
ValuesPercentage true
</Plugin>

<Plugin df>
FSType "ext2"
FSType "ext3"
FSType "ext4"
FSType "XFS"
FSType "rootfs"
FSType "overlay"
FSType "hfs"
FSType "apfs"
FSType "zfs"
FSType "ufs"
ReportByDevice true
ValuesAbsolute false
ValuesPercentage true
IgnoreSelected false
</Plugin>

<Plugin load>
ReportRelative true
</Plugin>

<Plugin disk>
Disk ""
IgnoreSelected true
UdevNameAttr "DEVNAME"
</Plugin>

<Plugin interface>
IgnoreSelected true
</Plugin>

<Plugin docker>
dockersock "/var/run/docker.sock"
apiversion "v1.20"
</Plugin>

<Plugin processmon>
ReadIo true
whitelist "collectd"
whitelist "bash"
blacklist "splunkd"
</Plugin>

<Plugin write_splunk>
server "<splunk infrastructure app server>"
port "<HEC PORT>"
token "<HEC TOKEN>"
ssl true
verifyssl false
Dimension "entity_type:nix_host"
</Plugin>
Last modified on 10 July, 2020

This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.6.0 Cloud only, 4.6.1 Cloud only, 4.6.2 Cloud only, 4.7.0, 4.7.1, 4.7.2, 4.8.0 Cloud only, 4.8.1 Cloud only, 4.9.0, 4.9.1, 4.9.2, 4.9.3, 4.10.0 Cloud only, 4.10.1 Cloud only, 4.10.2 Cloud only

