Splunk® App for Infrastructure

Administer Splunk App for Infrastructure


How the easy install script works in Splunk App for Infrastructure

You can use the easy install script in the Splunk App for Infrastructure (SAI) to set up data collection on your systems. The script installs data collection agents that collect metrics and log data from the data sources you specify. When you configure the script to collect metrics data, it installs and configures collectd on *nix hosts and a universal forwarder on Windows hosts. When you configure the script to collect log data, it installs and configures a universal forwarder on both *nix and Windows hosts.
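The agent selection described above can be summarized in a few lines of Python. This is an illustration only; the script itself is not implemented this way:

```python
def agents_for(platform: str, collect_metrics: bool, collect_logs: bool) -> set:
    """Return the data collection agents the easy install script sets up.

    platform: "nix" (Linux, Unix, Mac OS X) or "windows".
    """
    agents = set()
    if collect_metrics:
        # Metrics: collectd on *nix hosts, a universal forwarder on Windows hosts.
        agents.add("collectd" if platform == "nix" else "universal forwarder")
    if collect_logs:
        # Logs: a universal forwarder on both *nix and Windows hosts.
        agents.add("universal forwarder")
    return agents
```

Note that a *nix host collecting both metrics and logs ends up with two agents, while a Windows host always uses a single universal forwarder.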

To use the script, you must log in to an account with administrator privileges. Do not log in as the root user. For more information about the script requirements for each operating system and platform, see these topics:

Use the script to configure data collection on these types of hosts and platforms:

To uninstall the data collection agents that the script installs and configures, see Stop data collection on Splunk App for Infrastructure.

*nix metrics collection

When you configure the script to collect metrics from the host, it completes these actions:

  1. Installs the libcurl package using the package manager for your operating system.
  2. Checks the collectd version. If a compatible collectd version has not already been installed, the script installs a compatible collectd version.
  3. Installs the data collection agent, unix-agent.tgz or osx-agent.tgz, depending on your operating system. The data collection agent contains the plug-ins and .conf configuration files.
  4. Copies the write_splunk plug-in and the plug-ins for each metric you want to monitor to collectd's plug-in directory.
  5. Configures the collectd.conf file.
  6. Starts collectd.

The write_splunk collectd plug-in is a replacement for the write_http plug-in that directs metrics data to the Splunk HTTP Event Collector (HEC).

write_splunk creates these five dimensions when you integrate a system:

  • host
  • ip
  • os
  • os_version
  • kernel_version

You cannot delete the dimensions the plug-in creates.

For information about collectd package sources and install locations, see collectd package sources, install commands, and locations.
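The exact payload write_splunk emits is not shown in this topic, but a HEC metrics event carrying the default dimensions generally has this shape. The field layout below follows the generic HEC metrics event format, and the dimension values are illustrative:

```python
import json
import time

def hec_metric_event(metric_name, value, dimensions):
    """Build a HEC metrics-style event; dimensions become indexed fields."""
    return {
        "time": time.time(),
        "event": "metric",
        "host": dimensions.get("host", "unknown"),
        "fields": {"metric_name": metric_name, "_value": value, **dimensions},
    }

# The five default dimensions write_splunk attaches to every integrated host.
dims = {
    "host": "web-01",
    "ip": "203.0.113.10",
    "os": "Linux",
    "os_version": "Ubuntu 18.04",
    "kernel_version": "4.15.0",
}
payload = json.dumps(hec_metric_event("cpu.idle", 97.2, dims))
```

Sending the payload is then an HTTPS POST to the HEC endpoint with the token in the Authorization header, which is what the ssl, port, and token settings in the write_splunk stanza control.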

Example plug-ins

Plug-in: write_splunk
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin write_splunk>
server "<receiving_server>"
port "<HEC PORT>"
token "<HEC TOKEN>"
ssl true
verifyssl false
Dimension "key1:value1"
</Plugin>

Plug-in: CPU
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin cpu>
ReportByCpu false
ReportByState true
ValuesPercentage true
</Plugin>

Plug-in: Memory
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin memory>
ValuesAbsolute false
ValuesPercentage true
</Plugin>

Plug-in: DF
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin df>
FSType "ext2"
FSType "ext3"
FSType "ext4"
FSType "XFS"
FSType "rootfs"
FSType "overlay"
FSType "hfs"
FSType "apfs"
FSType "zfs"
FSType "ufs"
ReportByDevice true
ValuesAbsolute false
ValuesPercentage true
IgnoreSelected false
</Plugin>

Plug-in: Load
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin load>
ReportRelative true
</Plugin>

Plug-in: Disk
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin disk>
Disk ""
IgnoreSelected true
UdevNameAttr "DEVNAME"
</Plugin>

Plug-in: Interface
Supported OS: Linux, Unix, Mac OS X
Stanza:
<Plugin interface>
IgnoreSelected true
</Plugin>

Plug-in: Docker
Supported OS: Linux, Mac OS X
Stanza:
<Plugin docker>
dockersock "/var/run/docker.sock"
apiversion "v1.20"
</Plugin>
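collectd applies a <Plugin> block only after the corresponding LoadPlugin directive loads that plug-in. The script writes these directives into collectd.conf for the plug-ins you select; a representative set looks like the following sketch, not a literal copy of the generated file:

```
LoadPlugin write_splunk
LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
LoadPlugin load
LoadPlugin disk
LoadPlugin interface
LoadPlugin docker
```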

Windows metrics collection

When you configure the script to collect metrics from the host, it completes these actions:

  1. Downloads a universal forwarder from Splunk Enterprise.
  2. Adds Perfmon objects to the inputs.conf file.
  3. Adds a forwarding target group to the outputs.conf file.
  4. Starts the universal forwarder.
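The forwarding target group added in step 3 is a tcpout stanza in outputs.conf. A representative fragment follows; the group name and receiver address are placeholders, not values the script necessarily uses:

```
[tcpout]
defaultGroup = sai_receivers

[tcpout:sai_receivers]
server = receiving_server.example.com:9997
```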

Windows metrics you can collect with the script

Depending on the source types you select when you add a host to the Splunk App for Infrastructure, the easy install script configures the following seven Perfmon objects for metrics data collection:

  • CPU
  • PhysicalDisk
  • Network
  • Memory
  • System
  • Process
  • LogicalDisk

These are the default values for each Perfmon object that the easy install script uses.

[perfmon://CPU]
counters = % C1 Time;% C2 Time;% Idle Time;% Processor Time;% User Time;% Privileged Time;% Reserved Time;% Interrupt Time
instances = *
interval = 30
object = Processor
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:CPU

[perfmon://PhysicalDisk]
counters = % Disk Read Time;% Disk Write Time
instances = *
interval = 30
object = PhysicalDisk
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:PhysicalDisk

[perfmon://Network]
counters = Bytes Received/sec;Bytes Sent/sec;Packets Received/sec;Packets Sent/sec;Packets Received Errors;Packets Outbound Errors
instances = *
interval = 30
object = Network Interface
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:Network

[perfmon://Memory]
counters = Cache Bytes;% Committed Bytes In Use;Page Reads/sec;Pages Input/sec;Pages Output/sec;Committed Bytes;Available Bytes
interval = 30
object = Memory
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:Memory

[perfmon://System]
counters = Processor Queue Length;Threads
instances = *
interval = 30
object = System
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:System

[perfmon://Process]
counters = % Processor Time;% User Time;% Privileged Time
instances = *
interval = 30
object = Process
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:Process

[perfmon://LogicalDisk]
counters = Free Megabytes;% Free Space
instances = *
interval = 30
object = LogicalDisk
index = em_metrics
_meta =  os::"Microsoft Windows Server 2012 R2 Standard" os_version::6.3.9600 entity_type::Windows_Host
useEnglishOnly = true
sourcetype = PerfmonMetrics:LogicalDisk
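The _meta lines above encode entity dimensions as space-separated key::value pairs, with double quotes around values that contain spaces. A small parser, a hypothetical helper rather than part of the script, makes the format concrete:

```python
import re

def parse_meta(meta: str) -> dict:
    """Parse a Splunk _meta string of key::value pairs into a dict.

    Values containing spaces are double-quoted, for example
    os::"Microsoft Windows Server 2012 R2 Standard".
    """
    pairs = re.findall(r'(\w+)::("[^"]*"|\S+)', meta)
    return {key: value.strip('"') for key, value in pairs}

meta = ('os::"Microsoft Windows Server 2012 R2 Standard" '
        'os_version::6.3.9600 entity_type::Windows_Host')
dims = parse_meta(meta)
```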

*nix and Windows log collection

When you configure SAI to collect log data from the host, the script completes these actions:

  1. Downloads a universal forwarder from Splunk. For *nix systems, the unix-agent.tgz or osx-agent.tgz agent is responsible for downloading the universal forwarder, depending on your operating system.
  2. Configures the inputs.conf and outputs.conf files for the universal forwarder.
    1. Adds MONITOR: stanzas to the inputs.conf file to specify the logs that the app ingests.
    2. For a Windows host, adds WinEventLog: stanzas to the inputs.conf file.
    3. Adds a forwarding target group to the outputs.conf file. A forwarding target group identifies a receiver or set of receivers that the host sends data to.
  3. Starts the universal forwarder.

The script does not create an administrator user when it installs and configures the universal forwarder. If you need an admin user, you must create one yourself. For information about configuring admin credentials, see user-seed.conf in the Splunk Enterprise Admin Manual.
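If you do need admin credentials on the forwarder, you can seed them with user-seed.conf before the first start. A minimal example, with placeholder credentials:

```
[user_info]
USERNAME = admin
PASSWORD = <your_password>
```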

Example MONITOR: stanza

[monitor://$SPLUNK_HOME\var\log\splunk\*.log*]
sourcetype = uf
disabled = false

Example WinEventLog stanzas

[WinEventLog://Application]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://Security]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://System]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://Setup]
checkpointInterval = 10
current_only = 0
disabled = 0
start_from = oldest

Kubernetes and OpenShift data collection

The easy install script deploys Splunk Connect for Kubernetes (SCK) on your Kubernetes or OpenShift cluster. SCK configures data collectors in the cluster. If you monitor a Kubernetes cluster, the script uses Helm, a package manager for Kubernetes, to deploy SCK. If you monitor an OpenShift cluster, the script uses the OpenShift Container Platform CLI tool to deploy SCK.

Before you run the script, familiarize yourself with the script's requirements. To verify your system meets the requirements, see Kubernetes data collection requirements in the Install and Upgrade Splunk App for Infrastructure guide.

The script completes these actions:

  1. The script installs SCK, a Kubernetes chart, in the cluster.
  2. The SCK chart configures three sub-charts: splunk-kubernetes-logging, splunk-kubernetes-metrics, and splunk-kubernetes-objects.
  3. Each sub-chart deploys pods to collect logs, metrics, and objects data.

The SCK chart does not contain any configuration parameters of its own, other than to set up the three sub-charts. The three sub-charts deploy fluentd and fluentd plug-ins to collect data from Kubernetes clusters. The sub-charts deploy these fluentd components:

Component Description
fluentd Open source data collector.
fluent-plugin-splunk-hec Fluentd plug-in for sending data to a Splunk HTTP Event Collector.
fluent-plugin-jq Fluentd plug-in for parsing, transforming, and formatting data.
fluent-plugin-kubernetes-objects Fluentd plug-in that queries the Kubernetes API to collect Kubernetes objects.
fluent-plugin-kubernetes-metrics Fluentd plug-in that queries the Kubernetes Summary API to collect Kubernetes metrics.
fluent-plugin-k8s-metrics-agg Fluentd plug-in that queries the Kubernetes API and aggregates Kubernetes cluster metrics.

See the following sections for more information about each sub-chart.
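All three sub-charts send data to the same HEC endpoint, so the values the script passes to the chart center on HEC connection settings. The following is a minimal values sketch; the key names follow common SCK chart conventions and should be treated as assumptions:

```yaml
global:
  splunk:
    hec:
      host: <receiving_server>
      port: 8088
      token: <HEC_TOKEN>
      protocol: https
```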

splunk-kubernetes-logging

splunk-kubernetes-logging creates a Kubernetes daemonset in the cluster to collect log data. The daemonset runs fluentd and the fluentd tail plug-in, and uses the HEC token you specify when configuring the script to get data in SAI.

The sub-chart creates these objects:

Object Description
Daemonset Deploys a pod on each node that runs fluentd and fluentd plug-ins to collect log data from the Kubernetes cluster.
ConfigMap Contains configuration parameters for fluentd.
Secret Contains the HEC token value and other metadata about SCK.

splunk-kubernetes-metrics

splunk-kubernetes-metrics creates a Kubernetes daemonset and a deployment in the cluster to collect metrics data. The daemonset runs a pod on each node, and the deployment runs a single pod. The daemonset and deployment run fluentd and the fluent metrics plug-in to collect metrics. Fluentd sends data to SAI with the fluentd Splunk HEC output plug-in.

The sub-chart creates these objects:

Object Description
Daemonset Deploys a pod on each node that runs fluentd and fluentd plug-ins to collect metrics from the Kubernetes cluster.
DeploymentMetricsAggregator Deploys a pod that runs fluentd and fluentd plug-ins to collect metrics from the Kubernetes cluster.
ConfigMap Contains configuration parameters for fluentd.
ConfigMapMetricsAggregator Contains configuration parameters for fluentd.
Secret Stores credentials for SCK to send metrics data to SAI.
ServiceAccount A service account that the daemonset pods run as.
ClusterRole Creates the kubelet-summary-api-read cluster role and grants permissions for the role to access these resources:
  • nodes
  • nodes/stats
  • nodes/proxy
  • pods
ClusterRoleBinding Binds the kubelet-summary-api-read cluster role to the service account with the rbac.authorization.k8s.io API group.
ClusterRoleBindingAggregator Binds the kube-api-aggregator cluster role to the service account with the rbac.authorization.k8s.io API group.
ClusterRoleAggregator Creates the kube-api-aggregator cluster role and grants permissions for the role to access these resources:
  • nodes
  • nodes/stats
  • nodes/proxy
  • pods
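The two cluster roles above grant read access to the same set of resources. In Kubernetes RBAC terms, kubelet-summary-api-read looks roughly like the following; the verbs are assumptions, since the chart defines the exact rules:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-summary-api-read
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/stats", "nodes/proxy", "pods"]
    verbs: ["get", "list", "watch"]
```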

splunk-kubernetes-objects

splunk-kubernetes-objects creates a Kubernetes deployment in the cluster to collect objects metadata. The deployment collects objects data with fluentd and the Kubernetes objects input plug-in. Fluentd sends data to SAI with the fluentd Splunk HEC output plug-in.

The sub-chart creates these objects:

Object Description
Deployment Deploys a pod that runs fluentd and fluentd plug-ins to collect objects data from the Kubernetes cluster.
ConfigMap Contains configuration parameters for fluentd.
Secret Stores credentials for SCK to send objects data to SAI.
ServiceAccount A service account that the deployment pods run as.
ClusterRole Defines required permissions for the service account.
ClusterRoleBinding Binds the ClusterRole to the service account with the rbac.authorization.k8s.io API group.

Docker (no orchestration) data collection

You can configure the easy install script for *nix systems to enable Docker container monitoring. Use it to monitor Docker containers deployed on a *nix system without an orchestration tool like Docker Swarm, Kubernetes, or OpenShift. For the requirements to run the script to monitor Docker containers, see Docker (no orchestration) data collection requirements in the Install and Upgrade Splunk App for Infrastructure guide.

When you enable Docker monitoring for a *nix system, the script completes these actions:

  1. Runs this command:
    $ cp unix-agent/docker.so <plug-in_directory>
    
  2. Adds the Docker plug-in to the collectd.conf file:
    LoadPlugin docker
    <Plugin docker>
    dockersock "/var/run/docker.sock"
    apiversion "v1.20"
    </Plugin>
    
    where dockersock is the path to the Docker UNIX socket.

To configure the easy install script to monitor Docker containers, see:


This documentation applies to the following versions of Splunk® App for Infrastructure: 1.4.0, 1.4.1

