Collect Kubernetes metrics and logs with Splunk App for Infrastructure
Collect metrics and log data from a Kubernetes cluster with the easy install script in the Splunk App for Infrastructure (SAI). When you run the script, you start ingesting metrics and log data for pods and nodes in the cluster. Nodes and pods in the cluster you monitor are entities in SAI. You can search any other metrics you specify for collection in the Search app.
SAI deploys Splunk Connect for Kubernetes (SCK) with Helm to collect metrics and log data from Kubernetes clusters. This version of SAI deploys SCK version 1.3.0. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.
View detailed information about the status of pods you monitor from the Entity Overview. For information about pod statuses, see Pod phase on the Kubernetes website. The status for Kubernetes nodes is set to disabled when the status of the node enters an unknown state. From the Investigate tab, the status of entities does not contain detailed pod status information, and is either Active or Inactive.
Go to the Investigate page in SAI to monitor your entities in the Tile or List view. You can group your entities to monitor them more easily, and further analyze your infrastructure by drilling down to the Overview Dashboard for entities or Analysis Workspace for entities and groups.
For information about stopping or removing the data collection agents, see Stop data collection on Splunk App for Infrastructure.
Prerequisites
Meet the following requirements to configure data collection:
| Item | Requires |
|---|---|
| Data collection script dependencies | See Kubernetes data collection requirements in the Install and Upgrade Splunk App for Infrastructure guide. |
| Helm | You must have permission to execute helm commands. |
| HEC token | To configure an HEC token for SAI, see Configure the HTTP Event Collector to receive metrics data for SAI. |
Steps
Follow these steps to configure and run the data collection script to start forwarding data from a Kubernetes cluster to SAI.
1. Set up Helm
Install and initialize Helm on each Kubernetes cluster you want to monitor in SAI. For information about setting up Helm, see the Quickstart Guide on the Helm website.
You must run the easy install script on the system that runs Helm.
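As a minimal sketch, setting up Helm 2 (the Helm line that SCK 1.3.0 targets, where Tiller and `helm init` are still required) typically looks like the following. The `tiller` service account name is an assumption, not something SAI requires:

```shell
# Create a service account for Tiller and grant it cluster-admin
# (assumed names; adjust to your cluster's RBAC policy).
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller

# Initialize Helm 2 so it deploys Tiller into the cluster.
helm init --service-account tiller

# Confirm that both the Helm client and the Tiller server respond.
helm version
```

If `helm version` reports both a client and a server version, Helm is ready and you can run the easy install script from this system.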
2. Specify configuration options
Specify the data collection options for collecting metrics and logs from the cluster. If you're running SAI on Splunk Cloud, you must enter specific settings for the Monitoring machine, HEC port, and Receiver port. For more information, see Install and configure the data collection agents on each applicable system in the Install and Upgrade Splunk App for Infrastructure guide.
- In the SAI user interface, click the Add Data tab and select Kubernetes.
- For Data to be collected, click Customize Objects to define which objects to track. These objects are enabled by default and cannot be disabled:

| Object | Sourcetype | Description |
|---|---|---|
| pods | `kube:objects:pods` | Collects `metadata`, `spec`, and `status` data for pods in the cluster. |
| nodes | `kube:objects:nodes` | Collects `metadata`, `spec`, and `status` data for nodes in the cluster. |

You can enable advanced object collection for these objects:

| Object | Sourcetype | Description |
|---|---|---|
| component_statuses | `kube:objects:component_statuses` | Collects `conditions` and `metadata` data for the status of resources in the cluster. |
| config_maps | `kube:objects:config_maps` | Collects `data` and `metadata` data for ConfigMaps in the cluster. |
| daemon_sets | `kube:objects:daemon_sets` | Collects `metadata`, `spec`, and `status` data for DaemonSets in the cluster. |
| deployments | `kube:objects:deployments` | Collects `metadata`, `spec`, and `status` data for deployments in the cluster. |
| namespaces | `kube:objects:namespaces` | Collects `metadata`, `spec`, and `status` data for namespaces in the cluster. |
| persistent_volumes | `kube:objects:persistent_volumes` | Collects `metadata`, `spec`, and `status` data for persistent volumes in the cluster. |
| persistent_volume_claims | `kube:objects:persistent_volume_claims` | Collects `metadata`, `spec`, and `status` data for persistent volume claims in the cluster. |
| replica_sets | `kube:objects:replica_sets` | Collects `metadata`, `spec`, and `status` data for replica sets in the cluster. |
| resource_quotas | `kube:objects:resource_quotas` | Collects `metadata`, `spec`, and `status` data for resource quotas in the cluster. |
| services | `kube:objects:services` | Collects `metadata`, `spec`, and `status` data for services in the cluster. |
| service_accounts | `kube:objects:service_accounts` | Collects `metadata` and `secrets` data for service accounts in the cluster. |
| stateful_sets | `kube:objects:stateful_sets` | Collects `metadata`, `spec`, and `status` data for StatefulSets in the cluster. |
| events | `kube:objects:events` | Collects `object` and `type` data for events in the cluster. |

Advanced object collection options do not have visualizations in SAI. Track these objects in the Search & Reporting app. By default, object data is stored in the `em_meta` index.
- For Monitoring machine, enter the FQDN or IP address of the system you are sending data to. This is the system running SAI.
- Enter the HEC token of the system you want to send data to.
- Enter the HEC port of the system you want to send metrics data to. Use port `8088` if it is available.
- Enter a unique Kubernetes namespace to specify a namespace in the Kubernetes cluster for the SCK components.
- Enter a unique Cluster name to specify the name of the Kubernetes cluster you're running the script in. If you do not enter anything, the script specifies a name for you.
- Enter a unique Release name for the SCK release when you install it in your cluster. The release tracks the installation of SCK.
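Before running the generated script, you can sanity-check the Monitoring machine, HEC port, and HEC token you entered. A hypothetical check using the HTTP Event Collector endpoint, assuming the default port `8088` and SSL enabled (the host and token values below are placeholders):

```shell
# Replace with the Monitoring machine and HEC token you entered in SAI.
SPLUNK_HOST="splunk.example.com"                      # hypothetical FQDN
HEC_TOKEN="00000000-0000-0000-0000-000000000000"      # hypothetical token

# Send a test event to the HEC endpoint; -k skips certificate
# verification for self-signed Splunk certificates.
curl -k "https://${SPLUNK_HOST}:8088/services/collector" \
    -H "Authorization: Splunk ${HEC_TOKEN}" \
    -d '{"event": "HEC connectivity test"}'
```

A reachable endpoint with a valid token responds with a success message; an authentication error here means the token or port you entered in the Add Data tab needs correcting before you run the script.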
3. Run the script
Execute the script on the system that runs Helm.
- Open a command line window on the system that runs Helm.
- Switch to the cluster you want to monitor in SAI:
  ```
  kubectl config use-context <context_name>
  ```
  where `<context_name>` is the context that corresponds to the cluster you're monitoring.
- Paste the script you configured in the Add Data tab in SAI.
- To verify you successfully deployed SCK, check the status of the release from the command line:
  ```
  helm status <release_name>
  ```
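Beyond `helm status`, you can also confirm that the SCK workloads came up in the namespace you chose. A sketch, assuming the release and namespace names you entered in the Add Data tab (the names below are placeholders):

```shell
# Check the Helm release you named in the Add Data tab.
helm status my-sck-release             # hypothetical release name

# List the SCK pods in the namespace you chose; all pods should
# reach the Running state before data appears in SAI.
kubectl get pods -n my-sck-namespace   # hypothetical namespace
```

If pods stay in a Pending or CrashLoopBackOff state, inspect them with `kubectl describe pod <pod_name> -n <namespace>` before troubleshooting the SAI side.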
Upgrade Splunk Connect for Kubernetes in SAI
You can deploy a more recent version of SCK to a Kubernetes cluster you're already monitoring. If you started collecting Kubernetes data with an earlier version of SAI, you may be running an earlier version of SCK. To upgrade SCK, delete SCK from the cluster and then run the data collection script from a more recent version of SAI that deploys a more recent version of SCK. When you upgrade SCK, SAI discovers resources in the cluster as new entities.
When you delete SCK from the cluster during the upgrade process, you can manually delete entities associated with the cluster or wait for your Kubernetes entity retirement policy to automatically remove the entities. If you don't have a Kubernetes entity retirement policy and don't manually delete the entities for a cluster after you upgrade SCK in the cluster, the old entities that the earlier version of SCK discovered just become inactive.
This version of SAI deploys SCK version 1.3.0 when you run the data collection script. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.
Follow these steps to upgrade SCK:
- On the system that runs Helm, delete the Helm release for the current SCK deployment:
  ```
  helm delete <helm_release_name> --purge
  ```
- Delete entities that the version of SCK you're replacing discovered. Alternatively, you can wait for an active Kubernetes entity retirement policy to automatically remove the entities when they become inactive.
- Configure the data collection script in a version of SAI that deploys a more recent version of SCK.
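The upgrade steps above can be sketched end to end as follows; the release and namespace names are placeholders for the values you used in your own deployment:

```shell
# 1. Remove the old SCK release and its resources (Helm 2 syntax,
#    which applies to the SCK versions earlier SAI releases deployed).
helm delete my-sck-release --purge      # hypothetical release name

# 2. Confirm nothing from the old deployment is left behind.
kubectl get pods -n my-sck-namespace    # hypothetical namespace

# 3. After re-running the data collection script generated by the
#    newer SAI version, confirm the new release is deployed.
helm status my-new-sck-release          # hypothetical new release name
```

Remember that after the upgrade, SAI discovers the cluster's resources as new entities, so the old entities remain until you delete them or a Kubernetes entity retirement policy removes them.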
This documentation applies to the following versions of Splunk® App for Infrastructure (Legacy): 2.1.0, 2.1.1 Cloud only, 2.2.0 Cloud only, 2.2.1, 2.2.3 Cloud only, 2.2.4, 2.2.5