Collect OpenShift metrics and logs with Splunk App for Infrastructure
Use the easy install script to start collecting metrics and log data from an OpenShift cluster. When you run the script, you start ingesting metrics and log data for pods and nodes in the cluster. Nodes and pods in the cluster you monitor become entities in the Splunk App for Infrastructure (SAI). You can search any additional metrics you configure the script to collect in the Search app.
SAI deploys Splunk Connect for Kubernetes (SCK) with Helm to collect metrics and log data from OpenShift clusters. This version of SAI deploys SCK version 1.3.0. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.
View detailed information about the status of pods you monitor from the Entity Overview. For information about pod statuses, see Pod phase on the Kubernetes website. The status of a Kubernetes node is set to disabled when the node enters an unknown state. On the Investigate tab, the status of an entity does not contain detailed pod status information and is either Active or Inactive.
Go to the Investigate page in SAI to monitor your entities in the Tile or List view. You can group your entities to monitor them more easily, and further analyze your infrastructure by drilling down to the Overview Dashboard for entities or Analysis Workspace for entities and groups.
For information about stopping or removing the data collection agents, see Stop data collection on Splunk App for Infrastructure.
Prerequisites
Meet the following requirements to configure data collection:
| Item | Requires |
|---|---|
| Data collection script dependencies | |
| Helm | You must have permission to execute `helm` commands. |
| OpenShift CLI tool | You must have permission to execute `oc` commands. |
| HEC token | To configure an HEC token for SAI, see 5. Create the HTTP Event Collector (HEC) token in the Install and Upgrade Splunk App for Infrastructure guide. |
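As an optional preflight check before running the data collection script, you can confirm from the shell that the dependencies above are available. This is a sketch; the command names are standard, but the check itself is not part of the SAI script:

```shell
# Preflight sketch: confirm the data collection script's dependencies
# (helm and the OpenShift CLI) are on the PATH.
for cmd in helm oc; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done
```

This only confirms the tools are installed; permission to run `helm` and `oc` commands against the cluster is governed by your cluster configuration.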
Steps
Follow these steps to configure and run the data collection script to start forwarding data from an OpenShift cluster. If you're running SAI on Splunk Cloud, you must enter specific settings for the Monitoring machine, HEC port, and Receiver port. For more information, see Install and configure the data collection agents on each applicable system in the Install and Upgrade Splunk App for Infrastructure guide.
1. Prepare the deployment
You must run the script on the system that runs Helm and has the OpenShift Container Platform CLI tool.
If you enable Download Config Only, the script generates manifests but does not deploy them. When you enable this option, you have to manually create a project, create service accounts, and deploy the manifests later. For instructions, see Manually deploy the manifests.
2. Specify configuration options
Specify the data collection options for collecting metrics and logs from the cluster.
- In the SAI user interface, click the Add Data tab and select OpenShift.
- For Data to be collected, click Customize Objects to define which objects to track:
| Object | Sourcetype | Description |
|---|---|---|
| pods | `kube:objects:pods` | Enabled by default, cannot be disabled. Collects `metadata`, `spec`, and `status` data for pods in the cluster. |
| nodes | `kube:objects:nodes` | Enabled by default, cannot be disabled. Collects `metadata`, `spec`, and `status` data for nodes in the cluster. |
You can enable advanced object collection for these objects:
| Object | Sourcetype | Description |
|---|---|---|
| component_statuses | `kube:objects:component_statuses` | Collects `conditions` and `metadata` data for the status of resources in the cluster. |
| config_maps | `kube:objects:config_maps` | Collects `data` and `metadata` data for ConfigMaps in the cluster. |
| daemon_sets | `kube:objects:daemon_sets` | Collects `metadata`, `spec`, and `status` data for daemon sets in the cluster. |
| deployments | `kube:objects:deployments` | Collects `metadata`, `spec`, and `status` data for deployments in the cluster. |
| namespaces | `kube:objects:namespaces` | Collects `metadata`, `spec`, and `status` data for namespaces in the cluster. |
| persistent_volumes | `kube:objects:persistent_volumes` | Collects `metadata`, `spec`, and `status` data for persistent volumes in the cluster. |
| persistent_volume_claims | `kube:objects:persistent_volume_claims` | Collects `metadata`, `spec`, and `status` data for persistent volume claims in the cluster. |
| replica_sets | `kube:objects:replica_sets` | Collects `metadata`, `spec`, and `status` data for replica sets in the cluster. |
| resource_quotas | `kube:objects:resource_quotas` | Collects `metadata`, `spec`, and `status` data for resource quotas in the cluster. |
| services | `kube:objects:services` | Collects `metadata`, `spec`, and `status` data for services in the cluster. |
| service_accounts | `kube:objects:service_accounts` | Collects `metadata` and `secrets` data for service accounts in the cluster. |
| stateful_sets | `kube:objects:stateful_sets` | Collects `metadata`, `spec`, and `status` data for stateful sets in the cluster. |
| events | `kube:objects:events:watch` | Collects `object` and `type` data for events in the cluster. |
Object data is stored in the `em_meta` index by default.
- For Monitoring machine, enter the FQDN or IP address of the system you are sending data to. This is the system running SAI.
- Enter the HEC token of the system you want to send data to.
- Enter the HEC port of the system you want to send metrics data to. Use port `8088` if it is available.
- Enter a unique Cluster name to specify the name of the Kubernetes cluster you're running the script in. If you do not enter anything, the script specifies a name for you.
- Enter a unique OpenShift project namespace for the SCK resources the script configures to collect data from the cluster. The project has its own objects, policies, constraints, and service accounts for SCK. If you enable Download Config Only, you must still enter a project here: when the script creates the rendered charts, it assumes a project with the same name exists. The number of projects you can have may be limited. If you reach the limit, you must delete a project before you can create a new one.
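To confirm the monitoring machine, HEC port, and HEC token are valid before running the script, you can query the HEC health endpoint. The host, port, and token below are placeholder values for illustration; substitute your own:

```shell
# Hypothetical values; replace with your monitoring machine, HEC port, and token.
MONITORING_MACHINE="sai.example.com"
HEC_PORT="8088"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# Query the HEC health endpoint. -k skips certificate validation, which
# matches a deployment that has not yet configured SSL for HEC.
curl -ks "https://${MONITORING_MACHINE}:${HEC_PORT}/services/collector/health" \
  -H "Authorization: Splunk ${HEC_TOKEN}" \
  || echo "HEC endpoint not reachable"
```

A healthy endpoint returns an HEC status message; a connection failure here usually means the port is blocked or HEC is not enabled on the receiving system.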
3. Configure OpenShift options
Specify security, log, and metrics options for the cluster.
- For Security options, configure these options:
| Setting | Description |
|---|---|
| Splunk HEC | Enable this setting to allow SCK pods to send data to the HEC endpoint with an insecure SSL connection. If you have not yet hardened your HEC endpoint, enable this setting. To configure SSL for HEC endpoints, enable SSL for global HEC settings and secure your Splunk Enterprise deployment with SSL. To enable SSL for HEC, see Configure the HTTP Event Collector on Splunk Enterprise. To secure your Splunk Enterprise deployment with SSL, see About securing Splunk Enterprise with SSL. If you are sending data to Splunk Cloud, this option is not available: you can send data to an HEC endpoint in Splunk Cloud only over a secure SSL connection. |
| Kubelet-objects pods | Enable this setting to allow `kubernetes-objects` pods to send requests to the Kubernetes API with an insecure SSL connection. Enable this option if your cluster does not have a valid SSL certificate yet. This option sets the `insecureSSL` value in the `values.yaml` file to `true` for `kubernetes-objects` pods. If you leave this option disabled, the `insecureSSL` value is set to `false` and the pods use built-in certificate packages for secure SSL connections. |
| Kubelet-metrics pods | Enable this setting to allow the `kubernetes-metrics` pod to send requests to the Kubelet on each node with an insecure SSL connection. Enable this option if your cluster does not have a valid SSL certificate yet. This option sets the `insecureSSL` value in the `values.yaml` file to `true` for `kubernetes-metrics` pods. If you leave this option disabled, the `insecureSSL` value is set to `false` and the pod uses built-in certificate packages for secure SSL connections. |
- For Logging and metrics options, configure these options:
| Option | Description |
|---|---|
| Journald path | The location of `journald` logs on the node. The kubelet and container runtime send log data to journald. The default location is generally `/run/log/journal` or `/var/log/journal`. |
| Kubelet protocol | The kubelet port that `kubernetes-metrics` pods collect data from: `https` uses port `10250`, and `http` uses port `10255`. |
- For Index options, specify the indexes for the data you collect. Each index you specify must be allowed to send data through the HTTP Event Collector (HEC).
| Index | Description |
|---|---|
| Metrics index | The index that stores metrics data that fluentd collects. The default index for metrics data is `em_metrics`. |
| Log index | The index that stores events data that fluentd collects. The default index for events data is `main`. |
| Metadata index | The index that stores metadata events about objects in the cluster. The default index for metadata events is `em_meta`. |
4. Run the script
Run the script on the system where you configured Helm and the OpenShift Container Platform CLI tool.
- Open a command line window.
- Switch to the cluster you want to monitor in SAI:

  ```
  oc config use-context <context_name>
  ```

  where `<context_name>` is the context that corresponds to the cluster you're monitoring.
- Paste the script you configured in the Add Data tab.
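After the script finishes, you can confirm that the SCK pods deployed and are starting up. The project name below is a hypothetical example; use the OpenShift project namespace you entered in step 2:

```shell
# Hypothetical project name; substitute the project you configured in step 2.
PROJECT="splunk-sck"

# List the SCK pods the script deployed; each should reach Running status.
oc get pods -n "$PROJECT" || echo "could not list pods in project $PROJECT"
```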
Manually deploy the manifests
If you enable Download Config Only, you must manually deploy the manifests. To deploy the manifests, create a project, configure service accounts, and deploy the SCK Helm charts.
Prerequisites
- You have administrator access to the OpenShift Container Platform CLI tool.
- You have access to the cluster you want to monitor in SAI.
Steps
- Create an OpenShift project:

  ```
  $ oc adm new-project <project> --node-selector=""
  $ oc project <project>
  ```

  where `<project>` is the OpenShift project you entered when configuring the script.
- Create service accounts and configure initial permissions:

  ```
  $ oc create sa splunk-kubernetes-logging
  $ oc adm policy add-scc-to-user privileged "system:serviceaccount:<project>:splunk-kubernetes-logging"
  $ oc create sa splunk-kubernetes-metrics
  $ oc adm policy add-scc-to-user privileged "system:serviceaccount:<project>:splunk-kubernetes-metrics"
  ```

  where `<project>` is the OpenShift project you entered when configuring the script.
- Deploy the SCK Helm charts:

  ```
  $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-metrics/templates/
  $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/templates/
  $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-objects/templates/
  ```
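Before running the `oc apply` commands, you can confirm that the rendered chart templates exist on disk. This sketch assumes you run it from the directory containing the `rendered-charts` output that Download Config Only generated:

```shell
# Sketch: confirm the rendered chart templates exist before applying them.
# The path matches the rendered-charts layout referenced in the deploy step.
BASE="./rendered-charts/splunk-connect-for-kubernetes/charts"
for chart in splunk-kubernetes-metrics splunk-kubernetes-logging splunk-kubernetes-objects; do
  if [ -d "$BASE/$chart/templates" ]; then
    echo "$chart: templates present"
  else
    echo "$chart: templates missing"
  fi
done
```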
Upgrade Splunk Connect for Kubernetes in SAI
You can deploy a more recent version of SCK to an OpenShift cluster you're already monitoring. If you started collecting OpenShift data with an earlier version of SAI, you may be running an earlier version of SCK. To upgrade SCK, delete SCK from the cluster and then run the data collection script from a more recent version of SAI that deploys a more recent version of SCK. When you upgrade SCK, SAI discovers resources in the cluster as new entities.
When you delete SCK from the cluster during the upgrade process, you can manually delete entities associated with the cluster or wait for your OpenShift entity retirement policy to automatically remove the entities. If you don't have an OpenShift entity retirement policy and don't manually delete the entities for a cluster after you upgrade SCK in the cluster, the old entities that the earlier version of SCK discovered just become inactive.
This version of SAI deploys SCK version 1.3.0 when you run the data collection script. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.
Follow these steps to upgrade SCK:
- On the system that runs SCK, delete the Helm release for the current SCK deployment:

  ```
  $ helm delete <helm_release_name> --purge
  ```
- Delete entities that the version of SCK you're replacing discovered. Alternatively, you can wait for an active OpenShift entity retirement policy to automatically remove the entities when they become inactive.
- Configure the data collection script in a version of SAI that deploys a more recent version of SCK.
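The delete step above can be sketched as follows. The release name is a hypothetical example; `helm list` shows the actual name of your SCK deployment:

```shell
# List current releases to find the SCK release name (Helm 2 syntax).
helm list || echo "helm not available"

# Hypothetical release name; substitute the name shown by `helm list`.
HELM_RELEASE="splunk-connect-for-kubernetes"

# --purge removes the release record so the name can be reused when you
# rerun the data collection script from the newer version of SAI.
helm delete "$HELM_RELEASE" --purge || echo "release $HELM_RELEASE not deleted"
```

The `--purge` flag applies to Helm 2, which matches SCK 1.3.0-era deployments; Helm 3 removes release records by default with `helm uninstall`.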
This documentation applies to the following versions of Splunk® App for Infrastructure (Legacy): 2.1.0, 2.1.1 Cloud only, 2.2.0 Cloud only, 2.2.1, 2.2.3 Cloud only, 2.2.4, 2.2.5