Splunk® App for Infrastructure (Legacy)

Administer Splunk App for Infrastructure

Collect OpenShift metrics and logs with Splunk App for Infrastructure

Use the easy install script to start collecting metrics and log data from an OpenShift cluster. When you run the script, you start ingesting metrics and log data for pods and nodes in the cluster. Nodes and pods in the cluster you monitor are entities in the Splunk App for Infrastructure (SAI). You can search any other metrics you specify to collect in the Search app.

SAI deploys Splunk Connect for Kubernetes (SCK) with Helm to collect metrics and log data from OpenShift clusters. This version of SAI deploys SCK version 1.3.0. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.
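
Because the script installs SCK as a Helm release, you can optionally confirm the deployed chart version from the system that runs Helm. The release name varies by deployment:

    $ helm version --short   # SCK 1.3.0 deploys with Helm 2.x, which the commands in this topic assume
    $ helm ls                # the CHART column shows the SCK chart version, for example splunk-connect-for-kubernetes-1.3.0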

View detailed information about the status of pods you monitor from the Entity Overview. For information about pod statuses, see Pod phase on the Kubernetes website. The status of a Kubernetes node is set to disabled when the node enters an unknown state. On the Investigate tab, entity status does not include detailed pod status information; entities are either Active or Inactive.

Go to the Investigate page in SAI to monitor your entities in the Tile or List view. You can group your entities to monitor them more easily, and further analyze your infrastructure by drilling down to the Overview Dashboard for entities or Analysis Workspace for entities and groups.

For information about stopping or removing the data collection agents, see Stop data collection on Splunk App for Infrastructure.

Prerequisites

Meet the following requirements to configure data collection:

Item Requires
Data collection script dependencies See OpenShift data collection requirements.
Helm You must have permission to execute helm commands.
OpenShift CLI tool You must have permission to execute oc commands.
HEC token To configure an HEC token for SAI, see 5. Create the HTTP Event Collector (HEC) token in the Install and Upgrade Splunk App for Infrastructure guide.
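
You can optionally verify the oc and HEC prerequisites from the command line before you configure the script. The host and token below are placeholders for your own values, and the -k flag skips SSL verification for an unhardened HEC endpoint:

    $ oc whoami   # confirms you can execute oc commands against the cluster
    $ curl -k https://<monitoring_machine>:8088/services/collector/event -H "Authorization: Splunk <hec_token>" -d '{"event": "HEC connectivity test"}'

A response of {"text":"Success","code":0} confirms that the HEC token and port are working.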

Steps

Follow these steps to configure and run the data collection script to start forwarding data from an OpenShift cluster. If you're running SAI on Splunk Cloud, you must enter specific settings for the Monitoring machine, HEC port, and Receiver port. For more information, see Install and configure the data collection agents on each applicable system in the Install and Upgrade Splunk App for Infrastructure guide.

1. Prepare the deployment

You must run the script on a system that runs Helm and has the OpenShift Container Platform CLI tool installed.

If you enable Download Config Only, the script generates manifests but does not deploy them. When you enable this option, you have to manually create a project, create service accounts, and deploy the manifests later. For instructions, see Manually deploy the manifests.

2. Specify configuration options

Specify the data collection options for collecting metrics and logs from the cluster.

  1. In the SAI user interface, click the Add Data tab and select OpenShift.
  2. For Data to be collected, click Customize Objects to define which objects to track:
    Object Sourcetype Description
    pods kube:objects:pods Enabled by default, cannot be disabled. Collects metadata, spec, and status data for pods in the cluster.
    nodes kube:objects:nodes Enabled by default, cannot be disabled. Collects metadata, spec, and status data for nodes in the cluster.

    You can enable advanced object collection for these objects:

    Object Sourcetype Description
    component_statuses kube:objects:component_statuses Collects conditions and metadata data for the status of resources in the cluster.
    config_maps kube:objects:config_maps Collects data and metadata data for ConfigMaps in the cluster.
    daemon_sets kube:objects:daemon_sets Collects metadata, spec, and status data for daemonsets in the cluster.
    deployments kube:objects:deployments Collects metadata, spec, and status data for deployments in the cluster.
    namespaces kube:objects:namespaces Collects metadata, spec, and status data for namespaces in the cluster.
    persistent_volumes kube:objects:persistent_volumes Collects metadata, spec, and status data for persistent volumes in the cluster.
    persistent_volume_claims kube:objects:persistent_volume_claims Collects metadata, spec, and status data for persistent volume claims in the cluster.
    replica_sets kube:objects:replica_sets Collects metadata, spec, and status data for replica sets in the cluster.
    resource_quotas kube:objects:resource_quotas Collects metadata, spec, and status data for resource quotas in the cluster.
    services kube:objects:services Collects metadata, spec, and status data for services in the cluster.
    service_accounts kube:objects:service_accounts Collects metadata and secrets data for service accounts in the cluster.
    stateful_sets kube:objects:stateful_sets Collects metadata, spec, and status data for stateful sets in the cluster.
    events kube:objects:events:watch Collects object and type data for events in the cluster.
    Advanced object collection options do not have visualizations in SAI. Track these objects in the Search & Reporting app (see the example search after these steps). By default, object data is stored in the em_meta index.
  3. For Monitoring machine, enter the FQDN or IP address of the system you are sending data to. This is the system running SAI.
  4. Enter the HEC token of the system you want to send data to.
  5. Enter the HEC port of the system you want to send metrics data to. Use port 8088 if it is available.
  6. Enter a unique Cluster name to specify the name of the Kubernetes cluster you're running the script in. If you do not enter anything, the script specifies a name for you.
  7. Enter a unique OpenShift project namespace for the SCK resources the script configures to collect data from the cluster. The project has its own objects, policies, constraints, and service accounts for SCK. If you enable Download Config Only, you must still enter a project here: when the script creates the rendered charts, it assumes a project with that name exists. Your OpenShift deployment may limit the number of projects you can have. If you reach the limit, you must delete a project before you can create a new one.
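
As noted in the object collection step, advanced object data is only visible through search. A sample search to run in the Search & Reporting app, assuming the default em_meta index and the deployments sourcetype from the table above:

    index=em_meta sourcetype="kube:objects:deployments" | head 10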

3. Configure OpenShift options

Specify security, log, and metrics options for the cluster.

  1. For Security options, configure these options:
    Setting Description
    Splunk HEC Enable this setting to allow SCK pods to send data to the HEC endpoint with an insecure SSL connection. If you have not yet hardened your HEC endpoint, enable this setting.


    To configure SSL for HEC endpoints, enable SSL for global HEC settings and secure your Splunk Enterprise deployment with SSL. To enable SSL for HEC, see Configure the HTTP Event Collector on Splunk Enterprise. To secure your Splunk Enterprise deployment with SSL, see About securing Splunk Enterprise with SSL.

    If you are sending data to Splunk Cloud, this option is not available. You can send data to an HEC endpoint in Splunk Cloud with only a secure SSL connection.

    Kubelet-objects pods Enable this setting to allow kubernetes-objects pods to send requests to the Kubernetes API with an insecure SSL connection. Enable this option if your cluster does not have a valid SSL certificate yet. This option sets the insecureSSL value in the values.yaml file to true for the kubernetes-objects pods. If you leave this option disabled, the insecureSSL value is set to false.

    If insecureSSL is set to false, the pod uses built-in certificate packages for secure SSL connections.

    Kubelet-metrics pods Enable this setting to allow the kubernetes-metrics pod to send requests to the Kubelet on each node with an insecure SSL connection. Enable this option if your cluster does not have a valid SSL certificate yet. This option sets the insecureSSL value in the values.yaml file to true for the kubernetes-metrics pod. If you leave this option disabled, the insecureSSL value is set to false.

    If insecureSSL is set to false, the pod uses built-in certificate packages for secure SSL connections. For how these settings map to values.yaml, see the sketch after these steps.

  2. For Logging and metrics options, configure these options:
    Option Description
    Journald path The location of journald logs on the node. The kubelet and container runtime send log data to journald. The default location is generally /run/log/journal or /var/log/journal.
    Kubelet protocol The protocol and corresponding kubelet port that kubernetes-metrics pods collect data from:
    • https: 10250
    • http: 10255
  3. For Index options, specify the indexes for the data you collect. Each index you specify must be allowed to send data through the HTTP Event Collector (HEC).
    Index Description
    Metrics index The index that stores metrics data fluentd collects. The default index for metrics data is em_metrics.
    Log index The index that stores events data fluentd collects. The default index for events data is main.
    Metadata index The index that stores metadata events about objects in the cluster. The default index for metadata events is em_meta.
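
The security, logging, metrics, and index options above map to values in the values.yaml file that the script renders for the SCK charts. The following sketch is for orientation only; the exact key names and layout depend on the SCK 1.3.0 charts, so treat it as illustrative rather than a file to copy:

    global:
      splunk:
        hec:
          insecureSSL: false              # Splunk HEC security option
    splunk-kubernetes-logging:
      journalLogPath: /run/log/journal    # Journald path option
    splunk-kubernetes-objects:
      kubernetes:
        insecureSSL: false                # Kubelet-objects pods option
    splunk-kubernetes-metrics:
      kubernetes:
        insecureSSL: false                # Kubelet-metrics pods option
        kubeletPort: 10250                # https kubelet protocol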

4. Run the script

Run the script on the system where you configured Helm and the OpenShift Container Platform CLI tool.

  1. Open a command line window.
  2. Switch to the cluster you want to monitor in SAI:
    oc config use-context <context_name>
    
    where <context_name> is the context that corresponds to the cluster you're monitoring.
  3. Paste the script you configured in the Add Data tab.
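
After the script finishes, you can confirm that the SCK pods started in the project you configured, where <project> is the OpenShift project you entered:

    $ oc get pods -n <project>

Expect pods for splunk-kubernetes-logging, splunk-kubernetes-metrics, and splunk-kubernetes-objects to reach Running status.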

Manually deploy the manifests

If you enable Download Config Only, you must manually deploy the manifests. To deploy the manifests, create a project, configure service accounts, and deploy the SCK Helm charts.

Prerequisites

  • You have administrator access to the OpenShift Container Platform CLI tool.
  • You have access to the cluster you want to monitor in SAI.

Steps

  1. Create an OpenShift project:
    $ oc adm new-project <project> --node-selector=""
    $ oc project <project>
    
    where <project> is the OpenShift project you entered when configuring the script.
  2. Create service accounts and configure initial permissions:
    $ oc create sa splunk-kubernetes-logging
    $ oc adm policy add-scc-to-user privileged "system:serviceaccount:<project>:splunk-kubernetes-logging"
    $ oc create sa splunk-kubernetes-metrics
    $ oc adm policy add-scc-to-user privileged "system:serviceaccount:<project>:splunk-kubernetes-metrics"
    
    where <project> is the OpenShift project you entered when configuring the script.
  3. Deploy the SCK Helm charts.
    $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-metrics/templates/
    $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/templates/
    $ oc apply -f ./rendered-charts/splunk-connect-for-kubernetes/charts/splunk-kubernetes-objects/templates/
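
To verify the manual deployment, you can list the workloads the charts created and inspect a pod's logs if data does not arrive. Resource names vary with your configuration:

    $ oc get daemonsets,deployments,pods -n <project>
    $ oc logs <pod_name> -n <project>   # inspect one of the splunk-kubernetes-* pods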
    

Upgrade Splunk Connect for Kubernetes in SAI

You can deploy a more recent version of SCK to an OpenShift cluster you're already monitoring. If you started collecting OpenShift data with an earlier version of SAI, you may be running an earlier version of SCK. To upgrade SCK, delete SCK from the cluster and then run the data collection script from a more recent version of SAI that deploys a more recent version of SCK. When you upgrade SCK, SAI discovers resources in the cluster as new entities.

When you delete SCK from the cluster during the upgrade process, you can manually delete entities associated with the cluster or wait for your OpenShift entity retirement policy to automatically remove the entities. If you don't have an OpenShift entity retirement policy and don't manually delete the entities for a cluster after you upgrade SCK in the cluster, the old entities that the earlier version of SCK discovered become inactive.

This version of SAI deploys SCK version 1.3.0 when you run the data collection script. For more information about SCK, see the Splunk Connect for Kubernetes 1.3.0 release documentation in the GitHub repository.

Follow these steps to upgrade SCK:

  1. On the system that runs SCK, delete the Helm release for the current SCK deployment (to find the release name, see the sketch after these steps):
    $ helm delete <helm_release_name> --purge
  2. Delete entities that the version of SCK you're replacing discovered. Alternatively, you can wait for an active OpenShift entity retirement policy to automatically remove the entities when they become inactive.
  3. Configure the data collection script in a version of SAI that deploys a more recent version of SCK.
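
If you don't know the release name for step 1, you can list the installed Helm releases first (Helm 2 syntax, matching the --purge flag above):

    $ helm ls                           # find the SCK release name in the NAME column
    $ helm status <helm_release_name>   # optional: inspect the release before deleting it
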
Last modified on 23 September, 2020

This documentation applies to the following versions of Splunk® App for Infrastructure (Legacy): 2.1.0, 2.1.1 Cloud only, 2.2.0 Cloud only, 2.2.1, 2.2.3 Cloud only, 2.2.4, 2.2.5

