OpenShift cluster

Description

The Splunk Distribution of OpenTelemetry Collector provides this integration as the openshift-cluster monitor type by using the SignalFx Smart Agent Receiver.

Use this integration to collect cluster-level metrics from the Kubernetes API server, which includes all metrics from the kubernetes-cluster monitor with additional OpenShift-specific metrics. You only need to use the openshift-cluster monitor for OpenShift deployments, as it incorporates the kubernetes-cluster monitor automatically.

Because the agent generally runs in multiple places in a Kubernetes cluster, and because it is more convenient to share the same configuration across all agent instances, this monitor uses a leader election process by default to ensure that only one agent sends metrics for the cluster.

All of the agents running in the same namespace that have this monitor configured decide amongst themselves which agent should send metrics for this monitor. This agent becomes the leader agent. The remaining agents stand by, ready to activate if the leader agent dies. You can override leader agent election by setting the alwaysClusterReporter option to true, which makes the monitor always report metrics.
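For example, here is a minimal receiver sketch (the full activation steps are covered in the Configuration section) that sets this option so that every agent instance reports:

receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    alwaysClusterReporter: true  # skip leader election; every instance reports metrics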

This monitor is similar to the kube-state-metrics monitor, and sends many of the same metrics, but in a way that is less verbose and a better fit for the Splunk Observability Cloud backend.

This monitor is available on Kubernetes, Linux, and Windows.

Benefits

After you configure the integration, you can access these features:

  • View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.

Installation

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

    By default, the Collector is installed in the namespace you’re logged into. To deploy the Collector into a different namespace, use the --namespace flag to specify where to place the Collector.

    • Install on Kubernetes. When installing the Collector on Kubernetes using the Helm chart, use the --set distribution='openshift' option to generate OpenShift-specific metrics in addition to the standard Kubernetes metrics.

      For example:

      helm install --set cloudProvider=' ' --set distribution='openshift' --set splunkObservability.accessToken='******' --set clusterName='cluster1' --namespace='namespace1' --set splunkObservability.realm='us0' --set gateway.enabled='false' --generate-name splunk-otel-collector-chart/splunk-otel-collector

      Find more information in our GitHub repos.

    • Install on Linux

    • Install on Windows

  2. Configure the monitor, as described in the Configuration section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

Configuration

This monitor type is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. You can use existing Smart Agent monitors as OpenTelemetry Collector metric receivers with the Smart Agent Receiver.

This monitor type requires a properly configured environment on your system in which you’ve installed a functional Smart Agent release bundle. The Collector provides this bundle in the installation paths for x86_64/amd64.

To activate this monitor type in the Collector, add the following lines to your configuration (YAML) file:

receivers:  # All configuration goes under this key
  smartagent/openshift-cluster:
    type: openshift-cluster
    ...  # Additional config

To complete the monitor activation, you must also include the smartagent/openshift-cluster monitor in a metrics pipeline. To do this, add the monitor to the service/pipelines/metrics/receivers section of your configuration file. For example:

      receivers: [smartagent/openshift-cluster]
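For instance, a complete metrics pipeline pairing this receiver with an exporter might look like the following minimal sketch. The signalfx exporter shown here is an assumption; substitute whatever exporter your configuration already defines:

service:
  pipelines:
    metrics:
      receivers: [smartagent/openshift-cluster]
      exporters: [signalfx]  # assumption: substitute the exporter defined in your configuration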

Configuration options

The following configuration options are available for this monitor:

  • alwaysClusterReporter (bool): If true, leader election is skipped and metrics are always reported. The default value is false.

  • namespace (string): If specified, only resources within the given namespace are monitored. If omitted (blank), all supported resources across all namespaces are monitored.

  • kubernetesAPI (object): Configuration for the Kubernetes API client.

  • nodeConditionTypesToReport (list of strings): A list of node status condition types to report as metrics. The metrics are reported as data points of the form kubernetes.node_<type_snake_cased> with a value of 0 corresponding to “False”, 1 to “True”, and -1 to “Unknown”. The default value is [Ready].
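As an illustrative sketch only, the following configuration combines these options; the namespace value and the extra node condition type are hypothetical:

receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    namespace: my-namespace    # hypothetical; omit to monitor all namespaces
    nodeConditionTypesToReport:
      - Ready                  # reported as kubernetes.node_ready
      - MemoryPressure         # reported as kubernetes.node_memory_pressure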

The nested kubernetesAPI configuration object has the following fields:

  • authType (string): How to authenticate to the Kubernetes API server. This can be one of none (for no authentication), tls (to use manually specified TLS client certificates, not recommended), serviceAccount (to use the standard service account token provided to the agent pod), or kubeConfig (to use credentials from ~/.kube/config). The default value is serviceAccount.

  • skipVerify (bool): Whether to skip verifying the TLS certificate from the API server. Almost never needed. The default value is false.

  • clientCertPath (string): The path to the TLS client certificate on the pod’s filesystem, if using tls authentication.

  • clientKeyPath (string): The path to the TLS client key on the pod’s filesystem, if using tls authentication.

  • caCertPath (string): The path to a CA certificate to use when verifying the TLS certificate from the API server. Generally, Kubernetes provides this certificate alongside the service account token and it is picked up automatically, so you rarely need to specify it.
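As a sketch under stated assumptions, the following configuration uses tls authentication with hypothetical certificate paths; with the default serviceAccount authentication, you can usually omit the kubernetesAPI block entirely:

receivers:
  smartagent/openshift-cluster:
    type: openshift-cluster
    kubernetesAPI:
      authType: tls
      clientCertPath: /etc/ssl/client.crt  # hypothetical path
      clientKeyPath: /etc/ssl/client.key   # hypothetical path
      caCertPath: /etc/ssl/ca.crt          # hypothetical path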

Metrics

The following metrics are available for this integration:

Get help

If you are not able to see your data in Splunk Observability Cloud, try these tips:

To learn about even more support options, see Splunk Customer Success.