Prometheus Exporter
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the prometheus-exporter
monitor type to read all metric types from a Prometheus Exporter endpoint.
A Prometheus Exporter is a piece of software that fetches statistics from another, non-Prometheus system, and turns them into Prometheus metrics. For a description of the Prometheus metric types, see Metric Types.
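For example, an exporter typically serves its metrics as plain text on an HTTP endpoint such as /metrics. The following lines are a hypothetical sample of the Prometheus exposition format returned by such an endpoint; the metric names, labels, and values are placeholders, not output from any particular exporter:

# HELP app_requests_total Total number of requests handled.
# TYPE app_requests_total counter
app_requests_total{method="GET",code="200"} 1027
# HELP app_queue_depth Current number of queued items.
# TYPE app_queue_depth gauge
app_queue_depth 7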
This monitor is available on Kubernetes, Linux, and Windows.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
Configure the integration, as described in the Configuration section.
Restart the Splunk Distribution of the OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
See how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
For a list of common configuration options, refer to Common configuration settings for monitors.
Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    discoveryRule: port >= 9100 && port <= 9500 && container_image =~ "exporter"
    extraDimensions:
      metric_source: prometheus
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers
section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/prometheus-exporter]
For specific use cases that show how the Splunk Distribution of OpenTelemetry Collector can integrate and complement existing environments, see configuration examples.
See the Prometheus Federation Endpoint Example in GitHub for an example of how the OTel Collector works with Splunk Enterprise and an existing Prometheus deployment.
Configuration options
The following table shows the configuration options for the prometheus-exporter monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| username | no | string | Basic Auth username to use on each request, if any. |
| password | no | string | Basic Auth password to use on each request, if any. |
| clientCertPath | no | string | Path to the client TLS cert to use for TLS required connections |
| clientKeyPath | no | string | Path to the client TLS key to use for TLS required connections |
| host | yes | string | Host of the exporter |
| port | yes | integer | Port of the exporter |
| useHTTPS | no | bool | If true, the monitor connects to the exporter over HTTPS instead of plain HTTP. |
| useServiceAccount | no | bool | On Kubernetes, use the service account of the agent to authenticate to the monitored service. |
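For example, the following sketch configures the monitor to scrape a single, known exporter endpoint directly instead of relying on a discovery rule. The host, port, and extra dimension values are placeholders:

receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: 192.0.2.10   # placeholder exporter host
    port: 9100         # placeholder exporter port
    useHTTPS: false    # set to true if the exporter serves TLS
    extraDimensions:
      metric_source: prometheus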
Authentication
For basic HTTP authentication, use the username and password options.
On Kubernetes, if the monitored service requires authentication, use the useServiceAccount option to connect using the service account of the agent. Make sure that the Smart Agent service account has sufficient permissions for the monitored service.
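As a sketch, a monitor that uses basic authentication might look like the following. The host, port, and credentials are placeholders; store real credentials securely rather than hardcoding them in the configuration file:

receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: exporter.example.com   # placeholder host
    port: 9100                   # placeholder port
    username: prom-reader        # placeholder Basic Auth username
    password: changeme           # placeholder Basic Auth password
    # On Kubernetes, you can activate useServiceAccount instead of Basic Auth:
    # useServiceAccount: true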
Metrics
There are no metrics available for this integration.
Prometheus metric conversion details
This is how Prometheus metrics are converted:
Gauges are converted directly to Splunk Infrastructure Monitoring gauges.
Counters are converted directly to Infrastructure Monitoring cumulative counters.
Untyped metrics are converted directly to Infrastructure Monitoring gauges.
Summary metrics are converted to three distinct metrics, where <basename> is the root name of the metric:
  The total count is converted to a cumulative counter called <basename>_count.
  The total sum is converted to a cumulative counter called <basename>.
  Each quantile value is converted to a gauge called <basename>_quantile and includes a dimension called quantile that specifies the quantile.
Histogram metrics are converted to three distinct metrics, where <basename> is the root name of the metric:
  The total count is converted to a cumulative counter called <basename>_count.
  The total sum is converted to a cumulative counter called <basename>.
  Each histogram bucket is converted to a cumulative counter called <basename>_bucket and includes a dimension called upper_bound that specifies the maximum value in that bucket. This metric specifies the number of events with a value that is less than or equal to the upper bound.
All Prometheus labels are converted directly to Infrastructure Monitoring dimensions.
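As a hypothetical illustration of the histogram rules above, a Prometheus histogram named http_request_seconds (a placeholder name) would be converted along these lines:

# Prometheus input (hypothetical)
http_request_seconds_bucket{le="0.5"} 120
http_request_seconds_bucket{le="1"} 200
http_request_seconds_bucket{le="+Inf"} 240
http_request_seconds_sum 97.3
http_request_seconds_count 240

# Resulting Infrastructure Monitoring metrics
http_request_seconds_count    cumulative counter (total count)
http_request_seconds          cumulative counter (total sum)
http_request_seconds_bucket   cumulative counter, one time series per value of the upper_bound dimension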
This monitor supports service discovery, so you can set a discovery rule such as port >= 9100 && port <= 9500 && container_image =~ "exporter", assuming you are running exporters in container images whose names contain the word "exporter" and that listen on ports within the standard exporter port range.
In Kubernetes, you can also try matching on the container port name as defined in the pod spec, which is the name variable in discovery rules for the k8s-api observer.
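For example, if the pod spec names the exporter's container port, a rule like the following matches on that name. The port name metrics in this sketch is hypothetical and depends on your pod spec:

receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    discoveryRule: name == "metrics"   # "metrics" is the container port name from the pod spec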
Filtering can be very useful here, because exporters tend to be verbose.
Troubleshooting
Log contains the error net/http: HTTP/1.x transport connection broken: malformed HTTP response
Solution: Activate HTTPS with useHTTPS.
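For example, a sketch of the relevant setting in the monitor configuration, with other options omitted:

receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    useHTTPS: true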
Log contains the error forbidden: User \"system:anonymous\" cannot get path \"/metrics\"
Solution: Activate useServiceAccount and make sure that the service account that the Splunk Distribution of the OpenTelemetry Collector runs with has the necessary permissions.
Get help
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.