Prometheus Exporter
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the prometheus-exporter
monitor type to read all metric types from a Prometheus Exporter endpoint.
A Prometheus Exporter is a piece of software that fetches statistics from another, non-Prometheus system, and turns them into Prometheus metrics. For a description of the Prometheus metric types, see Metric Types.
This monitor is available on Kubernetes, Linux, and Windows.
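To illustrate what the monitor reads, a Prometheus Exporter endpoint serves plain-text metrics in the Prometheus exposition format. The metric names and values below are hypothetical examples, not output from any specific exporter:

```
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 312.4
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 16
```

Each line carries a metric name, optional labels in braces, and a sample value; the monitor converts these into Splunk Infrastructure Monitoring data points.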
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
Read more on how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
Learn about config options in Collector default configuration.
Example
To activate this integration, add the following to your Collector configuration:
```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    discoveryRule: port >= 9100 && port <= 9500 && container_image =~ "exporter"
    extraDimensions:
      metric_source: prometheus
    ... # Additional config
```
Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:
```yaml
service:
  pipelines:
    metrics:
      receivers: [smartagent/prometheus-exporter]
```
For specific use cases that show how the Splunk Distribution of OpenTelemetry Collector can integrate and complement existing environments, see configuration examples.
See the Prometheus Federation Endpoint Example in GitHub for an example of how the OTel Collector works with Splunk Enterprise and an existing Prometheus deployment.
Configuration options
The following table shows the configuration options for the prometheus-exporter monitor:
| Option | Required | Type | Description |
|---|---|---|---|
| `httpTimeout` | no | `int64` | HTTP timeout duration for both reads and writes. This should be a duration string that is accepted by Go's `time.ParseDuration`. (default: `10s`) |
| `username` | no | `string` | Basic Auth username to use on each request, if any. |
| `password` | no | `string` | Basic Auth password to use on each request, if any. |
| `useHTTPS` | no | `bool` | If `true`, the agent connects to the server using HTTPS instead of plain HTTP. (default: `false`) |
| `httpHeaders` | no | `map of strings` | A map of HTTP header names to values. Comma-separated multiple values for the same message header are supported. |
| `skipVerify` | no | `bool` | If `useHTTPS` is `true` and this option is also `true`, the exporter's TLS certificate is not verified. (default: `false`) |
| `caCertPath` | no | `string` | Path to the CA certificate that has signed the TLS certificate, unnecessary if `skipVerify` is set to `false`. |
| `clientCertPath` | no | `string` | Path to the client TLS certificate to use for TLS required connections. |
| `clientKeyPath` | no | `string` | Path to the client TLS key to use for TLS required connections. |
| `host` | yes | `string` | Host of the exporter. |
| `port` | yes | `integer` | Port of the exporter. |
| `useServiceAccount` | no | `bool` | Use pod service account to authenticate. (default: `false`) |
| `metricPath` | no | `string` | Path to the metrics endpoint on the exporter server, usually `/metrics`. (default: `/metrics`) |
| `sendAllMetrics` | no | `bool` | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the prometheus-exporter monitor directly, since there is no built-in filtering; it applies only when the monitor is embedded in other monitors. (default: `false`) |
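As a sketch of how several of these options combine, the following receiver entry scrapes an exporter over HTTPS. The host name, port, and certificate path are placeholders, not values from the original documentation:

```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: exporter.internal.example    # placeholder host
    port: 9100
    metricPath: /metrics
    useHTTPS: true
    skipVerify: false
    caCertPath: /etc/ssl/exporter-ca.pem   # placeholder path
```

Remember to also list `smartagent/prometheus-exporter` under `service.pipelines.metrics.receivers` for the data to flow.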
Authentication
For basic HTTP authentication, use the `username` and `password` options.
On Kubernetes, if the monitored service requires authentication, use the `useServiceAccount` option to use the service account of the agent when connecting. Make sure that the Smart Agent service account has sufficient permissions for the monitored service.
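For example, a monitor entry using basic authentication might look like the following sketch. The host and credentials are placeholders; in practice, inject the password through a secrets mechanism rather than hardcoding it:

```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: exporter.internal.example   # placeholder
    port: 9100
    username: scrape-user             # placeholder
    password: scrape-password         # placeholder; prefer a secret store
    # On Kubernetes, an alternative is:
    # useServiceAccount: true
```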
Metrics
There are no metrics available for this integration.
Prometheus metric conversion details
This is how Prometheus metrics are converted:
Gauges are converted directly to Splunk Infrastructure Monitoring gauges.
Counters are converted directly to Infrastructure Monitoring cumulative counters.
Untyped metrics are converted directly to Infrastructure Monitoring gauges.
Summary metrics are converted to three distinct metrics, where `<basename>` is the root name of the metric:
- The total count is converted to a cumulative counter called `<basename>_count`.
- The total sum is converted to a cumulative counter called `<basename>`.
- Each quantile value is converted to a gauge called `<basename>_quantile` and includes a dimension called `quantile` that specifies the quantile.
Histogram metrics are converted to three distinct metrics, where `<basename>` is the root name of the metric:
- The total count is converted to a cumulative counter called `<basename>_count`.
- The total sum is converted to a cumulative counter called `<basename>`.
- Each histogram bucket is converted to a cumulative counter called `<basename>_bucket` and includes a dimension called `upper_bound` that specifies the maximum value in that bucket. This metric specifies the number of events with a value that is less than or equal to the upper bound.
All Prometheus labels are converted directly to Infrastructure Monitoring dimensions.
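As a concrete illustration, a hypothetical Prometheus histogram named `request_duration_seconds` would, under the rules above, produce the following Infrastructure Monitoring metrics:

```
request_duration_seconds_count    cumulative counter (total observation count)
request_duration_seconds          cumulative counter (sum of observed values)
request_duration_seconds_bucket   one cumulative counter per bucket, with an
                                  upper_bound dimension (e.g. upper_bound="0.5")
```

Any labels on the original histogram, such as `method="GET"`, would appear as dimensions on each of these metrics.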
This supports service discovery, so you can set a discovery rule such as `port >= 9100 && port <= 9500 && container_image =~ "exporter"`, assuming you are running exporters in container images that contain the word "exporter" and that fall within the standard exporter port range.
In Kubernetes, you can also try matching on the container port name as defined in the pod spec, which is the `name` variable in discovery rules for the `k8s-api` observer.
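For instance, if the pod spec names the container port, a discovery rule for the `k8s-api` observer could match on that name. The port name `metrics` below is a hypothetical value from an assumed pod spec:

```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    # "metrics" is a hypothetical containerPort name from the pod spec
    discoveryRule: name == "metrics"
```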
Filtering can be very useful here, because exporters tend to be verbose.
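One way to filter at the monitor level is the Smart Agent's `datapointsToExclude` option; the sketch below assumes that option is available on this monitor and uses hypothetical metric name patterns:

```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: exporter.internal.example   # placeholder
    port: 9100
    datapointsToExclude:
      - metricNames:
          - 'go_*'        # hypothetical: drop Go runtime internals
          - 'process_*'   # hypothetical: drop process-level series
```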
Troubleshooting
If you are not able to see your data in Splunk Observability Cloud, try these tips:
- Submit a case in the Splunk Support Portal. Available to Splunk Observability Cloud customers.
- Ask a question and get answers through community support at Splunk Answers. Available to Splunk Observability Cloud customers and free trial users.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. Available to Splunk Observability Cloud customers and free trial users. To learn how to join, see Get Started with Splunk Community - Chat groups.
To learn about even more support options, see Splunk Customer Success.
Log contains the error `net/http: HTTP/1.x transport connection broken: malformed HTTP response`
Solution: Activate HTTPS with the `useHTTPS` option.
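This error typically means the endpoint is serving TLS while the monitor is connecting over plain HTTP. A minimal fix is to switch the connection to HTTPS; the host below is a placeholder:

```yaml
receivers:
  smartagent/prometheus-exporter:
    type: prometheus-exporter
    host: exporter.internal.example   # placeholder
    port: 9100
    useHTTPS: true
    # skipVerify: true   # only if the exporter uses a self-signed certificate
```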
Log contains the error `forbidden: User \"system:anonymous\" cannot get path \"/metrics\"`
Solution: Activate `useServiceAccount` and make sure that the service account that the Splunk Distribution of OpenTelemetry Collector runs with has the necessary permissions.
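One way to grant the Collector's service account read access to a `/metrics` path is standard Kubernetes RBAC. The following is a sketch under the assumption that the target is a non-resource URL; the role, binding, service account name, and namespace are all placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader            # placeholder name
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collector-metrics-reader  # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: ServiceAccount
    name: splunk-otel-collector   # placeholder; use your Collector's service account
    namespace: default            # placeholder namespace
```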