
Prometheus receiver

The Prometheus receiver allows the Splunk Distribution of the OpenTelemetry Collector to collect metrics from any source exposing telemetry in Prometheus format. The supported pipeline type is metrics. See Process your data with pipelines for more information.

Note

To use a simplified version of the Prometheus receiver that supports single endpoints, see Simple Prometheus receiver.
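
For instance, a minimal sketch of the simplified receiver, assuming the prometheus_simple component from the Collector contrib distribution and an illustrative endpoint:

receivers:
  prometheus_simple:
    # Illustrative target; point this at your application's metrics endpoint.
    endpoint: "localhost:9090"
    collection_interval: 10s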

Benefits

The Prometheus receiver can scrape metrics data from any application that exposes a Prometheus endpoint. The receiver converts Prometheus metrics to OpenTelemetry metrics while preserving metric names, values, timestamps, and labels. You can also reuse your existing Prometheus configurations.

See a complete list of third-party applications compatible with Prometheus in the official Prometheus documentation at Prometheus exporters.

Learn more at Configure applications with Prometheus metrics.

Get started

Note

This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode. See Collector deployment modes for more information.

For details about the default configuration, see Configure the Collector for Kubernetes with Helm, Collector for Linux default configuration, or Collector for Windows default configuration. You can customize your configuration any time as explained in this document.

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.

  2. Configure the receiver as described in the next section.

  3. Restart the Collector.

Sample configuration

By default, the Splunk Distribution of the OpenTelemetry Collector includes the Prometheus receiver in the metrics/internal pipeline.

To activate additional Prometheus receivers, add a new prometheus entry in the receivers section of the Collector configuration file, as in the following example:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'sample-name'
          scrape_interval: 5s
          static_configs:
            - targets: ['0.0.0.0:8888']

To complete the configuration, include the receiver in the metrics pipeline of the service section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers:
        - prometheus

Caution

Don't remove the prometheus/internal receiver from the configuration. Internal metrics feed the default dashboard of the Splunk Distribution of the OpenTelemetry Collector.
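
For reference, the following sketch shows how the internal pipeline is typically wired. The processor and exporter names are assumptions based on a common Splunk Collector setup, not an exact copy of the default configuration file:

receivers:
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            # The Collector exposes its own metrics on port 8888 by default.
            - targets: ['0.0.0.0:8888']

service:
  pipelines:
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [batch]
      exporters: [signalfx]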

Scraper configuration

The Prometheus receiver supports most of the Prometheus scrape configuration options, including service discovery, through the config.scrape_configs section. In the scrape_configs section of your configuration file, you can specify a set of targets and the parameters that describe how to scrape them.

For basic configurations, a single scrape configuration specifies a single job. You can configure static targets using the static_configs parameter, or discover targets dynamically using the Prometheus service discovery mechanisms. In addition, the relabel_configs parameter allows advanced modifications to any target and its labels before scraping.

The following is an example of a basic scrape configuration:

receivers:
  prometheus:
    config:
      scrape_configs:
      # The job name assigned to scraped metrics by default.
      # <job_name> must be unique across all scrape configurations.
        - job_name: 'otel-collector'
        # How frequently to scrape targets from this job.
        # Accepts a <duration> value; defaults to the global scrape_interval.
          scrape_interval: 5s
        # List of labeled statically configured targets for this job.
          static_configs:
            - targets: ['0.0.0.0:8888']
        - job_name: k8s
        # Scraping configuration for Kubernetes
          kubernetes_sd_configs:
          - role: pod
          relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            regex: "true"
            action: keep
          # List of metric relabel configurations.
          metric_relabel_configs:
          - source_labels: [__name__]
            regex: "(request_duration_seconds.*|response_duration_seconds.*)"
            action: keep

To use environment variables in the Prometheus receiver configuration, use the ${<var>} syntax. For example:

prometheus:
  config:
    scrape_configs:
      - job_name: ${JOBNAME}
        scrape_interval: 5s

If you're reusing existing Prometheus configurations, replace each $ character with $$ to prevent the Collector from interpreting it as an environment variable reference.
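
For example, a metric relabel rule that references the first capture group as $1 in Prometheus becomes $$1 in the Collector configuration. The job, regex, and label names below are illustrative:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'sample-name'
          static_configs:
            - targets: ['0.0.0.0:8888']
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: "(.*)"
              target_label: original_name
              # $$1 is unescaped to $1 before Prometheus evaluates it.
              replacement: $$1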

Scaling considerations

When you run multiple replicas of the Collector with the same configuration, the Prometheus receiver scrapes each target multiple times. To avoid duplicate scrapes, shard the scrape configuration so that each replica scrapes a different subset of targets. The Prometheus receiver is stateful. For considerations on scaling, see Sizing and scaling.
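
One way to shard is the standard Prometheus hashmod relabel pattern, sketched here for two replicas. The job name is illustrative, and each replica keeps a different shard number:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'sharded-scrape'
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # Hash each target address into one of two shards.
            - source_labels: [__address__]
              modulus: 2
              target_label: __tmp_shard
              action: hashmod
            # This replica keeps shard 0; configure the other replica with "1".
            - source_labels: [__tmp_shard]
              regex: "0"
              action: keep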

Known limitations

The following Prometheus features are not supported and return an error if used in the receiver configuration. Remove them from any reused Prometheus configuration, as shown in the sketch after this list:

  • alert_config.alertmanagers

  • alert_config.relabel_configs

  • remote_read

  • remote_write

  • rule_files
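
If you reuse an existing prometheus.yml that contains any of these sections, keep only the scraping-related configuration, as in this sketch with a placeholder job and target:

receivers:
  prometheus:
    config:
      # Copied from an existing prometheus.yml,
      # with the unsupported sections listed above removed.
      scrape_configs:
        - job_name: 'existing-app'
          scrape_interval: 15s
          static_configs:
            - targets: ['localhost:9100']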

Settings

The following table shows the configuration options for the Prometheus receiver:

Metrics

The Prometheus receiver converts Prometheus metrics to OpenTelemetry metrics following these conversion rules:

Prometheus metric type    OpenTelemetry metric type
Counter (monotonic)       Sum (data type double)
Gauge, Unknown            Gauge (data type double)
Histogram                 Histogram (cumulative distribution)
Summary                   Summary (percentiles)

Histograms support

For more information on histogram support, see Send histogram metrics in OTLP format.

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Sep 18, 2024.