
Prometheus integration for Splunk On-Call

The Splunk On-Call and Prometheus integration combines the real-time monitoring data of an open source time-series database with the collaboration tools you need to address system issues and incidents. Prometheus's time-series database and monitoring specialize in providing your team with live-updating system metrics, so you can see more quickly when errors occur or requests fail.

The Splunk On-Call and Prometheus integration is easily configurable to help you aggregate time-series data and respond to incidents in one centralized location. Prometheus integrates with Splunk On-Call to help you identify, diagnose, and resolve incidents in real-time, as well as conduct more thorough post-incident reviews.

Requirements

This integration is compatible with the following versions of Splunk On-Call:

  • Starter

  • Growth

  • Enterprise

The integration supports Alertmanager 0.8.0 and Prometheus 2.0.0-beta.2.

Activate Prometheus in Splunk On-Call

From Splunk On-Call, navigate to Integrations, then 3rd Party Integrations, and select Prometheus. Select Enable Integration, then copy the Service API Key to use in the following steps.

Configure Splunk On-Call in Prometheus

Download Alertmanager from the Prometheus website and configure it. Use the following code in the YAML configuration file for Alertmanager. Make sure to replace the api_key value with the previously saved Service API Key, and set routing_key to the routing key you want to use.

route:
   group_by: ['alertname', 'cluster', 'service']
   group_wait: 30s
   group_interval: 5m
   repeat_interval: 3h
   receiver: victorOps-receiver

receivers:
  - name: victorOps-receiver
    victorops_configs:
      - api_key: 558e7ebc-XXXX-XXXX-XXXX-XXXXXXXXXXXX
        routing_key: Sample_route
        state_message: 'Alert: {{ .CommonLabels.alertname }}. Summary:{{ .CommonAnnotations.summary }}. RawData: {{ .CommonLabels }}'
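Before starting Alertmanager, it can help to sanity-check the two fields you just filled in. The following sketch is a hypothetical standard-library-only check, not part of the integration itself: it verifies that the api_key looks like the UUID-shaped Service API Key from Splunk On-Call and that a routing_key is present (the sample values mirror the config above).

```python
import re

# Mirror of the victorops_configs entry from the YAML above (sample values).
receiver = {
    "api_key": "558e7ebc-1234-5678-9abc-def012345678",
    "routing_key": "Sample_route",
}

# The Service API Key copied from Splunk On-Call is UUID-shaped.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)

def check_receiver(cfg):
    """Return a list of problems found in a victorops_configs entry."""
    problems = []
    if not UUID_RE.match(cfg.get("api_key", "")):
        problems.append("api_key is missing or not a UUID")
    if not cfg.get("routing_key"):
        problems.append("routing_key is missing")
    return problems

print(check_receiver(receiver))  # [] when both fields look valid
```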

To use custom fields or a proxy URL, use the following snippet as a template:

route:
   group_by: ['alertname', 'cluster', 'service']
   group_wait: 30s
   group_interval: 5m
   repeat_interval: 3h
   receiver: victorOps-receiver

receivers:
- name: victorOps-receiver
  victorops_configs:
    - api_key:
      routing_key:
      entity_display_name: '{{ .CommonAnnotations.summary }}'
      message_type: '{{ .CommonLabels.severity }}'
      state_message: 'Alert: {{ .CommonLabels.alertname }}. Summary:{{ .CommonAnnotations.summary }}. RawData: {{ .CommonLabels }}'
      custom_fields:
         # Replace my_custom_field with the name of your custom field
         my_custom_field: '{{ .CommonLabels.eai_nbr }}'
      # We must set a proxy to be able to send alerts to external systems
      http_config:
         proxy_url: 'http://internet.proxy.com:3128'
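The {{ … }} placeholders in state_message and entity_display_name are Go template references into the grouped alert's common labels and annotations. To get a rough sense of what Splunk On-Call receives, the following sketch emulates the substitution in Python with sample label values; the labels are illustrative and this is not Alertmanager's actual template engine.

```python
# Sample grouped-alert data, as Alertmanager exposes it to templates.
common_labels = {"alertname": "HighErrorRate", "severity": "critical"}
common_annotations = {"summary": "5xx rate above 5% for 10m"}

# Emulate the Go template from the config:
# 'Alert: {{ .CommonLabels.alertname }}. Summary:{{ .CommonAnnotations.summary }}. RawData: {{ .CommonLabels }}'
state_message = (
    f"Alert: {common_labels['alertname']}. "
    f"Summary:{common_annotations['summary']}. "
    f"RawData: {common_labels}"
)
print(state_message)
```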

Start Prometheus from the command line and configure it to talk to Alertmanager. In this example, prometheus.yml is the Prometheus configuration file and http://localhost:9093 is the Alertmanager instance that Prometheus points to.

./prometheus -config.file=prometheus.yml -alertmanager.url=http://localhost:9093

Next, start Alertmanager from the command line using the Alertmanager configuration file from earlier. In this example, alertmanager.yml is the name of the configuration file.

./alertmanager -config.file=alertmanager.yml

Alerts from Prometheus appear in Alertmanager as they are generated.

If you don't want to wait for Prometheus to generate an alert, send a test message to Alertmanager. The following example uses curl against a local Alertmanager instance:

curl -H "Content-Type: application/json" -d '[{"labels":{"alertname":"TestAlert"}}]' localhost:9093/api/v1/alerts
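The same test alert can be posted from Python using only the standard library. This sketch builds the identical payload and defines a helper that sends it to the Alertmanager v1 alerts API (the localhost URL is taken from the example above; adjust it for your instance).

```python
import json
import urllib.request

# Same payload as the curl example: one alert with a test alertname label.
payload = [{"labels": {"alertname": "TestAlert"}}]
body = json.dumps(payload).encode("utf-8")

def send_test_alert(url="http://localhost:9093/api/v1/alerts"):
    """POST the test alert to an Alertmanager v1 alerts endpoint."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# To actually send the alert (requires a running Alertmanager):
#     send_test_alert()
```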

This page was last updated on Nov 24, 2023.