
GitLab πŸ”—

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the GitLab monitor type to monitor GitLab.

GitLab is bundled with Prometheus exporters that can be configured to export performance metrics for GitLab itself and for the bundled software that GitLab depends on. These exporters publish Prometheus metrics at endpoints that are scraped by this monitor type.

This integration allows you to monitor the following:

  • Gitaly and Gitaly Cluster: Gitaly is a Git remote procedure call (RPC) service that handles all Git calls made by GitLab. This monitor scrapes the GitLab Gitaly Git RPC server.

  • GitLab Runner: GitLab Runner can be monitored using Prometheus. See the GitLab Runner documentation on GitLab Docs for more information.

  • GitLab Sidekiq: This monitor scrapes the GitLab Sidekiq Prometheus exporter.

  • GitLab Unicorn server: The Unicorn server comes with a Prometheus exporter. The IP address of the Collector container or host needs to be allowed for the Collector to access the endpoint. See the IP allowlist documentation on GitLab Docs for more information.

  • GitLab Workhorse: The GitLab service that handles slow HTTP requests. Workhorse includes a built-in Prometheus exporter that this monitor scrapes to gather metrics.

This monitor type is available on Kubernetes, Linux, and Windows using GitLab version 9.3 or higher.

Benefits πŸ”—

After you configure the integration, you can access these features:

Installation πŸ”—

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the Configuration section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

GitLab configuration πŸ”—

Follow the instructions on Monitoring GitLab with Prometheus to configure the GitLab Prometheus exporters to expose metric endpoint targets.

If you configure GitLab by editing /etc/gitlab/gitlab.rb, you need to run the command gitlab-ctl reconfigure for the changes to take effect.

If you configure nginx by editing the file /var/opt/gitlab/nginx/conf/nginx-status.conf, you need to run the command gitlab-ctl restart. Note that changes to /var/opt/gitlab/nginx/conf/nginx-status.conf are erased by subsequent runs of gitlab-ctl reconfigure, because gitlab-ctl reconfigure restores the original configuration file.

The following table shows some of the Prometheus endpoint targets with links to their respective configuration pages.

| Monitor type | Reference | Default port | Standard path |
| --- | --- | --- | --- |
| gitlab-exporter | GitLab exporter | 9168 | /metrics |
| gitlab-gitaly | Gitaly and Gitaly Cluster | 9236 | /metrics |
| gitlab-runner | GitLab Runner | 9252 | /metrics |
| gitlab-sidekiq | GitLab Sidekiq | 8082 | /metrics |
| gitlab-unicorn | GitLab Unicorn | 8080 | /-/metrics |
| gitlab-workhorse | GitLab Workhorse | 9229 | /metrics |
| prometheus/nginx-vts | Monitoring GitLab with Prometheus | 8060 | /metrics |
| prometheus/node | Node exporter | 9100 | /metrics |
| prometheus/postgres | PostgreSQL Server Exporter | 9187 | /metrics |
| prometheus/prometheus | Monitoring GitLab with Prometheus | 9090 | /metrics |
| prometheus/redis | Redis exporter | 9121 | /metrics |
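For example, to scrape the Gitaly exporter listed in the table, you can point a receiver at its default port. The following is a minimal sketch; gitlab.example.com is a placeholder for your GitLab host, not a value from this documentation:

receivers:
  smartagent/gitlab-gitaly:
    type: gitlab-gitaly
    host: gitlab.example.com  # Placeholder: replace with the host that exposes the Gitaly exporter
    port: 9236                # Default Gitaly exporter port from the table
    metricPath: /metrics      # Standard path from the table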

GitLab Prometheus exporters, nginx, and GitLab Runner must be configured to accept requests from the host or Docker container of the OpenTelemetry Collector. For example, the following configuration in /etc/gitlab/gitlab.rb configures the GitLab Postgres Prometheus exporter to allow network connections on port 9187 from any IP address:

postgres_exporter['listen_address'] = '0.0.0.0:9187'

Or

postgres_exporter['listen_address'] = ':9187'
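After the exporter listens on port 9187, you can add a matching receiver. The following is a minimal sketch that assumes the placeholder host gitlab.example.com resolves to the GitLab host running the exporter:

receivers:
  smartagent/gitlab-postgres:
    type: prometheus/postgres
    host: gitlab.example.com  # Placeholder: the host running the GitLab Postgres exporter
    port: 9187                # Matches postgres_exporter['listen_address']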

The following excerpt from the file /var/opt/gitlab/nginx/conf/nginx-status.conf shows the location /metrics block that contains the metric-related configuration. This file configures nginx. The statement allow 172.17.0.0/16; allows network connections from the 172.17.0.0/16 IP range. The assumption is that the IP address associated with the OpenTelemetry Collector is in that range.

server {
    ...
    location /metrics {
    ...
    allow 172.17.0.0/16;
    deny all;
    }
}
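With the allow rule in place, the Collector can scrape the nginx metrics endpoint. The following is a minimal sketch that assumes the Collector's IP address is in the 172.17.0.0/16 range and that nginx serves metrics on the default prometheus/nginx-vts port 8060 from the table; gitlab.example.com is a placeholder host:

receivers:
  smartagent/gitlab-nginx:
    type: prometheus/nginx-vts
    host: gitlab.example.com  # Placeholder: the GitLab host running nginx
    port: 8060                # Default prometheus/nginx-vts port from the table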

The following line is part of the global section of the file /etc/gitlab-runner/config.toml, which configures GitLab Runner. This statement configures GitLab Runner’s Prometheus metrics HTTP server to allow network connections on port 9252 from any IP address:

listen_address = "0.0.0.0:9252"
...
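A matching receiver for the runner’s metrics server might look like the following sketch, where runner.example.com is a placeholder host:

receivers:
  smartagent/gitlab-runner:
    type: gitlab-runner
    host: runner.example.com  # Placeholder: the host running GitLab Runner
    port: 9252                # Matches listen_address in config.toml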

Configuration πŸ”—

To use this integration of a Smart Agent monitor with the Collector:

  1. Include the Smart Agent receiver in your configuration file.

  2. Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.

Example πŸ”—

To activate this integration, add the following to your Collector configuration:

receivers:
  smartagent/gitlab:
    type: gitlab
    ... # Additional config

Next, add the services you want to monitor to the service.pipelines.metrics.receivers section of your configuration file:

receivers:
  smartagent/gitlab-sidekiq:
    type: gitlab
    host: localhost
    port: 8082
  smartagent/gitlab-workhorse:
    type: gitlab
    host: localhost
    port: 9229

# ... Other sections

service:
  pipelines:
    metrics:
      receivers:
        - smartagent/gitlab-sidekiq
        - smartagent/gitlab-workhorse

# ... Other sections

Configuration options πŸ”—

The following table shows the configuration options for this monitor:

| Option | Required | Type | Description |
| --- | --- | --- | --- |
| httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. This should be a duration string accepted by ParseDuration. The default value is 10s. |
| username | no | string | Basic Auth username to use on each request, if any. |
| password | no | string | Basic Auth password to use on each request, if any. |
| useHTTPS | no | bool | If true, the Collector connects to the server using HTTPS instead of plain HTTP. The default value is false. |
| httpHeaders | no | map of strings | A map of HTTP header names to values. Comma-separated multiple values for the same message-header are supported. |
| skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter’s TLS cert is not verified. The default value is false. |
| caCertPath | no | string | Path to the CA cert that has signed the TLS cert. Unnecessary if skipVerify is set to false. |
| clientCertPath | no | string | Path to the client TLS cert to use for connections that require TLS. |
| clientKeyPath | no | string | Path to the client TLS key to use for connections that require TLS. |
| host | yes | string | Host of the exporter. |
| port | yes | integer | Port of the exporter. |
| useServiceAccount | no | bool | Use the pod service account to authenticate. The default value is false. |
| metricPath | no | string | Path to the metrics endpoint on the exporter server. The default value is /metrics. |
| sendAllMetrics | no | bool | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the Prometheus exporter monitor directly, since there is no built-in filtering; it only applies when the exporter is embedded in other monitors. The default value is false. |
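Several of these options can be combined in one receiver, for example to scrape an exporter over HTTPS with Basic Auth. The following is a minimal sketch; the host name, credentials, and certificate path are placeholders, not values from this documentation:

receivers:
  smartagent/gitlab-unicorn:
    type: gitlab-unicorn
    host: gitlab.example.com                   # Placeholder host
    port: 8080
    metricPath: /-/metrics                     # Standard path for the Unicorn exporter
    httpTimeout: 10s                           # Duration string accepted by ParseDuration
    useHTTPS: true
    skipVerify: false
    caCertPath: /etc/ssl/certs/gitlab-ca.pem   # Placeholder path to the CA cert
    username: monitor                          # Placeholder Basic Auth credentials, if required
    password: changeme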

Metrics πŸ”—

The following metrics are available for this integration.

Notes πŸ”—

  • To learn more about the available metric types in Observability Cloud, see Metric types.

  • In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.

  • In MTS-based subscription plans, all metrics are custom.

  • To add additional metrics, see how to configure extraMetrics in Add additional metrics.

Troubleshooting πŸ”—

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

The following options are available to Splunk Observability Cloud customers, prospective customers, and free trial users:

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

To learn about even more support options, see Splunk Customer Success.