GitLab 🔗

Description 🔗

The Splunk Distribution of OpenTelemetry Collector provides this integration as the gitlab monitor type by using the Smart Agent Receiver.

GitLab is an open-source, web-based Git repository manager developed by GitLab Inc. GitLab has built-in features for creating wiki pages, issue tracking, and CI/CD pipelines. GitLab is bundled with Prometheus exporters that can be configured to export performance metrics of GitLab itself and of the bundled software that GitLab depends on. These exporters publish Prometheus metrics at endpoints that this monitor scrapes.

This monitor is available on Kubernetes, Linux, and Windows using GitLab version 9.3 or later.

Benefits 🔗

After you’ve configured the integration, you can:

  • View metrics using the built-in dashboard. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access Metric Finder and search for metrics sent by the monitor. For information about Metric Finder, see Use the Metric Finder.

Installation 🔗

Follow these steps to deploy the integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the next section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

Configuration 🔗

GitLab configuration 🔗

Follow the instructions in Monitoring GitLab with Prometheus to configure GitLab's Prometheus exporters to expose metric endpoint targets. For the GitLab Runner monitoring configuration, see GitLab Runner monitoring.

Note that configuring GitLab by editing /etc/gitlab/gitlab.rb should be accompanied by running the command gitlab-ctl reconfigure for the changes to take effect.

Similarly, configuring nginx by editing the file /var/opt/gitlab/nginx/conf/nginx-status.conf, for instance, must be followed by running the command gitlab-ctl restart. Note that changes to /var/opt/gitlab/nginx/conf/nginx-status.conf in particular are erased by subsequent runs of gitlab-ctl reconfigure, because gitlab-ctl reconfigure restores the original configuration file.

The following table shows some of the Prometheus endpoint targets with links to their respective configuration pages. Note that the gitlab_monitor target's metrics are simply the combined metrics of the gitlab_monitor_database, gitlab_monitor_process, and gitlab_monitor_sidekiq targets.

Monitor type            Reference                           Standard port   Standard path
gitlab-exporter         GitLab exporter                     9168            /metrics
gitlab-gitaly           Gitaly and Gitaly Cluster           9236            /metrics
gitlab-sidekiq          GitLab Sidekiq                      8082            /metrics
gitlab-unicorn          GitLab Unicorn                      8080            /-/metrics
gitlab-workhorse        GitLab Workhorse                    9229            /metrics
prometheus/nginx-vts    Monitoring GitLab with Prometheus   8060            /metrics
prometheus/node         Node exporter                       9100            /metrics
prometheus/postgres     PostgreSQL Server Exporter          9187            /metrics
prometheus/prometheus   Monitoring GitLab with Prometheus   9090            /metrics
prometheus/redis        Redis exporter                      9121            /metrics
gitlab-runner           GitLab Runner                       9252            /metrics

GitLab Prometheus exporters, nginx, and GitLab Runner must be configured to accept requests from the host or Docker container of the OpenTelemetry Collector. For example, the following configuration in /etc/gitlab/gitlab.rb configures the GitLab Postgres Prometheus exporter to allow network connections on port 9187 from any IP address:

postgres_exporter['listen_address'] = '0.0.0.0:9187'

The previous configuration can also be written as follows:

postgres_exporter['listen_address'] = ':9187'

The following excerpt from the file /var/opt/gitlab/nginx/conf/nginx-status.conf, which configures nginx, shows the location /metrics block that holds the metric-related configuration. The statement allow 172.17.0.0/16; allows network connections from the 172.17.0.0/16 IP range, on the assumption that the IP address of the OpenTelemetry Collector is in that range.

server {
    ...
    location /metrics {
        ...
        allow 172.17.0.0/16;
        deny all;
    }
}

The following line is part of the global section of the file /etc/gitlab-runner/config.toml, which configures GitLab Runner. The statement configures GitLab Runner's Prometheus metrics HTTP server to allow network connections on port 9252 from any IP address:

listen_address = "0.0.0.0:9252"
...

Sample configuration 🔗

Use the following configuration to monitor some of the features supported in GitLab:

monitors:
 - type: gitlab-unicorn
   host: localhost
   port: 8080

 - type: gitlab
   host: localhost
   port: 9168

 - type: gitlab-runner
   host: localhost
   port: 9252

 - type: gitlab-workhorse
   host: localhost
   port: 9229

 - type: gitlab-sidekiq
   host: localhost
   port: 8082

 - type: gitlab-gitaly
   host: localhost
   port: 9236

 - type: prometheus/postgres
   host: localhost
   port: 9187

 - type: prometheus/nginx-vts
   host: localhost
   port: 8060

You can use autodiscovery by specifying a discoveryRule instead of host and port.
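
For example, the following is a minimal sketch of an autodiscovery rule, assuming GitLab Unicorn runs in a container whose image name contains gitlab and exposes port 8080. The rule expression is illustrative only; adapt it to how the endpoint is discovered in your environment:

monitors:
 - type: gitlab-unicorn
   # Hypothetical rule: match a discovered container whose image name
   # contains "gitlab" and whose discovered port is 8080.
   discoveryRule: container_image =~ "gitlab" && port == 8080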

See GitLab for information on the monitors used in the configuration.

Splunk Distribution of OpenTelemetry Collector configuration 🔗

This monitor is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. The Smart Agent Receiver lets you use existing Smart Agent monitors as OpenTelemetry Collector metric receivers.

Using this monitor assumes that you have a configured environment with a functional Smart Agent release bundle on your system. The bundle is already provided in the x86_64/amd64 installation paths of the Splunk Distribution of OpenTelemetry Collector.

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your configuration file:

receivers:
  smartagent/gitlab:
    type: gitlab
    ... # Additional config

To complete the integration, include the Smart Agent receiver using this monitor in a metrics pipeline. To do this, add the receiver item to the service > pipelines > metrics > receivers section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers: [smartagent/gitlab]

See configuration examples for specific use cases that show how the Splunk OpenTelemetry Collector can integrate and complement existing environments.
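
For reference, a minimal end-to-end sketch that combines the receiver and pipeline sections might look like the following. The signalfx exporter and the SPLUNK_ACCESS_TOKEN and SPLUNK_REALM environment variables are assumptions; substitute the exporters and credentials that your deployment actually uses.

receivers:
  smartagent/gitlab:
    type: gitlab
    host: localhost
    port: 9168

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"   # assumed environment variable
    realm: "${SPLUNK_REALM}"                 # assumed environment variable

service:
  pipelines:
    metrics:
      receivers: [smartagent/gitlab]
      exporters: [signalfx]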

Configuration settings 🔗

The following table shows the configuration options for this monitor:

Option Required Type Description
httpTimeout no int64 HTTP timeout duration for both reads and writes. This should be a duration string accepted by ParseDuration. The default value is 10s.
username no string Basic Auth username to use on each request, if any.
password no string Basic Auth password to use on each request, if any.
useHTTPS no bool If true, the collector will connect to the server using HTTPS instead of plain HTTP. The default value is false.
httpHeaders no map of strings A map of HTTP header names to values. Multiple comma-separated values for the same header are supported.
skipVerify no bool If useHTTPS is true and this option is also true, the exporter's TLS cert will not be verified. The default value is false.
caCertPath no string Path to the CA cert that has signed the TLS cert. Not needed if skipVerify is set to true.
clientCertPath no string Path to the client TLS cert to use for TLS-required connections.
clientKeyPath no string Path to the client TLS key to use for TLS-required connections.
host yes string Host of the exporter.
port yes integer Port of the exporter.
useServiceAccount no bool Use pod service account to authenticate. The default value is false.
metricPath no string Path to the metrics endpoint on the exporter server, usually /metrics, which is the default value.
sendAllMetrics no bool Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the Prometheus exporter monitor directly since there is no built-in filtering, only when embedding it in other monitors. The default value is false.
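
As an illustration, the following sketch combines several of these options for a gitlab-unicorn monitor that scrapes a TLS-protected endpoint. The host name, certificate path, and credentials are placeholders, not values taken from this documentation:

receivers:
  smartagent/gitlab-unicorn:
    type: gitlab-unicorn
    host: gitlab.example.internal              # placeholder host
    port: 8080
    metricPath: /-/metrics                     # gitlab-unicorn serves metrics at /-/metrics
    useHTTPS: true
    skipVerify: false
    caCertPath: /etc/ssl/certs/gitlab-ca.pem   # hypothetical CA certificate
    username: metrics-user                     # hypothetical Basic Auth credentials
    password: "${GITLAB_METRICS_PASSWORD}"     # placeholder; supply your own secret
    httpTimeout: 15s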

Metrics 🔗

The following metrics are available for this integration:

Troubleshooting 🔗

If you are not able to see your data in Splunk Observability Cloud: