
HAProxy 🔗

The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the HAProxy monitor type to monitor an HAProxy instance. This monitor requires HAProxy 1.5+.

Note

To monitor your HAProxy instances, you can instead use the native OpenTelemetry HAProxy receiver. To learn more, see HAProxy receiver.

Benefits 🔗

After you configure the integration, you can access these features:

Set up 🔗

Socket configuration 🔗

The location of the HAProxy socket file is defined in the HAProxy configuration file, as shown in the following example:

global
    daemon
    stats socket /var/run/haproxy.sock
    stats timeout 2m

Note: You can use a TCP socket for stats in HAProxy. In your haproxy monitor configuration, specify the TCP address of the socket, for example, localhost:9000. In the haproxy.cfg file, change the stats socket to use the same TCP address and port, as shown in the following example:

global
    daemon
    stats socket localhost:9000
    stats timeout 2m
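
With a TCP stats socket like this, the monitor connects to HAProxy over the network. The following is a minimal sketch of the corresponding receiver entry, assuming the stats socket listens on localhost port 9000 as shown above (see the Configuration options section for the full list of options):

receivers:
    smartagent/haproxy:
        type: haproxy
        host: localhost   # address of the TCP stats socket defined in haproxy.cfg
        port: 9000        # port of the TCP stats socket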

To use a more restricted TCP socket, follow these steps:

  1. Define a backend server that listens to stats on localhost.

  2. Define a frontend proxy server that communicates with the backend server on a different port.

  3. Use ACLs on both servers to control access. Depending on how restrictive your socket is, you might need to add the signalfx-agent user to the haproxy group as follows: sudo usermod -a -G haproxy signalfx-agent

The following configuration file shows how to define a backend server and a frontend proxy:

global
    daemon
    stats socket localhost:9000
    stats timeout 2m

backend stats-backend
    mode tcp
    server stats-localhost localhost:9000

frontend stats-frontend
    bind *:9001
    default_backend stats-backend
    acl ...
    acl ...
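
The acl lines are left generic in the example because the exact rules depend on your environment. As a hypothetical illustration only, a frontend that accepts stats connections from a single trusted subnet could use rules along these lines (the ACL name and subnet are placeholders):

frontend stats-frontend
    mode tcp
    bind *:9001
    default_backend stats-backend
    acl trusted_net src 10.0.0.0/8
    tcp-request connection reject if !trusted_net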

SELinux setup 🔗

If SELinux is enabled on your server, create an SELinux policy package. Download the type enforcement file to a location on your server, then run the following commands to create and install the policy package:

$ checkmodule -M -m -o haproxy.mod haproxy.te
checkmodule:  loading policy configuration from haproxy.te
checkmodule:  policy configuration loaded
checkmodule:  writing binary representation (version 17) to haproxy.mod
$ semodule_package -o haproxy.pp -m haproxy.mod
$ sudo semodule -i haproxy.pp
$ sudo reboot
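
To confirm that the policy module is installed after the reboot, you can list the loaded SELinux modules and filter for the module name, for example:

$ sudo semodule -l | grep haproxy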

Installation 🔗

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.

  2. Configure the integration, as described in the Configuration section.

  3. Restart the Splunk Distribution of the OpenTelemetry Collector, as shown in the example after these steps.
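
How you restart the Collector depends on your platform. For example, on a Linux host where the Collector was installed as a systemd service, the restart typically looks like this:

$ sudo systemctl restart splunk-otel-collector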

Configuration 🔗

To use this integration of a Smart Agent monitor with the Collector:

  1. Include the Smart Agent receiver in your configuration file.

  2. Add the monitor type to the Collector configuration, in both the receivers and pipelines sections.

Example 🔗

To activate this integration, add the following to your Collector configuration:

receivers:
    smartagent/haproxy:
        type: haproxy
        ...  # Additional config

Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:

service:
  pipelines:
    metrics:
      receivers: [smartagent/haproxy]
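
A working pipeline also needs at least one exporter to send the metrics out of the Collector. The following sketch assumes that a signalfx exporter is already defined in the exporters section of your configuration:

service:
  pipelines:
    metrics:
      receivers: [smartagent/haproxy]
      exporters: [signalfx]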

Configuration options 🔗

The following table shows the configuration options for this monitor:

Option | Required | Type | Description
------ | -------- | ---- | -----------
pythonBinary | no | string | Path to a Python binary that should be used to execute the Python code. If not set, a built-in runtime is used. Can include arguments to the binary as well.
host | yes | string |
port | no | integer | (default: 0)
proxiesToMonitor | no | list of strings | A list of all the pxname(s) or svname(s) that you want to monitor (for example, ["http-in", "server1", "backend"]).
excludedMetrics | no | list of strings | Deprecated. Use datapointsToExclude on the monitor config block instead.
enhancedMetrics | no | bool | (default: false)
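
For example, a receiver entry that sets several of these options might look like the following sketch. The host, port, and proxy names are placeholder values for illustration:

receivers:
    smartagent/haproxy:
        type: haproxy
        host: localhost                                        # HAProxy stats address (placeholder)
        port: 9000                                             # HAProxy stats port (placeholder)
        proxiesToMonitor: ["http-in", "server1", "backend"]    # pxnames or svnames to monitor
        enhancedMetrics: false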

Metrics 🔗

The following metrics are available for this integration:

Notes 🔗

  • To learn more about the metric types available in Splunk Observability Cloud, see Metric types.

  • In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.

  • In MTS-based subscription plans, all metrics are custom.

  • To add additional metrics, see how to configure extraMetrics in Add additional metrics. A brief sketch follows this list.
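
For the extraMetrics note above, the following is a brief sketch of where the setting goes; the metric name pattern is a placeholder, and Add additional metrics describes the supported syntax:

receivers:
    smartagent/haproxy:
        type: haproxy
        extraMetrics:
          - "*"   # placeholder pattern; see Add additional metrics for the syntax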

Troubleshooting 🔗

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

The following support options are available to prospective customers and free trial users:

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Dec 09, 2024.