HAProxy 🔗
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the HAProxy monitor type to monitor an HAProxy instance. This monitor requires HAProxy 1.5+.
Benefits 🔗
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Set up 🔗
Socket configuration 🔗
The location of the HAProxy socket file is defined in the HAProxy configuration file, as shown in the following example:
global
daemon
stats socket /var/run/haproxy.sock
stats timeout 2m
Note: You can use a TCP socket for stats in HAProxy. In your haproxy plugin configuration file, specify the TCP address for the socket. For example, you can use https://www.example.com/socket:9000.
In the haproxy.cfg file, change the stats socket to use the same TCP address and port, as shown in the following example:
global
daemon
stats socket localhost:9000
stats timeout 2m
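After the stats socket is reachable over TCP, you can point the monitor at that address through the host and port options listed under Configuration options. The following is a minimal sketch; the values match the localhost:9000 example above, so adjust them to your own stats socket address:
receivers:
  smartagent/haproxy:
    type: haproxy
    host: localhost   # host portion of the stats socket address
    port: 9000        # port portion of the stats socket address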
To use a more restricted TCP socket, follow these steps:
Define a backend server that listens to stats on localhost.
Define a frontend proxy server that communicates with the backend server on a different port.
Use ACLs on both servers to control access. Depending on how restrictive your socket is, you might need to add the signalfx-agent user to the haproxy group as follows:
sudo usermod -a -G haproxy signalfx-agent
The following configuration file shows how to define a backend server and a frontend proxy; a sketch of example ACL rules follows the configuration:
global
daemon
stats socket localhost:9000
stats timeout 2m
backend stats-backend
mode tcp
server stats-localhost localhost:9000
frontend stats-frontend
bind *:9001
default_backend stats-backend
acl ...
acl ...
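The acl lines in the previous example are placeholders. As an illustration only, the following sketch shows ACL rules that you could substitute in the stats-frontend section to accept connections only from a hypothetical trusted subnet; the subnet and the ACL name are examples, so replace them with your own values:
acl trusted-src src 10.0.0.0/8              # example: trusted source addresses
tcp-request connection reject if !trusted-src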
SELinux setup 🔗
If you have SELinux activated, create an SELinux policy package by downloading the type enforcement file to a location on your server. Run the following commands to create and install the policy package:
$ checkmodule -M -m -o haproxy.mod haproxy.te
checkmodule: loading policy configuration from haproxy.te
checkmodule: policy configuration loaded
checkmodule: writing binary representation (version 17) to haproxy.mod
$ semodule_package -o haproxy.pp -m haproxy.mod
$ sudo semodule -i haproxy.pp
$ sudo reboot
Installation 🔗
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration 🔗
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
Read more on how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
Learn about config options in Collector default configuration.
Example 🔗
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/haproxy:
    type: haproxy
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/haproxy]
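Putting the two snippets together, a complete metrics pipeline might look like the following sketch. The signalfx exporter, the access token placeholder, and the host and port values are assumptions for illustration; keep the exporters and processors that your deployment already defines:
receivers:
  smartagent/haproxy:
    type: haproxy
    host: localhost            # assumed stats socket host
    port: 9000                 # assumed stats socket port

exporters:
  signalfx:                    # assumed exporter; use your own
    access_token: "${SFX_TOKEN}"
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/haproxy]
      exporters: [signalfx]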
Configuration options 🔗
The following table shows the configuration options for this monitor:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
| `pythonBinary` | no | `string` | Path to a python binary that should be used to execute the Python code. If not set, a built-in runtime will be used. Can include arguments to the binary as well. |
| `host` | yes | `string` | |
| `port` | no | `integer` | (default: `0`) |
| `proxiesToMonitor` | no | `list of strings` | A list of all the pxname(s) or svname(s) that you want to monitor (e.g. ...) |
| `excludedMetrics` | no | `list of strings` | Deprecated. Please use `datapointsToExclude` on the monitor config block instead. |
| `enhancedMetrics` | no | `bool` | (default: `false`) |
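As an illustration of how these options fit into the receiver block, the following sketch assumes a stats socket reachable at localhost:9000 and limits collection to the proxy names defined in the earlier configuration example; adjust the values to your environment:
receivers:
  smartagent/haproxy:
    type: haproxy
    host: localhost            # required
    port: 9000
    proxiesToMonitor:          # optional: pxname or svname values to include
      - stats-frontend
      - stats-backend
    enhancedMetrics: true      # optional: report additional metrics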
Metrics 🔗
The following metrics are available for this integration:
Notes 🔗
Learn more about the available metric types in Observability Cloud.
Default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See more about metric categories.
To add additional metrics, see how to configure extraMetrics using the Collector.
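For example, the following is a minimal sketch that turns on additional metrics for this receiver through extraMetrics; the metric name pattern is a placeholder, so replace it with the metrics you need:
receivers:
  smartagent/haproxy:
    type: haproxy
    host: localhost            # assumed values, as in the earlier examples
    port: 9000
    extraMetrics:
      - "haproxy_session_*"    # placeholder pattern; replace with real metric names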
Troubleshooting 🔗
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Submit a case in the Splunk Support Portal
Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers
Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
Available to Splunk Observability Cloud customers and free trial users
To learn how to join, see Get Started with Splunk Community - Chat groups
To learn about even more support options, see Splunk Customer Success.