RabbitMQ
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the rabbitmq
monitor type to keep track of an instance of RabbitMQ.
Note
To monitor RabbitMQ instances with the OpenTelemetry Collector using native OpenTelemetry, refer to the RabbitMQ receiver component.
The integration uses the RabbitMQ Python plugin and the RabbitMQ Management HTTP API to poll for statistics on a RabbitMQ server, then reports them to the agent.
This integration is available on Kubernetes and Linux, and requires RabbitMQ 3.0 or higher.
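As an illustrative sketch (not part of the integration itself), you can query the Management HTTP API that the monitor polls directly. The host, port, and `guest`/`guest` credentials below are placeholder defaults, and the hypothetical `queue_depth` helper mirrors how queue depth is the sum of ready and unacknowledged messages:

```python
import base64
import json  # used by the live-poll example below
import urllib.request


def management_request(host: str, port: int, path: str,
                       username: str, password: str) -> urllib.request.Request:
    """Build a Basic-auth request against the RabbitMQ Management HTTP API."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        f"http://{host}:{port}{path}",
        headers={"Authorization": f"Basic {creds}"},
    )


def queue_depth(queue: dict) -> int:
    """Queue depth = ready + unacknowledged messages, as reported by /api/queues."""
    return queue.get("messages_ready", 0) + queue.get("messages_unacknowledged", 0)


# Against a live broker (placeholder host/port/credentials):
# req = management_request("localhost", 15672, "/api/queues", "guest", "guest")
# with urllib.request.urlopen(req, timeout=5) as resp:
#     for q in json.load(resp):
#         print(q["name"], queue_depth(q))

# Sample payload shaped like one element of the /api/queues response:
sample = {"name": "orders", "messages_ready": 3, "messages_unacknowledged": 2}
print(queue_depth(sample))  # 5
```

The monitor derives its queue, node, channel, connection, and exchange metrics from Management API endpoints like these.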
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
See how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
For a list of common configuration options, refer to Common configuration settings for monitors.
Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/rabbitmq:
    type: collectd/rabbitmq
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/rabbitmq]
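A more complete receiver entry might look like the following sketch. The host, port, and credential values are placeholders for your environment, and the option names are described in the configuration settings table below:

```yaml
receivers:
  smartagent/rabbitmq:
    type: collectd/rabbitmq
    host: 127.0.0.1   # placeholder: RabbitMQ host
    port: 15672       # placeholder: Management plugin port
    username: guest   # placeholder credentials
    password: guest
    collectQueues: true
    collectNodes: true
```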
Configuration settings
The following table shows the configuration options for the RabbitMQ monitor:
Option | Required | Type | Description
---|---|---|---
`pythonBinary` | No | `string` | Path to a Python binary to use to execute the Python code. If not set, a built-in runtime is used. Can include arguments to the binary.
`host` | Yes | `string` | Hostname or IP address of the RabbitMQ instance. For example, `127.0.0.1`.
`port` | Yes | `integer` | The port of the RabbitMQ instance. For example, `15672`.
`brokerName` | No | `string` | Name of the RabbitMQ instance. Can be a Go template using other configuration options. The default value is `{{.host}}-{{.port}}`.
`collectChannels` | No | `bool` | Whether to collect channels. The default value is `false`.
`collectConnections` | No | `bool` | Whether to collect connections. The default value is `false`.
`collectExchanges` | No | `bool` | Whether to collect exchanges. The default value is `false`.
`collectNodes` | No | `bool` | Whether to collect nodes. The default value is `false`.
`collectQueues` | No | `bool` | Whether to collect queues. The default value is `false`.
`httpTimeout` | No | `integer` | HTTP timeout for requests.
`verbosityLevel` | No | `string` | Verbosity level.
`username` | Yes | `string` | API username of the RabbitMQ instance.
`password` | Yes | `string` | API password of the RabbitMQ instance.
`useHTTPS` | No | `bool` | Whether to activate HTTPS. The default value is `false`.
`sslCACertFile` | No | `string` | Path to the SSL or TLS certificates file of root certificate authorities implicitly trusted by this monitor.
`sslCertFile` | No | `string` | Path to this monitor's own SSL or TLS certificate.
`sslKeyFile` | No | `string` | Path to this monitor's private SSL or TLS key file.
`sslKeyPassphrase` | No | `string` | This monitor's private SSL or TLS key file password, if any.
`sslVerify` | No | `bool` | Whether the monitor verifies the RabbitMQ Management plugin SSL or TLS certificate. The default value is `false`.
Note
You must activate each of the five `collect*` options to gather metrics for the corresponding facets of a RabbitMQ instance.
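For instance, a receiver entry that gathers metrics for all five facets might look like this sketch (the ellipsis stands for the rest of your configuration, such as host, port, and credentials):

```yaml
receivers:
  smartagent/rabbitmq:
    type: collectd/rabbitmq
    ... # host, port, credentials, and other settings
    collectChannels: true
    collectConnections: true
    collectExchanges: true
    collectNodes: true
    collectQueues: true
```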
Metrics
The following metrics are available for this integration:
Name | Description | Type | Category |
---|---|---|---|
counter.channel.message_stats.ack | The number of acknowledged messages | counter | Custom |
counter.channel.message_stats.confirm | Count of messages confirmed. | counter | Custom |
counter.channel.message_stats.deliver | Count of messages delivered in acknowledgement mode to consumers. | counter | Custom |
counter.channel.message_stats.deliver_get | Count of all messages delivered on the channel | counter | Custom |
counter.channel.message_stats.publish | Count of messages published. | counter | Custom |
counter.connection.channel_max | The maximum number of channels on the connection | counter | Custom |
counter.connection.recv_cnt | Number of packets received on the connection | counter | Custom |
counter.connection.recv_oct | Number of octets received on the connection | counter | Custom |
counter.connection.send_cnt | Number of packets sent by the connection | counter | Custom |
counter.connection.send_oct | Number of octets sent by the connection | counter | Custom |
counter.exchange.message_stats.confirm | Count of messages confirmed. | counter | Custom |
counter.exchange.message_stats.publish_in | Count of messages published "in" to an exchange, i.e. not taking account of routing. | counter | Default |
counter.exchange.message_stats.publish_out | Count of messages published "out" of an exchange, i.e. taking account of routing. | counter | Custom |
counter.node.io_read_bytes | Total number of bytes read from disk by the persister. | counter | Custom |
counter.node.io_read_count | Total number of read operations by the persister. | counter | Custom |
counter.node.mnesia_disk_tx_count | Number of Mnesia transactions which have been performed that required writes to disk. | counter | Custom |
counter.node.mnesia_ram_tx_count | Number of Mnesia transactions which have been performed that did not require writes to disk. | counter | Custom |
counter.queue.disk_reads | Total number of times messages have been read from disk by this queue since it started. | counter | Custom |
counter.queue.disk_writes | Total number of times messages have been written to disk by this queue since it started. | counter | Custom |
counter.queue.message_stats.ack | Number of acknowledged messages processed by the queue | counter | Custom |
counter.queue.message_stats.deliver | Count of messages delivered in acknowledgement mode to consumers. | counter | Default |
counter.queue.message_stats.deliver_get | Count of all messages delivered on the queue | counter | Custom |
counter.queue.message_stats.publish | Count of messages published. | counter | Custom |
gauge.channel.connection_details.peer_port | The peer port number of the channel | gauge | Custom |
gauge.channel.consumer_count | The number of consumers the channel has | gauge | Custom |
gauge.channel.global_prefetch_count | QoS prefetch limit for the entire channel, 0 if unlimited. | gauge | Custom |
gauge.channel.message_stats.ack_details.rate | How much the channel message ack count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.channel.message_stats.confirm_details.rate | How much the channel message confirm count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.channel.message_stats.deliver_details.rate | How much the channel deliver count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.channel.message_stats.deliver_get_details.rate | How much the channel message count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.channel.message_stats.publish_details.rate | How much the channel message publish count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.channel.messages_unacknowledged | Number of messages delivered via this channel but not yet acknowledged. | gauge | Custom |
gauge.channel.messages_uncommitted | Number of messages received in an as yet uncommitted transaction. | gauge | Custom |
gauge.channel.messages_unconfirmed | Number of published messages not yet confirmed. On channels not in confirm mode, this remains 0. | gauge | Custom |
gauge.channel.number | The number of the channel, which uniquely identifies it within a connection. | gauge | Default |
gauge.channel.prefetch_count | QoS prefetch limit for new consumers, 0 if unlimited. | gauge | Custom |
gauge.connection.channels | The current number of channels on the connection | gauge | Custom |
gauge.connection.connected_at | The integer timestamp of the most recent time the connection was established | gauge | Custom |
gauge.connection.frame_max | Maximum permissible size of a frame (in bytes) to negotiate with clients. | gauge | Custom |
gauge.connection.peer_port | The peer port of the connection | gauge | Custom |
gauge.connection.port | The port the connection is established on | gauge | Custom |
gauge.connection.recv_oct_details.rate | How much the connection's octets received count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.connection.send_oct_details.rate | How much the connection's octets sent count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.connection.send_pend | The number of messages in the send queue of the connection | gauge | Custom |
gauge.connection.timeout | The current timeout setting (in seconds) of the connection | gauge | Custom |
gauge.exchange.message_stats.confirm_details.rate | How much the message confirm count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.exchange.message_stats.publish_in_details.rate | How much the exchange publish-in count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.exchange.message_stats.publish_out_details.rate | How much the exchange publish-out count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.disk_free | Disk free space (in bytes) on the node | gauge | Default |
gauge.node.disk_free_details.rate | How much the disk free space has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.disk_free_limit | Point (in bytes) at which the disk alarm will go off. | gauge | Default |
gauge.node.fd_total | Total number of file descriptors available. | gauge | Default |
gauge.node.fd_used | Number of used file descriptors. | gauge | Default |
gauge.node.fd_used_details.rate | How much the number of used file descriptors has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.io_read_avg_time | Average wall time (milliseconds) for each disk read operation in the last statistics interval. | gauge | Default |
gauge.node.io_read_avg_time_details.rate | How much the I/O read average time has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.io_read_bytes_details.rate | How much the number of bytes read from disk has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.io_read_count_details.rate | How much the number of read operations has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.io_sync_avg_time | Average wall time (milliseconds) for each fsync() operation in the last statistics interval. | gauge | Default |
gauge.node.io_sync_avg_time_details.rate | How much the average I/O sync time has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.io_write_avg_time | Average wall time (milliseconds) for each disk write operation in the last statistics interval. | gauge | Default |
gauge.node.io_write_avg_time_details.rate | How much the I/O write time has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.mem_limit | Point (in bytes) at which the memory alarm will go off. | gauge | Default |
gauge.node.mem_used | Memory used in bytes. | gauge | Default |
gauge.node.mem_used_details.rate | How much the count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.mnesia_disk_tx_count_details.rate | How much the Mnesia disk transaction count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.mnesia_ram_tx_count_details.rate | How much the RAM-only Mnesia transaction count has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.net_ticktime | Current kernel net_ticktime setting for the node. | gauge | Custom |
gauge.node.proc_total | The maximum number of Erlang processes that can run in an Erlang VM. | gauge | Custom |
gauge.node.proc_used | Number of Erlang processes currently in use. | gauge | Custom |
gauge.node.proc_used_details.rate | How much the number of erlang processes in use has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.processors | Number of cores detected and usable by Erlang. | gauge | Custom |
gauge.node.run_queue | Average number of Erlang processes waiting to run. | gauge | Custom |
gauge.node.sockets_total | Number of file descriptors available for use as sockets. | gauge | Custom |
gauge.node.sockets_used | Number of file descriptors used as sockets. | gauge | Custom |
gauge.node.sockets_used_details.rate | How much the number of sockets used has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.node.uptime | Time since the Erlang VM started, in milliseconds. | gauge | Default |
gauge.queue.backing_queue_status.avg_ack_egress_rate | Rate at which unacknowledged message records leave RAM, e.g. because acks arrive or unacked messages are paged out | gauge | Custom |
gauge.queue.backing_queue_status.avg_ack_ingress_rate | Rate at which unacknowledged message records enter RAM, e.g. because messages are delivered requiring acknowledgement | gauge | Custom |
gauge.queue.backing_queue_status.avg_egress_rate | Average egress (outbound) rate, not including messages that are sent straight through to auto-acking consumers. | gauge | Custom |
gauge.queue.backing_queue_status.avg_ingress_rate | Average ingress (inbound) rate, not including messages that are sent straight through to auto-acking consumers. | gauge | Custom |
gauge.queue.backing_queue_status.len | Total backing queue length, in messages | gauge | Custom |
gauge.queue.backing_queue_status.next_seq_id | The next sequence ID to be used in the backing queue | gauge | Custom |
gauge.queue.backing_queue_status.q1 | Number of messages in backing queue q1 | gauge | Custom |
gauge.queue.backing_queue_status.q2 | Number of messages in backing queue q2 | gauge | Custom |
gauge.queue.backing_queue_status.q3 | Number of messages in backing queue q3 | gauge | Custom |
gauge.queue.backing_queue_status.q4 | Number of messages in backing queue q4 | gauge | Custom |
gauge.queue.consumer_utilisation | Fraction of the time (between 0.0 and 1.0) that the queue is able to immediately deliver messages to consumers. | gauge | Custom |
gauge.queue.consumers | Number of consumers of the queue | gauge | Default |
gauge.queue.memory | Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures. | gauge | Default |
gauge.queue.message_bytes | Sum of the size of all message bodies in the queue. This does not include the message properties (including headers) or any overhead. | gauge | Custom |
gauge.queue.message_bytes_persistent | Total number of persistent messages in the queue (will always be 0 for transient queues). | gauge | Custom |
gauge.queue.message_bytes_ram | Like message_bytes but counting only those messages which are in RAM. | gauge | Custom |
gauge.queue.message_bytes_ready | Like message_bytes but counting only those messages ready to be delivered to clients. | gauge | Custom |
gauge.queue.message_bytes_unacknowledged | Like message_bytes but counting only those messages delivered to clients but not yet acknowledged. | gauge | Custom |
gauge.queue.message_stats.ack_details.rate | How much the number of acknowledged messages has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.message_stats.deliver_details.rate | How much the count of messages delivered has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.message_stats.deliver_get_details.rate | How much the count of all messages delivered has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.message_stats.publish_details.rate | How much the count of messages published has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.messages | Sum of ready and unacknowledged messages (queue depth). | gauge | Custom |
gauge.queue.messages_details.rate | How much the queue depth has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.messages_persistent | Total number of persistent messages in the queue (will always be 0 for transient queues). | gauge | Custom |
gauge.queue.messages_ram | Total number of messages which are resident in RAM. | gauge | Custom |
gauge.queue.messages_ready | Number of messages ready to be delivered to clients. | gauge | Default |
gauge.queue.messages_ready_details.rate | How much the count of messages ready has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.messages_ready_ram | Number of messages from messages_ready which are resident in RAM. | gauge | Custom |
gauge.queue.messages_unacknowledged | Number of messages delivered to clients but not yet acknowledged. | gauge | Custom |
gauge.queue.messages_unacknowledged_details.rate | How much the count of unacknowledged messages has changed per second in the most recent sampling interval. | gauge | Custom |
gauge.queue.messages_unacknowledged_ram | Number of messages from messages_unacknowledged which are resident in RAM. | gauge | Custom |
Notes
To learn more about the metric types available in Splunk Observability Cloud, see Metric types.
In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
In MTS-based subscription plans, all metrics are custom.
To add additional metrics, see how to configure extraMetrics in Add additional metrics.
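For example, to emit one of the custom metrics from the table above in addition to the default metrics, you can list it under extraMetrics. The metric name here is one possible choice, and the ellipsis stands for the rest of your configuration:

```yaml
receivers:
  smartagent/rabbitmq:
    type: collectd/rabbitmq
    ... # other settings
    extraMetrics:
      - gauge.queue.messages_ram
```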
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.