JMX
The Splunk Distribution of the OpenTelemetry Collector uses the Smart Agent receiver with the jmx
monitor type to run an arbitrary Groovy script to convert JMX MBeans fetched from a remote Java application to SignalFx data points. This is a more flexible alternative to the GenericJMX monitor.
Note
To monitor JMX with the OpenTelemetry Collector using native OpenTelemetry components, refer to the JMX receiver.
If you are instrumenting an application with the Splunk Distribution of OpenTelemetry Java, you can capture metrics with the Java agent instead of using a JMX monitor. To learn more, see Metrics collection.
You can use the following utility helpers in the Groovy script through the util
variable, which is set in the script's context:
util.queryJMX(String objectName)
: Queries the configured JMX application for the given objectName, which can include wildcards. In any case, the return value is a List of zero or more GroovyMBean objects. GroovyMBean is a convenience wrapper that Groovy provides to make accessing attributes on the MBean simple. See http://groovy-lang.org/jmx.html for more information about the GroovyMBean object. You can use the .first() method on the returned list to access the first MBean if you are only expecting one.

util.makeGauge(String name, double val, Map<String, String> dimensions)
: A convenience function to create a SignalFx gauge data point. This creates a DataPoint instance that can be fed to output.sendDatapoint[s]. This does not send the data point; it only creates it.

util.makeCumulative(String name, double val, Map<String, String> dimensions)
: A convenience function to create a SignalFx cumulative counter data point. This creates a DataPoint instance that can be fed to output.sendDatapoint[s]. This does not send the data point; it only creates it.
The output
instance available in the script context is used to send
data to Splunk Observability Cloud. It contains the following methods:
output.sendDatapoint(DataPoint dp)
: Emits the given data point to SignalFx. Use the util.make[Gauge|Cumulative] helpers to create the DataPoint instance.

output.sendDatapoints(List<DataPoint> dp)
: Emits the given data points to SignalFx. Use the util.make[Gauge|Cumulative] helpers to create the DataPoint instances. It's slightly more efficient to send multiple data points at once, but this only matters when you're sending very high volumes of data.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
Configure the integration, as described in the Configuration section.
Restart the Splunk Distribution of the OpenTelemetry Collector.
Configuration
To use this Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
See how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
For a list of common configuration options, refer to Common configuration settings for monitors.
Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
smartagent/jmx:
type: jmx
... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers
section of your configuration file:
service:
pipelines:
metrics:
receivers: [smartagent/jmx]
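For reference, a fuller receiver entry might look like the following sketch. The connection and script options shown here follow the Smart Agent jmx monitor's settings; verify the exact option names and values against the configuration settings table for your Collector version, and treat the host, port, and script body as placeholders:

```yaml
receivers:
  smartagent/jmx:
    type: jmx
    host: localhost      # placeholder JMX host
    port: 7199           # placeholder JMX port
    groovyScript: |
      // Your Groovy script body goes here.
```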
Configuration settings
The following table shows the configuration options for this integration:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
|  | no |  |  |
|  | no |  |  |
|  | no |  |  |
|  | yes |  |  |
|  | no |  | Username for JMX authentication, if applicable. |
|  | no |  | Password for JMX authentication, if applicable. |
|  | no |  |  |
|  | no |  | The key store file password, if required. |
|  | no |  | The key store type. (default: ) |
|  | no |  | The trust store path, if the TLS profile is required. |
|  | no |  | The trust store file password, if required. |
|  | no |  |  |
|  | no |  | The realm, required by the SASL/DIGEST-MD5 profile. |
The following is an example Groovy script that replicates some of the
data presented by the Cassandra nodetool status
utility:
// Query the JMX endpoint for a single MBean.
ss = util.queryJMX("org.apache.cassandra.db:type=StorageService").first()
// Copied and modified from https://github.com/apache/cassandra
def parseFileSize(String value) {
if (!value.matches("\\d+(\\.\\d+)? (GiB|KiB|MiB|TiB|bytes)")) {
throw new IllegalArgumentException(
String.format("value %s is not a valid human-readable file size", value));
}
if (value.endsWith(" TiB")) {
return Math.round(Double.valueOf(value.replace(" TiB", "")) * 1e12);
}
else if (value.endsWith(" GiB")) {
return Math.round(Double.valueOf(value.replace(" GiB", "")) * 1e9);
}
else if (value.endsWith(" KiB")) {
return Math.round(Double.valueOf(value.replace(" KiB", "")) * 1e3);
}
else if (value.endsWith(" MiB")) {
return Math.round(Double.valueOf(value.replace(" MiB", "")) * 1e6);
}
else if (value.endsWith(" bytes")) {
return Math.round(Double.valueOf(value.replace(" bytes", "")));
}
else {
throw new IllegalStateException(String.format("FileUtils.parseFileSize() reached an illegal state parsing %s", value));
}
}
localEndpoint = ss.HostIdToEndpoint.get(ss.LocalHostId)
dims = [host_id: ss.LocalHostId, cluster_name: ss.ClusterName]
output.sendDatapoints([
// Equivalent of "Up/Down" in the `nodetool status` output.
// 1 = Live; 0 = Dead; -1 = Unknown
util.makeGauge(
"cassandra.status",
ss.LiveNodes.contains(localEndpoint) ? 1 : (ss.DeadNodes.contains(localEndpoint) ? 0 : -1),
dims),
util.makeGauge(
"cassandra.state",
ss.JoiningNodes.contains(localEndpoint) ? 3 : (ss.LeavingNodes.contains(localEndpoint) ? 2 : 1),
dims),
util.makeGauge(
"cassandra.load",
parseFileSize(ss.LoadString),
dims),
util.makeGauge(
"cassandra.ownership",
ss.Ownership.get(InetAddress.getByName(localEndpoint)),
dims)
])
Test your script carefully before using it to monitor a production JMX service. In general, scripts should only read MBean attributes, but nothing enforces that.
Metrics
There are no metrics available for this integration.
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.