
JMX

Description

The Splunk Distribution of OpenTelemetry Collector provides this integration as the jmx monitor via the Smart Agent Receiver.

This monitor allows you to run an arbitrary Groovy script to convert JMX MBeans fetched from a remote Java application to SignalFx data points. This is a much more powerful and flexible alternative to the genericjmx monitor.

The following utility helpers are available to the Groovy script through the util variable, which is set in the script’s context:

  • util.queryJMX(String objectName): This helper queries the pre-configured JMX application for the given objectName, which can include wildcards. Whether or not wildcards are used, the return value is a List of zero or more GroovyMBean objects, each of which is a convenience wrapper that Groovy provides to make accessing attributes on the MBean simple. See http://groovy-lang.org/jmx.html for more information about the GroovyMBean object. You can use the Groovy .first() method on the returned list to access the first MBean if you are expecting only one.

  • util.makeGauge(String name, double val, Map<String, String> dimensions): A convenience function to create a SignalFx gauge data point. This creates a DataPoint instance that can be passed to output.sendDatapoint[s]; it does not send the data point, only creates it.

  • util.makeCumulative(String name, double val, Map<String, String> dimensions): A convenience function to create a SignalFx cumulative counter data point. This creates a DataPoint instance that can be passed to output.sendDatapoint[s]; it does not send the data point, only creates it.

The output instance available in the script context is used to send data to SignalFx. It provides the following methods:

  • output.sendDatapoint(DataPoint dp): Emit the given data point to SignalFx. Use the util.make[Gauge|Cumulative] helpers to create the DataPoint instance.

  • output.sendDatapoints(List<DataPoint> dp): Emit the given data points to SignalFx. We recommend using the util.make[Gauge|Cumulative] helpers to create the DataPoint instances. Sending multiple data points at once is slightly more efficient, but the difference is negligible unless you are sending very high volumes of data. A minimal usage sketch follows this list.
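
To tie the helpers together, the following minimal sketch reads heap usage from the standard java.lang:type=Memory platform MBean, which any standard JVM exposes. The metric name jvm.heap.used and the area dimension are illustrative choices for this sketch, not defaults of this monitor:

// Query the platform memory MBean; first() is safe here because exactly one matches.
def mem = util.queryJMX("java.lang:type=Memory").first()

// HeapMemoryUsage is a CompositeData value; "used" is the current heap size in bytes.
def used = mem.HeapMemoryUsage.get("used")

// makeGauge only builds the DataPoint; sendDatapoint is what actually emits it.
output.sendDatapoint(util.makeGauge("jvm.heap.used", used, [area: "heap"]))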

Installation

This monitor is available in the SignalFx Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector.

To install this integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the next section.

Configuration

The Splunk Distribution of OpenTelemetry Collector allows embedding a Smart Agent monitor configuration in an associated Smart Agent Receiver instance.

Note: To use this monitor, you must provide a jmx monitor entry in your Smart Agent or Collector configuration. Use the form appropriate to your agent type.

Smart Agent

To activate this monitor in the Smart Agent, add the following to your agent configuration:

monitors:  # All monitor config goes under this key
  - type: jmx
    ...  # Additional config

See Smart Agent example configuration for an autogenerated example of a YAML configuration file, with default values where applicable.
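
For illustration, a fuller monitor entry might look like the following sketch; the host, port, metric name, and script body are placeholders, not defaults:

monitors:
  - type: jmx
    host: 127.0.0.1   # placeholder host
    port: 7199        # placeholder port
    groovyScript: |
      def mem = util.queryJMX("java.lang:type=Memory").first()
      output.sendDatapoint(util.makeGauge("jvm.heap.used", mem.HeapMemoryUsage.get("used"), [:]))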

Splunk Distribution of OpenTelemetry Collector

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your agent configuration:

receivers:
  smartagent/jmx:
    type: jmx
    ...  # Additional config

To complete the monitor activation, you must also include the smartagent/jmx receiver item in a metrics pipeline. To do this, add the receiver item to the service > pipelines > metrics > receivers section of your configuration file, as shown in the following abridged example.
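
For example, an abridged configuration that wires the receiver into a metrics pipeline might look like this (processors, exporters, and other components are omitted):

receivers:
  smartagent/jmx:
    type: jmx
    ...  # Additional config

service:
  pipelines:
    metrics:
      receivers: [smartagent/jmx]
      # ... add your processors and exporters here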

See configuration examples for specific use cases that show how the collector can integrate and complement existing environments.

Configuration settings

The following table shows the configuration options for this monitor:

Option | Required | Type | Description
host | no | string | Host will be filled in by auto-discovery if this monitor has a discovery rule.
port | no | integer | Port will be filled in by auto-discovery if this monitor has a discovery rule. (default: 0)
serviceURL | no | string | The service URL for the JMX RMI/JMXMP endpoint. If empty, it is filled in from host and port using the standard JMX RMI template service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi. If set, host and port have no effect. For a JMXMP endpoint, the service URL must be specified explicitly; the JMXMP URL format is service:jmx:jmxmp://<host>:<port>.
groovyScript | yes | string | A literal Groovy script that generates data points from JMX MBeans. See the description above for more information on how to write this script. You can put the Groovy script in a separate file and refer to it here with the remote config reference {"#from": "/path/to/file.groovy", raw: true}, or you can include it directly in YAML using the | heredoc syntax.
username | no | string | Username for JMX authentication, if applicable.
password | no | string | Password for JMX authentication, if applicable.
keyStorePath | no | string | The key store path. Required if client authentication is enabled on the target JVM.
keyStorePassword | no | string | The key store file password, if required.
keyStoreType | no | string | The key store type. (default: jks)
trustStorePath | no | string | The trust store path. Required if the TLS profile is used.
trustStorePassword | no | string | The trust store file password, if required.
jmxRemoteProfiles | no | string | Supported JMX remote profiles are TLS in combination with the SASL profiles SASL/PLAIN, SASL/DIGEST-MD5, and SASL/CRAM-MD5. Valid values are therefore SASL/PLAIN, SASL/DIGEST-MD5, SASL/CRAM-MD5, TLS SASL/PLAIN, TLS SASL/DIGEST-MD5, and TLS SASL/CRAM-MD5.
realm | no | string | The realm. Required by the SASL/DIGEST-MD5 profile.
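
As the groovyScript row notes, the script can also be kept in a separate file and pulled in with a remote config reference. A brief sketch, with placeholder host, port, and path:

monitors:
  - type: jmx
    host: myhost    # placeholder
    port: 7199      # placeholder
    groovyScript: {"#from": "/path/to/file.groovy", raw: true}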

The following is an example Groovy script that replicates some of the data presented by the Cassandra nodetool status utility:

// Query the JMX endpoint for a single MBean.
ss = util.queryJMX("org.apache.cassandra.db:type=StorageService").first()

// Copied and modified from https://github.com/apache/cassandra
def parseFileSize(String value) {
	if (!value.matches("\\d+(\\.\\d+)? (GiB|KiB|MiB|TiB|bytes)")) {
		throw new IllegalArgumentException(
			String.format("value %s is not a valid human-readable file size", value));
	}
	if (value.endsWith(" TiB")) {
		return Math.round(Double.valueOf(value.replace(" TiB", "")) * 1e12);
	}
	else if (value.endsWith(" GiB")) {
		return Math.round(Double.valueOf(value.replace(" GiB", "")) * 1e9);
	}
	else if (value.endsWith(" KiB")) {
		return Math.round(Double.valueOf(value.replace(" KiB", "")) * 1e3);
	}
	else if (value.endsWith(" MiB")) {
		return Math.round(Double.valueOf(value.replace(" MiB", "")) * 1e6);
	}
	else if (value.endsWith(" bytes")) {
		return Math.round(Double.valueOf(value.replace(" bytes", "")));
	}
	else {
		throw new IllegalStateException(String.format("FileUtils.parseFileSize() reached an illegal state parsing %s", value));
	}
}

// Resolve this node's endpoint address and build the common dimensions.
localEndpoint = ss.HostIdToEndpoint.get(ss.LocalHostId)
dims = [host_id: ss.LocalHostId, cluster_name: ss.ClusterName]

output.sendDatapoints([
	// Equivalent of "Up/Down" in the `nodetool status` output.
	// 1 = Live; 0 = Dead; -1 = Unknown
	util.makeGauge(
		"cassandra.status",
		ss.LiveNodes.contains(localEndpoint) ? 1 : (ss.DeadNodes.contains(localEndpoint) ? 0 : -1),
		dims),

	// Node state: 3 = Joining; 2 = Leaving; 1 = neither (normal operation)
	util.makeGauge(
		"cassandra.state",
		ss.JoiningNodes.contains(localEndpoint) ? 3 : (ss.LeavingNodes.contains(localEndpoint) ? 2 : 1),
		dims),

	// Storage load in bytes, parsed from the human-readable LoadString attribute.
	util.makeGauge(
		"cassandra.load",
		parseFileSize(ss.LoadString),
		dims),

	// Fraction of the token ring owned by this node's endpoint.
	util.makeGauge(
		"cassandra.ownership",
		ss.Ownership.get(InetAddress.getByName(localEndpoint)),
		dims)
	])

Test your script carefully before using it to monitor a production JMX service. A script can do anything exposed through JMX, including writing attributes and invoking methods. In general, scripts should only read attributes, but nothing enforces that.

Metrics

This integration has no predefined metrics; the metrics it emits are determined by your Groovy script.