Get data into Splunk APM AlwaysOn Profiling
Follow these instructions to get profiling data into Splunk APM AlwaysOn Profiling.
Prerequisites
To get data into Splunk APM AlwaysOn Profiling, you need the following:
Splunk APM activated for your Splunk Observability Cloud organization.
Splunk Distribution of OpenTelemetry Collector version 0.44.0 or higher running on the host. See Get started with the Splunk Distribution of the OpenTelemetry Collector. If the version of your Splunk OTel Collector is lower than 0.44.0, see Check the OpenTelemetry Collector configuration.
AlwaysOn Profiling is activated for all host-based subscriptions. For subscriptions based on traces analyzed per minute (TAPM), check with your Splunk support representative.
Helm chart deployments
If you're deploying the Splunk Distribution of the OpenTelemetry Collector using Helm, pass the following value when installing the chart:
--set splunkObservability.profilingEnabled='true'
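For example, a complete install command might look like the following sketch. The realm, access token, and cluster name are placeholders, and the chart reference assumes the default splunk-otel-collector-chart Helm repository:
helm install splunk-otel-collector \
  --set splunkObservability.realm='<realm>' \
  --set splunkObservability.accessToken='<access_token>' \
  --set clusterName='<cluster_name>' \
  --set splunkObservability.profilingEnabled='true' \
  splunk-otel-collector-chart/splunk-otel-collector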
You can also edit the parameter in the values.yaml file itself. For example:
# This option enables only the shared pipeline for logs and profiling data.
# There is no active collection of profiling data.
# Instrumentation libraries must be configured to send it to the collector.
# If you don't use AlwaysOn Profiling for Splunk APM, you can disable it.
profilingEnabled: false
If you are using a version of the OTel Collector lower than 0.78.0, make sure to turn off logs collection:
logsEnabled: false
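For example, the relevant section of values.yaml might look like the following sketch. It assumes both settings sit under splunkObservability in your chart version; check your chart's values.yaml for the exact placement:
splunkObservability:
  # Creates the shared logs pipeline required by AlwaysOn Profiling.
  profilingEnabled: true
  # Only required for collector versions lower than 0.78.0.
  logsEnabled: false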
Note
Setting profilingEnabled to true creates the logs pipeline required by AlwaysOn Profiling, but doesn't install the APM instrumentation. To install the instrumentation, see Get profiling data in.
Get profiling data in
Follow these instructions to get profiling data into Splunk APM using AlwaysOn Profiling:
Instrument your application or service
AlwaysOn Profiling requires APM tracing data to correlate stack traces to your application requests. To instrument your application for Splunk APM, follow the steps for the appropriate programming language:
Language | Available instrumentation | Documentation
---|---|---
Java | Splunk Distribution of OpenTelemetry Java version 1.14.2 or higher. OpenJDK versions 15.0 to 17.0.8 are not supported for memory profiling. See https://bugs.openjdk.org/browse/JDK-8309862 in the JDK bug tracker for more information. |
Node.js | Splunk Distribution of OpenTelemetry JS version 2.0 or higher | Instrument your Node.js application for Splunk Observability Cloud
.NET | Splunk Distribution of OpenTelemetry .NET version 1.3.0 or higher | Instrument your .NET application for Splunk Observability Cloud (OpenTelemetry)
Python | Splunk Distribution of OpenTelemetry Python version 1.15 or higher |
Note
See Data retention in Application Performance Monitoring (APM) for information on profiling data retention.
Activate AlwaysOn Profiling
After you've instrumented your service for Splunk Observability Cloud and checked that APM data is getting into Splunk APM, activate AlwaysOn Profiling.
To activate AlwaysOn Profiling, follow the steps for the appropriate programming language:
Java
Activate CPU and memory profiling
To use CPU profiling, activate the splunk.profiler.enabled system property, or set the SPLUNK_PROFILER_ENABLED environment variable to true.
Activate memory profiling by setting the splunk.profiler.memory.enabled system property or the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true. To activate memory profiling, the splunk.profiler.enabled property must be set to true.
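If you prefer environment variables over system properties, a minimal sketch looks like this; the agent path and application JAR are placeholders taken from the example further down:
export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
java -javaagent:./splunk-otel-javaagent.jar -jar <your_application>.jar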
Configure profiling
- Check that the OTLP endpoint that exports profiling data is set correctly:
The profiling-specific endpoint is configured through the splunk.profiler.logs-endpoint system property or the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable. If that endpoint is not set, then the generic OTLP endpoint is used, configured through the otel.exporter.otlp.endpoint system property or the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. If neither endpoint is set, it defaults to http://localhost:4317.
For non-Kubernetes deployments, the OTLP endpoint has to point to http://${COLLECTOR_IP}:4317. If the collector and the profiled application run on the same host, then use http://localhost:4317. Otherwise, make sure there are no firewall rules blocking access to port 4317 from the profiled host to the collector host.
For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
Port 9943 is the default port for the SignalFx receiver in the collector distribution. If you change this port in your collector configuration, you need to pass the custom port to the JVM.
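Building on the downward API snippet in the previous list, one way to wire the endpoint into a pod spec is the following sketch; it assumes a standard Kubernetes container definition and uses the generic OTLP endpoint variable:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(K8S_NODE_IP):4317"
  - name: SPLUNK_PROFILER_ENABLED
    value: "true"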
The following example shows how to activate the profiler using the system property:
java -javaagent:./splunk-otel-javaagent.jar \
-Dsplunk.profiler.enabled=true \
-Dsplunk.profiler.memory.enabled=true \
-Dotel.exporter.otlp.endpoint=http(s)://collector:4317 \
-Dsplunk.metrics.endpoint=http(s)://collector:9943 \
-jar <your_application>.jar
For more configuration options, including setting a separate endpoint for profiling data, see Java settings for AlwaysOn Profiling.
Note
AlwaysOn Profiling is not supported on Oracle JDK 8 and IBM J9.
Node.js
Requirements
AlwaysOn Profiling requires Node.js 16 or higher.
Instrumentation
Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true.
Activate memory profiling by setting the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true.
- Check the OTLP endpoint in the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable:
For non-Kubernetes deployments, the OTLP endpoint has to point to http://${COLLECTOR_IP}:4317. If the collector and the profiled application run on the same host, then use http://localhost:4317. Otherwise, make sure there are no firewall rules blocking access to port 4317 from the profiled host to the collector host.
For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
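If you use environment variables, a minimal sketch for starting an already auto-instrumented application looks like this; it assumes the @splunk/otel package is installed and that app.js is your entry point:
export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
export SPLUNK_PROFILER_LOGS_ENDPOINT=http://localhost:4317
node -r @splunk/otel/instrument app.js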
The following example shows how to activate the profiler from your application's code:
const { start } = require('@splunk/otel');

start({
  serviceName: '<service-name>',
  endpoint: 'collectorhost:port',
  profiling: {                    // Activates CPU profiling
    memoryProfilingEnabled: true, // Activates memory profiling
  },
});
For more configuration options, including setting a separate endpoint for profiling data, see Node.js settings for AlwaysOn Profiling.
.NET
Requirements
AlwaysOn Profiling requires .NET 6.0 or higher.
Note
.NET Framework is not supported.
Instrumentation
Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true for your .NET process.
Activate memory profiling by setting the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true.
The SPLUNK_PROFILER_LOGS_ENDPOINT environment variable points to http://localhost:4318/v1/logs by default. You can reconfigure it to point to the Splunk Distribution of OpenTelemetry Collector.
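For example, on Linux you might export the variables before starting your instrumented service; this is a sketch, and MyService.dll is a hypothetical application name:
export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true
export SPLUNK_PROFILER_LOGS_ENDPOINT=http://localhost:4318/v1/logs
dotnet MyService.dll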
For more configuration options, including setting a separate endpoint for profiling data, see .NET OTel settings for AlwaysOn Profiling.
Python
Note
AlwaysOn Profiling for Python is in beta development. This feature is provided by Splunk to you "as is" without any warranties, maintenance and support, or service-level commitments. Use of this feature is subject to the Splunk General Terms.
Requirements
AlwaysOn Profiling requires Python 3.7.2 or higher.
Instrumentation
Activate the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true, or call the start_profiling function in your application code.
Check the OTLP endpoint in the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable:
For non-Kubernetes environments, make sure that the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable points to http://localhost:4317.
For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:
env:
  - name: K8S_NODE_IP
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: status.hostIP
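For example, with environment variables and the Splunk Python auto-instrumentation, a run command might look like the following sketch; the splunk-py-trace entry point and the main.py file name are assumptions based on the Splunk Distribution of OpenTelemetry Python:
export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_LOGS_ENDPOINT=http://localhost:4317
splunk-py-trace python main.py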
The following example shows how to activate the profiler from your application's code:
from splunk_otel.profiling import start_profiling

# Activates CPU profiling
# All arguments are optional
start_profiling(
    service_name='my-python-service',
    resource_attributes={
        'service.version': '3.1',
        'deployment.environment': 'production',
    },
    endpoint='http://localhost:4317',
)
For more configuration options, see Python settings for AlwaysOn Profiling.
Check that Splunk Observability Cloud is receiving profiling data
After you set up and activate AlwaysOn Profiling, check that profiling data is coming in:
Log in to Splunk Observability Cloud.
In the navigation menu, select APM.
In Splunk APM, select AlwaysOn Profiling.
Select a service, and switch from the CPU view to the Memory view.
If your service runs in multiple instances, select the instance that you're interested in by selecting the host, container, and process ID.
If you've activated memory profiling, explore memory metrics. See Memory profiling metrics.
Activate AlwaysOn Profiling in a gateway deployment
Follow these steps to set up AlwaysOn Profiling with a collector running in data forwarding (gateway) mode:
Point the instrumentation agent to the OTLP gRPC receiver for the collector in host monitoring (agent) mode. The OTLP gRPC receiver must be running on the same host and port as the collector in host monitoring (agent) mode.
Configure the collector in host monitoring (agent) mode with the following components:
An OTLP gRPC receiver
An OTLP exporter pointed at the collector in data forwarding (gateway) mode
A logs pipeline that connects the receiver and the exporter. For example, see the default agent configuration with the necessary adjustment to send to a gateway in the Splunk OpenTelemetry Collector repository on GitHub.
service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      #exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      exporters: [otlp]
Configure the collector in data forwarding (gateway) mode with the following components:
An OTLP gRPC receiver
A splunk_hec exporter
A logs pipeline that connects the receiver and the exporter
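The following sketch shows what the corresponding logs pipeline on the gateway might look like. It is based on the default gateway configuration: the splunk_hec/profiling exporter settings (token and endpoint) are assumptions that you need to verify against your collector version and ingest URL.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_INGEST_URL}/v1/log"

service:
  pipelines:
    logs/profiling:
      receivers: [otlp]
      processors: [batch]
      exporters: [splunk_hec/profiling]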