
Get data into Splunk APM AlwaysOn Profiling

Follow these instructions to get profiling data into Splunk APM AlwaysOn Profiling.

Prerequisites

To get data into Splunk APM AlwaysOn Profiling, you need the following:

  • AlwaysOn Profiling is activated for all host-based subscriptions. For subscriptions based on traces analyzed per minute (TAPM), check with your Splunk support representative.

Note

You don’t need Log Observer to get data into Splunk APM AlwaysOn Profiling. See Turn off logs or profiling data for more information.

Helm chart deployments

If you’re deploying the Splunk Distribution of OpenTelemetry Collector using Helm, pass the following value when installing the chart:

--set splunkObservability.profilingEnabled='true'
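
For instance, a complete install command might look like the following. This is a sketch, not a prescribed command: the release name, the splunk-otel-collector-chart repository alias, and the access token, realm, and cluster name placeholders are assumptions to replace with your own values.

helm install splunk-otel-collector \
  --set splunkObservability.accessToken='<access_token>' \
  --set splunkObservability.realm='<realm>' \
  --set clusterName='<cluster_name>' \
  --set splunkObservability.profilingEnabled='true' \
  splunk-otel-collector-chart/splunk-otel-collector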

You can also edit the parameter in the values.yaml file itself. For example:

# This option enables only the shared pipeline for logs and profiling data.
# There is no active collection of profiling data.
# Instrumentation libraries must be configured to send it to the collector.
# If you don't use AlwaysOn Profiling for Splunk APM, you can disable it.
profilingEnabled: false

If you don’t have a Log Observer entitlement and are using a version of the OTel Collector lower than 0.78.0, make sure to turn off logs collection:

logsEnabled: false
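
As a sketch, both settings can also live in a single override file that you pass to Helm with -f. The file name and the nesting under the splunkObservability key are assumptions based on the chart values shown above:

# values-override.yaml (hypothetical file name)
splunkObservability:
  profilingEnabled: true
  logsEnabled: false

You would then install with: helm install splunk-otel-collector -f values-override.yaml splunk-otel-collector-chart/splunk-otel-collector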

Note

Setting profilingEnabled to true creates the logs pipeline required by AlwaysOn Profiling, but doesn’t install the APM instrumentation. To install the instrumentation, see Get profiling data in.

Get profiling data in

Follow these instructions to get profiling data into Splunk APM using AlwaysOn Profiling:

  1. Instrument your application or service.

  2. Activate AlwaysOn Profiling.

  3. Check that Splunk Observability Cloud is receiving profiling data.

Instrument your application or service

AlwaysOn Profiling requires APM tracing data to correlate stack traces to your application requests. To instrument your application for Splunk APM, follow the steps for the appropriate programming language:

| Language | Available instrumentation | Documentation |
|---|---|---|
| Java | Splunk Distribution of OpenTelemetry Java version 1.14.2 or higher. OpenJDK versions 15.0 to 17.0.8 are not supported for memory profiling; see https://bugs.openjdk.org/browse/JDK-8309862 in the JDK bug tracker for more information. | Instrument your Java application for Splunk Observability Cloud |
| Node.js | Splunk Distribution of OpenTelemetry JS version 2.0 or higher | Instrument your Node.js application for Splunk Observability Cloud |
| .NET | Splunk Distribution of OpenTelemetry .NET version 1.3.0 or higher | Instrument your .NET application for Splunk Observability Cloud (OpenTelemetry) |
| Python | Splunk Distribution of OpenTelemetry Python version 1.15 or higher | Instrument your Python application for Splunk Observability Cloud |

Note

See Data retention in Application Performance Monitoring (APM) for information on profiling data retention.

Activate AlwaysOn Profiling

After you’ve instrumented your service for Splunk Observability Cloud and checked that APM data is getting into Splunk APM, activate AlwaysOn Profiling.

To activate AlwaysOn Profiling, follow the steps for the appropriate programming language:

Activate CPU and memory profiling

  • To activate CPU profiling, set the splunk.profiler.enabled system property or the SPLUNK_PROFILER_ENABLED environment variable to true.

  • To activate memory profiling, set the splunk.profiler.memory.enabled system property or the SPLUNK_PROFILER_MEMORY_ENABLED environment variable to true. Memory profiling also requires that splunk.profiler.enabled is set to true. See the example after this list.
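
For instance, a minimal sketch of the environment-variable route; the agent path and application JAR name are placeholders:

# Both variables are read by the Splunk Java agent at startup.
export SPLUNK_PROFILER_ENABLED=true
# Memory profiling also requires the CPU profiler to be activated.
export SPLUNK_PROFILER_MEMORY_ENABLED=true
java -javaagent:./splunk-otel-javaagent.jar -jar <your_application>.jar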

Configure profiling

  • Check that the OTLP endpoint that exports profiling data is set correctly:
    • The profiling-specific endpoint is configured through the splunk.profiler.logs-endpoint system property or the SPLUNK_PROFILER_LOGS_ENDPOINT environment variable.

    • If that endpoint is not set, then the generic OTLP endpoint is used, configured through the otel.exporter.otlp.endpoint system property or the OTEL_EXPORTER_OTLP_ENDPOINT environment variable.

    • If that endpoint is not set either, it defaults to http://localhost:4317.

    • For non-Kubernetes deployments, the OTLP endpoint has to point to http://${COLLECTOR_IP}:4317. If the collector and the profiled application run on the same host, then use http://localhost:4317. Otherwise, make sure there are no firewall rules blocking access to port 4317 from the profiled host to the collector host.

    • For Kubernetes deployments, the OTLP endpoint has to point to http://$(K8S_NODE_IP):4317, where K8S_NODE_IP is fetched from the Kubernetes downward API by setting the environment configuration on the Kubernetes pod running the application. For example:

      env:
      - name: K8S_NODE_IP
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: status.hostIP
      
  • Port 9943 is the default port for the SignalFx receiver in the collector distribution. If you change this port in your collector configuration, you need to pass the custom port to the JVM.

The following example shows how to activate the profiler using system properties:

java -javaagent:./splunk-otel-javaagent.jar \
-Dsplunk.profiler.enabled=true \
-Dsplunk.profiler.memory.enabled=true \
-Dotel.exporter.otlp.endpoint=http(s)://collector:4317 \
-Dsplunk.metrics.endpoint=http(s)://collector:9943 \
-jar <your_application>.jar

For more configuration options, including setting a separate endpoint for profiling data, see Java settings for AlwaysOn Profiling.
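
As an illustration, a sketch that routes profiling data to its own endpoint through the splunk.profiler.logs-endpoint property described earlier; the collector-host name is a placeholder, not a real default:

java -javaagent:./splunk-otel-javaagent.jar \
-Dsplunk.profiler.enabled=true \
-Dsplunk.profiler.logs-endpoint=http://collector-host:4317 \
-jar <your_application>.jar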

Note

AlwaysOn Profiling is not supported on Oracle JDK 8 and IBM J9.

Check that Splunk Observability Cloud is receiving profiling data

After you set up and activate AlwaysOn Profiling, check that profiling data is coming in:

  1. Log in to Splunk Observability Cloud.

  2. In the navigation menu, select APM.

  3. In Splunk APM, select AlwaysOn Profiling.

  4. Select a service, and switch from the CPU view to the Memory view.

  5. If your service runs as multiple instances, select the instance you’re interested in by choosing the host, container, and process ID.

  6. If you’ve activated memory profiling, explore memory metrics. See Memory profiling metrics.

Activate AlwaysOn Profiling in a gateway deployment

Follow these steps to set up AlwaysOn Profiling with a collector in data forwarding or gateway mode, similar to the following example gateway deployment:

Example gateway deployment: (1) Instrumentation agent → (2) Collector in host (agent) monitoring mode → (3) Collector in data forwarding (gateway) mode → (4) Splunk Observability Cloud.
  1. Point the instrumentation agent to the OTLP gRPC receiver of the collector in host monitoring (agent) mode, using the host and port that the receiver listens on.

  2. Configure the collector in host monitoring (agent) mode with the following components:

    1. An OTLP gRPC receiver

    2. An OTLP exporter pointed at the collector in data forwarding (gateway) mode

    3. A logs pipeline that connects the receiver and the exporter. For example, the following is the default agent configuration with the necessary adjustment to send to a gateway; see the Splunk OpenTelemetry Collector repository on GitHub.

    service:
       pipelines:
          logs:
             receivers: [fluentforward, otlp]
             processors:
             - memory_limiter
             - batch
             - resourcedetection
             #- resource/add_environment
             #exporters: [splunk_hec, splunk_hec/profiling]
             # Use instead when sending to gateway
             exporters: [otlp]
    
  3. Configure the collector in data forwarding (gateway) mode (3) with the following components:
    1. An OTLP gRPC receiver

    2. A splunk_hec exporter

    3. A logs pipeline that connects the receiver and the exporter
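
As a reference point, a minimal sketch of the gateway-side configuration described in step 3. The ingest endpoint, the token placeholder, and the logs/profiling pipeline name are assumptions; check the defaults for your own realm and collector version:

receivers:
  otlp:
    protocols:
      grpc:
        # Listen for OTLP gRPC data from agents on the default port.
        endpoint: 0.0.0.0:4317

exporters:
  splunk_hec/profiling:
    # <access_token> and <realm> are placeholders for your own values.
    token: "<access_token>"
    endpoint: "https://ingest.<realm>.signalfx.com/v1/log"

service:
  pipelines:
    logs/profiling:
      receivers: [otlp]
      exporters: [splunk_hec/profiling]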