Set up Splunk APM 🔗

Monitor the availability and performance of your distributed applications by sending traces to Splunk APM. A trace contains one or more spans that log requests or transactions in your system. You can also use trace data as the basis of detector alerts, and you can correlate trace data with logs and other resources.

To set up Splunk APM and begin analyzing application availability and performance, follow these steps:

  1. Get data into Splunk APM

  2. Verify that your data is coming into Splunk APM

  3. Learn what you can do with Splunk APM

  4. Customize your Splunk APM experience

Get data into Splunk APM 🔗

To begin sending spans to Splunk APM, start by choosing the right data collection method for your system.

Choose the right data collection method for your system 🔗

If you are using multiple components of the Splunk Observability Cloud Suite and want to collect host metrics, logs, or other application data in addition to traces, follow the steps in Start getting data in to Splunk Observability Cloud to get data into Observability Cloud. Then see Verify that your data is coming into Splunk APM in this topic to make sure your data is coming into Splunk APM as you expect.

If you have already deployed the upstream OpenTelemetry Collector, you can use your existing deployment to send traces to Splunk APM. See Upstream OpenTelemetry Collector for more information. However, note that using the Splunk Distribution of OpenTelemetry Collector provides a more supported experience, customized for Splunk APM.

If you want to start sending traces to Splunk APM with the Splunk Distribution of OpenTelemetry Collector using the guided setup wizards in Splunk APM, follow the steps in the sections below. To set it up yourself, see Install and configure Splunk Distribution of OpenTelemetry Collector.

Deploy a Splunk OpenTelemetry Connector on your hosts 🔗

To send traces to Splunk APM, first deploy a connector on the hosts on which your applications run. Splunk offers a set of Splunk OpenTelemetry Connectors, which are packages that provide integrated collection and forwarding for Kubernetes, Linux, and Windows hosts.

To deploy a connector, select Navigation menu > Data setup and search for the host type you’re using. Then follow the steps in the setup wizard.

See the following table for more documentation about deploying a connector on Kubernetes, Linux, and Windows hosts:

Host type | Connector | Documentation
Kubernetes | Splunk OpenTelemetry Connector for Kubernetes | Collect Kubernetes data
Linux | Splunk OpenTelemetry Connector for Linux | Collect Linux data
Windows | Splunk OpenTelemetry Connector for Windows | Collect Windows data

Instrument your applications and services to get spans into Splunk APM 🔗

Use the auto-instrumentation libraries provided by Splunk Observability Cloud to instrument services written in supported programming languages. To get the highest level of support, send spans from your applications to the OpenTelemetry Connector you deployed in the previous step.

To instrument a service, send spans from that service to an OpenTelemetry Connector deployed on the host or in the Kubernetes cluster where the service is running. How you specify the OpenTelemetry Connector endpoint depends on the language you are instrumenting.
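
For example, here is a minimal sketch of pointing a Python service at a local OpenTelemetry Connector using the upstream OpenTelemetry Python SDK. The service name, environment, and endpoint below are placeholder assumptions; the Splunk Distribution of OpenTelemetry Python performs equivalent setup for you through its auto-instrumentation entry point.

    # Sketch: manually configure the OpenTelemetry Python SDK to export
    # spans to a local OpenTelemetry Connector over OTLP gRPC.
    # Assumes the connector listens on the default OTLP gRPC port, 4317.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # "my-service" and "prod" are placeholder values.
    resource = Resource.create({
        "service.name": "my-service",
        "deployment.environment": "prod",
    })
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(
            OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
        )
    )
    trace.set_tracer_provider(provider)

    # Spans created from this tracer now flow to the connector.
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("handle-request"):
        pass  # your request-handling code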

In the following table, follow the instrumentation steps for the language each of your applications is written in.

Language | Available instrumentation | Documentation
Java | Splunk Distribution of OpenTelemetry Java | Instrument Java applications for Splunk Observability Cloud
Python | Splunk Distribution of OpenTelemetry Python | Instrument Python applications for Splunk Observability Cloud
Node.js | Splunk Distribution of OpenTelemetry JS | Instrument a Node application for Splunk Observability Cloud
.NET | SignalFx Tracing Library for .NET | Instrument a .NET application for Splunk Observability Cloud
Ruby | SignalFx Tracing Library for Ruby | Instrument a Ruby application
PHP | SignalFx Tracing Library for PHP | Instrument a PHP application
Go | SignalFx Tracing Library for Go | Instrument a Go application

After you instrument your applications, you’re ready to verify that your data is coming in.

Verify that your data is coming into Splunk APM 🔗

After you instrument your applications, wait for Splunk Observability Cloud to process incoming spans. After several minutes, select Navigation menu > APM and check that you can see your application data beginning to flow into the APM landing page.
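
If your services aren't handling real traffic yet, you can emit a test span by hand to exercise the pipeline end to end. Here is a minimal sketch, assuming the tracer provider is configured as in the earlier Python example; the span and attribute names are arbitrary.

    # Emit a single test span, then flush so it isn't lost on exit.
    from opentelemetry import trace

    tracer = trace.get_tracer("apm-smoke-test")
    with tracer.start_as_current_span("smoke-test") as span:
        span.set_attribute("test", True)

    # BatchSpanProcessor exports asynchronously; force a flush before exiting.
    trace.get_tracer_provider().force_flush()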

If your data is not appearing in APM as you expect, see Troubleshoot your instrumentation.

Learn what you can do with Splunk APM 🔗

Now that your data is flowing into Splunk APM, it's time to do some exploring. The following sections show what you can do with the main features of APM:

Assess the health of your applications with the APM landing page 🔗

When you log in to Splunk Observability Cloud and select Navigation menu > APM, you arrive on the APM landing page. You can use this dashboard of consolidated and unsampled span metrics to get a real-time snapshot of your services and Business Workflows at a glance.

This screenshot shows an example of the Splunk APM landing page.

Use the alerts and top charts on this page as a guide to what needs your attention first.

View dependencies among your applications in the Explore view 🔗

From the landing page, click on a service in a chart legend or a row in the Services table to navigate to the Explore view. This view includes the service map, which presents the dependencies and connections among your instrumented and inferred services in APM. This map is dynamically generated based on your selections in the time range, environment, workflow, service, and tag filters.

You can use these visual cues to understand dependencies, performance bottlenecks, and error propagation.

This screenshot shows an example of the Splunk APM Explore view.

Click on any service in the service map to see charts for that specific service. You can also use the Breakdown selector to break the service down by any indexed span tag.

Click on any chart in this view to show example traces that match the parameters of the chart.

Examine the latency of a particular trace in trace view 🔗

Click Traces to navigate to trace view, where you can see a complete list of all of the traces from the services you've instrumented in Splunk APM. From the list of traces, you can click on a specific trace, search by trace ID, or use advanced trace search to view the waterfall chart for a particular trace.

This screenshot shows an example of the Splunk APM trace view.

The waterfall chart provides a visualization of the latency of all of the spans that make up the trace being viewed. Under Performance Summary, you can get a snapshot of the performance of the types of spans comprising the trace.

Under the Span Performance tab, you can view a summary of span duration from each operation within each service involved in the trace and the percentage of overall trace workload that they represent.

Full-fidelity tracing, in which APM receives all traces from each of your services rather than sampling them, helps you find and solve specific problems arising in individual traces. With full-fidelity tracing, you never need to wonder whether a trace representative of a particular issue was captured by a sample.

In addition to searching individual traces, you can get an aggregate view of your traces to see where problems are occurring across your systems, using tools such as Tag Spotlight.

Get a top-down view of your services in Tag Spotlight 🔗

Return to the service map and click Tag Spotlight. Using Tag Spotlight, you can view request rate, error rate, or latency by span tag for an individual service or Business Workflow. This helps you identify which particular attributes of your system might be causing reliability or performance issues.

Rather than looking for similarities across multiple traces, you can use Tag Spotlight to gain a top-down view of your services. This lets you identify the system-wide source of issues and then drill down to find an individual trace that is representative of a wider issue.

This screenshot shows an example of Splunk APM Tag Spotlight view

The tags you see in Tag Spotlight are the span tags that Splunk APM indexes out of the box. By indexing additional span tags, you can have other tags appear in their own boxes on this page.

When you navigate to Tag Spotlight from the service map and have a specific service selected, all of the information in trace view and Tag Spotlight preserves the context of that particular service.

To learn more about Tag Spotlight, see Analyze service performance with Tag Spotlight in Splunk Observability Cloud.

Customize your Splunk APM experience 🔗

Now that you’ve explored what you can do with Splunk APM, you can consider more advanced configurations to tailor Splunk APM to your business needs.

Index additional span tags 🔗

You can index additional span tags to generate custom request, error, and duration (RED) metrics for tag values within a service. Indexed span tags become filter options within Tag Spotlight and breakdowns in the service map. RED metrics for indexed span tags are known as Troubleshooting MetricSets. To set up span tags, see Analyze services with span tags in Splunk Observability Cloud.
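
As an illustration, here is how you might record a custom span tag in application code with the OpenTelemetry Python API. The tag name and value below are hypothetical, and the tag only shows up in Tag Spotlight and service map breakdowns after you index it in APM.

    # Hypothetical example: attach a custom span tag ("tenant") that you
    # could then index in Splunk APM to generate Troubleshooting MetricSets.
    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("tenant", "acme-corp")  # placeholder tag name and value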

Set up Business Workflows 🔗

You can also use Business Workflows to correlate, monitor, and troubleshoot related traces that make up end-to-end transactions in your system. This lets you filter Service Level Indicators (SLIs) and visualizations by the transaction types you care about most. To learn more about Business Workflows, see Correlate traces to track Business Workflows.

Continue learning about Splunk APM 🔗

The following resources provide additional information about Splunk APM: