
Troubleshoot Node.js instrumentation for Splunk Observability Cloud 🔗

When you instrument a Node.js application using the Splunk Distribution of OpenTelemetry JS and you don’t see your data in Splunk Observability Cloud, follow these troubleshooting steps.

Steps for troubleshooting Node.js OpenTelemetry issues 🔗

The following steps can help you troubleshoot Node.js instrumentation issues:

  1. Activate diagnostic logging

  2. Activate debug metrics

Activate diagnostic logging 🔗

Diagnostic logs can help you troubleshoot instrumentation issues.

To output instrumentation logs to the console, set the OTEL_LOG_LEVEL environment variable to debug in the same scope where the application is running. For example, OTEL_LOG_LEVEL=debug node start.js. Don't add the variable to the .env file, because that file is loaded too late for the setting to take effect.
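
You can also export the variable in your shell session before starting the application. For example, in a bash-like shell:

# Export the variable for the current shell session, then start the application
export OTEL_LOG_LEVEL=debug
node start.js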

You can also activate debug logging programmatically by setting the logLevel argument. For example:

// start is exported by the Splunk Distribution of OpenTelemetry JS
const { start } = require('@splunk/otel');

start({
   logLevel: 'debug',
   metrics: {
      // configuration passed to the metrics signal
   },
   profiling: {
      // configuration passed to the profiling signal
   },
   tracing: {
      // configuration passed to the tracing signal
   },
});

To deactivate debug logging in your code, call setLogger() with no arguments, as in the following example:

const { diag } = require('@opentelemetry/api');
// Calling setLogger with no arguments disables further diagnostic output
diag.setLogger();

Note

Activate debug logging only when needed. Debug mode requires more resources.

Activate debug metrics 🔗

You can activate internal debug metrics by setting the SPLUNK_DEBUG_METRICS_ENABLED environment variable to true in the same scope where the application is running. For example, SPLUNK_DEBUG_METRICS_ENABLED=true node start.js. Don’t add it to the .env file, as it’s loaded later.

For more information, see Debug metrics.

Trace exporter issues 🔗

By default, the Splunk Distribution of OpenTelemetry JS uses the OTLP exporter. Any issue affecting the export of traces produces an error in the debug logs.

OTLP can’t export spans 🔗

The following error in the logs means that the instrumentation can’t send trace data to the OpenTelemetry Collector:

@opentelemetry/instrumentation-http http.ClientRequest return request
{"stack":"Error: connect ECONNREFUSED 127.0.0.1:55681\n    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1148:16)\n    at TCPConnectWrap.callbackTrampoline (internal/async_hooks.js:131:17)","message":"connect ECONNREFUSED 127.0.0.1:55681","errno":"-111","code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":"55681","name":"Error"}

To troubleshoot the lack of connectivity between the OTLP exporter and the OTel Collector, follow these steps:

  1. Make sure that OTEL_EXPORTER_OTLP_ENDPOINT points to the correct OpenTelemetry Collector instance host.

  2. Check that your collector instance is configured and running. See Troubleshoot the Splunk OpenTelemetry Collector.

  3. Check that the OTLP receiver is activated in the OTel Collector and plugged into the traces pipeline, as shown in the sample configuration after this list.

  4. Check that the OTel Collector is available at the following address: http://<host>:4317. Verify that your URL is correct.
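
As a reference, the following snippet shows a minimal sketch of an OTLP receiver plugged into a traces pipeline. The batch processor and otlphttp exporter names are placeholders; keep the processors and exporters from your existing Collector configuration:

receivers:
  otlp:
    protocols:
      grpc:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # Placeholder: use the trace exporters defined in your configuration
      exporters: [otlphttp]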

401 error when sending spans 🔗

If you send traces directly to Splunk Observability Cloud and receive a 401 error code, the authentication token specified in SPLUNK_ACCESS_TOKEN is invalid. The following are possible reasons:

  • The value is null.

  • The value is not a well-formed token.

  • The token is not an access token that has authScope set to ingest.

Make sure that you're using a valid Splunk access token with ingest scope when sending data directly to Splunk Observability Cloud. See Retrieve and manage user API access tokens using Splunk Observability Cloud.
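
For example, assuming you send spans directly to Splunk Observability Cloud without a Collector, set your realm and a valid ingest access token in the same scope where the application is running. The following values are placeholders:

SPLUNK_REALM=<realm> SPLUNK_ACCESS_TOKEN=<access_token> node start.js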

Webpack compatibility issues 🔗

The Splunk Distribution of OpenTelemetry JS can't instrument modules bundled using Webpack, because OpenTelemetry can instrument a library only by intercepting its require calls.

To instrument Node.js applications that use bundled modules, use the Webpack externals configuration option so that the require calls are visible to OpenTelemetry.

The following example shows how to edit the webpack.config.js file to instrument the express framework:

module.exports = {
   // ...
   externalsType: "node-commonjs",
   externals: [
      // See https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node
      // for a list of supported instrumentations. Use the require name of the library or framework,
      // not the name of the instrumentation. For example, "tedious" instead of "instrumentation-tedious".
      "express"
   ]
};

When added to externals, the express framework loads through the require method and OpenTelemetry can instrument it. Make sure that the package is in the node_modules folder so that the require method can find it:

# Install the library or framework and add it to node_modules
npm install express

Note

You don’t need to add Node.js core modules such as http, net, and dns to the externals list.
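
If your application bundles many instrumented libraries, listing each one in externals can become tedious. The following sketch uses a webpack 5 function external to treat every package import, that is, anything that isn't a relative or absolute path, as external so that it loads through require at runtime. The filter is an example; adapt it to your build:

module.exports = {
   // ...
   externalsType: "node-commonjs",
   externals: [
      // Treat every package import as external so it loads through require
      // and stays visible to OpenTelemetry. Relative and absolute paths stay bundled.
      function ({ request }, callback) {
         if (request && !request.startsWith(".") && !request.startsWith("/")) {
            // Externalize using the node-commonjs type set in externalsType
            return callback(null, request);
         }
         // Bundle everything else
         callback();
      },
   ],
};

As with the express example, make sure that the externalized packages are available in the node_modules folder at runtime.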

Troubleshoot AlwaysOn Profiling for Node.js 🔗

See the following common issues and fixes for AlwaysOn Profiling:

Check that AlwaysOn Profiling is activated 🔗

Make sure that you’ve activated the profiler by setting the SPLUNK_PROFILER_ENABLED environment variable to true. See Node.js settings for AlwaysOn Profiling.
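
For example, if you start the application from the command line:

SPLUNK_PROFILER_ENABLED=true node start.js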

Unsupported Node.js version 🔗

To use AlwaysOn Profiling, upgrade to Node.js version 16 or higher.
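
To check which version of Node.js your application runs on, use the following command:

node --version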

AlwaysOn Profiling data and logs don’t appear in Splunk Observability Cloud 🔗

Collector configuration issues might prevent AlwaysOn Profiling data and logs from appearing in Splunk Observability Cloud.

To solve this issue, do the following:

  1. Check the configuration of the Node.js agent, especially SPLUNK_PROFILER_LOGS_ENDPOINT.

  2. Verify that the Splunk Distribution of OpenTelemetry Collector is running at the expected endpoint and that the application host or container can resolve the host name and connect to the OTLP port.

  3. Make sure that you’re running the Splunk Distribution of OpenTelemetry Collector and that the version is 0.34 or higher. Other collector distributions might not be able to route the log data that contains profiling data.

  4. A custom configuration might override settings that let the collector handle profiling data. Make sure to configure an otlp receiver and a splunk_hec exporter with correct token and endpoint fields. The profiling pipeline must use the OTLP receiver and Splunk HEC exporter you’ve configured.

The following snippet contains a sample profiling pipeline:

receivers:
  otlp:
    protocols:
      grpc:

exporters:
  # Profiling
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_INGEST_URL}/v1/log"
    log_data_enabled: false

processors:
  batch:
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}

service:
  pipelines:
    logs/profiling:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      # splunk_hec is the log exporter defined elsewhere in the default Collector configuration
      exporters: [splunk_hec, splunk_hec/profiling]

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.
