
Transform processor

The transform processor is an OpenTelemetry Collector component that modifies matching spans, metrics, or logs through statements. Use cases include, among others, converting metrics to a different type, replacing or deleting keys, and setting fields depending on predefined conditions.

Statements are functions of the OpenTelemetry Transformation Language (OTTL) and are applied to telemetry following their order in the list. The transform processor includes additional functions for converting metric types. Statements transform data according to the OTTL context you define, for example Span or DataPoint.
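
Each statement is a single OTTL function call, optionally guarded by a where clause. For example, the following span statement, a minimal sketch with an illustrative attribute name, sets a default value only when the attribute is missing:

set(attributes["deployment.environment"], "production") where attributes["deployment.environment"] == nil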

The transform processor supports the following contexts:

Signal     Supported contexts

Traces     resource β†’ scope β†’ span β†’ spanevent

Metrics    resource β†’ scope β†’ metric β†’ datapoint

Logs       resource β†’ scope β†’ log

Statements can transform telemetry of a higher context. For example, statements applied to a data point can access the metric and resource of that data point. Access to lower contexts isn't possible; for example, you can't use a span statement to transform individual span events. As a general rule, associate statements with the context you want to transform.
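
For example, the following sketch copies a resource attribute onto every data point in a metrics pipeline. The target attribute name is illustrative:

transform:
  metric_statements:
    - context: datapoint
      statements:
        # Read from the resource (a higher context) while transforming data points
        - set(attributes["host"], resource.attributes["host.name"])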

Caution

Modifying telemetry might have unintended consequences, such as orphaned spans or logs, identity conflicts, and wrong metric conversions. Always test transformations before releasing them in a production environment.

Get started

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform. See Get started: Understand and use the Collector.

  2. Configure the transform processor as described in the next section.

  3. Restart the Collector.

Sample configuration

To activate the transform processor for a pipeline, add transform to the processors section of the configuration. For example:

transform:
  error_mode: ignore
  # Statements can be trace, metric, or log
  <trace|metric|log>_statements:
    - context: <context>
      statements:
        - <statement>
        - <statement>
        - <statement>
    - context: <context>
      statements:
        - <statement>
        - <statement>
        - <statement>

You can then add the transform processor to any compatible pipeline. For example:

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
      - transform
      - memory_limiter
      - batch
      - resourcedetection
      exporters: [sapm, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx]
      processors:
      - transform
      - memory_limiter
      - batch
      - resourcedetection
      exporters: [signalfx]
    logs:
      receivers: [fluentforward, otlp]
      processors:
      - transform
      - memory_limiter
      - batch
      - resourcedetection
      exporters: [splunk_hec]

The error_mode field describes how the processor reacts to errors when processing statements:

  • error_mode: ignore tells the processor to ignore and log errors, then continue with the next statement. This is the recommended mode.

  • error_mode: silent tells the processor to ignore errors without logging them and continue execution.

  • error_mode: propagate tells the processor to return errors up the pipeline. The Collector drops the payload as a result. This is the default mode.

For more information on OTTL functions and syntax, see the OpenTelemetry Transformation Language (OTTL) documentation in the opentelemetry-collector-contrib repository on GitHub.

Configuration examples

The following sample configurations show how to perform different transformations on spans, metrics, and logs.

Transform Kubernetes object logs

The following example shows how to edit logs received through the k8sobjects receiver. Shortening attribute keys can be helpful when visualizing objects in a dashboard or setting up alerts.

transform:
  error_mode: ignore
  log_statements:
    - context: log
      statements:
        - replace_all_patterns(attributes, "key", "(object\\.)(.*\\.)", "object.")
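
With this pattern, a hypothetical attribute key such as object.metadata.name becomes object.name: the expression matches everything between the object. prefix and the last segment of the key, and the replacement drops it.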

Edit resources and spans for size

The following example shows how to edit resources and spans by capping each attribute set at 100 entries and truncating attribute values to 4,096 characters. The resource statement drops all keys except the ones indicated in keep_keys.

transform:
  error_mode: ignore
  trace_statements:
    - context: resource
      statements:
        # Only keep the following keys
        - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"])
        - limit(attributes, 100, [])
        - truncate_all(attributes, 4096)
    - context: span
      statements:
        - limit(attributes, 100, [])
        - truncate_all(attributes, 4096)
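
In the limit function, the third argument lists priority keys that are never dropped when the limit is enforced; the empty list here gives no attribute special treatment.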

Convert data points to different types

The following example shows how to convert data points to different types depending on their metric names, using the functions included in the transform processor.

transform:
  metric_statements:
    - context: metric
      statements:
        - set(description, "Sum") where type == "Sum"
    - context: datapoint
      statements:
        - convert_sum_to_gauge() where metric.name == "system.processes.count"
        - convert_gauge_to_sum("cumulative", false) where metric.name == "prometheus_metric"
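
In the convert_gauge_to_sum call, the first argument ("cumulative") sets the aggregation temporality of the resulting sum, and the second (false) marks it as non-monotonic.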

Settings

The following table shows the configuration options for the transform processor:

Setting              Required    Default      Description
error_mode           No          propagate    How the processor reacts to errors in statements: ignore, silent, or propagate
trace_statements     No                       OTTL statements to apply to traces, grouped by context
metric_statements    No                       OTTL statements to apply to metrics, grouped by context
log_statements       No                       OTTL statements to apply to logs, grouped by context

Metrics functions

You can apply the following functions in metrics statements:

  • convert_sum_to_gauge: Converts a metric of type sum to type gauge. Retains data points.

  • convert_gauge_to_sum: Converts a metric of type gauge to type sum. Retains data points. Takes aggregation temporality (cumulative or delta) and monotonicity (boolean) as arguments.

  • convert_summary_count_val_to_sum: Creates a metric of type sum from the count value of a summary. Takes aggregation temporality (cumulative or delta) and monotonicity (boolean) as arguments. The name of the new metric is in the form <summary metric name>_count. Time stamp, attributes, and description are preserved.

  • convert_summary_sum_val_to_sum: Creates a metric of type sum from the sum value of a summary. Takes aggregation temporality (cumulative or delta) and monotonicity (boolean) as arguments. The name of the new metric is in the form <summary metric name>_sum. Time stamp, attributes, and description are preserved.
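
For example, the following sketch creates delta, monotonic sum metrics from the count and sum values of a summary. The metric name is hypothetical:

transform:
  metric_statements:
    - context: datapoint
      statements:
        # Produce <summary metric name>_count and <summary metric name>_sum metrics
        - convert_summary_count_val_to_sum("delta", true) where metric.name == "http.request.duration"
        - convert_summary_sum_val_to_sum("delta", true) where metric.name == "http.request.duration"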

Caution

Using conversion functions might break metric semantics.

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers

  • Submit a case in the Splunk Support Portal.

  • Call Splunk Customer Support.

Available to prospective customers and free trial users

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

To learn about even more support options, see Splunk Customer Success.