
Sum connector πŸ”—

The Splunk Distribution of the OpenTelemetry Collector uses the Sum connector to sum attribute values from spans, span events, metrics, data points, and log records.

As a receiver, the supported pipeline types are metrics, traces, and logs. As an exporter, the supported pipeline type is metrics. See Process your data with pipelines for more information.

Note

Values found within an attribute are converted into a float, regardless of their original type, before being summed and output as a metric value. For example, the values 2 and "3.5" sum to 5.5. Strings that can't be converted to a number are dropped and not included.

Get started πŸ”—

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.

  2. Configure the connector as described in the next section.

  3. Restart the Collector.

Sample configuration πŸ”—

To activate the connector, add sum to the connectors section of your configuration file.

For example:

connectors:
  sum:

To complete the configuration, add the connector in the service section of your configuration file according to the pipelines you want to use, for example:

service:
  pipelines:
    metrics/sum:
      receivers: [sum]
    traces:
      exporters: [sum]

Configuration options πŸ”—

The following settings are required:

  • Telemetry type. Nested below the sum: connector declaration. One of spans or spanevents for traces, datapoints for metrics, or logs for logs.

  • Metric name. Nested below the telemetry type; this is the metric name the sum connector will output summed values to.

  • source_attribute. A specific attribute to search for within the source telemetry being fed to the connector. This attribute is where the connector looks for numerical values to sum into the output metric value.
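Putting the required settings together, a minimal connector declaration nests the metric name under the telemetry type, and source_attribute under the metric name. In the following sketch, the metric name and attribute name are placeholders:

connectors:
  sum:
    spans:                                  # telemetry type
      my.example.metric.name:               # metric name for the output
        source_attribute: some.numeric.attribute  # attribute whose values are summed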

The following settings can be optionally configured:

  • conditions. You can use OTTL syntax to provide conditions for processing incoming telemetry. Conditions are ORed together, so if any condition is met the attribute’s value is included in the resulting sum. For more information see OTTL grammar in GitHub.

  • attributes. Declaration of attributes to include. If any of these attributes are found, the connector generates a separate sum for each unique combination of attribute values and outputs it as its own data point in the metric time series.

    • key. Required for attributes. The attribute name to match against.

    • default_value. Optional for attributes. A default value for the attribute when no matches are found. The default_value value can be a string, integer, or float.

Configuration example: Sum attribute values πŸ”—

This example configuration sums the numerical values found within the attribute attribute.with.numerical.value of any span telemetry routed to the connector, and outputs a metric time series named my.example.metric.name with those summed values.

receivers:
  foo:
connectors:
  sum:
    spans:
      my.example.metric.name:
        source_attribute: attribute.with.numerical.value

exporters:
  bar:

service:
  pipelines:
    metrics/sum:
      receivers: [sum]
      exporters: [bar]
    traces:
      receivers: [foo]
      exporters: [sum]

Configuration example: Check payment logs πŸ”—

In this example the Sum connector ingests logs and creates an output metric named checkout.total with numerical values found in the source_attribute total.payment. It also checks any incoming log telemetry for values present in the attribute payment.processor and creates a datapoint within the metric time series for each unique value.

It also makes sure that:

  • The attribute total.payment does not equal the string "NULL".

  • Any logs without values in payment.processor are included in a datapoint with the default_value of unspecified_processor.

receivers:
  foo:
connectors:
  sum:
    logs:
      checkout.total:
        source_attribute: total.payment
        conditions:
          - attributes["total.payment"] != "NULL"
        attributes:
          - key: payment.processor
            default_value: unspecified_processor
exporters:
  bar:

service:
  pipelines:
    metrics/sum:
      receivers: [sum]
      exporters: [bar]
    logs:
      receivers: [foo]
      exporters: [sum]

Logs to metrics πŸ”—

When connecting logs to metrics, if your logs contain all their values in the body rather than in attributes, use a Transform processor in your pipeline to upsert parsed key/value pairs into attributes attached to the log.

For example, for a JSON payload:

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - merge_maps(attributes, ParseJSON(body), "upsert")
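For the upserted attributes to be available when the connector sums them, the Transform processor must run in the logs pipeline before the connector. The following sketch assumes placeholder foo receiver and bar exporter names:

service:
  pipelines:
    logs:
      receivers: [foo]
      processors: [transform/logs]
      exporters: [sum]
    metrics/sum:
      receivers: [sum]
      exporters: [bar]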

Troubleshooting πŸ”—

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.


  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Feb 11, 2025.