Sum connector 🔗
The Splunk Distribution of the OpenTelemetry Collector uses the Sum connector to sum attribute values from spans, span events, metrics, data points, and log records.
As a receiver, the supported pipeline types are metrics, traces, and logs. As an exporter, the supported pipeline type is metrics. See Process your data with pipelines for more information.
Note
Values found within an attribute are converted into a float regardless of their original type before being summed and output as a metric value. Non-convertible strings are dropped and not included.
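The conversion rule above can be illustrated with a short sketch. This is not the connector's actual code, just a hypothetical illustration of the stated behavior: every attribute value is coerced to a float before summing, and values that can't be converted are dropped.

```python
def sum_attribute_values(values):
    """Illustrative only: mimics how the Sum connector treats mixed-type values."""
    total = 0.0
    for v in values:
        try:
            total += float(v)  # every value is converted to float before summing
        except (TypeError, ValueError):
            pass  # non-convertible values (for example, "abc") are dropped
    return total

print(sum_attribute_values([10, "2.5", "abc", None]))  # → 12.5
```

Here the integer 10 and the numeric string "2.5" are summed, while "abc" and the null value are silently excluded.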
Get started 🔗
Follow these steps to configure and activate the component:
Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
Configure the connector as described in the next section.
Restart the Collector.
Sample configuration 🔗
To activate the connector, add sum to the connectors section of your configuration file.
For example:
connectors:
  sum:
To complete the configuration, add the connector in the service section of your configuration file according to the pipelines you want to use, for example:
service:
  pipelines:
    metrics/sum:
      receivers: [sum]
    traces:
      exporters: [sum]
Configuration options 🔗
The following settings are required:
Telemetry type. Nested below the sum connector declaration. Can be any of spans or spanevents for traces, datapoints for metrics, or logs. In Configuration example: Sum attribute values, it's declared as spans.
Metric name. Nested below the telemetry type; this is the metric name the Sum connector outputs summed values to. In Configuration example: Sum attribute values, it's declared as my.example.metric.name.
source_attribute. A specific attribute to search for within the source telemetry fed to the connector. This attribute is where the connector looks for numerical values to sum into the output metric value. In Configuration example: Sum attribute values, it's declared as attribute.with.numerical.value.
The following settings are optional:
conditions. You can use OTTL syntax to provide conditions for processing incoming telemetry. Conditions are ORed together, so if any condition is met, the attribute's value is included in the resulting sum. For more information, see OTTL grammar in GitHub.
attributes. Declaration of attributes to include. Each unique combination of values for these attributes generates a separate sum and is output as its own data point in the metric time series.
key. Required for attributes. The attribute name to match against.
default_value. Optional for attributes. A default value for the attribute when no matches are found. The default_value value can be a string, integer, or float.
Configuration example: Sum attribute values 🔗
This example configuration sums numerical values found within the attribute attribute.with.numerical.value of any span telemetry routed to the connector and outputs a metric time series named my.example.metric.name with those summed values.
receivers:
  foo:
connectors:
  sum:
    spans:
      my.example.metric.name:
        source_attribute: attribute.with.numerical.value
exporters:
  bar:
service:
  pipelines:
    metrics/sum:
      receivers: [sum]
      exporters: [bar]
    traces:
      receivers: [foo]
      exporters: [sum]
Configuration example: Check payment logs 🔗
In this example, the Sum connector ingests logs and creates an output metric named checkout.total with numerical values found in the source_attribute total.payment. It also checks any incoming log telemetry for values present in the attribute payment.processor and creates a data point within the metric time series for each unique value.
It also makes sure that:
The attribute total.payment is not NULL.
Any logs without values in payment.processor are included in a data point with the default_value of unspecified_processor.
receivers:
  foo:
connectors:
  sum:
    logs:
      checkout.total:
        source_attribute: total.payment
        conditions:
          - attributes["total.payment"] != "NULL"
        attributes:
          - key: payment.processor
            default_value: unspecified_processor
exporters:
  bar:
service:
  pipelines:
    metrics/sum:
      receivers: [sum]
      exporters: [bar]
    logs:
      receivers: [foo]
      exporters: [sum]
Logs to metrics 🔗
For log-to-metrics connection, if your logs contain all values in their body rather than in attributes, use a transform processor in your pipeline to upsert parsed key/value pairs into attributes attached to the log.
For example, for a JSON payload:
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - merge_maps(attributes, ParseJSON(body), "upsert")
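To make the transform's effect concrete, here is a hypothetical sketch (not the Collector's implementation) of what ParseJSON plus merge_maps with "upsert" does to a log record: the JSON body is parsed and its key/value pairs are inserted into the log's attributes, overwriting any existing keys, so the Sum connector can then find fields such as total.payment by attribute name.

```python
import json

def upsert_body_into_attributes(body, attributes):
    """Illustrative only: ParseJSON(body) then merge_maps(attributes, ..., "upsert")."""
    parsed = json.loads(body)   # ParseJSON(body): parse the JSON log body
    attributes.update(parsed)   # "upsert": insert new keys, overwrite existing ones
    return attributes

attrs = upsert_body_into_attributes(
    '{"total.payment": 12.5, "payment.processor": "visa"}', {}
)
print(attrs["total.payment"])  # → 12.5
```

After this step, the log record carries total.payment and payment.processor as attributes, matching the configuration in the previous example.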
Troubleshooting 🔗
If you are a Splunk Observability Cloud customer and can't see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers:
Submit a case in the Splunk Support Portal
Contact Splunk Support
Available to prospective customers and free trial users:
Ask a question and get answers through community support at Splunk Answers
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.