SignalFx exporter 🔗
Caution
The SignalFx exporter creates and excludes metrics by default. Read on to understand which metrics are created, which ones are filtered out, and learn how to modify this behavior.
The SignalFx exporter is a native OTel component that allows the OpenTelemetry Collector to send metrics and events to SignalFx endpoints. The supported pipeline types are traces, metrics, and logs. See Process your data with pipelines for more information.
While the SignalFx Smart Agent has reached End of Support, OTel native components such as the Smart Agent receiver, the SignalFx receiver, and the SignalFx exporter are available and supported. For information on the receivers, see Smart Agent receiver and SignalFx receiver.
Get started 🔗
Note
This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode. See Collector deployment modes for more information.
For details about the default configuration, see Configure the Collector for Kubernetes with Helm, Collector for Linux default configuration, or Collector for Windows default configuration. You can customize your configuration any time as explained in this document.
By default, the Splunk Distribution of OpenTelemetry Collector includes the SignalFx exporter in the traces, metrics, and logs/signalfx pipelines.
Sample configurations 🔗
The following example shows the default configuration of SignalFx exporter for metrics and events ingest, as well as trace and metrics correlation:
```yaml
# Metrics + Events
signalfx:
  access_token: "${SPLUNK_ACCESS_TOKEN}"
  api_url: "${SPLUNK_API_URL}"
  ingest_url: "${SPLUNK_INGEST_URL}"
  # Use instead when sending to gateway (http forwarder extension ingress endpoint)
  #api_url: http://${SPLUNK_GATEWAY_URL}:6060
  #ingest_url: http://${SPLUNK_GATEWAY_URL}:9943
  sync_host_metadata: true
```
When adding the SignalFx exporter, configure both the metrics and logs pipelines. Make sure to also add the SignalFx receiver as in the following example:
```yaml
service:
  pipelines:
    metrics:
      receivers: [signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs:
      receivers: [signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
```
Send histogram metrics in OTLP format 🔗
The Splunk Distribution of OpenTelemetry Collector supports OTLP histogram metrics in version 0.98 and higher. See Explicit bucket histograms for more information.
To send histogram data to Splunk Observability Cloud, set the send_otlp_histograms option to true. For example:
```yaml
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    sync_host_metadata: true
    correlation:
    send_otlp_histograms: true
```
Default metric filters 🔗
To prevent unwanted custom metrics, the SignalFx exporter excludes a number of metrics by default. See List of metrics excluded by default for more information.
To override default exclusions and include metrics manually, use the include_metrics option. For example:
```yaml
exporters:
  signalfx:
    include_metrics:
      - metric_names: [cpu.interrupt, cpu.user, cpu.system]
      - metric_name: system.cpu.time
        dimensions:
          state: [interrupt, user, system]
```
The following example instructs the exporter to send the cpu.interrupt metric with any cpu dimension value, as well as both per-core and aggregate cpu.idle metrics:
```yaml
exporters:
  signalfx:
    include_metrics:
      - metric_name: "cpu.idle"
      - metric_name: "cpu.interrupt"
        dimensions:
          cpu: ["*"]
```
List of metrics excluded by default 🔗
Metrics excluded by default by the SignalFx exporter are listed in the default_metrics.go file. The following snippet shows the latest version of the list:
```yaml
# DefaultExcludeMetricsYaml holds a list of hard coded metrics that's added to the
# exclude list from the config. It includes non-default metrics collected by
# receivers. This list is determined by categorization of metrics in the SignalFx
# Agent. Metrics in the OpenTelemetry convention that have equivalents in the
# SignalFx Agent that are categorized as non-default are also included in this list.
exclude_metrics:
  # Metrics in SignalFx Agent Format
  - metric_names:
      # CPU metrics.
      - cpu.interrupt
      - cpu.nice
      - cpu.softirq
      - cpu.steal
      - cpu.system
      - cpu.user
      - cpu.utilization_per_core
      - cpu.wait
      # Disk-IO metrics
      - disk_ops.pending
      # Virtual memory metrics
      - vmpage_io.memory.in
      - vmpage_io.memory.out
  # Metrics in OpenTelemetry Convention
  # CPU Metrics
  - metric_name: system.cpu.time
    dimensions:
      state: [idle, interrupt, nice, softirq, steal, system, user, wait]
  - metric_name: cpu.idle
    dimensions:
      cpu: ["*"]
  # Memory metrics
  - metric_name: system.memory.usage
    dimensions:
      state: [inactive]
  # Filesystem metrics
  - metric_name: system.filesystem.usage
    dimensions:
      state: [reserved]
  - metric_name: system.filesystem.inodes.usage
  # Disk-IO metrics
  - metric_names:
      - system.disk.merged
      - system.disk.io
      - system.disk.time
      - system.disk.io_time
      - system.disk.operation_time
      - system.disk.pending_operations
      - system.disk.weighted_io_time
  # Network-IO metrics
  - metric_names:
      - system.network.packets
      - system.network.dropped
      - system.network.tcp_connections
      - system.network.connections
  # Processes metrics
  - metric_names:
      - system.processes.count
      - system.processes.created
  # Virtual memory metrics
  - metric_names:
      - system.paging.faults
      - system.paging.usage
  - metric_name: system.paging.operations
    dimensions:
      type: [minor]
  # k8s metrics
  - metric_names:
      - k8s.cronjob.active_jobs
      - k8s.job.active_pods
      - k8s.job.desired_successful_pods
      - k8s.job.failed_pods
      - k8s.job.max_parallel_pods
      - k8s.job.successful_pods
      - k8s.statefulset.desired_pods
      - k8s.statefulset.current_pods
      - k8s.statefulset.ready_pods
      - k8s.statefulset.updated_pods
      - k8s.hpa.max_replicas
      - k8s.hpa.min_replicas
      - k8s.hpa.current_replicas
      - k8s.hpa.desired_replicas
      # matches all container limit metrics but k8s.container.cpu_limit and k8s.container.memory_limit
      - /^k8s\.container\..+_limit$/
      - '!k8s.container.memory_limit'
      - '!k8s.container.cpu_limit'
      # matches all container request metrics but k8s.container.cpu_request and k8s.container.memory_request
      - /^k8s\.container\..+_request$/
      - '!k8s.container.memory_request'
      - '!k8s.container.cpu_request'
      # matches any node condition but k8s.node.condition_ready
      - /^k8s\.node\.condition_.+$/
      - '!k8s.node.condition_ready'
      # kubelet metrics
      # matches (container|k8s.node|k8s.pod).memory...
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.memory\.available$/
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.memory\.major_page_faults$/
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.memory\.page_faults$/
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.memory\.rss$/
      - /^(?i:(k8s\.node)|(k8s\.pod))\.memory\.usage$/
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.memory\.working_set$/
      # matches (k8s.node|k8s.pod).filesystem...
      - /^k8s\.(?i:(node)|(pod))\.filesystem\.available$/
      - /^k8s\.(?i:(node)|(pod))\.filesystem\.capacity$/
      - /^k8s\.(?i:(node)|(pod))\.filesystem\.usage$/
      # matches (k8s.node|k8s.pod).cpu.time
      - /^k8s\.(?i:(node)|(pod))\.cpu\.time$/
      # matches (container|k8s.node|k8s.pod).cpu.utilization
      - /^(?i:(container)|(k8s\.node)|(k8s\.pod))\.cpu\.utilization$/
      # matches k8s.node.network.io and k8s.node.network.errors
      - /^k8s\.node\.network\.(?:(io)|(errors))$/
      # matches k8s.volume.inodes, k8s.volume.inodes.free and k8s.volume.inodes.used
      - /^k8s\.volume\.inodes(\.free|\.used)*$/
```
Filter metrics using service or environment 🔗
The SignalFx exporter correlates the traces it receives to metrics. When the exporter detects a new service or environment, it associates the source (for example, a host or a pod) to that service or environment in Splunk Observability Cloud, and identifies them using sf_service and sf_environment. You can then filter those metrics based on the trace service and environment.
Note
You need to send traces using the OTLP/HTTP exporter to see them in Splunk Observability Cloud.
Use the correlation setting to control the syncing of service and environment properties onto dimensions. It has the following options:

- endpoint: Required. The base URL for API requests, such as https://api.us0.signalfx.com. Defaults to api_url or https://api.{realm}.signalfx.com/.
- timeout: Timeout for every attempt to send data to the backend. 5 seconds by default.
- stale_service_timeout: How long to wait after a span's service name is last seen before uncorrelating it. 5 minutes by default.
- max_requests: Maximum number of HTTP requests to be made in parallel. 20 by default.
- max_buffered: Maximum number of correlation updates that can be buffered before updates are dropped. 10,000 by default.
- max_retries: Maximum number of retries for failed correlation updates. 2 by default.
- log_updates: Whether to log correlation updates to dimensions, at DEBUG level. false by default.
- retry_delay: How long to wait between retries. 30 seconds by default.
- cleanup_interval: How frequently to purge duplicate requests. 1 minute by default.
- sync_attributes: Map whose keys are the span attribute names to read, and whose values are the dimension names to sync them to. Defaults to {"k8s.pod.uid": "k8s.pod.uid", "container.id": "container.id"}.

See more options in the Settings section.
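As a sketch, the options above could be combined as follows. The endpoint, realm, and timing values are placeholders for the example, not recommendations; the values shown match the documented defaults:

```yaml
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: us0
    correlation:
      # Base URL for API requests (placeholder realm)
      endpoint: "https://api.us0.signalfx.com"
      stale_service_timeout: 5m
      max_requests: 20
      max_buffered: 10000
      max_retries: 2
      log_updates: false
      retry_delay: 30s
      cleanup_interval: 1m
      # Span attribute (key) synced to dimension (value)
      sync_attributes:
        k8s.pod.uid: k8s.pod.uid
        container.id: container.id
```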
Translation rules and metric transformations 🔗
Use the translation_rules field to transform metrics or produce custom metrics by copying, calculating, or aggregating other metric values without requiring an additional processor.
Translation rules currently allow the following actions:

- aggregate_metric: Aggregates a metric through removal of specified dimensions.
- calculate_new_metric: Creates a new metric by operating on two constituent ones.
- convert_values: Converts float values to int, or int to float, for specified metric names.
- copy_metrics: Creates a new metric as a copy of another.
- delta_metric: Creates a new delta metric for a specified non-delta one.
- divide_int: Scales a metric's integer value by a given factor.
- drop_dimensions: Drops dimensions for specified metrics, or globally.
- drop_metrics: Drops all metrics with a given name.
- multiply_float: Scales a metric's float value by a given float factor.
- multiply_int: Scales a metric's int value by a given int factor.
- rename_dimension_keys: Renames dimensions for specified metrics, or globally.
- rename_metrics: Replaces a given metric name with a specified one.
- split_metric: Splits a given metric into multiple new ones for a specified dimension.
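For illustration, two of these actions might be combined as in the following sketch. The metric names are made up for the example; the rule shapes follow the fields described in the translation_rules settings table below (mapping for rename_metrics, metric_names for drop_metrics):

```yaml
exporters:
  signalfx:
    translation_rules:
      # rename_metrics: replace the metric name on the left with the one on the right
      - action: rename_metrics
        mapping:
          system.example.old_name: system.example.new_name
      # drop_metrics: drop all metrics with the listed names
      - action: drop_metrics
        metric_names:
          system.example.unwanted: true
```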
Default translation rules and generated metrics 🔗
The SignalFx exporter uses the translation rules defined in translation/constants.go by default.
The default rules create metrics that are reported directly to Infrastructure Monitoring. If you want to change any of their attributes or values, you need to modify either the translation rules or their constituent host metrics.
By default, the SignalFx exporter creates the following aggregated metrics from the Host metrics receiver:
cpu.idle
cpu.interrupt
cpu.nice
cpu.num_processors
cpu.softirq
cpu.steal
cpu.system
cpu.user
cpu.utilization
cpu.utilization_per_core
cpu.wait
disk.summary_utilization
disk.utilization
disk_ops.pending
disk_ops.total
memory.total
memory.utilization
network.total
process.cpu_time_seconds
system.disk.io.total
system.disk.operations.total
system.network.io.total
system.network.packets.total
vmpage_io.memory.in
vmpage_io.memory.out
vmpage_io.swap.in
vmpage_io.swap.out
In addition to the aggregated metrics, the default rules make available the following "per core" custom host metrics. The CPU number is assigned to the cpu dimension:
cpu.interrupt
cpu.nice
cpu.softirq
cpu.steal
cpu.system
cpu.user
cpu.wait
Drop histogram metrics 🔗
In case of high cardinality metrics, dropping histogram buckets might be useful. To drop the buckets, set drop_histogram_buckets to true.
When drop_histogram_buckets is activated, histogram buckets are dropped instead of being translated to datapoints with the _bucket suffix. Only datapoints with the _sum, _count, _min, and _max suffixes are sent through the exporter.
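A minimal sketch of this option in context (the realm value is a placeholder):

```yaml
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: us0
    # Drop _bucket datapoints; keep only _sum, _count, _min, and _max
    drop_histogram_buckets: true
```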
Settings 🔗
The following table shows the configuration options for the SignalFx exporter:
Name | Type | Default | Description |
---|---|---|---|
sending_queue (see fields) | struct | ||
retry_on_failure (see fields) | struct | ||
endpoint | string | ||
tls (see fields) | struct | ||
read_buffer_size | int | ||
write_buffer_size | int | ||
timeout | int64 | 5s | |
headers | map | ||
customroundtripper | func | ||
auth (see fields) | ptr | ||
compression | string | ||
max_idle_conns | ptr | ||
max_idle_conns_per_host | ptr | ||
max_conns_per_host | ptr | ||
idle_conn_timeout | ptr | ||
access_token | string | | AccessToken is the authentication token provided by SignalFx. |
realm | string | | Realm is the SignalFx realm where data is going to be sent to. |
ingest_url | string | | IngestURL is the destination where SignalFx metrics are sent. It is intended for tests and debugging. The value of Realm is ignored if the URL is specified. The exporter automatically appends the appropriate path: "/v2/datapoint" for metrics, and "/v2/event" for events. |
ingest_tls (see fields) | struct | | ingest_tls needs to be set if the exporter's IngestURL is pointing to a signalfx receiver with TLS enabled and using a self-signed certificate where its CA is not loaded in the system cert pool. |
api_url | string | | APIURL is the destination where SignalFx metadata is sent. This value takes precedence over the value of Realm. |
api_tls (see fields) | struct | | api_tls needs to be set if the exporter's APIURL is pointing to an http_forwarder extension with TLS enabled and using a self-signed certificate where its CA is not loaded in the system cert pool. |
log_data_points | bool | false | Whether to log datapoints dispatched to Splunk Observability Cloud. |
log_dimension_updates | bool | false | Whether to log dimension updates being sent to SignalFx. |
dimension_client (see fields) | struct | | Dimension update client configuration used for metadata updates. |
access_token_passthrough | bool | true | AccessTokenPassthrough indicates whether to associate datapoints with an organization access token received in the request. |
translation_rules (see fields) | slice | | TranslationRules defines a set of rules on how to translate metrics to a SignalFx compatible format. Rules defined in translation/constants.go are used by default. Deprecated: Use the metricstransform processor to do metrics transformations. |
disable_default_translation_rules | bool | false | |
delta_translation_ttl | int64 | 3600 | DeltaTranslationTTL specifies in seconds the max duration to keep the most recent datapoint for any delta metric. |
sync_host_metadata | bool | false | SyncHostMetadata defines whether the exporter should scrape host metadata and send it as property updates to the SignalFx backend. IMPORTANT: Host metadata synchronization relies on the resourcedetection processor. |
exclude_metrics (see fields) | slice | | ExcludeMetrics defines dpfilter.MetricFilters that determine the metrics to be excluded from sending to the SignalFx backend. If translations are enabled with the TranslationRules option, the exclusion is applied on translated metrics. |
include_metrics (see fields) | slice | | IncludeMetrics defines dpfilter.MetricFilters to override the exclusion of any metric. This option can be used to include metrics that are otherwise dropped by default. See ./translation/default_metrics.go for a list of metrics that are dropped by default. |
exclude_properties (see fields) | slice | | ExcludeProperties defines dpfilter.PropertyFilters to prevent inclusion of properties with dimension updates to the SignalFx backend. |
correlation (see fields) | ptr | | Correlation configuration for syncing traces service and environment to metrics. |
nonalphanumeric_dimension_chars | string | _-. | NonAlphanumericDimensionChars is a list of allowable characters, in addition to alphanumeric ones, to be used in a dimension key. |
max_connections | int | | MaxConnections is used to set a limit to the maximum idle HTTP connections the exporter can keep open. Deprecated: use HTTPClientSettings.MaxIdleConns or HTTPClientSettings.MaxIdleConnsPerHost instead. |
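As noted in the table, realm can replace explicit endpoints. A minimal sketch (us0 is a placeholder realm):

```yaml
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    # When realm is set, ingest and API endpoints are derived from it,
    # so ingest_url and api_url can be omitted
    realm: us0
```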
Fields of sending_queue
Name | Type | Default | Description |
---|---|---|---|
enabled | bool | true | |
num_consumers | int | 10 | |
queue_size | int | 1000 | |
storage | ptr |
Fields of retry_on_failure
Name | Type | Default | Description |
---|---|---|---|
enabled | bool | true | |
initial_interval | int64 | 5s | |
randomization_factor | float64 | ||
multiplier | float64 | ||
max_interval | int64 | 30s | |
max_elapsed_time | int64 | 5m0s |
Fields of tls
Name | Type | Default | Description |
---|---|---|---|
ca_file | string | ||
ca_pem | string | ||
cert_file | string | ||
cert_pem | string | ||
key_file | string | ||
key_pem | string | ||
min_version | string | ||
max_version | string | ||
reload_interval | int64 | ||
insecure | bool | false | |
insecure_skip_verify | bool | false | |
server_name_override | string |
Fields of auth
Name | Type | Default | Description |
---|---|---|---|
authenticator | struct |
Fields of ingest_tls
Name | Type | Default | Description |
---|---|---|---|
ca_file | string | ||
ca_pem | string | ||
cert_file | string | ||
cert_pem | string | ||
key_file | string | ||
key_pem | string | ||
min_version | string | ||
max_version | string | ||
reload_interval | int64 | ||
insecure | bool | false | |
insecure_skip_verify | bool | false | |
server_name_override | string |
Fields of api_tls
Name | Type | Default | Description |
---|---|---|---|
ca_file | string | ||
ca_pem | string | ||
cert_file | string | ||
cert_pem | string | ||
key_file | string | ||
key_pem | string | ||
min_version | string | ||
max_version | string | ||
reload_interval | int64 | ||
insecure | bool | false | |
insecure_skip_verify | bool | false | |
server_name_override | string |
Fields of dimension_client
Name | Type | Default | Description |
---|---|---|---|
max_buffered | int | 10000 | |
send_delay | int64 | 10s | |
max_idle_conns | int | 20 | |
max_idle_conns_per_host | int | 20 | |
max_conns_per_host | int | 20 | |
idle_conn_timeout | int64 | 30s |
Fields of translation_rules
Name | Type | Default | Description |
---|---|---|---|
action | string | | Action specifies the translation action to be applied on metrics. This is a required field. |
mapping | map | | Mapping specifies the key/value mapping that is used by the rename_dimension_keys, rename_metrics, copy_metrics, and split_metric actions. |
scale_factors_int | map | | ScaleFactorsInt is used by the multiply_int and divide_int actions to scale integer metric values, key/value format: metric_name/scale_factor. |
scale_factors_float | map | | ScaleFactorsFloat is used by the multiply_float action to scale float metric values, key/value format: metric_name/scale_factor. |
metric_name | string | | MetricName is used by the "split_metric" translation rule to specify the name of a metric that will be split. |
dimension_key | string | | DimensionKey is used by the "split_metric" translation rule action to specify the dimension key that will be used to translate the metric datapoints. Datapoints that don't have the specified dimension key are not translated. DimensionKey is also used by "copy_metrics" for filtering. |
dimension_values | map | | DimensionValues is used by "copy_metrics" to filter out datapoints with dimension values not matching the values set in this field. |
types_mapping | map | | TypesMapping represents metric_name/metric_type key/value pairs, used by ActionConvertValues. |
aggregation_method | string | | AggregationMethod specifies the method used by the "aggregate_metric" translation rule. |
without_dimensions | slice | | WithoutDimensions is used by the "aggregate_metric" translation rule to specify dimensions to be excluded from aggregation. |
add_dimensions | map | | AddDimensions is used by the "rename_metrics" translation rule to add dimensions that are necessary for existing SFx content for the desired metric name. |
copy_dimensions | map | | CopyDimensions is used by the "rename_metrics" translation rule to copy dimensions that are necessary for existing SFx content for the desired metric name. This duplicates the dimension value and isn't a rename. |
metric_names | map | | MetricNames is used by the "rename_dimension_keys" and "drop_metrics" translation rules. |
operand1_metric | string | | |
operand2_metric | string | | |
operator | string | | |
dimension_pairs | map | | DimensionPairs is used by the "drop_dimensions" translation rule to specify dimension pairs that should be dropped. |
Fields of exclude_metrics
Name | Type | Default | Description |
---|---|---|---|
metric_name | string | | A single metric name to match against. |
metric_names | slice | | A list of metric names to match against. |
dimensions | map | | A map of dimension key/values to match against. All key/values must match a datapoint for it to be matched. The map values can be either a single string or a list of strings. |
Fields of include_metrics
Name | Type | Default | Description |
---|---|---|---|
metric_name | string | | A single metric name to match against. |
metric_names | slice | | A list of metric names to match against. |
dimensions | map | | A map of dimension key/values to match against. All key/values must match a datapoint for it to be matched. The map values can be either a single string or a list of strings. |
Fields of exclude_properties
Name | Type | Default | Description |
---|---|---|---|
property_name | ptr | | PropertyName is the (inverted) literal, regex, or globbed property name/key to not include in dimension updates. |
property_value | ptr | | PropertyValue is the (inverted) literal or globbed property value to not include in dimension updates. |
dimension_name | ptr | | DimensionName is the (inverted) literal, regex, or globbed dimension name/key to not target for dimension updates. If there are no sub-property filters for its enclosing entry, it disables dimension updates for this dimension name in total. |
dimension_value | ptr | | DimensionValue is the (inverted) literal, regex, or globbed dimension value to not target with a dimension update. If there are no sub-property filters for its enclosing entry, it disables dimension updates for this dimension value in total. |
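As an illustrative sketch, these filters could be combined as follows. The property name "environment" and its value "test" are made-up examples:

```yaml
exporters:
  signalfx:
    exclude_properties:
      # Disable all property updates for the container.id dimension
      - dimension_name: container.id
      # Don't include a hypothetical "environment" property when its value is "test"
      - property_name: environment
        property_value: test
```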
Fields of correlation
Name | Type | Default | Description |
---|---|---|---|
endpoint | string | ||
tls (see fields) | struct | ||
read_buffer_size | int | ||
write_buffer_size | int | ||
timeout | int64 | 5s | |
headers | map | ||
customroundtripper | func | ||
auth (see fields) | ptr | ||
compression | string | ||
max_idle_conns | ptr | ||
max_idle_conns_per_host | ptr | ||
max_conns_per_host | ptr | ||
idle_conn_timeout | ptr | ||
max_requests | uint | 20 | |
max_buffered | uint | 10000 | |
max_retries | uint | 2 | |
log_updates | bool | false | |
retry_delay | int64 | 30s | |
cleanup_interval | int64 | 1m0s | |
stale_service_timeout | int64 | 5m0s | How long to wait after a trace span's service name is last seen before uncorrelating that service. |
sync_attributes | map | | SyncAttributes maps a span attribute name (key) to the dimension name (value) that it is synced to. |
Fields of tls
Name | Type | Default | Description |
---|---|---|---|
ca_file | string | ||
ca_pem | string | ||
cert_file | string | ||
cert_pem | string | ||
key_file | string | ||
key_pem | string | ||
min_version | string | ||
max_version | string | ||
reload_interval | int64 | ||
insecure | bool | false | |
insecure_skip_verify | bool | false | |
server_name_override | string |
Fields of auth
Name | Type | Default | Description |
---|---|---|---|
authenticator | struct |
Caution
Use the access_token_passthrough setting if you're using a SignalFx receiver with the same setting. Only use the SignalFx receiver with the SignalFx exporter when activating this setting.
Troubleshooting 🔗
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.