
Host metrics receiver

The host metrics receiver generates metrics scraped from host systems when the Collector is deployed as an agent. The supported pipeline type is metrics.

By default, the host metrics receiver is activated in the Splunk Distribution of OpenTelemetry Collector and collects the following metrics:

  • System metrics

  • CPU usage metrics

  • Disk I/O metrics

  • CPU load metrics

  • File system usage metrics

  • Memory usage metrics

  • Network interface and TCP connection metrics

  • Process count metrics (Linux only)

Metrics from the host metrics receiver appear in Infrastructure Monitoring. You can use them to create dashboards and alerts. See Create detectors to trigger alerts for more information.

Caution

The SignalFx exporter excludes some available metrics by default. Learn more about default metric filters in List of metrics excluded by default. The most up-to-date list of excluded metrics is in GitHub. See https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/signalfxexporter/internal/translation/default_metrics.go#L49.
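
If you need one of the excluded metrics, you can re-include it through the include_metrics option of the SignalFx exporter. The following is a minimal sketch; cpu.interrupt is only an illustrative metric name, so replace it with the metrics you need from the excluded list:

exporters:
  signalfx:
    include_metrics:
      - metric_names: [cpu.interrupt]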

Get started

Note

This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector when deploying in host monitoring (agent) mode. See Collector deployment modes for more information.

For details about the default configuration, see Configure the Collector for Kubernetes with Helm, Collector for Linux default configuration, or Collector for Windows default configuration. You can customize your configuration any time as explained in this document.

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the receiver as described in the next section. A minimal pipeline example follows these steps.

  3. Restart the Collector.
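
The following sketch shows how the receiver fits into a metrics pipeline. The exporter settings are placeholders based on the default Splunk distribution configuration; keep the exporters that are already defined in your own configuration:

receivers:
  hostmetrics:
    collection_interval: 1m
    scrapers:
      cpu:
      memory:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [signalfx]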

Note

Data ingested into Splunk Observability Cloud is subject to system limits. See Per product system limits in Splunk Observability Cloud for more information.

Collect container host metrics (Linux)

The host metrics receiver collects metrics from the Linux system directories. To collect metrics for the host instead of the container, follow these steps:

  1. Mount the entire host file system when running the container. For example: docker run -v /:/hostfs. You can also choose which parts of the host file system to mount. For example: docker run -v /proc:/hostfs/proc.

  2. Configure root_path so that the host metrics receiver knows where the root file system is located. For example:

    receivers:
      hostmetrics:
        root_path: /hostfs
    

    If you are running multiple instances of the host metrics receiver, set the same root_path for all.
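
    For example, a complete receiver entry that reads metrics from the mounted host file system might look like the following. The scrapers and collection interval are illustrative; adjust them to your needs:

    receivers:
      hostmetrics:
        root_path: /hostfs
        collection_interval: 30s
        scrapers:
          cpu:
          memory:
          filesystem: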

Sample configurations

You can configure the collection interval and the categories of metrics to scrape, as shown in the following example:

hostmetrics:
  collection_interval: <duration> # The default is 1m.
  scrapers:
    <scraper1>:
    <scraper2>:
    ...
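
For example, the following configuration, shown for illustration only, scrapes CPU, memory, and paging metrics every 30 seconds:

hostmetrics:
  collection_interval: 30s
  scrapers:
    cpu:
    memory:
    paging: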

Scrapers extract data from endpoints and then send that data to a specified target. The following table shows the available scrapers:

Scraper       Description

system        System metrics
cpu           CPU utilization metrics
disk          Disk I/O metrics
load          CPU load metrics
filesystem    File system utilization metrics
memory        Memory utilization metrics
network       Network interface I/O metrics and TCP connection metrics
paging        Paging or swap space utilization and I/O metrics
processes     Process count metrics. Only available on Linux
process       Per process CPU, memory, and disk I/O metrics

See the following sections for scraper configurations.

Disk

disk:
  <include|exclude>:
    devices: [ <device name>, ... ]
    match_type: <strict|regexp>
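
For example, the following sketch excludes loopback and RAM disk devices on Linux. The device name patterns are illustrative:

disk:
  exclude:
    devices: [ "^loop[0-9]+$", "^ram[0-9]+$" ]
    match_type: regexp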

File system

filesystem:
  <include_devices|exclude_devices>:
    devices: [ <device name>, ... ]
    match_type: <strict|regexp>
  <include_fs_types|exclude_fs_types>:
    fs_types: [ <filesystem type>, ... ]
    match_type: <strict|regexp>
  <include_mount_points|exclude_mount_points>:
    mount_points: [ <mount point>, ... ]
    match_type: <strict|regexp>

The following example shows the forward slash (/) as a common mount point for Linux systems:

filesystem:
  include_mount_points:
    mount_points: ["/"]
    match_type: strict

Similarly, the following example shows C: as a common mount point for Windows systems:

filesystem:
  include_mount_points:
    mount_points: ["C:"]
    match_type: strict

To include virtual file systems, set include_virtual_filesystems to true.

filesystem:
  include_virtual_filesystems: true

Find more examples in the daemonset.yaml file in GitHub.

Network

network:
  <include|exclude>:
    interfaces: [ <interface name>, ... ]
    match_type: <strict|regexp>
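
For example, the following sketch excludes the loopback interface and virtual Ethernet pairs. The interface name patterns are illustrative:

network:
  exclude:
    interfaces: [ "^lo$", "^veth.*" ]
    match_type: regexp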

Process

process:
  <include|exclude>:
    names: [ <process name>, ... ]
    match_type: <strict|regexp>
  mute_process_name_error: <true|false>
  mute_process_exe_error: <true|false>
  mute_process_io_error: <true|false>
  scrape_process_delay: <time>

The following example demonstrates how to configure a process scraper that collects two metrics, in addition to the defaults, and uses a resource attribute to include the process owner in the collected data:

receivers:
  hostmetrics:
    scrapers:
      process:
        resource_attributes:
          process.owner:
            enabled: true
        metrics:
          process.memory.usage:
            enabled: true
          process.disk.io:
            enabled: true

For more information about enabling and disabling metrics and resource attributes using the process scraper, see hostmetricsreceiver/process in the OpenTelemetry documentation.

If you continuously see errors related to process reading, consider setting mute_process_name_error, mute_process_exe_error, or mute_process_io_error to true.
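
The following sketch mutes all three error types; activate only the ones that match the errors you see:

receivers:
  hostmetrics:
    scrapers:
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true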

Filtering

To gather only a subset of metrics from a particular source, use the host metrics receiver together with the filter processor.
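
For example, the following sketch drops one disk metric before it is exported. The metric name and the processor alias are illustrative, and the processor must also be added to the metrics pipeline:

processors:
  filter/hostmetrics:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          - system.disk.operations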

Different frequencies

To scrape some metrics at a different frequency than others, configure multiple host metrics receivers with different collection_interval values. For example:

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:

  hostmetrics/disk:
    collection_interval: 1m
    scrapers:
      disk:
      filesystem:

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, hostmetrics/disk]

Metrics

The following metrics, resource attributes, and attributes are available.

Note

The SignalFx exporter excludes some available metrics by default. Learn more about default metric filters in List of metrics excluded by default.

cpu scraper

For more information, see the cpu scraper documentation in GitHub.

disk scraper

For more information, see the disk scraper documentation in GitHub.

filesystem scraper

For more information, see the filesystem scraper documentation in GitHub.

load scraper

For more information, see the load scraper documentation in GitHub.

memory scraper

For more information, see the memory scraper documentation in GitHub.

network scraper

For more information, see the network scraper documentation in GitHub.

paging scraper

For more information, see the paging scraper documentation in GitHub.

process scraper

For more information, see the process scraper documentation in GitHub.

processes scraper

For more information, see the processes scraper documentation in GitHub.

Default translation rules and generated metrics

The SignalFx exporter uses the translation rules defined in translation/constants.go by default.

The default rules create metrics that are reported directly to Infrastructure Monitoring. If you want to change any of their attributes or values, modify either the translation rules or the host metrics they are built from.

By default, the SignalFx exporter creates the following aggregated metrics from the host metrics receiver:

  • cpu.idle

  • cpu.interrupt

  • cpu.nice

  • cpu.num_processors

  • cpu.softirq

  • cpu.steal

  • cpu.system

  • cpu.user

  • cpu.utilization

  • cpu.utilization_per_core

  • cpu.wait

  • disk.summary_utilization

  • disk.utilization

  • disk_ops.pending

  • disk_ops.total

  • memory.total

  • memory.utilization

  • network.total

  • process.cpu_time_seconds

  • system.disk.io.total

  • system.disk.operations.total

  • system.network.io.total

  • system.network.packets.total

  • vmpage_io.memory.in

  • vmpage_io.memory.out

  • vmpage_io.swap.in

  • vmpage_io.swap.out

In addition to the aggregated metrics, the default rules make available the following "per core" custom host metrics. The CPU number is assigned to the dimension cpu:

  • cpu.interrupt

  • cpu.nice

  • cpu.softirq

  • cpu.steal

  • cpu.system

  • cpu.user

  • cpu.wait

Resource attributes

The host metrics receiver doesn't set any resource attributes on the exported metrics.

To set resource attributes, provide them using the OTEL_RESOURCE_ATTRIBUTES environment variable. For example:

export OTEL_RESOURCE_ATTRIBUTES="service.name=<name_of_service>,service.version=<version_of_service>"

Activate or deactivate specific metrics

You can activate or deactivate specific metrics by setting the enabled field in the metrics section for each metric. For example:

receivers:
  samplereceiver:
    metrics:
      metric-one:
        enabled: true
      metric-two:
        enabled: false

The following example shows a host metrics receiver configuration with an additional metric activated:

receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true

Note

Deactivated metrics aren't sent to Splunk Observability Cloud.

Billing

  • If you're on an MTS-based subscription, all metrics count towards your metrics usage.

  • If you're on a host-based plan, metrics listed as active (Active: Yes) in this document are considered default and are included free of charge.

Learn more at Infrastructure Monitoring subscription usage (Host and metric plans).

Settings

The following table shows the main configuration settings for the host metrics receiver that appear in this document:

Setting               Description
collection_interval   How often to scrape metrics. The default is 1m.
root_path             The root location of the host file system when the Collector runs in a container.
scrapers              The categories of metrics to collect, such as cpu, memory, or disk.

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.


Available to prospective customers and free trial users:

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Dec 19, 2024.