
AWS AppMesh Envoy Proxy

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the AppMesh monitor type to report metrics from AWS AppMesh Envoy Proxy.

To use this integration, you must also activate the Envoy StatsD sink on AppMesh and deploy the agent as a sidecar in the services that need to be monitored.

This integration is available on Kubernetes, Linux, and Windows.

Benefits

After you configure the integration, you can access these features:

Installation

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.

  2. Configure the integration, as described in the Configuration section.

  3. Restart the Splunk Distribution of the OpenTelemetry Collector.

Configuration

To use this Smart Agent monitor with the Collector:

  1. Include the Smart Agent receiver in your configuration file.

  2. Add the monitor type to the Collector configuration, in both the receivers and pipelines sections.

Example

To activate this integration, add the following to your Collector configuration:

receivers:
  smartagent/appmesh:
    type: appmesh
    ... # Additional config

Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:

service:
  pipelines:
    metrics:
      receivers: [smartagent/appmesh]
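
For reference, the two snippets above combine into a minimal Collector configuration like the following sketch. The signalfx exporter shown here is only an example; use whichever exporter your deployment already defines:

```yaml
receivers:
  smartagent/appmesh:
    type: appmesh

exporters:
  signalfx:    # example exporter; substitute the exporter used in your deployment
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/appmesh]
      exporters: [signalfx]
```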

AWS AppMesh Envoy Proxy

To configure the AWS AppMesh Envoy Proxy, add the following Envoy StatsD sink configuration on AppMesh:

stats_sinks:
  - name: "envoy.statsd"
    config:
      address:
        socket_address:
          address: "127.0.0.1"
          port_value: 8125
          protocol: "UDP"
      prefix: statsd.appmesh

The monitor removes this prefix from metric names before metric name conversion. Set the prefix field in the sink to the same value as the metricPrefix configuration option described in the following table, so that the monitor removes the specified prefix. If you don't specify a value for the prefix field, it defaults to envoy.
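
For example, to match the statsd.appmesh prefix configured in the sink shown earlier, the monitor configuration might look like the following sketch. The listenAddress and listenPort values shown are the defaults:

```yaml
receivers:
  smartagent/appmesh:
    type: appmesh
    listenAddress: localhost      # default
    listenPort: 8125              # default; must match port_value in the Envoy sink
    metricPrefix: statsd.appmesh  # must match the prefix set in the stats sink
```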

To learn more, see the Envoy API reference.

The following table shows the configuration options for this monitor:

| Option | Required | Type | Description |
| --- | --- | --- | --- |
| listenAddress | no | string | The host address on which to bind the UDP listener that accepts StatsD datagrams. The default value is localhost. |
| listenPort | no | integer | The port on which to listen for StatsD messages. The default value is 8125. |
| metricPrefix | no | string | The prefix in metric names that the monitor removes before metric name conversion. |

Metrics

The following metrics are available for this integration:

| Name | Description | Type | Category |
| --- | --- | --- | --- |
| circuit_breakers..cx_open | Whether the connection circuit breaker is closed (0) or open (1) | gauge | Custom |
| circuit_breakers..cx_pool_open | Whether the connection pool circuit breaker is closed (0) or open (1) | gauge | Custom |
| circuit_breakers..rq_pending_open | Whether the pending requests circuit breaker is closed (0) or open (1) | gauge | Custom |
| circuit_breakers..rq_open | Whether the requests circuit breaker is closed (0) or open (1) | gauge | Custom |
| circuit_breakers..rq_retry_open | Whether the retry circuit breaker is closed (0) or open (1) | gauge | Custom |
| circuit_breakers..remaining_cx | Number of remaining connections until the circuit breaker opens | gauge | Custom |
| circuit_breakers..remaining_pending | Number of remaining pending requests until the circuit breaker opens | gauge | Custom |
| circuit_breakers..remaining_rq | Number of remaining requests until the circuit breaker opens | gauge | Custom |
| circuit_breakers..remaining_retries | Number of remaining retries until the circuit breaker opens | gauge | Custom |
| membership_change | Total cluster membership changes | cumulative | Custom |
| membership_healthy | Current cluster healthy total (inclusive of both health checking and outlier detection) | gauge | Default |
| membership_degraded | Current cluster degraded total | gauge | Custom |
| membership_total | Current cluster membership total | gauge | Default |
| upstream_cx_total | Total connections | cumulative | Custom |
| upstream_cx_active | Total active connections | gauge | Custom |
| upstream_cx_http1_total | Total HTTP/1.1 connections | cumulative | Custom |
| upstream_cx_http2_total | Total HTTP/2 connections | cumulative | Custom |
| upstream_cx_connect_fail | Total connection failures | cumulative | Custom |
| upstream_cx_connect_timeout | Total connection connect timeouts | cumulative | Custom |
| upstream_cx_idle_timeout | Total connection idle timeouts | cumulative | Custom |
| upstream_cx_connect_attempts_exceeded | Total consecutive connection failures exceeding configured connection attempts | cumulative | Custom |
| upstream_cx_overflow | Total times that the cluster's connection circuit breaker overflowed | cumulative | Custom |
| upstream_cx_connect_ms | Connection establishment milliseconds | gauge | Custom |
| upstream_cx_length_ms | Connection length milliseconds | gauge | Custom |
| upstream_cx_destroy | Total destroyed connections | cumulative | Custom |
| upstream_cx_destroy_local | Total connections destroyed locally | cumulative | Custom |
| upstream_cx_destroy_remote | Total connections destroyed remotely | cumulative | Custom |
| upstream_cx_destroy_with_active_rq | Total connections destroyed with 1+ active request | cumulative | Custom |
| upstream_cx_destroy_local_with_active_rq | Total connections destroyed locally with 1+ active request | cumulative | Custom |
| upstream_cx_destroy_remote_with_active_rq | Total connections destroyed remotely with 1+ active request | cumulative | Custom |
| upstream_cx_close_notify | Total connections closed via HTTP/1.1 connection close header or HTTP/2 GOAWAY | cumulative | Custom |
| upstream_cx_rx_bytes_total | Total received connection bytes | cumulative | Default |
| upstream_cx_rx_bytes_buffered | Received connection bytes currently buffered | gauge | Custom |
| upstream_cx_tx_bytes_total | Total sent connection bytes | cumulative | Custom |
| upstream_cx_tx_bytes_buffered | Send connection bytes currently buffered | gauge | Custom |
| upstream_cx_pool_overflow | Total times that the cluster's connection pool circuit breaker overflowed | cumulative | Custom |
| upstream_cx_protocol_error | Total connection protocol errors | cumulative | Custom |
| upstream_cx_max_requests | Total connections closed due to maximum requests | cumulative | Custom |
| upstream_cx_none_healthy | Total times connection not established due to no healthy hosts | cumulative | Custom |
| upstream_rq_total | Total requests | cumulative | Custom |
| upstream_rq_active | Total active requests | gauge | Custom |
| upstream_rq_pending_total | Total requests pending a connection pool connection | cumulative | Custom |
| upstream_rq_pending_overflow | Total requests that overflowed connection pool circuit breaking and were failed | cumulative | Custom |
| upstream_rq_pending_failure_eject | Total requests that were failed due to a connection pool connection failure | cumulative | Custom |
| upstream_rq_pending_active | Total active requests pending a connection pool connection | gauge | Custom |
| upstream_rq_cancelled | Total requests cancelled before obtaining a connection pool connection | cumulative | Custom |
| upstream_rq_maintenance_mode | Total requests that resulted in an immediate 503 due to maintenance mode | cumulative | Custom |
| upstream_rq_timeout | Total requests that timed out waiting for a response | cumulative | Custom |
| upstream_rq_per_try_timeout | Total requests that hit the per try timeout | cumulative | Custom |
| upstream_rq_rx_reset | Total requests that were reset remotely | cumulative | Custom |
| upstream_rq_tx_reset | Total requests that were reset locally | cumulative | Custom |
| upstream_rq_retry | Total request retries | cumulative | Default |
| upstream_rq_retry_success | Total request retry successes | cumulative | Custom |
| upstream_rq_retry_overflow | Total requests not retried due to circuit breaking | cumulative | Custom |
| upstream_rq_completed | Total upstream requests completed | cumulative | Default |
| upstream_rq_2xx | Total number of HTTP response codes in the 200-299 range | cumulative | Custom |
| upstream_rq_3xx | Total number of HTTP response codes in the 300-399 range | cumulative | Custom |
| upstream_rq_4xx | Total number of HTTP response codes in the 400-499 range | cumulative | Default |
| upstream_rq_5xx | Total number of HTTP response codes in the 500-599 range | cumulative | Default |
| upstream_rq_<___> | Specific HTTP response codes (for example, 201 or 302) | cumulative | Custom |
| upstream_rq_time | Request time milliseconds | gauge | Default |
| external.upstream_rq_completed | Total external origin requests completed | cumulative | Custom |
| external.upstream_rq_<_xx> | External origin aggregate HTTP response codes | cumulative | Custom |
| external.upstream_rq_<_> | External origin specific HTTP response codes | cumulative | Custom |
| external.upstream_rq_time | External origin request time milliseconds | gauge | Custom |
| internal.upstream_rq_completed | Total internal origin requests completed | cumulative | Custom |
| internal.upstream_rq_<_xx> | Internal origin aggregate HTTP response codes | cumulative | Custom |
| internal.upstream_rq_<_> | Internal origin specific HTTP response codes | cumulative | Custom |
| internal.upstream_rq_time | Internal origin request time milliseconds | gauge | Custom |

Notes

  • To learn more about the metric types available in Splunk Observability Cloud, see Metric types.

  • In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.

  • In MTS-based subscription plans, all metrics are custom.

  • To add additional metrics, see how to configure extraMetrics in Add additional metrics.
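
For example, to emit metrics listed as Custom in the table above, you can list them under the monitor's extraMetrics option. The following is a sketch; the metric names are taken from the table, and the exact syntax is described in Add additional metrics:

```yaml
receivers:
  smartagent/appmesh:
    type: appmesh
    extraMetrics:
      - upstream_cx_active       # Custom metric from the table above
      - upstream_rq_retry_success
```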

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers:

Available to prospective customers and free trial users:

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Feb 11, 2025.