AWS AppMesh Envoy Proxy
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the AppMesh monitor type to report metrics from AWS AppMesh Envoy Proxy.
To use this integration, you must also activate the Envoy StatsD sink on AppMesh and deploy the agent as a sidecar in the services that need to be monitored.
This integration is available on Kubernetes, Linux, and Windows.
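As an illustrative sketch only, a sidecar deployment on Kubernetes adds the Collector as an extra container in the same pod as your service and its App Mesh Envoy proxy. The pod name, image tag, and secret reference below are assumptions, not part of this integration's documented setup; follow the Installation and Configuration sections for the supported procedure:
apiVersion: v1
kind: Pod
metadata:
  name: my-appmesh-service          # illustrative pod name
spec:
  containers:
    - name: app
      image: my-app:latest          # your application container
    - name: splunk-otel-collector   # Collector sidecar that runs the appmesh monitor
      image: quay.io/signalfx/splunk-otel-collector:latest   # assumed image tag
      env:
        - name: SPLUNK_ACCESS_TOKEN
          valueFrom:
            secretKeyRef:
              name: splunk-access-token   # illustrative secret name
              key: access-token
    # App Mesh typically injects the Envoy proxy container automatically.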
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
Configure the integration, as described in the Configuration section.
Restart the Splunk Distribution of the OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
See how to Use Smart Agent monitors with the Collector.
See how to set up the Smart Agent receiver.
For a list of common configuration options, refer to Common configuration settings for monitors.
Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/appmesh:
    type: appmesh
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics.receivers
section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/appmesh]
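For reference, a complete minimal configuration might resemble the following sketch. The signalfx exporter and the listenAddress, listenPort, and metricPrefix values shown here are assumptions for illustration; adjust them to match your deployment and the Envoy StatsD sink settings described in the next section:
receivers:
  smartagent/appmesh:
    type: appmesh
    listenAddress: 0.0.0.0        # assumed bind address for the StatsD listener
    listenPort: 8125              # assumed port; must match the Envoy StatsD sink
    metricPrefix: statsd.appmesh  # must match the sink's prefix
exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"
service:
  pipelines:
    metrics:
      receivers: [smartagent/appmesh]
      exporters: [signalfx]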
AWS AppMesh Envoy Proxy
To configure the AWS AppMesh Envoy Proxy, add the following lines to your Envoy configuration to activate the StatsD sink on AppMesh:
stats_sinks:
  - name: "envoy.statsd"
    config:
      address:
        socket_address:
          address: "127.0.0.1"
          port_value: 8125
          protocol: "UDP"
      prefix: statsd.appmesh
Because the monitor needs to remove the prefix from metric names before metric name conversion, set the prefix field in the sink to the same value as the metricPrefix configuration option described in the following table. The monitor then removes this specified prefix from incoming metric names. If you don't specify a value for the prefix field, it defaults to envoy.
To learn more, see the Envoy API reference.
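For example, to match the sink configuration shown above, set metricPrefix on the receiver to the same value as the sink's prefix. The following is a minimal sketch, assuming the statsd.appmesh prefix from the previous example:
receivers:
  smartagent/appmesh:
    type: appmesh
    # Must match the "prefix" value configured in the Envoy StatsD sink
    metricPrefix: statsd.appmesh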
The following table shows the configuration options for this monitor:
Option | Required | Type | Description |
---|---|---|---|
listenAddress | no | string | Host address on which to bind the UDP listener that accepts StatsD datagrams. The default value is localhost. |
listenPort | no | integer | Port on which to listen for StatsD messages. The default value is 8125. |
metricPrefix | no | string | Prefix in metric names that the monitor removes before metric name conversion. Set it to the same value as the prefix field in the Envoy StatsD sink. |
Metrics
The following metrics are available for this integration:
Name | Description | Type | Category |
---|---|---|---|
circuit_breakers.<priority>.cx_open | Whether the connection circuit breaker is closed (0) or open (1) | gauge | Custom |
circuit_breakers.<priority>.cx_pool_open | Whether the connection pool circuit breaker is closed (0) or open (1) | gauge | Custom |
circuit_breakers.<priority>.rq_pending_open | Whether the pending requests circuit breaker is closed (0) or open (1) | gauge | Custom |
circuit_breakers.<priority>.rq_open | Whether the requests circuit breaker is closed (0) or open (1) | gauge | Custom |
circuit_breakers.<priority>.rq_retry_open | Whether the retry circuit breaker is closed (0) or open (1) | gauge | Custom |
circuit_breakers.<priority>.remaining_cx | Number of remaining connections until the circuit breaker opens | gauge | Custom |
circuit_breakers.<priority>.remaining_pending | Number of remaining pending requests until the circuit breaker opens | gauge | Custom |
circuit_breakers.<priority>.remaining_rq | Number of remaining requests until the circuit breaker opens | gauge | Custom |
circuit_breakers.<priority>.remaining_retries | Number of remaining retries until the circuit breaker opens | gauge | Custom |
membership_change | Total cluster membership changes | cumulative | Custom |
membership_healthy | Current cluster healthy total (inclusive of both health checking and outlier detection) | gauge | Default |
membership_degraded | Current cluster degraded total | gauge | Custom |
membership_total | Current cluster membership total | gauge | Default |
upstream_cx_total | Total connections | cumulative | Custom |
upstream_cx_active | Total active connections | gauge | Custom |
upstream_cx_http1_total | Total HTTP/1.1 connections | cumulative | Custom |
upstream_cx_http2_total | Total HTTP/2 connections | cumulative | Custom |
upstream_cx_connect_fail | Total connection failures | cumulative | Custom |
upstream_cx_connect_timeout | Total connection connect timeouts | cumulative | Custom |
upstream_cx_idle_timeout | Total connection idle timeouts | cumulative | Custom |
upstream_cx_connect_attempts_exceeded | Total consecutive connection failures exceeding configured connection attempts | cumulative | Custom |
upstream_cx_overflow | Total times that the cluster's connection circuit breaker overflowed | cumulative | Custom |
upstream_cx_connect_ms | Connection establishment milliseconds | gauge | Custom |
upstream_cx_length_ms | Connection length milliseconds | gauge | Custom |
upstream_cx_destroy | Total destroyed connections | cumulative | Custom |
upstream_cx_destroy_local | Total connections destroyed locally | cumulative | Custom |
upstream_cx_destroy_remote | Total connections destroyed remotely | cumulative | Custom |
upstream_cx_destroy_with_active_rq | Total connections destroyed with 1+ active request | cumulative | Custom |
upstream_cx_destroy_local_with_active_rq | Total connections destroyed locally with 1+ active request | cumulative | Custom |
upstream_cx_destroy_remote_with_active_rq | Total connections destroyed remotely with 1+ active request | cumulative | Custom |
upstream_cx_close_notify | Total connections closed via HTTP/1.1 connection close header or HTTP/2 GOAWAY | cumulative | Custom |
upstream_cx_rx_bytes_total | Total received connection bytes | cumulative | Default |
upstream_cx_rx_bytes_buffered | Received connection bytes currently buffered | gauge | Custom |
upstream_cx_tx_bytes_total | Total sent connection bytes | cumulative | Custom |
upstream_cx_tx_bytes_buffered | Send connection bytes currently buffered | gauge | Custom |
upstream_cx_pool_overflow | Total times that the cluster's connection pool circuit breaker overflowed | cumulative | Custom |
upstream_cx_protocol_error | Total connection protocol errors | cumulative | Custom |
upstream_cx_max_requests | Total connections closed due to maximum requests | cumulative | Custom |
upstream_cx_none_healthy | Total times connection not established due to no healthy hosts | cumulative | Custom |
upstream_rq_total | Total requests | cumulative | Custom |
upstream_rq_active | Total active requests | gauge | Custom |
upstream_rq_pending_total | Total requests pending a connection pool connection | cumulative | Custom |
upstream_rq_pending_overflow | Total requests that overflowed connection pool circuit breaking and were failed | cumulative | Custom |
upstream_rq_pending_failure_eject | Total requests that were failed due to a connection pool connection failure | cumulative | Custom |
upstream_rq_pending_active | Total active requests pending a connection pool connection | gauge | Custom |
upstream_rq_cancelled | Total requests cancelled before obtaining a connection pool connection | cumulative | Custom |
upstream_rq_maintenance_mode | Total requests that resulted in an immediate 503 due to maintenance mode | cumulative | Custom |
upstream_rq_timeout | Total requests that timed out waiting for a response | cumulative | Custom |
upstream_rq_per_try_timeout | Total requests that hit the per try timeout | cumulative | Custom |
upstream_rq_rx_reset | Total requests that were reset remotely | cumulative | Custom |
upstream_rq_tx_reset | Total requests that were reset locally | cumulative | Custom |
upstream_rq_retry | Total request retries | cumulative | Default |
upstream_rq_retry_success | Total request retry successes | cumulative | Custom |
upstream_rq_retry_overflow | Total requests not retried due to circuit breaking | cumulative | Custom |
upstream_rq_completed | Total upstream requests completed | cumulative | Default |
upstream_rq_2xx | Total number of HTTP response codes in the 200-299 range | cumulative | Custom |
upstream_rq_3xx | Total number of HTTP response codes in the 300-399 range | cumulative | Custom |
upstream_rq_4xx | Total number of HTTP response codes in the 400-499 range | cumulative | Default |
upstream_rq_5xx | Total number of HTTP response codes in the 500-599 range | cumulative | Default |
upstream_rq_<___> | Specific HTTP response codes (e.g., 201, 302, etc.) | cumulative | Custom |
upstream_rq_time | Request time milliseconds | gauge | Default |
external.upstream_rq_completed | Total external origin requests completed | cumulative | Custom |
external.upstream_rq_<_xx> | External origin aggregate HTTP response codes | cumulative | Custom |
external.upstream_rq_<_> | External origin specific HTTP response codes | cumulative | Custom |
external.upstream_rq_time | External origin request time milliseconds | gauge | Custom |
internal.upstream_rq_completed | Total internal origin requests completed | cumulative | Custom |
internal.upstream_rq_<_xx> | Internal origin aggregate HTTP response codes | cumulative | Custom |
internal.upstream_rq_<_> | Internal origin specific HTTP response codes | cumulative | Custom |
internal.upstream_rq_time | Internal origin request time milliseconds | gauge | Custom |
Notes
To learn more about the metric types available in Splunk Observability Cloud, see Metric types.
In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
In MTS-based subscription plans, all metrics are custom.
To add additional metrics, see how to configure extraMetrics in Add additional metrics.
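For example, the following is a minimal sketch of the receiver with extraMetrics, assuming you want to collect two of the non-default metrics listed in the table above:
receivers:
  smartagent/appmesh:
    type: appmesh
    # Report these non-default (custom) metrics in addition to the default ones
    extraMetrics:
      - upstream_rq_total
      - upstream_cx_active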
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.