Elasticsearch query
This integration is in beta.
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the `elasticsearch-query` monitor type to metricize aggregated responses from Elasticsearch. The integration constructs Splunk Observability Cloud data points based on Elasticsearch aggregation types and aggregation names.
Benefits
After you configure the integration, you can access these features:
- View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
- View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
- Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata Catalog.
Data model transformation
This integration transforms Elasticsearch responses into Splunk Observability Cloud data points.
At a high level, it metricizes responses of the following types:
- Metric aggregations inside one or more Bucket aggregations, such as the `terms` and `filters` aggregations. Dimensions on a data point are determined by the aggregation name (dimension name) and the `key` of each bucket (dimension value). The metric name is derived from the name of the metric aggregation and, in the case of multi-value aggregations, from its individual values. A dimension called `metric_aggregation_type` is also set on the corresponding data points. Metric aggregations applied without any Bucket aggregation are transformed in the same way.
- Bucket aggregations that do not have any Metric aggregations as sub-aggregations are transformed to a metric called `<name_of_aggregation>.doc_count`, with the `bucket_aggregation_name` dimension in addition to the `key` of each bucket. See the sketch after the following note.
Note: Since Bucket aggregations determine dimensions in Splunk Observability Cloud, in most cases Bucket aggregations should be performed on `string` fields that represent a slice of the data from Elasticsearch.
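For example, the following is a minimal sketch of the bucket-only case: a `terms` aggregation with no Metric sub-aggregation. The `host` field name is reused from the examples below purely for illustration.

```json
{
  "aggs": {
    "host": {
      "terms": { "field": "host" }
    }
  }
}
```

Based on the transformation described above, each bucket in the response would produce a data point for the `host.doc_count` metric, with a `host` dimension holding the bucket `key` and the `bucket_aggregation_name` dimension also set on the data point.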
Example: avg metric aggregation
`avg` metric aggregation as a sub-aggregation of a `terms` bucket aggregation:
```json
{
"aggs":{
"host" : {
"terms":{"field" : "host"},
"aggs": {
"average_cpu_usage": {
"avg": {
"field": "cpu_utilization"
}
}
}
}
}
}
```
This query results in a metric called `elasticsearch_query.average_cpu_usage`, where the data point has a `host` dimension with its value being the `key` of a bucket in the response. The type of the metric aggregation (`avg`) is set on the data point as the `metric_aggregation_type` dimension.
For instance, the JSON response below provides 4 data points, each with a different value for `host`:
```json
"aggregations" : {
"host" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "helsinki",
"doc_count" : 13802,
"average_cpu_usage" : {
"value" : 49.77438052456166
}
},
{
"key" : "lisbon",
"doc_count" : 13802,
"average_cpu_usage" : {
"value" : 49.919866685987536
}
},
{
"key" : "madrid",
"doc_count" : 13802,
"average_cpu_usage" : {
"value" : 49.878350963628456
}
},
{
"key" : "nairobi",
"doc_count" : 13802,
"average_cpu_usage" : {
"value" : 49.99789885523837
}
}
]
}
}
```
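As an illustration only (this is not an exact wire format), the 4 data points derived from this response would look like the following:

```
elasticsearch_query.average_cpu_usage = 49.77438052456166   {host: helsinki, metric_aggregation_type: avg}
elasticsearch_query.average_cpu_usage = 49.919866685987536  {host: lisbon,   metric_aggregation_type: avg}
elasticsearch_query.average_cpu_usage = 49.878350963628456  {host: madrid,   metric_aggregation_type: avg}
elasticsearch_query.average_cpu_usage = 49.99789885523837   {host: nairobi,  metric_aggregation_type: avg}
```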
Example: extended_stats metric aggregation
`extended_stats` metric aggregation as a sub-aggregation of a `terms` bucket aggregation, followed by a sample response:
```json
{
"aggs":{
"host" : {
"terms":{"field" : "host"},
"aggs": {
"cpu_usage_stats": {
"extended_stats": {
"field": "cpu_utilization"
}
}
}
}
}
}
```
```json
"aggregations" : {
"host" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "helsinki",
"doc_count" : 13996,
"cpu_usage_stats" : {
"count" : 13996,
"min" : 0.0,
"max" : 100.0,
"avg" : 49.86660474421263,
"sum" : 697933.0
}
},
{
"key" : "lisbon",
"doc_count" : 13996,
"cpu_usage_stats" : {
"count" : 13996,
"min" : 0.0,
"max" : 100.0,
"avg" : 49.88225207202058,
"sum" : 698152.0
}
},
{
"key" : "madrid",
"doc_count" : 13996,
"cpu_usage_stats" : {
"count" : 13996,
"min" : 0.0,
"max" : 100.0,
"avg" : 49.92469276936267,
"sum" : 698746.0
}
},
{
"key" : "nairobi",
"doc_count" : 13996,
"cpu_usage_stats" : {
"count" : 13996,
"min" : 0.0,
"max" : 100.0,
"avg" : 49.98320948842527,
"sum" : 699565.0
}
}
]
}
}
```
In this case, each bucket outputs 5 metrics:
1. `cpu_usage_stats.count`
2. `cpu_usage_stats.min`
3. `cpu_usage_stats.max`
4. `cpu_usage_stats.avg`
5. `cpu_usage_stats.sum`
The dimensions are derived in the same manner as the previous example.
Installation
Follow these steps to deploy this integration:

1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.
2. Configure the integration, as described in the Configuration section.
3. Restart the Splunk Distribution of the OpenTelemetry Collector.
Configuration
To use this integration of a Smart Agent monitor with the Collector:

1. Include the Smart Agent receiver in your configuration file.
2. Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.

- See how to Use Smart Agent monitors with the Collector.
- See how to set up the Smart Agent receiver.
- For a list of common configuration options, refer to Common configuration settings for monitors.
- Learn more about the Collector at Get started: Understand and use the Collector.
Example
To activate this integration, add the following to your Collector configuration:
```yaml
receivers:
  smartagent/elasticsearch-query:
    type: elasticsearch-query
    ... # Additional config
```
Next, add the monitor to the `service.pipelines.metrics.receivers` section of your configuration file:
```yaml
service:
  pipelines:
    metrics:
      receivers: [smartagent/elasticsearch-query]
```
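The following is a fuller sketch of the receiver section that pairs the `avg` query from the earlier example with connection options, assuming the `host`, `port`, `index`, and `elasticsearchRequest` settings shown in the table below. The host, port, and index values are placeholders; adjust them for your Elasticsearch deployment.

```yaml
receivers:
  smartagent/elasticsearch-query:
    type: elasticsearch-query
    host: localhost        # Placeholder: Elasticsearch host to query
    port: "9200"           # Placeholder: Elasticsearch HTTP port
    index: cpu-metrics     # Placeholder: index to query
    # The Elasticsearch search request body, passed as a string
    elasticsearchRequest: |-
      {
        "aggs": {
          "host": {
            "terms": {"field": "host"},
            "aggs": {
              "average_cpu_usage": {
                "avg": {"field": "cpu_utilization"}
              }
            }
          }
        }
      }
```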
Configuration options
See the configuration example in GitHub for specific use cases that show how the Splunk Distribution of OpenTelemetry Collector can integrate and complement existing environments.
For Kubernetes, see the kubernetes.yaml in GitHub for the Agent and Gateway YAML files.
For Prometheus, see Prometheus Federation Endpoint Example in GitHub for an example of how the OTel Collector works with Splunk Enterprise and an existing Prometheus deployment.
Configuration settings
The following table shows the configuration options for this integration:
| Option | Required | Type | Description |
| --- | --- | --- | --- |
| `httpTimeout` | no | `int64` | HTTP timeout duration for both reads and writes. Use a duration string accepted by ParseDuration, for example `10s`. |
| `username` | no | `string` | Basic Auth username to use on each request, if any. |
| `password` | no | `string` | Basic Auth password to use on each request, if any. |
| `useHTTPS` | no | `bool` | If `true`, the agent connects to the host using HTTPS instead of plain HTTP. The default value is `false`. |
| `httpHeaders` | no | `map of strings` | A map of HTTP header names to values. Comma-separated multiple values for the same message header are supported. |
| `skipVerify` | no | `bool` | If `useHTTPS` is `true` and this option is also `true`, the TLS certificate of the host is not verified. The default value is `false`. |
| `caCertPath` | no | `string` | Path to the CA certificate that has signed the TLS certificate. Unnecessary if `skipVerify` is set to `false`. |
| `clientCertPath` | no | `string` | Path to the client TLS cert to use for TLS required connections |
| `clientKeyPath` | no | `string` | Path to the client TLS key to use for TLS required connections |
| `host` | yes | `string` | Host name of the Elasticsearch instance to query. |
| `port` | yes | `string` | Port of the Elasticsearch HTTP endpoint. |
| `index` | no | `string` | Index to query. If no index is provided, all indexes are queried. |
| `elasticsearchRequest` | yes | `string` | Body of the Elasticsearch search request whose aggregated response is metricized, as a JSON string. |
Metrics
The Splunk Distribution of OpenTelemetry Collector does not do any built-in filtering of metrics for this receiver.
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers:

- Submit a case in the Splunk Support Portal.
- Contact Splunk Support.

Available to prospective customers and free trial users:

- Ask a question and get answers through community support at Splunk Answers.
- Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.