Kong Gateway 🔗
The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the kong monitor type to provide service traffic metrics using kong-plugin-signalfx. The plugin emits metrics for configurable request and response lifecycle groups, including:
Counters for response counts
Counters for cumulative response and request sizes
Counters for cumulative request, upstream, and Kong latencies
You can partition request and response lifecycle groups by:
API or Service Name/ID
Request HTTP Method
Response HTTP Status Code
In addition, the integration provides system-wide connection statistics, including:
A counter for total fielded requests
Gauges for active connections and their various states
A gauge for database connectivity
This integration is only available on Kubernetes and Linux, and requires
version 0.11.2 or higher of Kong and version 0.0.1 or higher of
kong-plugin-signalfx. This integration is only supported for Kong
Gateway Community Edition (CE).
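You can confirm that your installation meets the version requirement from the command line; kong version is part of the standard Kong CLI:
kong version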
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Splunk Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Use navigators in Splunk Infrastructure Monitoring.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Search the Metric Finder and Metadata catalog.
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
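On a Linux host where the Collector runs as a systemd service, the restart step typically looks like the following; splunk-otel-collector is the default service name installed by the Splunk Distribution of OpenTelemetry Collector packages, so adjust it if your deployment differs:
sudo systemctl restart splunk-otel-collector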
Kong installation 🔗
In addition to the Collector, you also need both the
kong-plugin-signalfx Kong plugin and the
kong SignalFx monitor
to activate this integration.
Follow these steps to deploy the integration:
Run the following commands on each Kong server with a configured luarocks installation:
luarocks install kong-plugin-signalfx
# Or directly from the source repo
git clone git@github.com:signalfx/kong-plugin-signalfx.git
cd kong-plugin-signalfx
luarocks make
# Then notify Kong of the plugin or add to your existing configuration file
echo 'custom_plugins = signalfx' > /etc/kong/signalfx.conf
Add the following lua_shared_dict memory declarations to the NGINX configuration file of Kong, or add them directly to /usr/local/share/lua/5.1/kong/templates/nginx_kong.lua if you are using the default Kong setup:
lua_shared_dict kong_signalfx_aggregation 10m;
lua_shared_dict kong_signalfx_locks 100k;
Reload Kong to make the plugin available and install it globally (a verification sketch follows these steps):
kong reload -c /etc/kong/signalfx.conf # Or specify your modified configuration file
curl -X POST -d "name=signalfx" http://localhost:8001/plugins
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the next section.
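After the reload step, you can confirm that Kong registered the plugin by listing the configured plugins through the Admin API; /plugins is Kong's standard plugin listing endpoint:
curl http://localhost:8001/plugins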
To use this integration of a Smart Agent monitor with the Collector:
Include the Smart Agent receiver in your configuration file.
Add the monitor type to the Collector configuration, both in the receiver and pipelines sections.
See how to Use Smart Agent monitors with the Collector
See how to set up the Smart Agent receiver
Learn about config options in Collector default configuration
To activate this integration, add the following to your Collector configuration:
receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8001
    metrics:
      - metric: request_latency
        report: true
      - metric: connections_accepted
        report: false
    ... # Additional config
Next, add the monitor to the service.pipelines.metrics section of your configuration file:
service:
  pipelines:
    metrics:
      receivers: [smartagent/kong]
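Putting the two snippets together, a minimal end-to-end configuration might look like the following sketch. The signalfx exporter and the SPLUNK_ACCESS_TOKEN and SPLUNK_REALM environment variables are illustrative assumptions; substitute the exporter and credentials your deployment actually uses.
receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8001

exporters:
  # Assumed exporter for Splunk Observability Cloud; replace as needed
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: ${SPLUNK_REALM}

service:
  pipelines:
    metrics:
      receivers: [smartagent/kong]
      exporters: [signalfx]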
Filter example 🔗
The following is a sample configuration with a custom endpoint, an authentication header, and status code and service name filter lists:
receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8443
    url: https://127.0.0.1:8443/routed_signalfx
    authHeader:
      header: Authorization
      value: HeaderValue
    metrics:
      - metric: request_latency
        report: true
    reportStatusCodeGroups: true
    statusCodes:
      - 202
      - 403
      - 405
      - 419
      - "5*"
    serviceNamesBlacklist:
      - "*SomeService*"
Kong configuration 🔗
Like most Kong plugins, you can configure the SignalFx integration globally or for specific service, route, API, or consumer object contexts by making POST requests to the respective plugins endpoint of the Admin API. For example:
curl -X POST -d "name=signalfx" http://localhost:8001/services/<my_service>/plugins
curl -X POST -d "name=signalfx" http://localhost:8001/routes/<my_route_id>/plugins
For each request made to the respective registered object context, the
kong integration obtains metric content and aggregates it for
automated retrieval at the
/signalfx endpoint of the Admin API.
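You can inspect this aggregated metric content directly by querying that endpoint, assuming the Admin API listens locally on the default port:
curl http://localhost:8001/signalfx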
Although you can activate request contexts for specific Consumer
objects, consumer IDs or unique visitor metrics are not calculated.
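As a sketch, scoping the plugin to a specific Consumer object follows the same Admin API pattern as services and routes; <my_consumer> is a placeholder for your consumer's username or ID:
curl -X POST -d "name=signalfx" http://localhost:8001/consumers/<my_consumer>/plugins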
By default, the
kong integration aggregates metrics by a context
determined by the HTTP method of the request and by the status code of
the response. If you’re monitoring a large infrastructure with hundreds
of routes, grouping by HTTP method might be too granular. You can
deactivate this context grouping by setting config.aggregate_by_http_method to false:
curl -X POST -d "name=signalfx" -d "config.aggregate_by_http_method=false" http://localhost:8001/plugins
# or to edit an existing plugin
curl -X PATCH -d "config.aggregate_by_http_method=false" http://localhost:8001/plugins/<sfx_plugin_id>
These metrics are available for this integration:
To learn more about the metric types available in Observability Cloud, see Metric types.
In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.
In MTS-based subscription plans, all metrics are custom.
To add additional metrics, see how to configure extraMetrics in Add additional metrics.
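As a sketch, assuming the Smart Agent receiver accepts the extraMetrics option for this monitor, enabling a non-default metric might look like the following; the metric name shown is a hypothetical example, so substitute a name from this integration's metrics list:
receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8001
    extraMetrics:
      - gauge.kong.connections.active  # hypothetical metric name for illustration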
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to prospective customers and free trial users:
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.
To learn about even more support options, see Splunk Customer Success.