
Kong Gateway πŸ”—

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the kong monitor type to provide service traffic metrics using kong-plugin-signalfx, which emits metrics for configurable request and response lifecycle groups, including:

  • Counters for response counts

  • Counters for cumulative response and request sizes

  • Counters for cumulative request, upstream, and Kong latencies

You can partition request and response lifecycle groups by:

  • API or Service Name/ID

  • Route ID

  • Request HTTP Method

  • Response HTTP Status Code

In addition, the integration provides system-wide connection statistics, including:

  • A counter for total fielded requests

  • Gauges for active connections and their various states

  • A gauge for database connectivity

This integration is only available on Kubernetes and Linux, and requires version 0.11.2 or higher of Kong and version 0.0.1 or higher of kong-plugin-signalfx. This integration is only supported for Kong Gateway Community Edition (CE).
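
If you're unsure whether your gateway meets the minimum Kong version, you can check it directly on the gateway host:

kong version   # prints the installed Kong version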

Benefits πŸ”—

After you configure the integration, you can access these features:

  • View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.

Installation πŸ”—

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform. For a Linux host, see the installer script sketch after this list.

  2. Configure the monitor, as described in the Configuration section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.
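
As an illustrative sketch only: on a Linux host, one common way to deploy the Collector is the installer script, where <REALM> and <ACCESS_TOKEN> are placeholders for your own Splunk Observability Cloud realm and access token. See the Collector installation documentation for the full, platform-specific instructions.

# Download and run the Splunk OpenTelemetry Collector installer script (Linux)
curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh > /tmp/splunk-otel-collector.sh
sudo sh /tmp/splunk-otel-collector.sh --realm <REALM> -- <ACCESS_TOKEN>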

Kong installation πŸ”—

Besides the Collector, you also need both the kong-plugin-signalfx Kong plugin and the kong monitor to activate this integration.

Follow these steps to deploy the integration:

  1. Run the following commands on each Kong server with a configured LUA_PATH:

    luarocks install kong-plugin-signalfx
    # Or directly from the source repo
    git clone git@github.com:signalfx/kong-plugin-signalfx.git
    cd kong-plugin-signalfx
    luarocks make
    # Then notify Kong of the plugin or add to your existing configuration file
    echo 'custom_plugins = signalfx' > /etc/kong/signalfx.conf
    
  2. Add the following lua_shared_dict memory declarations to Kong's NGINX configuration file, or add them directly to /usr/local/share/lua/5.1/kong/templates/nginx_kong.lua if you are using the default Kong setup:

    lua_shared_dict kong_signalfx_aggregation 10m;
    lua_shared_dict kong_signalfx_locks 100k;
    
  3. Reload Kong to make the plugin available and install it globally (a quick verification call is sketched after this list):

    kong reload -c /etc/kong/signalfx.conf  # Or specify your modified configuration file
    curl -X POST -d "name=signalfx" http://localhost:8001/plugins
    
  4. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  5. Configure the monitor, as described in the next section.
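
To confirm that Kong registered the plugin after step 3, you can list the configured plugins through the Admin API. This is a quick sanity check, assuming the default Admin API address of localhost:8001; adjust the host and port for your deployment.

# Lists configured plugins; the response should include an entry named "signalfx"
curl http://localhost:8001/plugins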

Configuration πŸ”—

To use this Smart Agent monitor with the Collector:

  1. Include the Smart Agent receiver in your configuration file.

  2. Add the monitor type to the Collector configuration, in both the receivers and pipelines sections.

Example πŸ”—

To activate this integration, add the following to your Collector configuration:

receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8001
    metrics:
      - metric: request_latency
        report: true
      - metric: connections_accepted
        report: false
    ...  # Additional config

Next, add the monitor to the service > pipelines > metrics > receivers section of your configuration file:

service:
  pipelines:
    metrics:
      receivers: [smartagent/kong]

Filter example πŸ”—

The following is a sample configuration with a custom /signalfx route and filter lists:

receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8443
    url: https://127.0.0.1:8443/routed_signalfx
    authHeader:
      header: Authorization
      value: HeaderValue
    metrics:
      - metric: request_latency
        report: true
    reportStatusCodeGroups: true
    statusCodes:
      - 202
      - 403
      - 405
      - 419
      - "5*"
    serviceNamesBlacklist:
      - "*SomeService*"

Kong configuration πŸ”—

Like most Kong plugins, you can configure the SignalFx kong integration globally or for specific Service, Route, API, or Consumer object contexts by making POST requests to each object's plugins endpoint. For example:

curl -X POST -d "name=signalfx" http://localhost:8001/services/<my_service>/plugins
curl -X POST -d "name=signalfx" http://localhost:8001/routes/<my_route_id>/plugins

For each request made to the respective registered object context, the kong integration obtains metric content and aggregates it for automated retrieval at the /signalfx endpoint of the Admin API. Although you can activate request contexts for specific Consumer objects, consumer IDs or unique visitor metrics are not calculated.
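
To check the aggregated data manually, you can query the same /signalfx endpoint that the kong monitor polls. This is a sanity check only, assuming the default Admin API address; adjust the host, port, or custom route to match your setup.

# Returns the aggregated metric content collected by the plugin
curl http://localhost:8001/signalfx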

By default, the kong integration aggregates metrics by a context determined by the HTTP method of the request and by the status code of the response. If you’re monitoring a large infrastructure with hundreds of routes, grouping by HTTP method might be too granular. You can deactivate context grouping by setting aggregate_by_http_method to false:

curl -X POST -d "name=signalfx" -d "config.aggregate_by_http_method=false" http://localhost:8001/plugins
# or to edit an existing plugin
curl -X PATCH -d "config.aggregate_by_http_method=false" http://localhost:8001/plugins/<sfx_plugin_id>

Metrics πŸ”—

These metrics are available for this integration:

Notes πŸ”—

  • Learn more about the available metric types in Observability Cloud.

  • Default metrics are those metrics included in host-based subscriptions in Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See more about metric categories.

  • To add additional metrics, see how to configure extraMetrics using the Collector.
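
For example, extraMetrics is set on the Smart Agent receiver entry. The following is a minimal sketch only; the glob is illustrative, and you should replace it with the specific metric names you need from the metrics table for this monitor.

receivers:
  smartagent/kong:
    type: collectd/kong
    host: 127.0.0.1
    port: 8001
    extraMetrics:
      - "counter.kong.*"  # illustrative glob; replace with the metrics you need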

Troubleshooting πŸ”—

If you are not able to see your data in Splunk Observability Cloud, try these tips:

To learn about even more support options, see Splunk Customer Success.