Scale HTTP Event Collector with distributed deployments
HTTP Event Collector (HEC) can scale to consume, distribute, and index very large quantities of data by taking advantage of your distributed deployment. Unlike in a typical distributed deployment, local forwarders are not necessary for collecting HEC event data. Splunk Enterprise can accept data from your data sources through HTTP Event Collector, and then distribute that data to indexers.
You can use a deployment server to distribute HTTP Event Collector configuration information to the rest of your deployment. This configuration information can include a custom HTTP Event Collector port number, a preferred protocol (HTTP or HTTPS), SSL settings, and HTTP Event Collector tokens.
You should be familiar with distributed Splunk Enterprise deployment before proceeding. For more information about distributed deployment, see Distributed Splunk Enterprise overview in the Distributed Deployment Manual, and Components of a Splunk Enterprise deployment in the Capacity Planning Manual. For more information about deployment server, see About deployment server and forwarder management in the Updating Splunk Enterprise Instances manual.
Distributed deployment scenarios
This section describes three common distributed deployment scenarios for accepting and indexing large quantities of event data. There are many more possible scenarios, of course, but these should give you a starting point when planning a Splunk Enterprise deployment that will ingest large quantities of data using HTTP Event Collector. The scenarios are listed in order of capacity, from lowest to highest.
- Scenario 1: One HEC server, pool of indexers
- Scenario 2: Traffic load balancer, no forwarder, pool of indexers, using deployment server
- Scenario 3: Traffic load balancer, multiple HEC instances running on forwarders, each forwarding to one or more indexers, using deployment server
In all distributed deployment configurations, HTTP Event Collector receives events. Then, depending on which configuration is chosen, Splunk Enterprise will either index the events locally or forward them to a pool of indexers.
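For context, clients deliver events to HTTP Event Collector as ordinary HTTP requests. The following is a sketch of such a request; the hostname and token value are hypothetical, and 8088 is the default HEC port:

```http
POST /services/collector/event HTTP/1.1
Host: hec.example.com:8088
Authorization: Splunk 12345678-1234-1234-1234-123456789012
Content-Type: application/json

{"event": "user login succeeded", "sourcetype": "app_log", "host": "web01"}
```

On success, HEC responds with a small JSON acknowledgment such as {"text":"Success","code":0}.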
Scenario 1: One HEC server, pool of indexers
In this scenario, event data is sent by clients to HTTP Event Collector running on a single Splunk Enterprise instance acting as a forwarder. This instance distributes the event data evenly to indexers. You can specify groups of indexers to which to send data by configuring an output group. Once the data is indexed, you can search it using a single search head or using distributed search.
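A minimal outputs.conf sketch for this scenario, assuming two hypothetical indexers listening on the default receiving port 9997:

```ini
# outputs.conf on the single HEC instance acting as a forwarder
[tcpout]
defaultGroup = hec_indexers

# The output group: data is load-balanced evenly across these indexers
[tcpout:hec_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```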
Because this scenario only involves one instance of Splunk Enterprise using HTTP Event Collector, a deployment server is not necessary to distribute configuration and settings. This scenario is sufficient for providing reliability when the HTTP Event Collector data ingestion volume is not high—for instance, when adding HEC data collection to an existing distributed Splunk Enterprise deployment. For deployments that will be accepting larger quantities of HEC data, see the next section.
Scenario 2: Traffic load balancer, no forwarder, pool of indexers, using deployment server
In this scenario, there are so many clients making HTTP requests that a single HTTP Event Collector endpoint would be overwhelmed. To compensate, set up a network traffic load balancer, such as NGINX, in front of several Splunk Enterprise indexers. The load balancer distributes client traffic among several HTTP Event Collector endpoints, and the Splunk Enterprise instances behind it index the HEC data. Once the data is indexed, you can search it using a single search head or using distributed search.

This scenario relies on distributing configuration to the indexers using deployment server. Each indexer is a deployment client. Tokens are managed centrally on the Splunk Enterprise instance running deployment server, using the UI, CLI, or REST API, and any configuration changes are then made available to the deployment clients.

The advantages of this scenario are increased data volume capacity and high availability. By indexing on the same Splunk Enterprise instances that collect HTTP Event Collector data, you don't need a separate tier of forwarders, which reduces complexity. However, a high enough volume of incoming data could saturate data I/O and degrade ingestion performance. If you need more performance or more control over how you scale out, consider the next scenario.
For more information about how to set up an NGINX load balancer for use with HTTP Event Collector, see Configure an NGINX load balancer for HTTP Event Collector.
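As a rough illustration of the load-balancing tier, the following is a minimal NGINX sketch, not a production configuration; the indexer hostnames and certificate paths are hypothetical:

```nginx
# Distribute incoming HEC traffic across three indexers running HEC on 8088
upstream hec_indexers {
    server idx1.example.com:8088;
    server idx2.example.com:8088;
    server idx3.example.com:8088;
}

server {
    listen 8088 ssl;
    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # Forward all HEC requests to the upstream pool
    location /services/collector {
        proxy_pass https://hec_indexers;
    }
}
```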
Scenario 3: Traffic load balancer, multiple HEC instances running on forwarders, each forwarding to one or more indexers, using deployment server
The third scenario is for the highest throughput and volume of data. As in the previous scenario, a traffic load balancer distributes incoming requests, but instead of routing the data to HTTP Event Collector instances running on indexers, HEC runs on heavy forwarders. The forwarders distribute the data to dedicated indexers or groups of indexers. As with the previous scenario, you use deployment server to distribute configuration to the forwarders, which are the deployment clients.
The advantages of this scenario are maximum throughput, scale, and availability. You have a dedicated pool of HTTP Event Collector instances whose only job is to receive and forward data, and you can add more HEC instances without necessarily adding more indexers. If the indexing tier becomes a bottleneck, add more indexers. Though this architecture increases reliability in several places, the tradeoff is more moving parts and greater complexity.
Specifying groups of indexers
To index large amounts of data, you will likely need multiple indexers. You can specify groups of indexers to handle indexing your HTTP Event Collector data. These are called output groups. You can use output groups to, for example, index only certain kinds of data or data from certain sources. Though using output groups to route data to specific indexers is similar to the routing and filtering capabilities built into Splunk Enterprise, output groups allow you to specify groups of indexers on a token-by-token basis.
When you configure output groups with multiple indexers, Splunk Enterprise evenly distributes data among the servers in your output group.
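Because output groups can be chosen on a token-by-token basis, a HEC token defined in inputs.conf can name the output group that should receive its events via the outputgroup setting. A sketch, with hypothetical token, group, and host names:

```ini
# inputs.conf -- route events received with this token to a specific group
[http://app_logs_token]
token = 12345678-1234-1234-1234-123456789012
outputgroup = app_indexers

# outputs.conf -- the output group referenced above
[tcpout:app_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```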
You configure output groups in the outputs.conf file. Specifically, for HTTP Event Collector, edit the outputs.conf file at $SPLUNK_HOME/etc/apps/splunk_httpinput/local/ (%SPLUNK_HOME%\etc\apps\splunk_httpinput\local\ on Microsoft Windows hosts). If either the local directory or the outputs.conf file doesn't exist at this location, create it (or both).
HTTP Event Collector is not an app, but it stores its configuration in the $SPLUNK_HOME/etc/apps/splunk_httpinput/ directory (%SPLUNK_HOME%\etc\apps\splunk_httpinput\ on Windows) so that its configuration can be easily deployed using built-in app deployment capabilities.
Setting up distributed deployment of HTTP Event Collector data
If you need to use multiple HTTP Event Collector endpoints, such as in the second and third scenarios above, you'll need to set up a distributed deployment that uses deployment server. Setting up a distributed deployment is covered elsewhere in Splunk documentation, but here you'll find information specific to HTTP Event Collector and HEC token management. It's important to remember that using HTTP Event Collector and distributing its configuration in a distributed deployment uses the standard, built-in deployment server mechanism. If you are familiar with distributed Splunk Enterprise, you already have the tools you need to set up distributed HTTP Event Collector. To set up a distributed deployment of HTTP Event Collector, do the following:
- Plan the deployment. Decide which Splunk Enterprise instances will be used as deployment clients, and which instance will be the deployment server. If your scenario calls for one, also plan for a network traffic load balancer (such as NGINX). For help with doing this, see Plan a deployment in the Updating Splunk Enterprise Instances manual.
- Define a server class. A server class is a group of deployment clients that you can manage as a single unit. Assign the deployment clients you want to use in your HTTP Event Collector deployment to a common server class. Later, when you distribute HTTP Event Collector settings to the deployment clients, only members of that server class will receive the configuration settings. Edit the serverclass.conf file on the deployment server, at $SPLUNK_HOME/etc/system/local/serverclass.conf (%SPLUNK_HOME%\etc\system\local\serverclass.conf on Windows hosts). If serverclass.conf doesn't exist in local, copy it from $SPLUNK_HOME/etc/system/default/ (%SPLUNK_HOME%\etc\system\default\ on Windows) and then edit the copied file. Do not directly edit the serverclass.conf file in the default directory. For information about defining server classes, see Use serverclass.conf to define server classes in the Updating Splunk Enterprise Instances manual. For an example serverclass.conf file set up for HEC, see the Example serverclass.conf file section.
- Copy the splunk_httpinput directory. On the deployment server, copy the entire current $SPLUNK_HOME/etc/apps/splunk_httpinput/ directory to $SPLUNK_HOME/etc/deployment-apps/. (On Windows, copy the entire current %SPLUNK_HOME%\etc\apps\splunk_httpinput\ directory to %SPLUNK_HOME%\etc\deployment-apps\.) This is a one-time step that is necessary on the deployment server.
- Set options. On the deployment server, set options for your deployment clients. At the very least, you must set the useDeploymentServer option globally (in the [http] stanza) in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf (%SPLUNK_HOME%\etc\apps\splunk_httpinput\local\inputs.conf on Windows hosts). Setting this option causes Splunk Enterprise to use the $SPLUNK_HOME/etc/apps/splunk_httpinput/ directory (%SPLUNK_HOME%\etc\apps\splunk_httpinput\ on Windows) for storing and retrieving configuration. For more information on the available settings, see Configure HTTP Event Collector using .conf files.
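For example, the global [http] stanza on the deployment server might look like the following minimal sketch:

```ini
# inputs.conf under the splunk_httpinput app on the deployment server
[http]
disabled = 0
useDeploymentServer = 1
```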
- Enable deployment server. At the command line of the deployment server, execute the following to enable deployment server and restart Splunk Enterprise:
splunk enable deploy-server
splunk restart
- Prepare deployment clients. On each client, you must specify the deployment server it will connect to. Run the following at the command line on each client, where <deployment_server> is the hostname of the deployment server, and then restart Splunk Enterprise:

splunk set deploy-poll <deployment_server>:8089
splunk restart

For more information about configuring deployment clients, see Configure deployment clients in the Updating Splunk Enterprise Instances manual.
Once the deployment server is enabled and HTTP Event Collector is properly configured, all changes to HEC settings that are made on the deployment server using the UI, the CLI, or REST API are sent to the deployment clients. This configuration information includes:
- HTTP Event Collector default values (port, SSL, source type, index)
- SSL settings
- HTTP Event Collector tokens
For more information about distributed deployment, including advanced configuration options and general examples, see the Updating Splunk Enterprise Instances manual.
Example serverclass.conf file
A server class is a group of deployment clients that you can manage as a single unit. You assign the deployment clients you want to use in your HTTP Event Collector deployment to one common server class. Later, when you distribute HTTP Event Collector settings to the deployment clients, only members of that server class will receive the configuration settings.
You define server classes in the serverclass.conf file. Edit the serverclass.conf file on the deployment server, at $SPLUNK_HOME/etc/system/local/serverclass.conf (%SPLUNK_HOME%\etc\system\local\serverclass.conf on Windows). If serverclass.conf doesn't exist in local, copy it from $SPLUNK_HOME/etc/system/default/ (%SPLUNK_HOME%\etc\system\default\ on Windows) and then edit the copied file. Do not directly edit the serverclass.conf file in the default directory.

For information about defining server classes, see Use serverclass.conf to define server classes in the Updating Splunk Enterprise Instances manual.
The following example serverclass.conf file defines a server class "FWD2Local" for HTTP Event Collector.

[global]
whitelist.0=*
restartSplunkd=true
stateOnClient = enabled

[serverClass:FWD2Local]
whitelist.0=*

[serverClass:FWD2Local:app:splunk_httpinput]
The [global] stanza level defines settings that apply to all server classes. The [serverClass:<serverClassName>] stanza level defines settings that apply to an individual server class. You can have multiple server class stanzas. The [serverClass:<serverClassName>:app:<appName>] stanza level defines settings that apply to a specific app (<appName>) within an individual server class (<serverClassName>). For the purposes of deploying HTTP Event Collector settings, you can think of HEC as an app called "splunk_httpinput."

Within the stanzas, you can set client filtering attributes and several non-filtering attributes. In the above example, we've set the following attributes:
- whitelist.0=* This is the whitelist client filter. Setting whitelist.0 to * indicates that all deployment clients match the server class.
- restartSplunkd=true This non-filtering attribute specifies whether the client's splunkd process will restart after receiving an update.
- stateOnClient = enabled This non-filtering attribute specifies whether the deployment client receiving an app should enable or disable the app once it is installed. You can set stateOnClient to enabled, disabled, or noop.
For more information about available client filtering attributes, see the section Define filters through serverclass.conf in the topic Set up client filters in the Updating Splunk Enterprise Instances manual. To learn more about available non-filtering attributes, see the section What you can configure for a server class in the Use serverclass.conf to define server classes topic in the Updating Splunk Enterprise Instances manual.
This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.2.0