HTTP thread limit issues
When you run Splunk Enterprise in a way that uses many HTTP connections for Representational State Transfer (REST) operations (for example, as a deployment server in a large distributed environment), you might encounter undesirable behavior, including but not limited to errors in splunkd.log like the following:
03-19-2015 14:36:10.971 -0500 WARN HttpListener - Can't handle request for /services/broker/connect/8D0E0E2C-8EB5-40D2-9E8A-083F8E9B2516/ISP1065C/241655/windows-x64/8089, max thread limit for REST HTTP server is 6008, threads already in use is 6008
This error occurs because, beginning with Splunk Enterprise 6.0, the software limits the number of REST HTTP connections an instance uses to prevent service failure caused by resource exhaustion.
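If you want to watch for this condition programmatically, the warning can be matched with a simple regular expression. The following is a rough sketch (the sample log line and field names are my own, not a Splunk-provided tool):

```python
# Match the HttpListener thread-limit warning and extract the counts.
import re

pattern = re.compile(
    r"WARN HttpListener - Can't handle request .* "
    r"max thread limit for REST HTTP server is (\d+), "
    r"threads already in use is (\d+)"
)

# Sample line, abbreviated from the message shown above.
line = ("03-19-2015 14:36:10.971 -0500 WARN HttpListener - Can't handle "
        "request for /services/broker/connect/..., max thread limit for "
        "REST HTTP server is 6008, threads already in use is 6008")

m = pattern.search(line)
if m:
    limit, in_use = map(int, m.groups())
    print(f"limit={limit}, in use={in_use}")  # limit=6008, in use=6008
```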
How Splunk Enterprise calculates threads and sockets for REST HTTP operations
Splunk Enterprise needs threads and file descriptors to perform REST HTTP operations. Threads let the processes perform tasks, and sockets let the processes communicate with the network. If Splunk Enterprise runs out of either HTTP sockets or threads, it can't complete REST calls to its backend and any such calls fail. Splunk Enterprise thus reserves threads and file descriptors to use for these services.
The calculations it makes are as follows:
When it starts, Splunk Enterprise determines the amount of memory in the host, in bytes. By default, it divides this number by 262,144 (which is the default stack size) to get the total number of available threads. It then divides the result by three. This final number is the number of threads available for REST HTTP operations.
For example, if the system has 16GB of memory, the calculation is:
17179869184 bytes / 262144 bytes (stack size) = 65536 total available threads
65536 total available threads / 3 = 21845.3333, or 21845 threads for REST HTTP
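The calculation above can be sketched in a few lines of Python. This is an illustration of the described arithmetic, not Splunk's actual source code:

```python
# Sketch of the REST HTTP thread budget described above.
DEFAULT_STACK_SIZE = 262144  # bytes, the default thread stack size


def rest_http_threads(total_memory_bytes: int,
                      stack_size: int = DEFAULT_STACK_SIZE) -> int:
    """Estimate the threads reserved for REST HTTP operations."""
    total_threads = total_memory_bytes // stack_size  # all available threads
    return total_threads // 3                         # one third go to REST HTTP


# The 16 GB host from the worked example:
print(rest_http_threads(17179869184))  # 21845
```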
It then checks the number of available file descriptors for the system, as configured by the ulimit command, and divides that number by three. The result is the number of file descriptors available for sockets for REST HTTP operations. For example, if the number of open file descriptors is 36000, then Splunk Enterprise reserves 12000 for sockets for REST HTTP operations.
The number of available file descriptors is separate from the number of threads. Both must be available before Splunk Enterprise can make REST calls.
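On a POSIX system you can read the same open-file limit that `ulimit -n` reports and apply the divide-by-three rule yourself. A minimal sketch, assuming Python's standard resource module (unavailable on Windows):

```python
# Read the process's open-file limit and estimate the socket budget
# for REST HTTP operations as described above.
import resource

soft_limit, _hard_limit = resource.getrlimit(resource.RLIMIT_NOFILE)
rest_http_sockets = soft_limit // 3
print(f"ulimit -n: {soft_limit}, sockets for REST HTTP: {rest_http_sockets}")
```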
Override automatic socket and thread configuration
You can override this automatic configuration by making changes to server.conf. Increasing the number of threads can increase the amount of memory that the Splunk Enterprise instance uses.
- In $SPLUNK_HOME/etc/system/local, create or edit server.conf.
- In the [httpServer] stanza, set the maxThreads attribute to specify the number of threads that Splunk Enterprise should use for REST HTTP operations.
- Set the maxSockets attribute to specify the number of sockets that should be available for REST HTTP operations.
- Save the file.
- Restart Splunk Enterprise. The changes take effect after the restart.
The following example sets the number of HTTP threads to 100000 and the number of sockets to 50000:
[httpServer]
maxThreads=100000
maxSockets=50000
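Splunk .conf files use INI-style stanzas, so the example above can be sanity-checked with an ordinary INI parser. A rough illustration (this is not a Splunk tool, just a way to confirm the stanza parses as intended):

```python
# Parse the [httpServer] stanza from the example above with the
# standard-library configparser.
import configparser

conf_text = """
[httpServer]
maxThreads=100000
maxSockets=50000
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)
print(parser.getint("httpServer", "maxThreads"))  # 100000
print(parser.getint("httpServer", "maxSockets"))  # 50000
```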
This documentation applies to the following versions of Splunk® Enterprise: 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.5.0, 6.5.1, 6.5.1612 (Splunk Cloud only), 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 7.0.0, 7.0.1, 7.0.2