Hardware performance tests and considerations
This page provides hardware performance test results for Splunk App for Stream version 6.5.0. Test results show CPU usage and memory usage of splunkd and streamfwd for HTTP and TCP traffic over a range of bandwidths, both with and without SSL encryption.
Test environment
- All tests are performed on servers with:
- CentOS 6.7 (64-bit).
- Dual Intel Xeon E5-2698 v3 CPUs (16 cores each at 2.3 GHz; 32 cores total).
- 64 GB RAM.
- Stream Forwarder 6.5.0.
HTTP 100K Response Test
- Traffic is generated using an Ixia PerfectStorm device:
- 1000 concurrent flows.
- Network neighborhood has 1000 client IPs, 1000 server IPs.
- A single superflow is used with one request and one 100K response.
- A single TCP connection is used for each superflow (no Keep-Alive).
- Default splunk_app_stream configuration.
- Default Stream Forwarder (streamfwd) configuration, except:
- Tests up to 1 Gbps:
- ProcessingThreads = 2
- One Intel 10 GB device
- Tests from 2-6 Gbps:
- ProcessingThreads = 8
- Two Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker).
- Tests for 6+ Gbps:
- ProcessingThreads = 16
- Four Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker).
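The ProcessingThreads values above are Stream Forwarder settings. A minimal sketch of how such an override might look in streamfwd.conf, assuming the conventional file location, a [streamfwd] stanza, and camelCase key naming (verify the path and key casing against your Stream version's configuration reference):

```ini
# Assumed location: $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/streamfwd.conf
[streamfwd]
# Number of packet-processing threads; the tests above used 2, 8, or 16
# depending on the target bandwidth
processingThreads = 16
```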
Results for HTTP 100K Response Test
Stat | 8 Mbps | 64 Mbps | 256 Mbps | 512 Mbps | 1 Gbps | 2 Gbps | 3 Gbps | 4 Gbps | 5 Gbps | 6 Gbps | 7 Gbps | 8 Gbps | 9 Gbps | 10 Gbps |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Events/s | 22 | 148 | 586 | 1171 | 2285 | 4576 | 6849 | 9133 | 11413 | 13695 | 15998 | 18317 | 20603 | 22848 |
CPU% (splunkd) | 1.6 | 2.8 | 3 | 2.6 | 2.8 | 2.8 | 2.9 | 2.8 | 3 | 2.9 | 3 | 3 | 2.8 | 3.5 |
Memory MB (splunkd) | 157 | 159 | 158 | 158 | 159 | 158 | 158 | 159 | 159 | 159 | 159 | 159 | 163 | 161 |
CPU% (streamfwd) | 7 | 10 | 24 | 34 | 60 | 156 | 225 | 313 | 395 | 427 | 871 | 884 | 1029 | 1121 |
Memory MB (streamfwd) | 166 | 169 | 174 | 183 | 193 | 367 | 381 | 394 | 409 | 429 | 973 | 1077 | 1129 | 1451 |
Drop Rate | 0+ | 0+ | 0+ | 0.01% | 0.01% | 0.03% | 0.07% | | | | | | | |
Note: Drop Rate = % of received packets that had to be dropped. 0+ indicates a non-zero rate below 0.01%, which rounds to zero.
HTTP 25K Response Test
- Traffic is generated using an Ixia PerfectStorm device:
- 1000 concurrent flows.
- Network neighborhood has 1000 client IPs, 1000 server IPs.
- A single superflow is used with one request and one 25K response.
- A single TCP connection is used for each superflow (no Keep-Alive).
- Default splunk_app_stream configuration.
- Default Stream Forwarder (streamfwd) configuration, except:
- Tests up to 1 Gbps:
- ProcessingThreads = 2
- One Intel 10 GB device
- Tests for 2 Gbps:
- ProcessingThreads = 8
- Two Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker)
- Tests for 3-5 Gbps:
- ProcessingThreads = 16
- Four Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker)
- Tests for 6+ Gbps:
- Encountered errors and/or high Drop Rate.
Results for HTTP 25K Response Test
Stat | 8 Mbps | 64 Mbps | 256 Mbps | 512 Mbps | 1 Gbps | 2 Gbps | 3 Gbps | 4 Gbps | 5 Gbps |
---|---|---|---|---|---|---|---|---|---|
Events/s | 79 | 568 | 2308 | 4591 | 8971 | 17928 | 26958 | 35955 | 44906 |
CPU% (splunkd) | 2.4 | 2.7 | 2.6 | 2.5 | 2.6 | 2.8 | 2.9 | 3.2 | 2.8 |
Memory MB (splunkd) | 155 | 156 | 156 | 156 | 156 | 159 | 159 | 159 | 158 |
CPU% (streamfwd) | 5 | 14 | 35 | 61 | 97 | 273 | 790 | 1088 | 1526 |
Memory MB (streamfwd) | 166 | 169 | 173 | 180 | 189 | 344 | 777 | 838 | 922 |
Drop Rate | 0 | 0 | 0 | 0 | 0 | 0 | 0+ | 0+ | 0+ |
HTTP 100K Response (SSL) Test
- Traffic is generated using an Ixia PerfectStorm device:
- 4000 concurrent flows.
- Network neighborhood has 1000 client IPs, 1000 server IPs.
- A single superflow is used with five requests and five 100K responses (Keep-Alive with ratio of 1:5 connections:requests).
- Default splunk_app_stream configuration.
- Default Stream Forwarder (streamfwd) configuration, except:
- SessionKeyTimeout = 30
- Tests up to 1Gbps:
- ProcessingThreads = 2
- One Intel 10 GB device
- Tests for 2-4 Gbps:
- ProcessingThreads = 16
- Four Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker)
- Tests for 5+ Gbps:
- We were unable to generate realistic SSL tests above 4 Gbps on the Ixia device.
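The SessionKeyTimeout = 30 override used for the SSL test would sit alongside the thread setting in the Stream Forwarder configuration. A hedged sketch, with key names following this page (actual key casing may differ by version):

```ini
# Assumed streamfwd.conf fragment for the SSL test configuration;
# key names follow this page and may vary across Stream versions
[streamfwd]
processingThreads = 16
sessionKeyTimeout = 30
```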
Results for HTTP 100K Response (SSL) Test
Stat | 8 Mbps | 64 Mbps | 256 Mbps | 512 Mbps | 1 Gbps | 2 Gbps | 3 Gbps | 4 Gbps |
---|---|---|---|---|---|---|---|---|
Events/s | 20 | 110 | 357 | 702 | 1346 | 2666 | 4003 | 5330 |
CPU% (splunkd) | 1.5 | 2.4 | 2.8 | 2.8 | 2.6 | 2.8 | 3.0 | 3.1 |
Memory MB (splunkd) | 156 | 155 | 157 | 157 | 157 | 158 | 157 | 154 |
CPU% (streamfwd) | 43 | 23 | 52 | 88 | 147 | 576 | 869 | 11208 |
Memory MB (streamfwd) | 222 | 232 | 241 | 258 | 282 | 855 | 888 | 936 |
Drop Rate | 0 | 0 | 0 | 0 | 0 | 0+ | 0+ | 0+ |
HTTP 100K Response (TCP-only) Test
- Traffic is generated using an Ixia PerfectStorm device:
- 1000 concurrent flows.
- Network neighborhood has 1000 client IPs, 1000 server IPs.
- A single superflow is used with one request and one 100K response.
- A single TCP connection is used for each superflow (no Keep-Alive).
- Default splunk_app_stream configuration, except:
- The tcp stream is "enabled"; all other streams are "disabled."
- Default Stream Forwarder (streamfwd) configuration, except:
- Tests up to 1 Gbps:
- ProcessingThreads = 2
- One Intel 10 GB device
- Tests for 2 Gbps:
- ProcessingThreads = 8
- Two Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker).
- Tests for 3-6 Gbps:
- ProcessingThreads = 16
- Four Intel 10 GB devices (traffic is evenly load balanced using an Ixia packet broker).
- Tests for 6+ Gbps:
- Encountered errors and/or high Drop Rate.
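Restricting capture to the tcp stream, as in this test's splunk_app_stream configuration, is normally done from the app's Configure Streams page; each stream is represented by a JSON configuration document. A hypothetical sketch showing only the relevant field (the real schema contains many more fields and may differ by version):

```json
{
  "name": "tcp",
  "enabled": true
}
```

For this test, every other stream (http, dns, and so on) would carry `"enabled": false`.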
Results for HTTP 100K Response (TCP-only) Test
Stat | 8 Mbps | 64 Mbps | 256 Mbps | 512 Mbps | 1 Gbps | 2 Gbps | 3 Gbps | 4 Gbps | 5 Gbps | 6 Gbps |
---|---|---|---|---|---|---|---|---|---|---|
Events/s | 11 | 73 | 294 | 585 | 1142 | 2285 | 3443 | 4586 | 5717 | 6865 |
CPU% (splunkd) | 0.6 | 1.8 | 4.8 | 9.4 | 15.5 | 30.7 | 44.6 | 61.4 | 86.4 | 99.7 |
Memory MB (splunkd) | 157 | 157 | 157 | 156 | 158 | 158 | 151 | 153 | 153 | 152 |
CPU% (streamfwd) | 4 | 9 | 20 | 32 | 52 | 134 | 264 | 351 | 451 | 596 |
Memory MB (streamfwd) | 159 | 161 | 164 | 170 | 178 | 355 | 833 | 854 | 875 | 960 |
Drop Rate | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0+ | 0.01% |
Combined CPU usage of streamfwd and splunkd
The following chart shows combined CPU usage (single core %) of streamfwd and splunkd for 0-1 Gbps traffic for each of the four configuration+workload combinations that we ran.
The following chart shows combined CPU usage (single core %) of streamfwd and splunkd for 1-10 Gbps traffic for each of the four configuration+workload combinations that we ran.
Combined memory usage of streamfwd and splunkd
The following chart shows combined memory usage (MB) of streamfwd and splunkd for 0-1 Gbps traffic for each of the four configuration+workload combinations that we ran.
The following chart shows combined memory usage (MB) of streamfwd and splunkd for 1-10 Gbps traffic for each of the four configuration+workload combinations that we ran.
Note: Hardware performance test results represent maximum values.
This documentation applies to the following versions of Splunk Stream™: 6.5.0