Splunk® Enterprise

Admin Manual


limits.conf

The following are the spec and example files for limits.conf.

limits.conf.spec

#   Version 6.5.7
#
# This file contains possible attribute/value pairs for configuring limits for
# search commands.
#
# There is a limits.conf in $SPLUNK_HOME/etc/system/default/.  To set custom
# configurations, place a limits.conf in $SPLUNK_HOME/etc/system/local/. For
# examples, see limits.conf.example. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the
# documentation located at
# http://docs.splunk.com/Documentation/Splunk/latest/Admin/Aboutconfigurationfiles
#
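# For example, a minimal custom limits.conf in $SPLUNK_HOME/etc/system/local/
# might look like the following (values are illustrative only, not
# recommendations):
#
#   [subsearch]
#   maxtime = 120
#
#   [lookup]
#   max_memtable_bytes = 20000000
#
# Restart Splunk after saving the file for the settings to take effect.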

# limits.conf settings and DISTRIBUTED SEARCH
#   Unlike most settings which affect searches, limits.conf settings are not
#   provided by the search head to be used by the search peers.  This means
#   that if you need to alter search-affecting limits in a distributed
#   environment, typically you will need to modify these settings on the
#   relevant peers and search head for consistent results.

GLOBAL SETTINGS


# Use the [default] stanza to define any global settings.
#   * You can also define global settings outside of any stanza, at the top of
#     the file.
#   * Each conf file should have at most one default stanza. If there are
#     multiple default stanzas, attributes are combined. In the case of
#     multiple definitions of the same attribute, the last definition in the
#     file wins.
#   * If an attribute is defined at both the global level and in a specific
#     stanza, the value in the specific stanza takes precedence.

# CAUTION: Do not alter the settings in limits.conf unless you know what you
#          are doing.  Improperly configured limits may result in splunkd
#          crashes and/or memory overuse.

* Each stanza controls different parameters of search commands.

[default]

max_mem_usage_mb = <non-negative integer>
* Limits the amount of RAM that a batch of events or results can use in the
  memory of a search process.
* Operates on an estimation of memory use which is not exact.
* The limitation is applied in an unusual way; if the number of results or
  events exceeds maxresults, AND the estimated memory exceeds this limit, the
  data is spilled to disk.
* As a general rule, this means that lower limits cause a search to use more
  disk I/O and less RAM, and to run somewhat slower, but the search should
  typically produce the same results in the end.
* This limit is currently applied to a number of search processors, but not
  all of them. More will likely be added as it proves necessary.
* The number is thus effectively a ceiling on batch size for many components of
  search for all searches run on this system.
* A value of 0 specifies that the size is unbounded. In this case searches
  may be allowed to grow to arbitrary sizes.

* The 'mvexpand' command uses this value in a different way.
  * mvexpand has no combined logic with maxresults
  * If the memory limit is exceeded, output is truncated, not spilled to disk.

* The 'stats' processor uses this value in the following way.
  * If the estimated memory usage exceeds the specified limit, the results
    are spilled to disk.
  * If '0' is specified, the results are spilled to disk when the number of
    results exceeds the maxresultrows setting.

* This value is not exact. The estimation can deviate by an order of magnitude
  or so to both the smaller and larger sides.
* Defaults to 200 (MB)
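* For example (illustrative values only): lowering this ceiling in a local
  limits.conf makes processors such as stats spill to disk sooner, trading
  RAM for disk I/O:
    [default]
    max_mem_usage_mb = 100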

min_batch_size_bytes = <integer>
* Specifies the size of the file/tar after which the file is handled by the
  batch reader instead of the tailing processor.
* Global parameter, cannot be configured per input.
* Note that configuring this to a very small value could lead to backing up
  of jobs at the tailing processor.
* Defaults to 20 MB.

DelayArchiveProcessorShutdown = <bool>
* Specifies whether, during splunkd shutdown, the archive processor should
  finish processing the archive file currently being processed.
* If set to false, the archive processor abandons further processing of the
  archive file and will process it again from the start on the next startup.
* If set to true, the archive processor will complete processing of the
  archive file. Shutdown will be delayed.
* Defaults to false.

[searchresults]

* This stanza controls search results for a variety of Splunk search commands.

maxresultrows = <integer>
* Configures the maximum number of events that are generated by search
  commands which grow the size of your result set (such as multikv) or that
  create events. Other search commands are explicitly controlled in specific
  stanzas below.
* This limit should not exceed 50000. Setting this limit higher than 50000
  causes instability.
* Defaults to 50000.

tocsv_maxretry = <integer>
* Maximum number of times to retry the atomic write operation.
* 1 = no retries.
* Defaults to 5.

tocsv_retryperiod_ms = <integer>
* Period of time to wait before each retry.
* Defaults to 500.

* These settings control logging of error messages to info.csv.
  All messages will be logged to search.log regardless of these settings.

compression_level = <integer>
* Compression level to use when writing search results to .csv.gz files
* Defaults to 1

[search_info]

* This stanza controls logging of messages to the info.csv file.
* Messages logged to info.csv are available to REST API clients and the
  Splunk UI. Limiting the messages added to info.csv means that these
  messages will not be available in the UI and/or the REST API.

max_infocsv_messages  = <positive integer>
* If more than max_infocsv_messages log entries are generated, additional
  entries will not be logged in info.csv. All entries will still be logged in
  search.log.

infocsv_log_level = [DEBUG|INFO|WARN|ERROR]
* Limits the messages which are added to info.csv to the stated level
  and above.
* For example, if infocsv_log_level is WARN, messages of type WARN and
  higher will be added to info.csv.
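* For example (hypothetical values), to keep info.csv small by logging only
  warnings and errors, and at most 20 of them:
    [search_info]
    infocsv_log_level = WARN
    max_infocsv_messages = 20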

show_warn_on_filtered_indexes = <boolean>
* Log warnings if a search returns no results because the user has no
  permissions to search on the queried indexes.

filteredindexes_log_level = [DEBUG|INFO|WARN|ERROR] 
* Log level of messages logged when a search returns no results because the
  user has no permissions to search on the queried indexes.

[subsearch]

* This stanza controls subsearch results.
* NOTE: This stanza DOES NOT control subsearch results when a subsearch is
  called by commands such as join, append, or appendcols.
* Read more about subsearches in the online documentation:
  http://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutsubsearches

maxout = <integer>
* Maximum number of results to return from a subsearch.
* This value cannot be greater than or equal to 10500.
* Defaults to 10000.

maxtime = <integer>
* Maximum number of seconds to run a subsearch before finalizing
* Defaults to 60.

ttl = <integer>
* Time to cache a given subsearch's results, in seconds.
* Do not set this below 120 seconds.
* See definition in [search] ttl for more details on how the ttl is computed
* Defaults to 300.
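* For example (hypothetical values), to give slow subsearches more time
  before finalization and to cache their results longer:
    [subsearch]
    maxtime = 120
    ttl = 600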

[anomalousvalue]

maxresultrows = <integer>
* Configures the maximum number of events that can be present in memory at one
  time.
* Defaults to searchresults::maxresultrows (which is by default 50000).

maxvalues = <integer>
* Maximum number of distinct values for a field.
* Defaults to 100000.

maxvaluesize = <integer>
* Maximum size in bytes of any single value (truncated to this size if larger).
* Defaults to 1000.

[associate]

maxfields = <integer>
* Maximum number of fields to analyze.
* Defaults to 10000.

maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* Defaults to 10000.

maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 1000.

[autoregress]

maxp = <integer>
* Maximum valid period for auto regression
* Defaults to 10000.

maxrange = <integer>
* Maximum magnitude of range for p values when given a range.
* Defaults to 1000.

[concurrency]

max_count = <integer>
* Maximum number of detected concurrencies.
* Defaults to 10000000

[ctable]

* This stanza controls the contingency, ctable, and counttable commands.

maxvalues = <integer>
* Maximum number of columns/rows to generate (the maximum number of distinct
  values for the row field and column field).
* Defaults to 1000.

[correlate]

maxfields = <integer>
* Maximum number of fields to correlate.
* Defaults to 1000.

[discretize]

* This stanza sets attributes for bin/bucket/discretize.

default_time_bins = <integer>
* When discretizing time for timechart or explicitly via bin, the default bins
  to use if no span or bins is specified.
* Defaults to 100

maxbins = <integer>
* Maximum number of buckets to discretize into.
* If maxbins is not specified or = 0, it defaults to
  searchresults::maxresultrows
* Defaults to 50000.

[export]

add_timestamp = <bool>
* Add an epoch time timestamp to JSON streaming output that reflects the
  time the results were generated/retrieved.
* Defaults to false

add_offset = <bool>
* Add an offset/row number to JSON streaming output
* Defaults to true

[extern]

perf_warn_limit = <integer>
* Warn when an external scripted command is applied to more than this many
  events.
* Set to 0 for no message (the message is always INFO level).
* Defaults to 10000

[inputcsv]

mkdir_max_retries = <integer>
* Maximum number of retries for creating a tmp directory (with random name as
  subdir of SPLUNK_HOME/var/run/splunk)
* Defaults to 100.

[indexpreview]

max_preview_bytes = <integer>
* Maximum number of bytes to read from each file during preview
* Defaults to 2000000 (2 MB)

max_results_perchunk = <integer>
* Maximum number of results to emit per call to preview data generator
* Defaults to 2500.

soft_preview_queue_size = <integer>
* Loosely-applied maximum on number of preview data objects held in memory
* Defaults to 100.

[join]

subsearch_maxout = <integer>
* Maximum result rows in output from subsearch to join against.
* Defaults to 50000.

subsearch_maxtime = <integer>
* Maximum search time (in seconds) before auto-finalization of subsearch.
* Defaults to 60

subsearch_timeout = <integer>
* Maximum time to wait for subsearch to fully finish (in seconds).
* Defaults to 120.

[kmeans]

maxdatapoints = <integer>
* Maximum data points to do kmeans clusterings for.
* Defaults to 100000000.

maxkvalue = <integer>
* Maximum number of clusters to attempt to solve for.
* Defaults to 1000.

maxkrange = <integer>
* Maximum number of k values to iterate over when specifying a range.
* Defaults to 100.

[kv]

maxcols = <integer>
* When non-zero, the point at which kv should stop creating new fields.
* Defaults to 512.

limit = <integer>
* Maximum number of keys auto kv can generate.
* Defaults to 100.

maxchars = <integer>
* Truncate _raw to this size and then do auto KV.
* Defaults to 10240 characters.

max_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that a key-value pair extractor
  will be allowed to take before warning. If the extractor exceeds this
  execution time on any event a warning will be issued
* Defaults to 1000.

avg_extractor_time = <integer>
* Maximum amount of CPU time, in milliseconds, that the average (over search
  results) execution time of a key-value pair extractor will be allowed to take
  before warning. Once the average becomes larger than this amount of time a
  warning will be issued
* Defaults to 500

[lookup]

max_memtable_bytes = <integer>
* Maximum size of static lookup file to use an in-memory index for.
* Defaults to 10000000 in bytes (10MB)
* Lookup files with size above max_memtable_bytes will be indexed on disk.
* A large value results in loading large lookup files in memory, leading to
  a bigger process memory footprint.
* Exercise caution when setting this parameter to arbitrarily high values!
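* For example (a sketch, not a recommendation): to keep a roughly 15MB
  lookup file indexed in memory rather than on disk, you might raise the
  threshold slightly, keeping the memory-footprint caution above in mind:
    [lookup]
    max_memtable_bytes = 20000000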

max_matches = <integer>
* maximum matches for a lookup
* range 1 - 1000
* Defaults to 1000

max_reverse_matches = <integer>
* maximum reverse lookup matches (for search expansion)
* Defaults to 50

batch_index_query = <bool>
* Should non-memory file lookups (files that are too large) use batched queries
  to possibly improve performance?
* Defaults to true

batch_response_limit = <integer>
* When doing batch requests, the maximum number of matches to retrieve.
  If more than this limit of matches would otherwise be retrieved, we fall
  back to non-batch mode matching.
* Defaults to 5000000

max_lookup_messages = <positive integer>
* If more than "max_lookup_messages" log entries are generated, additional
  entries will not be logged in info.csv. All entries will still be logged in
  search.log.

[metrics]

maxseries = <integer>
* The number of series to include in the per_x_thruput reports in metrics.log.
* Defaults to 10.

interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Defaults to 30.
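* For example (hypothetical values), to log splunkd metrics half as often
  but include more series per report:
    [metrics]
    interval = 60
    maxseries = 20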

[metrics:tcpin_connections]

aggregate_metrics = [true|false]
* For each splunktcp connection from a forwarder, Splunk logs metrics
  information every metrics interval.
* When there is a large number of forwarders connected to an indexer, the
  amount of information logged can take a lot of space in metrics.log. When
  set to true, Splunk aggregates information across each connection and
  reports only once per metrics interval.
* Defaults to false

suppress_derived_info = [true|false]
* For each forwarder connection, _tcp_Bps, _tcp_KBps, _tcp_avg_thruput, and
  _tcp_Kprocessed are logged in metrics.log.
* This information can be derived from kb. When set to true, the above
  derived info will not be emitted.
* Defaults to true

[rare]

maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows
* Defaults to 50000.

maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Defaults to 100000.

maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 1000.

[restapi]

maxresultrows = <integer>
* Maximum result rows to be returned by /events or /results getters from REST
  API.
* Defaults to 50000.

time_format_reject = <regular expression>
* HTTP parameters for time_format and output_time_format which match
  this regex will be rejected (blacklisted).
* The regex will be satisfied by a substring match anywhere in the parameter.
* Intended as defense-in-depth against XSS-style attacks against browser
  users by crafting specially encoded URLs for them to access splunkd.
* If unset, all parameter strings will be accepted.
* To disable this check entirely, set the value to empty.
  # Example of disabling: time_format_reject =
* Defaults to [<>!], which means that the less-than '<', greater-than '>',
  and exclamation point '!' are not allowed.

jobscontentmaxcount = <integer>
* Maximum length of a property in the contents dictionary of an entry from
  /jobs getter from REST API
* Value of 0 disables truncation
* Defaults to 0

[search_metrics]

debug_metrics = <bool>
* This indicates whether we should output more detailed search metrics for
  debugging.
* This will do things like break out where the time was spent by peer, and may
  add additional deeper levels of metrics.
* This is NOT related to "metrics.log" but to the "Execution Costs" and
  "Performance" fields in the Search inspector, or the count_map in the info.csv file.
* Defaults to false

[search]

summary_mode = [all|only|none]
* Controls whether precomputed summary data is used, if possible.
* all: use summary if possible, otherwise use raw data
* only: use summary if possible, otherwise do not use any data
* none: never use precomputed summary data
* Defaults to 'all'

result_queue_max_size = <integer>
* Controls the size of the search results queue in dispatch
* Default size is set to 100MB
* Use caution when changing this parameter.

use_bloomfilter = <bool>
* Control whether to use bloom filters to rule out buckets
* Default value set to true

max_id_length = <integer>
* Maximum length of custom search job id when spawned via REST API arg id=

ttl = <integer>
* How long search artifacts should be stored on disk once completed, in
  seconds. The ttl is computed relative to the modtime of status.csv of the job
  if such file exists or the modtime of the search job's artifact directory. If
  a job is being actively viewed in the Splunk UI then the modtime of
  status.csv is constantly updated such that the reaper does not remove the job
  from underneath.
* Defaults to 600, which is equivalent to 10 minutes.

failed_job_ttl = <integer>
* How long search artifacts should be stored on disk once failed, in
  seconds. The ttl is computed relative to the modtime of status.csv of the
  job if such file exists or the modtime of the search job's artifact
  directory. If a job is being actively viewed in the Splunk UI then the
  modtime of status.csv is constantly updated such that the reaper does not
  remove the job from underneath.
* Defaults to 86400, which is equivalent to 24 hours.

default_save_ttl = <integer>
* How long the ttl for a search artifact should be extended in response to
  the save control action, in seconds.  0 = indefinitely.
* Defaults to 604800 (1 week)

remote_ttl = <integer>
* How long artifacts from searches run on behalf of a search head should be
  stored on the indexer after completion, in seconds.
* Defaults to 600 (10 minutes)

status_buckets = <integer>
* The approximate maximum number of buckets to generate and maintain in the
  timeline.
* Defaults to 0, which means do not generate timeline information.

max_bucket_bytes = <integer>
* This setting has been deprecated and has no effect

max_count = <integer>
* The number of events that can be accessible in any given status bucket (when status_buckets = 0).
* The last accessible event in a call that takes a base and bounds.
* Defaults to 500000.
* Note: This value does not reflect the number of events displayed on the UI after the search is evaluated/computed.

max_events_per_bucket = <integer>
* For searches with status_buckets>0 this will limit the number of events
  retrieved per timeline bucket.
* Defaults to 1000 in code.

truncate_report = [1|0]
* Specifies whether or not to apply the max_count limit to report output.
* Defaults to false (0).

min_prefix_len = <integer>
* The minimum length of a prefix before a * to ask the index about.
* Defaults to 1.

cache_ttl = <integer>
* The length of time to persist search cache entries (in seconds).
* Defaults to 300.

max_results_perchunk = <integer>
* Maximum results per call to search (in dispatch), must be less than or equal
  to maxresultrows.
* Defaults to 2500

min_results_perchunk = <integer>
* Minimum results per call to search (in dispatch), must be less than or equal
  to max_results_perchunk.
* Defaults to 100

max_rawsize_perchunk = <integer>
* Maximum raw size of results per call to search (in dispatch).
* 0 = no limit.
* Defaults to 100000000 (100MB)
* Not affected by chunk_multiplier

target_time_perchunk = <integer>
* Target duration of a particular call to fetch search results in ms.
* Defaults to 2000

long_search_threshold = <integer>
* Time in seconds until a search is considered "long running".
* Defaults to 2

chunk_multiplier = <integer>
* max_results_perchunk, min_results_perchunk, and target_time_perchunk are
  multiplied by this for a long running search.
* Defaults to 5

min_freq = <number>
* Minimum frequency of a field required for inclusion in the /summary
  endpoint, as a fraction (>=0 and <=1).
* Default is 0.01 (1%)

reduce_freq = <integer>
* How often, in number of chunks, to attempt to reduce intermediate results
  (0 = never).
* Defaults to 10

reduce_duty_cycle = <number>
* The maximum time to spend doing reduce, as a fraction of total search time
* Must be > 0.0 and < 1.0
* Defaults to 0.25

preview_duty_cycle = <number>
* The maximum time to spend generating previews, as a fraction of total search time
* Must be > 0.0 and < 1.0
* Defaults to 0.25

min_preview_period = <integer>
* This is the minimum time in seconds required between previews, used to limit cases where
  the interval calculated using the preview_duty_cycle parameter is very small, indicating
  that previews should be run frequently.
* Defaults to 1.

max_preview_period = <integer>
* This is the maximum time, in seconds, between previews. Used with the preview interval that 
  is calculated with the preview_duty_cycle parameter. '0' indicates unlimited.
* Defaults to 0.

results_queue_min_size = <integer>
* The minimum size for the queue of results that will be kept from peers for
  processing on the search head.
* The queue will be the max of this and the number of peers providing results.
* Defaults to 10

dispatch_quota_retry = <integer>
* The maximum number of times to retry to dispatch a search when the quota has
  been reached.
* Defaults to 4

dispatch_quota_sleep_ms = <integer>
* Milliseconds between retrying to dispatch a search if a quota has been
  reached.
* Retries the given number of times, with each successive wait 2x longer than
  the previous.
* Defaults to 100

base_max_searches = <int>
* A constant to add to the maximum number of searches, computed as a multiplier
  of the CPUs.
* Defaults to 6

max_searches_per_cpu = <int>
* The maximum number of concurrent historical searches per CPU. The system-wide
  limit of historical searches is computed as:
  max_hist_searches =  max_searches_per_cpu x number_of_cpus + base_max_searches
* Note: the maximum number of real-time searches is computed as:
  max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Defaults to 1

max_rt_search_multiplier = <decimal number>
* A number by which the maximum number of historical searches is multiplied to
  determine the maximum number of concurrent real-time searches
* Note: the maximum number of real-time searches is computed as:
  max_rt_searches = max_rt_search_multiplier x max_hist_searches
* Defaults to 1
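* For example (illustrative arithmetic): on a 16-CPU search head with the
  defaults above, max_hist_searches = 1 x 16 + 6 = 22 concurrent historical
  searches, and max_rt_searches = 1 x 22 = 22 concurrent real-time searches.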

max_macro_depth = <int>
* Max recursion depth for macros.
* Considered a search exception if macro expansion doesn't stop after this many
  levels.
* Must be greater than or equal to 1.
* Default is 100

max_subsearch_depth = <int>
* Max recursion depth for subsearches.
* Considered a search exception if a subsearch doesn't stop after this many
  levels.

realtime_buffer = <int>
* Maximum number of accessible events to keep for real-time searches from
  Splunk Web.
* Acts as circular buffer once this limit is reached
* Must be greater than or equal to 1
* Default is 10000

stack_size = <int>
* The stack size (in bytes) of the thread executing the search.
* Defaults to 4194304  (4 MB)

status_cache_size = <int>
* The number of search job status data splunkd can cache in RAM. This cache
  improves performance of the jobs endpoint
* Defaults to 10000

timeline_freq = <timespan> or <ratio>
* Minimum amount of time between timeline commits.
* If specified as a number < 1 (and > 0), minimum time between commits is
  computed as a ratio of the amount of time that the search has been running.
* defaults to 0 seconds

preview_freq = <timespan> or <ratio>
* Minimum amount of time between results preview updates.
* If specified as a number < 1 (and > 0), minimum time between previews is
  computed as a ratio of the amount of time that the search has been running,
  or as a ratio of the length of the time window for real-time windowed
  searches.
* Defaults to ratio of 0.05

max_combiner_memevents = <int>
* Maximum size of in-memory buffer for search results combiner, in terms of
  number of events.
* Defaults to 50000 events.

replication_period_sec  = <int>
* The minimum amount of time in seconds between two successive bundle
  replications.
* Defaults to 60

replication_file_ttl = <int>
* The TTL (in seconds) of bundle replication tarballs, i.e. *.bundle files.
* Defaults to 600 (10m)

sync_bundle_replication = [0|1|auto]
* Flag indicating whether configuration file replication blocks searches or is
  run asynchronously
* When this flag is set to auto, Splunk uses asynchronous replication if
  and only if all the peers support async bundle replication; otherwise it
  falls back to synchronous replication.
* Defaults to auto

rr_min_sleep_ms = <int>
* Minimum time to sleep when reading results in round-robin mode when no data
  is available.
* Defaults to 10.

rr_max_sleep_ms = <int>
* Maximum time to sleep when reading results in round-robin mode when no data
  is available.
* Defaults to 1000

rr_sleep_factor = <int>
* If no data is available even after sleeping, increase the next sleep interval
  by this factor.
* defaults to 2

fieldstats_update_freq = <number>
* How often to update the field summary statistics, as a ratio to the elapsed
  run time so far.
* Smaller values mean more frequent updates.  0 means update as frequently
  as possible.
* Defaults to 0

fieldstats_update_maxperiod = <number>
* Maximum period for updating field summary statistics in seconds
* 0 means no maximum, completely dictated by current_run_time *
  fieldstats_update_freq
* Fractional seconds are allowed.
* defaults to 60

timeline_events_preview = <bool>
* Set timeline_events_preview to "true" to display events in the Search app as 
  the events are scanned, including events that are in-memory and not yet committed, 
  instead of waiting until all of the events are scanned to see the search results.
* When set to "true", you will not be able to expand the event information in the 
  event viewer until events are committed.
* When set to "false", events are displayed only after the events are committed
  (the events are written to the disk).
* This setting might increase disk usage to temporarily save uncommitted events while
  the search is running. Additionally, search performance might be impacted.
* Defaults to false.

remote_timeline = [0|1]
* If true, allows the timeline to be computed remotely to enable better
  map/reduce scalability.
* defaults to true (1).

remote_timeline_prefetch = <int>
* Each peer should proactively send at most this many full events at the
  beginning
* Defaults to 100.

remote_timeline_parallel_fetch = <bool>
* Connect to multiple peers at the same time when fetching remote events?
* Defaults to true

remote_timeline_min_peers = <int>
* Minimum search peers for enabling remote computation of timelines.
* Defaults to 1.

remote_timeline_fetchall = [0|1]
* If set to true (1), Splunk fetches all events accessible through the timeline from the remote
  peers before the job is considered done.
  * Fetching of all events may delay the finalization of some searches, typically those running in
    verbose mode from the main Search view in Splunk Web.
  * This potential performance impact can be mitigated by lowering the max_events_per_bucket
    settings.
* If set to false (0), the search peers may not ship all matching events to the search-head,
  particularly if there is a very large number of them.
   * Skipping the complete fetching of events back to the search head will result in prompt search
     finalization.
   * Some events may not be available to browse in the UI.
* This setting does *not* affect the accuracy of search results computed by reporting searches.
* Defaults to true (1).

remote_timeline_thread = [0|1]
* If true, uses a separate thread to read the full events from remote peers if
  remote_timeline is used and remote_timeline_fetchall is set to true. (Has no
  effect if remote_timeline or remote_timeline_fetchall is false).
* Defaults to true (1).

remote_timeline_max_count = <int>
* Maximum number of events to be stored per timeline bucket on each search
  peer.
* Defaults to 10000

remote_timeline_max_size_mb = <int>
* Maximum size of disk that remote timeline events should take on each peer
* If the limit is reached, a DEBUG message is emitted (and should be visible
  in the job inspector/messages).
* Defaults to 100

remote_timeline_touchperiod = <number>
* How often to touch remote timeline artifacts to keep them from being deleted
  by the remote peer, while a search is running.
* In seconds, 0 means never.  Fractional seconds are allowed.
* Defaults to 300.

remote_timeline_connection_timeout = <int>
* Connection timeout in seconds for fetching events processed by remote peer
  timeliner.
* Defaults to 5.

remote_timeline_send_timeout = <int>
* Send timeout in seconds for fetching events processed by remote peer
  timeliner.
* Defaults to 10.

remote_timeline_receive_timeout = <int>
* Receive timeout in seconds for fetching events processed by remote peer
  timeliner.
* Defaults to 10.

remote_event_download_initialize_pool = <int>
* Size of thread pool responsible for initiating the remote event fetch.
* Defaults to 5.

remote_event_download_finalize_pool = <int>
* Size of thread pool responsible for writing out the full remote events.
* Defaults to 5.

remote_event_download_local_pool = <int>
* Size of thread pool responsible for reading full local events.
* Defaults to 5.

default_allow_queue = [0|1]
* Unless otherwise specified via REST API argument, determines whether an
  async job-spawning request should be queued on quota violation (if not,
  an HTTP "server too busy" error is returned).
* Defaults to true (1).

queued_job_check_freq = <number>
* Frequency with which to check queued jobs to see if they can be started, in
  seconds
* Fractional seconds are allowed.
* Defaults to 1.

enable_history = <bool>
* Enable keeping track of searches?
* Defaults to true

max_history_length = <int>
* Max number of searches to store in history (per user/app)
* Defaults to 1000

allow_inexact_metasearch = <bool>
* Should a metasearch that is inexact be allowed?  If so, an INFO message
  will be added to the inexact metasearches.  If not, a fatal exception
  will occur at search parsing time.
* Defaults to false

indexed_as_exact_metasearch = <bool>
* Should we allow a metasearch to treat <field>=<value> the same as
  <field>::<value> if <field> is an indexed field?  Allowing this permits a
  larger set of metasearches when allow_inexact_metasearch is set to false.
  However, some of these searches may be inconsistent with the results of
  doing a normal search.
* Defaults to false

dispatch_dir_warning_size = <int>
* The number of jobs in the dispatch directory at which to issue a bulletin
  message warning that performance could be impacted.
* Defaults to 5000

allow_reuse = <bool>
* Allow normally executed historical searches to be implicitly re-used for
  newer requests if the newer request allows it?
* Defaults to true

track_indextime_range = <bool>
* Track the _indextime range of returned search results?
* Defaults to true

reuse_map_maxsize = <int>
* Maximum number of jobs to store in the reuse map
* Defaults to 1000

status_period_ms = <int>
* The minimum amount of time, in milliseconds, between successive
  status/info.csv file updates
* This ensures search does not spend significant time just updating these
  files.
  * This is typically important for very large number of search peers.
  * It could also be important for extremely rapid responses from search peers,
    when the search peers have very little work to do.
* Defaults to 1000 (1 second)

search_process_mode = auto | traditional | debug <debugging-command> [debugging-args ...]
* Control how search processes are started
* When set to "traditional", Splunk initializes each search process completely from scratch
* When set to a string beginning with "debug", Splunk routes searches
  through the given command, allowing the user to "plug in" debugging tools
    * The <debugging-command> must reside in one of
        * $SPLUNK_HOME/etc/system/bin/
        * $SPLUNK_HOME/etc/apps/$YOUR_APP/bin/
        * $SPLUNK_HOME/bin/scripts/
    * Splunk will pass <debugging-args>, followed by the search command it
      would normally run, to <debugging-command>
    * For example, given:
        search_process_mode = debug $SPLUNK_HOME/bin/scripts/search-debugger.sh 5
      Splunk will run a command that looks generally like:
        $SPLUNK_HOME/bin/scripts/search-debugger.sh 5 splunkd search --id=... --maxbuckets=... --ttl=... [...]
* Defaults to "auto"

max_searches_per_process = <int>
* On UNIX we can run more than one search per process; after a search
  completes, its process can wait for another search to be started and
  let itself be reused
* When set to 1 (or 0), we'll never reuse a process
* When set to a negative value, we won't limit the number of searches a
  process can run
* When set to a number larger than one, we will let the process run
  up to that many searches before exiting
* Defaults to 500
* Has no effect on Windows, or if search_process_mode is not "auto"
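* For example (illustrative only): to rule out process reuse while
  troubleshooting, you might disable it entirely:
    [search]
    max_searches_per_process = 1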

max_time_per_process = <number>
* When running more than one search per process, this limits how much
  time a process can accumulate running searches before it must exit
* When set to a negative value, we won't limit the amount of time a
  search process can spend running
* Defaults to 300.0 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1
* NOTE: a search can run longer than this without being terminated, this
  ONLY prevents that process from being used to run more searches afterwards.

process_max_age = <number>
* When running more than one search per process, don't reuse a process
  if it is older than this number of seconds
* When set to a negative value, we won't limit the age of a search process
* This is different than "max_time_per_process" because it includes time
  the process spent idle
* Defaults to 7200.0 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1
* NOTE: a search can run longer than this without being terminated, this
  ONLY prevents that process from being used to run more searches afterwards.

idle_process_reaper_period = <number>
* When allowing more than one search to run per process, we'll periodically
  check if we have too many idle search processes
* Defaults to 30.0 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

process_min_age_before_user_change = <number>
* When allowing more than one search to run per process, we'll try to reuse
  an idle process that last ran a search by the same Splunk user
* If no such idle process exists, we'll try using a process from a
  different user, but only if it has been idle for at least this long
* When set to zero, we'll always allow an idle process to be reused by
  any Splunk user
* When set to a negative value, we'll only allow a search process to be
  used by same Splunk user each time
* Defaults to 4.0 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

launcher_threads = <int>
* When allowing more than one search to run per process, we'll run this many
  server threads to manage those processes
* Defaults to -1 (meaning pick a value automatically)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

launcher_max_idle_checks = <int>
* When allowing more than one search to run per process, we'll try to find
  an appropriate idle process to use
* This controls how many idle processes we will inspect before giving up
  and starting a new one
* When set to a negative value, we'll inspect every eligible idle process
* Defaults to 5
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

max_old_bundle_idle_time = <number>
* When reaping idle search processes, allow one to be reaped if it is not
  configured with the most recent configuration bundle, and its bundle
  hasn't been used in at least this long
* When set to a negative value, we won't reap idle processes sooner than
  normal if they might be using an older configuration bundle
* Defaults to 5.0 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

idle_process_cache_timeout = <number>
* When a search process is allowed to run more than one search, it can
  cache some data between searches
* If a search process is idle for this long, take the opportunity to purge
  some older data from these caches
* When set to a negative value, we won't do any purging based on how long
  the search process is idle
* When set to zero, we'll always purge no matter if we're kept idle or not
* Defaults to 0.5 (seconds)
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

idle_process_cache_search_count = <int>
* When a search process is allowed to run more than one search, it can
  cache some data between searches
* If a search process has run this many searches without purging older
  data from the cache, do it even if the "idle_process_cache_timeout" has
  not been hit
* When set to a negative value, we won't purge no matter how many
  searches are run
* Defaults to 8
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

idle_process_regex_cache_hiwater = <int>
* When a search process is allowed to run more than one search, it can
  cache compiled regex artifacts
* If that cache grows to larger than this number of entries we'll try
  purging some older ones
* Normally the above "idle_process_cache_*" settings will take care of
  keeping the cache a reasonable size.  This setting is to prevent the
  cache from growing extremely large during a single large search
* When set to a negative value, we won't purge this cache based on its size
* Defaults to 2500
* Has no effect on Windows, if search_process_mode is not "auto", or
  if max_searches_per_process is set to 0 or 1

fetch_remote_search_log = [enabled|disabledSavedSearches|disabled]
* enabled: all remote search logs will be downloaded, except for oneshot
  search logs
* disabledSavedSearches: download all remote logs other than saved search logs
  and oneshot search logs
* disabled: irrespective of the search type all remote search log download
  functionality will be disabled
* Defaults to disabledSavedSearches
* The previous values [true|false] are still supported but not recommended
  for use
* The previous value of true maps to the current value of enabled
* The previous value of false maps to the current value of disabled
* Remote search logs are downloaded synchronously. This means that the time
  spent downloading the logs increases the total run time of the search process.
  This should not, however, affect user experience, because the search is
  finalized before the logs are downloaded, so the user can start to consume
  search results while logs are still downloading.

load_remote_bundles = <bool>
* On a search peer, allow remote (search head) bundles to be loaded in splunkd.
* Defaults to false.

use_dispatchtmp_dir = <bool>
* Whether to use the dispatchtmp directory for temporary search time files
  (write temporary files to a different directory from a job's dispatch
  directory).
* Temp files would be written to $SPLUNK_HOME/var/run/splunk/dispatchtmp/<sid>/
* In search head pooling, performance can be improved by mounting
  dispatchtmp on the local file system.
* Defaults to true if search head pooling is enabled, false otherwise

check_splunkd_period = <number>
* Amount of time, in seconds, that determines how frequently the search
  process (when running a real-time search) checks whether its parent
  process (splunkd) is running or not.
* Fractional seconds are allowed.
* Defaults to 60

allow_batch_mode = <bool>
* Whether or not to allow the use of batch mode which searches in disk based
  batches in a time insensitive manner.
* In distributed search environments, this setting is used on the search head.
* Defaults to true

batch_search_max_index_values = <int>
* When using batch mode this limits the number of event entries read from
  the index file. These entries are small, approximately 72 bytes each.
  However, batch mode is more efficient when it can read more entries at
  once.
* Setting this value to a smaller number can lead to slower search performance.
* A balance needs to be struck between more efficient searching in batch
  mode and running out of memory on the system with concurrently running
  searches.
* Defaults to 10000000


* These settings control the periodicity of retries to search peers in the
  event of failure (connection errors, and others). An interval applies
  between a failure and the first retry, as well as between successive
  retries in the event of further failures.

batch_retry_min_interval = <int>
* When batch mode attempts to retry the search on a peer that failed, wait
  at least this many seconds.
* Defaults to 5.

batch_retry_max_interval = <int>
* When batch mode attempts to retry the search on a peer that failed, wait
  at most this many seconds.
* Defaults to 300.

batch_retry_scaling = <double>
* After a retry attempt fails increase the time to wait before trying again by
  this scaling factor (Value should be > 1.0)
* Default 1.5
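* For example (illustrative arithmetic): with the defaults above, successive
  retry waits grow as 5, 7.5, 11.25, 16.875, ... seconds, each wait 1.5
  times the previous, capped at batch_retry_max_interval (300 seconds).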

batch_wait_after_end = <int>
* Batch mode considers the search ended (finished) when all peers without
  communication failure have explicitly indicated that they are complete;
  e.g., have delivered the complete answer.  After the search is at an end,
  batch mode will continue to retry with lost-connection peers for this
  many seconds.
* Default 900

batch_search_max_pipeline = <int>
* Controls the number of search pipelines launched at the indexer during batch search.
* Default value is set to one pipeline.
* Increasing the number of search pipelines should help improve search
  performance, but there will be an increase in thread and memory usage.

batch_search_max_results_aggregator_queue_size = <int>
* Controls the size of the search results queue to which all the search pipelines dump the processed search results.
* Default size is set to 100MB.
* Increasing the size can lead to performance gain, whereas decreasing it
  can reduce search performance.
* Do not set this parameter to zero.

batch_search_max_serialized_results_queue_size = <int>
* Controls the size of the serialized results queue from which the serialized search results are transmitted.
* Default size is set to 100MB.
* Increasing the size can lead to performance gain, whereas decreasing it
  can reduce search performance.
* Do not set this parameter to zero.

write_multifile_results_out = <bool>
* At the end of the search, if results are in multiple files, write out the
  multiple files to results_dir directory, under the search results directory.
* This will speed up post-processing search, since the results will already be
  split into appropriate size files.
* Default true

enable_cumulative_quota = <bool>
* Whether to enforce cumulative role based quotas
* Default false

remote_reduce_limit = <unsigned long>
* The number of results processed by a streaming search before we force a reduce
* Note: this option applies only if the search is run with --runReduce=true
  (currently only Hunk does this)
* Note: a value of 0 is interpreted as unlimited
* Defaults to: 1000000

max_workers_searchparser = <int>
* The number of worker threads used to process search results when using
  the round-robin policy.
* default 5

max_chunk_queue_size = <int>
* The maximum size of the chunk queue
* default 10000000

max_tolerable_skew = <positive integer>
* Absolute value of the largest time skew in seconds that we will tolerate
  between the native clock on the search head and the native clock on the
  peer (independent of time zone).
* If this time skew is exceeded we will log a warning. This estimate is
  approximate and tries to account for network delays.

addpeer_skew_limit = <positive integer>
* Absolute value of the largest time skew in seconds that is allowed when configuring
  a search peer from a search head, independent of time.
* If the difference in time (skew) between the search head and the peer is greater
  than this limit, the search peer will not be added.
* This is only relevant to manually added peers; currently this setting has no effect
  upon index cluster search peers.

unified_search = <bool>
* Turns on/off unified search for hunk archiving, defaults to false if not
  specified.

enable_memory_tracker = <bool>
* If the memory tracker is disabled, the search is not terminated even if
  it exceeds the memory limit.
* Must be set to <true> if you want to enable
  search_process_memory_usage_threshold or
  search_process_memory_usage_percentage_threshold.
* Defaults to false.

search_process_memory_usage_threshold = <double>
* To be active, this setting requires setting: enable_memory_tracker = true
* Signifies the maximum memory in MB the search process can consume in RAM.
* Search processes violating the threshold will be terminated.
* If the value is set to zero, then splunk search processes are allowed
  to grow unbounded in terms of in memory usage.  
* The default value is set to 4000MB or 4GB.

search_process_memory_usage_percentage_threshold = <float>
* To be active, this setting requires setting: enable_memory_tracker = true
* Signifies the percentage of the total memory the search process is entitled to consume.
* Any time the search process violates the threshold percentage the process will be brought down.
* If the value is set to zero, then splunk search processes are allowed to grow unbounded 
  in terms of percentage memory usage.  
* The default value is set to 25%.
* Any number set larger than 100 or less than 0 will be discarded and the default value will be used.
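* For example (hypothetical values), to terminate any search process that
  consumes more than 2GB of RAM:
    [search]
    enable_memory_tracker = true
    search_process_memory_usage_threshold = 2000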

enable_datamodel_meval = <bool>
* Enable concatenation of successively occurring evals into a single
  comma separated eval during generation of datamodel searches.
* default true

do_not_use_summaries = <bool>
* Do not use this setting without working in tandem with Splunk support.
* This setting is a very narrow subset of summary_mode=none. When set to true, this
  setting disables some functionality that is necessary for report acceleration.
  In particular, when set to true, search processes will no longer query the main
  splunkd's /admin/summarization endpoint for report acceleration summary ids.
* In certain narrow use-cases this may improve performance if report acceleration
  (savedsearches.conf:auto_summarize) is not in use by lowering the main splunkd's
  process overhead.
* Defaults to false.

force_saved_search_dispatch_as_user = <bool>
* Specifies whether to overwrite the 'dispatchAs' value.
* If set to 'true', the 'dispatchAs' value is overwritten by 'user' regardless
  of the 'user | owner' value in the savedsearches.conf file.
* If set to 'false', the value in the savedsearches.conf file is used.
* You may want to set this to effectively disable dispatchAs = owner for
  the entire install, if that more closely aligns with your security goals.
* Defaults to false.

auto_cancel_after_pause = <integer>
* Specifies the amount of time, in seconds, that a search must be paused before
  the search is automatically cancelled.
* If set to 0, a paused search is never automatically cancelled.
* Default: 0

-- Unsupported [search] settings: --

enable_status_cache = <bool>
* This is not a user tunable setting.  Do not use this setting without
  working in tandem with Splunk personnel.  This setting is not tested at
  non-default.
* This controls whether the status cache is used, which caches information
  about search jobs (and job artifacts) in memory in main splunkd.
* Normally this caching is enabled and assists performance. However, when
  using Search Head Pooling, artifacts in the shared storage location will
  be changed by other search heads, so this caching is disabled.
* Explicit requests to jobs endpoints, e.g. /services/search/jobs/<sid>,
  are always satisfied from disk, regardless of this setting.
* Defaults to true; except in Search Head Pooling environments where it
  defaults to false.

status_cache_in_memory_ttl = <positive integer>
* This setting has no effect unless search head pooling is enabled, AND
  enable_status_cache has been set to true.
* This is not a user tunable setting.  Do not use this setting without working
  in tandem with Splunk personnel. This setting is not tested at non-default.
* If set, controls the number of milliseconds which a status cache entry may be
  used before it expires.
* Defaults to 60000, or 60 seconds.

[realtime]

# Default options for indexer support of real-time searches
# These can all be overridden for a single search via REST API arguments

local_connect_timeout = <int>
* Connection timeout for an indexer's search process when connecting to that
  indexer's splunkd (in seconds)
* Defaults to 5

local_send_timeout = <int>
* Send timeout for an indexer's search process when connecting to that
  indexer's splunkd (in seconds)
* Defaults to 5

local_receive_timeout = <int>
* Receive timeout for an indexer's search process when connecting to that
  indexer's splunkd (in seconds)
* Defaults to 5

queue_size = <int>
* Size of queue for each real-time search (must be >0).
* Defaults to 10000

blocking = [0|1]
* Specifies whether the indexer should block if a queue is full.
* Defaults to false

max_blocking_secs = <int>
* Maximum time to block if the queue is full (meaningless if blocking = false)
* 0 means no limit
* Default to 60

indexfilter = [0|1]
* Specifies whether the indexer should prefilter events for efficiency.
* Defaults to true (1).

default_backfill = <bool>
* Specifies if windowed real-time searches should backfill events
* Defaults to true

enforce_time_order = <bool>
* Specifies if real-time searches should ensure that events are sorted in
  ascending time order (the UI will automatically reverse the order that it
  displays events for real-time searches, so in effect the latest events
  will be first)
* Defaults to true

disk_usage_update_period = <number>
* Specifies how frequently (in seconds) the search process should estimate
  the artifact disk usage.
* The quota for the amount of disk space that a search job can use is
  controlled by the 'srchDiskQuota' setting in authorize.conf.
* Exceeding this quota causes the search to be auto-finalized immediately,
  even if there are results that have not yet been returned.
* Fractional seconds are allowed.
* Defaults to 10

indexed_realtime_use_by_default = <bool>
* Should we use the indexedRealtime mode by default
* Precedence: SearchHead
* Defaults to false

indexed_realtime_disk_sync_delay = <int>
* After indexing there is a non-deterministic period where the files on disk
  when opened by other programs might not reflect the latest flush to disk,
  particularly when a system is under heavy load.
* This setting controls the number of seconds to wait for disk flushes to
  finish when using indexed/continuous/pseudo realtime search so that we
  see all of the data.
* Precedence: SearchHead overrides Indexers
* Defaults to 60

indexed_realtime_default_span = <int>
* An indexed realtime search is made up of many component historical searches
  that by default will span this many seconds. If a component search is not
  completed in this many seconds the next historical search will span the extra
  seconds. To reduce the overhead of running an indexed realtime search you can
  change this span to delay longer before starting the next component
  historical search.
* Precedence: Indexers
* Defaults to 1

indexed_realtime_maximum_span = <int>
* While running an indexed realtime search, if the component searches
  regularly take longer than indexed_realtime_default_span seconds, then
  indexed realtime search can fall more than
  indexed_realtime_disk_sync_delay seconds behind realtime. Use this
  setting to set a limit, after which we will drop data in order to catch
  back up to the specified delay from realtime, and only search the default
  span of seconds.
* Precedence: API overrides SearchHead overrides Indexers
* Defaults to 0 (unlimited)

indexed_realtime_cluster_update_interval = <int>
* While running an indexed realtime search, if we are on a cluster we need
  to update the list of allowed primary buckets. This setting controls the
  interval at which we do this, and it must be less than the
  indexed_realtime_disk_sync_delay. If your buckets transition from brand
  new to warm in less than this time, indexed realtime will lose data in a
  clustered environment.
* Precedence: Indexers
* Default: 30

alerting_period_ms = <int>
* This limits the frequency at which we will trigger alerts during a
  realtime search.
* A value of 0 means unlimited; we will trigger an alert for every batch of
  events we read. In dense realtime searches with expensive alerts, this
  can overwhelm the alerting system.
* Precedence: Searchhead
* Default: 0

[rex]

match_limit = <integer>
* Limits the amount of resources that are spent by PCRE
  when running patterns that will not match.
* Use this to set an upper bound on how many times PCRE calls an internal
  function, match(). If set too low, PCRE might fail to correctly match a pattern.
* Default: 100000

[slc]

maxclusters = <integer>
* Maximum number of clusters to create.
* Defaults to 10000.

[findkeywords]

maxevents = <integer>
* Maximum number of events used by the findkeywords command and the Patterns tab.
* Defaults to 50000.

[sort]

maxfiles = <integer>
* Maximum files to open at once.  Multiple passes are made if the number of
  result chunks exceeds this threshold.
* Defaults to 64.

[stats|sistats]

maxmem_check_freq = <integer>
* How frequently to check to see if we are exceeding the in memory data
  structure size limit as specified by max_mem_usage_mb, in rows
* Defaults to 50000 rows

maxresultrows = <integer>
* Maximum number of rows allowed in the process memory.
* When the search process exceeds max_mem_usage_mb and maxresultrows, data is
  spilled out to the disk
* If not specified, defaults to searchresults::maxresultrows (which is by default 50000).

maxvalues = <integer>
* Maximum number of values for any field to keep track of.
* Defaults to 0 (unlimited).

maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 0 (unlimited).

# rdigest is a data structure used to compute approximate order statistics
# (such as median and percentiles) using sublinear space.

rdigest_k = <integer>
* rdigest compression factor
* Lower values mean more compression
* After compression, the number of nodes is guaranteed to be greater than
  or equal to 11 times k.
* Defaults to 100, must be greater than or equal to 2

rdigest_maxnodes = <integer>
* Maximum rdigest nodes before automatic compression is triggered.
* Defaults to 1, meaning automatically configure based on k value

max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size.
* Defaults to 10000.

max_valuemap_bytes = <integer>
* For sistats command, the maximum encoded length of the valuemap, per result
  written out
* If limit is exceeded, extra result rows are written out as needed.  (0 = no
  limit per row)
* Defaults to 100000.

perc_method = nearest-rank|interpolated
* Which method to use for computing percentiles (and medians=50 percentile).
  * nearest-rank picks the number with 0-based rank R =
    floor((percentile/100)*count)
  * interpolated means given F = (percentile/100)*(count-1),
    pick ranks R1 = floor(F) and R2 = ceiling(F).
    Answer = (R2 * (F - R1)) + (R1 * (1 - (F - R1)))
* See wikipedia percentile entries on nearest rank and "alternative methods"
* Defaults to nearest-rank
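* For example (illustrative arithmetic, reading R1 and R2 as the values at
  those ranks): for the sorted values 10, 20, 30, 40 and the 50th
  percentile, nearest-rank picks the value at 0-based rank
  floor(0.5*4) = 2, giving 30, while interpolated computes F = 0.5*3 = 1.5,
  R1 = 1, R2 = 2, and returns (30 * 0.5) + (20 * 0.5) = 25.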

approx_dc_threshold = <integer>
* When using approximate distinct count (i.e. estdc(<field>) in
  stats/chart/timechart), do not use approximated results if the actual number
  of distinct values is less than this number
* Defaults to 1000

dc_digest_bits = <integer>
* 2^<integer> bytes will be the size of the digest used for approximating
  distinct count.
* Defaults to 10 (equivalent to 1KB)
* Must be >= 8 (128B) and <= 16 (64KB)

natural_sort_output = <bool>
* Do a natural sort on the output of stats if output size is <= maxresultrows
* Natural sort means that we sort numbers numerically and non-numbers
  lexicographically
* Defaults to true

list_maxsize = <int>
* Maximum number of list items to emit when using the list() function in
  stats/sistats
* Defaults to 100

sparkline_maxsize = <int>
* Maximum number of elements to emit for a sparkline
* Defaults to value of the list_maxsize setting

sparkline_time_steps = <time-step-string>
* Specify a set of time steps in order of decreasing granularity. Use an integer
  and one of the following time units to indicate each step.
** s = seconds
** m = minutes
** h = hours
** d = days
** month
* Defaults to: 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
* A time step from this list is selected based on the <sparkline_maxsize> setting.
  The lowest <sparkline_time_steps> value that does not exceed the maximum number
  of bins is used.
* Example:
** If you have the following configurations:
** <sparkline_time_steps> = 1s,5s,10s,30s,1m,5m,10m,30m,1h,1d,1month
** <sparkline_maxsize> = 100
** The timespan for 7 days of data is 604,800 seconds. 
** Span = 604,800/<sparkline_maxsize>.
** If sparkline_maxsize = 100, then span = (604,800 / 100) = 6,048 sec, or about
   1.68 hours.
** The "1d" time step is used because it is the lowest value that does not
   exceed the maximum number of bins.

default_partitions = <int>
* Number of partitions to split incoming data into for parallel/multithreaded reduce
* Defaults to 1

partitions_limit = <int>
* Maximum number of partitions to split into that can be specified via the
  'partitions' option.
* When exceeded, the number of partitions is reduced to this limit.
* Defaults to 100
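
As a sketch, a local override that switches to interpolated percentiles and
splits incoming data four ways for multithreaded reduce (illustrative values):

  [stats|sistats]
  perc_method = interpolated
  default_partitions = 4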

[thruput]

maxKBps = <integer>
* If specified and not zero, this limits the speed through the thruput processor 
  in the ingestion pipeline to the specified rate in kilobytes per second.
* To control the CPU load while indexing, use this to throttle the number of
  events this indexer processes to the rate (in KBps) you specify.
* Note that this limit will be applied per ingestion pipeline. For more information 
  about multiple ingestion pipelines see parallelIngestionPipelines in the
  server.conf.spec file.
* With N parallel ingestion pipelines the thruput limit across all of the ingestion 
  pipelines will be N * maxKBps.
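
For example, with two parallel ingestion pipelines configured in server.conf,
the following illustrative stanza caps each pipeline at 256 KBps, for an
aggregate indexing rate of 2 * 256 = 512 KBps:

  [thruput]
  maxKBps = 256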

[journal_compression]

threads = <integer>
* Specifies the maximum number of indexer threads that will work on
  compressing hot bucket journal data.
* Defaults to the number of CPU threads of the host machine
* This setting does not typically need to be modified.

[top]

maxresultrows = <integer>
* Maximum number of result rows to create.
* If not specified, defaults to searchresults::maxresultrows (usually 50000).

maxvalues = <integer>
* Maximum number of distinct field vector values to keep track of.
* Defaults to 100000.

maxvaluesize = <integer>
* Maximum length of a single value to consider.
* Defaults to 1000.

[summarize]

hot_bucket_min_new_events = <integer>
* The minimum number of new events that need to be added to the hot bucket
  (since last summarization) before a new summarization can take place. To
  disable hot bucket summarization, set this value to a large positive number.
* Defaults to 100000

max_hot_bucket_summarization_idle_time = <unsigned int>
* Maximum amount of time, in seconds, that a hot bucket can be idle. After this
  time, all of its events are summarized even if there are not enough new events
  (as determined by hot_bucket_min_new_events).
* Defaults to 900 seconds (or 15 minutes)

sleep_seconds = <integer>
* The amount of time to sleep between polls of the summarization complete
  status.
* Defaults to 5

stale_lock_seconds = <integer>
* The amount of time that must elapse since the mod time of a .lock file before
  summarization considers that lock file stale and removes it
* Defaults to 600

max_summary_ratio = <float>
* A number in the [0-1] range that indicates the maximum ratio of
  summary data / bucket size at which point the summarization of that bucket,
  for the particular search, will be disabled. Use 0 to disable.
* Defaults to 0

max_summary_size = <int>
* Size of summary, in bytes, at which point we'll start applying the
  max_summary_ratio. Use 0 to disable.
* Defaults to 0
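
The two settings work together: max_summary_ratio is applied only once a
summary grows past max_summary_size. A hypothetical pairing (values are
illustrative):

  [summarize]
  # apply the ratio check only once a summary exceeds ~50 MB
  max_summary_size = 52428800
  # then stop summarizing a bucket whose summary exceeds half the bucket size
  max_summary_ratio = 0.5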

max_time = <int>
* The maximum amount of time, seconds, that a summary search process is allowed
  to run. Use 0 to disable.
* Defaults to 0

indextime_lag = <unsigned int>
* The amount of lag time to give indexing to ensure that it has synced any
  received events to disk. Effectively, the data that has been received in the
  past indextime_lag will NOT be summarized.
* Do not change this value unless directed by Splunk support.
* Defaults to 90

max_replicated_hot_bucket_idle_time = <unsigned int>
* Maximum amount of time, in seconds, that a replicated hot bucket can be idle
  before indextime_lag is no longer applied to it.
* This applies only to idle replicated hot buckets. As soon as new events start
  flowing in, the default behavior of applying indextime_lag resumes.
* Defaults to 3600 seconds

[transactions]

maxopentxn = <integer>
* Specifies the maximum number of not-yet-closed transactions to keep in the
  open pool before starting to evict transactions.
* Defaults to 5000.

maxopenevents = <integer>
* Specifies the maximum number of events that are part of open transactions
  before transaction eviction starts, using an LRU policy.
* Defaults to 100000.

[inputproc]

max_fd = <integer>
* Maximum number of file descriptors that an ingestion pipeline in Splunk will
  keep open, to capture any trailing data from files that are written to very
  slowly.
* Note that this limit will be applied per ingestion pipeline. For more information 
  about multiple ingestion pipelines see parallelIngestionPipelines in the
  server.conf.spec file.
* With N parallel ingestion pipelines the maximum number of file descriptors that
  can be open across all of the ingestion pipelines will be N * max_fd.
* Defaults to 100.
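
For example, with two parallel ingestion pipelines, the following illustrative
setting allows up to 2 * 256 = 512 file descriptors to be held open across the
indexer for slowly written files:

  [inputproc]
  max_fd = 256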

monitornohandle_max_heap_mb = <integer>
* Controls the maximum memory used by the Windows-specific modular input
  MonitorNoHandle in user mode.
* The memory of this input grows in size when the data being produced
  by applications writing to monitored files comes in faster than the Splunk
  system can accept it.
* When set to 0, the heap size (memory allocated in the modular input) can grow
  without limit.
* If this size is limited, and the limit is encountered, the input will drop
  some data to stay within the limit.
* Defaults to 0.

monitornohandle_max_driver_mem_mb = <integer>
* Controls the maximum NonPaged memory used by the Windows-specific kernel driver of modular input
  MonitorNoHandle.
* The memory of this input grows in size when the data being produced
  by applications writing to monitored files comes in faster than the Splunk
  system can accept it.
* When set to 0, the NonPaged memory size (memory allocated in the kernel driver of modular input) can grow
  without limit.
* If this size is limited, and the limit is encountered, the input will drop
  some data to stay within the limit.
* Default: 0

monitornohandle_max_driver_records = <integer>
* Controls memory growth by limiting the maximum in-memory records stored
  by the kernel module of Windows-specific modular input MonitorNoHandle.
* When monitornohandle_max_driver_mem_mb is set to > 0, this config is ignored.
* monitornohandle_max_driver_mem_mb and monitornohandle_max_driver_records are mutually exclusive.
* If the limit is encountered, the input will drop some data to stay within the limit.
* Defaults to 500.

time_before_close = <integer>
* MOVED.  This setting is now configured per-input in inputs.conf.
* Specifying this setting in limits.conf is DEPRECATED, but for now will
  override the setting for all monitor inputs.

tailing_proc_speed = <integer>
* REMOVED.  This setting is no longer used.

file_tracking_db_threshold_mb = <integer>
* This setting controls the trigger point at which the file tracking db (also
  commonly known as the "fishbucket" or btree) rolls over.  A new database is
  created in its place.  Writes are targeted at the new db.  Reads are first
  targeted at the new db, and we fall back to the old db for read failures.
  Any reads served from the old db successfully will be written back into the
  new db.
* MIGRATION NOTE: if this setting doesn't exist, the initialization code in
  splunkd triggers an automatic migration step that reads in the current value
  for "maxDataSize" under the "_thefishbucket" stanza in indexes.conf and
  writes this value into etc/system/local/limits.conf.

learned_sourcetypes_limit = <0 or positive integer>
* Limits the number of entries added to the learned app for performance
  reasons.
* If nonzero, limits two properties of data added to the learned app by the
  file classifier. (Code specific to monitor:: stanzas that auto-determines
  sourcetypes from content.)
  * The number of sourcetypes added to the learned app's props.conf file will
    be limited to approximately this number.
  * The number of file-content fingerprints added to the learned app's
    sourcetypes.conf file will be limited to approximately this number.
* The tracking for uncompressed and compressed files is done separately, so in
  some cases this value may be exceeded.
* This limit is not the recommended solution for auto-identifying sourcetypes.
  The usual best practices are to set sourcetypes in input stanzas, or
  alternatively to apply them based on filename pattern in props.conf
  [source::<pattern>] stanzas, as sketched below.
* Defaults to 1000.
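
A minimal props.conf sketch of that recommended practice, assuming a
hypothetical application log path and sourcetype name:

  # props.conf
  [source::/var/log/myapp/*.log]
  sourcetype = myapp_log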

[scheduler]

saved_searches_disabled = <bool>
* Whether saved search jobs are disabled by the scheduler.
* Defaults to false.

max_searches_perc = <integer>
* The maximum number of searches the scheduler can run, as a percentage of the
  maximum number of concurrent searches. See [search] max_searches_per_cpu for
  how to set the system-wide maximum number of searches.
* Defaults to 50.

max_searches_perc.<n> = <integer>
max_searches_perc.<n>.when = <cron string>
* The same as max_searches_perc but the value is applied only when the cron
  string matches the current time.  This allows max_searches_perc to have
  different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific
  to most specific with increasing <n>.
* The scheduler checks in reverse-<n> order, using the first match.
* If these settings are not provided at all, or no "when" matches the current
  time, the value falls back to the non-<n> value of max_searches_perc.

auto_summary_perc = <integer>
* The maximum number of concurrent searches to be allocated for auto
  summarization, as a percentage of the concurrent searches that the scheduler
  can run.
* Auto summary searches include:
  * Searches which generate the data for the Report Acceleration feature.
  * Searches which generate the data for Data Model acceleration.
* Note: user scheduled searches take precedence over auto summary searches.
* Defaults to 50.

auto_summary_perc.<n> = <integer>
auto_summary_perc.<n>.when = <cron string>
* The same as auto_summary_perc but the value is applied only when the cron
  string matches the current time.  This allows auto_summary_perc to have
  different values at different times of day, week, month, etc.
* There may be any number of non-negative <n> that progress from least specific
  to most specific with increasing <n>.
* The scheduler checks in reverse-<n> order, using the first match.
* If these settings are not provided at all, or no "when" matches the current
  time, the value falls back to the non-<n> value of auto_summary_perc.
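
For example, an illustrative configuration that gives auto summarization a
larger share of the scheduler's searches overnight:

  auto_summary_perc = 50
  auto_summary_perc.0 = 75
  auto_summary_perc.0.when = * 0-5 * * *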

priority_runtime_factor = <double>
* The amount to scale the priority runtime adjustment by.
* Every search's priority is made higher (worse) by its typical running time.
  Since many searches run in fractions of a second and the priority is
  integral, adjusting by a raw runtime wouldn't change the result; therefore,
  it's scaled by this value.
* Defaults to 10.

priority_skipped_factor = <double>
* The amount to scale the skipped adjustment by.
* A potential issue with the priority_runtime_factor is that now longer-running
  searches may get starved.  To balance this out, make a search's priority
  lower (better) the more times it's been skipped.  Eventually, this adjustment
  will outweigh any worse priority due to a long runtime. This value controls
  how quickly this happens.
* Defaults to 1.

search_history_max_runtimes = <unsigned int>
* The number of runtimes kept for each search.
* Used to calculate historical typical runtime during search prioritization.
* Defaults to 10.

search_history_load_timeout = <duration-specifier>
* The maximum amount of time to defer running continuous scheduled searches
  while waiting for the KV Store to come up in order to load historical data.
  This is used to prevent gaps in continuous scheduled searches when splunkd
  was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: s, sec, second, secs, seconds, m, min, minute, mins,
  minutes.
* For example: "60s" = 60 seconds, "5m" = 5 minutes.
* Defaults to 2m.

max_continuous_scheduled_search_lookback = <duration-specifier>
* The maximum amount of time to look back and run missed continuous scheduled
  searches once Splunk comes back up, in the event that it was down.
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours,
  d, day, days, w, week, weeks, mon, month, months.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* A value of 0 means no lookback.
* Defaults to 24 hours.

introspection_lookback = <duration-specifier>
* The amount of time to "look back" when reporting introspection statistics.
* For example: what is the number of dispatched searches in the last 60 minutes?
* Use [<int>]<unit> to specify a duration; a missing <int> defaults to 1.
* Relevant units are: m, min, minute, mins, minutes, h, hr, hour, hrs, hours,
  d, day, days, w, week, weeks.
* For example: "5m" = 5 minutes, "1h" = 1 hour.
* Defaults to 1 hour.

max_action_results = <integer>
* The maximum number of results to load when triggering an alert action.
* Defaults to 50000

action_execution_threads = <integer>
* Number of threads to use to execute alert actions. Change this number if your
  alert actions take a long time to execute.
* This number is capped at 10.
* Defaults to 2

actions_queue_size = <integer>
* The number of alert notifications to queue before the scheduler starts
  blocking. Set to 0 for infinite size.
* Defaults to 100

actions_queue_timeout = <integer>
* The maximum amount of time, in seconds, to block when the action queue size
  is full.
* Defaults to 30

alerts_max_count = <integer>
* Maximum number of unexpired alerts to keep for the alerts manager. When this
  number is reached, Splunk starts discarding the oldest alerts.
* Defaults to 50000

alerts_max_history = <integer>[s|m|h|d]
* Maximum time to search in the past for previously triggered alerts.
* splunkd uses this property to populate the Activity -> Triggered Alerts page at startup.
* Defaults to 7 days.
* Values greater than the default may cause slowdown.

alerts_scoping = host|splunk_server|all
* Determines the scoping to use on the search to populate the triggered alerts
  page. Choosing splunk_server will result in the search query
  using splunk_server=local, host will result in the search query using
  host=<search-head-host-name>, and all will have no scoping added to the
  search query.
* Defaults to splunk_server.

alerts_expire_period = <integer>
* The amount of time between expired alert removals.
* This period controls how frequently the alerts list is scanned. The only
  benefit of reducing it is better resolution in the number of alerts fired
  at the savedsearch level.
* Change not recommended.
* Defaults to 120.

persistance_period = <integer>
* The period (in seconds) between scheduler state persistence to disk. The
  scheduler currently persists the suppression and fired-unexpired alerts to
  disk.
* This is relevant only in search head pooling mode.
* Defaults to 30.

max_lock_files = <int>
* The number of most recent lock files to keep around.
* This setting only applies in search head pooling.

max_lock_file_ttl = <int>
* Time (in seconds) that must pass before reaping a stale lock file.
* Only applies in search head pooling.

max_per_result_alerts = <int>
* Maximum number of alerts to trigger for each saved search instance (or
  real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit
* Defaults to 500

max_per_result_alerts_time = <int>
* Maximum amount of time to spend triggering alerts for each saved search
  instance (or real-time results preview for RT alerts)
* Only applies in non-digest mode alerting. Use 0 to disable this limit.
* Defaults to 300

scheduled_view_timeout = <int>[s|m|h|d]
* The maximum amount of time that a scheduled view (PDF delivery) is allowed
  to spend rendering
* Defaults to 60m

concurrency_message_throttle_time = <int>[s|m|h|d]
* The amount of time to wait between issuing messages that warn about scheduler
  concurrency limits
* Defaults to 10m

shp_dispatch_to_slave = <bool>
* Controls whether the scheduler distributes jobs throughout the search head
  pool.
* Defaults to true

shc_role_quota_enforcement = <bool>
* When this is enabled, the following limits are enforced by the captain for scheduled searches:
  - User role quotas are enforced globally.
    A given role can have (n *number_of_peers) searches running cluster-wide,
    where n is the quota for that role as defined by srchJobsQuota and
    rtSrchJobsQuota on the captain
  - Maximum number of concurrent searches is enforced globally.
    This is (n * number_of_peers) where n is the max concurrent searches on the captain
    (see max_searches_per_cpu for a description of how this is computed).
    Concurrent searches include both scheduled searches and ad hoc searches.
* Scheduled searches will therefore not have an enforcement of either of the above
   on a per-member basis.
* Note that this doesn't control the enforcement of the scheduler quota.
  For a search head cluster, that is defined as (max_searches_perc * number_of_peers)
  and is always enforced globally on the captain.
* Quota information is conveyed from the members to the captain. Network delays
  can cause the quota calculation on the captain to vary from the actual values
  in the members and may cause search limit warnings. This should clear up as
  the information is synced.
* Defaults to false.
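
As a worked example of these cluster-wide limits (illustrative numbers): in a
3-member cluster where a role has srchJobsQuota = 4 on the captain, that role
can run up to 3 * 4 = 12 concurrent scheduled searches cluster-wide.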

shc_local_quota_check = <bool>
* Enabling this enforces user role quota and maximum number of
  concurrent searches on a per-member basis.
* Cluster-wide scheduler quota is still enforced globally on the captain.
* See shc_role_quota_enforcement for more details.
* Disabling this requires shc_role_quota_enforcement=true. Otherwise, all
  quota checks will be skipped.
* Note that disabling this will also disable disk quota checks.
* Defaults to true.

[auto_summarizer]

cache_timeout = <integer>
* The amount of time, in seconds, to cache auto summary details and search hash
  codes
* Defaults to 600 (10 minutes)

search_2_hash_cache_timeout = <integer>
* The amount of time, in seconds, to cache search hash codes
* Defaults to the value of cache_timeout, i.e. 600 (10 minutes)

maintenance_period = <integer>
* The period of time, in seconds, at which the auto summarization maintenance
  runs
* Defaults to 1800 (30 minutes)

allow_event_summarization = <bool>
* Whether auto summarization of searches whose remote part returns events
  rather than results will be allowed.
* Defaults to false

max_verify_buckets = <int>
* When verifying buckets, stop after verifying this many buckets if no failures
  have been found
* 0 means never
* Defaults to 100

max_verify_ratio = <number>
* Maximum fraction of data in each bucket to verify
* Defaults to 0.1 (10%)

max_verify_bucket_time = <int>
* Maximum time to spend verifying each bucket, in seconds
* Defaults to 15 (seconds)

verify_delete = <bool>
* Should summaries that fail verification be automatically deleted?
* Defaults to false

max_verify_total_time = <int>
* Maximum total time in seconds to spend doing verification, regardless of
  whether any buckets have failed
* Defaults to 0 (no limit)

max_run_stats = <int>
* Maximum number of summarization run statistics to keep track of and expose
  via REST.
* Defaults to 48

return_actions_with_normalized_ids = [yes|no|fromcontext]
* Report acceleration summaries are stored under a signature/hash which can be
  regular or normalized.
    * Normalization improves the re-use of pre-built summaries but is not
      supported before 5.0. This setting determines the default signature type
      (regular or normalized) that is used.
    * Default value is "fromcontext", which means the endpoints and summaries
      operate based on context.
* The normalization strategy can also be changed via admin/summarization REST
  calls with the "use_normalization" parameter, which can take the values
  "yes"/"no"/"fromcontext"

normalized_summaries = <bool>
* Turn on/off normalization of report acceleration summaries.
* Default = false and will become true in 6.0

detailed_dashboard = <bool>
* Turn on/off the display of both normalized and regular summaries in the
  Report Acceleration summary dashboard and details.
* Default = false

shc_accurate_access_counts = <bool>
* Only relevant if you are using search head clustering
* Turn on/off to make acceleration summary access counts accurate on the
  captain, by centralizing the access requests on the captain.
* Default = false

[show_source]

max_count = <integer>
* Maximum number of events accessible by show_source.
* The show source command will fail when more than this many events are in the
  same second as the requested event.
* Defaults to 10000

max_timebefore = <timespan>
* Maximum time before requested event to show.
* Defaults to '1day' (86400 seconds)

max_timeafter = <timespan>
* Maximum time after requested event to show.
* Defaults to '1day' (86400 seconds)

distributed = <bool>
* Controls whether we will do a distributed search for show source to get
  events from all servers and indexes
* Turning this off results in better performance for show source, but events
  will only come from the initial server and index
* NOTE: event signing and verification is not supported in distributed mode
* Defaults to true

distributed_search_limit = <unsigned int>
* Sets a limit on the maximum events we will request when doing the search for
  distributed show source
* As this is used for a larger search than the initial non-distributed show
  source, it is larger than max_count
* Splunk will rarely return anywhere near this amount of results, as we will
  prune the excess results
* The point is to ensure the distributed search captures the target event in an
  environment with many events
* Defaults to 30000

[typeahead]

maxcount = <integer>
* Maximum number of typeahead results to find.
* Defaults to 1000

use_cache = [0|1]
* Specifies whether the typeahead cache will be used if use_cache is not
  specified in the command line or endpoint.
* Defaults to true.

fetch_multiplier = <integer>
* A multiplying factor that determines the number of terms to fetch from the
  index: fetch = fetch_multiplier x count.
* Defaults to 50
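
For example, with a requested count of 1000 and the default fetch_multiplier
of 50, typeahead fetches 50 * 1000 = 50,000 terms from the index before
filtering them down to the returned results.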

cache_ttl_sec = <integer>
* How long the typeahead cached results are valid, in seconds.
* Defaults to 300.

min_prefix_length = <integer>
* The minimum string prefix after which to provide typeahead.
* Defaults to 1.

max_concurrent_per_user = <integer>
* The maximum number of concurrent typeahead searches per user. Once this
  maximum is reached, only cached typeahead results might be available.
* Defaults to 3.

[typer]

maxlen = <int>
* In eventtyping, pay attention to first <int> characters of any attribute
  (such as _raw), including individual tokens. Can be overridden by supplying
  the typer operator with the argument maxlen (for example,
  "|typer maxlen=300").
* Defaults to 10000.

[authtokens]

expiration_time = <integer>
* Expiration time of auth tokens in seconds.
* Defaults to 3600

[sample]

maxsamples = <integer>
* Defaults to 10000

maxtotalsamples = <integer>
* Defaults to 100000

[metadata]

maxresultrows = <integer>
* The maximum number of results in a single chunk fetched by the metadata
  command
* A smaller value requires less memory on the search head in setups with a
  large number of peers and many metadata results; however, setting this too
  small will decrease search performance
* Default is 10000
* Do not change unless instructed to do so by Splunk Support

maxcount = <integer>
* The total number of metadata search results returned by the search head;
  after the maxcount is reached, any additional metadata results received from
  the search peers will be ignored (not returned)
* A larger number incurs additional memory usage on the search head
* Default is 100000

[set]

maxresultrows = <integer>
* The maximum number of results the set command will use from each resultset
  to compute the required set operation

[input_channels]

max_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk Support

lowater_inactive = <integer>
* Internal setting, do not change unless instructed to do so by Splunk Support

inactive_eligibility_age_seconds = <integer>
* Internal setting, do not change unless instructed to do so by Splunk Support

[ldap]

max_users_to_precache = <unsigned integer>
* The maximum number of users we will attempt to pre-cache from LDAP after reloading auth
* Set this to 0 to turn off pre-caching

allow_multiple_matching_users = <bool>
* This controls whether we allow login when we find multiple entries with the
  same value for the username attribute
* When multiple entries are found, we choose the first user DN
  lexicographically
* Setting this to false is more secure as it does not allow any ambiguous
  login, but users with duplicate entries will not be able to log in.
* Defaults to true

[spath]

extraction_cutoff = <integer>
* For extract-all spath extraction mode, only apply extraction to the first
  <integer> number of bytes
* Defaults to 5000

extract_all = <boolean>
* Controls whether we respect automatic field extraction when spath is invoked
  manually.
* If true, we extract all fields regardless of settings.  If false, we only
  extract fields used by later search commands.

[reversedns]

rdnsMaxDutyCycle = <integer>
* Generate a diagnostic WARN in splunkd.log if reverse DNS lookups are taking
  more than this percent of time
* Range 0-100
* Defaults to 10

[viewstates]

enable_reaper = <boolean>
* Controls whether the viewstate reaper runs
* Defaults to true

reaper_freq = <integer>
* Controls how often the viewstate reaper runs
* Defaults to 86400 (1 day)

reaper_soft_warn_level = <integer>
* Controls what the reaper considers an acceptable number of viewstates
* Defaults to 1000

ttl = <integer>
* Controls the age at which a viewstate is considered eligible for reaping
* Defaults to 86400 (1 day)

[geostats]

maxzoomlevel = <integer>
* Controls the number of zoom levels that geostats will cluster events on

zl_0_gridcell_latspan = <float>
* Controls the grid spacing, in latitude degrees, at the lowest zoom level,
  which is zoom level 0.
* Grid spacing at other zoom levels is derived from this value by halving it
  at each successive zoom level.

zl_0_gridcell_longspan = <float>
* Controls the grid spacing, in longitude degrees, at the lowest zoom level,
  which is zoom level 0.
* Grid spacing at other zoom levels is derived from this value by halving it
  at each successive zoom level.

filterstrategy = <integer>
* Controls the selection strategy on the geoviz map. Allowed values are 1 and 2.

[iplocation]

db_path = <path>
* Absolute path to GeoIP database in MMDB format
* If not set, defaults to the database included with Splunk

[tscollect]

squashcase = <boolean>
* The default value of the 'squashcase' argument if not specified by the command
* Defaults to false

keepresults = <boolean>
* The default value of the 'keepresults' argument if not specified by the command
* Defaults to false

optimize_max_size_mb = <unsigned int>
* The maximum size in megabytes of files to create with optimize
* Specify 0 for no limit (may create very large tsidx files)
* Defaults to 1024

[tstats]

apply_search_filter = <boolean>
* Controls whether we apply role-based search filters when users run tstats on
  normal index data
* Note: we never apply search filters to data collected with tscollect or datamodel acceleration
* Defaults to true

summariesonly = <boolean>
* The default value of 'summariesonly' arg if not specified by the command
* When running tstats on an accelerated datamodel, summariesonly=false implies
  a mixed mode where we will fall back to search for missing TSIDX data
* summariesonly=true overrides this mixed mode to only generate results from
  TSIDX data, which may be incomplete
* Defaults to false

allow_old_summaries = <boolean>
* The default value of 'allow_old_summaries' arg if not specified by the
  command
* When running tstats on an accelerated datamodel, allow_old_summaries=false
  ensures we check that the datamodel search in each bucket's summary metadata
  is considered up to date with the current datamodel search. Only summaries
  that are considered up to date will be used to deliver results.
* The allow_old_summaries=true attribute overrides this behavior and will deliver results
  even from bucket summaries that are considered out of date with the current
  datamodel.
* Defaults to false

chunk_size = <unsigned int>
* ADVANCED: The default value of 'chunk_size' arg if not specified by the command
* This argument controls how many events are retrieved at a time within a
  single TSIDX file when answering queries
* Consider lowering this value if tstats queries are using too much memory
  (cannot be set lower than 10000)
* Larger values will tend to cause more memory to be used (per search) and
  might have performance benefits.
* Smaller values will tend to reduce performance and might reduce memory used
  (per search).
* Altering this value without careful measurement is not advised.
* Defaults to 10000000
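
A hypothetical local override that halves the retrieval batch to trade some
speed for lower per-search memory use:

  [tstats]
  chunk_size = 5000000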

warn_on_missing_summaries = <boolean>
* ADVANCED: Only meant for debugging summariesonly=true searches on accelerated datamodels.
* When true, search will issue a warning for a tstats summariesonly=true search for 
  the following scenarios:
    a) If there is a non-hot bucket that has no corresponding datamodel acceleration 
       summary whatsoever.
    b) If the bucket's summary does not match with the current datamodel acceleration search.
* Defaults to false

[pdf]

max_rows_per_table = <unsigned int>
* The maximum number of rows that will be rendered for a table within
  integrated PDF rendering
* Defaults to 1000

render_endpoint_timeout = <unsigned int>
* The number of seconds after which the pdfgen render endpoint will timeout if
  it has not yet finished rendering the PDF output
* Defaults to 3600

[kvstore]

max_accelerations_per_collection = <unsigned int>
* The maximum number of accelerations that can be assigned to a single
  collection
* Valid values range from 0 to 50
* Defaults to 10

max_fields_per_acceleration = <unsigned int>
* The maximum number of fields that can be part of a compound acceleration
  (i.e. an acceleration with multiple keys)
* Valid values range from 0 to 50
* Defaults to 10

max_rows_per_query = <unsigned int>
* The maximum number of rows that will be returned for a single query to a collection.
* If the query returns more rows than the specified value, the returned result
  set is truncated to the number of rows specified in this value.
* Defaults to 50000

max_queries_per_batch = <unsigned int>
* The maximum number of queries that can be run in a single batch
* Defaults to 1000

max_size_per_result_mb = <unsigned int>
* The maximum size of the result that will be returned for a single query to a
  collection in MB.
* Defaults to 50 MB

max_size_per_batch_save_mb = <unsigned int>
* The maximum size of a batch save query in MB
* Defaults to 50 MB

max_documents_per_batch_save = <unsigned int>
* The maximum number of documents that can be saved in a single batch
* Defaults to 1000

max_size_per_batch_result_mb = <unsigned int>
* The maximum size of the result set from a set of batched queries
* Defaults to 100 MB

max_rows_in_memory_per_dump = <unsigned int>
* The maximum number of rows to hold in memory before flushing them to the CSV
  projection of the KVStore collection.
* Defaults to 200

max_threads_per_outputlookup = <unsigned int>
* The maximum number of threads to use during outputlookup commands on KVStore
* If the value is 0 the thread count will be determined by CPU count
* Defaults to 1
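
As a sketch, an illustrative [kvstore] override that raises the per-query row
cap and lets the outputlookup thread count follow the CPU count:

  [kvstore]
  max_rows_per_query = 100000
  max_threads_per_outputlookup = 0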

[http_input]

max_number_of_tokens = <unsigned int>
* The maximum number of tokens reported by logging input metrics.
* Defaults to 10000.

metrics_report_interval = <unsigned int>
* The interval (in seconds) of logging input metrics report.
* Defaults to 60 (one minute).

max_content_length = <unsigned int>
* The maximum length of HTTP request content accepted by the HTTP Input server.
* Defaults to 1000000 (~ 1MB).

max_number_of_ack_channel = <unsigned int>
* The maximum number of ACK channels accepted by the HTTP Event Collector server.
* Defaults to 1000000 (~ 1M).

max_number_of_acked_requests_pending_query = <unsigned int>
* The maximum number of ACKed requests pending query on the HTTP Event Collector
  server.
* Defaults to 10000000 (~ 10M).

max_number_of_acked_requests_pending_query_per_ack_channel = <unsigned int>
* The maximum number of ACKed requests pending query per ACK channel on the HTTP
  Event Collector server.
* Defaults to 1000000 (~ 1M).

[slow_peer_disconnect]

* Settings for the heuristic that will detect and disconnect slow peers towards
  the end of a search that has returned a large volume of data

disabled = <boolean>
* Specifies whether this feature is disabled.
* Defaults to true

batch_search_activation_fraction = <double>  
* The fraction of peers that must have completed before we start disconnecting
* This is only applicable to batch search because the slow peers will not hold
  back the fast peers.
* Defaults to 0.9

packets_per_data_point = <unsigned int>
* Rate statistics will be sampled once every packets_per_data_point packets.
* Defaults to 500

sensitivity = <double>
* Sensitivity of the heuristic to newer values. For larger values of
  sensitivity, the heuristic gives more weight to newer statistics.
* Defaults to 0.3

grace_period_before_disconnect = <double>
* If the heuristic consistently claims that the peer is slow for at least
  <grace_period_before_disconnect>*life_time_of_collector seconds, only then
  is the peer disconnected.
* Defaults to 0.1
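
For example (illustrative numbers), with grace_period_before_disconnect = 0.1
and a collector that has been alive for 600 seconds, a peer must appear slow
for at least 0.1 * 600 = 60 consecutive seconds before it is disconnected.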

threshold_data_volume = <unsigned int>
* The volume of uncompressed data, in KB, that must have accumulated from
  a peer before the peer is considered in the heuristic.
* Defaults to 1024

threshold_connection_life_time = <unsigned int>
* All peers are given an initial grace period of at least this many seconds
  before they are considered in the heuristic.
* Defaults to 60

bound_on_disconnect_threshold_as_fraction_of_mean = <double>
* The maximum value of the threshold data rate we will use to determine if
  a peer is slow. The actual threshold is computed dynamically at search time
  but will never exceed
  (100*bound_on_disconnect_threshold_as_fraction_of_mean)% on either side of
  the mean.
* Defaults to 0.2

[geomfilter]

enable_generalization = <boolean>
* Whether or not generalization is applied to polygon boundaries to reduce 
  point count for rendering
* Defaults to true

enable_clipping = <boolean>
* Whether or not polygons are clipped to the viewport provided by the render client
* Defaults to true

[system_checks]

insufficient_search_capabilities = enabled | disabled
* Enables/disables automatic daily logging of scheduled searches by users who
  have insufficient capabilities to run them as configured.
* Such searches are those that:
  + Have schedule_priority set to a value other than "default" but the owner
    does not have the edit_search_schedule_priority capability.
  + Have schedule_window set to a value other than "auto" but the owner does
    not have the edit_search_schedule_window capability.
* This check and any resulting logging occur on system startup and every 24
  hours thereafter.
* Defaults to enabled.

orphan_searches = enabled|disabled
* Enables/disables automatic UI message notifications to admins for
  scheduled saved searches with invalid owners.
  * Scheduled saved searches with invalid owners are considered "orphaned".  They
    cannot be run because Splunk cannot determine the roles to use for the search
    context.
  * Typically, this situation occurs when a user creates scheduled searches
    then departs the organization or company, causing their account to be
    deactivated.
* Currently this check and any resulting notifications occur on system startup
  and every 24 hours thereafter.
* Defaults to enabled.

installed_files_integrity = enabled | log_only | disabled
* Enables/disables automatic verification on every startup that all the files
  that were installed with the running Splunk version are still the files that
  should be present.
  * Effectively this finds cases where files were removed or changed that
    should not be removed or changed, whether by accident or intent.
  * The source of truth for the files that should be present is the manifest
    file in the $SPLUNK_HOME directory that comes with the release, so if this
    file is removed or altered, the check cannot work correctly.
  * Reading of all the files provided with the install has some I/O cost,
    though it is paid out over many seconds and should not be severe.
* When "enabled", detected problems will cause a message to be posted to the
  bulletin board (system UI status message).
* When "enabled" or "log_only", detected problems will cause details to be
  written out to splunkd.log
* When "disabled", no check will be attempted or reported.
* Defaults to enabled.

Global Optimization Settings


[search_optimization]

enabled = <bool>
* Enables search optimizations
* Defaults to true


Individual optimizers


# Configuration options for predicate_push optimizations

[search_optimization::predicate_push]

enabled = <bool>
* Enables predicate push optimization
* Defaults to true


# Configuration options for predicate_merge optimizations

[search_optimization::predicate_merge]

enabled = <bool>
* Enables predicate merge optimization
* Defaults to true

[mvexpand]

* This stanza allows for fine tuning of the mvexpand search command.

max_mem_usage_mb = <non-negative integer>
* Overrides the default value for max_mem_usage_mb
* See definition in [default] max_mem_usage_mb for more details
* Defaults to 500 (MB)

[mvcombine]

* This stanza allows for fine tuning of the mvcombine search command.

max_mem_usage_mb = <non-negative integer>
* Overrides the default value for max_mem_usage_mb
* See definition in [default] max_mem_usage_mb for more details
* Defaults to 500 (MB)

[xyseries]

* This stanza allows for fine tuning of the xyseries search command.

max_mem_usage_mb = <non-negative integer>
* Overrides the default value for max_mem_usage_mb
* See definition in [default] max_mem_usage_mb for more details

limits.conf.example

#   Version 6.5.7 
# CAUTION: Do not alter the settings in limits.conf unless you know what you are doing. 
# Improperly configured limits may result in splunkd crashes and/or memory overuse.


[searchresults]
maxresultrows = 50000
# maximum number of times to try in the atomic write operation (1 = no retries)
tocsv_maxretry = 5
# retry period is 1/2 second (500 milliseconds)
tocsv_retryperiod_ms = 500

[subsearch]
# maximum number of results to return from a subsearch
maxout = 100
# maximum number of seconds to run a subsearch before finalizing
maxtime = 10
# time to cache a given subsearch's results
ttl = 300

[anomalousvalue]
maxresultrows = 50000
# maximum number of distinct values for a field
maxvalues = 100000
# maximum size in bytes of any single value (truncated to this size if larger)
maxvaluesize = 1000

[associate]
maxfields = 10000
maxvalues = 10000
maxvaluesize = 1000

# for the contingency, ctable, and counttable commands
[ctable]
maxvalues = 1000

[correlate]
maxfields = 1000

# for bin/bucket/discretize
[discretize]
maxbins = 50000 
# if maxbins not specified or = 0, defaults to searchresults::maxresultrows

[inputcsv]
# maximum number of retries for creating a tmp directory (with random name in
# SPLUNK_HOME/var/run/splunk)
mkdir_max_retries = 100

[kmeans]
maxdatapoints = 100000000

[kv]
# when non-zero, the point at which kv should stop creating new columns
maxcols = 512

[rare]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000

[restapi]
# maximum result rows to be returned by /events or /results getters from REST
# API
maxresultrows = 50000

[search]
# how long searches should be stored on disk once completed
ttl = 86400

# the approximate maximum number of timeline buckets to maintain
status_buckets = 300

# the last accessible event in a call that takes a base and bounds
max_count = 10000

# the minimum length of a prefix before a * to ask the index about
min_prefix_len = 1

# the length of time to persist search cache entries (in seconds)
cache_ttl = 300

[scheduler]

# User default value (needed only if different from system/default value) when
# no max_searches_perc.<n>.when (if any) below matches.
max_searches_perc = 60

# Increase the value between midnight-5AM.
max_searches_perc.0 = 75
max_searches_perc.0.when = * 0-5 * * *
 
# More specifically, increase it even more on weekends.
max_searches_perc.1 = 85
max_searches_perc.1.when = * 0-5 * * 0,6

[slc]
# maximum number of clusters to create
maxclusters = 10000

[findkeywords]
# events to use in findkeywords command (and patterns UI)
maxevents = 50000

[stats]
maxresultrows = 50000
maxvalues = 10000
maxvaluesize = 1000

[top]
maxresultrows = 50000
# maximum distinct value vectors to keep track of
maxvalues = 100000
maxvaluesize = 1000


[search_optimization]
enabled = true

[search_optimization::predicate_push]
enabled = true

[search_optimization::predicate_merge]
enabled = true
