Dispatch directory and search artifacts
Each search or alert you run creates a search artifact that must be saved to disk. The artifacts are stored in directories under the
dispatch directory. For each search job, there is one search-specific directory. When the job expires, the search-specific directory is deleted.
See About jobs and job management for information about search jobs.
Dispatch directory location
The dispatch directory stores artifacts on the nodes where searches run. These nodes include search heads, search peers, and standalone Splunk Enterprise instances. The path to the dispatch directory is $SPLUNK_HOME/var/run/splunk/dispatch.
Dispatch directory contents
Within the dispatch directory, a search-specific directory is created for each search or alert. Each search-specific directory contains several files, including an SRS file of the search results, a
search.log file with details about the search execution, and more. Some of these files, such as the generate_preview flag, are 0-byte files.
View dispatch directory contents
From a command-line window, or UI window such as Windows Explorer or Finder, you can list the search-specific directories.
For example, to view a list in a command-line window, change to the dispatch directory and list its contents. The following list contains ad hoc, real-time, and scheduled search-specific directories.
# cd $SPLUNK_HOME/var/run/splunk/dispatch
# ls
1346978195.13
1347457148.46
1469483309.269
1469483310.27
1469483311.272
admin__admin__search__count_1347454406.2
rt_1347456938.31
rt_scheduler__admin__search__RMD51cfb077d0798f99a_at_1469464020_37.0
scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1469503020_53
subsearch_1347457148.46_1347457148.1
The contents of a search-specific directory include files and subdirectories. The following example shows the contents of a search-specific directory named 1346978195.13.
# ls 1346978195.13/
args.txt
audited
buckets/
events/
generate_preview
info.csv
metadata.csv
peers.csv
request.csv
results.srs.gz
runtime.csv
search.log
status.csv
timeline.csv
Windows users should use dir instead of ls in a command-line window, or use Windows Explorer, to see the contents of these directories.
File descriptions for search-specific directories
The files or subdirectories that appear in a search-specific directory depend on the type of search that you run. The following table lists the files and subdirectories that might appear in your search-specific directories.
|args.txt||The arguments that are passed to the search process.|
|alive.token||The status of the search process. Specifies if the search is alive or not alive.|
|audited||A flag to indicate the events have been audit signed.|
|buckets||A subdirectory that contains the field picker statistics for each bucket. The buckets are typically the chunks that are visible in the search histogram UI. They are not related to index buckets.|
|custom_prop.csv||Contains custom job properties, which are arbitrary key-value pairs that can be added to a search job and retrieved later. These are mostly used for UI display purposes.|
|events||A subdirectory that contains the events that were used to generate the search results.|
|generate_preview||A flag to indicate that this search has requested preview. This file is used mainly for Splunk Web searches.|
|info.csv||A list of search details that includes the earliest and latest time, and the results count.|
|metadata.csv||The search owner and roles.|
|peers.csv||The list of peers asked to run the search.|
|pipeline_sets||The number of pipeline sets an indexer runs. The default is 1.|
|remote_events or events_num_num.csv.gz||Used for the remote-timeline optimization so that a reporting search that is run with |
|request.csv||A list of search parameters from the request, including the fields and the text of the search.|
|results.srs.gz||An archive file that contains the search results in a binary serialization format.|
|rtwindow.csv.gz||Events for the latest real-time window when there are more events in the window than can fit in memory. The default limit is 50K.|
|runtime.csv||Pause and cancel settings.|
|search.log||The log from the search process.|
|sort…||A sort temporary file, used by the |
|srtmpfile…||A generic search tempfile, used by facilities which did not give a name for their temporary files.|
|status.csv||The current status of the search, for example if the search is still running. Search status can be any of the following: QUEUED, PARSING, RUNNING, PAUSED, FINALIZING, FAILED, DONE.|
|timeline.csv||Event count for each timeline bucket.|
The dispatch directory also contains ad hoc data model acceleration summaries. These are different from persistent data model acceleration summaries, which are stored at the index level.
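Because the set of files depends on the type of search, it can be useful to check which of the files listed above a given artifact actually contains. A minimal sketch; the artifact path is illustrative:

```shell
# Report which of the documented files and subdirectories exist in one
# search-specific directory. Adjust SPLUNK_HOME and the SID as needed.
ARTIFACT="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch/1346978195.13"
for f in args.txt audited buckets custom_prop.csv events generate_preview \
         info.csv metadata.csv peers.csv request.csv results.srs.gz \
         runtime.csv search.log status.csv timeline.csv; do
  if [ -e "$ARTIFACT/$f" ]; then
    echo "present: $f"
  fi
done
```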
Working with the results.srs.gz file
The results.srs.gz file is an archive file that contains the search results in a binary serialization format.
To view search results in the results.srs.gz file, you must convert the file into CSV format. See the toCsv CLI utility in the Troubleshooting Manual for the command-line steps to convert the file.
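Before converting, you can locate the results files across artifacts from the command line. A minimal sketch, assuming the default dispatch path (the toCsv utility itself is documented in the Troubleshooting Manual):

```shell
# List every results.srs.gz under the dispatch directory; each one belongs
# to a single search-specific directory. Adjust SPLUNK_HOME for your
# installation.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"
if [ -d "$DISPATCH" ]; then
  find "$DISPATCH" -maxdepth 2 -name 'results.srs.gz' | sort
fi
```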
Dispatch directory naming conventions
The names of the search-specific directories in the dispatch directory are based on the type of search. For saved and scheduled searches, the name of a search-specific directory is determined by the following conditions.
- If the name of the search is less than 20 characters and contains only ASCII alphanumeric characters, the search-specific directory name includes the search name.
- If the name of the search is 20 characters or longer, or contains non-alphanumeric characters, a hash is used for the name. This ensures that a search-specific directory named by the search ID can be created on the file system.
A search that contains multiple subsearches might exceed the maximum length for the dispatch directory name. When the maximum length is exceeded, the search fails.
|Type of search||Naming convention||Examples|
|Local ad hoc search||The UNIX time of the search.||An ad hoc search. A real-time ad hoc search. An ad hoc search that uses a subsearch, which creates two dispatch directories.|
|Saved search||The user requesting the search, the user context the search is run as, the app the search came from, the search string, and the UNIX time.||"count" – run by admin, in user context admin, saved in app search. "Errors in the last 24 hours" – run by somebody, in user context somebody, saved in app search.|
|Scheduled search||The user requesting the search, the user context the search is run as, the app the search came from, the search string, the UNIX time, and an internal ID added at the end to avoid name collisions.||"foo" – run by the scheduler, with no user context, saved in app unix. "foo2" – remote peer search on search head sh01, with admin user context, run by the scheduler, saved in app search.|
|Remote search||Searches that are from remote peers start with the word "remote".||"foo2" – remote peer search on search head sh01.|
|Real-time search||Searches that run in real time start with the letters "rt".||An ad hoc real-time search.|
|Replicated search||Search artifacts with replicated search results start with "rsa_". These SIDs occur in search head clusters when a completed search is replicated to a search head other than the one that originally ran the search.||
|Replicated scheduled search||The "rsa_scheduler_" prefix is used for replicated results of searches dispatched by the scheduler.||The search "foo" – dispatched by the scheduler, with no user context, saved in app unix.|
|Report acceleration search||The probe searches created by report acceleration to retrieve the acceleration percentage from all peers.||
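The conventions above can be summarized in a small helper that classifies a search ID (SID) by its prefix. This is an illustrative sketch based on the table, not an official parser; the sample SIDs come from the dispatch listing earlier in this topic:

```shell
# Classify a dispatch directory name by its prefix, following the naming
# conventions in the table above. More specific prefixes are matched first.
classify_sid() {
  case "$1" in
    rsa_scheduler_*) echo "replicated scheduled search" ;;
    rsa_*)           echo "replicated search" ;;
    rt_scheduler_*)  echo "real-time scheduled search" ;;
    rt_*)            echo "real-time ad hoc search" ;;
    remote_*)        echo "remote search" ;;
    scheduler_*)     echo "scheduled search" ;;
    subsearch_*)     echo "subsearch" ;;
    [0-9]*)          echo "local ad hoc search" ;;
    *)               echo "saved search or other" ;;
  esac
}

classify_sid "1346978195.13"    # local ad hoc search
classify_sid "rt_1347456938.31" # real-time ad hoc search
```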
Dispatch directory maintenance
The dispatch directory reaper iterates over all of the artifacts every 30 seconds. The reaper deletes artifacts that have expired, based on the last time that the artifacts were accessed and their configured time to live (TTL), or lifetime.
If no search artifacts are eligible to be reaped and the dispatch volume is full, artifacts are not prematurely reaped to recover space. When the dispatch volume is full, new searches cannot be dispatched. You must manually reap the dispatch directory to make space for new search artifacts.
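To see which artifacts are likely candidates for reaping, you can list search-specific directories by age yourself. A minimal sketch, assuming the default dispatch path and a 60-minute cutoff:

```shell
# List search-specific directories not modified in the last 60 minutes.
# The reaper uses last access time and the configured TTL; modification
# time is a rough stand-in for inspection purposes.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"
if [ -d "$DISPATCH" ]; then
  find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d -mmin +60
fi
```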
See Extending job lifetimes for information about changing the default lifetime for the search artifact using Splunk Web.
Search artifact lifetime in the dispatch directory
Default lifetime values depend on the type of search. The lifetime countdown begins when the search completes. The following table lists the default lifetime values by type of search.
|Search type||Default lifetime|
|Manually run saved search or ad hoc search||10 minutes|
|Remote search from a peer||10 minutes|
For scheduled searches, the lifetime varies by the selected alert action, if any. If an alert has multiple actions, the action with the longest lifetime determines the lifetime of the search artifact. Without an action, the value is determined by the dispatch.ttl setting in savedsearches.conf.
Alert actions determine the default lifetime of a scheduled search.
|Show source scheduled search||30 seconds|
Subsearches generate two search-specific directories. There is a search-specific directory for the subsearch and a search-specific directory for the search that uses the subsearch. These directories have different lifetime values.
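The lifetime check can be sketched for a single artifact: compare the elapsed time since the directory was last touched with its TTL. This is an illustration, not the reaper's actual implementation; GNU stat syntax is assumed (use stat -f %m on macOS):

```shell
# Return success (0) if the artifact directory has outlived the given TTL.
# Uses the directory's modification time as an approximation of when the
# search completed.
is_expired() {  # usage: is_expired <artifact_dir> <ttl_seconds>
  mtime=$(stat -c %Y "$1")
  now=$(date +%s)
  [ $((now - mtime)) -gt "$2" ]
}
```

For example, `is_expired /opt/splunk/var/run/splunk/dispatch/1346978195.13 600 && echo expired` would flag an ad hoc artifact that is past its 10-minute default lifetime.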
Change search artifact lifetime
There are a number of ways to change the search artifact lifetime. Modifying the default search behavior affects searches with no other lifetime value or TTL applied.
|Search behavior type||Process|
|Global search behavior||
|Search specific behavior||Set an individual value for a search when you save the search in Splunk Web. This overrides the default search behavior.|
|Searches with alert actions||This overrides any shorter lifetime applied to a search.|
Clean up the dispatch directory based on the age of directories
As more and more artifacts are added to the dispatch directory, the volume of artifacts can have an adverse effect on search performance, or a warning can appear in the UI. The warning threshold is based on the dispatch_dir_warning_size attribute in the limits.conf file. The default value for the dispatch_dir_warning_size attribute is 5000.
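You can check how close a dispatch directory is to the warning threshold with a quick count. A minimal sketch, assuming the default path and the default threshold of 5000:

```shell
# Compare the number of search-specific directories against the
# dispatch_dir_warning_size threshold.
DISPATCH="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"
THRESHOLD=5000
count=$(find "$DISPATCH" -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
if [ "$count" -ge "$THRESHOLD" ]; then
  echo "warning: $count artifacts in dispatch (threshold $THRESHOLD)"
else
  echo "ok: $count artifacts"
fi
```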
You can move search-specific directories from the dispatch directory to another destination directory by using the clean-dispatch command. You must specify a time that is later than the last modification time for the search-specific directories. The destination directory must be on the same file system as the dispatch directory.
The clean-dispatch command is not suitable for production environments and should be used only in certain scenarios as directed by Splunk Support.
Run the command $SPLUNK_HOME/bin/splunk clean-dispatch help to learn how to use the clean-dispatch command.
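A sketch of what an invocation might look like when Splunk Support directs you to use the command. The destination path and the modification-time cutoff (-7d@d, that is, artifacts last modified more than seven days ago) are illustrative assumptions; confirm the exact arguments with `splunk clean-dispatch help`:

```shell
# Move search-specific directories last modified before the cutoff out of
# the dispatch directory. The destination must be on the same file system
# as the dispatch directory. Guarded so the sketch is a no-op when no
# Splunk installation is present.
SPLUNK_BIN="${SPLUNK_HOME:-/opt/splunk}/bin/splunk"
DEST=/tmp/old-dispatch-jobs
mkdir -p "$DEST"
if [ -x "$SPLUNK_BIN" ]; then
  "$SPLUNK_BIN" clean-dispatch "$DEST" -7d@d
fi
```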
See Too many search jobs in the Troubleshooting Manual for more information about cleaning up the dispatch directory.
This documentation applies to the following versions of Splunk® Enterprise: 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.0.10, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.1.14, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4, 9.0.5, 9.0.6, 9.1.0, 9.1.1