Dispatch directory and search artifacts
Each search or alert that you run creates a search artifact that is saved to disk. The artifacts are stored in directories under the dispatch directory. For each search job, there is one search-specific directory. When the job expires, the search-specific directory is deleted.
See About jobs and job management for information about search jobs. See Extending job lifetimes for information about changing the lifetimes of search jobs.
Dispatch directory location
The dispatch directory stores artifacts on the nodes where searches are run. The nodes include search heads, search peers, and standalone Splunk Enterprise instances. The path to the dispatch directory is $SPLUNK_HOME/var/run/splunk/dispatch.
Dispatch directory contents
In the dispatch directory, a search-specific directory is created for each search or alert. Each search-specific directory contains several files, including a .srs file with the search results, a search.log file with details about the search execution, and more.
View dispatch directory contents
You can list the search-specific directories from a command-line window or from a file browser such as Windows Explorer or Finder.
For example, to view a list in a command-line window, change to the dispatch directory and list its contents. The following list contains ad hoc, real-time, and scheduled search-specific directories.
  # cd $SPLUNK_HOME/var/run/splunk/dispatch
  # ls
  1346978195.13
  rt_scheduler__admin__search__RMD51cfb077d0798f99a_at_1469464020_37.0
  1469483311.272
  1469483309.269
  1469483310.27
  scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1469503020_53
  admin__admin__search__count_1347454406.2
  rt_1347456938.31
  1347457148.46
  subsearch_1347457148.46_1347457148.1
The contents of a search-specific directory include files and subdirectories. The following example shows the contents of a search-specific directory named 1346978195.13.
  # ls 1346978195.13/
  args.txt  audited  buckets/  events/  generate_preview  info.csv  metadata.csv
  peers.csv  request.csv  results.srs.zst  runtime.csv  search.log  status.csv  timeline.csv
Windows users should use dir instead of ls in a command-line window, or use Windows Explorer, to see the contents of the dispatch directory.
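For example, the following is a minimal sketch of the equivalent steps at the Windows command prompt, assuming that the SPLUNK_HOME environment variable is set on the instance:

  cd /d "%SPLUNK_HOME%\var\run\splunk\dispatch"
  dir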
File descriptions for search-specific directories
The files or subdirectories that appear in a search-specific directory depend on the type of search that you run. The following table lists the files and subdirectories that might appear in your search-specific directories.
File name | Contents |
---|---|
args.txt | The arguments that are passed to the search process. |
alive.token | The status of the search process. Specifies whether the search process is alive. |
audited | A flag to indicate the events have been audit signed. |
buckets | A subdirectory that contains the field picker statistics for each bucket. These buckets are typically the chunks that are visible in the search histogram UI. They are not related to index buckets. |
custom_prop.csv | Contains custom job properties, which are arbitrary key-value pairs that can be added to a search job and retrieved later. These properties are mostly used for UI display purposes. |
events | A subdirectory that contains the events that were used to generate the search results. |
generate_preview | A flag to indicate that this search has requested preview. This file is used mainly for Splunk Web searches. |
info.csv | A list of search details that includes the earliest and latest time, and the results count. |
metadata.csv | The search owner and roles. |
peers.csv | The list of peers asked to run the search. |
pipeline_sets | The number of pipeline sets an indexer runs. The default is 1. |
remote_events or events_num_num.csv.gz | Used for the remote-timeline optimization so that a reporting search that is run with status_buckets>0 can be MapReduced. |
request.csv | A list of search parameters from the request, including the fields and the text of the search. |
results.srs.zst | An archive file that contains the search results in a binary serialization format. |
rtwindow.csv.gz | Events for the latest real-time window when there are more events in the window than can fit in memory. The default limit is 50K events. |
runtime.csv | Pause and cancel settings. |
search.log | The log from the search process. |
sort… | A sort temporary file, used by the sort command for large searches. |
srtmpfile… | A generic search tempfile, used by facilities that do not provide a name for their temporary files. |
status.csv | The current status of the search, for example if the search is still running. Search status can be any of the following: QUEUED, PARSING, RUNNING, PAUSED, FINALIZING, FAILED, DONE. |
timeline.csv | Event count for each timeline bucket. |
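To spot-check a few of the files described in the table, you can read them directly from a search-specific directory. The following is a minimal sketch that assumes the example directory 1346978195.13 shown earlier exists on your instance:

  cd $SPLUNK_HOME/var/run/splunk/dispatch/1346978195.13
  # Current state of the search, such as RUNNING or DONE
  cat status.csv
  # Search details, including the earliest and latest time and the results count
  cat info.csv
  # Tail the search process log when you troubleshoot a search
  tail search.log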
The dispatch directory also contains ad hoc data model acceleration summaries. These are different from persistent data model acceleration summaries, which are stored at the index level.
Working with the results.srs.zst file
The results.srs.zst file is an archive file that is created when you run a search and contains the search results in a binary serialization format.
To view the search results in the results.srs.zst file, you must convert the file to CSV format. See the toCsv CLI utility in the Troubleshooting Manual for the command-line steps to convert the file.
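For example, a conversion might look like the following minimal sketch. The invocation assumes that toCsv is run through the splunk cmd wrapper and that the output is redirected to a file; confirm the exact syntax in the toCsv topic in the Troubleshooting Manual before you run it.

  cd $SPLUNK_HOME/var/run/splunk/dispatch/1346978195.13
  # Convert the binary results archive to CSV (invocation shown is an assumption;
  # see the toCsv topic in the Troubleshooting Manual for the exact syntax)
  $SPLUNK_HOME/bin/splunk cmd toCsv results.srs.zst > results.csv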
Dispatch directory naming conventions
The names of the search-specific directories in the dispatch directory are based on the type of search. For saved and scheduled searches, the name of a search-specific directory is determined by the following conditions.
- If the name of the search is less than 20 characters long and contains only ASCII alphanumeric characters, the search-specific directory name includes the search name.
- If the name of the search is 20 characters or longer, or contains non-alphanumeric characters, a hash is used in place of the name. This ensures that a search-specific directory named by the search ID can be created on the file system.
A search that contains multiple subsearches might exceed the maximum length for the dispatch directory name. When the maximum length is exceeded, the search fails.
Type of search | Naming convention | Examples |
---|---|---|
Local ad hoc search | The UNIX time of the search. | An ad hoc search: 1347457078.35. A real-time ad hoc search: rt_1347456938.31. An ad hoc search that uses a subsearch, which creates two dispatch directories: 1347457148.46 and subsearch_1347457148.46_1347457148.1 |
Saved search | The user requesting the search, the user context the search is run as, the app the search came from, the search string, and the UNIX time. | "count", run by admin, in user context admin, saved in app search: admin__admin__search__count_1347454406.2. "Errors in the last 24 hours", run by somebody, in user context somebody, saved in app search: somebody__somebody__search_RMD5473cbac83d6c9db7_1347455134.20 |
Scheduled search | The user requesting the search, the user context the search is run as, the app the search came from, the search string, the UNIX time, and an internal ID added at the end to avoid name collisions. | "foo", run by the scheduler, with no user context, saved in app unix: scheduler__nobody__unix__foo_at_1347457380_051d958b8354c580. "foo2", a remote peer search on search head sh01, with admin user context, run by the scheduler, saved in app search: remote_sh01_scheduler__admin__search__foo2_at_1347457920_79152a9a8bf33e5e |
Remote search | Searches that are from remote peers start with the word "remote". | "foo2", a remote peer search on search head sh01: remote_sh01_scheduler__admin__search__foo2_at_1347457920_79152a9a8bf33e5e |
Real-time search | Searches that run in real time start with the letters "rt". | An ad hoc real-time search: rt_1347456938.31 |
Replicated search | Replicated search results (artifacts) start with "rsa_". These SIDs occur in search head clusters when a completed search is replicated to a search head other than the one that originally ran the search. | |
Replicated scheduled search | The "rsa_scheduler_" prefix is for replicated results of searches dispatched by the scheduler. | The search "foo", run by the scheduler, with no user context, saved in app unix: rsa_scheduler_nobody_unix_foo_at_1502989576_051d958b8354c580 |
Report acceleration search | These are the probe searches created by data model acceleration to retrieve the acceleration percentage from all peers. | SummaryDirector_1503528948.24878_D12411CE-A361-4F75-B90A-28AFDA88151B |
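You can use the naming prefixes from the table to group the artifacts that are currently on disk. The following is a minimal sketch; prefixes that have no matching artifacts on your instance return no output:

  cd $SPLUNK_HOME/var/run/splunk/dispatch
  # Real-time search artifacts
  ls -d rt_* 2>/dev/null
  # Scheduled search artifacts
  ls -d scheduler__* 2>/dev/null
  # Replicated search artifacts in a search head cluster
  ls -d rsa_* 2>/dev/null
  # Artifacts received from remote peers
  ls -d remote_* 2>/dev/null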
Dispatch directory maintenance
The dispatch directory reaper iterates over all of the artifacts every 30 seconds. The reaper deletes artifacts that have expired, based on the last time that the artifacts were accessed and their configured time to live (TTL), or lifetime.
If no search artifacts are eligible to be reaped and the dispatch volume is full, artifacts are not prematurely reaped to recover space. When the dispatch volume is full, new searches cannot be dispatched. You must manually reap the dispatch directory to make space for new search artifacts.
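To see how much space the search artifacts are using, and how full the volume that holds the dispatch directory is, you can check the directory with standard operating system tools. The following is a minimal sketch for a Unix-like system:

  # Total size of all search artifacts
  du -sh $SPLUNK_HOME/var/run/splunk/dispatch
  # Free space on the volume that holds the dispatch directory
  df -h $SPLUNK_HOME/var/run/splunk/dispatch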
See Extending job lifetimes for information about changing the default lifetime for the search artifact using Splunk Web.
Search artifact lifetime in the dispatch directory
Default lifetime values depend on the type of search. The lifetime countdown begins when the search completes. The following table lists the default lifetime values by type of search.
Search type | Default lifetime |
---|---|
Manually run saved search or ad hoc search | 10 minutes |
Remote search from a peer | 10 minutes |
Scheduled search | The lifetime varies by the selected alert action, if any, because alert actions determine the default lifetime of a scheduled search. If an alert has multiple actions, the action with the longest lifetime becomes the lifetime for the search artifact. Without an action, the value is determined by the dispatch.ttl setting for the saved search. |
Show source scheduled search | 30 seconds |
Subsearch | 5 minutes. Subsearches generate two search-specific directories: one for the subsearch and one for the search that uses the subsearch. These directories have different lifetime values. |
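To check or extend the remaining lifetime of a specific job, one option is the search job REST endpoints. The following is a minimal sketch that assumes the default management port 8089, placeholder credentials, and the example search ID used earlier in this topic; substitute values for your environment:

  # Inspect a job, including its remaining ttl value
  curl -k -u admin:changeme https://localhost:8089/services/search/jobs/1346978195.13
  # Extend the artifact lifetime to 600 seconds with the setttl control action
  curl -k -u admin:changeme https://localhost:8089/services/search/jobs/1346978195.13/control -d action=setttl -d ttl=600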
Change search artifact lifetime
There are a number of ways to change the search artifact lifetime. Modifying the default search behavior affects searches with no other lifetime value or TTL applied.
See How to edit a configuration file for Splunk Enterprise.
Search behavior type | Process |
---|---|
Global search behavior | In the limits.conf file, change the ttl setting in the [search] stanza. |
Search specific behavior | In the savedsearches.conf file, change the dispatch.ttl setting for the search. Or set an individual value for a search when you save the search in Splunk Web. This overrides the default search behavior. |
Searches with alert actions | In the alert_actions.conf file, change the ttl setting for the alert action. This overrides any shorter lifetime applied to a search. |
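After you edit any of these files, you can confirm which values are in effect with the btool utility. The following is a minimal sketch; the grep patterns are examples only:

  # Effective ttl setting in the [search] stanza of limits.conf
  $SPLUNK_HOME/bin/splunk btool limits list search | grep -w ttl
  # Effective dispatch.ttl values from savedsearches.conf
  $SPLUNK_HOME/bin/splunk btool savedsearches list | grep dispatch.ttl
  # Effective ttl values for alert actions
  $SPLUNK_HOME/bin/splunk btool alert_actions list | grep -w ttl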
Clean up the dispatch directory based on the age of directories
As more artifacts accumulate in the dispatch directory, the volume of artifacts can have an adverse effect on search performance, or a warning might appear in the UI. The warning threshold is based on the dispatch_dir_warning_size attribute in the limits.conf file. The default value for the dispatch_dir_warning_size attribute is 5000.
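To see how close you are to that threshold, you can count the search-specific directories. The following is a minimal sketch:

  # Number of search-specific directories in the dispatch directory
  ls -1d $SPLUNK_HOME/var/run/splunk/dispatch/*/ | wc -l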
You can move search-specific directories from the dispatch directory to another directory, called the destination directory, by using the clean-dispatch command. You must specify a time that is later than the last modification time of the search-specific directories that you want to move. The destination directory must be on the same file system as the dispatch directory.
The clean-dispatch command is not suitable for production environments and should be used only in certain scenarios as directed by Splunk Support.
Run the command $SPLUNK_HOME/bin/splunk clean-dispatch help to learn how to use the clean-dispatch command.
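For example, a run might look like the following minimal sketch, which moves artifacts that were last modified more than seven days ago into an old_dispatch directory on the same file system. The destination path and the relative time value are examples only; confirm the exact syntax with the help command before you run clean-dispatch.

  # Create a destination directory on the same file system as the dispatch directory
  mkdir -p $SPLUNK_HOME/var/run/splunk/old_dispatch
  # Move artifacts last modified before seven days ago (destination and time value are examples)
  $SPLUNK_HOME/bin/splunk clean-dispatch $SPLUNK_HOME/var/run/splunk/old_dispatch/ -7d@d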
See Too many search jobs in the Troubleshooting Manual for more information about cleaning up the dispatch directory.