
Dispatch directory and search artifacts
Each search or alert that you run creates a search artifact that is saved on disk. The dispatch directory stores these artifacts, with one search-specific directory per search job. When the job expires, its search-specific directory is deleted.
See About jobs and job management in the Search Manual for information about search jobs.
Dispatch directory location
The dispatch directory stores artifacts on the nodes where searches run. This includes search heads, search peers, and standalone Splunk Enterprise instances. The dispatch directory is located at $SPLUNK_HOME/var/run/splunk/dispatch.
Dispatch directory contents
Within the dispatch directory, one search-specific directory is created per search or alert. Each search-specific directory contains a CSV file of its search results, a search.log with details about the search execution, and more. Some entries, such as the audited and generate_preview flag files, are 0-byte files.
For example:
```
# cd $SPLUNK_HOME/var/run/splunk/dispatch
# ls
1346978195.13
1347457148.46
1469483309.269
1469483310.27
1469483311.272
admin__admin__search__count_1347454406.2
rt_1347456938.31
rt_scheduler__admin__search__RMD51cfb077d0798f99a_at_1469464020_37.0
scheduler__nobody_c3BsdW5rX2FyY2hpdmVy__RMD5473cbac83d6c9db7_at_1469503020_53
subsearch_1347457148.46_1347457148.1
# ls 1346978195.13/
args.txt    generate_preview  request.csv     status.csv
audited     info.csv          results.csv.gz  timeline.csv
buckets/    metadata.csv      runtime.csv
events/     peers.csv         search.log
```
Note: For Windows users, use dir in the place of ls.
Depending on the type of search you run, your search-specific directory might or might not contain all of the listed files.
File name | Contents |
---|---|
args.txt | The arguments passed to the search process |
alive.token | The alive / not alive status of the search process |
audited | A flag to indicate the events have been audit signed |
buckets | Per-bucket (typically the chunks visible in the search histogram UI) field picker statistics. This is not related to index buckets. |
custom_prop.csv | Contains custom job properties: arbitrary key-value pairs that can be added to a search job and retrieved later, mostly used for UI display purposes |
events | The events used to generate the results |
generate_preview | A flag to indicate this search has requested preview (mainly for Splunk Web searches) |
info.csv | List of search details, including earliest and latest time and results count |
metadata.csv | Owner and roles |
peers.csv | List of peers asked to run the search |
pipeline_sets | The number of pipeline sets an indexer runs. The default is 1. |
remote_events/events_num_num.csv.gz | Used for the remote-timeline optimization, so that a reporting search run with status_buckets>0 can still be properly map-reduced |
request.csv | List of search parameters from the request, including fields and the text of the search |
results.csv.gz | Archive containing the search results |
rtwindow.csv.gz | Events for the latest real-time window when there are more events in the window than can fit in memory (default limit: 50,000 events) |
runtime.csv | Pause/cancel settings |
search.log | Log from the search process |
sort… | A sort temporary file, used by the lsort command for large searches |
srtmpfile… | A generic search tempfile, used by facilities which did not give a name for their temporary files |
status.csv | The current status of the search (such as if it is still running). Search statuses can be any of the following: QUEUED, PARSING, RUNNING, PAUSED, FINALIZING, FAILED, DONE. |
timeline.csv | Event count per timeline bucket |
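The files above can be examined directly from the command line. A minimal sketch, using the job ID from the example listing earlier (which files exist depends on the search type):

```shell
# Inspect one search artifact. The job ID is taken from the example
# listing above; substitute one from your own dispatch directory.
cd "$SPLUNK_HOME/var/run/splunk/dispatch/1346978195.13"
zcat results.csv.gz | head -5   # peek at the first few result rows
tail -20 search.log             # recent lines from the search process log
cat status.csv                  # current state of the search job
```

Because results.csv.gz is a gzip archive, tools such as zcat or gunzip -c are needed to read it; the other CSV files are plain text.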
The dispatch directory also contains ad hoc data model acceleration summaries. These are different from persistent data model acceleration summaries, which are stored at the index level.
Dispatch directory naming conventions
Search-specific directories within the dispatch directory are named according to the type of search that runs. For saved and scheduled searches, the name of the search may or may not appear in the directory name, depending on the following conditions.
- If the name of the search is less than 20 characters and contains only ASCII alphanumeric characters, then the search-specific directory name includes the search name.
- If the name of the search is 20 characters or longer, or contains non-alphanumeric characters, then a hash is used instead. This is to ensure a search-specific directory named by the search ID can be created on the filesystem.
Type of search | Naming convention | Example |
---|---|---|
Local ad hoc search | Epoch time of the search | Ad hoc search: 1347457078.35<br>Ad hoc real-time search: rt_1347456938.31<br>Ad hoc search that uses a subsearch (two dispatch directories): 1347457148.46 and subsearch_1347457148.46_1347457148.1 |
Saved search | The user requesting the search, the user context it is run as, the app it came from, the search string, and the epoch time | "count", run by admin in user context admin, saved in app search: admin__admin__search__count_1347454406.2<br>"Errors in the last 24 hours", run by somebody in user context somebody, saved in app search: somebody__somebody__search_RXJyb3JzIGluIHRoZSBsYXN0IDI0IGhvdXJz_1347455134.20 |
Scheduled search | The same elements as a saved search, plus an internal ID appended at the end to avoid name collisions | "foo", run by the scheduler with no user context, saved in app unix: scheduler__nobody__unix__foo_at_1347457380_051d958b8354c580 |
Remote search | Searches from remote peers start with "remote". | "foo2", a remote peer search on idx1 with admin user context, run by the scheduler, saved in app search: remote_idx1_scheduler__admin__search__foo2_at_1347457920_79152a9a8bf33e5e |
Real-time search | Real-time searches start with "rt". | Ad hoc real-time search: rt_1347456938.31 |
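For search names that are too long or contain non-alphanumeric characters, the opaque token embedded in the directory name can be decoded to recover the original search name. A sketch, assuming the token is standard base64, as the "Errors in the last 24 hours" example above suggests:

```shell
# Decode the name token from the example saved-search directory
# somebody__somebody__search_RXJyb3JzIGluIHRoZSBsYXN0IDI0IGhvdXJz_1347455134.20
echo 'RXJyb3JzIGluIHRoZSBsYXN0IDI0IGhvdXJz' | base64 -d
# prints: Errors in the last 24 hours
```

Tokens prefixed with RMD5, as in some scheduler examples above, are hashes rather than encoded names and cannot be decoded this way.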
Dispatch directory maintenance
The dispatch directory reaper iterates over all artifacts every 30 seconds. It deletes artifacts that have expired based on the last time they were accessed and their configured time to live (TTL), or lifetime.
See Extending job lifetimes in the Search Manual for information about changing a search artifact's default lifetime in Splunk Web.
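The reaper's expiry test described above can be sketched as a simple comparison between elapsed time since last access and the artifact's TTL. The variable values below are illustrative; the real reaper runs inside splunkd:

```shell
# An artifact expires when the time since it was last accessed
# exceeds its configured TTL.
now=$(date +%s)
last_access=$(( now - 700 ))   # pretend the artifact was last read 700s ago
ttl=600                        # a 10-minute lifetime
if [ $(( now - last_access )) -gt "$ttl" ]; then
  echo "expired"
fi
# prints: expired
```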
Search artifact lifetime in the dispatch directory
Default lifetime values depend on the type of search and are counted from when the search completes. See below for examples.
Search type | Default lifetime |
---|---|
Manually run saved search or ad hoc search | 10 minutes |
Remote search from a peer | 10 minutes |
Scheduled search | Varies by the selected alert action, if any. If an alert has multiple actions, the lifetime of the search artifact is that of the longest-lived action. Without an action, the value is determined by dispatch.ttl in savedsearches.conf, which defaults to twice the schedule period. Individual alert actions can also set a minimum lifetime in alert_actions.conf. |
Show source scheduled search | 30 seconds |
Subsearch | 5 minutes |
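For the scheduled-search default (dispatch.ttl set to twice the schedule period, written as 2p), the lifetime works out as a simple multiple. A sketch with an illustrative 5-minute schedule:

```shell
# dispatch.ttl = 2p means the artifact lives for twice the schedule period.
schedule_period=300                  # a search scheduled every 5 minutes
ttl=$(( 2 * schedule_period ))
echo "artifact lifetime: ${ttl}s"    # prints: artifact lifetime: 600s
```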
Change search artifact lifetime
There are a number of ways to change search artifact lifetime. Modifying the default search behavior affects only searches that have no other lifetime value or TTL applied.
Search behavior type | Process |
---|---|
Global search behavior | In limits.conf, set ttl or remote_ttl in the [search] stanza, or ttl in the [subsearch] stanza. |
Search-specific behavior | In savedsearches.conf, set dispatch.ttl for an individual search, or set an individual value for a search when you save it in Splunk Web. This overrides the default search behavior. |
Searches with alert actions | In alert_actions.conf, set a ttl value to specify the minimum lifetime of a search artifact when a given alert action is triggered. This overrides any shorter lifetime applied to the search. |
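Putting the settings above together, the stanzas might look like the following sketch. The stanza and attribute names come from the table; the numeric values and the saved-search name are illustrative:

```ini
# limits.conf -- global defaults
[search]
ttl = 600              # default artifact lifetime, in seconds
remote_ttl = 600       # lifetime for artifacts from remote peers

[subsearch]
ttl = 300              # lifetime for subsearch artifacts

# savedsearches.conf -- per-search override
[My Saved Search]
dispatch.ttl = 2p      # twice the schedule period
```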
Clean up the dispatch directory based on the age of directories
You can move search-specific directories out of the dispatch directory to a destination directory on the same file system, selecting the directories whose last modification time is earlier than a time that you specify. Use the clean-dispatch command to do this.

You typically need the clean-dispatch command when thousands of artifacts have accumulated in the dispatch directory. An accumulation of that size can degrade search performance or trigger a warning in the UI, based on the dispatch_dir_warning_size setting in limits.conf.

Run $SPLUNK_HOME/bin/splunk clean-dispatch help to learn how to use the clean-dispatch command.
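To see whether the dispatch directory is accumulating too many artifacts, you can count the search-specific directories directly. A sketch; compare the result against your dispatch_dir_warning_size setting:

```shell
# Count the search-specific directories in the dispatch directory.
# Assumes SPLUNK_HOME is set in the environment.
DISPATCH_DIR="$SPLUNK_HOME/var/run/splunk/dispatch"
count=$(find "$DISPATCH_DIR" -mindepth 1 -maxdepth 1 -type d | wc -l)
echo "dispatch artifacts: $count"
```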
See Too many search jobs in the Troubleshooting Manual for more information about cleaning up the dispatch directory.
This documentation applies to the following versions of Splunk® Enterprise: 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.0.6, 6.0.7, 6.0.8, 6.0.9, 6.0.10, 6.0.11, 6.0.12, 6.0.13, 6.0.14, 6.0.15, 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14