Splunk Cloud Platform

Search Manual

Manage Splunk Enterprise jobs from the OS

If you have Splunk Enterprise on Microsoft Windows or *nix, you can manage search jobs from the operating system as described in this topic. For information on how to manage search jobs in Splunk Web, see Manage search jobs in this manual.

Manage jobs in *nix

When a search job runs, it appears in the OS as a process called splunkd search. You can manage the job's underlying processes at the OS command line.

To see the job's processes and their arguments, run top and then press c to toggle the display of each process's full command line:

> top
> c

This shows all running processes along with their arguments.

To isolate the Splunk search processes from this list, run ps -ef | grep 'search'. The output looks like this:

[pie@fflanda ~]$ ps -ef | grep 'search'
530369338 71126 59262   0 11:19AM ??         0:01.65 [splunkd pid=59261] search --id=rt_1344449989.64 --maxbuckets=300 --ttl=600 --maxout=10000 --maxtime=0 --lookups=1 --reduce_freq=10 --rf=* --user=admin --pro --roles=admin:power:user
530369338 71127 71126   0 11:19AM ??         0:00.00 [splunkd pid=59261] search --id=rt_1344449989.64 --maxbuckets=300 --ttl=600 --maxout=10000 --maxtime=0 --lookups=1 --reduce_freq=10 --rf=* --user=admin --pro --roles=admin:power:user

There are two processes for each search job; the second is a "helper" process that the splunkd process uses to do further work as needed. The main process is the one using system resources. The helper process dies on its own if you kill the main process.
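
For example, to stop the job shown in the sample output above, send a signal to the main process (the first line, the one consuming CPU time). The PID below is taken from that sample; substitute the PID of your own search process:

> kill 71126

By default, kill sends SIGTERM. The helper process exits on its own once the main process is gone.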

The process info includes the following (see the filtering example after this list):

  • the search string (search=)
  • the job ID for that job (id=)
  • the ttl, or length of time that job's artifacts (the output it produces) will remain on disk and available (ttl=)
  • the user who is running the job (user=)
  • what role(s) that user belongs to (roles=)
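
Because these fields appear directly on each process's command line, you can filter the ps output on them. This is a minimal sketch using the user and job ID from the sample output above; substitute your own values:

> ps -ef | grep 'search' | grep 'user=admin'
> ps -ef | grep 'id=rt_1344449989.64'

The first command lists only the search processes started by the admin user; the second isolates the processes for a single job.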

When a job is running, its data is written to $SPLUNK_HOME/var/run/splunk/dispatch/<job_id>/. Scheduled jobs (scheduled saved searches) include the saved search name as part of the directory name.

The value of ttl for a process determines how long this data remains on disk, even after you kill the job. If you also want to remove a job's artifacts, note its job ID before you kill the job from the OS.
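
For example, the following commands list and then remove the artifacts for the job from the sample output above (substitute your own job ID). Deleting the directory removes the artifacts immediately instead of waiting for the ttl to expire:

> ls $SPLUNK_HOME/var/run/splunk/dispatch/rt_1344449989.64/
> rm -r $SPLUNK_HOME/var/run/splunk/dispatch/rt_1344449989.64/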

Manage jobs in Windows

On Windows, each search likewise runs as a separate process. Windows does not have a command-line equivalent of the *nix top command, but there are several ways to view the command-line arguments of running search jobs.
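
For example, one option is to query the process list from PowerShell, which reports each process's full command line much like ps does on *nix. This is a minimal sketch that assumes the search processes run under the splunkd.exe binary; adjust the filter if yours are named differently:

PS> Get-CimInstance Win32_Process -Filter "Name='splunkd.exe'" | Select-Object ProcessId, CommandLine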

When a search runs, the data for that search is written to the %SPLUNK_HOME%\var\run\splunk\dispatch\<epoch_time_at_start_of_search>.<number_separator> directory. Saved searches are written to similar directories whose names follow the convention "admin__admin__search_" with a randomly generated hash of numbers in addition to the UNIX time.
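
For example, assuming the SPLUNK_HOME environment variable is set, you can list the dispatch directories from a Command Prompt to find a particular job's artifact directory:

> dir /b "%SPLUNK_HOME%\var\run\splunk\dispatch"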

Use the filesystem to manage jobs

You can manage a job by creating and deleting files in the job's artifact directory, as shown in the example after this list:

  • To cancel a job, go into that job's artifact directory and create a file called 'cancel'.
  • To preserve that job's artifacts (and ignore its ttl setting), create a file called 'save'.
  • To pause a job, create a file called 'pause', and to unpause it, delete the 'pause' file.
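
For example, on *nix the commands look like the following, using the hypothetical job ID rt_1344449989.64 from the sample output earlier; create only the file that matches the action you want. On Windows, create or delete the same files in the corresponding dispatch directory.

> cd $SPLUNK_HOME/var/run/splunk/dispatch/rt_1344449989.64/
> touch cancel     # cancel the job
> touch save       # keep the job's artifacts past the ttl
> touch pause      # pause the job
> rm pause         # resume a paused job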


