This documentation does not apply to the most recent version of Splunk.
The following are the spec and example files for savedsearches.conf.
# Copyright (C) 2005-2010 Splunk Inc. All Rights Reserved.  Version 4.1.5
#
# This file contains possible attribute/value pairs for saved search entries in savedsearches.conf.
# You can configure saved searches by creating your own savedsearches.conf.
#
# There is a default savedsearches.conf in $SPLUNK_HOME/etc/system/default. To set custom
# configurations, place a savedsearches.conf in $SPLUNK_HOME/etc/system/local/.
# For examples, see savedsearches.conf.example. You must restart Splunk to enable configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://www.splunk.com/base/Documentation/latest/Admin/Aboutconfigurationfiles

#*******
# The possible attribute/value pairs for savedsearches.conf are:
#*******

[<stanza name>]
* Name your saved search.
* Create a unique stanza name for each saved search.
* Follow the stanza name with any number of the following attribute/value pairs.
* If you do not specify an attribute, Splunk uses the default.

disabled = 0 | 1
* Disable your search by setting to 1.
* Search is not visible in Splunk Web if set to 1.
* Defaults to 0.

search = <string>
* Actual search terms of the saved search.
* For example, search = index::sampledata http NOT 500.
* Your search can include macro searches for substitution.
* To create a macro search, read the documentation:
  http://www.splunk.com/base/Documentation/latest/Knowledge/Designmacrosearches

#*******
# Scheduling options
#*******

enableSched = 0 | 1
* Set this to 1 to run your search on a schedule.
* Defaults to 0.

schedule = <cron-style string>
* This field is DEPRECATED as of the 4.0 release; use cron_schedule instead.
* Cron-like schedule.
* For example, */12 * * * *
* Note: Splunk's current cron implementation differs from standard POSIX cron.
  Use */n as "divide by n" (instead of standard POSIX cron's "every n").

cron_schedule = <cron string>
* The cron schedule used to execute this search.
* For example: */5 * * * * causes the search to execute every 5 minutes.
* Cron lets you use standard cron notation to define your scheduled search interval. In
  particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search
  every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the
  search every hour at hh:03, hh:23, hh:43.
* Splunk recommends that you schedule your searches so that they are staggered over time. This
  reduces system load. Running all of them every 20 minutes (*/20) means they would all launch
  at hh:00, hh:20, hh:40 and might slow your system every 20 minutes.

max_concurrent = <int>
* The maximum number of concurrent instances of this search the scheduler is allowed to run.
* Defaults to 1.

realtime_schedule = 0 | 1
* Controls the way the scheduler computes the next execution time of a scheduled search.
* If this value is set to 1, the scheduler computes the next time based on the current time.
* If this value is set to 0, the next time is computed based on the last execution time
  (continuous scheduling).
* If set to 1, some execution periods might be skipped to make sure that the scheduler is
  executing the searches running over the most recent time range.
* If set to 0, scheduled execution periods are guaranteed never to be skipped; however, the
  execution of the saved search might fall behind depending on the scheduler's load. Use
  continuous scheduling whenever you enable the summary index option.
* The scheduler tries to execute searches that have realtime_schedule set to 1 before executing
  searches that have continuous scheduling.
* Defaults to 1.

#*******
# Notification options
#*******

counttype = number of events | number of hosts | number of sources | always
* Set the type of count for alerting.
* Used with relation and quantity (below).
* Note: If you specify "always", do not set relation or quantity (below).
relation = greater than | less than | equal to | drops by | rises by
* How to compare against counttype.

quantity = <integer>
* Specify a value for relation and counttype.
* For example, "number of events [is] greater than 10" sends an alert when the count of events
  is greater than 10.
* For example, "number of events drops by 10%" sends an alert when the count of events drops
  by 10%.

alert_condition = <string>
* A search that is evaluated on the artifacts of the saved search to determine whether to
  trigger the alerts.
* Alerts are triggered if the specified search yields a non-empty search result list.

#*******
# Generic action settings
# For a comprehensive list of actions and their arguments,
# please refer to alert_actions.conf.
#*******

action.<action_name> = 0 | 1
* Whether the action is enabled or disabled.

action.<action_name>.<parameter> = <value>
* Overrides an action's parameter defined in alert_actions.conf.

#******
# Email action settings
#******

action.email.to = <email list>
* A comma-delimited list of recipient email addresses.

action.email.from = <email address>
* The email address used as the sender's address.

action.email.subject = <string>
* The subject of the email delivered to recipients.

action.email.mailserver = <string>
* The address of the MTA server used to send emails.

#******
# Script action settings
#******

action.script = 0 | 1
* Toggle whether or not the script action is enabled.
* 1 to enable, 0 to disable.
* Defaults to 0.

action.script.filename = <script filename>
* The filename of the shell script to execute.
* The script should live in $SPLUNK_HOME/bin/scripts/.

#*******
# Summary index settings
#*******

action.summary_index = 0 | 1
* Toggle whether or not the summary index action is enabled.
* 1 to enable, 0 to disable.
* Defaults to 0.

action.summary_index._name = <index>
* Specifies the summary index where the results of the scheduled search are saved.
* Defaults to summary.

action.summary_index.inline = <bool>
* Whether to execute the summary indexing action as part of the scheduled search.
* NOTE: This option is considered if and only if the summary index action is enabled
  and is always executed.
* Defaults to true.

action.summary_index.<KEY> = <string>
* Optional <KEY> = <string> to add to each event when saving it in the summary index.

#*******
# Lookup table population settings
#*******

action.populate_lookup = 0 | 1
* Toggle whether or not the lookup population action is enabled.

action.populate_lookup.dest = <string>
* Path to the lookup .csv file to copy the search results to.
* NOTE: This path must point to a .csv file in either of the following directories:
  * $SPLUNK_HOME/etc/system/lookups/
  * $SPLUNK_HOME/etc/apps/<app-name>/lookups
* NOTE: The destination directories of the above files must already exist.

run_on_startup = true | false
* Toggle whether this search runs when Splunk starts.
* If it does not run on startup, it runs at the next scheduled time.
* It is recommended that you set this to true for scheduled searches that populate lookup
  tables.

#*******
# Dispatch search options
#*******
* Read/write/connect timeout (in seconds) for the HTTP connection (to splunkd).
* Used to execute the scheduled search and any of its actions/alerts.

dispatch.ttl = <integer>[p]
* Time to live (in seconds) for the artifacts of the scheduled search, if no actions are
  triggered.
* If an action is triggered, the ttl is changed to that action's ttl. If multiple actions are
  triggered, the maximum ttl is applied to the artifacts. For setting an action's ttl, refer
  to alert_actions.conf.spec.
* If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the
  scheduled search's period.
* Defaults to 2p.

dispatch.buckets = <integer>
* The maximum number of timeline buckets.
* Defaults to 0.

dispatch.max_count = <integer>
* The maximum number of results before finalizing the search.
* Defaults to 10000.

dispatch.max_time = <integer>
* The maximum amount of time (in seconds) before finalizing the search.
* Defaults to 0.

dispatch.lookups = true | false
* Toggle whether lookups are enabled for this search.
* Defaults to true.

dispatch.earliest_time = <time-str>
* The earliest time for the search.

dispatch.latest_time = <time-str>
* The latest time for the search.

dispatch.time_format = <time format str>
* The time format used to specify the earliest and latest time.

dispatch.spawn_process = <bool>
* Whether to spawn a new search process when this saved search is executed.
* Defaults to true.

#*******
# UI-specific settings
#*******

displayview
* Defines the default UI view name (not label) in which to load the results.
* Accessibility is subject to the user having sufficient permissions.

vsid
* Defines the viewstate id associated with the UI view listed in 'displayview'.
* Must match up to a stanza in viewstates.conf.

is_visible = <bool>
* Whether this saved search should be listed in the visible saved search list.
* Defaults to true.
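To tie the scheduling, notification, and email action settings above together, here is a minimal sketch of a scheduled alert stanza. The stanza name, search string, and recipient address are hypothetical illustrations, not shipped defaults:

```ini
# Hypothetical example: run every 20 minutes (staggered at hh:07, per the
# scheduling advice above) and email an alert when more than 10 matching
# events are found in the last 20 minutes.
[Excessive 503 errors]
search = sourcetype=access_* 503
enableSched = 1
cron_schedule = 7,27,47 * * * *
dispatch.earliest_time = -20m
counttype = number of events
relation = greater than
quantity = 10
action.email = 1
action.email.to = ops@example.com
action.email.subject = Excessive 503 errors in the last 20 minutes
```

All attribute names come from the spec above; only the values are illustrative.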
# Copyright (C) 2005-2010 Splunk Inc. All Rights Reserved.  Version 4.1.5
#
# This file contains example saved searches and alerts.
#
# To use one or more of these configurations, copy the configuration block into
# savedsearches.conf in $SPLUNK_HOME/etc/system/local/. You must restart Splunk to enable
# configurations.
#
# To learn more about configuration files (including precedence) please see the documentation
# located at http://www.splunk.com/base/Documentation/latest/Admin/Aboutconfigurationfiles

# The following searches are example searches. To create your own search, modify
# the values by following the spec outlined in savedsearches.conf.spec.

[Daily indexing volume by server]
search = index=_internal todaysBytesIndexed LicenseManager-Audit NOT source=*web_service.log NOT source=*web_access.log | eval Daily_Indexing_Volume_in_MBs = todaysBytesIndexed/1024/1024 | timechart avg(Daily_Indexing_Volume_in_MBs) by host
dispatch.earliest_time = -7d

[Errors in the last 24 hours]
search = error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )
dispatch.earliest_time = -1d

[Errors in the last hour]
search = error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )
dispatch.earliest_time = -1h

[KB indexed per hour last 24 hours]
search = index=_internal metrics group=per_index_thruput NOT debug NOT sourcetype=splunk_web_access | timechart fixedrange=t span=1h sum(kb) | rename sum(kb) as totalKB
dispatch.earliest_time = -1d

[Messages by minute last 3 hours]
search = index=_internal eps "group=per_source_thruput" NOT filetracker | eval events=eps*kb/kbps | timechart fixedrange=t span=1m sum(events) by series
dispatch.earliest_time = -3h

[Splunk errors last 24 hours]
search = index=_internal " error " NOT debug source=*/splunkd.log*
dispatch.earliest_time = -24h
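The example stanzas above only set search and dispatch.earliest_time. As a further sketch, a summary-indexing variant using the action.summary_index settings from the spec might look like the following. The stanza name and the "report" key are illustrative only ("report" is just one possible <KEY> under action.summary_index.<KEY>):

```ini
# Hypothetical example: populate a summary index with hourly indexing volume.
# Uses continuous scheduling (realtime_schedule = 0), as the spec recommends
# whenever the summary index option is enabled.
[Summary - hourly KB indexed]
search = index=_internal metrics group=per_index_thruput | stats sum(kb) as totalKB
enableSched = 1
cron_schedule = 13 * * * *
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
realtime_schedule = 0
action.summary_index = 1
action.summary_index._name = summary
action.summary_index.report = hourly_kb_indexed
```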