Search endpoint descriptions
Manage search resources including alerts triggered by searches, Python search command information, saved searches, search results, and scheduled view objects.
Semantic API versioning
Beginning with Splunk Enterprise 9.0.1, some REST API endpoints are available in multiple versions. The v1 instances of some endpoints are deprecated, and v2 instances of these endpoints are available. Plan to migrate to the v2 instances of each of the following endpoints:
- search/v2/jobs/export
- search/v2/jobs/{search_id}/events
- search/v2/jobs/{search_id}/results
- search/v2/jobs/{search_id}/results_preview
- search/v2/parser
You can address all original v1 endpoints either without a version number or with a v1 in the URI, but you can address v2 endpoints only with a v2 in the URI. Refer to the individual v2 endpoints for examples.
Legacy versioning deprecation
Beginning with Splunk Enterprise 9.0.1, the legacy versioning scheme from Splunk Enterprise 6.1 and lower is deprecated and will be removed in future versions of Splunk Enterprise. REST API endpoint behavior will not vary by Splunk Enterprise product version, but rather by API version only.
Do not include a Splunk Enterprise version number in URIs. Plan to migrate to the semantic versioning scheme with only v1 or v2 specified in URIs.
Avoid versioning endpoints like the following example:
https://localhost:8089/v6.1/services/search/jobs/export
Instead, refer to this v1 endpoint without any version or with v1 only, like the following examples:
https://localhost:8089/services/search/jobs/export
https://localhost:8089/services/search/v1/jobs/export
Refer to this v2 endpoint like the following example:
https://localhost:8089/services/search/v2/jobs/export
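For example, a minimal export request against the v2 endpoint might look like the following sketch. The credentials, search string, and output_mode value are placeholders for illustration only.
# Export events from the v2 endpoint (POST, placeholder credentials and search)
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/export \
  -d search="search index=_internal | head 5" \
  -d output_mode=json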
Usage details
Review ACL information for an endpoint
To check Access Control List (ACL) properties for an endpoint, append /acl to the path. For more information, see Access Control List in the REST API User Manual.
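For example, to review the ACL for the email alert action entity shown later in this topic, you might append /acl to its entity path, as in the following sketch. The credentials are placeholders.
# Get ACL properties for a single entity (placeholder credentials)
curl -k -u admin:pass https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/email/acl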
Authentication and authorization
Username and password authentication is required for access to endpoints and REST operations.
Splunk users must have role and/or capability-based authorization to use REST endpoints. Users with an administrative role, such as admin, can access authorization information in Splunk Web. To view the roles assigned to a user, select Settings > Access controls and click Users. To determine the capabilities assigned to a role, select Settings > Access controls and click Roles.
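Each request can supply credentials with HTTP basic authentication, as the examples in this topic do. Alternatively, you can obtain a session key from the auth/login endpoint and pass it in an Authorization header, as in the following sketch. The username, password, and <sessionKey> value are placeholders.
# Request a session key (placeholder credentials)
curl -k https://localhost:8089/services/auth/login -d username=admin -d password=pass
# Use the returned <sessionKey> value on subsequent requests
curl -k -H "Authorization: Splunk <sessionKey>" https://localhost:8089/services/alerts/alert_actions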
App and user context
Typically, knowledge objects, such as saved searches or event types, have an app/user context that is the namespace. For more information about specifying a namespace, see Namespace in the REST API User Manual.
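For example, the following sketch requests the alert actions visible in the admin user and search app namespace. The user, app, and credentials are placeholders.
# Scope the request to the admin user and search app namespace
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/alerts/alert_actions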
Splunk Cloud URL for REST API access
Splunk Cloud has a different host and management port syntax than Splunk Enterprise. Use the following URL for Splunk Cloud deployments. If necessary, submit a support case using the Splunk Support Portal to open port 8089 on your deployment.
https://<deployment-name>.splunkcloud.com:8089
Free trial Splunk Cloud accounts cannot access the REST API.
See Using the REST API in Splunk Cloud in the Splunk REST API Tutorials for more information.
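For example, a request to a Splunk Cloud deployment might look like the following sketch, where <deployment-name> and the credentials are placeholders.
# Query a Splunk Cloud deployment over the management port
curl -k -u admin:pass https://<deployment-name>.splunkcloud.com:8089/services/alerts/fired_alerts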
alerts/alert_actions
https://<host>:<mPort>/services/alerts/alert_actions
Access alert actions.
GET
Access a list of alert actions.
Request parameters
Pagination and filtering parameters can be used with this method.
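For example, the following sketch uses the standard count, offset, and search pagination and filtering parameters to return up to five alert actions that match the filter string. The parameter values are illustrative only.
# Page and filter the alert action list (illustrative values)
curl -k -u admin:pass "https://localhost:8089/servicesNS/admin/-/alerts/alert_actions?count=5&offset=0&search=email"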
Returned values
Varies depending on the type of alert.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/-/alerts/alert_actions
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>alert_actions</title> <id>https://localhost:8089/servicesNS/-/-/alerts/alert_actions</id> <updated>2018-12-10T16:45:47-05:00</updated> <generator build="8c86330ac18" version="7.2.0"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/-/-/alerts/alert_actions/_reload" rel="_reload"/> <link href="/servicesNS/-/-/alerts/alert_actions/_acl" rel="_acl"/> <opensearch:totalResults>9</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>email</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/email</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/email" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/email" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/email/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/email" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/email/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="auth_password"></s:key> <s:key name="auth_username"></s:key> <s:key name="bcc"></s:key> <s:key name="cc"></s:key> <s:key name="cipherSuite">ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256</s:key> <s:key name="command"><![CDATA[$action.email.preprocess_results{default=""}$ | sendemail "results_link=$results.url$" "ssname=$name$" "graceful=$graceful{default=True}$" "trigger_time=$trigger_time$" maxinputs="$action.email.maxresults{default=10000}$" maxtime="$action.email.maxtime{default=5m}$" results_file="$results.file$"]]></s:key> <s:key name="content_type">html</s:key> <s:key name="description">Send an email notification to specified recipients</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="footer.text">If you believe you've received this email in error, please see your Splunk administrator. 
splunk > the engine for machine data</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="format">table</s:key> <s:key name="from">splunk</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">mod_alert_icon_email.png</s:key> <s:key name="include.results_link">1</s:key> <s:key name="include.search">0</s:key> <s:key name="include.trigger">0</s:key> <s:key name="include.trigger_time">0</s:key> <s:key name="include.view_link">1</s:key> <s:key name="inline">0</s:key> <s:key name="label">Send email</s:key> <s:key name="mailserver">localhost</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="message.alert">The alert condition for '$name$' was triggered.</s:key> <s:key name="message.report">The scheduled report '$name$' has run.</s:key> <s:key name="pdf.footer_center">title</s:key> <s:key name="pdf.footer_enabled">1</s:key> <s:key name="pdf.footer_left">logo</s:key> <s:key name="pdf.footer_right">timestamp,pagination</s:key> <s:key name="pdf.header_center">description</s:key> <s:key name="pdf.header_enabled">1</s:key> <s:key name="pdf.html_image_rendering">1</s:key> <s:key name="pdfview"></s:key> <s:key name="preprocess_results"></s:key> <s:key name="priority">3</s:key> <s:key name="reportCIDFontList">gb cns jp kor</s:key> <s:key name="reportFileName">$name$-$time:%Y-%m-%d$</s:key> <s:key name="reportIncludeSplunkLogo">1</s:key> <s:key name="reportPaperOrientation">portrait</s:key> <s:key name="reportPaperSize">letter</s:key> <s:key name="sendcsv">0</s:key> <s:key name="sendpdf">0</s:key> <s:key name="sendresults">0</s:key> <s:key name="sslVersions">tls1.2</s:key> <s:key name="subject">Splunk Alert: $name$</s:key> <s:key name="subject.alert">Splunk Alert: $name$</s:key> <s:key name="subject.report">Splunk Report: $name$</s:key> <s:key name="to"></s:key> <s:key name="track_alert">1</s:key> <s:key name="ttl">86400</s:key> <s:key name="useNSSubject">0</s:key> <s:key name="use_ssl">0</s:key> <s:key name="use_tls">0</s:key> <s:key name="width_sort_columns">1</s:key> </s:dict> </content> </entry> <entry> <title>logevent</title> <id>https://localhost:8089/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent" rel="list"/> <link href="/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent/_reload" rel="_reload"/> <link href="/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent" rel="edit"/> <link href="/servicesNS/nobody/alert_logevent/alerts/alert_actions/logevent/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command">sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"</s:key> <s:key name="description">Send log event to Splunk receiver endpoint</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">alert_logevent</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>*</s:item> </s:list> 
</s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">global</s:key> </s:dict> </s:key> <s:key name="eai:appName">alert_logevent</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">logevent.png</s:key> <s:key name="is_custom">1</s:key> <s:key name="label">Log Event</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="param.host"></s:key> <s:key name="param.index">main</s:key> <s:key name="param.source">alert:$name$</s:key> <s:key name="param.sourcetype">generic_single_line</s:key> <s:key name="payload_format">json</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">10p</s:key> </s:dict> </content> </entry> <entry> <title>lookup</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/lookup</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/lookup" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/lookup" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/lookup/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/lookup" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/lookup/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="append">0</s:key> <s:key name="command">outputlookup "$action.lookup.filename$" append=$action.lookup.append$</s:key> <s:key name="description">Output the results of the search to a CSV lookup file</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="filename"></s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">mod_alert_icon_lookup.png</s:key> <s:key name="label">Output results to lookup</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">10p</s:key> </s:dict> </content> </entry> <entry> <title>outputtelemetry</title> <id>https://localhost:8089/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry" rel="list"/> <link href="/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry/_reload" rel="_reload"/> <link href="/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry" rel="edit"/> <link 
href="/servicesNS/nobody/splunk_instrumentation/alerts/alert_actions/outputtelemetry/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command"><![CDATA[outputtelemetry input=$action.outputtelemetry.param.input$ anonymous=$action.outputtelemetry.param.anonymous$ license=$action.outputtelemetry.param.license$ support=$action.outputtelemetry.param.support$ component=$action.outputtelemetry.param.component$ type=$action.outputtelemetry.param.type$ optinrequired=$action.outputtelemetry.param.optinrequired$]]></s:key> <s:key name="description">Custom action to output results to telemetry endpoint</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">splunk_instrumentation</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">global</s:key> </s:dict> </s:key> <s:key name="eai:appName">splunk_instrumentation</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">outputtelemetry.png</s:key> <s:key name="is_custom">1</s:key> <s:key name="label">Output results to telemetry endpoint</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="param.anonymous">1</s:key> <s:key name="param.component"></s:key> <s:key name="param.input"></s:key> <s:key name="param.license">0</s:key> <s:key name="param.optinrequired">1</s:key> <s:key name="param.support">1</s:key> <s:key name="param.type">event</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">120</s:key> </s:dict> </content> </entry> <entry> <title>populate_lookup</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/populate_lookup</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/populate_lookup" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/populate_lookup" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/populate_lookup/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/populate_lookup" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/populate_lookup/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command">copyresults dest="$action.populate_lookup.dest$" sid="$search_id$"</s:key> <s:key name="dest"></s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> 
<s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">120</s:key> </s:dict> </content> </entry> <entry> <title>rss</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/rss</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/rss" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/rss" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/rss/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/rss" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/rss/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command">createrss "path=$name$.xml" "name=$name$" "link=$results.url$" "descr=Alert trigger: $name$, results.count=$results.count$ " "count=30" "graceful=$graceful{default=1}$" maxtime="$action.rss.maxtime{default=1m}$"</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">1m</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">86400</s:key> </s:dict> </content> </entry> <entry> <title>script</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/script</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/script" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/script" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/script/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/script" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/script/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command">runshellscript "$action.script.filename$" "$results.count$" "$search$" "$search$" "$name$" "Saved Search [$name$] $counttype$($results.count$)" "$results.url$" "$deprecated_arg$" "$search_id$" "$results.file$" maxtime="$action.script.maxtime{default=5m}$"</s:key> <s:key name="description">Invoke a custom script</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key 
name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="filename"></s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">mod_alert_icon_script.png</s:key> <s:key name="label">Run a script</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="track_alert">1</s:key> <s:key name="ttl">600</s:key> </s:dict> </content> </entry> <entry> <title>summary_index</title> <id>https://localhost:8089/servicesNS/nobody/system/alerts/alert_actions/summary_index</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/system/alerts/alert_actions/summary_index" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/system/alerts/alert_actions/summary_index" rel="list"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/summary_index/_reload" rel="_reload"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/summary_index" rel="edit"/> <link href="/servicesNS/nobody/system/alerts/alert_actions/summary_index/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="_name">summary</s:key> <s:key name="command"><![CDATA[summaryindex spool=t uselb=t addtime=t index="$action.summary_index._name{required=yes}$" file="$name_hash$_$#random$.stash_new" name="$name$" marker="$action.summary_index*{format=$KEY=\\\"$VAL\\\", key_regex="action.summary_index.(?!(?:command|inline|maxresults|maxtime|ttl|track_alert|(?:_.*))$)(.*)"}$"]]></s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">system</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="eai:appName">system</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="inline">1</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">120</s:key> </s:dict> </content> </entry> <entry> <title>webhook</title> <id>https://localhost:8089/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook</id> <updated>1969-12-31T19:00:00-05:00</updated> <link href="/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook" rel="list"/> <link href="/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook/_reload" rel="_reload"/> <link 
href="/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook" rel="edit"/> <link href="/servicesNS/nobody/alert_webhook/alerts/alert_actions/webhook/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="command">sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"</s:key> <s:key name="description">Generic HTTP POST to a specified URL</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">alert_webhook</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>*</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">global</s:key> </s:dict> </s:key> <s:key name="eai:appName">alert_webhook</s:key> <s:key name="eai:userName">nobody</s:key> <s:key name="forceCsvResults">auto</s:key> <s:key name="hostname"></s:key> <s:key name="icon_path">webhook.png</s:key> <s:key name="is_custom">1</s:key> <s:key name="label">Webhook</s:key> <s:key name="maxresults">10000</s:key> <s:key name="maxtime">5m</s:key> <s:key name="param.user_agent">Splunk/$server.guid$</s:key> <s:key name="payload_format">json</s:key> <s:key name="track_alert">0</s:key> <s:key name="ttl">10p</s:key> </s:dict> </content> </entry> </feed>
alerts/fired_alerts
https://<host>:<mPort>/services/alerts/fired_alerts
Access fired alerts.
GET
Access a fired alerts summary.
Request parameters
Pagination and filtering parameters can be used with this method.
Returned values
Name | Description |
---|---|
triggered_alert_count | Trigger count for this alert. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/-/alerts/fired_alerts
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:s="http://dev.splunk.com/ns/rest"> <title>alerts</title> <id>https://localhost:8089/services/alerts/fired_alerts</id> <updated>2011-07-11T19:27:22-07:00</updated> <generator version="102807"/> <author> <name>Splunk</name> </author> < opensearch nodes elided for brevity. > <s:messages/> <entry> <title>-</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts/-</id> <updated>2011-07-11T19:27:22-07:00</updated> <link href="/servicesNS/admin/search/alerts/fired_alerts/-" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/alerts/fired_alerts/-" rel="list"/> <content type="text/xml"> <s:dict> < eai:acl elided > <s:key name="triggered_alert_count">0</s:key> </s:dict> </content> </entry> </feed>
alerts/fired_alerts/{name}
https://<host>:<mPort>/services/alerts/fired_alerts/{name}
Access or delete the {name} triggered alert.
GET
List unexpired triggered instances of this alert.
Request parameters
None
Returned values
Name | Description |
---|---|
actions | Any additional alert actions triggered by this alert. |
alert_type | Indicates if the alert was historical or real-time. |
digest_mode | |
expiration_time_rendered | |
savedsearch_name | Name of the saved search that triggered the alert. |
severity | Indicates the severity level of an alert. Severity levels are Info, Low, Medium, High, and Critical. The default is Medium. Severity levels are informational only and have no additional functionality. |
sid | The search ID of the search that triggered the alert. |
trigger_time | The time the alert was triggered. |
trigger_time_rendered | |
triggered_alerts | |
Application usage
Specify - for {name} to return all fired alerts.
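For example, the following sketch returns all fired alerts by specifying - for {name}. The credentials are placeholders.
# Return all fired alerts with the wildcard entity name
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts/-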
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts/MyAlert
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>alerts</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts</id> <updated>2012-10-25T09:20:04-07:00</updated> <generator build="138753" version="5.0"/> <author> <name>Splunk</name> </author> <!-- opensearch nodes elided for brevity. --> <s:messages/> <entry> <title>rt_scheduler__admin__search__MyAlert_at_1351181001_5.31_1351181987</title> <id>https://localhost:8089/servicesNS/nobody/search/alerts/fired_alerts/rt_scheduler__admin__search__MyAlert_at_1351181001_5.31_1351181987</id> <updated>2012-10-25T09:19:47-07:00</updated> <link href="/servicesNS/nobody/search/alerts/fired_alerts/rt_scheduler__admin__search__MyAlert_at_1351181001_5.31_1351181987" rel="alternate"/> <author> <name>admin</name> </author> <published>2012-10-25T09:19:47-07:00</published> <link href="/servicesNS/nobody/search/alerts/fired_alerts/rt_scheduler__admin__search__MyAlert_at_1351181001_5.31_1351181987" rel="list"/> <link href="/servicesNS/nobody/search/alerts/fired_alerts/rt_scheduler__admin__search__MyAlert_at_1351181001_5.31_1351181987" rel="remove"/> <link href="/servicesNS/nobody/search/search/jobs/rt_scheduler__admin__search__MyAlert_at_1351181001_5.31" rel="job"/> <link href="/servicesNS/nobody/search/saved/searches/MyAlert" rel="savedsearch"/> <content type="text/xml"> <s:dict> <s:key name="actions"/> <s:key name="alert_type">real time</s:key> <s:key name="digest_mode">0</s:key> <!-- eai:acl elided --> <s:key name="expiration_time_rendered">2012-10-26 09:19:47 PDT</s:key> <s:key name="savedsearch_name">MyAlert</s:key> <s:key name="severity">3</s:key> <s:key name="sid">rt_scheduler__admin__search__MyAlert_at_1351181001_5.31</s:key> <s:key name="trigger_time">1351181987</s:key> <s:key name="trigger_time_rendered">2012-10-25 09:19:47 PDT</s:key> <s:key name="triggered_alerts">5</s:key> </s:dict> </content> </entry> . . . elided . . . </feed>
DELETE
Delete the record of this triggered alert.
Request parameters
None.
Response keys
None.
Example request and response
curl -k -u admin:pass --request DELETE https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts/scheduler__admin__search_aGF2ZV9ldmVudHM_at_1310437740_5d3dfde563194ffd_1310437749
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:s="http://dev.splunk.com/ns/rest"> <title>alerts</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/fired_alerts</id> <updated>2011-07-11T19:35:25-07:00</updated> <generator version="102807"/> <author> <name>Splunk</name> </author> <!-- opensearch nodes elided for brevity. --> <s:messages/> </feed>
alerts/metric_alerts
https://<host>:<mPort>/services/alerts/metric_alerts
This endpoint lets you access and create streaming metric alerts.
Authentication and authorization
Only users whose roles have the metric_alerts capability can use this endpoint.
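If a role lacks this capability, an administrator can typically grant it through the authorization/roles endpoint, as in the following sketch. The role name is a placeholder, and the exact capability assignment semantics are described in the authorization/roles endpoint documentation.
# Grant the metric_alerts capability to a hypothetical role
curl -k -u admin:pass https://localhost:8089/services/authorization/roles/metrics_user -d capabilities=metric_alerts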
GET
Access streaming metric alert configurations.
Request parameters
None specific to this method. Pagination and filtering parameters can be used with this method.
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular metric alert. Valid values for action_name are:
For more information about the alert action options see the |
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the metric alert configuration to which it is applied. |
condition | Specifies an alert condition for one or more metric_name and aggregation pairs. The alert conditions can include multiple Boolean operators, eval functions, and metric aggregations. The Splunk software applies this evaluation to the results of the alert search on a regular interval. When the alert condition evaluates to 'true', the alert is triggered. Must reference at least one '<mstats_aggregation>(<metric_name>)' clause in single quotes. The condition can also reference dimensions specified in the groupby setting. |
description | Description of the metric alert. |
filter | Provides one or more Boolean expressions like <dimension_field>=<value> to define the search result dataset to monitor for the alert condition. Does not support subsearches, macros, tags, event types, or time modifiers such as 'earliest' or 'latest'. Combines with the metric_indexes setting to provide the complete search filter for the alert. |
groupby | The list of dimension fields, delimited by comma, for the group-by clause of the alert search. This leads to multiple aggregation values, one per group, instead of one single value. |
label.<label-name> | Arbitrary key-value pairs for labeling this alert. |
metric_indexes | Specifies one or more metric indexes, delimited by comma. Combines with the filter setting to provide the complete search filter for the alert. |
splunk_ui.<label-name> | An arbitrary key-value pair that is automatically generated by the Splunk software for its internal use only. Do not change it. |
trigger.expires | Sets the period of time that a triggered alert record displays on the Triggered Alerts page. |
trigger.max_tracked | Specifies the maximum number of instances of this alert that can display in the Triggered Alerts page. When this threshold is passed, the Splunk software removes the earliest instances from the Triggered Alerts page to honor this maximum number. |
trigger.suppress | Specifies the suppression period to silence alert actions and notifications. |
Example request and response
XML Request
$ curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>metric_alerts</title> <id>https://localhost:8089/services/alerts/metric_alerts</id> <updated>2019-09-16T15:03:59-07:00</updated> <generator build="7170447726604e6ce5018fa5c563f5b631656bdf" version="20190910"/> <author> <name>Splunk</name> </author> <link href="/services/alerts/metric_alerts/_new" rel="create"/> <link href="/services/alerts/metric_alerts/_reload" rel="_reload"/> <link href="/services/alerts/metric_alerts/_acl" rel="_acl"/> <opensearch:totalResults>2</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>malert-001</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/metric_alerts/malert-001</id> <updated>2019-09-16T14:50:17-07:00</updated> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="list"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="edit"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="remove"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="_group_key">streamalert_0a01bceb2f9624ac</s:key> <s:key name="condition">'sum(spl.intr.resource_usage.Hostwide.data.cpu_count)'>=10</s:key> <s:key name="description"></s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:appName">search</s:key> <s:key name="eai:userName">admin</s:key> <s:key name="filter">region=east</s:key> <s:key name="groupby"></s:key> <s:key name="metric_indexes">_metrics</s:key> <s:key name="trigger.expires">24h</s:key> <s:key name="trigger.max_tracked">20</s:key> <s:key name="trigger.suppress"></s:key> </s:dict> </content> </entry> <entry> <title>mpool used high</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high</id> <updated>2019-09-16T09:59:35-07:00</updated> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high" rel="list"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high" rel="edit"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high" rel="remove"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/mpool%20used%20high/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="_group_key">streamalert_d8b75eedcf404743</s:key> <s:key name="action.email">0</s:key> <s:key name="action.logevent">0</s:key> <s:key 
name="action.rss">0</s:key> <s:key name="action.script">0</s:key> <s:key name="action.webhook">1</s:key> <s:key name="action.webhook.command">sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"</s:key> <s:key name="action.webhook.description">Generic HTTP POST to a specified URL</s:key> <s:key name="action.webhook.forceCsvResults">auto</s:key> <s:key name="action.webhook.icon_path">webhook.png</s:key> <s:key name="action.webhook.is_custom">1</s:key> <s:key name="action.webhook.label">Webhook</s:key> <s:key name="action.webhook.maxresults">10000</s:key> <s:key name="action.webhook.maxtime">5m</s:key> <s:key name="action.webhook.param.user_agent">Splunk/$server.guid$</s:key> <s:key name="action.webhook.payload_format">json</s:key> <s:key name="action.webhook.track_alert">0</s:key> <s:key name="action.webhook.ttl">10p</s:key> <s:key name="condition">'max(spl.mlog.mpool.used)' > 10000</s:key> <s:key name="description">spl.mlog.mpool.used too high</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:appName">search</s:key> <s:key name="eai:userName">admin</s:key> <s:key name="filter"></s:key> <s:key name="groupby"></s:key> <s:key name="metric_indexes">_metrics</s:key> <s:key name="splunk_ui.displayview">analytics_workspace</s:key> <s:key name="splunk_ui.managedBy">Analytics Workspace</s:key> <s:key name="splunk_ui.severity">1</s:key> <s:key name="splunk_ui.track">1</s:key> <s:key name="trigger.expires">24h</s:key> <s:key name="trigger.max_tracked">20</s:key> <s:key name="trigger.suppress">3m</s:key> <s:key name="triggered_alert_count">20</s:key> </s:dict> </content> </entry> </feed>
POST
Create a streaming metric alert.
Request parameters
Name | Type | Description |
---|---|---|
action.<action-name> | Boolean | Indicates whether the <action_name> is enabled or disabled for a particular metric alert. Valid values for action_name are:
For more information about the alert action options see the |
action.<action-name>.<parameter> | String | Override the global setting defined for an <action-name> in the alert_actions.conf file with a new setting that is valid only for the metric alert configuration to which it is applied. |
condition (required) | Boolean eval expression | Specifies an alert condition for one or more metric_name and aggregation pairs. You can set alert conditions that include multiple Boolean operators, eval functions, and metric aggregations. The Splunk software applies this evaluation to the results of the alert search on a regular interval. When the alert condition evaluates to 'true', the alert is triggered. Must reference at least one '<mstats_aggregation>(<metric_name>)' clause in single quotes. The condition can also reference dimensions specified in the groupby setting. |
description | String | Provide a description of the streaming metric alert. |
filter | String | Specify one or more Boolean expressions like <dimension_field>=<value> to define the search result dataset to monitor for an alert condition. Link multiple Boolean expressions with the AND operator. The filter does not support subsearches, macros, tags, event types, or time modifiers such as 'earliest' or 'latest'. This setting combines with the metric_indexes setting to provide the complete search filter for the alert. |
groupby | String | Provide a list of dimension fields, delimited by comma, for the group-by clause of the alert search. This results in multiple aggregation values, one per group, instead of one aggregation value. |
label.<label-name> | String | Provide an arbitrary key-value pair to label or tag this alert. This key-value pair is not used by the Splunk alerting framework. You can design applications that use the alert label when they call the `alerts/metric_alerts` endpoint. |
metric_indexes (required) | String | Specify one or more metric indexes, delimited by comma. Combines with the filter setting to define the search result dataset that the alert monitors for the alert condition. |
name (required) | String | Specify the name of the streaming metric alert. |
trigger.expires | String | Set the period of time that a triggered alert record displays on the Triggered Alerts page. Default is 24h. |
trigger.max_tracked | Number | Specify the maximum number of instances of this alert that can display in the triggered alerts dashboard. When this threshold is passed, the Splunk software removes the earliest instances from the dashboard to honor this maximum number. Set to 0 to remove the cap. Defaults to 20. |
trigger.suppress | String | Define the suppression period to silence alert actions and notifications.
Use |
Returned values
None.
Example request and response
XML Request
$ curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts -X POST -d name=malert-001 -d condition="'sum(spl.intr.resource_usage.Hostwide.data.cpu_count)'%3E%3D10" -d filter=region%3Deast -d metric_indexes=_metrics
XML response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>metric_alerts</title> <id>https://localhost:8089/services/alerts/metric_alerts</id> <updated>2019-09-16T15:07:52-07:00</updated> <generator build="7170447726604e6ce5018fa5c563f5b631656bdf" version="20190910"/> <author> <name>Splunk</name> </author> <link href="/services/alerts/metric_alerts/_new" rel="create"/> <link href="/services/alerts/metric_alerts/_reload" rel="_reload"/> <link href="/services/alerts/metric_alerts/_acl" rel="_acl"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
alerts/metric_alerts/{alert_name}
https://<host>:<mPort>/services/alerts/metric_alerts/{alert_name}
This endpoint lets you create, update, delete, enable, and disable streaming metric alerts.
Authentication and authorization
Only users whose roles have the metric_alerts capability can use this endpoint.
GET
Access the named streaming metric alert.
Request parameters
None specific to this method. Pagination and filtering parameters can be used with this method.
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular metric alert. Valid values for action_name are:
For more information about the alert action options see the |
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the metric alert configuration to which it is applied. |
condition | Specifies an alert condition for one or more metric_name and aggregation pairs. The alert conditions can include multiple Boolean operators, eval functions, and metric aggregations. The Splunk software applies this evaluation to the results of the alert search on a regular interval. When the alert condition evaluates to 'true', the alert is triggered. Must reference at least one '<mstats_aggregation>(<metric_name>)' clause in single quotes. The condition can also reference dimensions specified in the groupby setting. |
description | Description of the metric alert. |
groupby | The list of dimension fields, delimited by comma, for the group-by clause of the alert search. This leads to multiple aggregation values, one per group, instead of one single value. |
filter | Provides one or more Boolean expressions like <dimension_field>=<value> to define the search result dataset to monitor for the alert condition. Does not support subsearches, macros, tags, event types, or time modifiers such as 'earliest' or 'latest'. Combines with the metric_indexes setting to provide the complete search filter for the alert. |
label.<label-name> | Arbitrary key-value pairs for labeling this alert. |
metric_indexes | Specifies one or more metric indexes, delimited by comma. Combines with the filter setting to provide the complete search filter for the alert. |
splunk_ui.<label-name> | An arbitrary key-value pair that is automatically generated by the Splunk software for its internal use only. Do not change it. |
trigger.expires | Sets the period of time that a triggered alert record displays on the Triggered Alerts page. |
trigger.max_tracked | Specifies the maximum number of instances of this alert that can display in the Triggered Alerts page. When this threshold is passed, the Splunk software removes the earliest instances from the Triggered Alerts page to honor this maximum number. |
trigger.suppress | Specifies the suppression period to silence alert actions and notifications. |
Example request and response
XML Request
$ curl -k -u admin:pass123 https://localhost:8089/services/alerts/metric_alerts/malert-001
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>metric_alerts</title> <id>https://localhost:8089/services/alerts/metric_alerts</id> <updated>2019-09-16T14:51:17-07:00</updated> <generator build="7170447726604e6ce5018fa5c563f5b631656bdf" version="20190910"/> <author> <name>Splunk</name> </author> <link href="/services/alerts/metric_alerts/_new" rel="create"/> <link href="/services/alerts/metric_alerts/_reload" rel="_reload"/> <link href="/services/alerts/metric_alerts/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>malert-001</title> <id>https://localhost:8089/servicesNS/admin/search/alerts/metric_alerts/malert-001</id> <updated>2019-09-16T14:50:17-07:00</updated> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="list"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="edit"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001" rel="remove"/> <link href="/servicesNS/admin/search/alerts/metric_alerts/malert-001/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="_group_key">streamalert_0a01bceb2f9624ac</s:key> <s:key name="action.email">0</s:key> <s:key name="action.email.auth_password"></s:key> <s:key name="action.email.auth_username"></s:key> <s:key name="action.email.bcc"></s:key> <s:key name="action.email.cc"></s:key> <s:key name="action.email.cipherSuite">ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256</s:key> <s:key name="action.email.command"><![CDATA[$action.email.preprocess_results{default=""}$ | sendemail "results_link=$results.url$" "ssname=$name$" "graceful=$graceful{default=True}$" "trigger_time=$trigger_time$" maxinputs="$action.email.maxresults{default=10000}$" maxtime="$action.email.maxtime{default=5m}$" results_file="$results.file$"]]></s:key> <s:key name="action.email.content_type">html</s:key> <s:key name="action.email.description">Send an email notification to specified recipients</s:key> <s:key name="action.email.footer.text">If you believe you've received this email in error, please see your Splunk administrator. 
splunk > the engine for machine data</s:key> <s:key name="action.email.forceCsvResults">auto</s:key> <s:key name="action.email.format">table</s:key> <s:key name="action.email.from">splunk</s:key> <s:key name="action.email.hostname"></s:key> <s:key name="action.email.icon_path">mod_alert_icon_email.png</s:key> <s:key name="action.email.include.results_link">1</s:key> <s:key name="action.email.include.search">0</s:key> <s:key name="action.email.include.trigger">0</s:key> <s:key name="action.email.include.trigger_time">0</s:key> <s:key name="action.email.include.view_link">1</s:key> <s:key name="action.email.inline">0</s:key> <s:key name="action.email.label">Send email</s:key> <s:key name="action.email.mailserver">localhost</s:key> <s:key name="action.email.maxresults">10000</s:key> <s:key name="action.email.maxtime">5m</s:key> <s:key name="action.email.message.alert">The alert condition for '$name$' was triggered.</s:key> <s:key name="action.email.message.report">The scheduled report '$name$' has run.</s:key> <s:key name="action.email.pdf.footer_center">title</s:key> <s:key name="action.email.pdf.footer_enabled">1</s:key> <s:key name="action.email.pdf.footer_left">logo</s:key> <s:key name="action.email.pdf.footer_right">timestamp,pagination</s:key> <s:key name="action.email.pdf.header_center">description</s:key> <s:key name="action.email.pdf.header_enabled">1</s:key> <s:key name="action.email.pdf.html_image_rendering">1</s:key> <s:key name="action.email.pdfview"></s:key> <s:key name="action.email.preprocess_results"></s:key> <s:key name="action.email.priority">3</s:key> <s:key name="action.email.reportCIDFontList">gb cns jp kor</s:key> <s:key name="action.email.reportFileName">$name$-$time:%Y-%m-%d$</s:key> <s:key name="action.email.reportIncludeSplunkLogo">1</s:key> <s:key name="action.email.reportPaperOrientation">portrait</s:key> <s:key name="action.email.reportPaperSize">letter</s:key> <s:key name="action.email.sendcsv">0</s:key> <s:key name="action.email.sendpdf">0</s:key> <s:key name="action.email.sendresults">0</s:key> <s:key name="action.email.sslVersions">tls1.2</s:key> <s:key name="action.email.subject">Splunk Alert: $name$</s:key> <s:key name="action.email.subject.alert">Splunk Alert: $name$</s:key> <s:key name="action.email.subject.report">Splunk Report: $name$</s:key> <s:key name="action.email.to"></s:key> <s:key name="action.email.track_alert">1</s:key> <s:key name="action.email.ttl">86400</s:key> <s:key name="action.email.useNSSubject">0</s:key> <s:key name="action.email.use_ssl">0</s:key> <s:key name="action.email.use_tls">0</s:key> <s:key name="action.email.width_sort_columns">1</s:key> <s:key name="action.logevent">0</s:key> <s:key name="action.logevent.command">sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"</s:key> <s:key name="action.logevent.description">Send log event to Splunk receiver endpoint</s:key> <s:key name="action.logevent.forceCsvResults">auto</s:key> <s:key name="action.logevent.hostname"></s:key> <s:key name="action.logevent.icon_path">logevent.png</s:key> <s:key name="action.logevent.is_custom">1</s:key> <s:key name="action.logevent.label">Log Event</s:key> <s:key name="action.logevent.maxresults">10000</s:key> <s:key name="action.logevent.maxtime">5m</s:key> <s:key name="action.logevent.param.host"></s:key> <s:key name="action.logevent.param.index">main</s:key> <s:key name="action.logevent.param.source">alert:$name$</s:key> <s:key name="action.logevent.param.sourcetype">generic_single_line</s:key> <s:key 
name="action.logevent.payload_format">json</s:key> <s:key name="action.logevent.track_alert">0</s:key> <s:key name="action.logevent.ttl">10p</s:key> <s:key name="action.lookup">0</s:key> <s:key name="action.lookup.append">0</s:key> <s:key name="action.lookup.command">outputlookup "$action.lookup.filename$" append=$action.lookup.append$</s:key> <s:key name="action.lookup.description">Output the results of the search to a CSV lookup file</s:key> <s:key name="action.lookup.filename"></s:key> <s:key name="action.lookup.forceCsvResults">auto</s:key> <s:key name="action.lookup.hostname"></s:key> <s:key name="action.lookup.icon_path">mod_alert_icon_lookup.png</s:key> <s:key name="action.lookup.label">Output results to lookup</s:key> <s:key name="action.lookup.maxresults">10000</s:key> <s:key name="action.lookup.maxtime">5m</s:key> <s:key name="action.lookup.track_alert">0</s:key> <s:key name="action.lookup.ttl">10p</s:key> <s:key name="action.populate_lookup">0</s:key> <s:key name="action.populate_lookup.command">copyresults dest="$action.populate_lookup.dest$" sid="$search_id$"</s:key> <s:key name="action.populate_lookup.dest"></s:key> <s:key name="action.populate_lookup.forceCsvResults">auto</s:key> <s:key name="action.populate_lookup.hostname"></s:key> <s:key name="action.populate_lookup.maxresults">10000</s:key> <s:key name="action.populate_lookup.maxtime">5m</s:key> <s:key name="action.populate_lookup.track_alert">0</s:key> <s:key name="action.populate_lookup.ttl">120</s:key> <s:key name="action.rss">0</s:key> <s:key name="action.rss.command">createrss "path=$name$.xml" "name=$name$" "link=$results.url$" "descr=Alert trigger: $name$, results.count=$results.count$ " "count=30" "graceful=$graceful{default=1}$" maxtime="$action.rss.maxtime{default=1m}$"</s:key> <s:key name="action.rss.forceCsvResults">auto</s:key> <s:key name="action.rss.hostname"></s:key> <s:key name="action.rss.maxresults">10000</s:key> <s:key name="action.rss.maxtime">1m</s:key> <s:key name="action.rss.track_alert">0</s:key> <s:key name="action.rss.ttl">86400</s:key> <s:key name="action.script">0</s:key> <s:key name="action.script.command">runshellscript "$action.script.filename$" "$results.count$" "$search$" "$search$" "$name$" "Saved Search [$name$] $counttype$($results.count$)" "$results.url$" "$deprecated_arg$" "$search_id$" "$results.file$" maxtime="$action.script.maxtime{default=5m}$"</s:key> <s:key name="action.script.description">Invoke a custom script</s:key> <s:key name="action.script.filename"></s:key> <s:key name="action.script.forceCsvResults">auto</s:key> <s:key name="action.script.hostname"></s:key> <s:key name="action.script.icon_path">mod_alert_icon_script.png</s:key> <s:key name="action.script.label">Run a script</s:key> <s:key name="action.script.maxresults">10000</s:key> <s:key name="action.script.maxtime">5m</s:key> <s:key name="action.script.track_alert">1</s:key> <s:key name="action.script.ttl">600</s:key> <s:key name="action.summary_index">0</s:key> <s:key name="action.summary_index._name">summary</s:key> <s:key name="action.summary_index.command"><![CDATA[summaryindex spool=t uselb=t addtime=t index="$action.summary_index._name{required=yes}$" file="$name_hash$_$#random$.stash_new" name="$name$" marker="$action.summary_index*{format=$KEY=\\\"$VAL\\\", key_regex="action.summary_index.(?!(?:command|forceCsvResults|inline|maxresults|maxtime|python\\.version|ttl|track_alert|(?:_.*))$)(.*)"}$"]]></s:key> <s:key name="action.summary_index.forceCsvResults">auto</s:key> <s:key 
name="action.summary_index.hostname"></s:key> <s:key name="action.summary_index.inline">1</s:key> <s:key name="action.summary_index.maxresults">10000</s:key> <s:key name="action.summary_index.maxtime">5m</s:key> <s:key name="action.summary_index.track_alert">0</s:key> <s:key name="action.summary_index.ttl">120</s:key> <s:key name="action.webhook">0</s:key> <s:key name="action.webhook.command">sendalert $action_name$ results_file="$results.file$" results_link="$results.url$"</s:key> <s:key name="action.webhook.description">Generic HTTP POST to a specified URL</s:key> <s:key name="action.webhook.forceCsvResults">auto</s:key> <s:key name="action.webhook.hostname"></s:key> <s:key name="action.webhook.icon_path">webhook.png</s:key> <s:key name="action.webhook.is_custom">1</s:key> <s:key name="action.webhook.label">Webhook</s:key> <s:key name="action.webhook.maxresults">10000</s:key> <s:key name="action.webhook.maxtime">5m</s:key> <s:key name="action.webhook.param.user_agent">Splunk/$server.guid$</s:key> <s:key name="action.webhook.payload_format">json</s:key> <s:key name="action.webhook.track_alert">0</s:key> <s:key name="action.webhook.ttl">10p</s:key> <s:key name="condition">'sum(spl.intr.resource_usage.Hostwide.data.cpu_count)'>=10</s:key> <s:key name="description"></s:key> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:appName">search</s:key> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list> <s:item>condition</s:item> <s:item>description</s:item> <s:item>disabled</s:item> <s:item>filter</s:item> <s:item>groupby</s:item> <s:item>metric_indexes</s:item> <s:item>trigger.condition</s:item> <s:item>trigger.expires</s:item> <s:item>trigger.max_tracked</s:item> <s:item>trigger.per_group</s:item> <s:item>trigger.suppress</s:item> </s:list> </s:key> <s:key name="requiredFields"> <s:list/> </s:key> <s:key name="wildcardFields"> <s:list> <s:item>action\..*</s:item> <s:item>label\..*</s:item> <s:item>splunk_ui\..*</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="eai:userName">admin</s:key> <s:key name="filter">region=east</s:key> <s:key name="groupby"></s:key> <s:key name="metric_indexes">_metrics</s:key> <s:key name="trigger.expires">24h</s:key> <s:key name="trigger.max_tracked">20</s:key> <s:key name="trigger.suppress"></s:key> </s:dict> </content> </entry> </feed>
POST
Update the named streaming metric alert.
Request parameters
Name | Type | Description |
---|---|---|
action.<action-name> | Boolean | Indicates whether the <action-name> alert action is enabled or disabled for a particular metric alert.
For more information about the alert action options, see the alert_actions.conf file. |
action.<action-name>.<parameter> | String | Override the global setting defined for an <action-name> in the alert_actions.conf file with a new setting that is valid only for the metric alert configuration to which it is applied.
|
condition (required) | Boolean eval expression | Specifies an alert condition for one or more metric_name and aggregation pairs. You can set alert conditions that include multiple Boolean operators, eval functions, and metric aggregations. The Splunk software applies this evaluation to the results of the alert search on a regular interval. When the alert condition evaluates to 'true', the alert is triggered. Must reference at least one '<mstats_aggregation>(<metric_name>)' clause in single quotes. The condition can also reference dimensions specified in the groupby setting. |
description | String | Provide a description of the streaming metric alert. |
groupby | String | Provide a list of dimension fields, delimited by comma, for the group-by clause of the alert search. This results in multiple aggregation values, one per group, instead of one aggregation value. |
filter | String | Specify one or more Boolean expressions like <dimension_field>=<value> to define the search result dataset to monitor for an alert condition. Link multiple Boolean expressions with the 'AND' operator. The filter does not support subsearches, macros, tags, event types, or time modifiers such as 'earliest' or 'latest'. This setting combines with the metric_indexes setting to provide the complete search filter for the alert.
|
label.<label-name> | String | Provide an arbitrary key-value pair to label or tag this alert. This key-value pair is not used by the Splunk alerting framework. You can design applications that use the alert label when they call the `alerts/metric_alerts` endpoint. |
metric_indexes (required) | String | Specify one or more metric indexes, delimited by commas. Combines with the filter setting to define the search result dataset that the alert monitors for the alert condition. |
trigger.expires | String | Set the period of time that a triggered alert record displays on the Triggered Alerts page.
Default is 24h. |
trigger.max_tracked | Number | Specify the maximum number of instances of this alert that can display in the triggered alerts dashboard. When this threshold is passed, the Splunk software removes the earliest instances from the dashboard to honor this maximum number. Set to 0 to remove the cap. Defaults to 20. |
trigger.suppress | String | Define the suppression period to silence alert actions and notifications.
Use [number][time-unit] to specify the period. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 1 hour. |
Returned values
None.
Example request and response
XML Request
$ curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts/malert-002 -d description="updated description of malert-002" -d trigger.expires=1h
XML response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>metric_alerts</title> <id>https://localhost:8089/services/alerts/metric_alerts</id> <updated>2019-09-16T14:38:38-07:00</updated> <generator build="7170447726604e6ce5018fa5c563f5b631656bdf" version="20190910"/> <author> <name>Splunk</name> </author> <link href="/services/alerts/metric_alerts/_new" rel="create"/> <link href="/services/alerts/metric_alerts/_reload" rel="_reload"/> <link href="/services/alerts/metric_alerts/_acl" rel="_acl"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
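You can update the alert definition itself in the same way. The following request is a sketch only, with hypothetical values: it replaces the alert condition, the monitored metric indexes, and the dimension filter using the parameters described in the table above.
curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts/malert-002 --data-urlencode condition="'avg(spl.intr.resource_usage.Hostwide.data.cpu_count)'>=20" -d metric_indexes=_metrics --data-urlencode filter="region=west"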
DELETE
Deletes the named metric alert.
Request parameters
None specific to this method.
Returned values
None specific to this method.
Example request and response
Remove the [malert-002] stanza from metric_alerts.conf.
XML Request
curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts/malert-002 -X DELETE
XML response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>metric_alerts</title> <id>https://localhost:8089/services/alerts/metric_alerts</id> <updated>2019-09-16T14:38:38-07:00</updated> <generator build="7170447726604e6ce5018fa5c563f5b631656bdf" version="20190910"/> <author> <name>Splunk</name> </author> <link href="/services/alerts/metric_alerts/_new" rel="create"/> <link href="/services/alerts/metric_alerts/_reload" rel="_reload"/> <link href="/services/alerts/metric_alerts/_acl" rel="_acl"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
data/commands
https://<host>:<mPort>/services/data/commands
Access Python search commands.
GET
Access Python search commands.
Request parameters
Pagination and filtering parameters can be used with this method.
Returned values
Name | Description |
---|---|
changes_colorder | Indicates whether the script output should be used to change the column ordering of the fields. |
disabled | Indicates if the command is disabled. |
enableheader | Indicates whether your script expects header information.
Note: Should be set to true if you use splunk.Intersplunk. |
filename | Name of script file for command.
<stanza-name>.pl for perl. <stanza-name>.py for python. |
generates_timeorder | If generating = false and streaming = true, indicates if the command changes the order of events with respect to time. |
generating | Indicates if the command generates new events. |
maxinputs | Maximum number of events that can be passed to the command for each invocation. This limit cannot exceed the value of maxresultrows in limits.conf.
0 indicates no limit. Defaults to 50000. |
outputheader | If true, the output of script should be a header section + blank line + csv output.
If false, script output should be pure csv only. |
passauth | If true, passes an authentication token on the start of input. |
required_fields | A list of fields that this command may use. Informs previous commands that they should retain/extract these fields if possible. No error is generated if a field specified is missing.
Defaults to '*'. |
requires_preop | Indicates whether the command sequence specified by the streaming_preop key is required for proper execution, or if it is an optimization only.
Default is false (streaming_preop not required). |
retainsevents | Indicates whether the command retains events (the way the sort/dedup/cluster commands do) or whether the command transforms them (the way the stats command does). |
streaming | Indicates whether the command is streamable. |
supports_getinfo | Indicates whether the command supports dynamic probing for settings (first argument invoked == __GETINFO__ or __EXECUTE__). |
supports_rawargs | Indicates whether the command supports raw arguments being passed to it or if it uses parsed arguments (where quotes are stripped). |
type | Specifies the type of command. The only valid value for this attribute is python .
|
Example request and response
curl -k -u admin:pass https://localhost:8089/servicesNS/nobody/search/data/commands
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:s="http://dev.splunk.com/ns/rest"> <title>commandsconf</title> <id>https://localhost:8089/servicesNS/nobody/search/data/commands</id> <updated>2011-07-07T00:52:26-07:00</updated> <generator version="102807"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/nobody/search/data/commands/_reload" rel="_reload"/> <s:messages/> <entry> <title>bucketdir</title> <id>https://localhost:8089/servicesNS/nobody/search/data/commands/bucketdir</id> <updated>2011-07-07T00:52:26-07:00</updated> <link href="/servicesNS/nobody/search/data/commands/bucketdir" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/search/data/commands/bucketdir" rel="list"/> <link href="/servicesNS/nobody/search/data/commands/bucketdir/_reload" rel="_reload"/> <link href="/servicesNS/nobody/search/data/commands/bucketdir/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="changes_colorder">1</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:appName">search</s:key> <s:key name="eai:userName">admin</s:key> <s:key name="enableheader">1</s:key> <s:key name="filename">bucketdir.py</s:key> <s:key name="generates_timeorder">0</s:key> <s:key name="generating">0</s:key> <s:key name="maxinputs">50000</s:key> <s:key name="outputheader">0</s:key> <s:key name="passauth">0</s:key> <s:key name="required_fields">*</s:key> <s:key name="requires_preop">0</s:key> <s:key name="retainsevents">0</s:key> <s:key name="streaming">0</s:key> <s:key name="supports_getinfo">0</s:key> <s:key name="supports_rawargs">1</s:key> <s:key name="type">python</s:key> </s:dict> </content> </entry> </feed>
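The following request is a sketch that uses the standard count and offset pagination parameters to return at most five command entries, starting at the beginning of the list.
curl -k -u admin:pass "https://localhost:8089/servicesNS/nobody/search/data/commands?count=5&offset=0"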
data/commands/{name}
https://<host>:<mPort>/services/data/commands/{name}
Get information about the {name}
python search command.
GET
Access search command information.
Request parameters
None
Returned values
Name | Description |
---|---|
changes_colorder | Indicates whether the script output should be used to change the column ordering of the fields. |
disabled | Indicates if the command is disabled. |
enableheader | Indicates whether your script expects header information.
Note: Should be set to true if you use splunk.Intersplunk. |
filename | Name of script file for command.
<stanza-name>.pl for perl. <stanza-name>.py for python. |
generates_timeorder | If generating = false and streaming = true, indicates if the command changes the order of events with respect to time. |
generating | Indicates if the command generates new events. |
maxinputs | Maximum number of events that can be passed to the command for each invocation. This limit cannot exceed the value of maxresultrows in limits.conf.
0 indicates no limit. Defaults to 50000. |
outputheader | If true, the output of script should be a header section + blank line + csv output.
If false, script output should be pure csv only. |
passauth | If true, passes an authentication token on the start of input. |
required_fields | A list of fields that this command may use. Informs previous commands that they should retain/extract these fields if possible. No error is generated if a field specified is missing.
Defaults to '*'. |
requires_preop | Indicates whether the command sequence specified by the streaming_preop key is required for proper execution, or if it is an optimization only.
Default is false (streaming_preop not required). |
retainsevents | Indicates whether the command retains events (the way the sort/dedup/cluster commands do) or whether the command transforms them (the way the stats command does). |
streaming | Indicates whether the command is streamable. |
supports_getinfo | Indicates whether the command supports dynamic probing for settings (first argument invoked == __GETINFO__ or __EXECUTE__). |
supports_rawargs | Indicates whether the command supports raw arguments being passed to it or if it uses parsed arguments (where quotes are stripped). |
type | Specifies the type of command.
The only valid value for this attribute is python. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/nobody/search/data/commands/input
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:s="http://dev.splunk.com/ns/rest"> <title>commandsconf</title> <id>https://localhost:8089/servicesNS/nobody/search/data/commands</id> <updated>2011-07-07T00:52:26-07:00</updated> <generator version="102807"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/nobody/search/data/commands/_reload" rel="_reload"/> <s:messages/> <entry> <title>input</title> <id>https://localhost:8089/servicesNS/nobody/search/data/commands/input</id> <updated>2011-07-07T00:52:26-07:00</updated> <link href="/servicesNS/nobody/search/data/commands/input" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/search/data/commands/input" rel="list"/> <link href="/servicesNS/nobody/search/data/commands/input/_reload" rel="_reload"/> <link href="/servicesNS/nobody/search/data/commands/input/disable" rel="disable"/> <content type="text/xml"> <s:dict> <s:key name="changes_colorder">1</s:key> <s:key name="disabled">0</s:key> <s:key name="eai:appName">search</s:key> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list/> </s:key> <s:key name="requiredFields"> <s:list/> </s:key> <s:key name="wildcardFields"> <s:list/> </s:key> </s:dict> </s:key> <s:key name="eai:userName">admin</s:key> <s:key name="enableheader">1</s:key> <s:key name="filename">input.py</s:key> <s:key name="generates_timeorder">0</s:key> <s:key name="generating">0</s:key> <s:key name="maxinputs">50000</s:key> <s:key name="outputheader">0</s:key> <s:key name="passauth">1</s:key> <s:key name="required_fields">*</s:key> <s:key name="requires_preop">0</s:key> <s:key name="retainsevents">0</s:key> <s:key name="streaming">0</s:key> <s:key name="supports_getinfo">0</s:key> <s:key name="supports_rawargs">1</s:key> <s:key name="type">python</s:key> </s:dict> </content> </entry> </feed>
saved/searches
https://<host>:<mPort>/services/saved/searches
Access and create saved searches.
GET
Access saved search configurations.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
earliest_time | String | For scheduled searches display all the scheduled times starting from this time (not just the next run time) | |
latest_time | String | For scheduled searches display all the scheduled times until this time (not just the next run time) | |
listDefaultActionArgs | Boolean | Indicates whether to list default actions. | |
add_orphan_field | Boolean | Indicates whether the response includes a boolean value for each saved search to show whether the search is orphaned, meaning that it has no valid owner. When add_orphan_field is set to true , the response includes the orphaned search indicators, either 0 to indicate that a search is not orphaned or 1 to indicate that the search is orphaned. Admins can use this setting to check for searches without valid owners and resolve related issues.
|
Pagination and filtering parameters can be used with this method.
This endpoint returns an unusually high number of values. To limit the number of returned values, specify the f filtering parameter.
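For example, the following sketch uses the f parameter to return only the search string and schedule-related fields for each saved search, which keeps the response small. The field names shown are ordinary returned values from the table below.
curl -k -u admin:pass "https://localhost:8089/services/saved/searches?f=search&f=cron_schedule&f=is_scheduled"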
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular search. For more information about the alert action options see the alert_actions.conf file.
|
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the search configuration to which it is applied.
|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings; however, you can set a clear text password here that is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates. |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: a hostname (for example, splunkserver, splunkserver.example.com) or protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443). When this value is a simple hostname, the protocol and port configured within Splunk are used to construct the base of the URL. When this value begins with 'http://', it is used verbatim. Note: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a URL that can be referenced externally, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to the current hostname provided by the operating system, or, if that fails, "localhost". When set to empty, default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the global maximum number of search results to send when the email action is enabled. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.reportServerEnabled | Not supported. |
action.email.reportServerURL | Not supported. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
action.populate_lookup | The state of the populate lookup action. |
action.populate_lookup.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.populate_lookup.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.populate_lookup.maxresults | The maximum number of search results sent using alerts. |
action.populate_lookup.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.populate_lookup.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.populate_lookup.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, then this specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p] |
action.rss | The state of the RSS action. |
action.rss.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.rss.hostname | Sets the hostname used in the web link (url) sent in alert actions. |
action.rss.maxresults | Sets the maximum number of search results sent using alerts. |
action.rss.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 1m. |
action.rss.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.rss.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.script | The state of the script action. |
action.script.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.script.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.script.maxresults | The maximum number of search results sent using alerts. |
action.script.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. |
action.script.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.script.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 600 (10 minutes).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.summary_index | Specifies whether the summary index action is enabled for this search. |
action.summary_index._type | Specifies the data type of the summary index where the Splunk software saves the results of the scheduled search. Can be set to event or metric .
|
action.summary_index.force_realtime_schedule | By default, realtime_schedule is false for a report configured for summary indexing. When set to 1 or true , this setting overrides realtime_schedule. Setting it to true can cause gaps in summary data, as a realtime_schedule search is skipped if search concurrency limits are violated.
|
action.summary_index.inline | Determines whether to execute the summary indexing action as part of the scheduled search.
Note: This option is considered only if the summary index action is enabled and is always executed (in other words, if counttype = always). |
action.summary_index.maxresults | Sets the maximum number of search results sent using alerts. |
action.summary_index.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m. |
action.summary_index.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.summary_index.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 10p. |
alert.digest_mode | Indicates if alert actions are applied to the entire result set or to each individual result. |
alert.expires | Sets the period of time to show the alert in the dashboard. Defaults to 24h.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.managedBy | Specifies the feature or component that created the alert. |
alert.severity | The alert severity level.
Valid values are: 1 DEBUG, 2 INFO, 3 WARN, 4 ERROR, 5 SEVERE, 6 FATAL. |
alert.suppress | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | List of fields to use when suppressing per-result alerts. Must be specified if the digest mode is disabled and suppression is enabled. |
alert.suppress.group_name | Optional setting. Used to define an alert suppression group for a set of alerts that are running over identical or very similar datasets. Alert suppression groups can help you avoid getting multiple triggered alert notifications for the same data. |
alert.suppress.period | Specifies the suppression period. Only valid if alert.suppress is enabled.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Specifies whether to track the actions triggered by this scheduled search.
|
alert_comparator | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc.
Used with alert_threshold to trigger alert actions. |
alert_condition | A conditional search that is evaluated against the results of the saved search. Defaults to an empty string. Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates the value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
allow_skew | Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods.
This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with the following cron_schedule values, the search scheduler randomly skews the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can only skew the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. The value can be expressed as a percentage of the search period or as a duration.
Examples: 100% (for an every-5-minute search) = 5 minutes maximum. 50% (for an every-minute search) = 30 seconds maximum. 5m = 5 minutes maximum. 1h = 1 hour maximum. |
auto_summarize | Specifies whether the search scheduler should ensure that the data for this search is automatically summarized. |
auto_summarize.command | A search template to use to construct the auto-summarization for the search. Do not change. |
auto_summarize.cron_schedule | Cron schedule to use to probe or generate the summaries for this search |
auto_summarize.dispatch.<arg-name> | Dispatch options that can be overridden when running the summary search. |
auto_summarize.max_concurrent | The maximum number of concurrent instances of this auto summarizing search that the scheduler is allowed to run. |
auto_summarize.max_disabled_buckets | The maximum number of buckets with suspended summarization before the summarization search is completely stopped and the summarization of the search is suspended for the value specified by the auto_summarize.suspend_period setting. |
auto_summarize.max_summary_ratio | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket. |
auto_summarize.max_summary_size | The minimum summary size, in bytes, before testing whether the summarization is helpful. |
auto_summarize.max_time | The maximum time, in seconds, that the auto-summarization search is allowed to run. |
auto_summarize.suspend_period | The amount of time to suspend summarization of the search if the summarization is deemed unhelpful. |
auto_summarize.timespan | Comma-delimited list of time ranges that each summarized chunk should span. Comprises the list of available summary ranges for which summaries would be available. Does not support 1w timespans.
|
auto_summarize.workload_pool | Sets the name of the workload pool that is used by the auto-summarization of this search. |
cron_schedule | The cron schedule to run this search. For more information, refer to the description of this parameter in the POST endpoint. |
defer_scheduled_searchable_idxc | Specifies whether to defer a continuous saved search during a searchable rolling restart or searchable rolling upgrade of an indexer cluster. |
description | Human-readable description of this saved search. |
disabled | Indicates whether this saved search is disabled. |
dispatch.allow_partial_results | Specifies whether the search job can proceed to provide partial results if a search peer fails. When set to false, the search job fails if a search peer providing results for the search job fails. |
dispatch.auto_cancel | Specifies the amount of inactive time, in seconds, after which the search job is automatically canceled. |
dispatch.auto_pause | Specifies the amount of inactive time, in seconds, after which the search job is automatically paused. |
dispatch.buckets | The maximum number of timeline buckets. |
dispatch.earliest_time | A time string that specifies the earliest time for this search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.index_earliest | Specifies the earliest index time for this search. Can be a relative or absolute time. |
dispatch.index_latest | Specifies the latest index time for this saved search. Can be a relative or absolute time. |
dispatch.indexedRealtime | Specifies whether to use 'indexed-realtime' mode when doing real-time searches. |
dispatch.indexedRealtimeMinSpan | Specifies the minimum number of seconds to wait between component index searches. |
dispatch.indexedRealtimeOffset | Specifies the number of seconds to wait for disk flushes to finish. |
dispatch.indexedRealtimeMinSpan | Allows for a per-job override of the [search] indexed_realtime_default_span setting in limits.conf .The default for saved searches is "unset", falling back to the limits.conf setting.
|
dispatch.latest_time | A time string that specifies the latest time for the saved search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Indicates if lookups are enabled for this search. |
dispatch.max_count | The maximum number of results before finalizing the search. |
dispatch.max_time | Indicates the maximum amount of time (in seconds) before finalizing the search. |
dispatch.reduce_freq | Specifies how frequently the MapReduce reduce phase runs on accumulated map values. |
dispatch.rt_backfill | Specifies whether to do real-time window backfilling for scheduled real-time searches. |
dispatch.rt_maximum_span | Sets the maximum number of seconds to search data that falls behind real time. |
dispatch.sample_ratio | The integer value used to calculate the sample ratio. The formula is 1 / <integer> .
|
dispatch.spawn_process | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Specifies whether a new search process is spawned when this saved search is executed. Searches against indexes must run in a separate process. |
dispatch.time_format | Time format string that defines the time format for specifying the earliest and latest time. |
dispatch.ttl | Indicates the time to live (ttl), in seconds, for the artifacts of the scheduled search, if no actions are triggered. |
dispatchAs | When the saved search is dispatched using the "saved/searches/{name}/dispatch" endpoint, this setting controls what user that search is dispatched as. Only meaningful for shared saved searches. Can be set to owner or user .
|
displayview | Defines the default UI view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto , time_interval , and time_whole .
|
durable.lag_time | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.max_backfill_intervals | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.track_time_type | Indicates that a scheduled search is durable and specifies how the search tracks events. A value of _time means the durable search tracks each event by its event timestamp , based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none .
|
earliest_time | For scheduled searches display all the scheduled times starting from this time (not just the next run time). |
is_scheduled | Indicates if this search is to be run on a schedule |
is_visible | Indicates if this saved search appears in the visible saved search list. |
latest_time | For scheduled searches display all the scheduled times until this time (not just the next run time). |
listDefaultActionArgs | List default values of actions.*, even though some of the actions may not be specified in the saved search. |
max_concurrent | The maximum number of concurrent instances of this search the scheduler is allowed to run. |
next_scheduled_time | Time when the scheduler runs this search again. |
orphan | If add_orphan_field has been specified in the GET request, indicates whether the search is orphaned.
|
qualifiedSearch | The exact search string that the scheduler would run. |
realtime_schedule | Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. See the POST parameter for this attribute for details. |
request.ui_dispatch_app | A field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Specifies whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search.
Note: The peer can be a newly added peer or a peer down and now available. |
run_n_times | Runs this search exactly the specified number of times. Does not run the search again until the Splunk platform is restarted. |
run_on_startup | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. Defaults to 0. This parameter should be set to 1 for scheduled searches that populate lookup tables. |
schedule_priority | Indicates the scheduling priority of a specific search. One of the following values:
[ default | higher | highest ]
A non-default priority can apply to both real-time-scheduled searches (realtime_schedule=1) and continuous-scheduled searches (realtime_schedule=0).
This is the high-to-low priority order (where RTSS = real-time-scheduled search, CSS = continuous-scheduled search, d = default, h = higher, H = highest): RTSS(H) > CSS(H) > RTSS(h) > RTSS(d) > CSS(h) > CSS(d). Changing the priority requires the search owner to have the edit_search_schedule_priority capability. Defaults to default. |
schedule_window | Time window (in minutes) during which the search has lower priority. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period. If set to auto , the scheduler prioritizes searches automatically.
|
search | Search expression to filter the response. The response matches field values against the search expression. For example:
search=foo matches any object that has "foo" as a substring in a field. search=field_name%3Dfield_value restricts the match to a single field. URI-encoding is required in this example. |
vsid | The viewstate id associated with the UI view listed in 'displayview'.
Must match up to a stanza in viewstates.conf. |
Example requests and responses
XML Request
curl -k -u admin:pass https://localhost:8089/services/saved/searches
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://fool01:8092/services/saved/searches</id> <updated>2021-04-29T09:22:44-07:00</updated> <generator build="84cbec3d51a6" version="8.2.2105"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <link href="/services/saved/searches/_acl" rel="_acl"/> <opensearch:totalResults>8</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>Errors in the last 24 hours</title> <id>https://fool01:8092/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours</id> <updated>1969-12-31T16:00:00-08:00</updated> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours" rel="alternate"/> <author> <name>nobody</name> </author> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours" rel="list"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours/_reload" rel="_reload"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours" rel="edit"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours/disable" rel="disable"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours/dispatch" rel="dispatch"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours/embed" rel="embed"/> <link href="/servicesNS/nobody/search/saved/searches/Errors%20in%20the%20last%2024%20hours/history" rel="history"/> <content type="text/xml"> <s:dict> <s:key name="action.email">0</s:key> <s:key name="action.email.sendresults"></s:key> <s:key name="action.email.to"></s:key> <s:key name="action.populate_lookup">0</s:key> <s:key name="action.rss">0</s:key> <s:key name="action.script">0</s:key> <s:key name="action.summary_index">0</s:key> <s:key name="action.summary_index.force_realtime_schedule">0</s:key> <s:key name="actions"></s:key> <s:key name="alert.digest_mode">1</s:key> <s:key name="alert.expires">24h</s:key> <s:key name="alert.managedBy"></s:key> <s:key name="alert.severity">3</s:key> <s:key name="alert.suppress"></s:key> <s:key name="alert.suppress.fields"></s:key> <s:key name="alert.suppress.group_name"></s:key> <s:key name="alert.suppress.period"></s:key> <s:key name="alert.track">0</s:key> <s:key name="alert_comparator"></s:key> <s:key name="alert_condition"></s:key> <s:key name="alert_threshold"></s:key> <s:key name="alert_type">always</s:key> <s:key name="allow_skew">0</s:key> <s:key name="auto_summarize">0</s:key> <s:key name="auto_summarize.command"><![CDATA[| summarize override=partial timespan=$auto_summarize.timespan$ max_summary_size=$auto_summarize.max_summary_size$ max_summary_ratio=$auto_summarize.max_summary_ratio$ max_disabled_buckets=$auto_summarize.max_disabled_buckets$ max_time=$auto_summarize.max_time$ [ $search$ ]]]></s:key> <s:key name="auto_summarize.cron_schedule">*/10 * * * *</s:key> <s:key name="auto_summarize.dispatch.earliest_time"></s:key> <s:key name="auto_summarize.dispatch.latest_time"></s:key> <s:key name="auto_summarize.dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="auto_summarize.dispatch.ttl">60</s:key> <s:key 
name="auto_summarize.max_concurrent">1</s:key> <s:key name="auto_summarize.max_disabled_buckets">2</s:key> <s:key name="auto_summarize.max_summary_ratio">0.1</s:key> <s:key name="auto_summarize.max_summary_size">52428800</s:key> <s:key name="auto_summarize.max_time">3600</s:key> <s:key name="auto_summarize.suspend_period">24h</s:key> <s:key name="auto_summarize.timespan"></s:key> <s:key name="auto_summarize.workload_pool"></s:key> <s:key name="cron_schedule"></s:key> <s:key name="defer_scheduled_searchable_idxc">0</s:key> <s:key name="description"></s:key> <s:key name="disabled">0</s:key> <s:key name="dispatch.allow_partial_results">1</s:key> <s:key name="dispatch.auto_cancel">0</s:key> <s:key name="dispatch.auto_pause">0</s:key> <s:key name="dispatch.buckets">0</s:key> <s:key name="dispatch.earliest_time">-1d</s:key> <s:key name="dispatch.index_earliest"></s:key> <s:key name="dispatch.index_latest"></s:key> <s:key name="dispatch.indexedRealtime"></s:key> <s:key name="dispatch.indexedRealtimeMinSpan"></s:key> <s:key name="dispatch.indexedRealtimeOffset"></s:key> <s:key name="dispatch.latest_time"></s:key> <s:key name="dispatch.lookups">1</s:key> <s:key name="dispatch.max_count">500000</s:key> <s:key name="dispatch.max_time">0</s:key> <s:key name="dispatch.reduce_freq">10</s:key> <s:key name="dispatch.rt_backfill">0</s:key> <s:key name="dispatch.rt_maximum_span"></s:key> <s:key name="dispatch.sample_ratio">1</s:key> <s:key name="dispatch.spawn_process">1</s:key> <s:key name="dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="dispatch.ttl">2p</s:key> <s:key name="dispatchAs">owner</s:key> <!-- display settings elided--> <s:key name="displayview"></s:key> <s:key name="durable.backfill_type">auto</s:key> <s:key name="durable.lag_time">0</s:key> <s:key name="durable.max_backfill_intervals">0</s:key> <s:key name="durable.track_time_type"></s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">nobody</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">app</s:key> </s:dict> </s:key> <s:key name="embed.enabled">0</s:key> <s:key name="federated.provider"></s:key> <s:key name="is_scheduled">0</s:key> <s:key name="is_visible">1</s:key> <s:key name="max_concurrent">1</s:key> <s:key name="next_scheduled_time"></s:key> <s:key name="qualifiedSearch">search error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )</s:key> <s:key name="realtime_schedule">1</s:key> <s:key name="request.ui_dispatch_app"></s:key> <s:key name="request.ui_dispatch_view"></s:key> <s:key name="restart_on_searchpeer_add">1</s:key> <s:key name="run_n_times">0</s:key> <s:key name="run_on_startup">0</s:key> <s:key name="schedule_as">auto</s:key> <s:key name="schedule_priority">default</s:key> <s:key name="schedule_window">0</s:key> <s:key name="search">error OR failed OR severe OR ( sourcetype=access_* ( 404 OR 500 OR 503 ) )</s:key> <s:key name="skip_scheduled_realtime_idxc">0</s:key> <s:key name="vsid"></s:key> <s:key name="workload_pool"></s:key> </s:dict> </content> </entry> </feed>
POST
Create a saved search.
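Before the full parameter reference that follows, here is a minimal sketch of a create request. It assumes the admin user's search app context and supplies only the required name along with the search string to save; both values are hypothetical.
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches -d name="Errors in the last day" --data-urlencode search="error OR failed OR severe earliest=-1d"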
Request parameters
Name | Type | Description |
---|---|---|
action.<action_name> | Boolean | Enable or disable an alert action. See alert_actions.conf for available alert action types.
|
action.<action_name>.<parameter> | String | Use this syntax to configure action parameters. See alert_actions.conf for parameter options. |
action.summary_index._type | String | Specifies the data type of the summary index where the Splunk software saves the results of the scheduled search. Can be set to event or metric .
|
action.summary_index.force_realtime_schedule | Boolean | By default, realtime_schedule is false for a report configured for summary indexing. When set to 1 or true , this setting overrides realtime_schedule. Setting it to true can cause gaps in summary data, as a realtime_schedule search is skipped if search concurrency limits are violated.
|
actions | String | A comma-separated list of actions to enable.
For example: rss,email |
alert.digest_mode | Boolean | Specifies whether alert actions are applied to the entire result set or to each individual result. Defaults to 1. |
alert.expires | Number | Valid values: [number][time-unit]
Sets the period of time to show the alert in the dashboard. Defaults to 24h. Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.severity | Enum | Valid values: (1 | 2 | 3 | 4 | 5 | 6)
Sets the alert severity level. Valid values are: 1 DEBUG, 2 INFO, 3 WARN (default), 4 ERROR, 5 SEVERE, 6 FATAL. |
alert.suppress | Boolean | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | String | Comma delimited list of fields to use for suppression when doing per result alerting. Required if suppression is turned on and per result alerting is enabled. |
alert.suppress.group_name | String | Optional setting. Used to define an alert suppression group for a set of alerts that are running over identical or very similar datasets. Alert suppression groups can help you avoid getting multiple triggered alert notifications for the same data. |
alert.suppress.period | Number | Valid values: [number][time-unit]
Specifies the suppression period. Only valid if alert.suppress is enabled. Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | String | Valid values: (true | false | auto)
Specifies whether to track the actions triggered by this scheduled search.
|
alert_comparator | String | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc. Used with alert_threshold to trigger alert actions. |
alert_condition | String | Contains a conditional search that is evaluated against the results of the saved search. Defaults to an empty string.
Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Number | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | String | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
allow_skew | 0, <percentage>, or <duration> | Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods. Defaults to 0.
This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with the following cron_schedule values, the search scheduler randomly skews the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can only skew the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. The value can be expressed as a percentage of the search period or as a duration.
Examples: 100% (for an every-5-minute search) = 5 minutes maximum. 50% (for an every-minute search) = 30 seconds maximum. 5m = 5 minutes maximum. 1h = 1 hour maximum. |
args.* | String | Wildcard argument that accepts any saved search template argument, such as args.username=foobar when the search is search $username$. |
auto_summarize | Boolean | Indicates whether the scheduler should ensure that the data for this search is automatically summarized. Defaults to 0. |
auto_summarize.command | String | An auto summarization template for this search. See auto summarization options in savedsearches.conf for more details.
Do not change unless you understand the architecture of saved search auto summarization. |
auto_summarize.cron_schedule | String | Cron schedule that probes and generates the summaries for this saved search.
The default value is */10 * * * * (every 10 minutes). |
auto_summarize.dispatch.earliest_time | String | A time string that specifies the earliest time for summarizing this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
auto_summarize.dispatch.latest_time | String | A time string that specifies the latest time for summarizing this saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
auto_summarize.dispatch.time_format | String | Defines the time format used to specify the earliest and latest time. Defaults to %FT%T.%Q%:z
|
auto_summarize.dispatch.ttl | String | Valid values: Integer[p]
Indicates the time to live (ttl), in seconds, for the artifacts of the summarization of the scheduled search. Defaults to 60. |
auto_summarize.max_concurrent | Number | The maximum number of concurrent instances of this auto summarizing search that the scheduler is allowed to run. |
auto_summarize.max_disabled_buckets | Number | The maximum number of buckets with the suspended summarization before the summarization search is completely stopped, and the summarization of the search is suspended for auto_summarize.suspend_period. Defaults to 2. |
auto_summarize.max_summary_ratio | Number | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket. Defaults to 0.1.
Note: The test is only performed if the summary size is larger than auto_summarize.max_summary_size. |
auto_summarize.max_summary_size | Number | The minimum summary size, in bytes, before testing whether the summarization is helpful.
The default value, 52428800, is equivalent to 50MB. |
auto_summarize.max_time | Number | The maximum time, in seconds, that the summary search is allowed to run. Defaults to 3600.
Note: This is an approximate time. The summary search stops at clean bucket boundaries. |
auto_summarize.suspend_period | String | The amount of time to suspend summarization of this search if the summarization is deemed unhelpful. Defaults to 24h. |
auto_summarize.timespan | String | Comma-delimited list of time ranges that each summarized chunk should span. Comprises the list of available granularity levels for which summaries would be available. Does not support 1w timespans.
For example, a timechart over the last month whose granularity is at the day level should set this to 1d. |
cron_schedule | String | Valid values: cron string
The cron schedule to execute this search. cron lets you use standard cron notation to define your scheduled search interval. For example, 00,20,40 * * * * runs the search every hour at hh:00, hh:20, and hh:40. To reduce system load, schedule your searches so that they are staggered over time. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. |
description | String | Human-readable description of this saved search. Defaults to empty string. |
disabled | Boolean | Indicates whether the saved search is enabled. Defaults to 0. Disabled saved searches are not visible in Splunk Web. |
dispatch.* | String | Wildcard argument that accepts any dispatch related argument. |
dispatch.allow_partial_results | Boolean | Specifies whether the search job can proceed to provide partial results if a search peer fails. When set to false, the search job fails if a search peer providing results for the search job fails. |
dispatch.auto_cancel | Number | Specifies the amount of inactive time, in seconds, after which the search job is automatically canceled. |
dispatch.auto_pause | Number | Specifies the amount of inactive time, in seconds, after which the search job is automatically paused. |
dispatch.buckets | Number | The maximum number of timeline buckets. Defaults to 0. |
dispatch.earliest_time | String | A time string that specifies the earliest time for this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.index_earliest | String | A time string that specifies the earliest index time for this search. Can be a relative or absolute time. |
dispatch.index_latest | String | A time string that specifies the latest index time for this saved search. Can be a relative or absolute time. |
dispatch.indexedRealtime | Boolean | Indicates whether to use indexed-realtime mode when doing real-time searches. |
dispatch.indexedRealtimeOffset | Number | Allows for a per-job override of the [search] indexed_realtime_disk_sync_delay setting in limits.conf .Default for saved searches is "unset", falling back to limits.conf setting.
|
dispatch.indexedRealtimeMinSpan | Number | Allows for a per-job override of the [search] indexed_realtime_default_span setting in limits.conf .Default for saved searches is "unset", falling back to the limits.conf setting.
|
dispatch.latest_time | String | A time string that specifies the latest time for this saved search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Boolean | Enables or disables the lookups for this search. Defaults to 1. |
dispatch.max_count | Number | The maximum number of results before finalizing the search. Defaults to 500000. |
dispatch.max_time | Number | Indicates the maximum amount of time (in seconds) before finalizing the search. Defaults to 0. |
dispatch.reduce_freq | Number | Specifies, in seconds, how frequently the MapReduce reduce phase runs on accumulated map values. Defaults to 10. |
dispatch.rt_backfill | Boolean | Whether to backfill the real-time window for this search. This parameter is valid only for real-time searches. Defaults to 0. |
dispatch.rt_maximum_span | Number | Allows for a per-job override of the [search] indexed_realtime_maximum_span setting in limits.conf .Default for saved searches is "unset", falling back to the limits.conf setting.
|
dispatch.sample_ratio | Number | The integer value used to calculate the sample ratio. The formula is 1 / <integer> .
|
dispatch.spawn_process | Boolean | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Specifies whether to spawn a new search process when this saved search is executed. Defaults to 1. Searches against indexes must run in a separate process. |
dispatch.time_format | String | A time format string that defines the time format for specifying the earliest and latest time. Defaults to %FT%T.%Q%:z .
|
dispatch.ttl | Number | Valid values: Integer[p]. Defaults to 2p.
Indicates the time to live (in seconds) for the artifacts of the scheduled search, if no actions are triggered. If an action is triggered, the action ttl is used. If multiple actions are triggered, the maximum ttl is applied to the artifacts. To set the action ttl, refer to the ttl setting for that action in alert_actions.conf. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. |
dispatchAs | String | When the saved search is dispatched using the "saved/searches/{name}/dispatch" endpoint, this setting controls what user that search is dispatched as. Only meaningful for shared saved searches. Can be set to owner or user .
|
displayview | String | Defines the default UI view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | String | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto , time_interval , and time_whole .
|
durable.lag_time | Number | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.max_backfill_intervals | Number | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.track_time_type | String | Indicates that a scheduled search is durable and specifies how the search tracks events. A durable search is a search that tries to ensure the delivery of all results, even when the search process is slowed or stopped by runtime issues like rolling restarts, network bottlenecks, and even downed servers. Applies only to scheduled searches. A value of _time means the durable search tracks each event by its event timestamp, based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none. |
is_scheduled | Boolean | Whether this search is to be run on a schedule. |
is_visible | Boolean | Specifies whether this saved search should be listed in the visible saved search list. Defaults to 1. |
max_concurrent | Number | The maximum number of concurrent instances of this search the scheduler is allowed to run. Defaults to 1. |
name | String | Required. A name for the search. |
next_scheduled_time | String | Read-only attribute. Value ignored on POST. Some older clients still send this value. |
qualifiedSearch | String | Read-only attribute. Value ignored on POST. This value is computed during runtime. |
realtime_schedule | Boolean | Controls the way the scheduler computes the next execution time of a scheduled search. Defaults to 1. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. If set to 0, the scheduler never skips scheduled execution periods. However, the execution of the saved search might fall behind depending on the scheduler load. Use continuous scheduling whenever you enable the summary index option. If set to 1, the scheduler might skip some execution periods to make sure that the scheduler is executing the searches running over the most recent time range. The scheduler tries to execute searches that have realtime_schedule set to 1 before it executes searches that have continuous scheduling (realtime_schedule = 0). |
request.ui_dispatch_app | String | Specifies a field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | String | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Boolean | Specifies whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search. Defaults to 1.
Note: The peer can be a newly added peer or a peer down and now available. |
run_n_times | Number | Runs this search exactly the specified number of times. Does not run the search again until the Splunk platform is restarted. |
run_on_startup | Boolean | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time. Defaults to 0. Set run_on_startup to 1 for scheduled searches that populate lookup tables.
|
schedule_priority | String | Configures the scheduling priority of a specific search. One of the following values:
[ default | higher | highest ]
Priority is evaluated relative to the two scheduling tiers: real-time-scheduled searches (realtime_schedule=1) and continuous-scheduled searches (realtime_schedule=0).
This is the high-to-low priority order (where RTSS = real-time-scheduled search, CSS = continuous-scheduled search, d = default, h = higher, H = highest): RTSS(H) > CSS(H) > RTSS(h) > RTSS(d) > CSS(h) > CSS(d). Changing the priority requires the search owner to have the edit_search_schedule_priority capability. Defaults to default. |
schedule_window | Number or auto |
Time window (in minutes) during which the search has lower priority. Defaults to 0. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period. Set to auto to let the scheduler determine the optimal window value automatically. Requires the edit_search_schedule_window capability to override auto .
|
search | String | Required. The search to save. |
vsid | String | Defines the viewstate id associated with the UI view listed in 'displayview'.
Must match up to a stanza in viewstates.conf. |
workload_pool | String | Specifies the new workload pool where the existing running search will be placed. |
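The following request is a minimal sketch of how several of these parameters combine to create and schedule a saved search. The search name errors_hourly, the credentials, and the search string are placeholders chosen for illustration, not values taken from an actual deployment.
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches \
 -d name=errors_hourly \
 -d cron_schedule="0 * * * *" \
 -d is_scheduled=1 \
 -d schedule_window=auto \
 -d dispatch.earliest_time="-24h@h" \
 -d dispatch.latest_time=now \
 --data-urlencode search='search index=_internal log_level=ERROR | stats count by host'
Because is_scheduled is set to 1 and schedule_window is set to auto, the scheduler runs the search hourly and chooses the scheduling window itself, as described in the schedule_window row of this table.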
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular search. For more information about the alert action options see the alert_actions.conf file.
|
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the search configuration to which it is applied.
|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings; however, you can set a clear text password here, and it is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates.
Defaults to splunk@$LOCALHOST or whatever value is set in alert_actions.conf. |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port which are configured within splunk are used to construct the base of the url. When this value begins with 'http://', it is used verbatim. NOTE: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a url that can be referenced externally, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to current hostname provided by the operating system, or if that fails "localhost". When set to empty, default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the global maximum number of search results to send when action.email is enabled. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportCIDFontList | Space-separated list. Specifies the set (and load order) of CID fonts for handling Simplified Chinese(gb), Traditional Chinese(cns), Japanese(jp), and Korean(kor) in Integrated PDF Rendering.
If multiple fonts provide a glyph for a given character code, the glyph from the first font specified in the list is used. To skip loading any CID fonts, specify the empty string. Default value: "gb cns jp kor" |
action.email.reportIncludeSplunkLogo | Indicates whether to include the Splunk logo with the report. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.reportServerEnabled | Not supported. |
action.email.reportServerURL | Not supported. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If the integer is followed by the letter 'p', the ttl is interpreted as a number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
action.email.width_sort_columns | Indicates whether columns should be sorted from least wide to most wide, left to right.
Only valid if format=text. |
action.populate_lookup | Indicates the state of the populate lookup action. |
action.populate_lookup.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.populate_lookup.dest | Lookup name or path of the lookup to populate. |
action.populate_lookup.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.populate_lookup.maxresults | The maximum number of search results sent using alerts. |
action.populate_lookup.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.populate_lookup.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.populate_lookup.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, then this specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p] |
action.rss | Indicates the state of the RSS action. |
action.rss.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.rss.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.rss.maxresults | Sets the maximum number of search results sent using alerts. |
action.rss.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted.
Valid values are Integer[m|s|h|d]. |
action.rss.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.rss.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.script | Indicates the state of the script for this action. |
action.script.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.script.filename | File name of the script to call. Required if script action is enabled. |
action.script.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.script.maxresults | Sets the maximum number of search results sent using alerts. |
action.script.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. |
action.script.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.script.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 600 (10 minutes).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.summary_index | Indicates the state of the summary index. |
action.summary_index._name | Specifies the name of the summary index where the results of the scheduled search are saved.
Defaults to "summary." |
action.summary_index.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.summary_index.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.summary_index.inline | Determines whether to execute the summary indexing action as part of the scheduled search.
Note: This option is considered only if the summary index action is enabled and is always executed (in other words, if counttype = always). |
action.summary_index.maxresults | Sets the maximum number of search results sent using alerts. |
action.summary_index.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.summary_index.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.summary_index.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
actions | Actions triggered by this alert. |
alert.digest_mode | Indicates if the alert actions are applied to the entire result set or to each individual result. |
alert.expires | Sets the period of time to show the alert in the dashboard. Defaults to 24h.
Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. Valid values: [number][time-unit] |
alert.severity | Valid values: (1 | 2 | 3 | 4 | 5 | 6)
Sets the alert severity level: 1 = DEBUG, 2 = INFO, 3 = WARN, 4 = ERROR, 5 = SEVERE, 6 = FATAL.
|
alert.suppress | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | Fields to use for suppression when doing per result alerting. Required if suppression is turned on and per result alerting is enabled. |
alert.suppress.period | Specifies the suppression period. Only valid if alert.suppress is enabled.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Specifies whether to track the actions triggered by this scheduled search.
auto - determine whether to track or not based on the tracking setting of each action, do not track scheduled searches that always trigger actions. true - force alert tracking. false - disable alert tracking for this search. |
alert_comparator | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc |
alert_condition | A conditional search that is evaluated against the results of the saved search. Defaults to an empty string.
Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
allow_skew | 0 | <percentage> | <duration>
Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods. This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with one of the following cron_schedule values, the scheduler can randomly skew the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can randomly skew only the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. A value of 0 disallows skew. A percentage value specifies the maximum skew as a percentage of the scheduled search period. A duration value specifies the maximum skew directly. Valid duration units: m, min, minute(s), h, hr, hour(s), d, day(s).
Examples: 100% (for an every-5-minute search) = 5 minutes maximum. 50% (for an every-minute search) = 30 seconds maximum. 5m = 5 minutes maximum. 1h = 1 hour maximum. |
args.* | Wildcard argument that accepts any saved search template argument, such as args.username=foobar when the search is search $username$. |
auto_summarize | Indicates whether the scheduler should ensure that the data for this search is automatically summarized. |
auto_summarize.command | A search template that constructs the auto summarization for this search.
Caution: Advanced feature. Do not change unless you understand the architecture of auto summarization of saved searches. |
auto_summarize.cron_schedule | Cron schedule that probes and generates the summaries for this saved search. |
auto_summarize.dispatch.earliest_time | A time string that specifies the earliest time for summarizing this search. Can be a relative or absolute time. |
auto_summarize.dispatch.latest_time | A time string that specifies the latest time for this saved search. Can be a relative or absolute time. |
auto_summarize.dispatch.time_format | Time format used to specify the earliest and latest times. |
auto_summarize.dispatch.ttl | Indicates the time to live (in seconds) for the artifacts of the summarization of the scheduled search. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. |
auto_summarize.max_disabled_buckets | The maximum number of buckets with suspended summarization before the summarization search is completely stopped, and the summarization of the search is suspended for auto_summarize.suspend_period. |
auto_summarize.max_summary_ratio | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket.
Note: The test is only performed if the summary size is larger than auto_summarize.max_summary_size. |
auto_summarize.max_summary_size | The minimum summary size, in bytes, before testing whether the summarization is helpful. |
auto_summarize.max_time | Maximum time (in seconds) that the summary search is allowed to run.
Note: This is an approximate time. The summary search stops at clean bucket boundaries. |
auto_summarize.suspend_period | Time specifier indicating when to suspend summarization of this search if the summarization is deemed unhelpful. |
auto_summarize.timespan | The list of time ranges that each summarized chunk should span. This comprises the list of available granularity levels for which summaries would be available.
For example a timechart over the last month whose granularity is at the day level should set this to 1d. If you need the same data summarized at the hour level for weekly charts, use: 1h,1d. |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of this saved search. Defaults to empty string. |
disabled | Indicates if this saved search is disabled. |
dispatch.* | * represents any custom dispatch field. |
dispatch.buckets | The maximum number of timeline buckets. |
dispatch.earliest_time | A time string that specifies the earliest time for this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.indexedRealtime | Indicates whether to use indexed-realtime mode when doing real-time searches. |
dispatch.latest_time | A time string that specifies the latest time for the saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Indicates if lookups are enabled for this search. |
dispatch.max_count | The maximum number of results before finalizing the search. |
dispatch.max_time | Indicates the maximum amount of time (in seconds) before finalizing the search. |
dispatch.reduce_freq | Specifies how frequently the MapReduce reduce phase runs on accumulated map values. |
dispatch.rt_backfill | Indicates whether to back fill the real time window for this search. Parameter valid only if this is a real time search. |
dispatch.spawn_process | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Indicates whether a new search process spawns when this saved search is executed. |
dispatch.time_format | Time format string that defines the time format for specifying the earliest and latest time. |
dispatch.ttl | Indicates the time to live (in seconds) for the artifacts of the scheduled search, if no actions are triggered.
If an action is triggered, the action ttl is used. If multiple actions are triggered, the maximum ttl is applied to the artifacts. To set the action ttl, refer to the ttl setting for that action in alert_actions.conf. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. |
displayview | Defines the default UI view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto , time_interval , and time_whole .
|
durable.lag_time | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.max_backfill_intervals | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.track_time_type | Indicates that a scheduled search is durable and specifies how the search tracks events. A value of _time means the durable search tracks each event by its event timestamp , based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none .
|
is_scheduled | Indicates if this search is to be run on a schedule. |
is_visible | Indicates if this saved search appears in the visible saved search list. |
max_concurrent | The maximum number of concurrent instances of this search the scheduler is allowed to run. |
next_scheduled_time | The time when the scheduler runs this search again. |
qualifiedSearch | The exact search string that the scheduler would run. |
realtime_schedule | Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. If set to 0, the scheduler never skips scheduled execution periods. However, the execution of the saved search might fall behind depending on the scheduler load. Use continuous scheduling whenever you enable the summary index option. If set to 1, the scheduler might skip some execution periods to make sure that the scheduler is executing the searches running over the most recent time range. The scheduler tries to execute searches that have realtime_schedule set to 1 before it executes searches that have continuous scheduling (realtime_schedule = 0). |
request.ui_dispatch_app | A field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Indicates whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search.
Note: The peer can be a newly added peer or a peer down and now available. |
run_on_startup | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time.
Splunk recommends that you set run_on_startup to true for scheduled searches that populate lookup tables. |
schedule_window | Time window (in minutes) during which the search has lower priority. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period. If set to auto , the scheduler prioritizes searches automatically.
|
search | Search expression to filter the response. The response matches field values against the search expression. For example:
search=foo matches any object that has "foo" as a substring in a field. search=field_name%3Dfield_value restricts the match to a single field. URI-encoding is required in this example. |
vsid | The viewstate id associated with the UI view listed in 'displayview'.
Matches to a stanza in viewstates.conf. |
Example request and response
XML Request
curl -k -u admin:chang2me https://fool01:8092/services/saved/searches/ \ -d name=test_durable \ -d cron_schedule="*/3 * * * *" \ -d description="This test job is a durable saved search" \ -d dispatch.earliest_time="-24h@h" -d dispatch.latest_time=now \ --data-urlencode search='search index="_internal" | stats count by host'
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://fool01:8092/services/saved/searches</id> <updated>2021-04-29T09:56:53-07:00</updated> <generator build="84cbec3d51a6" version="8.2.2105"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <link href="/services/saved/searches/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>test_durable</title> <id>https://fool01:8092/servicesNS/admin/search/saved/searches/test_durable</id> <updated>2021-04-29T09:56:53-07:00</updated> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="list"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="edit"/> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="remove"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/move" rel="move"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/disable" rel="disable"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/embed" rel="embed"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/history" rel="history"/> <content type="text/xml"> <s:dict> <s:key name="action.email">0</s:key> <!-- action settings elided --> <s:key name="alert.digest_mode">1</s:key> <s:key name="alert.expires">24h</s:key> <s:key name="alert.managedBy"></s:key> <s:key name="alert.severity">3</s:key> <s:key name="alert.suppress"></s:key> <s:key name="alert.suppress.fields"></s:key> <s:key name="alert.suppress.group_name"></s:key> <s:key name="alert.suppress.period"></s:key> <s:key name="alert.track">0</s:key> <s:key name="alert_comparator"></s:key> <s:key name="alert_condition"></s:key> <s:key name="alert_threshold"></s:key> <s:key name="alert_type">always</s:key> <s:key name="allow_skew">0</s:key> <s:key name="auto_summarize">0</s:key> <s:key name="auto_summarize.command"><![CDATA[| summarize override=partial timespan=$auto_summarize.timespan$ max_summary_size=$auto_summarize.max_summary_size$ max_summary_ratio=$auto_summarize.max_summary_ratio$ max_disabled_buckets=$auto_summarize.max_disabled_buckets$ max_time=$auto_summarize.max_time$ [ $search$ ]]]></s:key> <s:key name="auto_summarize.cron_schedule">*/10 * * * *</s:key> <s:key name="auto_summarize.dispatch.earliest_time"></s:key> <s:key name="auto_summarize.dispatch.latest_time"></s:key> <s:key name="auto_summarize.dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="auto_summarize.dispatch.ttl">60</s:key> <s:key name="auto_summarize.max_concurrent">1</s:key> <s:key name="auto_summarize.max_disabled_buckets">2</s:key> <s:key name="auto_summarize.max_summary_ratio">0.1</s:key> <s:key name="auto_summarize.max_summary_size">52428800</s:key> <s:key name="auto_summarize.max_time">3600</s:key> <s:key name="auto_summarize.suspend_period">24h</s:key> <s:key name="auto_summarize.timespan"></s:key> <s:key name="auto_summarize.workload_pool"></s:key> <s:key 
name="cron_schedule">*/3 * * * *</s:key> <s:key name="defer_scheduled_searchable_idxc">0</s:key> <s:key name="description">This test job is a durable saved search</s:key> <s:key name="disabled">0</s:key> <s:key name="dispatch.allow_partial_results">1</s:key> <s:key name="dispatch.auto_cancel">0</s:key> <s:key name="dispatch.auto_pause">0</s:key> <s:key name="dispatch.buckets">0</s:key> <s:key name="dispatch.earliest_time">-24h@h</s:key> <s:key name="dispatch.index_earliest"></s:key> <s:key name="dispatch.index_latest"></s:key> <s:key name="dispatch.indexedRealtime"></s:key> <s:key name="dispatch.indexedRealtimeMinSpan"></s:key> <s:key name="dispatch.indexedRealtimeOffset"></s:key> <s:key name="dispatch.latest_time">now</s:key> <s:key name="dispatch.lookups">1</s:key> <s:key name="dispatch.max_count">500000</s:key> <s:key name="dispatch.max_time">0</s:key> <s:key name="dispatch.reduce_freq">10</s:key> <s:key name="dispatch.rt_backfill">0</s:key> <s:key name="dispatch.rt_maximum_span"></s:key> <s:key name="dispatch.sample_ratio">1</s:key> <s:key name="dispatch.spawn_process">1</s:key> <s:key name="dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="dispatch.ttl">2p</s:key> <s:key name="dispatchAs">owner</s:key> <!-- display settings elided --> <s:key name="displayview"></s:key> <s:key name="durable.backfill_type">auto</s:key> <s:key name="durable.lag_time">0</s:key> <s:key name="durable.max_backfill_intervals">0</s:key> <s:key name="durable.track_time_type"></s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list> <!-- action settings elided --> <s:item>actions</s:item> <s:item>alert.digest_mode</s:item> <s:item>alert.expires</s:item> <s:item>alert.managedBy</s:item> <s:item>alert.severity</s:item> <s:item>alert.suppress</s:item> <s:item>alert.suppress.fields</s:item> <s:item>alert.suppress.group_name</s:item> <s:item>alert.suppress.period</s:item> <s:item>alert.track</s:item> <s:item>alert_comparator</s:item> <s:item>alert_condition</s:item> <s:item>alert_threshold</s:item> <s:item>alert_type</s:item> <s:item>allow_skew</s:item> <s:item>auto_summarize</s:item> <s:item>auto_summarize.command</s:item> <s:item>auto_summarize.cron_schedule</s:item> <s:item>auto_summarize.dispatch.earliest_time</s:item> <s:item>auto_summarize.dispatch.latest_time</s:item> <s:item>auto_summarize.dispatch.time_format</s:item> <s:item>auto_summarize.dispatch.ttl</s:item> <s:item>auto_summarize.max_concurrent</s:item> <s:item>auto_summarize.max_disabled_buckets</s:item> <s:item>auto_summarize.max_summary_ratio</s:item> <s:item>auto_summarize.max_summary_size</s:item> <s:item>auto_summarize.max_time</s:item> <s:item>auto_summarize.suspend_period</s:item> <s:item>auto_summarize.timespan</s:item> <s:item>auto_summarize.workload_pool</s:item> <s:item>cron_schedule</s:item> <s:item>defer_scheduled_searchable_idxc</s:item> <s:item>description</s:item> <s:item>disabled</s:item> <s:item>dispatch.allow_partial_results</s:item> <s:item>dispatch.auto_cancel</s:item> <s:item>dispatch.auto_pause</s:item> 
<s:item>dispatch.buckets</s:item> <s:item>dispatch.earliest_time</s:item> <s:item>dispatch.index_earliest</s:item> <s:item>dispatch.index_latest</s:item> <s:item>dispatch.indexedRealtime</s:item> <s:item>dispatch.indexedRealtimeMinSpan</s:item> <s:item>dispatch.indexedRealtimeOffset</s:item> <s:item>dispatch.latest_time</s:item> <s:item>dispatch.lookups</s:item> <s:item>dispatch.max_count</s:item> <s:item>dispatch.max_time</s:item> <s:item>dispatch.reduce_freq</s:item> <s:item>dispatch.rt_backfill</s:item> <s:item>dispatch.rt_maximum_span</s:item> <s:item>dispatch.sample_ratio</s:item> <s:item>dispatch.spawn_process</s:item> <s:item>dispatch.time_format</s:item> <s:item>dispatch.ttl</s:item> <s:item>dispatchAs</s:item> <!-- display settings elided --> <s:item>displayview</s:item> <s:item>durable.backfill_type</s:item> <s:item>durable.lag_time</s:item> <s:item>durable.max_backfill_intervals</s:item> <s:item>durable.track_time_type</s:item> <s:item>estimatedResultCount</s:item> <s:item>federated.provider</s:item> <s:item>hint</s:item> <s:item>is_scheduled</s:item> <s:item>is_visible</s:item> <s:item>max_concurrent</s:item> <s:item>next_scheduled_time</s:item> <s:item>numFields</s:item> <s:item>qualifiedSearch</s:item> <s:item>realtime_schedule</s:item> <s:item>request.ui_dispatch_app</s:item> <s:item>request.ui_dispatch_view</s:item> <s:item>restart_on_searchpeer_add</s:item> <s:item>run_n_times</s:item> <s:item>run_on_startup</s:item> <s:item>schedule_as</s:item> <s:item>schedule_priority</s:item> <s:item>schedule_window</s:item> <s:item>search</s:item> <s:item>skip_scheduled_realtime_idxc</s:item> <s:item>vsid</s:item> <s:item>workload_pool</s:item> </s:list> </s:key> <s:key name="requiredFields"> <s:list> <s:item>name</s:item> </s:list> </s:key> <s:key name="wildcardFields"> <s:list> <s:item>action\..*</s:item> <s:item>args\..*</s:item> <s:item>dispatch\..*</s:item> <s:item>display\.statistics\.format\..*</s:item> <s:item>display\.visualizations\.custom\..*</s:item> <s:item>durable\..*</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="embed.enabled">0</s:key> <s:key name="federated.provider"></s:key> <s:key name="is_scheduled">0</s:key> <s:key name="is_visible">1</s:key> <s:key name="max_concurrent">1</s:key> <s:key name="next_scheduled_time"></s:key> <s:key name="qualifiedSearch">search search index=_internal | stats count by host</s:key> <s:key name="realtime_schedule">1</s:key> <s:key name="request.ui_dispatch_app"></s:key> <s:key name="request.ui_dispatch_view"></s:key> <s:key name="restart_on_searchpeer_add">1</s:key> <s:key name="run_n_times">0</s:key> <s:key name="run_on_startup">0</s:key> <s:key name="schedule_as">auto</s:key> <s:key name="schedule_priority">default</s:key> <s:key name="schedule_window">0</s:key> <s:key name="search">search index=_internal | stats count by host</s:key> <s:key name="skip_scheduled_realtime_idxc">0</s:key> <s:key name="vsid"></s:key> <s:key name="workload_pool"></s:key> </s:dict> </content> </entry> </feed>
saved/searches/{name}
https://<host>:<mPort>/services/saved/searches/{name}
Manage the {name} saved search.
DELETE
Delete the named saved search.
Request parameters
None
Returned values
None
Example request and response
curl -k -u admin:pass --request DELETE https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/" xmlns:s="http://dev.splunk.com/ns/rest"> <title>savedsearch</title> <id>https://localhost:8089/servicesNS/admin/search/saved/searches</id> <updated>2011-07-13T12:09:05-07:00</updated> <generator version="102824"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/saved/searches/_new" rel="create"/> <link href="/servicesNS/admin/search/saved/searches/_reload" rel="_reload"/> <!-- opensearch nodes elided for brevity. --> <s:messages/> </feed>
GET
Access the named saved search.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
earliest_time | String | | If the search is scheduled, display scheduled times starting from this time. |
latest_time | String | | If the search is scheduled, display scheduled times ending at this time. |
listDefaultActionArgs | Boolean | | Indicates whether to list default actions. |
add_orphan_field | Boolean | | Indicates whether the response includes a boolean value for each saved search to show whether the search is orphaned, meaning that it has no valid owner. When add_orphan_field is set to true, the response includes the orphaned search indicators, either 0 to indicate that a search is not orphaned or 1 to indicate that the search is orphaned. Admins can use this setting to check for searches without valid owners and resolve related issues. An example request that uses these parameters appears after this table.
|
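The following request is a hedged sketch of how these parameters might be passed as query arguments on a GET. The saved search name MySavedSearch and the credentials are placeholders.
curl -k -u admin:pass "https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch?listDefaultActionArgs=true&add_orphan_field=true"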
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular search. For more information about the alert action options see the alert_actions.conf file.
|
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the search configuration to which it is applied.
|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings; however, you can set a clear text password here that is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates. |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port which are configured within splunk are used to construct the base of the url. When this value begins with 'http://', it is used verbatim. Note: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a url that can be referenced externally, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to current hostname provided by the operating system, or if that fails "localhost." When set to empty, default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the global maximum number of search results to send when action.email is enabled. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.reportServerEnabled | Not supported. |
action.email.reportServerURL | Not supported. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If the integer is followed by the letter 'p', the ttl is interpreted as a number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
action.populate_lookup | The state of the populate lookup action. |
action.populate_lookup.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.populate_lookup.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.populate_lookup.maxresults | The maximum number of search results sent using alerts. |
action.populate_lookup.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.populate_lookup.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.populate_lookup.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, then this specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p] |
action.rss | The state of the RSS action. |
action.rss.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.rss.hostname | Sets the hostname used in the web link (url) sent in alert actions. |
action.rss.maxresults | Sets the maximum number of search results sent using alerts. |
action.rss.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 1m. |
action.rss.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.rss.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.script | The state of the script action. |
action.script.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.script.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.script.maxresults | The maximum number of search results sent using alerts. |
action.script.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. |
action.script.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.script.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 600 (10 minutes).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.summary_index | The state of the summary index action. |
action.summary_index._name | Specifies the name of the summary index where the results of the scheduled search are saved.
Defaults to "summary." |
action.summary_index._type | Specifies the data type of the summary index where the Splunk software saves the results of the scheduled search. Can be set to event or metric.
|
action.summary_index.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.summary_index.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.summary_index.force_realtime_schedule | By default, realtime_schedule is false for a report configured for summary indexing. When set to 1 or true, this setting overrides realtime_schedule. Setting this to true can cause gaps in summary data, as a realtime_schedule search is skipped if search concurrency limits are violated.
|
action.summary_index.inline | Determines whether to execute the summary indexing action as part of the scheduled search.
Note: This option is considered only if the summary index action is enabled and is always executed (in other words, if counttype = always). |
action.summary_index.maxresults | Sets the maximum number of search results sent using alerts. |
action.summary_index.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m. |
action.summary_index.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.summary_index.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
alert.digest_mode | Specifies whether alert actions are applied to the entire result set or to each individual result. |
alert.expires | Sets the period of time to show the alert in the dashboard. Defaults to 24h. |
alert.managedBy | Specifies the feature or component that created the alert. |
alert.severity | Valid values: (1 | 2 | 3 | 4 | 5 | 6)
Sets the alert severity level: 1 = DEBUG, 2 = INFO, 3 = WARN, 4 = ERROR, 5 = SEVERE, 6 = FATAL.
|
alert.suppress | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | List of fields to use when suppressing per-result alerts. Must be specified if the digest mode is disabled and suppression is enabled. |
alert.suppress.group_name | Optional setting. Used to define an alert suppression group for a set of alerts that are running over identical or very similar datasets. Alert suppression groups can help you avoid getting multiple triggered alert notifications for the same data. |
alert.suppress.period | Specifies the suppression period. Only valid if alert.suppress is enabled.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Specifies whether to track the actions triggered by this scheduled search.
|
alert_comparator | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc.
Used with alert_threshold to trigger alert actions. |
alert_condition | A conditional search that is evaluated against the results of the saved search. Defaults to an empty string. Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. Typically, reports return the "always" value, while alerts can return any other value. |
allow_skew | Valid values: 0, <percentage>, or <duration>.
Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods. This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with the following cron_schedule values, the search scheduler randomly skews the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can randomly skew only the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. A value of 0 disables skew. Express the value as a percentage of the scheduled search period or as a duration (for example, 5m or 1h).
Examples: 100% (for an every-5-minute search) = 5 minutes maximum skew, 50% (for an every-minute search) = 30 seconds maximum skew, 5m = 5 minutes maximum skew, 1h = 1 hour maximum skew. |
auto_summarize | Specifies whether the search scheduler should ensure that the data for this search is automatically summarized. |
auto_summarize.command | A search template to use to construct the auto-summarization for the search. Do not change. |
auto_summarize.cron_schedule | Cron schedule to use to probe or generate the summaries for this search |
auto_summarize.dispatch.<arg-name> | Dispatch options that can be overridden when running the summary search. |
auto_summarize.max_concurrent | The maximum number of concurrent instances of this auto summarizing search that the scheduler is allowed to run. |
auto_summarize.max_disabled_buckets | The maximum number of buckets with suspended summarization before the summarization search is completely stopped and the summarization of the search is suspended for the value specified by the auto_summarize.suspend_period setting. |
auto_summarize.max_summary_ratio | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket. |
auto_summarize.max_summary_size | The minimum summary size, in bytes, before testing whether the summarization is helpful. |
auto_summarize.max_time | The maximum time, in seconds, that the auto-summarization search is allowed to run. |
auto_summarize.suspend_period | The amount of time to suspend summarization of the search if the summarization is deemed unhelpful. |
auto_summarize.timespan | Comma-delimited list of time ranges that each summarized chunk should span. Comprises the list of available summary ranges for which summaries would be available. Does not support 1w timespans.
|
auto_summarize.workload_pool | Sets the name of the workload pool that is used by the auto-summarization of this search. |
cron_schedule | The cron schedule to run this search. For more information, refer to the description of this parameter in the POST endpoint. |
defer_scheduled_searchable_idxc | Specifies whether to defer a continuous saved search during a searchable rolling restart or searchable rolling upgrade of an indexer cluster. |
description | Description of this saved search. |
disabled | Indicates if this saved search is disabled. |
dispatch.allow_partial_results | Specifies whether the search job can proceed to provide partial results if a search peer fails. When set to false, the search job fails if a search peer providing results for the search job fails. |
dispatch.auto_cancel | Specifies the amount of inactive time, in seconds, after which the search job is automatically canceled. |
dispatch.auto_pause | Specifies the amount of inactive time, in seconds, after which the search job is automatically paused. |
dispatch.buckets | The maximum number of timeline buckets. |
dispatch.earliest_time | A time string that specifies the earliest time for this search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.index_earliest | Specifies the earliest index time for this search. Can be a relative or absolute time. |
dispatch.index_latest | Specifies the latest index time for this saved search. Can be a relative or absolute time. |
dispatch.indexedRealtime | Specifies whether to use 'indexed-realtime' mode when doing real-time searches. |
dispatch.indexedRealtimeMinSpan | Specifies the minimum number of seconds to wait between component index searches. Allows for a per-job override of the [search] indexed_realtime_default_span setting in limits.conf. The default for saved searches is "unset", which falls back to the limits.conf setting. |
dispatch.indexedRealtimeOffset | Specifies the number of seconds to wait for disk flushes to finish. |
dispatch.latest_time | A time string that specifies the latest time for this saved search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Indicates if lookups are enabled for this search. |
dispatch.max_count | The maximum number of results before finalizing the search. |
dispatch.max_time | Indicates the maximum amount of time (in seconds) before finalizing the search. |
dispatch.reduce_freq | Specifies how frequently the MapReduce reduce phase runs on accumulated map values. |
dispatch.rt_backfill | Specifies whether to do real-time window backfilling for scheduled real-time searches. |
dispatch.rt_maximum_span | Sets the maximum number of seconds to search data that falls behind real time. |
dispatch.sample_ratio | The integer value used to calculate the sample ratio. The formula is 1 / <integer> .
|
dispatch.spawn_process | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Indicates whether a new search process spawns when this saved search is executed. |
dispatch.time_format | A time format string that defines the time format for specifying the earliest and latest time. |
dispatch.ttl | Indicates the time to live (ttl), in seconds, for the artifacts of the scheduled search, if no actions are triggered. |
displayview | Defines the default Splunk Web view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto , time_interval , and time_whole .
|
durable.lag_time | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.max_backfill_intervals | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.track_time_type | Indicates that a scheduled search is durable and specifies how the search tracks events. A value of _time means the durable search tracks each event by its event timestamp , based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none .
|
earliest_time | For scheduled searches display all the scheduled times starting from this time. |
is_scheduled | Indicates if this search is to be run on a schedule. |
is_visible | Indicates if this saved search appears in the visible saved search list. |
latest_time | For scheduled searches display all the scheduled times until this time (not just the next run time). |
listDefaultActionArgs | List default values of actions.*, even though some of the actions may not be specified in the saved search. |
max_concurrent | The maximum number of concurrent instances of this search the scheduler is allowed to run. |
next_scheduled_time | The time when the scheduler runs this search again. |
orphan | If the add_orphan_field parameter is passed in with the GET request, this field indicates whether the search is orphaned.
|
qualifiedSearch | The exact search command for this saved search. |
realtime_schedule | Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. See the POST parameter for this attribute for details. |
request.ui_dispatch_app | A field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Indicates whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search.
Note: The peer can be a newly added peer or a peer down and now available. |
run_n_times | Runs this search exactly the specified number of times. Does not run the search again until the Splunk platform is restarted. |
run_on_startup | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time.
Set run_on_startup to true for scheduled searches that populate lookup tables. |
schedule_priority | Indicates the scheduling priority of a specific search. One of the following values: default, higher, or highest.
Raising the priority applies to both real-time-scheduled searches (realtime_schedule=1) and continuous-scheduled searches (realtime_schedule=0). The high-to-low priority order (where RTSS = real-time-scheduled search, CSS = continuous-scheduled search, d = default, h = higher, H = highest) is: RTSS(H) > CSS(H) > RTSS(h) > RTSS(d) > CSS(h) > CSS(d). Raising the scheduling priority requires the search owner to have the edit_search_schedule_priority capability. Defaults to default. |
schedule_window | Time window (in minutes) during which the search has lower priority. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period. If set to auto , the scheduler determines the optimal time window automatically.
|
search | Search expression to filter the response. The response matches field values against the search expression. For example:
search=foo matches any object that has "foo" as a substring in a field. search=field_name%3Dfield_value restricts the match to a single field. URI-encoding is required in this example. |
vsid | Defines the viewstate id associated with the UI view listed in 'displayview'.
Must match up to a stanza in viewstates.conf. |
Example request and response
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://fool01:8092/services/saved/searches</id> <updated>2021-04-29T10:00:27-07:00</updated> <generator build="84cbec3d51a6" version="8.2.2105"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <link href="/services/saved/searches/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>MySavedSearch</title> <id>https://fool01:8092/servicesNS/admin/search/saved/searches/MySavedSearch</id> <updated>2021-04-29T09:58:12-07:00</updated> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="list"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="edit"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="remove"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/move" rel="move"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/disable" rel="disable"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/embed" rel="embed"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/history" rel="history"/> <content type="text/xml"> <s:dict> < ---- action settings elided ---- > <s:key name="actions"></s:key> <s:key name="alert.digest_mode">1</s:key> <s:key name="alert.expires">24h</s:key> <s:key name="alert.managedBy"></s:key> <s:key name="alert.severity">3</s:key> <s:key name="alert.suppress"></s:key> <s:key name="alert.suppress.fields"></s:key> <s:key name="alert.suppress.group_name"></s:key> <s:key name="alert.suppress.period"></s:key> <s:key name="alert.track">0</s:key> <s:key name="alert_comparator"></s:key> <s:key name="alert_condition"></s:key> <s:key name="alert_threshold"></s:key> <s:key name="alert_type">always</s:key> <s:key name="allow_skew">0</s:key> <s:key name="auto_summarize">0</s:key> <s:key name="auto_summarize.command"><![CDATA[| summarize override=partial timespan=$auto_summarize.timespan$ max_summary_size=$auto_summarize.max_summary_size$ max_summary_ratio=$auto_summarize.max_summary_ratio$ max_disabled_buckets=$auto_summarize.max_disabled_buckets$ max_time=$auto_summarize.max_time$ [ $search$ ]]]></s:key> <s:key name="auto_summarize.cron_schedule">*/10 * * * *</s:key> <s:key name="auto_summarize.dispatch.earliest_time"></s:key> <s:key name="auto_summarize.dispatch.latest_time"></s:key> <s:key name="auto_summarize.dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="auto_summarize.dispatch.ttl">60</s:key> <s:key name="auto_summarize.max_concurrent">1</s:key> <s:key name="auto_summarize.max_disabled_buckets">2</s:key> <s:key name="auto_summarize.max_summary_ratio">0.1</s:key> <s:key name="auto_summarize.max_summary_size">52428800</s:key> <s:key name="auto_summarize.max_time">3600</s:key> <s:key name="auto_summarize.suspend_period">24h</s:key> <s:key name="auto_summarize.timespan"></s:key> <s:key 
name="auto_summarize.workload_pool"></s:key> <s:key name="cron_schedule">*/3 * * * *</s:key> <s:key name="defer_scheduled_searchable_idxc">0</s:key> <s:key name="description">This test job is a durable saved search</s:key> <s:key name="disabled">0</s:key> <s:key name="dispatch.allow_partial_results">1</s:key> <s:key name="dispatch.auto_cancel">0</s:key> <s:key name="dispatch.auto_pause">0</s:key> <s:key name="dispatch.buckets">0</s:key> <s:key name="dispatch.earliest_time">-24h@h</s:key> <s:key name="dispatch.index_earliest"></s:key> <s:key name="dispatch.index_latest"></s:key> <s:key name="dispatch.indexedRealtime"></s:key> <s:key name="dispatch.indexedRealtimeMinSpan"></s:key> <s:key name="dispatch.indexedRealtimeOffset"></s:key> <s:key name="dispatch.latest_time">now</s:key> <s:key name="dispatch.lookups">1</s:key> <s:key name="dispatch.max_count">500000</s:key> <s:key name="dispatch.max_time">0</s:key> <s:key name="dispatch.reduce_freq">10</s:key> <s:key name="dispatch.rt_backfill">0</s:key> <s:key name="dispatch.rt_maximum_span"></s:key> <s:key name="dispatch.sample_ratio">1</s:key> <s:key name="dispatch.spawn_process">1</s:key> <s:key name="dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="dispatch.ttl">2p</s:key> <s:key name="dispatchAs">owner</s:key> < ---- display settings elided ---- > <s:key name="displayview"></s:key> <s:key name="durable.backfill_type">time_interval</s:key> <s:key name="durable.lag_time">30</s:key> <s:key name="durable.max_backfill_intervals">100</s:key> <s:key name="durable.track_time_type">_time</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list> < ---- action settings elided ---- > <s:item>actions</s:item> <s:item>alert.digest_mode</s:item> <s:item>alert.expires</s:item> <s:item>alert.managedBy</s:item> <s:item>alert.severity</s:item> <s:item>alert.suppress</s:item> <s:item>alert.suppress.fields</s:item> <s:item>alert.suppress.group_name</s:item> <s:item>alert.suppress.period</s:item> <s:item>alert.track</s:item> <s:item>alert_comparator</s:item> <s:item>alert_condition</s:item> <s:item>alert_threshold</s:item> <s:item>alert_type</s:item> <s:item>allow_skew</s:item> <s:item>auto_summarize</s:item> <s:item>auto_summarize.command</s:item> <s:item>auto_summarize.cron_schedule</s:item> <s:item>auto_summarize.dispatch.earliest_time</s:item> <s:item>auto_summarize.dispatch.latest_time</s:item> <s:item>auto_summarize.dispatch.time_format</s:item> <s:item>auto_summarize.dispatch.ttl</s:item> <s:item>auto_summarize.max_concurrent</s:item> <s:item>auto_summarize.max_disabled_buckets</s:item> <s:item>auto_summarize.max_summary_ratio</s:item> <s:item>auto_summarize.max_summary_size</s:item> <s:item>auto_summarize.max_time</s:item> <s:item>auto_summarize.suspend_period</s:item> <s:item>auto_summarize.timespan</s:item> <s:item>auto_summarize.workload_pool</s:item> <s:item>cron_schedule</s:item> <s:item>defer_scheduled_searchable_idxc</s:item> <s:item>description</s:item> <s:item>disabled</s:item> <s:item>dispatch.allow_partial_results</s:item> 
<s:item>dispatch.auto_cancel</s:item> <s:item>dispatch.auto_pause</s:item> <s:item>dispatch.buckets</s:item> <s:item>dispatch.earliest_time</s:item> <s:item>dispatch.index_earliest</s:item> <s:item>dispatch.index_latest</s:item> <s:item>dispatch.indexedRealtime</s:item> <s:item>dispatch.indexedRealtimeMinSpan</s:item> <s:item>dispatch.indexedRealtimeOffset</s:item> <s:item>dispatch.latest_time</s:item> <s:item>dispatch.lookups</s:item> <s:item>dispatch.max_count</s:item> <s:item>dispatch.max_time</s:item> <s:item>dispatch.reduce_freq</s:item> <s:item>dispatch.rt_backfill</s:item> <s:item>dispatch.rt_maximum_span</s:item> <s:item>dispatch.sample_ratio</s:item> <s:item>dispatch.spawn_process</s:item> <s:item>dispatch.time_format</s:item> <s:item>dispatch.ttl</s:item> <s:item>dispatchAs</s:item> < ---- display settings elided ---- > <s:item>displayview</s:item> <s:item>durable.backfill_type</s:item> <s:item>durable.lag_time</s:item> <s:item>durable.max_backfill_intervals</s:item> <s:item>durable.track_time_type</s:item> <s:item>estimatedResultCount</s:item> <s:item>federated.provider</s:item> <s:item>hint</s:item> <s:item>is_scheduled</s:item> <s:item>is_visible</s:item> <s:item>max_concurrent</s:item> <s:item>next_scheduled_time</s:item> <s:item>numFields</s:item> <s:item>qualifiedSearch</s:item> <s:item>realtime_schedule</s:item> <s:item>request.ui_dispatch_app</s:item> <s:item>request.ui_dispatch_view</s:item> <s:item>restart_on_searchpeer_add</s:item> <s:item>run_n_times</s:item> <s:item>run_on_startup</s:item> <s:item>schedule_as</s:item> <s:item>schedule_priority</s:item> <s:item>schedule_window</s:item> <s:item>search</s:item> <s:item>skip_scheduled_realtime_idxc</s:item> <s:item>vsid</s:item> <s:item>workload_pool</s:item> </s:list> </s:key> <s:key name="requiredFields"> <s:list/> </s:key> <s:key name="wildcardFields"> <s:list> <s:item>action\..*</s:item> <s:item>args\..*</s:item> <s:item>dispatch\..*</s:item> <s:item>display\.statistics\.format\..*</s:item> <s:item>display\.visualizations\.custom\..*</s:item> <s:item>durable\..*</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="embed.enabled">0</s:key> <s:key name="federated.provider"></s:key> <s:key name="is_scheduled">0</s:key> <s:key name="is_visible">1</s:key> <s:key name="max_concurrent">1</s:key> <s:key name="next_scheduled_time"></s:key> <s:key name="qualifiedSearch">search search index=_internal | stats count by host</s:key> <s:key name="realtime_schedule">1</s:key> <s:key name="request.ui_dispatch_app"></s:key> <s:key name="request.ui_dispatch_view"></s:key> <s:key name="restart_on_searchpeer_add">1</s:key> <s:key name="run_n_times">0</s:key> <s:key name="run_on_startup">0</s:key> <s:key name="schedule_as">auto</s:key> <s:key name="schedule_priority">default</s:key> <s:key name="schedule_window">0</s:key> <s:key name="search">search index=_internal | stats count by host</s:key> <s:key name="skip_scheduled_realtime_idxc">0</s:key> <s:key name="vsid"></s:key> <s:key name="workload_pool"></s:key> </s:dict> </content> </entry> </feed>
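The following request, shown as an illustrative sketch, uses the same endpoint with the search filter parameter described above to list only the saved searches whose fields contain a given substring (the filter value MySaved is an example, not a required name):

curl -k -u admin:pass "https://localhost:8089/servicesNS/admin/search/saved/searches?search=MySaved"

The response is an Atom feed in the same format as the example above, restricted to the matching saved searches.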
POST
Update the named saved search.
Request parameters
Name | Type | Description |
---|---|---|
action.<action_name> | Boolean | Enable or disable an alert action. See alert_actions.conf for available alert action types.
|
action.<action_name>.<parameter> | String or Number | Use this syntax to configure action parameters. See alert_actions.conf for parameter options.
|
action.summary_index._type | String | Specifies the data type of the summary index where the Splunk software saves the results of the scheduled search. Can be set to event or metric.
|
action.summary_index.force_realtime_schedule | Boolean | By default, realtime_schedule is false for a report configured for summary indexing. When set to 1 or true, this setting overrides realtime_schedule. Setting this to true can cause gaps in summary data, because a realtime_schedule search is skipped if search concurrency limits are violated.
|
actions | String | A comma-separated list of actions to enable.
For example: rss,email |
alert.digest_mode | Boolean | Specifies whether alert actions are applied to the entire result set or on each individual result. Defaults to 1 (true). |
alert.expires | Number | Valid values: [number][time-unit]
Sets the period of time to show the alert in the dashboard. Defaults to 24h. Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.severity | Enum | Valid values: (1 | 2 | 3 | 4 | 5 | 6)
Sets the alert severity level. Valid values are: 1 (DEBUG), 2 (INFO), 3 (WARN), 4 (ERROR), 5 (SEVERE), 6 (FATAL). Defaults to 3. |
alert.suppress | Boolean | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | String | Comma delimited list of fields to use for suppression when doing per result alerting. Required if suppression is turned on and per result alerting is enabled. |
alert.suppress.group_name | String | Optional setting. Used to define an alert suppression group for a set of alerts that are running over identical or very similar datasets. Alert suppression groups can help you avoid getting multiple triggered alert notifications for the same data. |
alert.suppress.period | Number | Valid values: [number][time-unit]
Specifies the suppression period. Only valid if alert.suppress is enabled. Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Enum | Valid values: (true | false | auto)
Specifies whether to track the actions triggered by this scheduled search. auto: determine whether to track based on the tracking setting of each action; do not track scheduled searches that always trigger actions. true: force alert tracking. false: disable alert tracking for this search. |
alert_comparator | String | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc. Used with alert_threshold to trigger alert actions. |
alert_condition | String | Contains a conditional search that is evaluated against the results of the saved search. Defaults to an empty string.
Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Number | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | String | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
allow_skew | 0, <percentage>, or <duration> | Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods. This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with the following cron_schedule values, the search scheduler randomly skews the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can randomly skew only the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. A value of 0 disables skew. Express the value as a percentage of the scheduled search period or as a duration (for example, 5m or 1h).
Examples: 100% (for an every-5-minute search) = 5 minutes maximum skew, 50% (for an every-minute search) = 30 seconds maximum skew, 5m = 5 minutes maximum skew, 1h = 1 hour maximum skew. |
args.* | String | Wildcard argument that accepts any saved search template argument, such as args.username=foobar when the search is search $username$. |
auto_summarize | Boolean | Indicates whether the scheduler should ensure that the data for this search is automatically summarized. Defaults to 0. |
auto_summarize.command | String | An auto summarization template for this search. See auto summarization options in savedsearches.conf for more details.
Do not change unless you understand the architecture of saved search auto summarization. |
auto_summarize.cron_schedule | String | Cron schedule that probes and generates the summaries for this saved search.
The default value is */10 * * * *. |
auto_summarize.dispatch.earliest_time | String | A time string that specifies the earliest time for summarizing this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
auto_summarize.dispatch.latest_time | String | A time string that specifies the latest time for summarizing this saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
auto_summarize.dispatch.time_format | String | Defines the time format that Splunk software uses to specify the earliest and latest time. Defaults to %FT%T.%Q%:z
|
auto_summarize.dispatch.ttl | String | Valid values: Integer[p].
Indicates the time to live (ttl), in seconds, for the artifacts of the summarization of the scheduled search. Defaults to 60. |
auto_summarize.max_disabled_buckets | Number | The maximum number of buckets with the suspended summarization before the summarization search is completely stopped, and the summarization of the search is suspended for auto_summarize.suspend_period. Defaults to 2. |
auto_summarize.max_summary_ratio | Number | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket. Defaults to 0.1
Note: The test is only performed if the summary size is larger than auto_summarize.max_summary_size. |
auto_summarize.max_summary_size | Number | The minimum summary size, in bytes, before testing whether the summarization is helpful.
The default value is 52428800 (50 MB). |
auto_summarize.max_time | Number | Maximum time (in seconds) that the summary search is allowed to run. Defaults to 3600.
Note: This is an approximate time. The summary search stops at clean bucket boundaries. |
auto_summarize.suspend_period | String | Time specifier indicating when to suspend summarization of this search if the summarization is deemed unhelpful. Defaults to 24h. |
auto_summarize.timespan | String | Comma-delimited list of time ranges that each summarized chunk should span. Comprises the list of available granularity levels for which summaries would be available. Does not support 1w timespans.
For example, a timechart over the last month whose granularity is at the day level should set this to 1d. If you need the same data summarized at the hour level for weekly charts, use: 1h,1d. |
cron_schedule | String | Valid values: cron string
The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes. cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. |
description | String | Human-readable description of this saved search. Defaults to empty string. |
disabled | Boolean | Indicates if the saved search is disabled. Defaults to 0.
Disabled saved searches are not visible in Splunk Web. |
dispatch.* | String | Wildcard argument that accepts any dispatch related argument. |
dispatch.allow_partial_results | Boolean | Specifies whether the search job can proceed to provide partial results if a search peer fails. When set to false, the search job fails if a search peer providing results for the search job fails. |
dispatch.auto_cancel | Number | Specifies the amount of inactive time, in seconds, after which the search job is automatically canceled. |
dispatch.auto_pause | Number | Specifies the amount of inactive time, in seconds, after which the search job is automatically paused. |
dispatch.buckets | Number | The maximum number of timeline buckets. Defaults to 0. |
dispatch.earliest_time | String | A time string that specifies the earliest time for this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.index_earliest | String | A time string that specifies the earliest index time for this search. Can be a relative or absolute time. |
dispatch.index_latest | String | A time string that specifies the latest index time for this saved search. Can be a relative or absolute time. |
dispatch.indexedRealtime | Boolean | Indicates whether to use indexed-realtime mode when doing real-time searches. |
dispatch.indexedRealtimeOffset | Integer | Allows for a per-job override of the [search] indexed_realtime_disk_sync_delay setting in limits.conf. The default for saved searches is "unset", which falls back to the limits.conf setting.
|
dispatch.indexedRealtimeMinSpan | Integer | Allows for a per-job override of the [search] indexed_realtime_default_span setting in limits.conf. The default for saved searches is "unset", which falls back to the limits.conf setting.
|
dispatch.latest_time | String | A time string that specifies the latest time for this saved search. Can be a relative or absolute time. If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Boolean | Enables or disables the lookups for this search. Defaults to 1. |
dispatch.max_count | Number | The maximum number of results before finalizing the search. Defaults to 500000. |
dispatch.max_time | Number | Indicates the maximum amount of time (in seconds) before finalizing the search. Defaults to 0. |
dispatch.reduce_freq | Number | Specifies, in seconds, how frequently the MapReduce reduce phase runs on accumulated map values. Defaults to 10. |
dispatch.rt_backfill | Boolean | Specifies whether to backfill the real-time window for this search. This parameter is valid only for real-time searches. Defaults to 0. |
dispatch.rt_maximum_span | Number | Allows for a per-job override of the [search] indexed_realtime_maximum_span setting in limits.conf. The default for saved searches is "unset", which falls back to the limits.conf setting.
|
dispatch.sample_ratio | Number | The integer value used to calculate the sample ratio. The formula is 1 / <integer> .
|
dispatch.spawn_process | Boolean | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Specifies whether a new search process spawns when this saved search is executed. Defaults to 1. Searches against indexes must run in a separate process. |
dispatch.time_format | String | A time format string that defines the time format for specifying the earliest and latest time. Defaults to %FT%T.%Q%:z
|
dispatch.ttl | Number | Valid values: Integer[p]. Defaults to 2p.
Indicates the time to live (in seconds) for the artifacts of the scheduled search, if no actions are triggered. If an action is triggered, the ttl changes to that action's ttl. If multiple actions are triggered, the maximum ttl is applied to the artifacts. To set the action ttl, refer to the alert_actions.conf file. If the integer is followed by the letter 'p', the ttl is handled as a multiple of the scheduled search period. |
dispatchAs | String | When the saved search is dispatched using the "saved/searches/{name}/dispatch" endpoint, this setting controls what user that search is dispatched as. Only meaningful for shared saved searches. Can be set to owner or user .
|
displayview | String | Defines the default UI view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | String | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto , time_interval , and time_whole .
|
durable.lag_time | Number | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.max_backfill_intervals | Number | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type.
|
durable.track_time_type | String | Indicates that a scheduled search is durable and specifies how the search tracks events. A durable search is a search that tries to ensure the delivery of all results, even when the search process is slowed or stopped by runtime issues like rolling restarts, network bottlenecks, and even downed servers. Applies only to scheduled searches. A value of _time means the durable search tracks each event by its event timestamp, based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none. |
is_scheduled | Boolean | Indicates whether this search is to be run on a schedule. |
is_visible | Boolean | Specifies whether this saved search should be listed in the visible saved search list. Defaults to 1. |
max_concurrent | Number | The maximum number of concurrent instances of this search the scheduler is allowed to run. Defaults to 1. |
next_scheduled_time | String | Read-only attribute. Value ignored on POST. Some older clients still send this value. |
qualifiedSearch | String | Read-only attribute. Value ignored on POST. The value is computed during runtime. |
realtime_schedule | Boolean | Defaults to 1. Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. If set to 0, the scheduler never skips scheduled execution periods. However, the execution of the saved search might fall behind depending on the scheduler load. Use continuous scheduling whenever you enable the summary index option. If set to 1, the scheduler might skip some execution periods to make sure that the scheduler is executing the searches running over the most recent time range. The scheduler tries to execute searches that have realtime_schedule set to 1 before it executes searches that have continuous scheduling (realtime_schedule = 0). |
request.ui_dispatch_app | String | Specifies a field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | String | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Boolean | Specifies whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search. Defaults to 1.
Note: The peer can be a newly added peer or a peer down and now available. |
run_n_times | Number | Runs this search exactly the specified number of times. Does not run the search again until the Splunk platform is restarted. |
run_on_startup | Boolean | Indicates whether this search runs at startup. If it does not run on startup, it runs at the next scheduled time. Defaults to 0.
Set to 1 for scheduled searches that populate lookup tables. |
schedule_window | Number or auto | Time window (in minutes) during which the search has lower priority. Defaults to 0. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period.
Set to auto to let the scheduler determine the optimal time window automatically. |
search | String | Required. The search to save. |
schedule_priority | See description | Raises the scheduling priority of the named search. Use one of the following options: default, higher, or highest.
Raising the scheduling priority requires the search owner to have the edit_search_schedule_priority capability. Defaults to default. |
vsid | String | Defines the viewstate id associated with the UI view listed in 'displayview'. Must match up to a stanza in viewstates.conf. |
workload_pool | String | Specifies the new workload pool where the existing running search will be placed. |
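As a minimal sketch of an update request (the parameter values shown are illustrative, not defaults), the following example schedules the MySavedSearch saved search from the earlier GET example and sets its dispatch time range:

curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch \
    --data-urlencode "search=index=_internal | stats count by host" \
    -d is_scheduled=1 \
    --data-urlencode "cron_schedule=*/5 * * * *" \
    --data-urlencode "dispatch.earliest_time=-1h" \
    -d dispatch.latest_time=now

The response is an Atom feed entry for the updated saved search, similar in format to the GET example earlier in this topic.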
Returned values
Name | Description |
---|---|
action.<action_name> | Indicates whether the <action_name> is enabled or disabled for a particular search. For more information about the alert action options see the alert_actions.conf file.
|
action.<action_name>.<parameter> | Overrides the setting defined for an action in the alert_actions.conf file with a new setting that is valid only for the search configuration to which it is applied.
|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings. However, you can set a clear-text password here, and it is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.email.format | Specifies the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates.
Defaults to splunk@$LOCALHOST or whatever value is set in alert_actions.conf. |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port that are configured within Splunk are used to construct the base of the url. When this value begins with 'http://', it is used verbatim. NOTE: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a url that can be referenced externally, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to the current hostname provided by the operating system, or, if that fails, "localhost". When set to empty, the default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Sets the address of the MTA server used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the global maximum number of search results to send when the email action is enabled. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportCIDFontList | Space-separated list. Specifies the set (and load order) of CID fonts for handling Simplified Chinese(gb), Traditional Chinese(cns), Japanese(jp), and Korean(kor) in Integrated PDF Rendering.
If multiple fonts provide a glyph for a given character code, the glyph from the first font specified in the list is used. To skip loading any CID fonts, specify the empty string. Default value: "gb cns jp kor" |
action.email.reportIncludeSplunkLogo | Indicates whether to include the Splunk logo with the report. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.reportServerEnabled | Not supported. |
action.email.reportServerURL | Not supported. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows the integer, the value specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
action.email.width_sort_columns | Indicates whether columns should be sorted from least wide to most wide, left to right.
Only valid if format=text. |
action.populate_lookup | Indicates the state of the populate lookup action. |
action.populate_lookup.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.populate_lookup.dest | Lookup name or path of the lookup to populate. |
action.populate_lookup.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.populate_lookup.maxresults | The maximum number of search results sent using alerts. |
action.populate_lookup.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.populate_lookup.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.populate_lookup.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, then this specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p] |
action.rss | Indicates the state of the RSS action. |
action.rss.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.rss.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.rss.maxresults | Sets the maximum number of search results sent using alerts. |
action.rss.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted.
Valid values are Integer[m |s |h |d]. |
action.rss.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.rss.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.script | Indicates the state of the script for this action. |
action.script.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.script.filename | File name of the script to call. Required if the script action is enabled. |
action.script.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.script.maxresults | Sets the maximum number of search results sent using alerts. |
action.script.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. |
action.script.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.script.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 600 (10 minutes).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
action.summary_index | Indicates the state of the summary index. |
action.summary_index._name | Specifies the name of the summary index where the results of the scheduled search are saved.
Defaults to "summary." |
action.summary_index.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.summary_index.hostname | Sets the hostname used in the web link (url) sent in alert actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) See action.email.hostname for details. |
action.summary_index.inline | Determines whether to execute the summary indexing action as part of the scheduled search.
Note: This option is considered only if the summary index action is enabled and is always executed (in other words, if counttype = always). |
action.summary_index.maxresults | Sets the maximum number of search results sent using alerts. |
action.summary_index.maxtime | Sets the maximum amount of time the execution of an action takes before the action is aborted. Defaults to 5m.
Valid values are: Integer[m|s|h|d] |
action.summary_index.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.summary_index.ttl | Specifies the minimum time-to-live in seconds of the search artifacts if this action is triggered. If p follows Integer, specifies the number of scheduled periods. Defaults to 10p.
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are: Integer[p] |
actions | Actions triggered by this alert. |
alert.digest_mode | Indicates if the alert actions are applied to the entire result set or to each individual result. |
alert.expires | Sets the period of time to show the alert in the dashboard. Defaults to 24h.
Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. Valid values: [number][time-unit] |
alert.severity | Sets the alert severity level. Valid values are: 1 (DEBUG), 2 (INFO), 3 (WARN), 4 (ERROR), 5 (SEVERE), 6 (FATAL). Defaults to 3. |
alert.suppress | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | Fields to use for suppression when doing per result alerting. Required if suppression is turned on and per result alerting is enabled. |
alert.suppress.period | Specifies the suppression period. Only valid if alert.suppress is enabled.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Specifies whether to track the actions triggered by this scheduled search.
auto - determine whether to track based on the tracking setting of each action; do not track scheduled searches that always trigger actions. true - force alert tracking. false - disable alert tracking for this search. |
alert_comparator | One of the following strings: greater than, less than, equal to, rises by, drops by, rises by perc, drops by perc |
alert_condition | A conditional search that is evaluated against the results of the saved search. Defaults to an empty string.
Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
allow_skew | Valid values: 0, <percentage>, or <duration>.
Allows the search scheduler to distribute scheduled searches randomly and more evenly over their specified search periods. This setting does not require adjusting in most use cases. Check with an admin before making any updates. When set to a non-zero value for searches with the following cron_schedule values, the search scheduler randomly skews the second, minute, and hour on which the search runs: * * * * * (every minute), */M * * * * (every M minutes, M > 0), 0 * * * * (every hour), 0 */H * * * (every H hours, H > 0), 0 0 * * * (every day, at midnight). When set to a non-zero value for a search that has any other cron_schedule setting, the scheduler can randomly skew only the second on which the search runs. The amount of skew for a specific search remains constant between edits of the search. A value of 0 disables skew. Express the value as a percentage of the scheduled search period or as a duration (for example, 5m or 1h).
Examples: 100% (for an every-5-minute search) = 5 minutes maximum skew, 50% (for an every-minute search) = 30 seconds maximum skew, 5m = 5 minutes maximum skew, 1h = 1 hour maximum skew. |
args.* | Wildcard argument that accepts any saved search template argument, such as args.username=foobar when the search is search $username$. |
auto_summarize | Indicates whether the scheduler should ensure that the data for this search is automatically summarized. |
auto_summarize.command | A search template that constructs the auto summarization for this search.
Caution: Advanced feature. Do not change unless you understand the architecture of auto summarization of saved searches. |
auto_summarize.cron_schedule | Cron schedule that probes and generates the summaries for this saved search. |
auto_summarize.dispatch.earliest_time | A time string that specifies the earliest time for summarizing this search. Can be a relative or absolute time. |
auto_summarize.dispatch.latest_time | A time string that specifies the latest time for summarizing this saved search. Can be a relative or absolute time. |
auto_summarize.dispatch.time_format | Time format used to specify the earliest and latest times. |
auto_summarize.dispatch.ttl | Indicates the time to live (in seconds) for the artifacts of the summarization of the scheduled search. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. |
auto_summarize.max_disabled_buckets | The maximum number of buckets with the suspended summarization before the summarization search is completely stopped, and the summarization of the search is suspended for auto_summarize.suspend_period. |
auto_summarize.max_summary_ratio | The maximum ratio of summary_size/bucket_size, which specifies when to stop summarization and deem it unhelpful for a bucket.
Note: The test is only performed if the summary size is larger than auto_summarize.max_summary_size. |
auto_summarize.max_summary_size | The minimum summary size, in bytes, before testing whether the summarization is helpful. |
auto_summarize.max_time | Maximum time (in seconds) that the summary search is allowed to run.
Note: This is an approximate time. The summary search stops at clean bucket boundaries. |
auto_summarize.suspend_period | Time specifier indicating when to suspend summarization of this search if the summarization is deemed unhelpful. |
auto_summarize.timespan | The list of time ranges that each summarized chunk should span. This comprises the list of available granularity levels for which summaries would be available.
For example a timechart over the last month whose granularity is at the day level should set this to 1d. If you need the same data summarized at the hour level for weekly charts, use: 1h,1d. |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of this saved search. Defaults to empty string. |
disabled | Indicates if this saved search is disabled. |
dispatch.* | * represents any custom dispatch field. |
dispatch.buckets | The maximum number of timeline buckets. |
dispatch.earliest_time | A time string that specifies the earliest time for this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.indexedRealtime | Indicates whether to use indexed real-time mode when running real-time searches. |
dispatch.latest_time | A time string that specifies the latest time for the saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Indicates if lookups are enabled for this search. |
dispatch.max_count | The maximum number of results before finalizing the search. |
dispatch.max_time | Indicates the maximum amount of time (in seconds) before finalizing the search. |
dispatch.reduce_freq | Specifies how frequently the MapReduce reduce phase runs on accumulated map values. |
dispatch.rt_backfill | Indicates whether to backfill the real-time window for this search. This parameter is valid only for real-time searches. |
dispatch.spawn_process | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Indicates whether a new search process spawns when this saved search is executed. |
dispatch.time_format | Time format string that defines the time format for specifying the earliest and latest time. |
dispatch.ttl | Indicates the time to live (in seconds) for the artifacts of the scheduled search, if no actions are triggered.
If an action is triggered, the action ttl is used instead. If multiple actions are triggered, the maximum action ttl is applied to the artifacts. To set an action ttl, refer to the ttl setting for that alert action (for example, action.email.ttl). If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. |
displayview | Defines the default UI view name (not label) in which to load the results. Accessibility is subject to the user having sufficient permissions. |
durable.backfill_type | Specifies how the Splunk software backfills the lost search results of failed scheduled search jobs. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. Valid values are auto, time_interval, and time_whole. |
durable.lag_time | Specifies the search time delay, in seconds, that a durable search uses to catch events that are ingested or indexed late. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. |
durable.max_backfill_intervals | Specifies the maximum number of cron intervals (previous scheduled search jobs) that the Splunk software can attempt to backfill for this search, when those jobs have incomplete events. Applies only to scheduled searches that have a valid setting other than none for durable.track_time_type. |
durable.track_time_type | Indicates that a scheduled search is durable and specifies how the search tracks events. A value of _time means the durable search tracks each event by its event timestamp, based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none. |
is_scheduled | Indicates if this search is to be run on a schedule. |
is_visible | Indicates if this saved search appears in the visible saved search list. |
max_concurrent | The maximum number of concurrent instances of this search the scheduler is allowed to run. |
next_scheduled_time | The time when the scheduler runs this search again. |
qualifiedSearch | The exact search string that the scheduler would run. |
realtime_schedule | Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. If set to 0, the scheduler never skips scheduled execution periods. However, the execution of the saved search might fall behind depending on the scheduler load. Use continuous scheduling whenever you enable the summary index option. If set to 1, the scheduler might skip some execution periods to make sure that the scheduler is executing the searches running over the most recent time range. The scheduler tries to execute searches that have realtime_schedule set to 1 before it executes searches that have continuous scheduling (realtime_schedule = 0). |
request.ui_dispatch_app | A field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | Specifies a field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Indicates whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search.
Note: The peer can be a newly added peer or a peer down and now available. |
run_on_startup | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time.
Splunk recommends that you set run_on_startup to true for scheduled searches that populate lookup tables. |
schedule_window | Time window (in minutes) during which the search has lower priority. The scheduler can give higher priority to more critical searches during this window. The window must be smaller than the search period. If set to auto, the scheduler prioritizes searches automatically. |
search | Search expression to filter the response. The response matches field values against the search expression. For example:
search=foo matches any object that has "foo" as a substring in a field. search=field_name%3Dfield_value restricts the match to a single field. URI-encoding is required in this example. |
vsid | The viewstate id associated with the UI view listed in 'displayview'.
Matches to a stanza in viewstates.conf. |
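For example, to stagger a saved search so that it runs at hh:07 and hh:37 instead of on the hour and half hour, you might update its cron_schedule as in the following sketch. The saved search name my_hourly_report is hypothetical, and --data-urlencode is used so that the spaces in the cron string are encoded correctly.
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/my_hourly_report --data-urlencode "cron_schedule=07,37 * * * *"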
Example request and response
curl -k -u admin:chang2me https://fool01:8092/services/saved/searches/test_durable -d durable.track_time_type=_time -d durable.max_backfill_intervals=100 -d durable.lag_time=30 -d durable.backfill_type=time_interval
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://fool01:8092/services/saved/searches</id> <updated>2021-04-29T09:58:12-07:00</updated> <generator build="84cbec3d51a6" version="8.2.2105"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <link href="/services/saved/searches/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>test_durable</title> <id>https://fool01:8092/servicesNS/admin/search/saved/searches/test_durable</id> <updated>2021-04-29T09:58:12-07:00</updated> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="list"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="edit"/> <link href="/servicesNS/admin/search/saved/searches/test_durable" rel="remove"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/move" rel="move"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/disable" rel="disable"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/embed" rel="embed"/> <link href="/servicesNS/admin/search/saved/searches/test_durable/history" rel="history"/> <content type="text/xml"> <s:dict> <!-- action settings elided --> <s:key name="actions"></s:key> <s:key name="alert.digest_mode">1</s:key> <s:key name="alert.expires">24h</s:key> <s:key name="alert.managedBy"></s:key> <s:key name="alert.severity">3</s:key> <s:key name="alert.suppress"></s:key> <s:key name="alert.suppress.fields"></s:key> <s:key name="alert.suppress.group_name"></s:key> <s:key name="alert.suppress.period"></s:key> <s:key name="alert.track">0</s:key> <s:key name="alert_comparator"></s:key> <s:key name="alert_condition"></s:key> <s:key name="alert_threshold"></s:key> <s:key name="alert_type">always</s:key> <s:key name="allow_skew">0</s:key> <s:key name="auto_summarize">0</s:key> <s:key name="auto_summarize.command"><![CDATA[| summarize override=partial timespan=$auto_summarize.timespan$ max_summary_size=$auto_summarize.max_summary_size$ max_summary_ratio=$auto_summarize.max_summary_ratio$ max_disabled_buckets=$auto_summarize.max_disabled_buckets$ max_time=$auto_summarize.max_time$ [ $search$ ]]]></s:key> <s:key name="auto_summarize.cron_schedule">*/10 * * * *</s:key> <s:key name="auto_summarize.dispatch.earliest_time"></s:key> <s:key name="auto_summarize.dispatch.latest_time"></s:key> <s:key name="auto_summarize.dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="auto_summarize.dispatch.ttl">60</s:key> <s:key name="auto_summarize.max_concurrent">1</s:key> <s:key name="auto_summarize.max_disabled_buckets">2</s:key> <s:key name="auto_summarize.max_summary_ratio">0.1</s:key> <s:key name="auto_summarize.max_summary_size">52428800</s:key> <s:key name="auto_summarize.max_time">3600</s:key> <s:key name="auto_summarize.suspend_period">24h</s:key> <s:key name="auto_summarize.timespan"></s:key> <s:key name="auto_summarize.workload_pool"></s:key> <s:key 
name="cron_schedule">*/3 * * * *</s:key> <s:key name="defer_scheduled_searchable_idxc">0</s:key> <s:key name="description">This test job is a durable saved search</s:key> <s:key name="disabled">0</s:key> <s:key name="dispatch.allow_partial_results">1</s:key> <s:key name="dispatch.auto_cancel">0</s:key> <s:key name="dispatch.auto_pause">0</s:key> <s:key name="dispatch.buckets">0</s:key> <s:key name="dispatch.earliest_time">-24h@h</s:key> <s:key name="dispatch.index_earliest"></s:key> <s:key name="dispatch.index_latest"></s:key> <s:key name="dispatch.indexedRealtime"></s:key> <s:key name="dispatch.indexedRealtimeMinSpan"></s:key> <s:key name="dispatch.indexedRealtimeOffset"></s:key> <s:key name="dispatch.latest_time">now</s:key> <s:key name="dispatch.lookups">1</s:key> <s:key name="dispatch.max_count">500000</s:key> <s:key name="dispatch.max_time">0</s:key> <s:key name="dispatch.reduce_freq">10</s:key> <s:key name="dispatch.rt_backfill">0</s:key> <s:key name="dispatch.rt_maximum_span"></s:key> <s:key name="dispatch.sample_ratio">1</s:key> <s:key name="dispatch.spawn_process">1</s:key> <s:key name="dispatch.time_format">%FT%T.%Q%:z</s:key> <s:key name="dispatch.ttl">2p</s:key> <s:key name="dispatchAs">owner</s:key> <!-- display settings elided --> <s:key name="displayview"></s:key> <s:key name="durable.backfill_type">time_interval</s:key> <s:key name="durable.lag_time">30</s:key> <s:key name="durable.max_backfill_intervals">100</s:key> <s:key name="durable.track_time_type">_time</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"/> <s:key name="removable">1</s:key> <s:key name="sharing">user</s:key> </s:dict> </s:key> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list> <!-- action settings elided --> <s:item>actions</s:item> <s:item>alert.digest_mode</s:item> <s:item>alert.expires</s:item> <s:item>alert.managedBy</s:item> <s:item>alert.severity</s:item> <s:item>alert.suppress</s:item> <s:item>alert.suppress.fields</s:item> <s:item>alert.suppress.group_name</s:item> <s:item>alert.suppress.period</s:item> <s:item>alert.track</s:item> <s:item>alert_comparator</s:item> <s:item>alert_condition</s:item> <s:item>alert_threshold</s:item> <s:item>alert_type</s:item> <s:item>allow_skew</s:item> <s:item>auto_summarize</s:item> <s:item>auto_summarize.command</s:item> <s:item>auto_summarize.cron_schedule</s:item> <s:item>auto_summarize.dispatch.earliest_time</s:item> <s:item>auto_summarize.dispatch.latest_time</s:item> <s:item>auto_summarize.dispatch.time_format</s:item> <s:item>auto_summarize.dispatch.ttl</s:item> <s:item>auto_summarize.max_concurrent</s:item> <s:item>auto_summarize.max_disabled_buckets</s:item> <s:item>auto_summarize.max_summary_ratio</s:item> <s:item>auto_summarize.max_summary_size</s:item> <s:item>auto_summarize.max_time</s:item> <s:item>auto_summarize.suspend_period</s:item> <s:item>auto_summarize.timespan</s:item> <s:item>auto_summarize.workload_pool</s:item> <s:item>cron_schedule</s:item> <s:item>defer_scheduled_searchable_idxc</s:item> <s:item>description</s:item> <s:item>disabled</s:item> <s:item>dispatch.allow_partial_results</s:item> <s:item>dispatch.auto_cancel</s:item> <s:item>dispatch.auto_pause</s:item> 
<s:item>dispatch.buckets</s:item> <s:item>dispatch.earliest_time</s:item> <s:item>dispatch.index_earliest</s:item> <s:item>dispatch.index_latest</s:item> <s:item>dispatch.indexedRealtime</s:item> <s:item>dispatch.indexedRealtimeMinSpan</s:item> <s:item>dispatch.indexedRealtimeOffset</s:item> <s:item>dispatch.latest_time</s:item> <s:item>dispatch.lookups</s:item> <s:item>dispatch.max_count</s:item> <s:item>dispatch.max_time</s:item> <s:item>dispatch.reduce_freq</s:item> <s:item>dispatch.rt_backfill</s:item> <s:item>dispatch.rt_maximum_span</s:item> <s:item>dispatch.sample_ratio</s:item> <s:item>dispatch.spawn_process</s:item> <s:item>dispatch.time_format</s:item> <s:item>dispatch.ttl</s:item> <s:item>dispatchAs</s:item> <!-- display settings elided --> <s:item>displayview</s:item> <s:item>durable.backfill_type</s:item> <s:item>durable.lag_time</s:item> <s:item>durable.max_backfill_intervals</s:item> <s:item>durable.track_time_type</s:item> <s:item>estimatedResultCount</s:item> <s:item>federated.provider</s:item> <s:item>hint</s:item> <s:item>is_scheduled</s:item> <s:item>is_visible</s:item> <s:item>max_concurrent</s:item> <s:item>next_scheduled_time</s:item> <s:item>numFields</s:item> <s:item>qualifiedSearch</s:item> <s:item>realtime_schedule</s:item> <s:item>request.ui_dispatch_app</s:item> <s:item>request.ui_dispatch_view</s:item> <s:item>restart_on_searchpeer_add</s:item> <s:item>run_n_times</s:item> <s:item>run_on_startup</s:item> <s:item>schedule_as</s:item> <s:item>schedule_priority</s:item> <s:item>schedule_window</s:item> <s:item>search</s:item> <s:item>skip_scheduled_realtime_idxc</s:item> <s:item>vsid</s:item> <s:item>workload_pool</s:item> </s:list> </s:key> <s:key name="requiredFields"> <s:list/> </s:key> <s:key name="wildcardFields"> <s:list> <s:item>action\..*</s:item> <s:item>args\..*</s:item> <s:item>dispatch\..*</s:item> <s:item>display\.statistics\.format\..*</s:item> <s:item>display\.visualizations\.custom\..*</s:item> <s:item>durable\..*</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="embed.enabled">0</s:key> <s:key name="federated.provider"></s:key> <s:key name="is_scheduled">0</s:key> <s:key name="is_visible">1</s:key> <s:key name="max_concurrent">1</s:key> <s:key name="next_scheduled_time"></s:key> <s:key name="qualifiedSearch">search search index=_internal | stats count by host</s:key> <s:key name="realtime_schedule">1</s:key> <s:key name="request.ui_dispatch_app"></s:key> <s:key name="request.ui_dispatch_view"></s:key> <s:key name="restart_on_searchpeer_add">1</s:key> <s:key name="run_n_times">0</s:key> <s:key name="run_on_startup">0</s:key> <s:key name="schedule_as">auto</s:key> <s:key name="schedule_priority">default</s:key> <s:key name="schedule_window">0</s:key> <s:key name="search">search index=_internal | stats count by host</s:key> <s:key name="skip_scheduled_realtime_idxc">0</s:key> <s:key name="vsid"></s:key> <s:key name="workload_pool"></s:key> </s:dict> </content> </entry> </feed>
saved/searches/{name}/acknowledge
https://<host>:<mPort>/services/saved/searches/{name}/acknowledge
Acknowledge the {name}
saved search alert suppression.
POST
Acknowledge the {name}
saved search alert suppression and resume alerting.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
key | String | The suppression key used in field-based suppression.
For example, in host-based suppression, with data from 5 hosts, the key is the host, as each host could have different suppression expiration times. |
Returned values
None
Example request and response
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MyAlert/acknowledge -X POST
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://localhost:8089/servicesNS/admin/search/saved/searches</id> <updated>2011-07-26T18:31:07-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/saved/searches/_new" rel="create"/> <link href="/servicesNS/admin/search/saved/searches/_reload" rel="_reload"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
saved/searches/{name}/dispatch
https://<host>:<mPort>/services/saved/searches/{name}/dispatch
Dispatch the {name}
saved search.
POST
Dispatch the {name}
saved search.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
args.* | String | | Arg values for the saved search if the saved search is a template search. For example, specify args.index_name to supply the index name to the search template. |
dispatchAs | String | owner | Indicates the user context, quota, and access rights for the saved search. The saved search runs according to the indicated context. Valid values: owner, user. |
dispatch.* | String | Any dispatch.* field of the search that needs to be overridden when running the summary search. | |
dispatch.adhoc_search_level | String | Use one of the following search modes.
[ verbose | fast | smart ] | |
dispatch.now | Boolean | Dispatch the search as if the specified time for this parameter was the current time. | |
force_dispatch | Boolean | Indicates whether to start a new search even if another instance of this search is already running. | |
now | String | [Deprecated] Use dispatch.now. | |
replay_speed | Number greater than 0 | Indicates a real-time search replay speed factor. For example, 1 indicates normal speed, 0.5 indicates half of normal speed, and 2 indicates twice normal speed.
Use replay_speed with replay_et and replay_lt relative times to indicate a speed and time range for the replay, as shown in the example after this table. For example, replay_speed = 10 replay_et = -d@d replay_lt = -@d specifies a replay at 10x speed, as if the "wall clock" time starts yesterday at midnight and ends when it reaches today at midnight. For more information about using relative time modifiers, see Search time modifiers in the Search reference. | |
replay_et | Time modifier string | Relative "wall clock" start time for the replay. | |
replay_lt | Time modifier string | Relative end time for the replay clock. The replay stops when the clock time reaches this time. | |
trigger_actions | Boolean | Indicates whether to trigger alert actions. |
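For example, the following sketch dispatches the saved search in replay mode at 10x speed over yesterday's data, using the replay values described in the table above. The saved search name MySavedSearch is the same one used in the example below.
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch/dispatch -d replay_speed=10 -d replay_et=-d@d -d replay_lt=-@d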
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch/dispatch -d trigger_actions=1
XML Response
<?xml version='1.0' encoding='UTF-8'?> <response><sid>admin__admin__search__MySavedSearch_at_1311797437_d831d980832e3e89</sid></response>
saved/searches/{name}/history
https://<host>:<mPort>/services/saved/searches/{name}/history
List available search jobs created from the {name}
saved search.
GET
List available search jobs created from the {name}
saved search.
Request parameters
Name | Description |
---|---|
savedsearch | String triplet consisting of user:app:search_name. The triplet constitutes a unique identifier for accessing saved search history. Passing in this parameter can help you work around saved search access limitations in search head clustered deployments. As an example, the following parameter triplet represents an admin user, the search app context, and a search named Splunk errors last 24 hours.
savedsearch=admin:search:Splunk%20errors%20last%2024%20hours |
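For example, on a search head cluster you might pass the triplet when listing search history. A sketch using the example triplet above:
curl -k -u admin:pass --get https://localhost:8089/services/saved/searches/Splunk%20errors%20last%2024%20hours/history -d savedsearch=admin:search:Splunk%20errors%20last%2024%20hours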
Returned values
Name | Description |
---|---|
durableTrackTime | The durable cursor timestamp for the search job, expressed in UNIX Epoch time notation (elapsed time since 1/1/1970). If durableTrackType=_indextime, this timestamp is associated with the indexed timestamp of the events returned by the job. If durableTrackType=_time, this timestamp is associated with the event timestamp of the events returned by the job. |
durableTrackType | Indicates that a scheduled search is durable and specifies how the search tracks events. A value of _time means the durable search tracks each event by its event timestamp, based on time information included in the event. A value of _indextime means the durable search tracks each event by its indexed timestamp. The search is not durable if this setting is unset or is set to none. |
earliest_time | The earliest time a search job is configured to start. |
isDone | Indicates if the search has completed. |
isFinalized | Indicates if the search was finalized (stopped before completion). |
isRealTimeSearch | Indicates if the search is a real time search. |
isSaved | Indicates if the search is saved indefinitely. |
isScheduled | Indicates if the search is a scheduled search. |
isZombie | Indicates if the process running the search is dead but the search is not finished. |
latest_time | The latest time a search job is configured to start. |
listDefaultActionArgs | List default values of actions.*, even though some of the actions may not be specified in the saved search. |
ttl | The time to live, or time before the search job expires after it completes. |
Example request and response
XML Request
curl -k -u admin:pass https://fool01:8092/services/saved/searches/summary_durable/history
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>summary_durable</title> <id>https://fool01:8092/services/saved/searches</id> <updated>2021-04-29T10:01:20-07:00</updated> <generator build="84cbec3d51a6" version="8.2.2105"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_acl" rel="_acl"/> <opensearch:totalResults>2</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715420_1</title> <id>https://fool01:8092/servicesNS/nobody/search/search/jobs/scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715420_1</id> <updated>2021-04-29T09:57:44-07:00</updated> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715420_1" rel="alternate"/> <author> <name>admin</name> </author> <published>2021-04-29T09:57:17-07:00</published> <content type="text/xml"> <s:dict> <s:key name="durableTrackTime">1619715420.000000000</s:key> <s:key name="durableTrackType">_time</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="isDone">1</s:key> <s:key name="isFinalized">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isScheduled">1</s:key> <s:key name="isZombie">0</s:key> <s:key name="start">1619715437</s:key> <s:key name="ttl">118</s:key> </s:dict> </content> </entry> <entry> <title>scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715600_3</title> <id>https://fool01:8092/servicesNS/nobody/search/search/jobs/scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715600_3</id> <updated>2021-04-29T10:00:14-07:00</updated> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search__RMD50dbc462560ef18a2_at_1619715600_3" rel="alternate"/> <author> <name>admin</name> </author> <published>2021-04-29T10:00:00-07:00</published> <content type="text/xml"> <s:dict> <s:key name="durableTrackTime">1619715600.000000000</s:key> <s:key name="durableTrackType">_time</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app">search</s:key> <s:key name="can_change_perms">1</s:key> <s:key name="can_list">1</s:key> <s:key name="can_share_app">1</s:key> <s:key name="can_share_global">1</s:key> <s:key name="can_share_user">0</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">1</s:key> <s:key name="owner">admin</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="isDone">1</s:key> <s:key name="isFinalized">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key 
name="isSaved">0</s:key> <s:key name="isScheduled">1</s:key> <s:key name="isZombie">0</s:key> <s:key name="start">1619715600</s:key> <s:key name="ttl">281</s:key> </s:dict> </content> </entry> </feed>
saved/searches/{name}/reschedule
https://<host>:<mPort>/services/saved/searches/{name}/reschedule
Set {name}
scheduled saved search to start at a specific time and then run on its schedule thereafter.
POST
Define a new start time for a scheduled saved search.
Usage details
If no schedule_time
argument is specified, the Splunk software runs the search as soon as possible according to its saved search definition. If you restart your Splunk platform implementation, all schedule_time
values for searches are removed.
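For example, to run a scheduled search as soon as possible, POST to the endpoint with no schedule_time argument. A minimal sketch, assuming a hypothetical saved search named my_scheduled_search:
curl -k -u admin:pass -X POST https://localhost:8089/services/saved/searches/my_scheduled_search/reschedule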
Request parameters
Name | Type | Default | Description |
---|---|---|---|
schedule_time | Timestamp | The next time to run the search. The timestamp can be in one of three formats: ISO8601 format (adjusted for UTC time), UNIX time format, or relative time format. |
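The schedule_time value can also be supplied in UNIX time format. A sketch, assuming the same hypothetical saved search and an arbitrary epoch timestamp:
curl -k -u admin:pass https://localhost:8089/services/saved/searches/my_scheduled_search/reschedule -d schedule_time=1534370400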
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/saved/searches/Purchased%20products%2C%20last%2024%20hours/reschedule -d schedule_time=2018-08-15T14:11:01-08:00
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://localhost:8089/services/saved/searches</id> <updated>2018-08-15T14:11:01-08:00</updated> <generator build="131547" version="5.0"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
saved/searches/{name}/scheduled_times
https://<host>:<mPort>/services/saved/searches/{name}/scheduled_times
Get the {name}
saved search scheduled time.
GET
Access {name}
saved search scheduled time.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
earliest_time required |
String | Absolute or relative earliest time | |
latest_time required |
String | Absolute or relative latest time |
Returned values
Name | Description |
---|---|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings; however, you can set a clear-text password here, which is encrypted on the next platform restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.summary_index | The state of the summary index action. |
action.summary_index._name | Specifies the name of the summary index where the results of the scheduled search are saved.
Defaults to "summary." |
actions | Actions triggered by this alert. |
alert.digest_mode | Indicates if alert actions are applied to the entire result set or to each individual result. |
alert.expires | Sets the period of time to show the alert in the dashboard. Defaults to 24h.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.severity | Valid values: (1 | 2 | 3 | 4 | 5 | 6)
Sets the alert severity level. Valid values are: 1 DEBUG 2 INFO 3 WARN 4 ERROR 5 SEVERE 6 FATAL |
alert.suppress | Indicates whether alert suppression is enabled for this scheduled search. |
alert.suppress.fields | Fields to use for suppression when doing per result alerting. Required if suppression is turned on and per result alerting is enabled. |
alert.suppress.period | Specifies the suppression period. Only valid if alert.suppress is enabled.
Use [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
alert.track | Specifies whether to track the actions triggered by this scheduled search.
auto - determine whether to track or not based on the tracking setting of each action, do not track scheduled searches that always trigger actions. true - force alert tracking. false - disable alert tracking for this search. |
alert_comparator | Specifies the comparison operator to apply between the search result and alert_threshold when determining whether to trigger the alert actions. Valid values include: greater than, less than, equal to, not equal to, rises by, drops by, rises by perc, drops by perc. |
alert_condition | A conditional search that is evaluated against the results of the saved search. Defaults to an empty string.
Alerts are triggered if the specified search yields a non-empty search result list. Note: If you specify an alert_condition, do not set counttype, relation, or quantity. |
alert_threshold | Valid values are: Integer[%]
Specifies the value to compare (see alert_comparator) before triggering the alert actions. If expressed as a percentage, indicates value to use when alert_comparator is set to "rises by perc" or "drops by perc." |
alert_type | What to base the alert on, overridden by alert_condition if it is specified. Valid values are: always, custom, number of events, number of hosts, number of sources. |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of the saved search. |
disabled | Indicates if this saved search is disabled. |
dispatch.buckets | The maximum number of timeline buckets. |
dispatch.earliest_time | A time string that specifies the earliest time for this search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.latest_time | A time string that specifies the latest time for this saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
dispatch.lookups | Indicates if lookups are enabled for this search. |
dispatch.max_count | The maximum number of results before finalizing the search. |
dispatch.max_time | Indicates the maximum amount of time (in seconds) before finalizing the search. |
earliest_time | For scheduled searches, display all the scheduled times starting from this time. |
is_scheduled | Indicates if this search is to be run on a schedule. |
is_visible | Indicates if this saved search appears in the visible saved search list. |
latest_time | A time string that specifies the latest time for this saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
listDefaultActionArgs | List default values of actions.*, even though some of the actions may not be specified in the saved search. |
max_concurrent | The maximum number of concurrent instances of this search the scheduler is allowed to run. |
next_scheduled_time | The time when the scheduler runs this search again. |
qualifiedSearch | The exact search command for this saved search. |
realtime_schedule | Controls the way the scheduler computes the next execution time of a scheduled search. If this value is set to 1, the scheduler bases its determination of the next scheduled search execution time on the current time.
If this value is set to 0, the scheduler bases its determination of the next scheduled search on the last search execution time. This is called continuous scheduling. If set to 0, the scheduler never skips scheduled execution periods. However, the execution of the saved search might fall behind depending on the scheduler load. Use continuous scheduling whenever you enable the summary index option. If set to 1, the scheduler might skip some execution periods to make sure that the scheduler is executing the searches running over the most recent time range. The scheduler tries to execute searches that have realtime_schedule set to 1 before it executes searches that have continuous scheduling (realtime_schedule = 0). |
request.ui_dispatch_app | A field used by Splunk Web to denote the app this search should be dispatched in. |
request.ui_dispatch_view | A field used by Splunk Web to denote the view this search should be displayed in. |
restart_on_searchpeer_add | Indicates whether to restart a real-time search managed by the scheduler when a search peer becomes available for this saved search.
Note: The peer can be a newly added peer or a peer down and now available. |
run_on_startup | Indicates whether this search runs on startup. If it does not run on startup, it runs at the next scheduled time.
Splunk recommends that you set run_on_startup to true for scheduled searches that populate lookup tables. |
scheduled_times | The times when the scheduler runs the search. |
search | Search expression to filter the response. The response matches field values against the search expression. For example:
search=foo matches any object that has "foo" as a substring in a field. search=field_name%3Dfield_value restricts the match to a single field. URI-encoding is required in this example. |
vsid | The viewstate id associated with the Splunk Web view listed in 'displayview'.
Matches to a stanza in viewstates.conf. |
Application usage
Specify a time range for the data returned using earliest_time and latest_time parameters.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/saved/searches/_ScheduledView__dashboard_live/scheduled_times --get -d earliest_time=-5h -d latest_time=-3h
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://localhost:8089/services/saved/searches</id> <updated>2011-12-02T11:12:55-08:00</updated> <generator version="108769"/> <author> <name>Splunk</name> </author> <link href="/services/saved/searches/_new" rel="create"/> <link href="/services/saved/searches/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>_ScheduledView__dashboard_live</title> <id>https://localhost:8089/servicesNS/admin/search/saved/searches/_ScheduledView__dashboard_live</id> <updated>2011-12-02T11:12:55-08:00</updated> <link href="/servicesNS/admin/search/saved/searches/_ScheduledView__dashboard_live" rel="alternate"/> <author> <name>admin</name> </author> <!-- opensearch nodes elided for brevity. --> <content type="text/xml"> <s:dict> <s:key name="action.email">1</s:key> <s:key name="action.email.auth_password">$1$o2rN8S6m+0YB</s:key> <s:key name="action.email.auth_username">myusername</s:key> . . . elided . . . <s:key name="action.email.pdfview">dashboard_live</s:key> . . . elided . . . <s:key name="action.email.subject">Splunk Alert: $name$</s:key> <s:key name="action.email.to">myusername@example.com</s:key> . . . elided . . . <s:key name="action.summary_index">0</s:key> <s:key name="action.summary_index._name">summary</s:key> . . . elided . . . <s:key name="actions">email</s:key> <s:key name="alert.digest_mode">1</s:key> <s:key name="alert.expires">24h</s:key> <s:key name="alert.severity">3</s:key> <s:key name="alert.suppress"></s:key> <s:key name="alert.suppress.fields"></s:key> <s:key name="alert.suppress.period"></s:key> <s:key name="alert.track">auto</s:key> <s:key name="alert_comparator"></s:key> <s:key name="alert_condition"></s:key> <s:key name="alert_threshold"></s:key> <s:key name="alert_type">always</s:key> <s:key name="cron_schedule">*/30 * * * *</s:key> <s:key name="description">scheduled search for view name=dashboard_live</s:key> <s:key name="disabled">0</s:key> <s:key name="dispatch.buckets">0</s:key> <s:key name="dispatch.earliest_time">1</s:key> <s:key name="dispatch.latest_time">2</s:key> <s:key name="dispatch.lookups">1</s:key> <s:key name="dispatch.max_count">500000</s:key> <s:key name="dispatch.max_time">0</s:key> . . . elided . . . <!-- eai:acl elided --> <s:key name="is_scheduled">1</s:key> <s:key name="is_visible">0</s:key> <s:key name="max_concurrent">1</s:key> <s:key name="next_scheduled_time">2011-12-02 11:30:00 PST</s:key> <s:key name="qualifiedSearch"> noop</s:key> <s:key name="realtime_schedule">1</s:key> <s:key name="request.ui_dispatch_app"></s:key> <s:key name="request.ui_dispatch_view"></s:key> <s:key name="restart_on_searchpeer_add">1</s:key> <s:key name="run_on_startup">0</s:key> <s:key name="scheduled_times"><s:list><s:item>1322836200</s:item><s:item>1322838000</s:item><s:item>1322839800</s:item><s:item>1322841600</s:item></s:list></s:key> <s:key name="search">| noop</s:key> <s:key name="vsid"></s:key> </s:dict> </content> </entry> </feed>
saved/searches/{name}/suppress
https://<host>:<mPort>/services/saved/searches/{name}/suppress
Get the {name}
saved search alert suppression state.
GET
Get the {name}
saved search alert suppression state.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
expiration | String | | Indicates the time the suppression period expires. |
key | String | | The suppression key used in field-based suppression. |
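A sketch of checking the suppression state for a single field-based suppression key, assuming a hypothetical host value of web01:
curl -k -u admin:pass --get https://localhost:8089/servicesNS/admin/search/saved/searches/MyAlert/suppress -d key=web01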
Returned values
Name | Description |
---|---|
earliest_time | For scheduled searches, display all the scheduled times starting from this time. |
expiration | Sets the period of time to show the alert in the dashboard. Defaults to 24h.
Uses [number][time-unit] to specify a time. For example: 60 = 60 seconds, 1m = 1 minute, 1h = 60 minutes = 1 hour. |
latest_time | A time string that specifies the latest time for this saved search. Can be a relative or absolute time.
If this value is an absolute time, use the dispatch.time_format to format the value. |
listDefaultActionArgs | List default values of actions.*, even though some of the actions may not be specified in the saved search. |
suppressed | Indicates if alert suppression is enabled for this search. |
suppressionKey | A combination of all the values of the suppression fields (or the MD5 hash of the combination), if fields were specified. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch/suppress
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>savedsearch</title> <id>https://localhost:8089/servicesNS/admin/search/saved/searches</id> <updated>2011-07-26T18:22:51-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/saved/searches/_new" rel="create"/> <link href="/servicesNS/admin/search/saved/searches/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>MySavedSearch</title> <id>https://localhost:8089/servicesNS/admin/search/saved/searches/MySavedSearch</id> <updated>2011-07-26T18:22:51-04:00</updated> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="list"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="edit"/> <link href="/servicesNS/admin/search/saved/searches/MySavedSearch" rel="remove"/> <content type="text/xml"> <s:dict> <!-- eai:acl elided --> <s:key name="expiration">13811</s:key> <s:key name="suppressed">1</s:key> <s:key name="suppressionKey">admin;search;MySavedSearch;;</s:key> </s:dict> </content> </entry> </feed>
scheduled/views
https://<host>:<mPort>/services/scheduled/views
Access views scheduled for PDF delivery. Scheduled views are dummy noop
scheduled saved searches that email a PDF of a dashboard.
GET
List all scheduled view objects.
Request parameters
Pagination and filtering parameters can be used with this method.
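For example, standard pagination parameters such as count and offset limit the listing. A sketch that returns only the first 10 scheduled view objects:
curl -k -u admin:pass --get https://localhost:8089/services/scheduled/views -d count=10 -d offset=0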
Returned values
Name | Description |
---|---|
action.email | Indicates the state of the email action. |
action.email.pdfview | Name of the view to send as a PDF. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether the search results are included in the email. The results can be attached or inline. |
action.email.to | List of recipient email addresses. Required if the email alert action is enabled. |
action.email.ttl | Specifies the minimum time-to-live, in seconds, of the search artifacts if this action is triggered. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
cron_schedule | The cron schedule to use for delivering the view. Scheduled views are dummy/noop scheduled saved searches that email a PDF version of a view.
For example: */5 * * * * causes the search to execute every 5 minutes. cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. |
description | Description of this scheduled view object. |
disabled | Indicates if the scheduled view is disabled. |
is_scheduled | Indicates if PDF delivery of this view is scheduled. |
next_scheduled_time | The next time when the view is delivered. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/scheduled/views
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://localhost:8089/servicesNS/admin/search/admin/scheduledviews</id> <updated>2011-07-27T16:27:55-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/admin/scheduledviews/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>_ScheduledView__MyView</title> <id>https://localhost:8089/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView</id> <updated>2011-07-27T16:27:55-04:00</updated> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView" rel="list"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView" rel="edit"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView" rel="remove"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/move" rel="move"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/disable" rel="disable"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/history" rel="history"/> <link href="/servicesNS/admin/search/admin/scheduledviews/_ScheduledView__MyView/notify" rel="notify"/> <content type="text/xml"> <s:dict> <s:key name="action.email">1</s:key> <s:key name="action.email.pdfview">MyView</s:key> <s:key name="action.email.sendpdf">1</s:key> <s:key name="action.email.sendresults"></s:key> <s:key name="action.email.to">email@example.com</s:key> <s:key name="action.email.ttl">10</s:key> <s:key name="cron_schedule">* * * * *</s:key> <s:key name="description">scheduled search for view name=MyView</s:key> <s:key name="disabled">0</s:key> <!-- eai:acl elided --> <s:key name="is_scheduled">1</s:key> <s:key name="next_scheduled_time">2011-07-27 16:28:00 EDT</s:key> </s:dict> </content> </entry> </feed>
scheduled/views/{name}
https://<host>:<mPort>/services/scheduled/views/{name}
Manage the {name}
scheduled view.
DELETE
Delete a scheduled view.
Request parameters
None
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass --request DELETE https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://localhost:8089/servicesNS/admin/search/admin/scheduledviews</id> <updated>2011-07-27T16:16:02-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/admin/scheduledviews/_reload" rel="_reload"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
GET
Access a scheduled view.
Request parameters
None
Returned values
Name | Description |
---|---|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings; however, you can set a clear-text password here, which is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | "BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) that is responsible for executing the action.
Generally the command is a template search pipeline that is realized with values from the saved search. To reference saved search field values, wrap them in dollar signs ($). For example, to reference the saved search name, use $name$; to reference the search, use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates.
Defaults to splunk@$LOCALHOST or whatever value is set in alert_actions.conf. |
action.email.hostname | Sets the hostname used in the web link (URL) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port that are configured within Splunk are used to construct the base of the URL. When this value begins with 'http://', it is used verbatim. NOTE: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a URL that can be externally referenced, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to the current hostname provided by the operating system, or if that fails, "localhost". When set to empty, the default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the global maximum number of search results to send when email.action is enabled. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies the email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live, in seconds, of the search artifacts if this action is triggered. If the integer is followed by the letter 'p', the ttl is interpreted as a multiple of the scheduled search period. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values are Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of this saved search for this view. |
disabled | Indicates if the saved search for this view is disabled. |
is_scheduled | Indicates if this search is to be run on a schedule. |
next_scheduled_time | The next time when the view is delivered. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://localhost:8089/servicesNS/admin/search/scheduled/views</id> <updated>2011-07-27T17:12:11-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/scheduled/views/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>_ScheduledView__MyView</title> <id>https://localhost:8089/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView</id> <updated>2011-07-27T17:12:11-04:00</updated> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="list"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="edit"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="remove"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/move" rel="move"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/disable" rel="disable"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/history" rel="history"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/notify" rel="notify"/> <content type="text/xml"> <s:dict> <s:key name="action.email">1</s:key> <s:key name="action.email.auth_password"></s:key> <s:key name="action.email.auth_username"></s:key> <s:key name="action.email.bcc"></s:key> <s:key name="action.email.cc"></s:key> <s:key name="action.email.command"> <![CDATA[$action.email.preprocess_results{default=""}$ | sendemail "server=$action.email.mailserver{default=localhost}$" "use_ssl=$action.email.use_ssl{default=false}$" "use_tls=$action.email.use_tls{default=false}$" "to=$action.email.to$" "cc=$action.email.cc$" "bcc=$action.email.bcc$" "from=$action.email.from{default=splunk@localhost}$" "subject=$action.email.subject{recurse=yes}$" "format=$action.email.format{default=csv}$" "sssummary=Saved Search [$name$]: $counttype$($results.count$)" "sslink=$results.url$" "ssquery=$search$" "ssname=$name$" "inline=$action.email.inline{default=False}$" "sendresults=$action.email.sendresults{default=False}$" "sendpdf=$action.email.sendpdf{default=False}$" "pdfview=$action.email.pdfview$" "searchid=$search_id$" "graceful=$graceful{default=True}$" maxinputs="$action.email.maxresults{default=10000}$" maxtime="$action.email.maxtime{default=5m}$"]]> </s:key> <s:key name="action.email.format">html</s:key> <s:key name="action.email.from">splunk</s:key> <s:key name="action.email.hostname"></s:key> <s:key name="action.email.inline">0</s:key> <s:key name="action.email.mailserver">localhost</s:key> <s:key name="action.email.maxresults">10000</s:key> <s:key name="action.email.maxtime">5m</s:key> <s:key name="action.email.pdfview">MyView</s:key> <s:key name="action.email.preprocess_results"></s:key> <s:key name="action.email.reportPaperOrientation">portrait</s:key> <s:key name="action.email.reportPaperSize">letter</s:key> 
<s:key name="action.email.reportServerEnabled">0</s:key> <s:key name="action.email.reportServerURL"></s:key> <s:key name="action.email.sendpdf">1</s:key> <s:key name="action.email.sendresults">0</s:key> <s:key name="action.email.subject">Splunk Alert: $name$</s:key> <s:key name="action.email.to">info@example.com</s:key> <s:key name="action.email.track_alert">1</s:key> <s:key name="action.email.ttl">10</s:key> <s:key name="action.email.use_ssl">0</s:key> <s:key name="action.email.use_tls">0</s:key> <s:key name="cron_schedule">* * * * *</s:key> <s:key name="description">scheduled search for view name=MyView</s:key> <s:key name="disabled">0</s:key> <!-- eai:acl elided --> <s:key name="eai:attributes"> <s:dict> <s:key name="optionalFields"> <s:list> <s:item>description</s:item> <s:item>disabled</s:item> <s:item>next_scheduled_time</s:item> </s:list> </s:key> <s:key name="requiredFields"> <s:list> <s:item>action.email.to</s:item> <s:item>cron_schedule</s:item> <s:item>is_scheduled</s:item> </s:list> </s:key> <s:key name="wildcardFields"> <s:list><s:item>action\.email.*</s:item></s:list> </s:key> </s:dict> </s:key> <s:key name="is_scheduled">1</s:key> <s:key name="next_scheduled_time">2011-07-27 17:13:00 EDT</s:key> </s:dict> </content> </entry> </feed>
POST
Update a scheduled view.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
action.email.to (required) | String | | Comma- or semicolon-separated list of email addresses to send the view to. |
action.email* | String | | Wildcard argument that accepts any email action. |
cron_schedule (required) | String | | The cron schedule to use for delivering the view. Scheduled views are dummy/noop scheduled saved searches that email a PDF version of a view. For example: */5 * * * * causes the search to execute every 5 minutes. cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. |
description | String | | User-readable description of this scheduled view object. |
disabled | Boolean | 0 | Whether this object is enabled or disabled. |
is_scheduled (required) | Boolean | | Whether this PDF delivery should be scheduled. |
next_scheduled_time | String | | The next time when the view is delivered. Ignored on edit, present only for backward compatibility. |
Returned values
Name | Description |
---|---|
action.email | Indicates the status of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings, however you can set a clear text password here that is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port which are configured within splunk are used to construct the base of the url. When this value begins with 'http://', it is used verbatim. NOTE: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct an externally referencable url, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to current hostname provided by the operating system, or if that fails "localhost". When set to empty, default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the maximum number of search results sent using alerts. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live, in seconds, of the search artifacts if this action is triggered. If the integer is followed by p, the value is interpreted as a number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values: Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of the saved search for this view. |
disabled | Indicates if the saved search for this view is disabled. |
is_scheduled | Indicates if this search is to be run on a schedule. |
next_scheduled_time | The next time when the view is delivered. |
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView -d action.email.to="info@example.com" -d cron_schedule="0 * * * *" -d is_scheduled=1 -d description="New description"
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://localhost:8089/servicesNS/admin/search/scheduled/views</id> <updated>2011-07-27T17:59:32-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/scheduled/views/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>_ScheduledView__MyView</title> <id>https://localhost:8089/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView</id> <updated>2011-07-27T17:59:32-04:00</updated> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="alternate"/> <author> <name>admin</name> </author> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="list"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/_reload" rel="_reload"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="edit"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView" rel="remove"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/move" rel="move"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/disable" rel="disable"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/dispatch" rel="dispatch"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/history" rel="history"/> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__MyView/notify" rel="notify"/> <content type="text/xml"> <s:dict> <s:key name="action.email">1</s:key> <s:key name="action.email.auth_password"></s:key> <s:key name="action.email.auth_username"></s:key> <s:key name="action.email.bcc"></s:key> <s:key name="action.email.cc"></s:key> <s:key name="action.email.command"> <![CDATA[$action.email.preprocess_results{default=""}$ | sendemail "server=$action.email.mailserver{default=localhost}$" "use_ssl=$action.email.use_ssl{default=false}$" "use_tls=$action.email.use_tls{default=false}$" "to=$action.email.to$" "cc=$action.email.cc$" "bcc=$action.email.bcc$" "from=$action.email.from{default=splunk@localhost}$" "subject=$action.email.subject{recurse=yes}$" "format=$action.email.format{default=csv}$" "sssummary=Saved Search [$name$]: $counttype$($results.count$)" "sslink=$results.url$" "ssquery=$search$" "ssname=$name$" "inline=$action.email.inline{default=False}$" "sendresults=$action.email.sendresults{default=False}$" "sendpdf=$action.email.sendpdf{default=False}$" "pdfview=$action.email.pdfview$" "searchid=$search_id$" "graceful=$graceful{default=True}$" maxinputs="$action.email.maxresults{default=10000}$" maxtime="$action.email.maxtime{default=5m}$"]]> </s:key> <s:key name="action.email.format">html</s:key> <s:key name="action.email.from">splunk</s:key> <s:key name="action.email.hostname"></s:key> <s:key name="action.email.inline">0</s:key> <s:key name="action.email.mailserver">localhost</s:key> <s:key name="action.email.maxresults">10000</s:key> <s:key name="action.email.maxtime">5m</s:key> <s:key name="action.email.pdfview">MyView</s:key> <s:key name="action.email.preprocess_results"></s:key> <s:key name="action.email.reportPaperOrientation">portrait</s:key> <s:key name="action.email.reportPaperSize">letter</s:key> 
<s:key name="action.email.reportServerEnabled">0</s:key> <s:key name="action.email.reportServerURL"></s:key> <s:key name="action.email.sendpdf">1</s:key> <s:key name="action.email.sendresults">0</s:key> <s:key name="action.email.subject">Splunk Alert: $name$</s:key> <s:key name="action.email.to">info@example.com</s:key> <s:key name="action.email.track_alert">1</s:key> <s:key name="action.email.ttl">10</s:key> <s:key name="action.email.use_ssl">0</s:key> <s:key name="action.email.use_tls">0</s:key> <s:key name="cron_schedule">0 * * * *</s:key> <s:key name="description">New Description</s:key> <s:key name="disabled">0</s:key> <!-- eai:acl elided --> <s:key name="is_scheduled">1</s:key> <s:key name="next_scheduled_time">2011-07-27 18:00:00 EDT</s:key> </s:dict> </content> </entry> </feed>
scheduled/views/{name}/dispatch
https://<host>:<mPort>/services/scheduled/views/{name}/dispatch
Dispatch the scheduled search associated with the {name} scheduled view.
POST
Dispatch the scheduled search associated with the {name} scheduled view.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
args.* | String | | Wildcard argument that accepts any saved search template argument, such as args.username=foobar when the search is search $username$. |
dispatch.* | String | | Wildcard argument that accepts any dispatch-related argument. |
dispatch.now | Boolean | | Dispatch the search as if the specified time for this parameter was the current time. |
force_dispatch | Boolean | | Indicates whether to start a new search even if another instance of this search is already running. |
now | String | | [Deprecated] Use dispatch.now. |
trigger_actions | Boolean | | Indicates whether to trigger alert actions. |
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView/dispatch -d trigger_actions=1
XML Response
<?xml version='1.0' encoding='UTF-8'?> <response><sid>admin__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311805021_c24ff1ea77ad714b</sid></response>
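The dispatch response is a small XML document containing only a <sid> element, so the search ID is easy to read programmatically. The following is a minimal sketch, assuming the requests library and the placeholder view name and credentials from the example above.

```python
# A minimal sketch, assuming the "requests" library and the placeholder view name and
# credentials from the example above. The dispatch response is the small XML document
# shown above, so the <sid> value can be read with the standard library parser.
import xml.etree.ElementTree as ET
import requests

resp = requests.post(
    "https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView/dispatch",
    auth=("admin", "pass"),
    verify=False,  # equivalent to curl -k
    data={"trigger_actions": 1},
)
resp.raise_for_status()
sid = ET.fromstring(resp.text).findtext("sid")  # <response><sid>...</sid></response>
print("Dispatched search job:", sid)
```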
scheduled/views/{name}/history
https://<host>:<mPort>/services/scheduled/views/{name}/history
List search jobs used to render the {name} scheduled view.
GET
List search jobs used to render the {name} scheduled view.
Request parameters
None
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/scheduled/views/MyView/history
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>_ScheduledView__MyView</title> <id>https://localhost:8089/servicesNS/admin/search/scheduled/views</id> <updated>2011-07-27T16:25:22-04:00</updated> <generator version="104601"/> <author> <name>Splunk</name> </author> <link href="/servicesNS/admin/search/scheduled/views/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a</title> <id>https://localhost:8089/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a</id> <updated>2011-07-27T16:25:15-04:00</updated> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a" rel="alternate"/> <author> <name>admin</name> </author> <published>2011-07-27T16:25:15-04:00</published> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a" rel="list"/> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a/_reload" rel="_reload"/> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a" rel="edit"/> <link href="/servicesNS/nobody/search/search/jobs/scheduler__admin__search_X1NjaGVkdWxlZFZpZXdfX015Vmlldw_at_1311798300_842d7ca298ab521a" rel="remove"/> <content type="text/xml"> <s:dict> <!-- eai:acl elided --> </s:dict> </content> </entry> </feed>
scheduled/views/{name}/reschedule
https://<host>:<mPort>/services/scheduled/views/{name}/reschedule
Schedule the {name} view PDF delivery.
POST
Schedule the {name} view PDF delivery.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
schedule_time | String | | Absolute or relative schedule time. |
Returned values
None
Application usage
If schedule_time is not specified, then it is assumed that the delivery should occur as soon as possible.
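As a minimal sketch of that behavior, the following Python requests call (using the placeholder view name and credentials from the example below) omits schedule_time so that delivery is requested as soon as possible; include schedule_time in the form body to request a specific delivery time instead.

```python
# A minimal sketch using the "requests" library and the placeholder view name from the
# example below. No schedule_time is sent, so delivery is requested as soon as possible;
# include schedule_time in the form body to request a specific delivery time instead.
import requests

resp = requests.post(
    "https://localhost:8089/services/scheduled/views/_ScheduledView__dashboard2/reschedule",
    auth=("admin", "pass"),
    verify=False,  # equivalent to curl -k
)
resp.raise_for_status()
```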
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/scheduled/views/_ScheduledView__dashboard2/reschedule -d schedule_time=2013-02-15T14:11:01Z
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://localhost:8089/services/scheduled/views</id> <updated>2012-10-02T08:48:18-07:00</updated> <generator build="138753" version="5.0"/> <author> <name>Splunk</name> </author> <link href="/services/scheduled/views/_reload" rel="_reload"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
scheduled/views/{name}/scheduled_times
https://<host>:<mPort>/services/scheduled/views/{name}/scheduled_times
Get scheduled view times.
GET
Get scheduled view times.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
earliest_time | String | | Absolute or relative earliest time. |
latest_time | String | | Absolute or relative latest time. |
Returned values
Name | Description |
---|---|
action.email | Indicates the state of the email action. |
action.email.auth_password | The password to use when authenticating with the SMTP server. Normally this value is set when editing the email settings, however you can set a clear text password here that is encrypted on the next restart.
Defaults to empty string. |
action.email.auth_username | The username to use when authenticating with the SMTP server. If this is empty string, no authentication is attempted. Defaults to empty string.
Note: Your SMTP server might reject unauthenticated emails. |
action.email.bcc | BCC email address to use if action.email is enabled. |
action.email.cc | CC email address to use if action.email is enabled. |
action.email.command | The search command (or pipeline) which is responsible for executing the action.
Generally the command is a template search pipeline which is realized with values from the saved search. To reference saved search field values wrap them in $, for example to reference the savedsearch name use $name$, to reference the search use $search$. |
action.email.format | Specify the format of text in the email. This value also applies to any attachments.
Valid values: (plain | html | raw | csv) |
action.email.from | Email address from which the email action originates. |
action.email.hostname | Sets the hostname used in the web link (url) sent in email actions.
This value accepts two forms: hostname (for example, splunkserver, splunkserver.example.com) protocol://hostname:port (for example, http://splunkserver:8000, https://splunkserver.example.com:443) When this value is a simple hostname, the protocol and port which are configured within splunk are used to construct the base of the url. When this value begins with 'http://', it is used verbatim. NOTE: This means the correct port must be specified if it is not the default port for http or https. This is useful in cases when the Splunk server is not aware of how to construct a url that can be externally referenced, such as SSO environments, other proxies, or when the server hostname is not generally resolvable. Defaults to current hostname provided by the operating system, or if that fails "localhost". When set to empty, default behavior is used. |
action.email.inline | Indicates whether the search results are contained in the body of the email.
Results can be either inline or attached to an email. See action.email.sendresults. |
action.email.mailserver | Set the address of the MTA server to be used to send the emails.
Defaults to <LOCALHOST> (or whatever is set in alert_actions.conf). |
action.email.maxresults | Sets the maximum number of search results sent using alerts. |
action.email.maxtime | Specifies the maximum amount of time the execution of an email action takes before the action is aborted. |
action.email.pdfview | The name of the view to deliver if sendpdf is enabled. |
action.email.preprocess_results | Search string to preprocess results before emailing them. Defaults to empty string (no preprocessing).
Usually the preprocessing consists of filtering out unwanted internal fields. |
action.email.reportPaperOrientation | Specifies the paper orientation: portrait or landscape. |
action.email.reportPaperSize | Specifies the paper size for PDFs. Defaults to letter.
Valid values: (letter | legal | ledger | a2 | a3 | a4 | a5) |
action.email.reportServerEnabled | Not supported. |
action.email.reportServerURL | Not supported. |
action.email.sendpdf | Indicates whether to create and send the results as a PDF. |
action.email.sendresults | Indicates whether to attach the search results in the email.
Results can be either attached or inline. See action.email.inline. |
action.email.subject | Specifies an email subject.
Defaults to SplunkAlert-<savedsearchname>. |
action.email.to | List of recipient email addresses. Required if this search is scheduled and the email alert action is enabled. |
action.email.track_alert | Indicates whether the execution of this action signifies a trackable alert. |
action.email.ttl | Specifies the minimum time-to-live, in seconds, of the search artifacts if this action is triggered. If the integer is followed by p, the value is interpreted as a number of scheduled periods. Defaults to 86400 (24 hours).
If no actions are triggered, the artifacts have their ttl determined by dispatch.ttl in savedsearches.conf. Valid values: Integer[p]. |
action.email.use_ssl | Indicates whether to use SSL when communicating with the SMTP server. |
action.email.use_tls | Indicates whether to use TLS (transport layer security) when communicating with the SMTP server (starttls). |
action.email.width_sort_columns | Indicates whether columns should be sorted from least wide to most wide, left to right.
Only valid if format=text. |
cron_schedule | The cron schedule to execute this search. For example: */5 * * * * causes the search to execute every 5 minutes.
cron lets you use standard cron notation to define your scheduled search interval. In particular, cron can accept this type of notation: 00,20,40 * * * *, which runs the search every hour at hh:00, hh:20, hh:40. Along the same lines, a cron of 03,23,43 * * * * runs the search every hour at hh:03, hh:23, hh:43. Splunk recommends that you schedule your searches so that they are staggered over time. This reduces system load. Running all of them every 20 minutes (*/20) means they would all launch at hh:00 (20, 40) and might slow your system every 20 minutes. Valid values: cron string |
description | Description of the saved search for this view. |
disabled | Indicates if the saved search for this view is disabled.
Disabled saved searches are not visible in Splunk Web. |
is_scheduled | Indicates if this search is to be run on a schedule. |
next_scheduled_time | The next time when the view is delivered. |
Application usage
Specify a time range for the data returned using earliest_time and latest_time parameters.
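A minimal Python sketch of the same request, assuming the requests library: earliest_time and latest_time are passed as query parameters, equivalent to the --get -d arguments in the curl example that follows. The view name and credentials are placeholders from that example.

```python
# A minimal sketch, assuming the "requests" library. Passing earliest_time and
# latest_time as query parameters is equivalent to the --get -d arguments in the
# curl example that follows; the view name and credentials are placeholders.
import requests

resp = requests.get(
    "https://localhost:8089/services/scheduled/views/_ScheduledView__dashboard_live/scheduled_times",
    auth=("admin", "admin"),
    verify=False,  # equivalent to curl -k
    params={"earliest_time": "-5h", "latest_time": "-3h"},
)
resp.raise_for_status()
print(resp.text)  # Atom XML feed, as in the example response below
```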
Example request and response
XML Request
curl -k -u admin:admin https://localhost:8089/services/scheduled/views/_ScheduledView__dashboard_live/scheduled_times --get -d earliest_time=-5h -d latest_time=-3h
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduledviews</title> <id>https://wma-mbp15:8089/services/scheduled/views</id> <updated>2011-12-01T14:40:18-08:00</updated> <generator version="112383"/> <author> <name>Splunk</name> </author> <link href="/services/scheduled/views/_reload" rel="_reload"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>_ScheduledView__dashboard_live</title> <id>https://wma-mbp15:8089/servicesNS/admin/search/scheduled/views/_ScheduledView__dashboard_live</id> <updated>2011-12-01T14:40:18-08:00</updated> <link href="/servicesNS/admin/search/scheduled/views/_ScheduledView__dashboard_live" rel="alternate"/> <author> <name>admin</name> </author> <!-- opensearch nodes elided for brevity. --> <content type="text/xml"> <s:dict> <s:key name="action.email">1</s:key> <s:key name="action.email.auth_password"></s:key> <s:key name="action.email.auth_username"></s:key> <s:key name="action.email.bcc"></s:key> <s:key name="action.email.cc"></s:key> <s:key name="action.email.command"><![CDATA[$action.email.preprocess_results{default=""}$ | sendemail "server=$action.email.mailserver{default=localhost}$" "use_ssl=$action.email.use_ssl{default=false}$" "use_tls=$action.email.use_tls{default=false}$" "to=$action.email.to$" "cc=$action.email.cc$" "bcc=$action.email.bcc$" "from=$action.email.from{default=splunk@localhost}$" "subject=$action.email.subject{recurse=yes}$" "format=$action.email.format{default=csv}$" "sssummary=Saved Search [$name$]: $counttype$($results.count$)" "sslink=$results.url$" "ssquery=$search$" "ssname=$name$" "inline=$action.email.inline{default=False}$" "sendresults=$action.email.sendresults{default=False}$" "sendpdf=$action.email.sendpdf{default=False}$" "pdfview=$action.email.pdfview$" "searchid=$search_id$" "width_sort_columns=$action.email.width_sort_columns$" "graceful=$graceful{default=True}$" maxinputs="$action.email.maxresults{default=10000}$" maxtime="$action.email.maxtime{default=5m}$"]]></s:key> <s:key name="action.email.format">html</s:key> <s:key name="action.email.from">splunk</s:key> <s:key name="action.email.hostname"></s:key> <s:key name="action.email.inline">0</s:key> <s:key name="action.email.mailserver">localhost</s:key> <s:key name="action.email.maxresults">10000</s:key> <s:key name="action.email.maxtime">5m</s:key> <s:key name="action.email.pdfview">dashboard_live</s:key> <s:key name="action.email.preprocess_results"></s:key> <s:key name="action.email.reportPaperOrientation">portrait</s:key> <s:key name="action.email.reportPaperSize">letter</s:key> <s:key name="action.email.reportServerEnabled">1</s:key> <s:key name="action.email.reportServerURL"> </s:key> <s:key name="action.email.sendpdf">1</s:key> <s:key name="action.email.sendresults">0</s:key> <s:key name="action.email.subject">Splunk Alert: $name$</s:key> <s:key name="action.email.to">wma@splunk.com</s:key> <s:key name="action.email.track_alert">1</s:key> <s:key name="action.email.ttl">10</s:key> <s:key name="action.email.use_ssl">0</s:key> <s:key name="action.email.use_tls">0</s:key> <s:key name="action.email.width_sort_columns">1</s:key> <s:key name="cron_schedule">/5 * * * *</s:key> <s:key name="description">scheduled search for view name=dashboard_live</s:key> <s:key name="disabled">0</s:key> <!-- eai:acl elided --> <s:key 
name="is_scheduled">1</s:key> <s:key name="next_scheduled_time">2011-12-01 15:00:00 PST</s:key> </s:dict> </content> </entry> </feed>
search/concurrency-settings
https://<host>:<mPort>/services/search/concurrency-settings
GET
List search concurrency settings.
Request parameters
None
Returned values
Name | Type | Description |
---|---|---|
max_searches_perc | Number | The maximum number of searches the scheduler can run as a percentage of the maximum number of concurrent searches. Default: 50%. |
auto_summary_perc | Number | The maximum number of concurrent searches to be allocated for auto summarization, as a percentage of the concurrent searches that the scheduler can run. Default: 50. |
max_searches_per_cpu | Number | The maximum number of concurrent historical searches allowed per cpu. Default: 1. |
base_max_searches | Number | A baseline constant added to the maximum number of searches, which is computed as a multiplier of the number of CPUs. Default: 6. |
max_rt_search_multiplier | Number | A number by which the maximum number of historical searches is multiplied to determine the maximum number of concurrent real-time searches. Note: The maximum number of real-time searches is computed as max_rt_searches = max_rt_search_multiplier x max_hist_searches |
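The following worked sketch shows how these settings combine, based on the formulas described in this table. The helper function and the example CPU count are illustrative assumptions, not a reproduction of splunkd's exact internal accounting.

```python
# A worked sketch of how these settings combine, based on the formulas described in
# this table. The helper function and the example CPU count are illustrative
# assumptions, not a reproduction of splunkd's exact internal accounting.
def effective_search_limits(num_cpus,
                            max_searches_per_cpu=1,
                            base_max_searches=6,
                            max_rt_search_multiplier=1,
                            max_searches_perc=50,
                            auto_summary_perc=50):
    # Concurrent historical searches: per-CPU limit times CPU count, plus the baseline constant.
    max_hist_searches = max_searches_per_cpu * num_cpus + base_max_searches
    # Concurrent real-time searches: multiplier applied to the historical limit.
    max_rt_searches = max_rt_search_multiplier * max_hist_searches
    # Scheduler's share, and the auto-summarization share of the scheduler's allocation.
    scheduler_limit = max_hist_searches * max_searches_perc // 100
    auto_summary_limit = scheduler_limit * auto_summary_perc // 100
    return max_hist_searches, max_rt_searches, scheduler_limit, auto_summary_limit

# For example, on an 8-CPU search head with the defaults shown above:
print(effective_search_limits(num_cpus=8))  # (14, 14, 7, 3)
```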
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/concurrency-settings
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>search-concurrency-settings-handler</title> <id>https://localhost:8089/services/search/concurrency-settings</id> <updated>2019-04-21T14:46:39-07:00</updated> <generator build="efdccca30d13" version="7.3.0"/> <author> <name>Splunk</name> </author> <opensearch:totalResults>2</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>scheduler</title> <id>https://localhost:8089/services/search/concurrency-settings/scheduler</id> <updated>1969-12-31T16:00:00-08:00</updated> <link href="/services/search/concurrency-settings/scheduler" rel="alternate"/> <author> <name>system</name> </author> <link href="/services/search/concurrency-settings/scheduler" rel="list"/> <content type="text/xml"> <s:dict> <s:key name="auto_summary_perc">50</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app"></s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">system</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list/> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="max_searches_perc">50</s:key> </s:dict> </content> </entry> <entry> <title>search</title> <id>https://localhost:8089/services/search/concurrency-settings/search</id> <updated>1969-12-31T16:00:00-08:00</updated> <link href="/services/search/concurrency-settings/search" rel="alternate"/> <author> <name>system</name> </author> <link href="/services/search/concurrency-settings/search" rel="list"/> <content type="text/xml"> <s:dict> <s:key name="base_max_searches">10</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app"></s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">system</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list/> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="max_rt_search_multiplier">1</s:key> <s:key name="max_searches_per_cpu">1</s:key> </s:dict> </content> </entry> </feed>
search/concurrency-settings/scheduler
https://<host>:<mPort>/services/search/concurrency-settings/scheduler
Edit settings that determine concurrent scheduled search limits.
Authentication and Authorization
The edit_search_concurrency_scheduled capability is required for this endpoint.
POST
Edit settings that determine concurrent scheduled search limits.
Request parameters
Name | Type | Description |
---|---|---|
max_searches_perc | Number | The maximum number of searches the scheduler can run as a percentage of the maximum number of concurrent searches. Default: 50. |
auto_summary_perc | Number | The maximum number of concurrent searches to be allocated for auto summarization, as a percentage of the concurrent searches that the scheduler can run. Default: 50. |
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/concurrency-settings/scheduler -d max_searches_perc=40
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>search-concurrency-settings-handler</title> <id>https://localhost:8089/services/search/concurrency-settings</id> <updated>2019-04-21T17:17:30-07:00</updated> <generator build="efdccca30d13" version="7.3.0"/> <author> <name>Splunk</name> </author> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>scheduler</title> <id>https://localhost:8089/services/search/concurrency-settings/scheduler</id> <updated>1969-12-31T16:00:00-08:00</updated> <link href="/services/search/concurrency-settings/scheduler" rel="alternate"/> <author> <name>system</name> </author> <link href="/services/search/concurrency-settings/scheduler" rel="list"/> <link href="/services/search/concurrency-settings/scheduler" rel="edit"/> <content type="text/xml"> <s:dict> <s:key name="auto_summary_perc">50</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app"></s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">system</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="max_searches_perc">40</s:key> </s:dict> </content> </entry> </feed>
search/concurrency-settings/search
https://<host>:<mPort>/services/search/concurrency-settings/search
Edit settings that determine the maximum number of concurrent searches.
Authentication and Authorization
The edit_search_concurrency_all capability is required for this endpoint.
POST
Edit settings that determine the maximum number of concurrent searches.
Request parameters
Name | Type | Description |
---|---|---|
max_searches_per_cpu | Number | The maximum number of concurrent historical searches allowed per cpu. Default: 1. |
base_max_searches | Number | A baseline constant added to the maximum number of searches, which is computed as a multiplier of the number of CPUs. Default: 6. |
max_rt_search_multiplier | Number | A number by which the maximum number of historical searches is multiplied to determine the maximum number of concurrent real-time searches. Note: The maximum number of real-time searches is computed as max_rt_searches = max_rt_search_multiplier x max_hist_searches |
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/concurrency-settings/search -d base_max_searches=5 -d max_searches_per_cpu=4
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>search-concurrency-settings-handler</title> <id>https://localhost:8089/services/search/concurrency-settings</id> <updated>2019-04-21T17:31:19-07:00</updated> <generator build="efdccca30d13" version="7.3.0"/> <author> <name>Splunk</name> </author> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>search</title> <id>https://localhost:8089/services/search/concurrency-settings/search</id> <updated>1969-12-31T16:00:00-08:00</updated> <link href="/services/search/concurrency-settings/search" rel="alternate"/> <author> <name>system</name> </author> <link href="/services/search/concurrency-settings/search" rel="list"/> <link href="/services/search/concurrency-settings/search" rel="edit"/> <content type="text/xml"> <s:dict> <s:key name="base_max_searches">5</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app"></s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">system</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>*</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="max_rt_search_multiplier">1</s:key> <s:key name="max_searches_per_cpu">4</s:key> </s:dict> </content> </entry> </feed>
search/jobs
https://<host>:<mPort>/services/search/jobs
List search jobs.
For more information about this and other search endpoints, see Creating searches using the REST API in the REST API Tutorial.
GET
Get details of all current searches.
Request parameters
Pagination and filtering parameters can be used with this method.
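As a minimal sketch, the following Python requests call (placeholder credentials) combines the count and offset pagination parameters with a search filter on job properties, mirroring the curl example further below, and asks for JSON output to simplify reading the returned properties.

```python
# A minimal sketch, assuming the "requests" library and placeholder credentials.
# count and offset page through the job list, search filters on job properties
# (mirroring the curl example further below), and output_mode=json returns JSON
# instead of the default Atom XML.
import requests

resp = requests.get(
    "https://localhost:8089/services/search/jobs",
    auth=("admin", "pass"),
    verify=False,  # equivalent to curl -k
    params={
        "output_mode": "json",
        "count": 10,                 # page size
        "offset": 0,                 # starting entry
        "search": "eventCount>100",  # filter on job properties
    },
)
resp.raise_for_status()
for entry in resp.json()["entry"]:
    print(entry["name"], entry["content"]["dispatchState"])
```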
Returned values
Name | Description |
---|---|
cursorTime | The earliest time from which no events are later scanned. Can be used to indicate progress. See description for doneProgress. |
custom | Custom job property. (See the search/jobs POST request for an example of how to create a custom property.) |
delegate | For saved searches, specifies jobs that were started by the user. Defaults to scheduler. |
diskUsage | The total amount of disk space used, in bytes. |
dispatchState | The state of the search. Can be any of QUEUED, PARSING, RUNNING, FINALIZING, PAUSE, INTERNAL_CANCEL, USER_CANCEL, BAD_INPUT_CANCEL, QUIT, FAILED, DONE. |
doneProgress | A number between 0 and 1.0 that indicates the approximate progress of the search.
doneProgress = (latestTime – cursorTime) / (latestTime – earliestTime) |
dropCount | For real-time searches only, the number of possible events that were dropped due to the rt_queue_size (default to 100000). |
earliestTime | A time string that sets the earliest (inclusive) time bound for the search. Can be used to indicate progress. See description for doneProgress. |
eventAvailableCount | The number of events that are available for export. |
eventCount | The number of events returned by the search. |
eventFieldCount | The number of fields found in the search results. |
eventIsStreaming | Indicates if the events of this search are being streamed. |
eventIsTruncated | Indicates if events of the search are not stored, making them unavailable from the events endpoint for the search. |
eventPreviewableCount | Number of in-memory events that are not yet committed to disk. Returned if timeline_events_preview is enabled in limits.conf. |
eventSearch | Subset of the entire search that is before any transforming commands. The timeline and events endpoint represents the result of this part of the search. |
eventSorting | Indicates if the events of this search are sorted, and in which order.
asc = ascending; desc = descending; none = not sorted |
isDone | Indicates if the search has completed. |
isEventPreviewEnabled | Indicates if the timeline_events_preview setting is enabled in limits.conf. |
isFailed | Indicates if there was a fatal error executing the search. For example, invalid search string syntax. |
isFinalized | Indicates if the search was finalized (stopped before completion). |
isPaused | Indicates if the search is paused. |
isPreviewEnabled | Indicates if previews are enabled. |
isRealTimeSearch | Indicates if the search is a real time search. |
isRemoteTimeline | Indicates if the remote timeline feature is enabled. |
isSaved | Indicates that the search job is saved, storing search artifacts on disk for 7 days from the last time that the job was viewed or touched. Add or edit the default_save_ttl value in limits.conf to override the default value of 7 days. |
isSavedSearch | Indicates if this is a saved search run using the scheduler. |
isZombie | Indicates if the process running the search is dead, but with the search not finished. |
keywords | All positive keywords used by this search. A positive keyword is a keyword that is not in a NOT clause. |
label | Custom name created for this search. |
latestTime | A time string that sets the latest (exclusive) time bound for the search. Can be used to indicate progress. See description for doneProgress. |
messages | Errors and debug messages. |
numPreviews | Number of previews generated so far for this search job. |
performance | A representation of the execution costs. |
priority | An integer between 0-10 that indicates the search priority. The priority is mapped to the OS process priority. The higher the number, the higher the priority. The priority can be changed using the action parameter for POST search/jobs/{search_id}/control. Note: In *nix systems, non-privileged users can only reduce the priority of a process. |
remoteSearch | The search string that is sent to every search peer. |
reportSearch | If reporting commands are used, the reporting search. |
request | GET arguments that the search sends to splunkd. |
resultCount | The total number of results returned by the search. In other words, this is the subset of scanned events (represented by the scanCount) that actually matches the search terms. |
resultIsStreaming | Indicates if the final results of the search are available using streaming (for example, no transforming operations). |
resultPreviewCount | The number of result rows in the latest preview results. |
runDuration | Time in seconds that the search took to complete. |
scanCount | The number of events that are scanned or read off disk. |
searchEarliestTime | Specifies the earliest time for a search, as specified in the search command rather than the earliestTime parameter. It does not snap to the indexed data time bounds for all-time searches (something that earliestTime/latestTime does). |
searchLatestTime | Specifies the latest time for a search, as specified in the search command rather than the latestTime parameter. It does not snap to the indexed data time bounds for all-time searches (something that earliestTime/latestTime does). |
searchProviders | A list of all the search peers that were contacted. |
sid | The search ID number. |
statusBuckets | Maximum number of timeline buckets. |
ttl | The time to live, or time before the search job expires after it completes. |
Application usage
The user ID is implied by the authentication to the call.
Information returned for each entry includes the search job properties, such as eventCount (number of events returned), runDuration (time the search took to complete), and others. The parameters to POST /search/jobs provide details on search job properties when creating a search. Search job properties are also described in Search job properties in the Knowledge Manager Manual.
You can specify optional arguments based on the search job properties to filter the entries returned. For example, specify search=eventCount>100 as an argument to the GET operation to return searches with event counts greater than 100.
The dispatchState property is of particular interest to determine the state of a search, and can contain the following values:
QUEUED PARSING RUNNING FINALIZING DONE PAUSE INTERNAL_CANCEL USER_CANCEL BAD_INPUT_CANCEL QUIT FAILED
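Because dispatchState moves through these values as a job progresses, a common pattern is to poll the job until it reaches a terminal state. The following is a minimal sketch, assuming the requests library; the sid, credentials, and polling interval are placeholders, and output_mode=json is used only to make the property easy to read.

```python
# A minimal polling sketch, assuming the "requests" library and a search ID (sid)
# returned by POST search/jobs. The sid, credentials, and polling interval are
# placeholders; output_mode=json is used only to make dispatchState easy to read.
import time
import requests

def wait_for_job(sid, poll_interval=2):
    terminal = {"DONE", "FAILED", "USER_CANCEL", "INTERNAL_CANCEL", "BAD_INPUT_CANCEL"}
    while True:
        resp = requests.get(
            f"https://localhost:8089/services/search/jobs/{sid}",
            auth=("admin", "pass"),
            verify=False,  # equivalent to curl -k
            params={"output_mode": "json"},
        )
        resp.raise_for_status()
        state = resp.json()["entry"][0]["content"]["dispatchState"]
        if state in terminal:
            return state
        time.sleep(poll_interval)
```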
This operation also returns performance information for the search.
For more information refer to "View search job properties with the Search Job Inspector" in the Knowledge Manager Manual.
For more information on searches, see the Search Reference.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs --get -d search="eventCount>100"
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>jobs</title> <id>https://localhost:8089/services/search/jobs</id> <updated>2011-06-21T10:12:22-07:00</updated> <generator version="100492"/> <author> <name>Splunk</name> </author> <opensearch:totalResults>8</opensearch:totalResults> <opensearch:itemsPerPage>0</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <entry> <title>search index=_internal (source=*/metrics.log* OR source=*\\metrics.log*) group=per_sourcetype_thruput | chart sum(kb) by series | sort -sum(kb) | head 5</title> <id>https://localhost:8089/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4</id> <updated>2011-06-21T10:10:31.000-07:00</updated> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4" rel="alternate"/> <published>2011-06-21T10:10:23.000-07:00</published> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/search.log" rel="log"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/events" rel="events"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/results" rel="results"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/results_preview" rel="results_preview"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/timeline" rel="timeline"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/summary" rel="summary"/> <link href="/services/search/jobs/scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4/control" rel="control"/> <author> <name>splunk-system-user</name> </author> <content type="text/xml"> <s:dict> <s:key name="cursorTime">1969-12-31T16:00:00.000-08:00</s:key> <s:key name="delegate">scheduler</s:key> <s:key name="diskUsage">73728</s:key> <s:key name="dispatchState">DONE</s:key> <s:key name="doneProgress">1.00000</s:key> <s:key name="dropCount">0</s:key> <s:key name="earliestTime">2011-06-20T10:10:00.000-07:00</s:key> <s:key name="eventAvailableCount">0</s:key> <s:key name="eventCount">1363</s:key> <s:key name="eventFieldCount">0</s:key> <s:key name="eventIsStreaming">1</s:key> <s:key name="eventIsTruncated">1</s:key> <s:key name="eventSearch">search index=_internal (source=*/metrics.log* OR source=*\\metrics.log*) group=per_sourcetype_thruput </s:key> <s:key name="eventSorting">none</s:key> <s:key name="isDone">1</s:key> <s:key name="isFailed">0</s:key> <s:key name="isFinalized">0</s:key> <s:key name="isPaused">0</s:key> <s:key name="isPreviewEnabled">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isRemoteTimeline">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isSavedSearch">1</s:key> <s:key name="isZombie">0</s:key> <s:key name="keywords">group::per_sourcetype_thruput index::_internal source::*/metrics.log* source::*\metrics.log*</s:key> <s:key name="label">Top five sourcetypes</s:key> <s:key name="latestTime">2011-06-21T10:10:00.000-07:00</s:key> <s:key name="numPreviews">0</s:key> <s:key name="priority">5</s:key> <s:key name="remoteSearch">litsearch 
index=_internal ( source=*/metrics.log* OR source=*\\metrics.log* ) group=per_sourcetype_thruput | addinfo type=count label=prereport_events | fields keepcolorder=t "kb" "prestats_reserved_*" "psrsvd_*" "series" | convert num("kb") | prestats sum(kb) AS "sum(kb)" by series</s:key> <s:key name="reportSearch">chart sum(kb) by series | sort -sum(kb) | head 5</s:key> <s:key name="resultCount">4</s:key> <s:key name="resultIsStreaming">0</s:key> <s:key name="resultPreviewCount">4</s:key> <s:key name="runDuration">0.259000</s:key> <s:key name="scanCount">1363</s:key> <s:key name="searchEarliestTime">1308589800.000000000</s:key> <s:key name="searchLatestTime">1308676200.000000000</s:key> <s:key name="sid">scheduler__nobody__search_VG9wIGZpdmUgc291cmNldHlwZXM_at_1308676200_22702c154383bbe4</s:key> <s:key name="statusBuckets">0</s:key> <s:key name="ttl">489</s:key> <s:key name="performance"> <s:dict> <s:key name="command.addinfo"> <s:dict> <s:key name="duration_secs">0.005</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.chart"> <s:dict> <s:key name="duration_secs">0.003</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">100000</s:key> <s:key name="output_count">4</s:key> </s:dict> </s:key> <s:key name="command.convert"> <s:dict> <s:key name="duration_secs">0.006</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.fields"> <s:dict> <s:key name="duration_secs">0.005</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">4</s:key> <s:key name="output_count">4</s:key> </s:dict> </s:key> <s:key name="command.presort"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">4</s:key> <s:key name="output_count">4</s:key> </s:dict> </s:key> <s:key name="command.prestats"> <s:dict> <s:key name="duration_secs">0.014</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">12</s:key> </s:dict> </s:key> <s:key name="command.search"> <s:dict> <s:key name="duration_secs">0.058</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.search.fieldalias"> <s:dict> <s:key name="duration_secs">0.003</s:key> <s:key name="invocations">3</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.search.filter"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">3</s:key> </s:dict> </s:key> <s:key name="command.search.index"> <s:dict> <s:key name="duration_secs">0.010</s:key> <s:key name="invocations">5</s:key> </s:dict> </s:key> <s:key name="command.search.kv"> <s:dict> <s:key name="duration_secs">0.011</s:key> <s:key name="invocations">3</s:key> </s:dict> </s:key> <s:key name="command.search.lookups"> <s:dict> <s:key name="duration_secs">0.003</s:key> <s:key name="invocations">3</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.search.rawdata"> <s:dict> <s:key 
name="duration_secs">0.034</s:key> <s:key name="invocations">3</s:key> </s:dict> </s:key> <s:key name="command.search.tags"> <s:dict> <s:key name="duration_secs">0.005</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.search.typer"> <s:dict> <s:key name="duration_secs">0.005</s:key> <s:key name="invocations">5</s:key> <s:key name="input_count">1363</s:key> <s:key name="output_count">1363</s:key> </s:dict> </s:key> <s:key name="command.sort"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> <s:key name="input_count">4</s:key> <s:key name="output_count">4</s:key> </s:dict> </s:key> <s:key name="dispatch.createProviderQueue"> <s:dict> <s:key name="duration_secs">0.067</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate"> <s:dict> <s:key name="duration_secs">0.038</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.chart"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.head"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.search"> <s:dict> <s:key name="duration_secs">0.037</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.sort"> <s:dict> <s:key name="duration_secs">0.001</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.fetch"> <s:dict> <s:key name="duration_secs">0.126</s:key> <s:key name="invocations">6</s:key> </s:dict> </s:key> <s:key name="dispatch.stream.local"> <s:dict> <s:key name="duration_secs">0.070</s:key> <s:key name="invocations">5</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="messages"> <s:dict/> </s:key> <s:key name="request"> <s:dict> <s:key name="ui_dispatch_app"></s:key> <s:key name="ui_dispatch_view"></s:key> </s:dict> </s:key> <s:key name="eai:acl"> <s:dict> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="owner">nobody</s:key> <s:key name="modifiable">true</s:key> <s:key name="sharing">global</s:key> <s:key name="app">search</s:key> <s:key name="can_write">true</s:key> </s:dict> </s:key> <s:key name="searchProviders"> <s:list> <s:item>mbp15.splunk.com</s:item> </s:list> </s:key> </s:dict> </content> </entry> . . . elided . . . </feed>
POST
Start a new search and return the search ID (<sid>)
Request parameters
Name | Type | Default | Description |
---|---|---|---|
adhoc_search_level | String | Use one of the following search modes.
[ verbose | fast | smart ] If | |
allow_partial_results | Boolean | true | Indicates whether the search job can proceed to provide partial results if a search peer fails. When set to false , the search job fails if a search peer providing results for the search job fails.
|
auto_cancel | Number | 0 | If specified, the job automatically cancels after this many seconds of inactivity. (0 means never auto-cancel) |
auto_finalize_ec | Number | 0 | Auto-finalize the search after at least this many events are processed.
Specify |
auto_pause | Number | 0 | If specified, the search job pauses after this many seconds of inactivity. (0 means never auto-pause.)
To restart a paused search job, specify unpause as an action to POST search/jobs/{search_id}/control. auto_pause only goes into effect once. Unpausing after auto_pause does not put auto_pause into effect again. |
custom | String | Specify a custom parameter (see example). | |
earliest_time | String | Specify a time string. Sets the earliest (inclusive), respectively, time bounds for the search.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. Compare to | |
enable_lookups | Boolean | true | Indicates whether lookups should be applied to events.
Specifying true (the default) may slow searches significantly depending on the nature of the lookups. |
exec_mode | Enum | normal | Valid values: (blocking | oneshot | normal)
If set to normal, runs an asynchronous search. If set to blocking, returns the sid when the job is complete. If set to oneshot, returns results in the same call. In this case, you can specify the format for the output (for example, json output) using the output_mode parameter as described in GET search/jobs/export. Default format for output is xml. Does not return the search ID. |
force_bundle_replication | Boolean | false | Specifies whether this search should cause (and wait depending on the value of sync_bundle_replication) for bundle synchronization with all search peers. |
id | String | Optional string to specify the search ID (<sid> ). If unspecified, a random ID is generated.
| |
index_earliest | String | Specify a time string. Sets the earliest (inclusive) time bounds for the search, based on the index time bounds.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. Compare to earliest_time. | |
index_latest | String | Specify a time string. Sets the latest (exclusive) time bounds for the search, based on the index time bounds.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. Compare to latest_time. | |
indexedRealtime | Boolean | Indicates whether to use indexed real-time mode for real-time searches. | |
indexedRealtimeOffset | Number | Sets the disk sync delay, in seconds, for indexed real-time search. | |
latest_time | String | Specify a time string. Sets the latest (exclusive) time bounds for the search.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. Compare to index_latest. | |
max_count | Number | 10000 | The number of events that can be accessible in any given status bucket.
Also, in transforming mode, the maximum number of results to store. Specifically, in all calls, offset + count must be less than or equal to max_count. |
max_time | Number | 0 | The number of seconds to run this search before finalizing. Specify 0 to never finalize.
|
namespace | String | The application namespace in which to restrict searches.
The namespace corresponds to the identifier recognized in the /services/apps/local endpoint. | |
now | String | current system time | Specify a time string to set the absolute time used for any relative time specifier in the search. Defaults to the current system time.
You can specify a relative time modifier for this parameter. If you specify a relative time modifier both in this parameter and in the search string, the search string modifier takes precedence. Refer to Time modifiers for search for details on specifying relative time modifiers. |
reduce_freq | Number | 0 | Determines how frequently to run the MapReduce reduce phase on accumulated map values. |
reload_macros | Boolean | true | Specifies whether to reload macro definitions from macros.conf .
Default is true. |
remote_server_list | String | empty list | Comma-separated list of (possibly wildcarded) servers from which raw events should be pulled. This same server list is to be used in subsearches. |
replay_speed | Number greater than 0 | Indicate a real-time search replay speed factor. For example, 1 indicates normal speed. 0.5 indicates half of normal speed, and 2 indicates twice as fast as normal.
Use replay_speed with replay_et and replay_lt relative times to indicate a speed and time range for the replay. For example, replay_speed = 10 replay_et = -d@d replay_lt = -@d specifies a replay at 10x speed, as if the "wall clock" time starts yesterday at midnight and ends when it reaches today at midnight. For more information about using relative time modifiers, see Search time modifiers in the Search reference. | |
replay_et | Time modifier string | Relative "wall clock" start time for the replay. | |
replay_lt | Time modifier string. | Relative end time for the replay clock. The replay stops when clock time reaches this time. | |
required_field_list | String | empty list | [Deprecated] Use rf.
A comma-separated list of required fields that, even if not referenced or used directly by the search, is still included by the events and summary endpoints. Splunk Web uses these fields to prepopulate panels in the Search view. |
reuse_max_seconds_ago | Number | Specifies the number of seconds ago to check when an identical search is started and return the job's search ID instead of starting a new job. | |
rf | String | Adds a required field to the search. There can be multiple rf POST arguments to the search.
These fields, even if not referenced or used directly by the search, are still included by the events and summary endpoints. Splunk Web uses these fields to prepopulate panels in the Search view. Consider using this form of passing the required fields to the search instead of the deprecated required_field_list. If both rf and required_field_list are provided, the union of the two lists is used. | |
rt_blocking | Boolean | false | For a real-time search, indicates if the indexer blocks if the queue for this search is full. |
rt_indexfilter | Boolean | true | For a real-time search, indicates if the indexer prefilters events. |
rt_maxblocksecs | Number | 60 | For a real-time search with rt_blocking set to true, the maximum time to block.
Specify 0 to indicate no limit. |
rt_queue_size | Number | 10000 events | For a real-time search, the queue size (in events) that the indexer should use for this search. |
search required |
String | The search language string to execute, taking results from the local and remote servers. | |
search_listener | String | [Disabled]
Registers a search state listener with the search. Use the format: search_state;results_condition;http_method;uri; For example: search_listener=onResults;true;POST;/servicesNS/admin/search/saved/search/foobar/notify; | |
search_mode | Enum | normal | Valid values: (normal | realtime)
If set to realtime, the search runs over live data. Additionally, if earliest_time and/or latest_time are 'rt' followed by a relative time specifier, a sliding window is used where the time bounds of the window are determined by the relative time specifiers and are continuously updated based on the wall-clock time. |
spawn_process | Boolean | true | This parameter is deprecated and will be removed in a future release. Do not use this parameter. Specifies whether the search should run in a separate spawned process. Default is true. Searches against indexes must run in a separate process. |
status_buckets | Number | 0 | The maximum number of status buckets to generate.
|
sync_bundle_replication | Boolean | Specifies whether this search should wait for bundle replication to complete. | |
time_format | String | %FT%T.%Q%:z | Used to convert a formatted time string from {start,end}_time into UTC seconds. The default value is the ISO-8601 format. |
timeout | Number | 86400 | The number of seconds to keep this search after processing has stopped. |
workload_pool | String | Specifies the new workload pool where the existing running search should be placed. |
Returned values
Name | Description |
---|---|
sid | Search ID |
Application usage
Refer to Creating searches using the REST API for information on using this endpoint and other search endpoints.
The search parameter is a search language string that specifies the search. Often you create a search specifying just the search parameter. Use the other parameters to customize a search to specific needs.
Use the returned (<sid>) in the following endpoints to view and manage the search:
search/jobs/{search_id}: View the status of this search job.
search/jobs/{search_id}/control: Execute job control commands, such as pause, cancel, preview, and others.
search/jobs/{search_id}/events: View a set of untransformed events for the search.
search/jobs/{search_id}/results: View results of the search.
search/jobs/{search_id}/results_preview: Preview results of a search that has not completed.
search/jobs/{search_id}/search.log: View the log file generated by the search.
search/jobs/{search_id}/summary: View field summary information.
search/jobs/{search_id}/timeline: View event distribution over time.
You can also use the custom attribute to create custom job properties (see example).
For more information on searches, see the Splunk Search Reference.
Example request and response
Request
- Basic example:
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/jobs --data-urlencode search="search index=_internal source=*/metrics.log" -d id=mysearch_02151949 -d max_count=50000 -d status_buckets=300
- Create custom property example:
curl -u admin:changeme -k https://localhost:8089/services/search/jobs -d search="search *" -d custom.foobar="myCustomPropA" -d custom.foobaz="myCustomPropB"
Use the search/jobs GET request to view the custom properties.
- Create indexed real-time search with a 300 second disk sync delay example:
curl -k -u admin:changeme https://localhost:8089/services/search/jobs -d search="search index=_* *" -d search_mode="realtime" -d indexedRealtime="1" -d indexedRealtimeOffset="300"
Response
<response><sid>mysearch_02151949</sid></response>
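As a sketch of the typical follow-up workflow for the endpoints listed above, assuming the job created in the basic example has completed, you might check the job status and then fetch its results. The credentials and count value are illustrative:
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/results --get -d output_mode=json -d count=10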
search/v2/jobs/export
https://<host>:<mPort>/services/search/v2/jobs/export
Stream search results as they become available.
The POST operation on this endpoint performs a search identical to a POST to search/jobs. For parameter and returned value descriptions, see search/jobs.
The GET operation is not available in the v2 iteration of this endpoint.
POST
Performs a search identical to POST search/jobs. For parameter and returned value descriptions, see the POST parameter descriptions for search/jobs.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
search | String | See the parameters and returned values for search/jobs. | |
auto_cancel | Number | See the parameters and returned values for search/jobs. | |
auto_finalize_ec | Number | See the parameters and returned values for search/jobs. | |
auto_pause | Number | See the parameters and returned values for search/jobs. | |
earliest_time | String | See the parameters and returned values for search/jobs. | |
enable_lookups | Bool | See the parameters and returned values for search/jobs. | |
force_bundle_replication | Bool | See the parameters and returned values for search/jobs. | |
id | String | See the parameters and returned values for search/jobs. | |
index_earliest | String | Specify a time string. Sets the earliest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
index_latest | String | Specify a time string. Sets the latest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
latest_time | String | See the parameters and returned values for search/jobs. | |
max_time | Number | See the parameters and returned values for search/jobs. | |
namespace | String | See the parameters and returned values for search/jobs. | |
now | String | See the parameters and returned values for search/jobs. | |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
reduce_freq | Number | See the parameters and returned values for search/jobs. | |
reload_macros | Bool | See the parameters and returned values for search/jobs. | |
remote_server_list | String | See the parameters and returned values for search/jobs. | |
required_field_list | String | See the parameters and returned values for search/jobs. | |
rf | String | See the parameters and returned values for search/jobs. | |
rt_blocking | Bool | See the parameters and returned values for search/jobs. | |
rt_indexfilter | Bool | See the parameters and returned values for search/jobs. | |
rt_maxblocksecs | Number | See the parameters and returned values for search/jobs. | |
rt_queue_size | Number | See the parameters and returned values for search/jobs. | |
search_listener | String | See the parameters and returned values for search/jobs. | |
search_mode | Enum | See the parameters and returned values for search/jobs. | |
sync_bundle_replication | Bool | See the parameters and returned values for search/jobs. | |
time_format | String | See the parameters and returned values for search/jobs. | |
timeout | Number | See the parameters and returned values for search/jobs. |
Returned values
None
Application usage
Streaming of results is based on the search string.
For non-streaming searches, previews of the final results are available if preview is enabled. If preview is not enabled, use search/jobs with exec_mode=oneshot.
If your search returns a very large result set, consider running it with the search/jobs endpoint instead of the search/jobs/export endpoint, using exec_mode=blocking. That request returns a search ID, which you can use to page through the results and request them from the server under your control. This is a better approach for extremely large result sets that need to be chunked.
Example
The following example runs a saved search and passes a variable to it. In this case, the variable is the host field:
$curl -k -u admin:password https://splunkserver:8089/services/search/v2/jobs/export -d search="savedsearch \ MySavedSearch%20host%3Dwolverine*"
This request runs a saved search named "MySavedSearch", which contains the following search string:
"index=main $host$ | head 100"
search/jobs/export (deprecated)
https://<host>:<mPort>/services/search/jobs/export
Stream search results as they become available.
The GET and POST operations on this endpoint perform a search identical to a POST to search/jobs. For parameter and returned value descriptions, see search/jobs.
This endpoint is deprecated as of Splunk Enterprise 9.0.1. Use the v2 instance of this endpoint instead.
GET
Performs a search identical to POST search/jobs.
Request parameters
See the POST operation on search/jobs for parameter descriptions.
Name | Type | Default | Description |
---|---|---|---|
auto_cancel | Number | See the POST parameter descriptions for search/jobs | |
auto_finalize_ec | Number | See the POST parameter descriptions for search/jobs | |
auto_pause | Number | See the POST parameter descriptions for search/jobs | |
earliest_time | String | See the POST parameter descriptions for search/jobs | |
enable_lookups | Bool | See the POST parameter descriptions for search/jobs | |
force_bundle_replication | Bool | See the POST parameter descriptions for search/jobs | |
id | String | See the POST parameter descriptions for search/jobs | |
index_earliest | String | Specify a time string. Sets the earliest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
index_latest | String | Specify a time string. Sets the latest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
latest_time | String | See the POST parameter descriptions for search/jobs | |
max_time | Number | See the POST parameter descriptions for search/jobs | |
namespace | String | See the POST parameter descriptions for search/jobs | |
now | String | See the POST parameter descriptions for search/jobs | |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
reduce_freq | Number | See the POST parameter descriptions for search/jobs | |
reload_macros | Bool | See the POST parameter descriptions for search/jobs | |
remote_server_list | String | See the POST parameter descriptions for search/jobs | |
required_field_list | String | See the POST parameter descriptions for search/jobs | |
rf | String | See the POST parameter descriptions for search/jobs | |
rt_blocking | Bool | See the POST parameter descriptions for search/jobs | |
rt_indexfilter | Bool | See the POST parameter descriptions for search/jobs | |
rt_maxblocksecs | Number | See the POST parameter descriptions for search/jobs | |
rt_queue_size | Number | See the POST parameter descriptions for search/jobs | |
search required |
String | See the POST parameter descriptions for search/jobs | |
search_listener | String | See the POST parameter descriptions for search/jobs | |
search_mode | Enum | See the POST parameter descriptions for search/jobs | |
sync_bundle_replication | Bool | See the POST parameter descriptions for search/jobs | |
time_format | String | See the POST parameter descriptions for search/jobs | |
timeout | Number | See the POST parameter descriptions for search/jobs |
Returned values
None
Application usage
Performs a search identical to POST search/jobs, except the search streams results as they become available. Streaming of results is based on the search string.
For non-streaming searches, previews of the final results are available if preview is enabled. If preview is not enabled, use the search/jobs endpoint with exec_mode=oneshot to retrieve results.
If the result set returned by a non-streaming search is significantly large, use the search/jobs endpoint with exec_mode=blocking. This approach lets you page through the results and request them from a server under your control.
Example request and response
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/jobs/export -d search="search index%3D_internal | head 1"
XML Response
<results preview='0'> <meta> <fieldOrder> <field>_cd</field> <field>_indextime</field> <field>_raw</field> <field>_serial</field> <field>_si</field> <field>_sourcetype</field> <field>_subsecond</field> <field>_time</field> <field>host</field> <field>index</field> <field>linecount</field> <field>source</field> <field>sourcetype</field> <field>splunk_server</field> </fieldOrder> </meta> <messages> <msg type="DEBUG">base lispy: [ AND index::_internal ]</msg> <msg type="DEBUG">search context: user="admin", app="search", bs-pathname="/Applications/splunk/etc"</msg> <msg type="INFO">Your timerange was substituted based on your search string</msg> </messages> <result offset='0'> <field k='_cd'> <value><text>50:59480</text></value> </field> <field k='_indextime'> <value><text>1333739623</text></value> </field> <field k='_raw'><v xml:space='preserve' trunc='0'>127.0.0.1 - admin [06/Apr/2012:12:13:42.943 -0700] "POST /servicesNS/admin/search/search/jobs/export HTTP/1.1" 200 2063 - - - 317ms</v></field> <field k='_serial'> <value><text>0</text></value> </field> <field k='_si'> <value><text>mbp15.splunk.com</text></value> <value><text>_internal</text></value> </field> <field k='_sourcetype'> <value><text>splunkd_access</text></value> </field> <field k='_subsecond'> <value><text>.943</text></value> </field> <field k='_time'> <value><text>2012-04-06 12:13:42.943 PDT</text></value> </field> <field k='host'> <value><text>mbp15.splunk.com</text></value> </field> <field k='index'> <value h='1'><text>_internal</text></value> </field> <field k='linecount'> <value><text>1</text></value> </field> <field k='source'> <value><text>/Applications/splunk/var/log/splunk/splunkd_access.log</text></value> </field> <field k='sourcetype'> <value><text>splunkd_access</text></value> </field> <field k='splunk_server'> <value><text>mbp15.splunk.com</text></value> </field> </result> </results>
POST
Performs a search identical to POST search/jobs. For parameter and returned value descriptions, see the POST parameter descriptions for search/jobs.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
search | String | See the parameters and returned values for search/jobs. | |
auto_cancel | Number | See the parameters and returned values for search/jobs. | |
auto_finalize_ec | Number | See the parameters and returned values for search/jobs. | |
auto_pause | Number | See the parameters and returned values for search/jobs. | |
earliest_time | String | See the parameters and returned values for search/jobs. | |
enable_lookups | Bool | See the parameters and returned values for search/jobs. | |
force_bundle_replication | Bool | See the parameters and returned values for search/jobs. | |
id | String | See the parameters and returned values for search/jobs. | |
index_earliest | String | Specify a time string. Sets the earliest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
index_latest | String | Specify a time string. Sets the latest (inclusive) time bounds for the search, based on the index time.
The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now) or a formatted time string. Refer to Time modifiers for search for information and examples of specifying a time string. | |
latest_time | String | See the parameters and returned values for search/jobs. | |
max_time | Number | See the parameters and returned values for search/jobs. | |
namespace | String | See the parameters and returned values for search/jobs. | |
now | String | See the parameters and returned values for search/jobs. | |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
reduce_freq | Number | See the parameters and returned values for search/jobs. | |
reload_macros | Bool | See the parameters and returned values for search/jobs. | |
remote_server_list | String | See the parameters and returned values for search/jobs. | |
required_field_list | String | See the parameters and returned values for search/jobs. | |
rf | String | See the parameters and returned values for search/jobs. | |
rt_blocking | Bool | See the parameters and returned values for search/jobs. | |
rt_indexfilter | Bool | See the parameters and returned values for search/jobs. | |
rt_maxblocksecs | Number | See the parameters and returned values for search/jobs. | |
rt_queue_size | Number | See the parameters and returned values for search/jobs. | |
search_listener | String | See the parameters and returned values for search/jobs. | |
search_mode | Enum | See the parameters and returned values for search/jobs. | |
sync_bundle_replication | Bool | See the parameters and returned values for search/jobs. | |
time_format | String | See the parameters and returned values for search/jobs. | |
timeout | Number | See the parameters and returned values for search/jobs. |
Returned values
None
Application usage
Streaming of results is based on the search string.
For non-streaming searches, previews of the final results are available if preview is enabled. If preview is not enabled, use search/jobs with exec_mode=oneshot.
If your search returns a very large result set, consider running it with the search/jobs endpoint (not search/jobs/export), which takes a POST with the same parameters, using exec_mode=blocking. That request returns a search ID, which you can use to page through the results and request them from the server under your control. This is a better approach for extremely large result sets that need to be chunked.
Example of passing a variable to a query using the REST API:
The following example runs a saved search and passes a variable to it. In this case, the variable is the host field:
$curl -k -u admin:password https://splunkserver:8089/services/search/jobs/export -d search="savedsearch \ MySavedSearch%20host%3Dwolverine*"
(The request runs the saved search "MySavedSearch" with the input variable host=wolverine*.)
The saved search "MySavedSearch" contains the following search string:
"index=main $host$ | head 100"
search/jobs/{search_id}
https://<host>:<mPort>/services/search/jobs/{search_id}
Manage the {search_id} search job.
DELETE
Delete the {search_id} search job.
Request parameters
None
Returned values
None
Application usage
{search_id} is the <sid> field returned from the GET operation for the search/jobs endpoint.
Example request and response
XML Request
curl -k -u admin:pass --request DELETE https://localhost:8089/services/search/jobs/mysearch_02151949
XML Response
<response><messages><msg type='INFO'>Search job cancelled.</msg></messages></response>
GET
Get information about the {search_id} search job.
Request parameters
None
Returned values
None
Application usage
The user ID is implied by the authentication to the call.
Information returned includes the search job properties, such as eventCount (number of events returned), runDuration (time the search took to complete), and others. The parameters to POST search/jobs provide details on search job properties when creating a search. Search job properties are also described in View search job properties in the Search Manual.
The dispatchState property is of particular interest to determine the state of a search, and can contain the following values:
QUEUED, PARSING, RUNNING, FINALIZING, DONE, PAUSE, INTERNAL_CANCEL, USER_CANCEL, BAD_INPUT_CANCEL, QUIT, FAILED
This operation also returns performance information for the search. For more information refer to View search job properties in the Search Manual.
For more information on searches in Splunk, refer to the Splunk Search Reference.
POST /search/jobs returns a <sid> for a search. You can also get a search ID from the <sid> field returned from GET search/jobs.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949
XML Response
<entry xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>search index</title> <id>https://localhost:8089/services/search/jobs/mysearch_02151949</id> <updated>2011-07-07T20:49:58.000-07:00</updated> <link href="/services/search/jobs/mysearch_02151949" rel="alternate"/> <published>2011-07-07T20:49:57.000-07:00</published> <link href="/services/search/jobs/mysearch_02151949/search.log" rel="search.log"/> <link href="/services/search/jobs/mysearch_02151949/events" rel="events"/> <link href="/services/search/jobs/mysearch_02151949/results" rel="results"/> <link href="/services/search/jobs/mysearch_02151949/results_preview" rel="results_preview"/> <link href="/services/search/jobs/mysearch_02151949/timeline" rel="timeline"/> <link href="/services/search/jobs/mysearch_02151949/summary" rel="summary"/> <link href="/services/search/jobs/mysearch_02151949/control" rel="control"/> <author> <name>admin</name> </author> <content type="text/xml"> <s:dict> <s:key name="cursorTime">1969-12-31T16:00:00.000-08:00</s:key> <s:key name="delegate"></s:key> <s:key name="diskUsage">2174976</s:key> <s:key name="dispatchState">DONE</s:key> <s:key name="doneProgress">1.00000</s:key> <s:key name="dropCount">0</s:key> <s:key name="earliestTime">2011-07-07T11:18:08.000-07:00</s:key> <s:key name="eventAvailableCount">287</s:key> <s:key name="eventCount">287</s:key> <s:key name="eventFieldCount">6</s:key> <s:key name="eventIsStreaming">1</s:key> <s:key name="eventIsTruncated">0</s:key> <s:key name="eventSearch">search index</s:key> <s:key name="eventSorting">desc</s:key> <s:key name="isDone">1</s:key> <s:key name="isFailed">0</s:key> <s:key name="isFinalized">0</s:key> <s:key name="isPaused">0</s:key> <s:key name="isPreviewEnabled">0</s:key> <s:key name="isRealTimeSearch">0</s:key> <s:key name="isRemoteTimeline">0</s:key> <s:key name="isSaved">0</s:key> <s:key name="isSavedSearch">0</s:key> <s:key name="isZombie">0</s:key> <s:key name="keywords">index</s:key> <s:key name="label"></s:key> <s:key name="latestTime">1969-12-31T16:00:00.000-08:00</s:key> <s:key name="numPreviews">0</s:key> <s:key name="priority">5</s:key> <s:key name="remoteSearch">litsearch index | fields keepcolorder=t "host" "index" "linecount" "source" "sourcetype" "splunk_server"</s:key> <s:key name="reportSearch"></s:key> <s:key name="resultCount">287</s:key> <s:key name="resultIsStreaming">1</s:key> <s:key name="resultPreviewCount">287</s:key> <s:key name="runDuration">1.004000</s:key> <s:key name="scanCount">287</s:key> <s:key name="sid">mysearch_02151949</s:key> <s:key name="statusBuckets">0</s:key> <s:key name="ttl">516</s:key> <s:key name="performance"> <s:dict> <s:key name="command.fields"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">4</s:key> <s:key name="input_count">287</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="command.search"> <s:dict> <s:key name="duration_secs">0.089</s:key> <s:key name="invocations">4</s:key> <s:key name="input_count">0</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="command.search.fieldalias"> <s:dict> <s:key name="duration_secs">0.002</s:key> <s:key name="invocations">2</s:key> <s:key name="input_count">287</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="command.search.index"> <s:dict> <s:key name="duration_secs">0.005</s:key> <s:key name="invocations">4</s:key> </s:dict> </s:key> <s:key 
name="command.search.kv"> <s:dict> <s:key name="duration_secs">0.002</s:key> <s:key name="invocations">2</s:key> </s:dict> </s:key> <s:key name="command.search.lookups"> <s:dict> <s:key name="duration_secs">0.002</s:key> <s:key name="invocations">2</s:key> <s:key name="input_count">287</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="command.search.rawdata"> <s:dict> <s:key name="duration_secs">0.083</s:key> <s:key name="invocations">2</s:key> </s:dict> </s:key> <s:key name="command.search.tags"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">4</s:key> <s:key name="input_count">287</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="command.search.typer"> <s:dict> <s:key name="duration_secs">0.004</s:key> <s:key name="invocations">4</s:key> <s:key name="input_count">287</s:key> <s:key name="output_count">287</s:key> </s:dict> </s:key> <s:key name="dispatch.createProviderQueue"> <s:dict> <s:key name="duration_secs">0.059</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate"> <s:dict> <s:key name="duration_secs">0.037</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.evaluate.search"> <s:dict> <s:key name="duration_secs">0.036</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.fetch"> <s:dict> <s:key name="duration_secs">0.092</s:key> <s:key name="invocations">5</s:key> </s:dict> </s:key> <s:key name="dispatch.readEventsInResults"> <s:dict> <s:key name="duration_secs">0.110</s:key> <s:key name="invocations">1</s:key> </s:dict> </s:key> <s:key name="dispatch.stream.local"> <s:dict> <s:key name="duration_secs">0.089</s:key> <s:key name="invocations">4</s:key> </s:dict> </s:key> <s:key name="dispatch.timeline"> <s:dict> <s:key name="duration_secs">0.359</s:key> <s:key name="invocations">5</s:key> </s:dict> </s:key> </s:dict> </s:key> <s:key name="messages"> <s:dict/> </s:key> <s:key name="request"> <s:dict> <s:key name="id">mysearch_02151949</s:key> <s:key name="search">search index</s:key> </s:dict> </s:key> <s:key name="eai:acl"> <s:dict> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="owner">admin</s:key> <s:key name="modifiable">true</s:key> <s:key name="sharing">global</s:key> <s:key name="app">search</s:key> <s:key name="can_write">true</s:key> </s:dict> </s:key> <s:key name="searchProviders"> <s:list> <s:item>mbp15.splunk.com</s:item> </s:list> </s:key> </s:dict> </content> </entry>
POST
Update the {search_id} search job.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
custom.* required |
String | Specify custom job properties for the specified search job. |
Returned values
None
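A minimal sketch of updating custom job properties on an existing job, using the search ID from the earlier examples. The property name and value are illustrative:
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949 -d custom.reviewer="myCustomValue"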
search/jobs/{search_id}/control
https://<host>:<mPort>/services/search/jobs/{search_id}/control
Run a job control command for the {search_id} search.
POST
Run a job control command for the {search_id} search.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
action required |
Enum | Valid values: (pause | unpause | finalize | cancel | touch | setttl | setpriority | enablepreview | disablepreview | setworkloadpool | save | unsave)
The control action to execute. pause: Suspends the execution of the current search. unpause: Resumes the execution of the current search, if paused. finalize: Stops the search and provides intermediate results to the /results endpoint. cancel: Stops the current search and deletes the result cache. touch: Extends the expiration time of the search to now + ttl. setttl: Changes the ttl of the search. Arguments: ttl=<number>. setpriority: Sets the priority of the search process. Arguments: priority=<0-10>. enablepreview: Enables preview generation (may slow the search considerably). disablepreview: Disables preview generation. setworkloadpool: Moves a running search to a new workload pool. Arguments: workload_pool=<string>. Specifies the new workload pool. Requires the edit_workload_pools capability. save: Saves the search job, storing search artifacts on disk for 7 days. Add or edit the default_save_ttl value in limits.conf to override the default value of 7 days. unsave: Disables any action performed by save. |
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/control -d action=pause
XML Response
<response><messages><msg type='INFO'>Search job paused.</msg></messages></response>
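For control actions that take arguments, such as setttl, pass the argument in the same request as the action. A sketch with an illustrative ttl value:
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/control -d action=setttl -d ttl=600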
search/v2/jobs/{search_id}/events
https://<host>:<mPort>/services/search/v2/jobs/{search_id}/events
Access {search_id} search events.
The GET operation does not include the search parameter in the v2 iteration of this endpoint. To use the search parameter, use the POST operation instead.
GET
Get {search_id} search events.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned. Default value is 100 .
|
earliest_time | String | A time string representing the earliest (inclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | * |
[Deprecated] Use f.
A comma-separated list of the fields to return for the event set. |
latest_time | String | A time string representing the latest (exclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
max_lines | Number | 0 | The maximum lines that any single event _raw field should contain.
Specify 0 to specify no limit. |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
output_time_format | String | time_format |
Formats a UTC time. Defaults to what is specified in time_format .
|
segmentation | String | raw | The type of segmentation to perform on the data. This includes an option to perform k/v segmentation. |
time_format | String | %m/%d/%Y:%H:%M:%S | Expression to convert a formatted time string from {start,end}_time into UTC seconds. |
truncation_mode | Enum | abstract | Valid values: (abstract | truncate)
Specifies how "max_lines" should be achieved. |
Returned values
None
Application usage
These events are the data from the search pipeline before the first "transforming" search command. This is the primary method for a client to fetch a set of UNTRANSFORMED events for the search job.
This endpoint is only valid if the status_buckets > 0 or the search has no transforming commands.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/1312313809.20/events --get -d f=arch -d f=build -d f=connectionType -d r -d count=3
XML Response
<results preview='0'> <meta> <fieldOrder> <field>arch</field> <field>build</field> <field>connectionType</field> <field>date_hour</field> </fieldOrder> </meta> <result offset='0'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='1'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='2'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> </results>
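Because this endpoint requires status_buckets > 0 (or a search with no transforming commands), a sketch of creating a job that can serve events and then fetching them. The search string, search ID, and counts are illustrative:
curl -k -u admin:pass https://localhost:8089/services/search/jobs -d status_buckets=300 -d id=myeventsearch --data-urlencode search="search index=_internal | head 100"
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/myeventsearch/events --get -d output_mode=json -d count=10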
POST
Access {search_id} search events.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned. Default value is 100 .
|
earliest_time | String | A time string representing the earliest (inclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | * |
[Deprecated] Use f.
A comma-separated list of the fields to return for the event set. |
latest_time | String | A time string representing the latest (exclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
max_lines | Number | 0 | The maximum lines that any single event _raw field should contain.
Specify 0 to specify no limit. |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
output_time_format | String | time_format |
Formats a UTC time. Defaults to what is specified in time_format .
|
search | String | The post processing search to apply to results. Can be any valid search language string. Only usable from POST operations. | |
segmentation | String | raw | The type of segmentation to perform on the data. This includes an option to perform k/v segmentation. |
time_format | String | %m/%d/%Y:%H:%M:%S | Expression to convert a formatted time string from {start,end}_time into UTC seconds. |
truncation_mode | Enum | abstract | Valid values: (abstract | truncate)
Specifies how "max_lines" should be achieved. |
Returned values
None
Application usage
These events are the data from the search pipeline before the first "transforming" search command. This is the primary method for a client to fetch a set of UNTRANSFORMED events for the search job.
This endpoint is only valid if the status_buckets > 0 or the search has no transforming commands.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/1312313809.20/events -d f=arch -d f=build -d f=connectionType -d r -d count=3
XML Response
<results preview='0'> <meta> <fieldOrder> <field>arch</field> <field>build</field> <field>connectionType</field> <field>date_hour</field> </fieldOrder> </meta> <result offset='0'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='1'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='2'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> </results>
search/jobs/{search_id}/events (deprecated)
https://<host>:<mPort>/services/search/jobs/{search_id}/events
Get {search_id} search events.
This endpoint is deprecated as of Splunk Enterprise 9.0.1. Use the v2 instance of this endpoint instead.
GET
Access {search_id} search events.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned. Default value is 100 .
|
earliest_time | String | A time string representing the earliest (inclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | * |
[Deprecated] Use f.
A comma-separated list of the fields to return for the event set. |
latest_time | String | A time string representing the latest (exclusive) time bounds for the results to be returned. If not specified, the range applies to all results found. | |
max_lines | Number | 0 | The maximum lines that any single event _raw field should contain.
Specify 0 to specify no limit. |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
output_time_format | String | time_format |
Formats a UTC time. Defaults to what is specified in time_format .
|
search | String | The post processing search to apply to results. Can be any valid search language string. | |
segmentation | String | raw | The type of segmentation to perform on the data. This includes an option to perform k/v segmentation. |
time_format | String | %m/%d/%Y:%H:%M:%S | Expression to convert a formatted time string from {start,end}_time into UTC seconds. |
truncation_mode | Enum | abstract | Valid values: (abstract | truncate)
Specifies how "max_lines" should be achieved. |
Returned values
None
Application usage
These events are the data from the search pipeline before the first "transforming" search command. This is the primary method for a client to fetch a set of UNTRANSFORMED events for the search job.
This endpoint is only valid if the status_buckets > 0 or the search has no transforming commands.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/1312313809.20/events --get -d f=arch -d f=build -d f=connectionType -d r -d count=3
XML Response
<results preview='0'> <meta> <fieldOrder> <field>arch</field> <field>build</field> <field>connectionType</field> <field>date_hour</field> </fieldOrder> </meta> <result offset='0'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='1'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> <result offset='2'> <field k='arch'> <value><text>i686</text></value> </field> <field k='build'> <value><text>98164</text></value> </field> <field k='connectionType'> <value><text>cooked</text></value> </field> <field k='date_hour'> <value><text>19</text></value> </field> </result> </results>
search/v2/jobs/{search_id}/results
https://<host>:<mPort>/services/search/v2/jobs/{search_id}/results
Access {search_id} search results.
The GET operation does not include the search parameter in the v2 iteration of this endpoint. To use the search parameter, use the POST operation instead.
GET
Get {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned.
|
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
Specify a comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). Offsets in the results are always absolute and never negative. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
Returned values
None
Application usage
This is the table that exists after all processing from the search pipeline has completed.
This is the primary method for a client to fetch a set of TRANSFORMED events. If the dispatched search does not include a transforming command, the effect is the same as get_events, however with fewer options.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results --get -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
POST
Access {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned.
|
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
Specify a comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). Offsets in the results are always absolute and never negative. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
search | String | The post processing search to apply to results. Can be any valid search language string. Only usable from POST operations. |
Returned values
None
Application usage
This is the table that exists after all processing from the search pipeline has completed.
This is the primary method for a client to fetch a set of TRANSFORMED events. If the dispatched search does not include a transforming command, the effect is the same as get_events, however with fewer options.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
search/jobs/{search_id}/results (deprecated)
https://<host>:<mPort>/services/search/jobs/{search_id}/results
Get {search_id} search results.
This endpoint is deprecated as of Splunk Enterprise 9.0.1. Use the v2 instance of this endpoint instead.
GET
Get {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return. If value is set to 0 , then all available results are returned.
|
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
Specify a comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. In 4.1+, negative offsets are allowed and are added to count to compute the absolute offset (for example, offset=-1 is the last available offset). Offsets in the results are always absolute and never negative. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
search | String | The post processing search to apply to results. Can be any valid search language string. |
Returned values
None
Application usage
This is the table that exists after all processing from the search pipeline has completed.
This is the primary method for a client to fetch a set of TRANSFORMED events. If the dispatched search does not include a transforming command, the effect is the same as get_events, however with fewer options.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/results --get -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
search/v2/jobs/{search_id}/results_preview
https://<host>:<mPort>/services/search/v2/jobs/{search_id}/results_preview
Preview {search_id} search results.
The GET operation does not include the search parameter in the v2 iteration of this endpoint. To use the search parameter, use the POST operation instead.
GET
Preview {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return.
If value is set to |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
A comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
Returned values
None
Application usage
Returns the intermediate preview results of the search specified by {search_id}. When the job is complete, this gives the same response as /search/jobs/{search_id}/results. Preview is enabled for real-time searches and for searches where status_buckets > 0.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results_preview --get -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
POST
Access a preview of {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return.
If value is set to |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
A comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
search | String | The post processing search to apply to results. Can be any valid search language string. Only usable from POST operations. |
Returned values
None
Application usage
Returns the intermediate preview results of the search specified by {search_id}. When the job is complete, this gives the same response as /search/jobs/{search_id}/results. Preview is enabled for real-time searches and for searches where status_buckets > 0.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/v2/jobs/mysearch_02151949/results_preview -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
search/jobs/{search_id}/results_preview (deprecated)
https://<host>:<mPort>/services/search/jobs/{search_id}/results_preview
Preview {search_id} search results.
This endpoint is deprecated as of Splunk Enterprise 9.0.1. Use the v2 instance of this endpoint instead.
GET
Preview {search_id} search results.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
add_summary_to_metadata | Boolean | false | Set the value to "true" to include field summary statistics in the response. |
count | Number | 100 | The maximum number of results to return.
If value is set to |
f | String | A field to return for the event set.
You can pass multiple f arguments if multiple fields are required. | |
field_list | String | [Deprecated] Use f.
A comma-separated list of the fields to return for the event set. | |
offset | Number | 0 | The first result (inclusive) from which to begin returning data.
This value is 0-indexed. Default value is 0. |
output_mode | Enum | xml | Valid values: (atom | csv | json | json_cols | json_rows | raw | xml)
Specifies the format for the returned output. |
search | String | The post processing search to apply to results. Can be any valid search language string. |
Returned values
None
Application usage
Returns the intermediate preview results of the search specified by {search_id}. When the job is complete, this gives the same response as /search/jobs/{search_id}/results. Preview is enabled for real-time searches and for searches where status_buckets > 0.
Example request and response
JSON request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/results_preview --get -d f=index -d f=source -d f=sourcetype -d count=3 -d output_mode=json
JSON response
{ "init_offset" : 0, "messages" : [ { "text" : "base lispy: [ AND index::_internal source::*/metrics.log ]", "type" : "DEBUG" }, { "text" : "search context: user=\"admin\", app=\"search\", bs-pathname=\"/Applications/splunk/etc\"", "type" : "DEBUG" } ], "preview" : false, "results" : [ { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" }, { "index" : "_internal", "source" : "/Applications/splunk/var/log/splunk/metrics.log", "sourcetype" : "splunkd" } ] }
search/jobs/{search_id}/search.log
https://<host>:<mPort>/services/search/jobs/{search_id}/search.log
Get the {search_id} search log.
GET
Get the {search_id} search log.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
attachment | Boolean | false | If true, returns search.log as an attachment. Otherwise, streams search.log. |
Returned values
None
Example request and response
Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mysearch_02151949/search.log
Response
07-07-2011 21:36:22.066 INFO ApplicationManager - Found application directory: /Applications/splunk4.3/etc/apps/user-prefs 07-07-2011 21:36:22.066 INFO ApplicationManager - Initialized at least 12 applications: /Applications/splunk4.3/etc/apps 07-07-2011 21:36:22.066 INFO ApplicationManager - Found 5 application(s) that might have global exports 07-07-2011 21:36:22.073 INFO dispatchRunner - initing LicenseMgr in search process: nonPro=0 07-07-2011 21:36:22.074 INFO LicenseMgr - Initing LicenseMgr 07-07-2011 21:36:22.075 INFO ServerConfig - My GUID is "1F3A34AE-75DA-4680-B184-5BF309843919". 07-07-2011 21:36:22.075 INFO ServerConfig - My hostname is "ombroso-mbp15.local". 07-07-2011 21:36:22.076 INFO SSLCommon - added zlib compression 07-07-2011 21:36:22.077 INFO ServerConfig - Default output queue for file-based input: parsingQueue. 07-07-2011 21:36:22.077 INFO LMConfig - serverName=mbp15.splunk.com guid=1F3A34AE-75DA-4680-B184-5BF309843919 07-07-2011 21:36:22.077 INFO LMConfig - connection_timeout=30 07-07-2011 21:36:22.077 INFO LMConfig - send_timeout=30 07-07-2011 21:36:22.077 INFO LMConfig - receive_timeout=30 . . . elided . . .
search/jobs/{search_id}/summary
https://<host>:<mPort>/services/search/jobs/{search_id}/summary
Get the getFieldsAndStats output of the events to-date, for the search_id search.
GET
Get the getFieldsAndStats output of the events to-date, for the search_id search.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
earliest_time | String | | Time string representing the earliest (inclusive) time bound for the search. The time string can be either a UTC time (with fractional seconds), a relative time specifier (to now), or a formatted time string. (Also see the comment for the search_mode variable.) |
f | String | | A field to return for the event set. You can pass multiple f arguments if multiple fields are required. |
field_list | String | | [Deprecated] Use f. A comma-separated list of the fields to return for the event set. |
histogram | Boolean | false | Indicates whether to add histogram data to the summary output. |
latest_time | String | | Time string representing the latest (exclusive) time bound for the search. |
min_freq | Number | 0 | For each key, the fraction of results this key must occur in to be displayed. Express the fraction as a number between 0 and 1. |
output_time_format | String | time_format | Formats a UTC time. |
search | String | Empty string | Specifies a substring that all returned events should contain either in one of their values or tags. |
time_format | String | %m/%d/%Y:%H:%M:%S | Expression to convert a formatted time string from {start,end}_time into UTC seconds. |
top_count | Number | 10 | For each key, specifies how many of the most frequent items to return. |
Returned values
None
Application usage
This endpoint is only valid when status_buckets > 0. To guarantee a set of fields in the summary, when creating the search, use the required_fields_list or rf parameters.
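For example, the following sketch (the search string is illustrative, and it assumes rf can be passed once per field, like f on other endpoints) dispatches a job whose summary is guaranteed to include the source, sourcetype, and host fields:
# Illustrative only: create the job with status_buckets > 0 and required fields before calling .../summary
curl -k -u admin:pass https://localhost:8089/services/search/jobs -d search="search index=_internal" -d status_buckets=300 -d rf=source -d rf=sourcetype -d rf=host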
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mytestsid/summary --get -d f=source -d f=sourcetype -d f=host -d top_count=5
XML Response
<?xml version='1.0' encoding='UTF-8'?> <summary earliest_time='1969-12-31T16:00:00.000-08:00' latest_time='1969-12-31T16:00:00.464-08:00' duration='0' c='150375'> <field k='host' c='150375' nc='0' dc='1' exact='1'> <modes> <value c='150375' exact='1'><text>tiny</text></value> </modes> </field> <field k='source' c='150375' nc='0' dc='13' exact='1'> <modes> <value c='136107' exact='1'><text>/mnt/scsi/steveyz/splunksi/var/log/splunk/metrics.log</text></value> <value c='6682' exact='1'><text>/mnt/scsi/steveyz/splunksi/var/log/splunk/splunkd_access.log</text></value> <value c='4656' exact='1'><text>/mnt/scsi/steveyz/splunksi/var/log/splunk/scheduler.log</text></value> <value c='1714' exact='1'><text>/mnt/scsi/steveyz/splunksi/var/log/splunk/web_access.log</text></value> <value c='937' exact='1'><text>/mnt/scsi/steveyz/splunksi/var/log/splunk/splunkd.log</text></value> </modes> </field> <field k='sourcetype' c='150375' nc='0' dc='10' exact='1'> <modes> <value c='137053' exact='1'><text>splunkd</text></value> <value c='6682' exact='1'><text>splunkd_access</text></value> <value c='4656' exact='1'><text>scheduler</text></value> <value c='1714' exact='1'><text>splunk_web_access</text></value> <value c='193' exact='1'><text>splunk_web_service</text></value> </modes> </field> </summary>
search/jobs/{search_id}/timeline
https://<host>:<mPort>/services/search/jobs/{search_id}/timeline
Get event distribution over time of the untransformed events read to-date, for the search_id search.
GET
Get event distribution over time of the untransformed events read to-date, for the search_id search.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
output_time_format | String | time_format | Formats a UTC time. |
time_format | String | %m/%d/%Y:%H:%M:%S | Expression to convert a formatted time string from {start,end}_time into UTC seconds. |
Returned values
None
The output from this endpoint provides values for the following fields:
Field | Description |
---|---|
c | Event count |
a | Available event count. Not all events in a bucket are retrievable; generally capped at 10000. |
t | Time in epoch seconds |
d | Bucket size (time) |
f | Indicates if the search finished scanning events from the time range of this bucket. |
etz | Timezone offset, in seconds, for the earliest time of this bucket. etz and ltz differ if the buckets are months or days and a DST change falls in the middle. |
ltz | Timezone offset, in seconds, for the latest time of this bucket. |
Application usage
This endpoint is only valid when status_buckets > 0. To guarantee a set of fields in the summary, when creating the search, use the required_fields_list or rf parameters.
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/services/search/jobs/mytestsid/timeline --get -d time_format="%c"
XML Response
<timeline c='150397' cursor='1312308000'> <bucket c='7741' a='7741' t='1312308000.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 11:00:00 2011</bucket> <bucket c='7894' a='7894' t='1312311600.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 12:00:00 2011</bucket> <bucket c='7406' a='7406' t='1312315200.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 13:00:00 2011</bucket> <bucket c='6097' a='6097' t='1312318800.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 14:00:00 2011</bucket> <bucket c='6072' a='6072' t='1312322400.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 15:00:00 2011</bucket> <bucket c='6002' a='6002' t='1312326000.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 16:00:00 2011</bucket> <bucket c='6004' a='6004' t='1312329600.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 17:00:00 2011</bucket> <bucket c='5994' a='5994' t='1312333200.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 18:00:00 2011</bucket> <bucket c='6037' a='6037' t='1312336800.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 19:00:00 2011</bucket> <bucket c='6021' a='6021' t='1312340400.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 20:00:00 2011</bucket> <bucket c='6051' a='6051' t='1312344000.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 21:00:00 2011</bucket> <bucket c='6006' a='6006' t='1312347600.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 22:00:00 2011</bucket> <bucket c='6041' a='6041' t='1312351200.000' d='3600' f='1' etz='-25200' ltz='-25200'>Tue Aug 2 23:00:00 2011</bucket> <bucket c='5993' a='5993' t='1312354800.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 00:00:00 2011</bucket> <bucket c='6040' a='6040' t='1312358400.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 01:00:00 2011</bucket> <bucket c='5993' a='5993' t='1312362000.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 02:00:00 2011</bucket> <bucket c='6061' a='6061' t='1312365600.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 03:00:00 2011</bucket> <bucket c='5995' a='5995' t='1312369200.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 04:00:00 2011</bucket> <bucket c='5988' a='5988' t='1312372800.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 05:00:00 2011</bucket> <bucket c='6042' a='6042' t='1312376400.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 06:00:00 2011</bucket> <bucket c='5998' a='5998' t='1312380000.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 07:00:00 2011</bucket> <bucket c='6055' a='6055' t='1312383600.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 08:00:00 2011</bucket> <bucket c='5997' a='5997' t='1312387200.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 09:00:00 2011</bucket> <bucket c='5994' a='5994' t='1312390800.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 10:00:00 2011</bucket> <bucket c='875' a='875' t='1312394400.000' d='3600' f='1' etz='-25200' ltz='-25200'>Wed Aug 3 11:00:00 2011</bucket> </timeline>
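As a worked interpretation of the response above (using GNU date, which is an assumption and not part of the endpoint), the first bucket has t='1312308000.000' and d='3600', so it covers one hour starting at the returned epoch time:
# Convert the bucket start (t, epoch seconds) to a readable timestamp; d is the bucket width in seconds
date -u -d @1312308000               # Tue Aug  2 18:00:00 UTC 2011 (11:00:00 PDT, matching the bucket label and etz=-25200)
date -u -d @$((1312308000 + 3600))   # Tue Aug  2 19:00:00 UTC 2011 (end of the bucket)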
search/v2/parser
https://<host>:<mPort>/services/search/v2/parser
Access search language parsing.
The GET operation is not available in the v2 instance of this endpoint.
POST
Parses Splunk search language and returns semantic map.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
enable_lookups | Boolean | false | If true, reverse lookups are done to expand the search expression. |
output_mode | String | xml | Specify output formatting. Select from either json or xml. |
parse_only | Boolean | false | If true, disables expansion of the search resulting from evaluation of subsearches, time term expansion, lookups, tags, event types, and sourcetype aliases. |
q (required) | String | | The search string to parse. |
reload_macros | Boolean | true | If true, reload macro definitions from macros.conf. |
Returned values
None
Example request and response
JSON Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/v2/parser -d output_mode=json -d q="search index=os sourcetype=cpu"
JSON Response
{ "remoteSearch": "litsearch | fields keepcolorder=t \"host\" \"index\" \"linecount\" \"source\" \"sourcetype\" \"splunk_server\"", "remoteTimeOrdered": true, "eventsSearch": "search index=os sourcetype=cpu", "eventsTimeOrdered": true, "eventsStreaming": true, "reportsSearch": "", "commands": [ { "command": "search", "rawargs": "", "pipeline": "streaming", "args": { "search": [""], } "isGenerating": true, "streamType": "SP_STREAM", }, ] }
search/parser (deprecated)
https://<host>:<mPort>/services/search/parser
Get search language parsing.
This endpoint is deprecated as of Splunk Enterprise 9.0.1. Use the v2 instance of this endpoint instead.
GET
Parses Splunk search language and returns semantic map.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
enable_lookups | Boolean | false | If true, reverse lookups are done to expand the search expression. |
output_mode | String | xml | Specify output formatting. Select from either json or xml. |
parse_only | Boolean | false | If true, disables expansion of the search resulting from evaluation of subsearches, time term expansion, lookups, tags, event types, and sourcetype aliases. |
q (required) | String | | The search string to parse. |
reload_macros | Boolean | true | If true, reload macro definitions from macros.conf. |
Returned values
None
Example request and response
JSON Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/parser --get -d output_mode=json -d q="search index=os sourcetype=cpu"
JSON Response
{ "remoteSearch": "litsearch | fields keepcolorder=t \"host\" \"index\" \"linecount\" \"source\" \"sourcetype\" \"splunk_server\"", "remoteTimeOrdered": true, "eventsSearch": "search ", "eventsTimeOrdered": true, "eventsStreaming": true, "reportsSearch": "", "commands": [ { "command": "search", "rawargs": "", "pipeline": "streaming", "args": { "search": [""], } "isGenerating": true, "streamType": "SP_STREAM", }, ] }
search/scheduler
https://<host>:<mPort>/services/search/scheduler
GET
Get current search scheduler enablement status.
Request parameters
None
Returned values
Name | Type | Value | Description |
---|---|---|---|
saved_searches_disabled | Boolean | 0 or 1 | Indicates whether the search scheduler is disabled. |
Example request and response
curl -k -u admin:pass https://localhost:8089/services/search/scheduler
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduler</title> <id>https://localhost:8089/services/search/scheduler</id> <updated>2015-06-09T13:23:38-07:00</updated> <generator build="6cfc0237739f" version="6.3.0"/> <author> <name>Splunk</name> </author> <link href="/services/search/scheduler/_acl" rel="_acl"/> <opensearch:totalResults>1</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> <entry> <title>scheduler</title> <id>https://localhost:8089/services/search/scheduler/scheduler</id> <updated>2015-06-09T13:23:38-07:00</updated> <link href="/services/search/scheduler/scheduler" rel="alternate"/> <author> <name>system</name> </author> <link href="/services/search/scheduler/scheduler" rel="list"/> <link href="/services/search/scheduler/scheduler" rel="edit"/> <content type="text/xml"> <s:dict> <s:key name="disabled">0</s:key> <s:key name="eai:acl"> <s:dict> <s:key name="app"></s:key> <s:key name="can_list">1</s:key> <s:key name="can_write">1</s:key> <s:key name="modifiable">0</s:key> <s:key name="owner">system</s:key> <s:key name="perms"> <s:dict> <s:key name="read"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> <s:key name="write"> <s:list> <s:item>admin</s:item> <s:item>splunk-system-role</s:item> </s:list> </s:key> </s:dict> </s:key> <s:key name="removable">0</s:key> <s:key name="sharing">system</s:key> </s:dict> </s:key> <s:key name="saved_searches_disabled">0</s:key> </s:dict> </content> </entry> </feed>
search/scheduler/status
https://<host>:<mPort>/services/search/scheduler/status
Enable or disable the search scheduler.
POST
Enable or disable the search scheduler.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
disabled | Boolean | | Indicates whether to disable the search scheduler. 0 enables the search scheduler. 1 disables the search scheduler. |
Returned values
None
Example request and response
XML Request
curl -ku admin:pass -XPOST https://localhost:8089/services/search/scheduler/status -d disabled=1
XML Response
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/"> <title>scheduler</title> <id>https://localhost:8089/services/search/scheduler</id> <updated>2015-06-09T13:40:21-07:00</updated> <generator build="6cfc0237739f" version="6.3.0"/> <author> <name>Splunk</name> </author> <link href="/services/search/scheduler/_acl" rel="_acl"/> <opensearch:totalResults>0</opensearch:totalResults> <opensearch:itemsPerPage>30</opensearch:itemsPerPage> <opensearch:startIndex>0</opensearch:startIndex> <s:messages/> </feed>
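To re-enable the scheduler, send the same request with disabled set to 0, as in this sketch:
curl -ku admin:pass -XPOST https://localhost:8089/services/search/scheduler/status -d disabled=0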
search/timeparser
https://<host>:<mPort>/services/search/timeparser
Get time argument parsing.
GET
Get a lookup table of time arguments to absolute timestamps.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
now | String | | The time to use as current time for relative time identifiers. Can itself be either a relative time (from the real "now" time) or an absolute time in the format specified by time_format. |
output_time_format | String | %FT%T.%Q%:z | Used to format a UTC time. Defaults to the value of time_format. |
time (required) | String | | The time argument to parse. Acceptable inputs are either a relative time identifier or an absolute time. You can pass multiple time arguments by specifying multiple time parameters. |
time_format | String | %FT%T.%Q%:z | The format (strftime) of the absolute time format passed in time. This field is not used if a relative time identifier is provided. For absolute times, the default value is the ISO-8601 format. |
Returned values
None
Example request and response
XML Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/timeparser --get -d time=-12h -d time=-24h
XML Response
<response> <dict> <key name="-12h">2011-07-06T21:54:23.000-07:00</key> <key name="-24h">2011-07-06T09:54:23.000-07:00</key> </dict> </response>
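The now and output_time_format parameters can change the anchor time and output format. The following sketch (parameter values are illustrative) resolves -1d@d against an anchor of seven days ago and, assuming the %s strftime conversion is accepted here, returns the result in epoch seconds:
# Illustrative only: relative anchor via now, epoch-seconds output via output_time_format
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/timeparser --get -d time=-1d@d -d now=-7d -d output_time_format=%s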
search/typeahead
https://<host>:<mPort>/services/search/typeahead
Get search string auto-complete suggestions.
GET
Get a list of words or descriptions for possible auto-complete terms.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
count (required) | Number | | The number of items to return for this term. |
max_servers | Number | 2 | Specifies the maximum number of indexer search peers that are used in addition to the search head for the purpose of providing typeahead functionality. When properly set, max_servers minimizes the workload impact of running typeahead search jobs in an indexer clustering deployment. If your target indexes are evenly distributed among search servers, use the default setting or a similarly low number. For load balancing, the choice of search peers for typeahead searches is random. A setting of 0 means "no limit": All available search peers are used for typeahead search jobs. |
output_mode | String | csv | Specify output formatting. |
prefix (required) | String | | The term for which to return typeahead results. |
Returned values
None
Example request and response
JSON Request
curl -k -u admin:pass https://localhost:8089/servicesNS/admin/search/search/typeahead --get -d count=3 -d prefix=source -d output_mode=json -d max_servers=1
JSON Response
{ "results" : [ { "content" : "source=\"sampledata.zip:./apache1.splunk.com/access_combined.log\"", "count" : 9199, "operator" : false }, { "content" : "source=\"sampledata.zip:./apache2.splunk.com/access_combined.log\"", "count" : 27705, "operator" : false }, { "content" : "source=\"sampledata.zip:./apache3.splunk.com/access_combined.log\"", "count" : 27888, "operator" : false } ] }
Endpoints for SPL2-based applications
This documentation is designed for Splunk application developers and Splunk administrators who are creating or managing SPL2-based applications. For more information see:
- Create SPL2-based apps in the Splunk Developer Guide on dev.splunk.com.
- Splunk Enterprise: Install SPL2-based apps in the Splunk Enterprise Admin Manual.
- Splunk Cloud Platform: Install SPL2-based apps in the Splunk Cloud Platform Admin Manual.
search/spl2-module-dispatch
https://<host>:<mport>/services/search/spl2-module-dispatch
Dispatch a module containing one or more SPL2 statements. For more information about what constitutes an SPL2 statement, see Modules and SPL2 statements in the SPL2 Search Manual.
POST
Start a new search or searches and return a search ID (SID) for each named search statement.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
module | String | | Required. Contains the entire module definition, including imports, within quotation marks. |
namespace | String | | The application namespace in which to restrict searches. Leave blank with only quotation marks. Example: " " |
queryParameters | String | | Required. Contains a list of searches by name. Each search requires its own queryName and associated metadata, and returns a separate SID. |
queryName | String | | Required. The name of each search in the query, followed by a stanza containing its associated metadata. You must separately specify each search in the module, or your request will not return results for that search. |
earliest | String | -24h@h | A time string that specifies the earliest time to retrieve events. Can be a relative or absolute time. For absolute time, specify either UNIX time or UTC in seconds in the ISO-8601 (%FT%T.%Q) format. To learn about time strings in SPL2, see Time modifiers in the SPL2 Search Manual. |
latest | String | now | A time string that specifies the latest time to retrieve events. Can be a relative or absolute time. For absolute time, specify either UNIX time or UTC in seconds in the ISO-8601 (%FT%T.%Q) format. To learn about time strings in SPL2, see Time modifiers in the SPL2 Search Manual. |
timezone | String | Current system timezone | Specifies the timezone for the earliest and latest parameters, if those parameters are in relative time. If those parameters are in absolute time, then this parameter is ignored. To see all supported time zone formats, see Time zones. |
relativeTimeAnchor | String | The time the search job is created | Specifies the anchor time for the earliest and latest parameters, if those parameters are in relative time. |
collectEventSummary | Boolean | False | Specifies whether a search can collect event summary information during the run time. |
collectFieldSummary | Boolean | False | Specifies whether a search can collect fields summary information during the run time. |
collectTimeBuckets | Boolean | False | Specifies whether a search can collect timeline buckets summary information during the run time. |
adhocSearchLevel | String | fast | Specifies the mode in which the search should run. Accepts fast, smart, or verbose. |
Returned values
Name | Description |
---|---|
sid | Search ID. |
name | Name of the search statement. |
Example request
curl -k -u <admin>:<changeme> --location 'https://<host>:8089/services/search/spl2-module-dispatch' \ --data '{ "module": " $search1 = from _audit | stats count()", "namespace": "", "queryParameters": { "search1": { "earliest": "-1h@h", "latest": "now", "timezone": "Etc/UTC", "collectFieldSummary": true }, "search2": { "earliest": "-1h@h", "timezone": "Etc/UTC", "collectEventSummary": true } } }'
Example response
[ { "sid": "1682980180.52", "name": "search1" } { "sid": "1058392067.22", "name": "search2" } ]
services/spl2/modules
https://<host>:<mport>/services/spl2/modules
Access a list of SPL2 modules.
POST
Create a module within the app context.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
module | String | | Required. The name of the module. |
namespace | String | | Required. The namespace of the module. |
definition | String | | Required. The definition of the module. |
Returned values
None.
Example request
curl -k --request POST -u admin:pass 'https://localhost:8089/services/spl2/modules' \ --data-raw '{ "name": "bar", "namespace": "foo", "definition": "$a = | FROM index:terminallookup4704 GROUP BY indexed:source SELECT count();" }'
Example response
See HTTP Status Codes for a list of possible responses.
services/spl2/modules/{resourceName}
https://<host>:<mport>/services/spl2/modules/{resourceName}
Access a specific SPL2 module.
GET
Retrieves information about a specific module.
Request parameters
None.
Example request
curl -k -u admin:pass https://localhost:8089/services/spl2/modules/foo.bar
Example response
{ "namespace": "apps.sample_app_spl2", "name": "_default", "definition": "$a = | FROM _internal | limit 10 ;export $a;", "createdAt": "2023-08-03T00:17:31Z", "createdBy": "admin", "updatedAt": "2023-08-03T00:17:31Z", "updatedBy": "admin" }
PUT
Create or update a specific module within the app context.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
namespace | String | | Required. The namespace of the module. |
definition | String | | Required. The definition of the module. |
Example request
curl -k --request PUT -u admin:pass 'https://localhost:8089/services/spl2/modules/foo.bar' \ --data-raw '{ "name": "bar", "namespace": "foo", "definition": "$a = | FROM index:terminallookup4704 GROUP BY indexed:source SELECT count();" }'
Example response
See HTTP Status Codes for a list of possible responses. In addition to one of the listed responses, this endpoint might also return the following additional status code:
HTTP status code | Description |
---|---|
415 | Unsupported media type. The type must be application/json .
|
DELETE
Delete a specific module within the app context.
Request parameters
None.
Example request
curl -k -u admin:pass -X "DELETE" https://localhost:8089/services/spl2/modules/search.module.testmodule
Example response
{ "code": "not_found", "message": "Module does not exist/already deleted" }
See HTTP Status Codes for a list of possible responses.
services/spl2/permissions
https://<host>:<mport>/services/spl2/permissions
Access a list of role-based permissions for a module. Requires the edit_spl2_permissions capability.
POST
Update permissions for a module.
Request parameters
Name | Type | Default | Description |
---|---|---|---|
resourceType | String | Required. Must be either "modules" or "views". | |
resourceName | String | All objects | Required. Name of a specific module or view. |
permissions | JSON array | Required. Array containing permissions for each type of supported operation. |
Returned values
See HTTP Status Codes for a list of possible responses.
Example request
curl -k -u admin:pass https://localhost:8089/services/spl2/permissions \ --data '{ "resourceType": "modules", "resourceName": "module1", "permissions": [ { "operation": "read", "roles": [ "admin", "user", "editor" ] }, { "operation": "write", "roles": [ "editor" ] } ] }'
Example response
{ "code":201 }
services/spl2/permissions/role/{rolename}
https://<host>:<mport>/services/spl2/permissions/role/{rolename}
Access a list of all permissions for a given role. Requires the edit_spl2_permissions capability.
GET
Get all permissions for a given role.
Request parameters
None.
Example request
curl -k -u admin:pass https://localhost:8089/services/spl2/permissions/role/editor
Example response
[{ "resourceType": "", "resourceName": "", "permissions": [ { "operation": "read", "roles": [ "admin", "user", "editor" ] }, { "operation": "write", "roles": [ "editor" ] } ] }]
services/spl2/permissions/user/{username}
https://<host>:<mport>/services/spl2/permissions/user/{username}
Access a list of all permissions for a given user. Requires the edit_spl2_permissions capability.
GET
Get all permissions for a given user.
Request parameters
None.
Example request
curl -k -u admin:pass https://localhost:8089/services/spl2/permissions/user/user1
Example response
{ "resourceType": "", "resourceName": "", "permissions": [ { "operation": "read", "roles": [ "admin", "user", "editor" ] }, { "operation": "write", "roles": [ "editor" ] } ] }
This documentation applies to the following versions of Splunk® Enterprise: 9.2.1, 9.2.2, 9.2.3, 9.2.4, 9.3.0, 9.3.1, 9.3.2, 9.4.0