This documentation does not apply to the most recent version of Splunk.
Use the append command to append the results of a subsearch to the results of your current search. The append command runs only over historical data; it does not produce correct results if used in a real-time search.
Appends subsearch results to current results.
append [subsearch-options]* subsearch
subsearch
- Description: A search pipeline. Read more about how subsearches work in the User manual.
subsearch-options
- Syntax: maxtime=<int> | maxout=<int> | timeout=<int>
- Description: Controls how the subsearch is executed.
maxtime
- Syntax: maxtime=<int>
- Description: The maximum time, in seconds, to spend on the subsearch before automatically finalizing it. Defaults to 60.
maxout
- Syntax: maxout=<int>
- Description: The maximum number of result rows to output from the subsearch. Defaults to 50000.
timeout
- Syntax: timeout=<int>
- Description: The maximum time, in seconds, to wait for the subsearch to fully finish. Defaults to 120.
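For example, to limit a subsearch to at most 1000 result rows and 30 seconds of run time, place the subsearch options between append and the subsearch. This search is illustrative only; substitute your own base search and subsearch.
... | append maxout=1000 maxtime=30 [search sourcetype=access_* action=purchase | stats count by category_id]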
Appends the results of a subsearch to the current search, as new results at the end of the current results.
|This example uses recent (October 18-25, 2010) earthquake data downloaded from the USGS Earthquakes website. The data is a comma-separated ASCII text file that contains the source network (Src), ID (Eqid), version, date, location, magnitude, depth (km), and number of reporting stations (NST) for each earthquake over the last 7 days. Download the text file, M 2.5+ earthquakes, past 7 days, save it as a CSV file, and upload it to Splunk. Splunk should extract the fields automatically. Note that you'll be seeing data from the 7 days previous to your download, so your results will vary from the ones displayed below.|
Count the number of earthquakes that occurred in and around California yesterday and then calculate the total number of quakes.
source="eqs7day-M1.csv" Region="*California" | stats count by Region | append [search source="eqs7day-M1.csv" Region="*California" | stats count]
This example searches for all the earthquakes in the California regions (Region="*California"), then counts the number of earthquakes that occurred in each separate region. The stats command doesn't let you count the total number of events at the same time as you count the number of events split by a field, so the subsearch is used to count the total number of earthquakes that occurred. This count is added to the results of the previous search with the append command.
Because both searches share the count field, the result of the subsearch is listed as the last row in the count column:
This search demonstrates how to use the append command in a way that is similar to the addcoltotals command, to add the column totals.
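For comparison, here is a sketch of how the same column total could be produced with the addcoltotals command instead of a subsearch, assuming the same CSV source:
source="eqs7day-M1.csv" Region="*California" | stats count by Region | addcoltotals count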
|This example uses the sample dataset from the tutorial. Download the data set from this topic in the tutorial and follow the instructions to upload it to Splunk. Then, run this search using the time range, Other > Yesterday.|
Count the number of different customers who purchased something from the Flower & Gift shop yesterday, and break this count down by the type of product (Candy, Flowers, Gifts, Plants, and Balloons) they purchased. Also, list the top purchaser for each type of product and how much that person bought of that product.
sourcetype=access_* action=purchase | stats dc(clientip) by category_id | append [search sourcetype=access_* action=purchase | top 1 clientip by category_id] | table category_id, dc(clientip), clientip, count
This example first searches for purchase events (action=purchase). These results are piped into the stats command, and the dc(), or distinct_count(), function is used to count the number of different users who make purchases. The by clause is used to break up this number based on the different categories of products (category_id).
The subsearch is used to search for purchase events and find the top purchaser (based on clientip) for each category of products. These results are added to the results of the previous search using the append command.
Then, the table command is used to display only the category of products (category_id), the distinct count of users who bought each type of product (dc(clientip)), the user who bought the most of a product type (clientip), and the number of that product the user bought (count):
You can see that the
append command just tacks on the results of the subsearch to the end of the previous search, even though the results share the same field values. It doesn't let you manipulate or reformat the output.
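If you want the subsearch fields merged into the same rows rather than tacked on as new rows, the related appendcols command is one option. The following is a sketch using the same tutorial data; note that appendcols aligns results by row order, so this relies on both searches splitting by category_id in the same order:
sourcetype=access_* action=purchase | stats dc(clientip) by category_id | appendcols [search sourcetype=access_* action=purchase | top 1 clientip by category_id]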
|This example uses the sample dataset from the tutorial but should work with any format of Apache Web access log. Download the data set from this topic in the tutorial and follow the instructions to upload it to Splunk. Then, run this search using the time range, Other > Yesterday.|
Count the number of different IP addresses that accessed the Web server and also find the user who accessed the Web server the most for each type of page request (method).
sourcetype=access_* | stats dc(clientip), count by method | append [search sourcetype=access_* | top 1 clientip by method]
The Web access events are piped into the stats command, and the dc(), or distinct_count(), function is used to count the number of different users who accessed the site. The count() function is used to count the total number of times the site was accessed. These numbers are separated by the page request (method).
The subsearch is used to find the top user for each type of page request (method). The append command is used to add the result of the subsearch to the bottom of the table:
The first two rows are the results of the first search. The last two rows are the results of the subsearch. Both result sets share the method and count fields.
Example 1: Append the tabular results of the search for "fubar" to the current results.
... | chart count by bar | append [search fubar | chart count by baz]
Have questions? Visit Splunk Answers and see what questions and answers the Splunk community has using the append command.
This documentation applies to the following versions of Splunk: 4.1 , 4.1.1 , 4.1.2 , 4.1.3 , 4.1.4 , 4.1.5 , 4.1.6 , 4.1.7 , 4.1.8 , 4.2 , 4.2.1 , 4.2.2 , 4.2.3 , 4.2.4 , 4.2.5 , 4.3 , 4.3.1 , 4.3.2 , 4.3.3 , 4.3.4 , 4.3.5 , 4.3.6