Search Reference

 


timechart



Synopsis

Creates a time series chart with corresponding table of statistics.

Syntax

timechart [sep=<string>] [partial=<bool>] [cont=<t|f>] [limit=<int>] [agg=<stats-agg-term>] [<bucketing-option> ]* (<single-agg> [by <split-by-clause>] ) | ( (<eval-expression>) by <split-by-clause> )

Required arguments

agg
Syntax: <stats-agg-term>
Description: See the Stats functions section below. For a list of stats functions with descriptions and examples, see "Functions for stats, chart, and timechart".
bucketing option
Syntax: bins | minspan | span | <start-end>
Description: Discretization options. If a bucketing option is not supplied, timechart defaults to bins=100. bins sets the maximum number of bins, not the target number of bins.
eval-expression
Syntax: <math-exp> | <concat-exp> | <compare-exp> | <bool-exp> | <function-call>
Description: A combination of literals, fields, operators, and functions that represents the value of your destination field. These are the basic operations you can perform with eval. For the evaluations to work, your values must be valid for the type of operation. For example, with the exception of addition, arithmetic operations may not produce valid results if the values are not numerical. Splunk can concatenate two operands if they are both strings. When you concatenate values with '.', Splunk treats both values as strings regardless of their actual type.
single-agg
Syntax: count|<stats-func>(<field>)
Description: A single aggregation applied to a single field (which can be an evaled field). No wildcards are allowed. The field must be specified, except when using the special count aggregator, which applies to events as a whole.
split-by-clause
Syntax: <field> (<tc-option>)* [<where-clause>]
Description: Specifies a field to split by. If field is numerical, default discretization is applied; discretization is defined with tc-option.

Optional arguments

cont
Syntax: cont=<bool>
Description: Specifies whether the chart is continuous or not. If true, Splunk fills in the time gaps. Default is True | T.
fixedrange
Syntax: fixedrange=<bool>
Description: (Not valid for 4.2) Specifies whether or not to enforce the earliest and latest times of the search. Setting it to false allows the timechart to shrink to just the time range that contains valid data. Default is True | T.
limit
Syntax: limit=<int>
Description: Specifies a limit for series filtering; limit=0 means no filtering. Setting limit=N keeps the top N series, ranked by the sum of each series.
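For example, a search like the following (the field name is illustrative) would keep only the five series with the largest totals; because useother defaults to true, the remaining series are grouped into a single OTHER series rather than dropped:

... | timechart limit=5 count by host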
partial
Syntax: partial=<bool>
Description: Controls if partial time buckets should be retained or not. Only the first and last bucket could ever be partial. Defaults to True|T, meaning that they are retained.
sep
Syntax: sep=<string>
Description: Specifies the separator to use for output fieldnames when multiple data series are specified along with a split-by field.
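For example, when two aggregations are split by a field, each output field name combines the aggregation with the split-by value. In a hypothetical search such as the following, sep="/" would join them with a slash, producing column names along the lines of avg(delay)/host1 (field and host names here are illustrative):

... | timechart sep="/" avg(delay) sum(delay) by host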

Stats functions

stats-agg-term
Syntax: <stats-func>( <evaled-field> | <wc-field> ) [AS <wc-field>]
Description: A statistical specifier optionally renamed to a new field name. The specifier can be by an aggregation function applied to a field or set of fields or an aggregation function applied to an arbitrary eval expression.
stats-function
Syntax: avg() | c() | count() | dc() | distinct_count() | earliest() | estdc() | estdc_error() | exactperc<int>() | first() | last() | latest() | list() | max() | median() | min() | mode() | p<int>() | perc<int>() | per_day() | per_hour() | per_minute() | per_second() | range() | stdev() | stdevp() | sum() | sumsq() | upperperc<int>() | values() | var() | varp()
Description: Functions used with the stats command. Each time you invoke the stats command, you can use more than one function; however, you can only use one by clause. For a list of stats functions with descriptions and examples, see "Functions for stats, chart, and timechart".

Bucketing options

bins
Syntax: bins=<int>
Description: Sets the maximum number of bins to discretize into; it does not set a target number of bins. (timechart finds the smallest bucket size that results in no more than the specified number of distinct buckets, so even if you specify bins=300, the resulting number of buckets might be much lower.) Defaults to 100.
minspan
Syntax: minspan=<span-length>
Description: Specifies the smallest span granularity to use when automatically inferring the span from the data time range.
span
Syntax: span=<log-span> | span=<span-length>
Description: Sets the size of each bucket, using a span length based on time or log-based span.
<start-end>
Syntax: end=<num> | start=<num>
Description: Sets the minimum and maximum extents for numerical buckets. Data outside of the [start, end] range is discarded.

Log span syntax

<log-span>
Syntax: [<num>]log[<num>]
Description: Sets to log-based span. The first number is a coefficient. The second number is the base. If the first number is supplied, it must be a real number >= 1.0 and < base. Base, if supplied, must be real number > 1.0 (strictly greater than 1).
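For example, following the syntax above, span=2log10 specifies log-based buckets with a coefficient of 2 and a base of 10, as in this sketch:

... | timechart span=2log10 count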

Span length syntax

span-length
Syntax: <int>[<timescale>]
Description: The span of each bin, based on time. If a timescale is provided, the span is interpreted as a time range; if not, it is an absolute bucket length.
<timescale>
Syntax: <sec> | <min> | <hr> | <day> | <month> | <subseconds>
Description: Time scale units.
<sec>
Syntax: s | sec | secs | second | seconds
Description: Time scale in seconds.
<min>
Syntax: m | min | mins | minute | minutes
Description: Time scale in minutes.
<hr>
Syntax: h | hr | hrs | hour | hours
Description: Time scale in hours.
<day>
Syntax: d | day | days
Description: Time scale in days.
<month>
Syntax: mon | month | months
Description: Time scale in months.
<subseconds>
Syntax: us | ms | cs | ds
Description: Time scale in microseconds (us), milliseconds (ms), centiseconds (cs), or deciseconds (ds).
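Putting the pieces together, all of the following are valid span-length values: span=10s (ten seconds), span=5m (five minutes), span=12h (twelve hours), span=1d (one day), and span=100ms (one hundred milliseconds). For example:

... | timechart span=5m count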

tc options

tc-option
Syntax: <bucketing-option> | usenull=<bool> | useother=<bool> | nullstr=<string> | otherstr=<string>
Description: Options for controlling the behavior of splitting by a field.
usenull
Syntax: usenull=<bool>
Description: Controls whether or not a series is created for events that do not contain the split-by field.
nullstr
Syntax: nullstr=<string>
Description: If usenull is true, this series is labeled by the value of the nullstr option. Defaults to NULL.
useother
Syntax: useother=<bool>
Description: Specifies if a series should be added for data series not included in the graph because they did not meet the criteria of the <where-clause>. Defaults to True | T.
otherstr
Syntax: otherstr=<string>
Description: If useother is true, this series is labeled by the value of the otherstr option. Defaults to OTHER.
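For example, a search such as the following (the field name is illustrative) creates one series per value of referer_domain, groups every series not selected for the chart into a single series labeled "Other sites", and drops events that have no referer_domain at all:

... | timechart count by referer_domain useother=t otherstr="Other sites" usenull=f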

where clause

where clause
Syntax: <single-agg> <where-comp>
Description: Specifies the criteria for including particular data series when a field is given in the tc-by-clause. The most common use of this option is to select for spikes rather than overall mass of distribution in series selection. The default value finds the top ten series by area under the curve. Alternatively, you could replace sum with max to find the series with the ten highest spikes. This has no relation to the where command.
<where-comp>
Syntax: <wherein-comp> | <wherethresh-comp>
Description: A criteria for the where clause.
<wherein-comp>
Syntax: (in|notin) (top|bottom)<int>
Description: A where-clause criteria that requires the aggregated series value be in or not in some top or bottom grouping.
<wherethresh-comp>
Syntax: ( < | > ) <num>
Description: A where-clause criteria that requires the aggregated series value be greater than or less than some numeric threshold.
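For example, following the description above, this hypothetical search keeps only the series whose peak value is among the five highest, rather than ranking series by total area under the curve:

... | timechart avg(delay) by host where max in top5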

Description

Creates a chart for a statistical aggregation applied to a field, with time as the x-axis. Data is optionally split by a field so that each distinct value of this split-by field is a series. If you use an eval expression, the split-by clause is required. The limit and agg options enable you to specify series filtering, but they are ignored if an explicit where-clause is provided (limit=0 means no series filtering).

Bucket time spans versus per_* functions

The functions, per_day(), per_hour(), per_minute(), and per_second() are aggregator functions and are not responsible for setting a time span for the resultant chart. These functions are used to get a consistent scale for the data when an explicit span is not provided. The resulting span can depend on the search time range.

For example, per_hour() converts the field value so that it is a rate per hour, or sum()/<hours in the span>. If your chart span ends up being 30m, it is sum()*2.

If you want the span to be 1h, you still have to specify the argument span=1h in your search.
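For example, both of the following chart the price field as an hourly rate, but only the second guarantees one-hour buckets:

... | timechart per_hour(price)
... | timechart span=1h per_hour(price)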

Note: You can do per_hour() on one field and per_minute() (or any combination of the functions) on a different field in the same search.

A note about split-by fields

With chart or timechart, you cannot use the same field both inside an aggregation function and as your split-by field. For example, this search will not run:

... | chart sum(A) by A span=log2

However, you can work around this with an eval expression, for example:

... | eval A1=A | chart sum(A) by A1 span=log2

Examples

Example 1

This example uses the sample dataset from the tutorial and a field lookup to add more information to the event data.

The original data set includes a product_id field that is the catalog number for the items sold at the Flower & Gift shop. The field lookup adds three new fields to your events: product_name, which is a descriptive name for the item; product_type, which is a category for the item; and price, which is the cost of the item.

After you configure the field lookup, you can run this search using the time range, Other > Yesterday.

Chart revenue for the different products that were purchased yesterday.

sourcetype=access_* action=purchase | timechart per_hour(price) by product_name usenull=f

This example searches for all purchase events (action=purchase) and pipes those results into the timechart command. The per_hour() function sums the values of the price field for each item (product_name) and buckets the totals by hour of the day.

This produces the following table of results:

[Image: results table (TimechartExample perhr2.png)]


Click Show report to format the chart in Report Builder. Here, it's formatted as a stacked column chart over time:

[Image: stacked column chart (TimechartExample perhr1.png)]

After you create this chart, you can mouse over each section to view more metrics for the product purchased at that hour of the day. Notice that the chart does not display the data in hourly spans. Because a span is not provided (such as span=1h), the per_hour() function converts the value so that it is a sum per hour over the time range (which in this case is 24 hours).

Example 2

This example uses the sample dataset from the tutorial and a field lookup to add more information to the event data.

The original data set includes a product_id field that is the catalog number for the items sold at the Flower & Gift shop. The field lookup adds three new fields to your events: product_name, which is a descriptive name for the item; product_type, which is a category for the item; and price, which is the cost of the item.

After you configure the field lookup, you can run this search using the time range, All time.

Chart the number of purchases made daily for each type of product.

sourcetype=access_* action=purchase | timechart span=1d count by product_type usenull=f

This example searches for all purchase events (action=purchase) and pipes those results into the timechart command. The span=1d argument buckets the count of purchases over the week into daily chunks. The usenull=f argument tells Splunk to ignore any events that contain a NULL value for product_type. This produces the following table:

[Image: results table (TimechartEx2 Table.png)]


Click Show report to format the chart in Report Builder. Here, it's formatted as a column chart over time:

[Image: column chart (TimechartEx2 chart.png)]


You can compare the number of different items purchased each day and over the course of the week. It looks like, day to day, the number of purchases for each item does not vary significantly.


Example 3

This example uses the sample dataset from the tutorial and a field lookup to add more information to the event data.

The original data set includes a product_id field that is the catalog number for the items sold at the Flower & Gift shop. The field lookup adds three new fields to your events: product_name, which is a descriptive name for the item; product_type, which is a category for the item; and price, which is the cost of the item.

After you configure the field lookup, you can run this search using the time range, All time.

Count the total revenue made for each item sold at the shop over the course of the week. This example shows two ways to do this.

1. The first search uses the span argument to bucket the search results into 1-day increments. It then uses the sum() function to add up the price for each product_name.

sourcetype=access_* action=purchase | timechart span=1d sum(price) by product_name usenull=f

2. This second search uses the per_day() function to calculate the total of the price values for each day.

sourcetype=access_* action=purchase | timechart per_day(price) by product_name usenull=f

Both searches produce the following results table:

[Image: results table (TimechartEx3 table.png)]


Click Show report to format the chart in Report Builder. Here, it's formatted as a column chart over time:

[Image: column chart (TimechartEx3 chart.png)]


Now you can compare the total revenue made for items purchased each day and over the course of the week.


Example 4

This example uses the sample dataset from the tutorial. Download the data set from this topic in the tutorial and follow the instructions to upload it to Splunk. Then, run this search using the time range, Other > Yesterday.

Chart yesterday's views and purchases at the Flower & Gift shop.

sourcetype=access_* | timechart per_hour(eval(method="GET")) AS Views, per_hour(eval(action="purchase")) AS Purchases

This search uses the per_hour() function and eval expressions to search for page views (method=GET) and purchases (action=purchase). The results of the eval expressions are renamed as Views and Purchases, respectively. This produces the following results table:

[Image: results table (TimechartEx4 table.png)]


Click Show report to format the chart in Report Builder. Here, it's formatted as an area chart:

[Image: area chart (TimechartEx4 linechart.png)]


The difference between the two areas indicates that not all views led to purchases. If every view led to a purchase, you would expect the areas to overlap completely, leaving no visible difference between them.


Example 5

This example uses the sample dataset from the tutorial but should work with any format of Apache Web access log. Download the data set from this topic in the tutorial and follow the instructions to upload it to Splunk. Then, run this search using the time range, Other > Yesterday.

Search the Web access logs and count the number of page requests over time.

sourcetype=access_* | timechart count(eval(method="GET")) AS GET, count(eval(method="POST")) AS POST

This search uses the count() function and eval expressions to count the different page request methods, GET or POST. This produces the following result table:

[Image: results table (TimechartEx5 resultsTable.png)]


Click Show report to format the chart in Report Builder. Here, it's formatted as a line chart:

[Image: line chart (TimechartEx5 lineChart.png)]


Note: You can use the stats, chart, and timechart commands to perform the same statistical calculations on your data. The stats command returns a table of results. The chart command returns the same table of results, but you can use the Report Builder to format this table as a chart. If you want to chart your results over a time range, use the timechart command. You can also see variations of this example with the chart and timechart commands.


More examples

Example 1: Compute the product of the average "CPU" and average "MEM" each minute for each "host"

... | timechart span=1m eval(avg(CPU) * avg(MEM)) by host

Example 2: Display timechart of the avg of cpu_seconds by processor rounded to 2 decimal places.

... | timechart eval(round(avg(cpu_seconds),2)) by processor

Example 3: Calculate the average value of "CPU" each minute for each "host".

... | timechart span=1m avg(CPU) by host

Example 4: Create a timechart of average "cpu_seconds" by "host", and remove data (outlying values) that may distort the timechart's axis.

... | timechart avg(cpu_seconds) by host | outlier action=tf

Example 5: Graph the average "thruput" of hosts over time.

... | timechart span=5m avg(thruput) by host

Example 6: Example usage

sshd failed OR failure | timechart span=1m count(eventtype) by source_ip usenull=f where count>10

See also

bucket, chart, sitimechart

Answers

Have questions? Visit Splunk Answers to see what questions and answers the Splunk community has about the timechart command.

This documentation applies to the following versions of Splunk: 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 5.0.9


