Splunk® Enterprise

Knowledge Manager Manual


Accelerate data models

Data model acceleration is a tool that you can use to speed up data models that represent extremely large datasets. After acceleration, pivots based on accelerated data model objects complete more quickly than they did before, as do reports and dashboard panels that are based on those pivots.

Data model acceleration accomplishes this with the help of Splunk Enterprise's high-performance analytics store functionality, which builds data summaries behind the scenes in a manner similar to that of report acceleration. As with report acceleration, data model acceleration summaries are easy to enable and disable, and are stored on your indexers parallel to the index buckets that contain the events being summarized.

This topic covers:

  • The differences between data model acceleration, report acceleration, and summary indexing.
  • How you enable persistent acceleration for data models.
  • How Splunk Enterprise builds data model acceleration summaries.
  • How you can query accelerated data model summaries with the tstats command.
  • Advanced configurations for persistently accelerated data models.

This topic also explains ad hoc data model acceleration. Splunk Enterprise applies ad hoc data model acceleration whenever you build a pivot with an unaccelerated object, even search- and transaction-based objects, which can't be accelerated in a persistent fashion. However, any acceleration benefits you obtain are lost the moment you leave the Pivot Editor or switch objects during a session with the Pivot Editor. These disadvantages do not apply to "persistently" accelerated objects, which will always load with acceleration whenever they're accessed via Pivot. In addition, unlike "persistent" data model acceleration, ad hoc acceleration is not applied to reports or dashboard panels built with Pivot.

How data model acceleration differs from report acceleration and summary indexing

From a "big picture" perspective, this is how data model acceleration differs from report acceleration and summary indexing:

  • Report acceleration and summary indexing speed up individual searches, on a report-by-report basis. They do this by building collections of precomputed search result aggregates.
  • Data model acceleration speeds up reporting for the entire set of attributes (fields) that you define in a data model.

In other words, data model acceleration creates summaries for the specific set of fields you and your Pivot users want to report on, accelerating the dataset represented by that collection of fields rather than a particular search against that dataset.

What is a high-performance analytics store?

Data model acceleration summaries take the form of time-series index files, which have the .tsidx file extension. Each .tsidx summary contains records of the indexed field::value combos in the selected dataset and all of the index locations of those field::value combos. It's these .tsidx summaries that make up the high-performance analytics store. Collectively, these summaries are optimized to accelerate a range of analytical searches involving a specific set of fields--the set of fields defined as attributes in the accelerated data model.

An accelerated data model's high-performance analytics store spans a "summary range". This is a range of time that you select when you enable acceleration for the data model. When you run a pivot on an accelerated dataset, the pivot's time range must fall at least partly within this summary range in order to get an acceleration benefit. For example, if you have a data model that accelerates the last month of data but you create a pivot using one of this data model's objects that runs over the past year, the pivot will initially only get acceleration benefits for the portion of the search that runs over the past month.

The .tsidx files that make up a high-performance analytics store for a single data model are always distributed across one or more of your indexers. This is because Splunk Enterprise creates .tsidx files on the indexer, parallel to the buckets that contain the events referenced in the file, covering the range of time that the summary spans.

Note: The high-performance analytics store created through persistent data model acceleration is different from the summaries created through ad hoc data model acceleration. Ad hoc summaries are always created in a dispatch directory at the search head. For more information about ad hoc data model acceleration, see the subtopic "About ad hoc data model acceleration," below.

Enable persistent acceleration for a data model

You use the Edit Acceleration dialog to enable acceleration for a data model.

1. Open the Edit Acceleration dialog to enable acceleration for a data model. There are three ways to get to this dialog:

  • Navigate to the Data Models management page, find the model you want to accelerate, and click Edit and select Edit Acceleration.
  • Navigate to the Data Models management page, expand the row of the data model you want to accelerate, and click Add for ACCELERATION.
  • Open the Data Model Editor for a data model, click Edit and select Edit Acceleration.

2. Select Accelerate to enable acceleration for the data model.

3. Choose a Summary Range.

The Summary Range can span 1 Day, 7 Days, 1 Month, 3 Months, 1 Year, or All Time. It represents the time range over which you plan to run pivots against the accelerated objects in the data model. For example, if you only want to run pivots over periods of time within the last seven days, choose 7 Days. For more information about this setting, see the subtopic "About the summary range," below.
If you require a different summary range than the ones supplied by the Summary Range field, you can configure it for your data model in datamodels.conf.

Note: Smaller time ranges mean smaller .tsidx files that require less time to build and take up less space on disk, so you may want to choose shorter ranges when you can.

For more details on the summary range setting and how it works, see "About the summary range," below.
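
If you need a summary range other than the preset choices, a minimal datamodels.conf stanza might look like the following sketch. The stanza name is hypothetical; acceleration.earliest_time takes a relative time specifier that defines how far back the summary extends.

[My_Custom_Model]
acceleration = true
# Summarize the last 14 days instead of one of the preset ranges
acceleration.earliest_time = -14d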

Data model acceleration caveats

There are a number of restrictions on the kinds of data model objects that can be accelerated.

  • Data model acceleration only affects the first event object hierarchy in a data model. Additional event object hierarchies and object hierarchies based on root search and root transaction objects are not accelerated.
    • For example, you could have a data model that has a search object hierarchy at the top, then two separate event object hierarchies, and a transaction object hierarchy at the bottom. Only the first of the two event object hierarchies can benefit from full data model acceleration.
    • Pivots that use unaccelerated objects fall back to _raw data, which means that they will initially run slower. However, they can receive some acceleration benefit from ad hoc acceleration. See "About ad hoc data model acceleration" at the end of this topic for more information.
  • Data model acceleration is most efficient if the root event object being accelerated includes in its initial constraint search the index(es) that Splunk Enterprise should search over. A single high-performance analytics store can span across several indexes in multiple indexers. If you know that all of the data that you want to pivot on resides in a particular index or set of indexes, you can speed things up by telling Splunk Enterprise where to look. Otherwise Splunk Enterprise may end up wasting time unnecessarily accelerating data that is not of use to you.
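For example, a root event object whose constraint search names its indexes explicitly might use a constraint like the following. The index and sourcetype names here are purely illustrative:

(index=web OR index=web_archive) sourcetype=access_*

Because the constraint scopes the search to two specific indexes, Splunk Enterprise does not have to build summaries for data in any other index.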

For the full list of restrictions and caveats on data model usage see the list in "Managing Data Models," in this manual.

After you enable acceleration for a data model

After you enable acceleration for your data model, Splunk Enterprise begins building data model acceleration summaries that span the summary range that you've specified. It builds them in indexes with events that contain the fields specified in the data model. These .tsidx file summaries are stored parallel to their corresponding index buckets in a manner identical to that of report acceleration summaries.

Splunk Enterprise runs a search every 5 minutes to update existing data model summaries. It runs a maintenance process every 30 minutes to remove old, outdated summaries. You can adjust these intervals in datamodels.conf and limits.conf, respectively.

A few facts about data model summaries:

  • Each bucket in each index in a Splunk Enterprise instance can have one or more data model summaries, one for each accelerated data model for which it has relevant data. These summaries are created by Splunk Enterprise as it collects data.
  • Splunk Enterprise scopes summaries by search head (or search head pool ID) to account for different extractions that may produce different results for the same search string.
  • Each accelerated data model has its own directory, named for the model and the app it belongs to. Data model accelerations are scoped either to an app or globally; private data models cannot be accelerated. This prevents individual users from taking up disk space with private data model acceleration summaries.

Note: If necessary you can configure the location of data model summaries via indexes.conf.

About the summary range

For the most part, accelerated data model summary ranges behave in a manner similar to accelerated report summary ranges.

Data model summary ranges span an approximate range of time. If you select a Summary Range of 7 days, Splunk Enterprise will build summaries for the data model that each approximately span the past 7 days. Once Splunk Enterprise finishes building the summary, going forward the data model summarization process ensures that the summary always covers the selected range, removing older summary data that passes out of the range.

Note: We say that the Summary Range indicates the approximate range of time that a summary spans. At times, summaries have a store of data that slightly exceeds their summary range, but they never fail to meet it, unless the summary is being built for the first time.

In the future, when you run a pivot using an accelerated object from the data model over a range that falls within the preceding week, Splunk Enterprise will run the pivot against the data model's summary rather than the source index _raw data again. In most cases the summary will have far less data than the source index, and that means that the pivot should complete faster than it did on its initial run.

If you run a pivot over a period of time that is only partially covered by the summary range, the pivot won't complete quite as fast. This is because Splunk Enterprise has to run at least part of the pivot search over raw data in the main Splunk Enterprise index. For example, if the Summary Range setting for a data model is 1 week and you run a pivot using an accelerated object from that data model over the last 9 days, the pivot only gets acceleration benefits for the portion of the report that covers the past 7 days. The portion of the report that runs over days 8 and 9 will run at normal speed. In cases like this, Splunk Enterprise will return the accelerated results from summaries first, and then fill in the gaps at a slower speed.

Keep this in mind when you set the Summary Range value. If you always plan to run pivots over time ranges that exceed the past 7 days, but don't extend further out than 30 days, select a Summary Range of 1 month when you set up acceleration for the data model.

Note: Splunk offers a few advanced settings related to Summary Range that you can use if you have a large Splunk implementation that involves multi-terabyte datasets. Implementations of that size can lead to situations where the search required to build the initial data model acceleration summary runs too long and/or is resource intensive. For more information, see the subtopic "Advanced configurations for persistently accelerated data models," below.

How Splunk Enterprise builds data model acceleration summaries

As we mentioned earlier, when you enable acceleration for a data model, Splunk Enterprise builds the initial set of .tsidx file summaries for the data model and then runs scheduled searches in the background every 5 minutes to keep those summaries up to date. Each update ensures that the entire configured time range is covered without a significant gap in data. This method of summary building also ensures that late-arriving data will be summarized without complication.

To verify that Splunk Enterprise is scheduling searches to update your data models, in log.cfg you can set category.SavedSplunker=DEBUG and then watch scheduler.log for events like:

04-24-2013 11:12:02.357 -0700 DEBUG SavedSplunker - Added 1 scheduled searches for accelerated datamodels to the end of ready-to-run list

The speed of summary creation depends on the amount of events involved and the size of the summary range. You can track progress towards summary completion on the Data Models management page. Find the accelerated data model that you want to inspect, expand its row, and review the information that appears under ACCELERATION.

[Screenshot: data model ACCELERATION metrics on the Data Models management page]

Status tells you whether the acceleration summary for the data model is complete. If it is in Building status it will tell you what percentage of the summary is complete. Keep in mind that data model summaries are constantly updating with new data; just because a summary is "complete" now doesn't mean it won't be "building" later.

Note: The data model acceleration status is updated occasionally and cached so it may not represent the most up-to-date status.

Data model summary size on disk

You can use the data model metrics on the Data Models management page to track the total size of a data model's summary on disk. Summaries do take up space, and sometimes a significant amount of it, so it's important that you avoid overuse of data model acceleration. For example, you may want to reserve data model acceleration for data models whose pivots are heavily used in dashboard panels.

The amount of space that a data model takes up is related to the number of events that you are collecting for the summary range you've chosen. It can also be negatively affected if the data model includes attributes with high cardinality (that have a large set of unique values), such as a Name attribute.

If you are particularly size constrained you may want to test the amount of space a data model acceleration summary will take up by enabling acceleration for a small Summary Range first, and then moving to a larger range if you think you can afford it.

Configure size-based retention for data model summaries

Do you set size-based retention limits for your indexes so they do not take up too much disk storage space? By default, data model acceleration summaries can take up an unlimited amount of disk space. This can be a problem if you're also locking down the maximum data size of your indexes or index volumes. The good news is that you can optionally configure similar retention limits for your data model acceleration summaries.

Note: Before attempting to configure size-based retention for your data model acceleration summaries, you should first understand how to use volumes to configure limits on index size across indexes, as many of the principles are the same. For more information, see "Configure index size" in Managing Indexers and Clusters.

By default, data model acceleration summaries reside in a predefined volume titled _splunk_summaries at the following path:

 $SPLUNK_DB/<index_name>/datamodel_summary/<bucket_id>/<search_head_or_pool_id>/DM_<datamodel_app>_<datamodel_name>

This volume initially has no maximum size specification, which means that it has infinite retention.

Also by default, the tstatsHomePath parameter is specified only once as a global setting in indexes.conf. Its path is inherited by all indexes. In etc/system/default/indexes.conf:

[global]
[....]
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
[....]

You can override this default behavior by specifying tstatsHomePath for a specific index and pointing to a different volume that you have defined. You can also add size limits to any volume (including _splunk_summaries) by setting a maxVolumeDataSizeMB parameter in the volume configuration.

Here are the steps you take to set up size-based retention for data model acceleration summaries. All of the configurations described are made within indexes.conf.

1. (Optional) If you want to have data model acceleration summary results go into volumes other than _splunk_summaries, create them.

2. Add maxVolumeDataSizeMB parameters to the volume or volumes that will be the home for your data model acceleration summary data, such as _splunk_summaries.

This parameter manages size-based retention for data model acceleration summaries across your indexers.

3. Update your index definitions.

Set a tstatsHomePath parameter for each index that deals with data model acceleration summary data. Ensure that the path is pointing to the data model acceleration summary data volume that you identified in Step 2.
If you defined multiple volumes for your data model acceleration summaries, make sure that the tstatsHomePath settings for your indexes point to the appropriate volumes.

This example configuration shows data size limits being set up for data model acceleration summaries on the _splunk_summaries volume.

#########################
# Volume definitions
#########################

# This volume can contain up to 100GB of summary data, per the
# maxVolumeDataSizeMB setting.
[volume:_splunk_summaries]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 100000

#########################
# Index definitions
#########################

# Using the tstatsHomePath parameter, the _splunk_summaries volume 
# manages overall data model acceleration size across each of these 
# indexes, limiting it to 100GB. 
[main]
homePath   = $SPLUNK_DB/defaultdb/db
coldPath   = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary
maxMemMB = 20
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume

[history]
homePath   = $SPLUNK_DB/historydb/db
coldPath   = $SPLUNK_DB/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb
tstatsHomePath = volume:_splunk_summaries/historydb/datamodel_summary
maxDataSize = 10
frozenTimePeriodInSecs = 604800

[dm_acceleration]
homePath   = $SPLUNK_DB/dm_accelerationdb/db
coldPath   = $SPLUNK_DB/dm_accelerationdb/colddb
thawedPath = $SPLUNK_DB/dm_accelerationdb/thaweddb
tstatsHomePath = volume:_splunk_summaries/dm_accelerationdb/datamodel_summary

[_internal]
homePath   = $SPLUNK_DB/_internaldb/db
coldPath   = $SPLUNK_DB/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
tstatsHomePath = volume:_splunk_summaries/_internaldb/datamodel_summary

When a data model acceleration summary volume reaches its size limit, the Splunk Enterprise volume manager removes the oldest summary in the volume to make room. When the volume manager removes a summary, it places a marker file inside its corresponding bucket. This marker file tells the summary generator not to rebuild the summary.

Note: You can configure size-based retention for report acceleration summaries in much the same way that you do for data model acceleration summaries. The primary difference is that there is no default volume for report acceleration summaries. If you are trying to manage both kinds of summaries you may decide it is easiest to have both summaries use the default _splunk_summaries volume. For more information about this strategy for managing size-based retention of both acceleration summary types, see "Manage report acceleration" in this manual.

Query data model acceleration summaries

You can query the high-performance analytics store for a specific accelerated data model in Search with the tstats command.

tstats can sort through the full set of .tsidx file summaries that belong to your accelerated data model even when they are distributed among multiple indexes.

This can be a way to quickly run a stats-based search against a particular data model just to see if it's capturing the data you expect for the summary range you've selected.

To do this, you identify the data model using FROM datamodel=<datamodel-name>:

| tstats avg(foo) FROM datamodel=buttercup_games WHERE bar=value2 baz>5

The above query returns the average of the field foo in the "Buttercup Games" data model acceleration summaries, specifically where bar is value2 and the value of baz is greater than 5.
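Continuing with the same hypothetical buttercup_games model, you can also split results by one of the model's fields with a BY clause:

| tstats count FROM datamodel=buttercup_games BY bar

This returns an event count from the summaries for each distinct value of bar.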

Note: You don't have to specify the app of the data model, as Splunk Enterprise takes this from the search context (the app you are in). However, you cannot query an accelerated data model in App B from App A unless the data model in App B is shared globally.

Using the summariesonly argument

The summariesonly argument of the tstats command enables you to get specific information about data model summaries.

This example uses the summariesonly argument to get the time range of the summary for an accelerated data model named mydm.

| tstats summariesonly=t min(_time) as min, max(_time) as max from datamodel=mydm | eval prettymin=strftime(min, "%c") | eval prettymax=strftime(max, "%c")

This example uses summariesonly in conjunction with timechart to reveal what data has been summarized over a selected time range for an accelerated data model titled mydm.

| tstats summariesonly=t prestats=t count from datamodel=mydm by _time span=1h | timechart span=1h count

For more about the tstats command, including the usage of tstats to query normal indexed data, see the entry for tstats in the Search Reference.

Advanced configurations for persistently accelerated data models

There are a few very specific situations that may require you to set up advanced configurations for your persistently accelerated data models in datamodels.conf.

When summary-populating searches take too long to run

If your Splunk Enterprise implementation processes an extremely large amount of data on a regular basis you may find that the initial creation of persistent data model acceleration summaries is resource intensive. The searches that build these summaries may run too long, causing them to fail to summarize incoming events. To deal with this situation, Splunk Enterprise gives you two configuration parameters, both in datamodels.conf. These parameters are max_time and backfill_time.

Important: Most Splunk Enterprise users do not need to adjust these settings. The default max_time setting of 1 hour should ensure that long-running summary creation searches do not impede the addition of new events to the summary. We advise that you not change these advanced summary range configurations unless you know it is the only solution to your summary creation issues.

Change the maximum period of time that a summary-populating search can run

The max_time parameter causes summary-populating searches to quit after a specified amount of time has passed. After a summary-populating search stops, Splunk Enterprise runs a search to catch all of the events that have come in since the initial summary-populating search began, and then continues building the summary where the last summary-populating search left off. The max_time parameter is set to 3600 seconds (60 minutes) by default, a setting that should ensure proper summary creation for the majority of Splunk Enterprise instances.

For example: You have enabled acceleration for a data model, and you want its summary to retain events for the past three months. Because your organization indexes large amounts of data, the search that initially creates this summary would take about four hours to complete. Unfortunately, you can't let the search run uninterrupted for that amount of time, because it might fail to summarize some of the new events that come in while that four-hour search is in process.

The max_time parameter stops the search after an hour, and another search takes its place to pull in the new events that have come in during that time. It then continues running to add events from the last three months to the summary. This second search also stops after an hour and the process repeats until the summary is complete.

Note: The max_time parameter is an approximate time limit. After the 60 minutes elapses, Splunk Enterprise has to finish summarizing the current bucket before kicking off the summary search. This prevents wasted work.
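For reference, a datamodels.conf sketch that lowers the limit to 30 minutes might look like this (the stanza name is hypothetical, and max_time is expressed in seconds):

[my_large_model]
acceleration = true
# Stop each summary-populating search after roughly 30 minutes
acceleration.max_time = 1800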

Set a backfill time range that is shorter than the summary time range

If you are indexing a tremendous amount of data with your Splunk implementation and you don't want to adjust the max_time range for a slow-running summary-populating search, you have an alternative option: the backfill_time parameter.

The backfill_time parameter creates a second "backfill time range" that you set within the summary range. Splunk Enterprise will build a partial summary that initially only covers this shorter time range. After that, the summary will expand with each new event summarized until it reaches the limit of the larger summary time range. At that point the full summary will be complete and Splunk will stop retaining events that age out of the summary range.

For example, say you want to set your Summary Range to 1 Month but you know that your system would be taxed by a search that built a summary for that time range. To deal with this, you set acceleration.backfill_time = -7d to have Splunk Enterprise run a search that creates a partial summary that initially just covers the past week. After that limit is reached, Splunk Enterprise would only add new events to the summary, causing the range of time covered by the summary to expand. But the full summary would still only retain events for one month, so once the partial summary expands to the full Summary Range of the past month, it starts dropping its oldest events, just like an ordinary data model summary does.
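A datamodels.conf sketch of this scenario, with a one-month summary range and a one-week backfill range, might look like this (the stanza name is hypothetical):

[my_large_model]
acceleration = true
# Full summary range: the past month
acceleration.earliest_time = -1mon
# Build the initial partial summary for just the past week
acceleration.backfill_time = -7d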

When you do not want persistently accelerated data models to be rebuilt automatically

By default, Splunk Enterprise automatically rebuilds persistently accelerated data models whenever it finds that those models are outdated. Data models can become outdated when the search stored in the data model configuration in savedsearches.conf no longer matches the search for the actual data model. This can happen if the JSON file for an accelerated model is edited on disk without first disabling the model's acceleration.

In very specific cases you may want to disable this feature for specific accelerated data models, so that those data models are not rebuilt automatically when they become out of date. Instead it will be up to admins to initiate the rebuilds manually. Admins can manually rebuild a data model through the Data Model Manager page, by expanding the row for the affected data model and clicking Rebuild. For more about the Data Model Manager page, see "Manage data models" in this manual.

To disable automatic rebuilds for a specific persistently accelerated data model, open datamodels.conf, find the stanza for the data model, and set acceleration.manual_rebuilds = true.
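
For example, a datamodels.conf stanza with automatic rebuilds disabled might look like this (the stanza name is hypothetical):

[my_model]
acceleration = true
# Require an admin to rebuild this model's summary manually if it
# becomes outdated
acceleration.manual_rebuilds = true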

About ad hoc data model acceleration

Even when you're building a pivot that is based on a data model object that is not accelerated in a persistent fashion, that pivot can benefit from what we call "ad hoc" data model acceleration. In these cases, Splunk Enterprise builds a summary in a search head dispatch directory when you work with an object to build a pivot in the Pivot Editor.

The search head begins building the ad hoc data model acceleration summary after you select an object and enter the Pivot Editor. You can follow the progress of the ad hoc summary construction with the progress bar:

[Screenshot: the pivot progress bar]

When the progress bar reads Complete, the ad hoc summary is built, and the search head uses it to return pivot results faster going forward. But this summary only lasts while you work with the object in the Pivot Editor. If you leave the editor and return, or switch to another object and then return to the first one, the search head will need to rebuild the ad hoc summary.

Ad hoc data model acceleration summaries complete faster when they collect data for a shorter range of time. You can change this range for root event objects, root transaction objects, and their children by resetting the time Filter in the Pivot Editor. See "About ad hoc data model acceleration summary time ranges," below, for more information.

Ad hoc data model acceleration works for all object types, including root search objects and root transaction objects. Its main disadvantage against persistent data model acceleration is that with persistent data model acceleration, the summary is always there, keeping pivot performance speedy, until acceleration is disabled for the data model. With ad hoc data model acceleration, you have to wait for the summary to be rebuilt each time you enter the Pivot Editor.

About ad hoc data model acceleration summary time ranges

The search head always tries to make ad hoc data model acceleration summaries fit the range set by the time Filter in the Pivot Editor. When you first enter the Pivot Editor for an object, the pivot time range is set to All Time. If your object represents a large dataset this can mean that the initial pivot will complete slowly as it builds the ad hoc summary behind the scenes.

When you give the pivot a time range other than All Time, the search head builds an ad hoc summary that fits that range as efficiently as possible. For any given data model object, the search head completes an ad hoc summary for a pivot with a short time range quicker than it completes when that same pivot has a longer time range.

The search head only rebuilds the ad hoc summary from start to finish if you replace the current time range with a new time range that has a different "latest" time. This is because the search head builds each ad hoc summary backwards, from its latest time to its earliest time. If you keep the latest time the same but change the earliest time, the search head will at most collect any extra data that is required.

Note: Root search objects and their child objects are a special case here as they do not have time range filters in Pivot (they do not extract _time as an attribute). Pivots based on these objects always build summaries for all of the events returned by the search. However, you can design the root search object's search string so it includes "earliest" and "latest" dates, which restricts the dataset represented by the root search object and its children.
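For example, a root search object's search string might constrain its own time range like this (the sourcetype name is illustrative):

sourcetype=transaction_log earliest=-30d@d latest=now

Because the time bounds live in the search string itself, every object in this hierarchy represents at most the last 30 days of data, which keeps ad hoc summary construction for these objects manageable.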

How ad hoc data model acceleration differs from persistent data model acceleration

Here's a summary of the ways in which ad hoc data model acceleration differs from persistent data model acceleration:

  • Ad hoc data model acceleration takes place on the search head rather than the indexer. This enables it to accelerate all three object types (event, search, and transaction).
  • Splunk Enterprise creates ad hoc data model acceleration summaries in dispatch directories at the search head. It creates and stores persistent data model acceleration summaries in your indexes alongside index buckets.
  • Splunk Enterprise deletes ad hoc data model acceleration summaries when you leave the Pivot Editor or change the object you are working on while you are in the Pivot Editor. When you return to the Pivot Editor for the same object, the search head must rebuild the ad hoc summary. You cannot preserve ad hoc data model summaries for later use.
    • Pivot job IDs are retained in the pivot URL, so if you quickly use the back button after leaving Pivot (or return to the pivot job with a permalink) you may be able to use the ad hoc summary for that job without waiting for a rebuild. The search head deletes ad hoc data model acceleration summaries from the dispatch directory a few minutes after you leave Pivot or switch to a different model within Pivot.
  • Ad hoc acceleration does not apply to reports or dashboard panels that are based on pivots. If you want pivot-based reports and dashboard panels to benefit from data model acceleration, base them on objects from a persistently accelerated event object hierarchy.
  • Ad hoc data model acceleration can potentially create more load on your search head than persistent data model acceleration creates on your indexers. This is because the search head creates a separate ad hoc data model acceleration summary for each user that accesses a specific data model object in Pivot that is not persistently accelerated. On the other hand, summaries for persistently accelerated data model objects are shared by each user of the associated data model. This data model summary reuse results in less work for your indexers.

This documentation applies to the following versions of Splunk® Enterprise: 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14

