Configure data model acceleration
Splunk Analytics for Hadoop reaches End of Life on January 31, 2025.
By default, only users with permissions to access the data on the Hadoop cluster can create data models.
Create a data model
1. Navigate to Settings > Data Models.
2. Click the Manage Data Models button.
3. Click New Data Model.
4. In the Create New Data Model dialog, enter the data model Title and an optional Description. The Title field accepts any character except asterisks, including blank spaces. This title appears wherever the data model name is displayed.
5. Splunk Analytics for Hadoop populates the data model ID field with a unique ID as you enter the title. You do not need to edit this ID. If for any reason you must edit this field, note the following (see the example after these steps):
- It must be a unique identifier.
- It can only contain letters, numbers, and underscores.
- It cannot contain spaces between characters.
Once you click Create, you can't change the ID value.
6. The App field displays the app context that you are currently in.
7. Click Create to open the new data model in the Data Model Editor.
8. Add and define the objects you want included in the search. To define the data model's first object, click Add Object and select an object type. For more information about object definition, see Design data models in the Splunk Enterprise Knowledge Manager Manual.
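For example, a model titled "Web Access Logs" might receive a suggested ID such as Web_Access_Logs, which satisfies the rules above (letters, numbers, and underscores only). The ID also becomes the stanza name under which the model's settings are stored in datamodels.conf. The sketch below is a minimal, hypothetical illustration; the model name is an assumption, not part of this procedure:

    [Web_Access_Logs]
    # Hypothetical stanza for a data model whose ID is "Web_Access_Logs".
    # The ID contains only letters, numbers, and underscores: no spaces, no asterisks.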
Accelerate the data model
1. Open the Data Model Editor for a data model, then click Edit and select Edit Acceleration.
2. Select Accelerate. Note that Hadoop node usage increases when you create an accelerated data model.
3. Choose a Summary Range for your accelerated data model search.
4. Enable Specific Options: Checking this box lets you edit the file settings below. Check it only if you want to change the default values. Splunk Analytics for Hadoop populates these fields based on information found in the data model, so you typically do not need to edit them. (A configuration sketch follows this list.)
- File Format: Choose either Parquet or ORC.
- Compression codec: For the Parquet file format, choose Snappy or Gzip. For ORC, choose Snappy or Zlib.
- DFS block size: Check Enable Block Size specification, then specify a size. Note that the DFS block size must be at least 32 MB. ORC and Parquet must buffer record data in memory until those records are written, and memory consumption correlates with the size of all the columns of a row group in your search. In other words, the fewer required fields in your search, the less buffer memory is required.
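As an alternative to the Edit Acceleration dialog, the same acceleration options can be expressed in datamodels.conf on the search head. The sketch below is a hedged example only: the stanza name is hypothetical, and the acceleration.hunk.* setting names and values are assumptions to verify against the datamodels.conf reference for your version before use.

    [Web_Access_Logs]
    # Enable acceleration and keep a one-month summary range (example values).
    acceleration = true
    acceleration.earliest_time = -1mon
    # Hunk-specific output settings (assumed setting names; verify in the datamodels.conf reference).
    acceleration.hunk.file_format = parquet
    acceleration.hunk.compression_codec = snappy
    # Block size in bytes; must be at least 32 MB (this example uses 128 MB).
    acceleration.hunk.dfs_block_size = 134217728

Snappy generally trades some compression ratio for faster compression and decompression than Gzip or Zlib, which is why it is a common default choice for accelerated summaries.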