Splunk Cloud Platform

Search Tutorial

About uploading data

When you add data to your Splunk deployment, the data is processed and transformed into a series of individual events that you can view, search, and analyze.

If you haven't already downloaded the tutorial data, see Download the tutorial data files.

What kind of data?

The Splunk platform accepts any type of data. In particular, it works with all IT streaming and historical data. The source of the data can be event logs, web logs, live application logs, network feeds, system metrics, change monitoring, message queues, archive files, and so on.

In general, data sources are grouped into the following categories.

  • Files and directories: Most data that you might be interested in comes directly from files and directories.
  • Network events: The Splunk software can index remote data from any network port, as well as SNMP events from remote devices.
  • IT operations: Data from IT operations tools, such as Nagios, NetApp, and Cisco.
  • Cloud services: Data from cloud services, such as AWS and Kinesis.
  • Database services: Data from databases, such as Oracle, MySQL, and Microsoft SQL Server.
  • Security services: Data from security services, such as McAfee, Microsoft Active Directory, and Symantec Endpoint Protection.
  • Virtualization services: Data from virtualization services, such as VMware and XenApp.
  • Application servers: Data from application servers, such as JMX and JMS, WebLogic, and WebSphere.
  • Windows sources: The Windows version of Splunk software accepts a wide range of Windows-specific inputs, including Windows Event Log, Windows Registry, WMI, Active Directory, and performance monitoring.
  • Other sources: Other input sources are also supported, such as FIFO queues, scripted inputs for getting data from APIs, and other remote data interfaces.

For many types of data, you can add the data directly to your Splunk deployment. Many common data sources are automatically recognized.

If the data that you want to use is not automatically recognized by the Splunk software, you need to provide information about the data before you can add it.
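One common way to describe unrecognized data is to define a custom source type in a props.conf file. As a minimal sketch (the stanza name, timestamp format, and log layout here are hypothetical, not part of the tutorial), a line-oriented log with a leading timestamp might be described like this:

```
# props.conf -- hypothetical source type for a line-oriented log
[my_custom_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

In Splunk Web, the "Set Source Type" step of the Add Data workflow generates equivalent settings for you when you preview a file.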

Where is the data stored?

The process of transforming the data is called indexing. During indexing, the incoming data is processed to enable fast searching and analysis. The processed results are stored in the index as events.

The index is a flat file repository for the data. For this tutorial, the index resides on the computer where you access your Splunk deployment.

Events are stored in the index as a group of files that fall into two categories:

  • Raw data, which is the data that you add to the Splunk deployment. The raw data is stored in a compressed format.
  • Index files, which include some metadata files that point to the raw data.

These files reside in sets of directories, called buckets, that are organized by age.
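To make the bucket structure concrete, here is an illustrative sketch of how these directories are commonly laid out on disk (paths and names vary by deployment; the index name "web" and the bucket ID are hypothetical):

```
$SPLUNK_DB/web/db/
    db_1549226400_1549222800_5/      <- one bucket, named by its event time range
        rawdata/journal.gz           <- the compressed raw data
        *.tsidx                      <- index files that point into the raw data
```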

By default, all of your data is put into a single, preconfigured index called main. You can also create additional indexes and select one of them when you add data to your Splunk instance. Several other indexes are used for internal purposes.
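Because every event lives in exactly one index, searches can be scoped to an index by name. For example, this search (a sketch, assuming events have already been added to the default main index) retrieves a sample of indexed events:

```
index=main | head 10
```

Restricting a search to a specific index is also a common way to narrow the amount of data that the search has to scan.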

Next step

Now that you are more familiar with data sources and indexes, let's learn about the tutorial data that you will work with.

See also

"Use apps to get data in" in Getting Data In
About managing indexes in Managing Indexers and Clusters of Indexers

Last modified on 01 February, 2022

This documentation applies to the following versions of Splunk Cloud Platform: 9.2.2406, 9.0.2205, 8.2.2112, 8.2.2201, 8.2.2202, 8.2.2203, 9.0.2208, 9.0.2209, 9.0.2303, 9.0.2305, 9.1.2308, 9.1.2312, 9.2.2403 (latest FedRAMP release)

