Splunk® App for NetApp Data ONTAP (Legacy)

Deploy and Use the Splunk App for NetApp Data ONTAP

This documentation does not apply to the most recent version of Splunk® App for NetApp Data ONTAP (Legacy). For documentation on the most recent version, go to the latest release.

Configure search head pooling

The Splunk App for NetApp Data ONTAP requires a stable and supported Splunk installation. Review the "Key implementation issues" topic in the Splunk Enterprise documentation before you set up a search head pool.

In a distributed search environment, a search head is a Splunk instance that directs search requests to one or more search peers (Indexers).

To set up a search head pooled environment, follow the instructions in the topic "Create a search head pool" in the Distributed Search manual, which discusses how to set up your search heads and search peers. To install and configure the Splunk App for NetApp Data ONTAP in a search head pooled environment, follow the installation and configuration instructions in this manual and observe the requirements and limitations discussed in this topic.

Dedicated search head

Install the instance of the Splunk App for NetApp Data ONTAP that you will use to administer the scheduler on its own search head. A dedicated search head improves performance and eliminates possible conflicts with other apps installed on the same Splunk deployment.

To install the app in a search head pool

  1. In a search head pooled environment, the apps are stored in a central location (such as an NFS mount). This shared disk space is used by all of the search heads in the pool. Ensure that you have set up shared storage for all search heads.
  2. On each search head, install the Splunk App for NetApp Data ONTAP, as described in the topic Install the Splunk App for NetApp Data ONTAP in this manual.
  3. Install an instance of the Splunk App for NetApp Data ONTAP on a search head that does not belong to the search head pool. This instance is the "central configuration instance" used to configure and start the scheduler. Note that this is the only location from which you can start or stop the scheduler and configure data collection from your sources. Use this instance for configuration purposes only, not an instance that is part of the search head pool. The dashboards in this instance remain empty because data is not sent to it.
  4. After you have installed the "central configuration instance", follow the instructions in "Create a data collection node" in this manual to set up your data collection nodes.
  5. Data collection nodes are managed by the scheduler. On the "central configuration instance" (the instance of the app that is not part of the search head pool), log in to Splunk Web and navigate to the Collection Configuration dashboard. Register each new data collection node individually with the scheduler, specify the associated filers, and configure the nodes to forward data to the indexers, then start the scheduler. See the "Add a data collection node" topic in this manual for instructions.
  6. With the app installed on each search head in the pool, log in to Splunk Web on each search head and add the indexers as search peers. Now that the indexers are set up, the data collection nodes can forward data to them. Follow the instructions in "Create a search head pool" in the Distributed Search manual.
  7. Log in to the data collection nodes and check that data is being forwarded to the indexers.
  8. When you use TSIDX namespaces in a search head pool, the namespaces must be shared because they live on the shared storage. To do this, follow the instructions in the topic "How to share namespaces" below.
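Step 6 above can also be expressed directly in configuration rather than through Splunk Web. The following is a minimal sketch of a distsearch.conf on each pooled search head; the indexer hostnames are placeholders, not values from this app:

```ini
# $SPLUNK_HOME/etc/system/local/distsearch.conf on each search head
# (hostnames below are examples only; substitute your own indexers)
[distributedSearch]
servers = https://indexer1.example.com:8089,https://indexer2.example.com:8089
```

Depending on your Splunk version, the servers entries may be expected as host:port pairs without the scheme; check the distsearch.conf reference for your release before applying.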

How to share namespaces

To share the namespaces and to check that they have been shared correctly:

  1. Set "tsidxStatsHomePath" in the splunk_app_netapp/local/indexes.conf file to the shared storage location. If the shared storage is mounted, for example, under /mnt/shp/home, then edit splunk_app_netapp/local/indexes.conf to include the following default stanza:
    [default]
    tsidxStatsHomePath=/mnt/shp/home
  2. Set "definition" to "true" for the [tstats_local] stanza in SA-Utils/default/macros.conf:
    [tstats_local]
    definition = true
  3. Create the TSIDX namespace on one of the search heads in the pooled environment. The tscollect search in the next step populates the TSIDX namespace.
  4. Test that the TSIDX namespace is set up correctly using this tscollect query, replacing "foo" with the name of your shared namespace:
    index=_internal | head 10 | table host,source,sourcetype | `tscollect("foo")`
  5. If this search returns data without an error, then run this tstats query on "foo" on each of the search heads in the pool:
    | `tstats` count from foo | stats count | search count=10
  6. If this search also returns data, your configuration is working correctly. The search verifies that you can read data from the shared namespace and that the result (count) is the same on every search head in the pool. Identical results show that all of the search heads see the same namespace and that search head pooling with TSIDX namespaces is working.
  7. Set up your namespace retention policy.
    • To set up a default policy for all namespaces, create a local version of the $SPLUNK_HOME/etc/apps/SA-Utils/default/tsidx_retention.conf file. You can use the default settings in the file to limit the size of the TSIDX namespaces and to limit the length of time that namespaces are retained, or you can modify the settings to values that work in your environment.
    • To set up a retention policy for specific namespaces in the app, create a local version of the splunk_app_netapp/default/tsidx_retention.conf file. Uncomment the namespaces that you want to use in the app and the values associated with those namespaces. You can use the default values for the namespaces or modify the settings to values that work in your environment. A retention policy is set for each of the namespaces.
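Taken together, steps 1 and 2 above amount to two small configuration edits. The following sketch consolidates them, assuming the shared storage is mounted under /mnt/shp/home as in the earlier example:

```ini
# splunk_app_netapp/local/indexes.conf
# Point the TSIDX stats home at the shared storage location (step 1).
[default]
tsidxStatsHomePath = /mnt/shp/home

# SA-Utils/default/macros.conf
# Set the tstats_local macro definition to true (step 2).
[tstats_local]
definition = true
```

Restart Splunk (or reload the app configuration) on the affected search heads so that the new settings take effect.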

When you have installed and configured the app in your environment, you can log in to Splunk Web on any of the search heads that are part of the pooled environment to view the Splunk App for NetApp Data ONTAP dashboards and use the app.

For more information about tsidx namespaces in NetApp Data ONTAP, see Considerations when using tsidx namespaces.

Managing manual configuration file changes

With search head pooling enabled, when you manually edit a local configuration file on a search head, notify the search heads of the change, as described in "Manage configuration changes" in the Splunk Enterprise Distributed Search manual.

When you make configuration changes to local files using Splunk Web or the Splunk CLI, Splunk automatically handles the updates.

Search head pooling with real-time searches turned on

Using search head pooling with real-time searches can cause significant performance degradation. Avoid real-time searches in a search head pooled environment.

Last modified on 21 August, 2015

This documentation applies to the following versions of Splunk® App for NetApp Data ONTAP (Legacy): 2.0, 2.0.1, 2.0.2, 2.0.3

