Splunk® App for NetApp Data ONTAP (Legacy)

Deploy and Use the Splunk App for NetApp Data ONTAP

This documentation does not apply to the most recent version of Splunk® App for NetApp Data ONTAP (Legacy). For documentation on the most recent version, go to the latest release.

Install Splunk App for NetApp Data ONTAP

Download Splunk App for NetApp Data ONTAP

  1. Download the Splunk App for NetApp Data ONTAP from Splunkbase.
  2. Check that the downloaded package file name is splunk-app-for-netapp-data-ontap_<number>.tgz. The package contains the app and all of its supporting add-ons and technology add-ons.

Install Splunk App for NetApp Data ONTAP

Distributed installation

Use the tables below to determine where and how to install the Splunk App for NetApp Data ONTAP in a distributed deployment of Splunk Enterprise.

Splunk instance type | Supported | Required | Comments
Search Heads | Yes | Yes | Install this app on all search heads where NetApp knowledge management is required.
Indexers | Yes | Yes | Required to index the NetApp data that the app collects.
Heavy Forwarders | Yes | Yes | Required if you use a heavy forwarder rather than a light or universal forwarder to monitor NetApp syslog output.
Universal Forwarders | Yes | Yes | Required if you use a universal forwarder rather than a light or heavy forwarder to monitor NetApp syslog output.
Light Forwarders | Yes | Yes | Required if you use a light forwarder rather than a universal or heavy forwarder to monitor NetApp syslog output.

Distributed deployment compatibility

This table provides a quick reference for the compatibility of the Splunk App for NetApp Data ONTAP with Splunk distributed deployment features.

Distributed deployment feature | Supported | Comments
Search Head Clusters | Yes | Learn about NetApp cluster configuration.
Indexer Clusters | Yes | Use the cluster master to deploy your technology add-on packages.
Deployment Server | Yes | Learn about using the deployment server.

Single instance deployment

A single-instance deployment of Splunk Enterprise contains indexers and search heads on a single host.

  1. Move the splunk-app-for-netapp-data-ontap_<number>.tgz file to $SPLUNK_HOME.
  2. Extract the app package.
    tar -xvf splunk-app-for-netapp-data-ontap_<number>.tgz
  3. Verify that all of the apps and subdirectories exist in the $SPLUNK_HOME/etc/apps folder.
  4. Restart your instance of Splunk Enterprise. See "Start and stop Splunk" in the Splunk Admin Manual.
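After step 3, the extraction can be verified with a short shell check. This is a sketch: the directory names come from the steps in this topic, and /opt/splunk is an assumed default location for $SPLUNK_HOME.

```shell
# Sketch: check that the expected app directories exist after extraction.
# /opt/splunk is an assumed default; set SPLUNK_HOME for your environment.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
expected="SA-Hydra SA-Utils splunk_app_netapp Splunk_TA_ontap"
missing=""
for app in $expected; do
  [ -d "$SPLUNK_HOME/etc/apps/$app" ] || missing="$missing $app"
done
if [ -z "$missing" ]; then
  echo "All expected app directories are present."
else
  echo "Missing app directories:$missing"
fi
```

If any directory is reported missing, re-extract the package in $SPLUNK_HOME before restarting Splunk Enterprise.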

Distributed deployment

For larger environments where data originates on many machines and where many users need to search the data, you can separate out the functions of indexing and searching. In this type of distributed search deployment, each indexer indexes data and performs searches across its own indexes. A Splunk Enterprise instance dedicated to search management, called the search head, coordinates searches across the set of indexers, consolidating the results and presenting them to the user. For more information about distributed search, see About distributed search in the Distributed search manual.

In a distributed search environment:

  1. On the search head, get the file splunk-app-for-netapp-data-ontap_<number>.tgz and put it in $SPLUNK_HOME.
  2. In $SPLUNK_HOME extract the app package.
    tar -xvf splunk-app-for-netapp-data-ontap_<number>.tgz
  3. Verify that all of the apps and subdirectories were copied correctly and reside in $SPLUNK_HOME/etc/apps:
    SA-Hydra/…
    SA-Utils/…
    splunk_app_netapp/…
    Splunk_TA_ontap/…
  4. On each search peer, install the following app components:
    SA-Utils/…
    SA-Hydra/…
    Splunk_TA_ontap/…
  5. Restart Splunk in each of the locations where you installed the app. For both Windows and Unix instructions, see "Start and stop Splunk" in the Splunk Admin Manual.

Search head cluster environment

Version 2.1 of the Splunk App for NetApp Data ONTAP supports search head clustering environments. Perform the steps in this topic to set up NetApp Data ONTAP in a search head cluster (SHC) deployment. This configuration improves the overall performance of the Splunk App for NetApp Data ONTAP.

For an overview of search head clustering, see "Search head clustering architecture" in the Splunk Enterprise Distributed Search Manual.

Prerequisites

  • For search head clustering, you need a minimum of three Splunk Enterprise instances to serve as search head cluster members, plus one additional instance that serves as a deployer, which you use to distribute apps and updated configurations to the cluster members.
  • The data collection node (DCN) scheduler must be deployed on a dedicated search head, and not on any individual search head in the SHC.
  • Each search head cluster member must be a fresh installation of Splunk Enterprise, not a repurposed Splunk instance.
  • You have migrated your settings from a Search Head Pool to a Search Head cluster. For more information, see Migrate from a search head pool to a search head cluster in the "Splunk Enterprise Distributed Search Manual".
  • You have a licensed version of Splunk Enterprise installed and running in your environment.
  • You have access to the Splunk App for NetApp Data ONTAP and permission to install it.

Install your search head cluster

Search head clustering is supported by version 2.1.0 of the Splunk App for NetApp Data ONTAP.

See Deploy a search head cluster in the Splunk Enterprise Distributed Search Manual for more information on how to install, configure and deploy a search head cluster.

Install and Deploy the Splunk App for NetApp Data ONTAP in your search head cluster

Follow the steps below to download, install, and deploy the Splunk App for NetApp Data ONTAP.

You must use the search head cluster deployer to distribute your configurations across your set of search head cluster members. See "Use the deployer to distribute apps and configuration updates" in the Splunk Enterprise Distributed Search Manual.

  1. Take the file splunk-app-for-netapp-data-ontap_<number>.tgz that you downloaded from Splunkbase and put it in a temporary directory. This avoids overwriting critical files.
    cp splunk-app-for-netapp-data-ontap_<number>.tgz /tmp
  2. Change to the /tmp directory, and extract the app package.
    cd /tmp
    tar -xvf splunk-app-for-netapp-data-ontap_<number>.tgz
  3. Copy the extracted app directories into the apps folder under the shcluster folder on your deployer.
    cp -r etc/apps/* $SPLUNK_HOME/etc/shcluster/apps/
  4. Verify that all of the apps and subdirectories were copied correctly and reside in the $SPLUNK_HOME/etc/shcluster/apps folder.
    • SA-Hydra/…
    • SA-Utils/…
    • splunk_app_netapp/…
    • Splunk_TA_ontap/…
  5. On your deployer, deploy the NetApp Data ONTAP app to the cluster. Target the management URI of any one member of your SHC; the deployer distributes the bundle to all members.
    ./splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>
  6. Restart Splunk in each of the locations where you installed the app. For both Windows and Unix instructions, see "Start and stop Splunk" in the Splunk Admin Manual.

Configure user roles

On the search head (or the combined indexer and search head) configure roles for the users of the app. This is standard Splunk user role configuration. There are two default user roles defined in the Splunk App for NetApp Data ONTAP:

  • The splunk_ontap_admin role: This role gives you permission to configure the Splunk App for NetApp Data ONTAP for data collection.
  • The splunk_ontap_user role: This role gives you permission to use the app. It does not give you permission to configure the app.

To assign roles to each user:

  1. Log in to Splunk Web on the search head, using the IP address and port number of the host running your search head:
    https://<ipaddress>:8000/
    Note that after deploying the app on your search head, use https rather than http, as you are now establishing a secure connection.
  2. Select the Splunk App for NetApp Data ONTAP from the Apps menu. If this is your first time installing the app, then you are automatically redirected to the Setup page. Accept all of the default settings on the Setup screen, then click Save. For most installations the default settings work.
  3. In Settings, select Users and authentication: Access controls, then select Users.
  4. Add the splunk_ontap_admin role to the "admin" account so that the admin user can run scheduled searches.
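As an alternative to Splunk Web, the role assignment can also be made with the `splunk edit user` CLI command. This is a sketch; the password is a placeholder, and you should verify the flag syntax against the CLI help for your Splunk version.

```
./splunk edit user admin -role admin -role splunk_ontap_admin -auth admin:<password>
```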

Configure receiving on your Indexers

After the app is installed, configure each of your Splunk indexers to listen for data on a receiving port. By convention, receivers listen on port 9997, but you can specify any unused port. For more information, see "Set up receiving" in the Splunk Forwarding Data manual.
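As a sketch, receiving can be enabled on each indexer with an inputs.conf stanza. The file location shown is one common choice, not a requirement, and port 9997 is the conventional default mentioned above.

```
# $SPLUNK_HOME/etc/system/local/inputs.conf on each indexer
[splunktcp://9997]
disabled = 0
```

Restart the indexer after making this change.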

Create a data collection node

You must have at least one data collection node installed and running in your environment to collect ONTAP API data. You can build the data collection node on a physical machine or as a VM image, and configure it specifically for your environment.

Install a Splunk heavy forwarder or light forwarder, version 5.0.4 through 6.4.3, on the host that will be your data collection node. You cannot use a Splunk universal forwarder because Python is required. A data collection node requires a version of CentOS or Red Hat Enterprise Linux (RHEL) that is supported by Splunk versions 5.0.4 through 6.4.3. In search head cluster environments, data collection nodes must still be dedicated to a separate search head for scheduling.

Follow the steps below whether you are building a physical data collection node or a data collection node VM. To build a data collection node VM, we recommend that you follow the guidelines set by your specific virtualization solution to create the virtual machine and deploy it in your environment.


To build a data collection node:

  1. Install a CentOS or RedHat Enterprise Linux version that is supported by Splunk version 5.0.4 through 6.4.3.
    1. For system compatibility information, see Splunk data collection node resource requirements in this manual.
  2. Install Splunk version 5.0.4 through 6.4.3, configured at a minimum as a light forwarder (Python is required). Note: you cannot use a Splunk universal forwarder.
  3. Install the app components. Get the file splunk-app-for-netapp-data-ontap_<number>.tgz and put it in $SPLUNK_HOME.
  4. Extract this file. It automatically extracts into the $SPLUNK_HOME/etc/apps directory.
  5. On the data collection node, you need only the following components in $SPLUNK_HOME/etc/apps: SA-Utils, SA-Hydra, and Splunk_TA_ontap. Do not install splunk_app_netapp on a data collection node.
  6. Check that firewall ports are enabled. By default, the data collection node communicates with splunkd on port 8089 and with the scheduling node on port 8008. For more information on configuring firewall ports, see Network settings in this manual.
  7. Set up forwarding to the port on which the Splunk indexer(s) is configured to receive data. See Enable a receiver in the Forwarding Data manual.
  8. Change the default password for this forwarder using the CLI. The default password for Splunk's admin user is changeme; be sure to change it to something else.
    ./splunk edit user admin -password 'newpassword' -role admin -auth admin:changeme
  9. Restart Splunk.
  10. After deploying the collection components, add the forwarder to your scheduler's configuration. To do this, see Collect data from your environment in this manual.
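For step 7, forwarding from the data collection node can be sketched with an outputs.conf stanza. The indexer address and group name below are placeholders for illustration; substitute your indexer's host and receiving port.

```
# outputs.conf on the data collection node.
# 10.0.0.5:9997 is a hypothetical indexer host and receiving port.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.5:9997
```

To forward to multiple indexers, list them comma-separated in the server setting.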

Turn on logging on the data collection node

To assist in troubleshooting data collection issues, we recommend that you turn on logging on the data collection node when you create the node. Note that the data collected counts against your Splunk license.

On your data collection node:

  1. Create a local directory under SA-Hydra (SA-Hydra/local).
  2. Copy the outputs.conf file from SA-Hydra/default/outputs.conf to SA-Hydra/local/outputs.conf.
  3. Edit the local outputs.conf file to uncomment the following lines:
    [tcpout]
    forwardedindex.3.whitelist = _internal
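The steps above can be sketched as shell commands. The mktemp fallback and the stub file exist only so the sketch is self-contained; on a real node, SPLUNK_HOME points at your Splunk installation and default/outputs.conf ships with SA-Hydra.

```shell
# Create SA-Hydra/local and copy outputs.conf into it.
SPLUNK_HOME=${SPLUNK_HOME:-$(mktemp -d)}   # real nodes: your Splunk install
APP_DIR="$SPLUNK_HOME/etc/apps/SA-Hydra"
mkdir -p "$APP_DIR/default" "$APP_DIR/local"
# Stub the shipped file if absent so the sketch runs standalone:
[ -f "$APP_DIR/default/outputs.conf" ] || \
  printf '[tcpout]\nforwardedindex.3.whitelist = _internal\n' \
    > "$APP_DIR/default/outputs.conf"
cp "$APP_DIR/default/outputs.conf" "$APP_DIR/local/outputs.conf"
echo "Copied outputs.conf to $APP_DIR/local"
```

After copying, edit SA-Hydra/local/outputs.conf to uncomment the lines shown in step 3, then restart Splunk on the node.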

Configure Operating System properties

You can configure some operating system properties to improve the stability of your data collection nodes in a production environment.

Set static IP addresses

While not required, we recommend that you set a static IP address for the data collection node. With DHCP (dynamic addressing), the node's IP address can change over time, causing unexpected results and making it difficult to connect to a specific collection node (especially if DNS is down) to perform maintenance or to determine which node is sending data.
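As a sketch, on the CentOS and RHEL versions this app targets, a static address is typically set in the interface's ifcfg file. The device name and addresses below are placeholders for illustration.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (hypothetical values)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```

Restart the network service for the change to take effect.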

We recommend that you log in as the splunkadmin user to make changes to the data collection node.

Change the NTP server pool list

The Network Time Protocol (NTP) is used to synchronize a computer's time with a reference time source. Most *nix systems let you set up or change time synchronization. You can change the NTP servers that your data collection node uses by editing the /etc/ntp.conf file.

The default values for the servers in /etc/ntp.conf are:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org

To use different NTP servers, replace the default values in the file with your specific values. Restart ntpd for the changes to take effect.

sudo service ntpd restart

Disable NTP on the data collection node

If you do not have access to the internet (for example, you operate behind a firewall that blocks internet access), you can disable NTP on the data collection node.
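On the CentOS and RHEL 6-era systems this app supports, NTP can be disabled by stopping the ntpd service and removing it from startup. These are standard SysV-style commands, shown as a sketch; systemd-based systems use systemctl instead.

```
sudo service ntpd stop
sudo chkconfig ntpd off
```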

Last modified on 21 December, 2016

This documentation applies to the following versions of Splunk® App for NetApp Data ONTAP (Legacy): 2.1.0, 2.1.1, 2.1.2, 2.1.3

