Splunk® Supported Add-ons

Splunk Add-on for NetApp Data ONTAP

Install

Download the Splunk Add-on for NetApp Data ONTAP from Splunkbase and verify that the download package file name is splunk_add_on_for_netapp-<number>.tgz. Install this add-on from the command line. Installing the Splunk Add-on for NetApp Data ONTAP through Splunk Web is not supported.

Install the Splunk Add-on for NetApp Data ONTAP for use with the Splunk App for NetApp Data ONTAP and the Storage Module for Splunk IT Service Intelligence. See the installation sections for the Splunk App for NetApp Data ONTAP and Splunk IT Service Intelligence for more information.

Distributed installation

Use the tables below to determine where and how to install the Splunk Add-on for NetApp Data ONTAP in a distributed deployment of Splunk Enterprise.

Splunk instance type Supported Comments
Search Heads Yes Install the Splunk Add-on for NetApp Data ONTAP package on all search heads where NetApp knowledge management is required. You do not need to install components Splunk_TA_ontap and SA-ONTAPIndex on search heads.
Indexers Yes Install only SA-ONTAPIndex to store indexed data.
Forwarders Yes Required if you use a heavy, universal, or light forwarder to monitor NetApp syslog output.

Distributed deployment compatibility

This table provides a quick reference for the compatibility of the Splunk Add-on for NetApp Data ONTAP with Splunk distributed deployment features.

Distributed deployment feature Supported Comments
Search Head Clusters Yes Install the Splunk Add-on for NetApp Data ONTAP package on your deployer. You do not need to install components Splunk_TA_ontap and SA-ONTAPIndex on your deployer.
Indexer Clusters Yes Install the component SA-ONTAPIndex from the Splunk Add-on for NetApp Data ONTAP package onto your cluster master to deploy the Splunk Add-on for NetApp Data ONTAP packages.
Deployment Server Yes If you use a deployment server, install the Splunk Add-on for NetApp Data ONTAP onto your deployment servers.

Single instance deployment

A single-instance deployment of the Splunk platform contains indexers and search heads on a single host.

1. Move the splunk_add_on_for_netapp-<number>.tgz file to $SPLUNK_HOME/etc/apps.

2. Extract the app package.

tar -xvf splunk_add_on_for_netapp-<number>.tgz

3. Verify that all of the installation components exist in the $SPLUNK_HOME/etc/apps folder.

4. Restart your Splunk platform instance.
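Steps 1 through 3 above can be sketched as a small shell helper. The function name is illustrative (not part of the add-on), and the component directory names are the ones this topic mentions; the $SPLUNK_HOME/etc/apps destination comes from the steps above. Restarting Splunk (step 4) is left to the operator.

```shell
# Hypothetical helper: stage the add-on package in $SPLUNK_HOME/etc/apps
# and extract it there, then verify the expected components exist.
install_ontap_addon_single() {
    pkg=$1                      # path to splunk_add_on_for_netapp-<number>.tgz
    splunk_home=$2              # e.g. /opt/splunk
    apps="$splunk_home/etc/apps"

    # Step 1: move the package into the apps directory.
    cp "$pkg" "$apps/" || return 1

    # Step 2: extract in place; the archive unpacks its component
    # directories directly into the apps folder.
    ( cd "$apps" && tar -xf "$(basename "$pkg")" ) || return 1

    # Step 3: verify the components named in this topic are present.
    for c in SA-Hydra SA-VMNetAppUtils Splunk_TA_ontap SA-ONTAPIndex; do
        [ -d "$apps/$c" ] || { echo "missing component: $c" >&2; return 1; }
    done
}
```

After the helper succeeds, restart the Splunk platform instance as described in step 4.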

Distributed deployment

For larger environments where data originates on many machines and where many users need to search the data, you can separate out the functions of indexing and searching. In this type of distributed search deployment, each indexer indexes data and performs searches across its own indexes. A Splunk Enterprise instance dedicated to search management, called the search head, coordinates searches across the set of indexers, consolidating the results and presenting them to the user. For more information about distributed search, see About distributed search in the Distributed search manual.

In a distributed search environment:

  1. Get the file splunk_add_on_for_netapp-<number>.tgz and put it in $SPLUNK_HOME/etc/apps on your search head.
  2. In $SPLUNK_HOME/etc/apps extract the app package.
    tar -xvf splunk_add_on_for_netapp-<number>.tgz
  3. Remove Splunk_TA_ontap from $SPLUNK_HOME/etc/apps on your search head.
  4. Keep SA-ONTAPIndex in $SPLUNK_HOME/etc/apps on your Indexer.
  5. Keep SA-VMNetAppUtils, SA-Hydra and Splunk_TA_ontap in $SPLUNK_HOME/etc/apps on your DCN.
  6. Verify that the remaining components were copied correctly and reside in $SPLUNK_HOME/etc/apps.
  7. Restart Splunk in each of the locations where you installed the app. For both Windows and Unix instructions, see Start and stop your Splunk platform instance.
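The placement rules in steps 3 through 5 can be expressed as a small helper that prunes an extracted package per host role. The function name and role labels are illustrative; the component names and per-role rules come from the steps and tables above.

```shell
# Hypothetical helper: given a host role and an apps directory that already
# contains the extracted add-on, remove the components that role does not
# need, following steps 3-5 above.
prune_ontap_components() {
    role=$1; apps=$2
    case "$role" in
        search_head)
            # Search heads do not need the data-collection or index components.
            rm -rf "$apps/Splunk_TA_ontap" "$apps/SA-ONTAPIndex" ;;
        indexer)
            # Indexers only need SA-ONTAPIndex.
            for d in "$apps"/*; do
                [ "$(basename "$d")" = "SA-ONTAPIndex" ] || rm -rf "$d"
            done ;;
        dcn)
            # Data collection nodes keep SA-VMNetAppUtils, SA-Hydra,
            # and Splunk_TA_ontap.
            rm -rf "$apps/SA-ONTAPIndex" ;;
        *)
            echo "unknown role: $role" >&2; return 1 ;;
    esac
}
```

Run the appropriate call on each host (for example, `prune_ontap_components search_head "$SPLUNK_HOME/etc/apps"`), then restart Splunk as in step 7.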

Search head cluster environment

Versions 2.1.5 and later of the Splunk Add-on for NetApp Data ONTAP support search head clustering environments. Perform the steps in this topic to set up the add-on in a search head cluster (SHC) deployment. This configuration improves the overall performance of the Splunk Add-on for NetApp Data ONTAP.

For an overview of search head clustering, see "Search head clustering architecture" in the Splunk Enterprise Distributed Search Manual.

Prerequisites

  • For search head clustering, you need a minimum of three Splunk Enterprise instances to serve as search head cluster members, plus one additional instance that serves as the deployer, which you use to distribute apps and updated configurations to the cluster members.
  • The scheduler must be deployed on a dedicated search head, not on any member of the SHC.
  • Each search head cluster member should be a fresh installation of Splunk Enterprise, not a re-purposed Splunk instance.
  • If you previously used a search head pool, you have migrated your settings to a search head cluster. For more information, see Migrate from a search head pool to a search head cluster in the Splunk Enterprise Distributed Search Manual.
  • You have a licensed version of Splunk Enterprise installed and running in your environment.

Install your search head cluster

Search head clustering is supported by version 2.1.5 and later of the Splunk Add-on for NetApp Data ONTAP.

See Deploy a search head cluster in the Splunk Enterprise Distributed Search Manual for more information on how to install, configure and deploy a search head cluster.

Install and Deploy the Splunk Add-on for NetApp Data ONTAP in your search head cluster

Follow the steps below to download, install, and deploy the Splunk Add-on for NetApp Data ONTAP.

You must use the search head cluster deployer to distribute your configurations across your set of search head cluster members. See "Use the deployer to distribute apps and configuration updates" in the Splunk Enterprise Distributed Search Manual.

  1. Take the file splunk_add_on_for_netapp-<number>.tgz that you downloaded from Splunkbase and put it in a temporary directory. This avoids overwriting critical files.
    cp splunk_add_on_for_netapp-<number>.tgz /tmp
  2. Change to the /tmp directory, and extract the app package.
    cd /tmp
    tar -xvf splunk_add_on_for_netapp-<number>.tgz
  3. Copy the splunk_app_netapp, SA-Hydra, SA-VMNetAppUtils, and TA-ONTAP-FieldExtractions directories into the apps folder inside the shcluster folder on your deployer.
    cp -r /tmp/* $SPLUNK_HOME/etc/shcluster/apps/
  4. Verify that all the required components were copied correctly and reside in the $SPLUNK_HOME/etc/shcluster/apps folder.
  5. On your deployer, push the add-on to your search head cluster members. You can target any member of the SHC.
    ./splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>
  6. Restart Splunk in each of the locations where you installed the app. For both Windows and Unix instructions, see "Start and stop Splunk" in the Splunk Admin Manual.
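Steps 1 through 4 above can be sketched as one shell helper. The function name is illustrative; the temporary-directory staging and the shcluster/apps destination come from the steps above. The `splunk apply shcluster-bundle` push (step 5) and the restarts (step 6) are left to the operator.

```shell
# Hypothetical helper: stage the downloaded package in a temporary
# directory, extract it, and copy the extracted components into the
# deployer's shcluster/apps folder.
stage_ontap_shc_bundle() {
    pkg=$1                          # splunk_add_on_for_netapp-<number>.tgz
    splunk_home=$2                  # deployer's $SPLUNK_HOME
    stage=$(mktemp -d)              # temporary dir; avoids overwriting files

    cp "$pkg" "$stage/" || return 1
    # Extract, then drop the archive so only components are copied on.
    ( cd "$stage" && tar -xf "$(basename "$pkg")" && rm -f "$(basename "$pkg")" ) || return 1

    mkdir -p "$splunk_home/etc/shcluster/apps"
    cp -r "$stage/." "$splunk_home/etc/shcluster/apps/" || return 1
}
```

After staging, run `./splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>` on the deployer as shown in step 5.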

Install and configure data collection nodes

You must have at least one data collection node installed and running in your environment to collect ONTAP API data. You can build a data collection node and configure it as a physical machine or as a VM image to deploy specifically for your environment.

Install a Splunk heavy forwarder or light forwarder, version 7.1.0 to 7.3.0, on the host that will be your data collection node. You cannot use a Splunk universal forwarder because the data collection node requires Python. This is a minimum Splunk platform requirement for the Splunk App for NetApp Data ONTAP. The data collection node also requires a version of CentOS or Red Hat Enterprise Linux (RHEL) that is supported by your Splunk platform version. For search head cluster environments, data collection nodes must still be dedicated to a separate search head for scheduling.

Follow the steps below to build a physical data collection node or a VM data collection node. To build a data collection node VM, follow the guidelines set by your specific virtualization solution to create the virtual machine and deploy it in your environment.

Build a data collection node:

  1. Install a CentOS or RedHat Enterprise Linux version that is supported by Splunk version 7.1.0 to 7.3.0.
    1. For system compatibility information, see Splunk data collection node resource requirements in this manual.
  2. Install Splunk version 7.1.0 to 7.3.0 configured as a light or heavy forwarder (Python is required). Note: you cannot use a Splunk universal forwarder.
  3. Install the app components. Get the file splunk_add_on_for_netapp-<number>.tgz and put it in $SPLUNK_HOME/etc/apps.
  4. Extract the file. The components extract into the $SPLUNK_HOME/etc/apps directory.
  5. On the data collection node, you need the following components in $SPLUNK_HOME/etc/apps: SA-VMNetAppUtils, SA-Hydra, and Splunk_TA_ontap. Do not install splunk_app_netapp on a data collection node.
  6. Check that firewall ports are enabled. By default, the data collection node communicates with splunkd on port 8089 and with the scheduling node on port 8008. For more information on configuring firewall ports, see Network settings in this manual.
  7. Set up forwarding to the port on which the Splunk indexer(s) is configured to receive data. See Enable a receiver in the Forwarding Data manual.
  8. Change the default password for this forwarder using the CLI. The default password for Splunk's admin user is changeme; change it to something else.
    ./splunk edit user admin -password 'newpassword' -role admin -auth admin:changeme
  9. Restart Splunk.
  10. After deploying the collection components, add the forwarder to your scheduler's configuration. To do this, see Collect data from your environment in this manual.
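The component layout in step 5 can be verified with a small sanity-check helper. The function name is illustrative; the component names and the rule against installing splunk_app_netapp on a data collection node come from the steps above.

```shell
# Hypothetical sanity check for step 5: confirm a data collection node
# has exactly the components it needs in $SPLUNK_HOME/etc/apps.
check_dcn_components() {
    apps=$1
    for c in SA-VMNetAppUtils SA-Hydra Splunk_TA_ontap; do
        [ -d "$apps/$c" ] || { echo "missing component: $c" >&2; return 1; }
    done
    # splunk_app_netapp must not be installed on a data collection node.
    if [ -d "$apps/splunk_app_netapp" ]; then
        echo "splunk_app_netapp does not belong on a DCN" >&2
        return 1
    fi
}
```

For example, run `check_dcn_components "$SPLUNK_HOME/etc/apps"` before restarting the forwarder in step 9.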

Set static IP addresses

While not required, setting a static IP address for the data collection node is recommended. With DHCP (dynamic addressing), the data collection node's IP address can change over time, causing unexpected results and making it difficult to connect to a specific collection node (especially if DNS is down), for example to perform maintenance or to determine which collection node is sending data.
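On the CentOS/RHEL systems that data collection nodes run on, one common way to set a static address is through the interface's ifcfg file. The interface name, addresses, and gateway below are placeholder example values, not values from this manual.

```shell
# Example /etc/sysconfig/network-scripts/ifcfg-eth0 (placeholder values)
DEVICE=eth0
BOOTPROTO=static        # was: dhcp
ONBOOT=yes
IPADDR=192.168.1.50     # placeholder static address for the collection node
NETMASK=255.255.255.0
GATEWAY=192.168.1.1     # placeholder gateway
```

Restart networking (for example, `sudo service network restart`) for the change to take effect.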

Change the NTP server pool list

The Network Time Protocol (NTP) is used to synchronize a computer's time with another reference time source. Most *nix systems let you set up or change time synchronization. You can change the NTP servers that your data collection node uses by editing the /etc/ntp.conf file.

The default values for the servers in /etc/ntp.conf are:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org

To use different NTP servers, replace the default values in the file with your specific values. Restart ntpd for the changes to take effect.

sudo service ntpd restart

Disable NTP on the data collection node

If you do not have access to the Internet (for example, you operate behind a firewall that precludes Internet access), you can disable NTP on the data collection node.

Upgrade from Splunk App for NetApp Data ONTAP version 2.1.4 or earlier

To upgrade your deployment from version 2.1.4 or earlier of the Splunk App for NetApp Data ONTAP, see the Upgrade to Splunk App for NetApp Data ONTAP 2.1.5 section of the Splunk App for NetApp Data ONTAP manual.
