Splunk® App for NetApp Data ONTAP (Legacy)

Deploy and Use the Splunk App for NetApp Data ONTAP


On June 10, 2021, the Splunk App for NetApp Data ONTAP will reach its end of life and Splunk will no longer maintain or develop this product.

Other deployment considerations

To deploy the Splunk App for NetApp Data ONTAP, deploy the app components on a network that has access to the storage assets (ONTAP servers) you want Splunk to query.

  • On your indexer(s)/search head(s), check that you have Splunk version 6.3.0 or later installed and that your licensing volume can support the data volume that you are collecting. See Splunk App for NetApp Data ONTAP indexing data volumes.
  • Know your administration credentials for Splunk (search head and indexers).

Validate your NetApp httpd protocol configuration requirements

Your NetApp® Data ONTAP® software must be installed and configured correctly before you install and configure the Splunk App for NetApp Data ONTAP in your environment. When NetApp Data ONTAP is installed, check that the HTTPD service is running on the storage controllers. The Splunk App for NetApp Data ONTAP requires this service for API access to the NetApp filers so that it can collect performance data.

If you have not configured your filers using the correct options, then the connection from the app will be rejected by the API. Set the following options on your NetApp filers:

options httpd.enable on
options httpd.admin.enable on

You can use tools such as the ZExplore Development Interface (ZEDI) to validate the configuration for the Splunk App for NetApp Data ONTAP. If you can collect data successfully using ZEDI, then your filer is configured correctly for the app to collect data from it.
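The same API access that ZEDI exercises can also be checked with a short script. The sketch below builds a minimal ONTAPI (ZAPI) system-get-version request; the XML envelope and servlet path follow the documented ONTAPI conventions, but the filer hostname, credentials, and ONTAPI version shown are placeholders you would replace for your environment.

```python
# Minimal ONTAPI (ZAPI) connectivity sketch. The filer address and
# credentials in the commented section are placeholders, not real values.

def build_zapi_request(api_name: str, version: str = "1.15") -> str:
    """Build the XML envelope for a single no-argument ONTAPI call."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<netapp xmlns="http://www.netapp.com/filer/admin" version="{v}">'
        '<{api}/></netapp>'.format(v=version, api=api_name)
    )

if __name__ == "__main__":
    # Sending the request requires network access to the filer, e.g.:
    # import requests
    # resp = requests.post(
    #     "https://filer.example.com/servlets/netapp.servlets.admin.XMLrequest_filer",
    #     data=build_zapi_request("system-get-version"),
    #     auth=("splunk_ro_user", "password"),
    #     headers={"Content-Type": "text/xml"},
    #     verify=False,  # only if the filer uses a self-signed certificate
    # )
    print(build_zapi_request("system-get-version"))
```

If the filer rejects this request, re-check the httpd.enable and httpd.admin.enable options above before troubleshooting the app itself.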

For more information about installing and configuring NetApp Data ONTAP, see the NetApp online documentation.

Configure clock and timezone settings for your Splunk platform and your ONTAP servers

Ensure that the clock and timezone settings for your Splunk platform environment and your ONTAP servers agree, so that events are timestamped accurately.

In your Splunk platform, time offsets can cause indexing issues with defined data types. This is specifically true in the Splunk App for NetApp Data ONTAP for performance searches that use report acceleration. If the timezone information is not set correctly, your Splunk platform may apply an incorrect timestamp and potentially exclude events from indexing. A light forwarder (LF) or universal forwarder (UF) does not parse events to get a timestamp.

As a NetApp administrator, use NTP on your filers to keep their clocks synchronized, and check that the timezone settings on your ONTAP servers match the timezone information on your Splunk indexer(s).
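The effect of a timezone mismatch is easy to see with a quick sketch: the same wall-clock timestamp interpreted under two different UTC offsets yields epoch times hours apart, which is exactly the skew that can push events outside a search or acceleration window. The timestamp and offsets below are arbitrary examples.

```python
from datetime import datetime, timezone, timedelta

# The same syslog-style timestamp interpreted under two different zones.
raw = "2018-10-10 12:00:00"
naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")

as_utc = naive.replace(tzinfo=timezone.utc)                      # indexer in UTC
as_est = naive.replace(tzinfo=timezone(timedelta(hours=-5)))     # filer set to UTC-5

# Identical wall-clock text, but the epoch times differ by the offset.
skew = as_est.timestamp() - as_utc.timestamp()
print(f"Indexed timestamps differ by {skew / 3600:.0f} hours")   # 5 hours
```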

Create a user account with the correct permissions on the NetApp filers

Before you install the Splunk App for NetApp Data ONTAP you must have the required access privileges to the storage assets from which you want to collect data.

The Splunk App for NetApp Data ONTAP relies on the NetApp API to collect data from your NetApp devices. To access the NetApp API on each device (for data collection) you need access privileges. The Splunk App for NetApp Data ONTAP needs only read-only access to the API. Note that providing the app with the appropriate permissions does not present any risk to your infrastructure.

To collect data from all inventory objects, both in Cluster mode and 7-mode, create a local user account or Active Directory domain user with the correct permissions on the NetApp filers. To create a local user account you must have the login* capability role. Without the login* capability, authentication with the filer will fail and you will be unable to retrieve any data.

A user is required for authentication and is assigned a role with the required capabilities assigned to it. You can manually create a user account by following the instructions in the NetApp documentation.

We recommend provisioning the user with the following capabilities used by the Splunk App for NetApp Data ONTAP to collect data using the NetApp API:

| Ontapi | 7-Mode support in Splunk_TA_ontap | Capability | Cluster-Mode support in Splunk_TA_ontap | Corresponding cluster command |
| --- | --- | --- | --- | --- |
| api-aggr-get-filer-info | x | aggr-get-filer-info | | |
| api-aggr-get-iter | | aggr-get-iter | x | storage aggregate show |
| api-aggr-get-root-name | x | aggr-get-root-name | | |
| api-aggr-list-info | x | aggr-list-info | | |
| api-aggr-mediascrub-list-info | x | aggr-mediascrub-list-info | | |
| api-aggr-options-list-info | x | aggr-options-list-info | x | storage aggregate show |
| api-aggr-scrub-list-info | x | aggr-scrub-list-info | | |
| api-aggr-space-list-info | x | aggr-space-list-info | | |
| api-cifs-options-get-iter | | cifs-options-get-iter | x | vserver cifs options show |
| api-cluster-identity-get | | cluster-identity-get | x | cluster identity show |
| api-cluster-node-get-iter | | cluster-node-get-iter | x | cluster show |
| api-disk-list-info | x | disk-list-info | | |
| api-ems-message-get-iter | | ems-message-get-iter | x | event log show |
| api-export-policy-get-iter | | export-policy-get-iter | x | vserver export-policy show |
| api-export-rule-get-iter | | export-rule-get-iter | x | vserver export-policy rule show |
| api-lun-get-iter | | lun-get-iter | x | lun show |
| api-lun-list-info | x | lun-list-info | | |
| api-nfs-exportfs-list-rules | x | nfs-exportfs-list-rules | | |
| api-options-get-iter | | options-get-iter | x | vserver options |
| api-options-list-info | x | options-list-info | | |
| api-perf-object-counter-list-info | x | perf-object-counter-list-info | x | statistics catalog counter show |
| api-perf-object-get-instances | | perf-object-get-instances | x | statistics show |
| api-perf-object-get-instances-iter-end | x | perf-object-get-instances-iter-end | | |
| api-perf-object-get-instances-iter-next | x | perf-object-get-instances-iter-next | | |
| api-perf-object-get-instances-iter-start | x | perf-object-get-instances-iter-start | | |
| api-perf-object-instance-list-info | x | perf-object-instance-list-info | | |
| api-perf-object-instance-list-info-iter | | perf-object-instance-list-info-iter | x | statistics catalog instance show |
| api-perf-object-list-info | x | perf-object-list-info | x | statistics catalog object show |
| api-qtree-list-iter | | qtree-list-iter | x | volume qtree show |
| api-qtree-list-iter-end | x | qtree-list-iter-end | | |
| api-qtree-list-iter-next | x | qtree-list-iter-next | | |
| api-qtree-list-iter-start | x | qtree-list-iter-start | | |
| api-quota-list-entries-iter | | quota-list-entries-iter | x | volume quota policy rule show |
| api-quota-report-iter | | quota-report-iter | x | volume quota report |
| api-quota-report-iter-end | x | quota-report-iter-end | | |
| api-quota-report-iter-next | x | quota-report-iter-next | | |
| api-quota-report-iter-start | x | quota-report-iter-start | | |
| api-quota-status | x | quota-status | | |
| api-quota-status-iter | | quota-status-iter | x | volume quota show |
| api-snapshot-get-iter | | snapshot-get-iter | x | volume snapshot show |
| api-snapshot-list-info | x | snapshot-list-info | | |
| api-storage-disk-get-iter | | storage-disk-get-iter | x | storage disk show |
| api-system-api-list | x | system-api-list | x | security login role show-ontapi |
| api-system-get-info | x | system-get-info | | |
| api-system-get-node-info-iter | | system-get-node-info-iter | x | system node show |
| api-system-get-ontapi-version | x | system-get-ontapi-version | x | version |
| api-system-get-vendor-info | x | system-get-vendor-info | x | system node autosupport show |
| api-system-get-version | x | system-get-version | x | version |
| api-system-node-get-iter | | system-node-get-iter | x | system node show |
| api-vfiler-get-status | x | vfiler-get-status | | |
| api-vfiler-list-info | x | vfiler-list-info | | |
| api-volume-footprint-get-iter | | volume-footprint-get-iter | x | volume show-footprint |
| api-volume-get-iter | | volume-get-iter | x | volume show |
| api-volume-list-info-iter-end | x | volume-list-info-iter-end | | |
| api-volume-list-info-iter-next | x | volume-list-info-iter-next | | |
| api-volume-list-info-iter-start | x | volume-list-info-iter-start | | |
| api-volume-mediascrub-list-info | x | volume-mediascrub-list-info | | |
| api-volume-move-get-iter | | volume-move-get-iter | x | volume move show |
| api-volume-options-list-info | x | volume-options-list-info | | |
| api-volume-scrub-list-info | x | volume-scrub-list-info | | |
| api-volume-space-get-iter | | volume-space-get-iter | x | volume show-space |
| api-volume-storage-service-get-iter | | volume-storage-service-get-iter | x | |
| api-vserver-get-iter | | vserver-get-iter | x | vserver show |

To validate that the credentials are correct, use the ZExplore Development Interface (ZEDI) to connect to the filer. The Splunk App for NetApp Data ONTAP also validates the credentials (but not every capability) when they are initially entered into the app.
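Since the app does not verify every capability, it can help to compare the ONTAPI calls a role actually allows against the calls the app needs. The sketch below is illustrative only: the required list is an abbreviated subset of the table above, and in practice the allowed list would come from the output of security login role show-ontapi (or system-api-list) for your role.

```python
# Hypothetical offline check: does a role cover the app's required ONTAPI calls?
REQUIRED_CALLS = {
    # Abbreviated subset of the capability table above.
    "system-get-version",
    "system-api-list",
    "perf-object-get-instances",
    "volume-get-iter",
}

def missing_capabilities(allowed_calls):
    """Return the required ONTAPI calls that the role does not allow."""
    return sorted(REQUIRED_CALLS - set(allowed_calls))

# Example: a role that is missing the performance data call.
role_allows = ["system-get-version", "system-api-list", "volume-get-iter"]
print(missing_capabilities(role_allows))  # ['perf-object-get-instances']
```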

App Configuration

This topic discusses the app components required to support your environment needs.

  • API data collection - We recommend a ratio of one data collection node per 15 to 50 filers, given the recommended resources. See Data volume requirements in this manual.
  • Syslog data collection - We recommend that you configure log collection on your NetApp filers and forward the log data using Syslog from your filers to a Splunk forwarder. Check that UDP port 514 is open on the Splunk forwarder to receive Syslog.
  • Splunk configuration - At expected data volumes for the Splunk App for NetApp Data ONTAP, configure your Splunk indexers appropriately. To do this, see the Splunk Enterprise documentation for "Introduction to capacity planning for Splunk Enterprise".
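On the forwarder that receives syslog, the UDP input is defined in inputs.conf. A minimal sketch follows; the sourcetype name is an example, not a value the app mandates:

```ini
[udp://514]
connection_host = ip
sourcetype = netapp:syslog
```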

For more information on performance requirements of the app and the data collection node, see the "Systems requirements" topic in this manual.

Network settings

Firewall ports must be enabled for communication between Splunk and various components of your Splunk App for NetApp Data ONTAP environment.

splunkweb and splunkd

splunkweb and splunkd both communicate with your Web browser via Representational State Transfer (REST):

  • splunkd runs a Web server on port 8089 with SSL/HTTPS turned on by default.
  • splunkweb runs a Web server on port 8000 without SSL/HTTPS by default.

When you start Splunk, it checks that firewall ports 8089 and 8000 are enabled. If the default ports are already in use (or are otherwise not available), Splunk offers to use the next available port. You can configure port settings for Splunk in the server.conf file.

Communication between the scheduler and the data collection node

The Splunk App for NetApp Data ONTAP uses the gateway, implemented as part of the Hydra scheduling framework, to allocate jobs to the data collection nodes. The scheduling node that runs the Hydra scheduler, typically on the search head, communicates with all data collection nodes over port 8008 (default setting).

In your environment, if port 8008 is used by another service, you can configure another port for communication between the data collection node and the gateway.

Data collection nodes do not all have to communicate on the same port. You can configure the port in the default stanza to implement the port change for all data collection nodes, or you can set the port on a per-stanza basis to configure each data collection node individually.

To set the port for the Hydra gateway, edit the configuration settings for the port on the scheduling node (usually implemented on the search head) in $SPLUNK_HOME/etc/apps/Splunk_TA_ontap/local/hydra_node.conf. The following is an example of the default setting for the app.

gateway_port = 8008
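To vary the port per data collection node, hydra_node.conf can carry per-node stanzas alongside the default stanza. The layout below is a sketch: the stanza names are hypothetical hostnames, and the exact stanza naming follows how your data collection nodes are registered in your deployment.

```ini
[default]
gateway_port = 8008

# Hypothetical per-node override for a DCN where 8008 is taken.
[dcn01.example.com]
gateway_port = 8010
```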


Storage considerations

As with all Splunk deployments, it is important to have sufficient disk space to accommodate the volume of data processed by your indexers. The Splunk App for NetApp Data ONTAP indexes approximately 300MB to 1GB of data per filer, per day, and supports a log volume of 100MB.
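Using the figures above, a rough sizing estimate is straightforward. The sketch below assumes a hypothetical fleet of 20 filers and treats the 100MB log volume as a daily figure; substitute your own filer count.

```python
# Rough daily indexing estimate from the figures quoted above.
PER_FILER_GB = (0.3, 1.0)   # approx. 300 MB to 1 GB per filer, per day
LOG_VOLUME_GB = 0.1         # plus ~100 MB of log data (assumed per day)

def daily_volume_gb(filers: int, per_filer_gb: float) -> float:
    """Estimated daily indexed volume in GB for a given filer count."""
    return filers * per_filer_gb + LOG_VOLUME_GB

low = daily_volume_gb(20, PER_FILER_GB[0])
high = daily_volume_gb(20, PER_FILER_GB[1])
print(f"20 filers: {low:.1f}-{high:.1f} GB/day")  # 6.1-20.1 GB/day
```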

For more information on what to consider regarding your data storage and data volume requirements using Splunk, see Estimate your storage requirements in the Splunk Capacity Planning Manual.


Licensing

You must have a Splunk Enterprise license and accept the End User License Agreement (EULA) presented for the Splunk App for NetApp Data ONTAP to work in your environment. Licensing requirements are driven by the volume of data your indexer processes.

Refer to the "Storage considerations" section above to determine your licensing volume. Contact your Splunk sales representative to purchase additional license volume or inquire about free trial licensing.

Refer to "How Splunk licensing works" in the Splunk Admin Manual for more information about Splunk licensing.

Backups and archiving

You can configure Splunk to back up both your indexed data and configuration data. You can configure Splunk to delete data based on either the size of the index or the age of data in the index. By default, Splunk deletes data when all of the events in a given index bucket are 6 years old or more.
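Retention is controlled in indexes.conf: frozenTimePeriodInSecs limits an index by age, and maxTotalDataSizeMB caps it by size. The fragment below is a sketch; the index name is an example, and the age shown is Splunk's documented 6-year default expressed in seconds.

```ini
[ontap]
# Freeze (delete, or archive if configured) buckets older than ~6 years.
frozenTimePeriodInSecs = 188697600
# Or cap the index by total size, in MB.
maxTotalDataSizeMB = 500000
```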

External lookup

Splunk Enterprise ships with a script located in $SPLUNK_HOME/etc/system/bin/ called external_lookup.py, which is a DNS lookup script that:

  • if given a host, returns the IP address.
  • if given an IP address, returns the host name.

The configuration for this script resides in $SPLUNK_HOME/etc/apps/splunk_app_netapp/default/transforms.conf.

external_cmd = external_lookup.py clienthost clientip 
fields_list = clienthost,clientip

The Splunk App for NetApp Data ONTAP uses this default external_lookup.py script to resolve hostnames in dashboard queries.
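External lookup scripts exchange CSV over stdin/stdout: Splunk writes rows with some fields filled, and the script fills in the rest. The sketch below mirrors that contract for the clienthost/clientip field pair; it is a simplified stand-in for illustration, not the external_lookup.py that ships with Splunk Enterprise.

```python
import csv
import socket
import sys

def fill_row(row):
    """Fill whichever of clienthost/clientip is missing, best effort."""
    try:
        if row.get("clienthost") and not row.get("clientip"):
            row["clientip"] = socket.gethostbyname(row["clienthost"])
        elif row.get("clientip") and not row.get("clienthost"):
            row["clienthost"] = socket.gethostbyaddr(row["clientip"])[0]
    except socket.error:
        pass  # leave the row unchanged if resolution fails
    return row

def main(infile=sys.stdin, outfile=sys.stdout):
    # Splunk sends a CSV header naming the lookup fields, then the rows.
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow(fill_row(row))

if __name__ == "__main__":
    main()
```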

Last modified on 10 October, 2018

This documentation applies to the following versions of Splunk® App for NetApp Data ONTAP (Legacy): 2.1.91
