Splunk® OVA for VMWare and NetApp

Splunk OVA for VMware

Install the Splunk OVA for VMware

Use the instructions below to install the Splunk OVA for VMware onto your Splunk platform deployment.

Data Collection Node resource requirements

DCNs communicate with the Collection Configuration page, which runs on the Splunk scheduler, to retrieve performance, inventory, hierarchy, task, and event data from vCenter servers.

  • Each Data Collection Node (DCN) needs at least one CPU core for every 10 hosts from which the DCN is collecting data.
  • When you estimate the number of CPUs needed for your worker processes, assume that a CPU in your deployment will eventually fail. Provision at least one extra CPU to maintain capacity and availability in your deployment.


Each DCN polls up to 70 ESXi hosts and 1,750 virtual machines. At this sizing, a site pulling data from 200 hypervisors and 5,000 VMs needs at least 3 DCNs.
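
The DCN count above can be checked with a quick back-of-the-envelope calculation. This is only a sketch using the example figures from this topic; the ceiling divisions mirror the 70-host and 1,750-VM per-DCN limits:

```shell
# Minimum DCN count: whichever limit (hosts or VMs) demands more nodes.
hosts=200
vms=5000
dcn_by_hosts=$(( (hosts + 69) / 70 ))     # ceil(200 / 70)
dcn_by_vms=$(( (vms + 1749) / 1750 ))     # ceil(5000 / 1750)
if [ "$dcn_by_hosts" -gt "$dcn_by_vms" ]; then
  echo "$dcn_by_hosts"
else
  echo "$dcn_by_vms"
fi
```

Both limits work out to 3 nodes for this example, so size for whichever dimension (hosts or VMs) grows faster in your environment.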

DCN virtual appliance sizing is as follows:

  • 8 CPU cores with a 2 GHz reservation
  • 12 GB memory with a 1 GB reservation
  • 12 GB storage

In a Search Head Clustering (SHC) deployment, do not deploy the DCN scheduler on any individual search head in the SHC. Deploy it on a separate, dedicated instance instead.

The Splunk Add-on for VMware does not support the scheduler and data collection node functions on Windows operating systems; Linux or UNIX is required. When you deploy the add-on into a Windows-based Splunk environment, deploy Linux-based virtual appliances from the Splunk-provided OVA image for both the scheduler and data collection node roles.

To ensure reliable communication between systems, use static IP addresses and dedicated host names for each DCN. See Collect Data from vCenter Server systems using the VMware API.
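
The OVA appliance is Linux-based, and on a CentOS/RHEL-style system a static address is typically set in the interface's ifcfg file. The following is a minimal sketch only; the eth0 interface name and all addresses are hypothetical placeholders for your own network:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values only)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.50
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
ONBOOT=yes
```

Alternatively, keep DHCP on the appliance and create a reservation at the DHCP server so the address never changes.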

Install the Splunk OVA for VMware in your virtual environment

  1. Open the vSphere client and log into vCenter Server.
  2. Invoke the OVA template wizard. Click File > Deploy OVF Template.
  3. In the Deploy OVF Template wizard click Deploy from a file or URL, then click Browse…
  4. Browse to the location of your OVA file, splunk_data_collection_node_for_vmware_<version>-<build_number>.ova, then click Next.
    1. Note: You cannot download the file directly from the URL. Splunk Apps requires that you be authenticated via a supported web browser before you begin your download.
  5. Review the OVF template details, then click Next.
  6. In the Name and Location screen provide a new name for the node VM. (You can use the default name, if you want.)
  7. Select a data center or folder as the deployment destination for the node VM, then click Next.
  8. On the Host / Cluster screen, select the specific host or cluster where you would like to run the node VM, then click Next.
  9. In the Datastore screen, choose the datastore where you want the VM and its file system to reside. The datastore can be from 4 GB to 10 GB. Click Next.
  10. On the Disk Format screen, select either Thin or Thick Provisioning, then click Next. We recommend thick provisioning.
  11. On the Network Mapping screen, specify the networks that you want the deployed template to use. Use the Destination Networks menu to map your data collection node .ova template to one of the networks in your inventory.
  12. Validate your selections in the Ready to Complete dialog, then click Finish to begin deployment.
  13. Once deployed, click Close to complete the installation and exit the wizard.
  14. Allocate resources to your VM according to the data collection node resource requirements listed above.
  15. Locate the collection node VM in the vSphere Client tree view.
  16. Right-click on the collection node VM and choose Power > Power On from the menu to start the VM. When you power on the data collection node, Splunk starts automatically even though the VMware data collection mechanism is not configured. By default, the node VM boots and gets its network settings via DHCP. You can keep this default setting or you can set a static IP address. If you use DHCP, check the Summary tab in the vSphere client to get the IP address of the node VM.
  17. To connect to the data collection node over SSH, use the default username and password (splunk/changeme). You automatically land in /home/splunk.
  18. Your Splunk platform is installed in /opt.
  19. Navigate to /opt/splunk/etc/apps/SA-Hydra/local and open outputs.conf.
  20. Uncomment the [tcpout] stanza. Save and exit.
  21. (Optional) Disable the KV store to reduce CPU overhead on your Splunk platform instance by navigating to $SPLUNK_HOME/etc/system/local/.
  22. Open the server.conf file and disable the KV store with the following stanza.
    [kvstore]
    disabled = true
    
  23. Save your changes and exit.
  24. Set up forwarding to the port on which the Splunk indexer(s) is configured to receive data. See "Enable forwarding on a Splunk Enterprise instance" in the Forwarding Data manual.
  25. The default password for Splunk's admin user is changeme. This is true for all Splunk instances. We recommend that you change the password using the CLI for this forwarder.
    splunk edit user admin -password 'newpassword' -role admin -auth admin:changeme
  26. Start your Splunk platform instance.
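
The procedure above uncomments the [tcpout] stanza in SA-Hydra's outputs.conf and sets up forwarding to your indexers. Once uncommented and pointed at your receivers, the stanza might look like the following sketch; the group name, host names, and port 9997 are hypothetical placeholders for your own indexers:

```ini
[tcpout]
defaultGroup = vmware_indexers

[tcpout:vmware_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
```

Make sure the port here matches the receiving port your indexers are configured to listen on.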

Now you can configure the DCNs and the Splunk settings for each DCN.

Create your own data collection node

You can build a data collection node and configure it specifically for your environment. Create and configure this data collection node on a physical machine or as a VM image to deploy into your environment using vCenter.

Build a data collection node

Whether you are building a physical data collection node or a data collection node VM, follow the steps below. To build a data collection node VM, we recommend that you follow the guidelines set by VMware to create the virtual machine and deploy it in your environment.

To build a data collection node:

  1. Install a CentOS or RedHat Enterprise Linux version that is compatible with Splunk Enterprise version 6.4.6 or later.
  2. Install Splunk Enterprise version 6.4.6 or later, and configure it as a heavy forwarder. Note: You cannot use a universal forwarder because it lacks the necessary Python libraries.
  3. Download the Splunk_add-on_for_vmware-<version>-<build_number>.tgz from Splunkbase.
  4. Copy the file Splunk_add-on_for_vmware-<version>-<build_number>.tgz from the download package, and move it to $SPLUNK_HOME/etc/apps.
  5. Extract the file Splunk_add-on_for_vmware-<version>-<build_number>.tgz in $SPLUNK_HOME/etc/apps.
  6. Verify that the data collection components SA-VMNetAppUtils, SA-Hydra, Splunk_TA_vmware, and Splunk_TA_esxilogs exist in $SPLUNK_HOME/etc/apps.
  7. Verify that the firewall ports are correct. The DCN communicates with splunkd on port 8089. The DCN communicates with the scheduler node on port 8008. Set up forwarding to the same port as your Splunk indexers.
  8. Navigate to $SPLUNK_HOME/etc/apps/SA-Hydra/local and open outputs.conf.
  9. Uncomment the [tcpout] stanza. Save and exit.
  10. (Optional) Disable the KV store to reduce CPU overhead on your Splunk platform instance by navigating to $SPLUNK_HOME/etc/system/local/.
  11. Open the server.conf file and disable the KV store with the following stanza.
    [kvstore]
    disabled = true
    
  12. Save your changes and exit.
  13. After deploying the collection components, add the forwarder to your scheduler's configuration. See Configure the Splunk OVA for VMware in this manual.
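
The component verification in the procedure above can be scripted. The following is a sketch only; the /opt/splunk fallback is an assumption, so adjust SPLUNK_HOME for your install:

```shell
# Check that each data collection component directory exists under the apps folder.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
for app in SA-VMNetAppUtils SA-Hydra Splunk_TA_vmware Splunk_TA_esxilogs; do
  if [ -d "$SPLUNK_HOME/etc/apps/$app" ]; then
    echo "OK: $app"
  else
    echo "MISSING: $app"
  fi
done
```

If any component reports MISSING, re-extract the add-on package before continuing.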

Learn More

  • See the "Deploy a heavy forwarder" section of the Splunk Enterprise Forwarding Data manual.

This documentation applies to the following versions of Splunk® OVA for VMWare and NetApp: 3.4.1, 3.4.2, 3.4.3, 3.4.4

