Draw a diagram of your deployment
A diagram of your deployment helps you visualize the details as you learn about the deployment, and gives you a reference to return to in the future.
As you read the topics in this manual, create a diagram of your Splunk deployment. Add details as you discover them.
Your diagram can be on paper, which is often easiest at the beginning. If you prefer to work with a diagramming tool like Visio or OmniGraffle, you can use the Splunk icon sets and stencils that are available in the Get Started with Splunk Community manual.
Your diagram should show the following items:
- Each search head.
- Each indexer.
- Any additional components, such as:
- indexer cluster manager
- search head cluster deployer
- deployment server
- license master
- monitoring console
- KV store
- Forwarders. If you have a large number of forwarders, show server classes, which are sets of forwarders, instead of individual forwarders.
- The connections between instances.
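If you would rather generate the diagram than draw it by hand, the items above can be expressed as nodes and connections and rendered with a tool like Graphviz. This is a minimal sketch with hypothetical host names and roles, not your actual deployment; it prints Graphviz DOT text that you can render with `dot -Tpng`.

```python
# Hypothetical inventory: instance name -> role.
NODES = {
    "sh1": "search head",
    "sh2": "search head",
    "idx1": "indexer",
    "idx2": "indexer",
    "cm1": "indexer cluster manager",
    "ds1": "deployment server",
}

# Connections between instances: search heads search the indexers,
# and the cluster manager coordinates them.
EDGES = [
    ("sh1", "idx1"), ("sh1", "idx2"),
    ("sh2", "idx1"), ("sh2", "idx2"),
    ("cm1", "idx1"), ("cm1", "idx2"),
]

def to_dot(nodes, edges):
    """Render the inventory as Graphviz DOT source."""
    lines = ["digraph deployment {"]
    for name, role in nodes.items():
        # Label each node with its name and role.
        lines.append(f'  {name} [label="{name}\\n({role})"];')
    for src, dst in edges:
        lines.append(f"  {src} -> {dst};")
    lines.append("}")
    return "\n".join(lines)

print(to_dot(NODES, EDGES))
```

Keeping the inventory in a small script like this makes it easy to regenerate the diagram as you discover more components.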
Leave room around each node in the diagram so that you can add information as you discover it. For each node, include the following information:
- The version of Splunk Enterprise it is running.
- Whether it is running a KV store.
- All open ports.
- Machine information like operating system, CPU, physical memory, storage type, and virtualization.
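If you track these per-node details in a file rather than on paper, a small record type can keep the annotations consistent. This is a sketch only; the field names and the sample values are illustrative, not a Splunk schema.

```python
from dataclasses import dataclass

@dataclass
class NodeInfo:
    """Details to record next to each node in the diagram.
    Field names are illustrative, not an official schema."""
    name: str
    splunk_version: str
    runs_kv_store: bool
    open_ports: list
    os: str
    cpu_cores: int
    memory_gb: int
    storage_type: str
    virtualized: bool

    def annotation(self) -> str:
        """Format the details as a one-line diagram annotation."""
        ports = ", ".join(str(p) for p in self.open_ports)
        return (
            f"{self.name}: Splunk {self.splunk_version}; "
            f"KV store: {'yes' if self.runs_kv_store else 'no'}; "
            f"ports: {ports}; {self.os}, {self.cpu_cores} cores, "
            f"{self.memory_gb} GB, {self.storage_type}"
            f"{', virtualized' if self.virtualized else ''}"
        )

# Hypothetical search head entry.
sh1 = NodeInfo("sh1", "8.2.12", True, [8000, 8089], "Linux", 16, 32, "SSD", True)
print(sh1.annotation())
```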
See the next topic, Deployment topologies, for definitions of most of these components. The topics that follow provide steps for discovering the components, using either the monitoring console, if you have it, or configuration file inspection if you do not. See Review your apps and add-ons for information and steps for discovering server classes and the KV store in your deployment.
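As a preview of the discovery steps, you can also query a node's management port directly: the standard `/services/server/info` REST endpoint reports details such as the Splunk Enterprise version. This sketch only builds the URL and shows the shape of the request; the host name and credentials are placeholders, and the network call itself is left commented out.

```python
import base64
import ssl
import urllib.request

def server_info_url(host, mgmt_port=8089):
    """Build the management URL for the server/info endpoint."""
    return f"https://{host}:{mgmt_port}/services/server/info?output_mode=json"

def fetch_server_info(host, user, password, mgmt_port=8089):
    """Fetch server/info with basic auth. Placeholder credentials;
    adapt to your authentication setup before using."""
    req = urllib.request.Request(server_info_url(host, mgmt_port))
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    # Self-signed certificates are common on the management port,
    # so this sketch skips verification; prefer a proper CA bundle.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read()

# fetch_server_info("sh1.example.com", "admin", "changeme")  # network call, not run here
print(server_info_url("sh1.example.com"))
```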
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.1.14, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12