Where services run in Splunk UBA
View the /opt/caspida/conf/deployment/caspida-deployment.conf file to see where services run in your deployment.
To see where services run in deployments of other sizes, review the .conf file corresponding to your cluster size in the /opt/caspida/conf/deployment/recipes directory.
7 node example
For example, to find where services run in a 7 node deployment, see the /opt/caspida/conf/deployment/recipes/deployment-7_node.conf
file:
##
# Copyright (C) 2005-2016 Splunk Inc. All rights reserved.
# All use of this Software is subject to the terms and conditions accepted upon installation of
# and/or purchase of license for the Software.
##
#
# 7-node deployment configuration
# Replace node1,node2..node7 with respective ipaddress/hostname during
# deployment
#
caspida.cluster.nodes=node1,node2,node3,node4,node5,node6,node7
zookeeper.servers=node1,node2,node3
hadoop.namenode.host=node1
hadoop.snamenode.host=node2
hadoop.datanode.host=node4,node5,node6,node7
persistence.datastore.tsdb=node1:8086
database.host=node1
#database.standby=node8
persistence.redis.server=node4,node5
hive.host=node1
spark.master=node7:7077
spark.worker=node7
spark.history=node7
spark.server=node7
impala.statestore.host=node1
impala.catalog.host=node1
kafka.brokers=node2:9092
kafka.ssl.brokers=node2:9093
impala.server.host=node1
uiServer.host=node1
jobmanager.restServer=node1:9002
jobmanager.agents=node2
rule.offline.exec.host=node1
rule.realtime.exec.host=node1
sysmonitor.host=node1
resourcesmonitor.host=node1
output.connector.host=node1
kubernetes.restServer=node1:6443
system.network.interface=eth0
container.master.host=node1
container.worker.host=node3,node4,node5,node6
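To check which services a given node hosts, you can match the node name against the values in the recipe file. The following sketch inlines a few hypothetical lines from the 7-node recipe so it runs without a Splunk UBA installation; in practice you would read the real file instead of the here-document.

```shell
# Sketch: list the services assigned to a particular node in a deployment
# .conf file. A few sample lines stand in for the real recipe file.
conf=$(cat <<'EOF'
zookeeper.servers=node1,node2,node3
spark.master=node7:7077
kafka.brokers=node2:9092
uiServer.host=node1
EOF
)
# Print every property whose value list contains node2.
echo "$conf" | awk -F= '$2 ~ /(^|,)node2(:|,|$)/ {print $1}'
```

With the sample lines above, this prints zookeeper.servers and kafka.brokers, the two properties that assign a service to node2.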
The node numbers represent the Splunk UBA host names in the order they were specified when you ran the /opt/caspida/bin/Caspida setup command during setup. For example, if you specified ubahost1,ubahost2,ubahost3,ubahost4,ubahost5, then node1 corresponds to ubahost1, node2 corresponds to ubahost2, and so on.
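Because the mapping is purely positional, nodeN is simply the Nth entry of caspida.cluster.nodes. A minimal sketch, assuming the hypothetical five-host list from the example above:

```shell
# Sketch: resolve a nodeN placeholder to the Nth host given to "Caspida setup".
# The host list here is a hypothetical example, not a real deployment.
hosts="ubahost1,ubahost2,ubahost3,ubahost4,ubahost5"
n=2
# node2 is the second comma-separated entry.
echo "$hosts" | cut -d, -f"$n"
```

This prints ubahost2, the host that node2 refers to in the recipe file.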
20 node example
For example, to find where services run in a 20 node deployment, see the /opt/caspida/conf/deployment/recipes/deployment-20_node.conf
file:
##
# Copyright (C) 2005-2016 Splunk Inc. All rights reserved.
# All use of this Software is subject to the terms and conditions accepted upon installation of
# and/or purchase of license for the Software.
##
#
# 20-node deployment configuration
# Replace node1,node2..node10 with respective ipaddress/hostname during
# deployment
#
caspida.cluster.nodes=node1,node2,node3,node4,node5,node6,node7,node8,node9,node10,node11,node12,node13,node14,node15,node16,node17,node18,node19,node20
zookeeper.servers=node1,node2,node3
hadoop.namenode.host=node1
hadoop.snamenode.host=node2
hadoop.datanode.host=node11,node12,node13,node14,node15,node16,node17,node18,node19,node20
persistence.datastore.tsdb=node1:8086
database.host=node2
#database.standby=node21
persistence.redis.server=node5,node6,node7,node8,node9,node10
hive.host=node2
spark.master=node17:7077
spark.worker=node17,node18,node19,node20
spark.history=node17
spark.server=node17
impala.statestore.host=node2
impala.catalog.host=node2
kafka.brokers=node3:9092,node4:9092
kafka.ssl.brokers=node3:9093,node4:9093
impala.server.host=node2
uiServer.host=node1
jobmanager.restServer=node1:9002
jobmanager.agents=node3,node4,node17,node18,node19,node20
rule.offline.exec.host=node1
rule.realtime.exec.host=node1
sysmonitor.host=node1
resourcesmonitor.host=node1
output.connector.host=node1
kubernetes.restServer=node1:6443
system.network.interface=eth0
container.master.host=node1
container.worker.host=node5,node6,node7,node8,node9,node10,node11,node12,node13,node14,node15,node16
View host names order
You can view the order in which the host names were entered by performing the following steps:

- Log in to any Splunk UBA node as the caspida user.
- Run the following command:

  grep caspida.cluster.nodes /opt/caspida/conf/deployment/caspida-deployment.conf

The following is a sample output of this command:

caspida.cluster.nodes=ubahost1,ubahost2,ubahost3,ubahost4,ubahost5
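To read off the node numbers directly from that line, you can number the hosts in setup order. A small sketch, assuming the sample output above rather than a real deployment file:

```shell
# Sketch: print the nodeN-to-hostname mapping from a caspida.cluster.nodes
# line. The line below is the hypothetical sample output, inlined so the
# snippet runs anywhere.
line="caspida.cluster.nodes=ubahost1,ubahost2,ubahost3,ubahost4,ubahost5"
# Strip the property name, split on commas, and number each host.
echo "${line#*=}" | tr ',' '\n' | awk '{print "node" NR " = " $0}'
```

This prints one line per host, from "node1 = ubahost1" through "node5 = ubahost5".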
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.2.0, 5.2.1, 5.3.0, 5.4.0, 5.4.1