
Configure Splunk App for Chargeback
Job Scheduling
Watch this video for a quick app setup intro: Splunk App for Chargeback setup
After the app is installed and opened for the first time, click "Other Dashboards" in the app menu, open the App Setup and Configuration dashboard, and then click "App Setup Screen" to open it.
Follow this flowchart to assist in enabling the app jobs for the first time:
Splunk Cloud Platform customers
The steps below correspond to the numbers in the flowchart.
Step 1: Follow the installation instructions in the installation section
Step 2: Make sure the chargeback_summary index exists and is set to at least 400 days of retention
Step 3:
- Open the App Setup and Configuration dashboard
- Click on the App Setup Screen button
- Click on the Cloud Setup tab, enable all jobs, and proceed to Step 4
Step 4:
- Click on the Global Settings Tab
- Adjust the 6 Weight Macros if needed
- Adjust the chargeback_summary index name if different from the default
- Adjust currency unit if needed
- Adjust unit of measurement if needed
Step 5: Click the pink Save button at the bottom of the screen to complete the setup. Allow a few seconds for the dashboard to save the settings, then close the Setup Screen tab in your browser.
Splunk Enterprise customers
Step 1: Follow the installation instructions in the installation section
Step 2: Verify that the chargeback_summary index exists and is set to at least 400 days of retention
Step 3:
- Open the App Setup and Configuration dashboard
- Click on the App Setup Screen button
- Click on the Splunk Enterprise Setup tab and enable all jobs
- Click Step 1 to run the discovery job for the first time
- Click on Step 2 and copy/paste the values into Macros 3, 4 & 5
- Adjust Macros 1 & 2 per the instructions under Description
Step 4:
- Click on the Global Settings Tab
- Adjust the 6 Weight Macros if needed
- Adjust the chargeback_summary index name if different from the default
- Adjust currency unit if needed
- Adjust unit of measurement if needed
Step 5: Click the pink Save button at the bottom of the screen to complete the setup. Allow a few seconds for the dashboard to save the settings, then close the Setup Screen tab in your browser.
Configure App Identities and 8 Enrichment Principles
Follow this flowchart to assist in the configuration process:
Splunk Cloud Platform and Enterprise customers
These steps correspond to the numbers in the flowchart.
Step 1:
- Go to the App Setup and Configuration dashboard
- Click on the App Jobs tab
- Open the Chargeback Custom Identities Gen Job (Edit and customize this Job) and configure it to collect the information below about users in your organization who use Splunk today, or who have separated from the organization and left scheduled jobs behind. It is best to source this information from a reliable repository, preferably one that is indexed daily or weekly.
- Configure a custom identity data store
- Map these fields as described
- Username (user_name)
- Business Unit (biz_unit)
- Business Division (biz_division)
- Business Department (biz_dep)
- Name (Full Name) (name)
- User Type (Service, Employee, Contractor or Vendor) (type)
- User Status in the organization (Active OR Separated) (status)
- Title in the organization (title)
- E-Mail Address (email)
- Manager Name (manager)
- Manager ID (manager_id)
- Save and run the Chargeback Custom Identities Gen Job (Edit and customize this Job) for the first time to produce the custom identities KV Store.
- Log in to every standalone Search Head and Search Head Cluster in your stack and create the following job:
- Configure this job on every Search Head in your stack including your Input Data Manager (IDM) instance
- Job Name: chargeback_rest_identities_summary_data_gen
- Job Schedule: Edit the Job and Schedule it with the following Cron Expression: 18 3 * * *
- Job Search:
| rest /servicesNS/-/-/authentication/users splunk_server=local timeout=0
| rename title As user_name, realname As name, roles As splunk_role_map, type As auth_type
| eval Splunk_Instance = lower(splunk_server), user_name = lower(trim(user_name)), email = lower(email)
| stats Last(name) As name, Last(email) As email, Values(splunk_role_map) As splunk_role_map, Values(auth_type) As auth_type By Splunk_Instance, user_name
| eval _time = now()
| foreach splunk_role_map, auth_type [ eval <<FIELD>>=mvjoin(mvsort(mvdedup('<<FIELD>>')), "|") ]
| table _time, Splunk_Instance, user_name, name, email, splunk_role_map, auth_type
| collect index=chargeback_summary source=chargeback_rest_identities_summary_data sourcetype=stash testmode=false
| stats count
- Finish by running the job for the first time on the Search Head or Search Head Cluster you configured it on.
- Run the Splunk Cloud Stack Search Head Info Job for the first time and make sure it produces data. The job creates the chargeback_cloud_stack_info_csv_lookup used in later jobs to populate the label field.
- Run the Chargeback Identities - Summary Index - Gen Job - From REST on the same SH or SHC you installed the app.
- Run the Chargeback Identities - KV Store - Gen Job - From Summary Index Job to create the chargeback_rest_identities_kv_store_lookup KV Store Lookup for the first time.
The important fields for each user are biz_unit and biz_dep. If you don't have this information in your data, biz_division and biz_dep will default to the same value as biz_unit. Here is a complete example of the logic and how it works:
Scenario 1: Joe works in IT as an admin. Joe's B-Unit is ITOps, but his department is missing and we don't know his division, so his info should be configured like this:
user_name | biz_unit | biz_division | biz_dep |
---|---|---|---|
Joe | ITOps | ITOps | ITOps |
Scenario 2: We know Joe's department but not his division, so his info should be configured like this:
user_name | biz_unit | biz_division | biz_dep |
---|---|---|---|
Joe | ITOps | ITOps | Sys Admins |
Scenario 3: We know everything about Joe, so his info should be configured like this:
user_name | biz_unit | biz_division | biz_dep |
---|---|---|---|
Joe | ITOps | Global Infrastructures | Sys Admins |
This line in the code will default the values for these important fields:
| eval biz_division = if(isnull(biz_division) OR biz_division="", biz_unit, biz_division), biz_dep = if(isnull(biz_dep) OR biz_dep="", biz_unit, biz_dep)
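The same defaulting logic can be sketched in Python (a hypothetical helper, not part of the app, shown only to illustrate the eval above):

```python
def default_bunit_fields(user):
    """Default biz_division and biz_dep to biz_unit when missing or empty,
    mirroring the eval in the identities job."""
    for field in ("biz_division", "biz_dep"):
        if not user.get(field):  # covers both null and ""
            user[field] = user["biz_unit"]
    return user

# Scenario 1: only the business unit is known
joe = default_bunit_fields({"user_name": "Joe", "biz_unit": "ITOps"})
# joe["biz_division"] and joe["biz_dep"] both fall back to "ITOps"
```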
Step 2: Go to Sample Jobs and Reports and open Chargeback Tracker Job Dry Run
Step 3:
- Run the update_user_2_bunit_table macro to add unclassified users to the list. At the end of the search, insert | `update_user_2_bunit_table(user)` and run over the past 24 hours or, preferably, the last 7 days.
- Repeat for apps, roles, jobs and hosts (search heads):
- | `update_app_2_bunit_table(app)`
- | `update_role_2_bunit_table(role)`
- | `update_job_2_bunit_table(job)`
- | `update_host_2_bunit_table(host)`
Step 4: Optionally, watch a quick video on Splunkbase here: [1] that walks you through the initial setup and configuration process.
- Click on the Enrichment Tables tab
- From the Business Units / Departments Enrichment Priority Logic - Foundation or Extended Tables, click the gear icon to open any of the tables for editing
- Adjust the B-Unit and Department information
- Save and rerun the dryrun search to test your work
- Custom User to B-Unit mapping specifically for the service accounts, Splunk local accounts and any other account not in the main user repository.
- Index to B-Unit mapping; an index can be shared across multiple B-Units using the perc_ownership field.
- Use the following examples to auto populate any of the enrichment lookups:
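The perc_ownership split can be sketched in Python (a hypothetical helper to illustrate the idea, not part of the app):

```python
def split_index_cost(total_cost, ownership):
    """Split a shared index's cost across B-Units using perc_ownership.

    ownership maps each B-Unit to its percentage; the values should total 100.
    """
    if abs(sum(ownership.values()) - 100) > 1e-9:
        raise ValueError("perc_ownership values must total 100")
    return {bu: total_cost * pct / 100 for bu, pct in ownership.items()}

# A shared index costing 1200 per month, owned 75/25 by two B-Units
split_index_cost(1200, {"ITOps": 75, "Security": 25})
# → {"ITOps": 900.0, "Security": 300.0}
```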
Setup entitlement and bunit allocation tables:
- For Splunk Cloud Platform customers, the entitlement column updates once per day; you only need to update the yearly cost column.
- You can use SRU Showback or SRU Reports to get an estimate for the SRU/DDAS/DDAA amounts for this table.
Here is a complete screenshot example from the landing dashboard:
Additional information for Splunk Cloud Customers
- The entitlement field in the chargeback_entitlements_csv_lookup table for types SVC, DDAS, DDAA and INGEST gets updated automatically once per day for Splunk Cloud Platform customers via this scheduled job: Splunk Entitlements - Gen Job
- The DDAS entitlement gets updated automatically for both Splunk Cloud INGEST and Workload pricing customers.
- Splunk Cloud customers using DDSS instead of DDAA will need to update the entitlement for DDSS in this table manually. Enter the amount in units of 500 GB blocks. For example, if your DDSS or S3 bucket size is 20 TB, enter 41, which represents 41 units of 500 GB blocks.
- DDAS and DDAA are stored in units of 500 GB blocks, i.e., 1 unit of DDAS = 500 GB of disk space. INGEST is stored in GB not blocks.
- The SmartStore entitlement type is for Splunk Enterprise Customers only.
- Open the chargeback_entitlements_csv table and enter the yearly cost for the SVC, DDAS, and DDAA entitlements. If you are unsure, contact your Splunk account team.
- The INGEST entitlement is just a placeholder to populate the amount owed, which is used to calculate the amount of DDAS storage units included in the INGEST entitlement.
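The 500 GB block conversion above can be sketched in Python (a hypothetical helper; it assumes 1 TB = 1024 GB, which matches the 20 TB → 41 blocks example):

```python
import math

GB_PER_BLOCK = 500  # DDAS/DDAA/DDSS entitlements are stored in 500 GB blocks

def storage_blocks(size_gb):
    """Convert a storage size in GB to 500 GB entitlement blocks, rounding up."""
    return math.ceil(size_gb / GB_PER_BLOCK)

# A 20 TB DDSS/S3 bucket: 20 * 1024 = 20480 GB → 41 blocks
storage_blocks(20 * 1024)  # → 41
```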
Additional information for Splunk Enterprise Customers
- Splunk Enterprise customers may use the following job to automatically populate the chargeback_entitlements_csv lookup table: Splunk Entitlements - Gen Job
- Splunk Enterprise customers with SmartStore, the amount is estimated and stored in the table in 500 GB blocks.
- The entitlement amount for INGEST is estimated using the last week of daily usage, manual entry in the table will override the job estimation.
- DDAS, which is a Splunk Cloud term, is used for Splunk Enterprise customers to store the amount of storage across all indexers in units of 500 GB blocks. The logic used is the maximum capacity of all partitions across all indexers in the cluster. This information comes from index=_introspection sourcetype=splunk_disk_objects component=Partitions, and the field used is data.capacity. When SmartStore is in play, this number is used to calculate the cost of the cache storage available to the indexers; the SmartStore amount and its associated cost are then added to the cache cost to get a true representation of the complete cost of operating Splunk with SmartStore. When SmartStore is not in play, DDAS is the only storage type considered for chargeback.
- Splunk Enterprise customers on a vCPU license should use the SVC entitlement to store the number of vCPUs owned. The vCPU amount must be entered manually. The yearly cost for the vCPU license should be in your contract; if you cannot find it, or are unsure of either value, contact your Splunk account team.
- Click below to open the "chargeback_entitlements_csv" table and enter the yearly cost for SVC, DDAS and SmartStore entitlements.
- The INGEST entitlement is just a placeholder for Splunk Cloud customers with a volume-based license to populate the amount of daily ingestion they are licensed for; no other calculations are made using this field. Only the amounts in types DDAS, SmartStore & vCPU are used in both the Chargeback and Showback dashboards for Splunk Enterprise customers.
Configuring the chargeback_bunit_enrichment_priority_order Macro
Edit the macro and rearrange the 8 macros in it if you wish to alter the default enrichment priority order. To enable any of the disabled macros, remove the `chargeback_comment` macro wrapped around it.
Prerequisites
- Install the Splunk App for Chargeback.
- Select the App Setup Screen button and follow the configuration steps in Configure Splunk App for Chargeback.
Steps
- On the Splunk Enterprise Setup tab, enable the following jobs:
- chargeback_enrichment_lookup_backup_gen
- chargeback_internal_ingestion_tracker
- chargeback_onprem_entitlements_csv_lookup_gen
- chargeback_onprem_stack_info_csv_lookup_gen
- On the home dashboard, select App Jobs and run Splunk Enterprise Estimated Entitlements - [Daily]. This will automatically discover the Ingest license and populate other fields in the Splunk Entitlements table.
- Select the Enrichment Tables tab and open both Splunk Entitlements and B-Unit Entitlement Allocations.
- The Splunk Entitlements table should have the INGEST license already populated. Adjust if the license is incorrect and save the table.
- In the B-Unit Entitlement Allocations table, create 3 rows and enter CORE for the entire Splunk daily ingest license in GB. Fill in the ITSI and ES licenses per your contract and save the table.
- From the app homepage, select App Jobs and then select Chargeback App Ingestion Summary Tracker [Hourly] to run the job.
- Update line 1 from
earliest=-2h@h latest=-1h@h
to
earliest=-30d@d latest=-1h@h
and run the job, then immediately send it to the background. This will backfill 30 days' worth of data.
- Select Troubleshooting / Reports from the homepage and enter Index 2 B-Unit in the Enrichment Type field. Select Submit and run the Index 2 B-Unit Generator using the REST API - Example 2 - [Splunk Cloud & Enterprise] search, with the last 2 comments removed. This will populate the chargeback_index_2_bunit_csv_lookup lookup table with a list of all defined indexes:
Change
| `chargeback_comment("outputlookup chargeback_index_2_bunit_csv_lookup CreateInApp=true Create_Empty=true Override_If_Empty=false")`
to
| outputlookup chargeback_index_2_bunit_csv_lookup CreateInApp=true Create_Empty=true Override_If_Empty=false
- Select Storage to open the storage dashboard. Select the Index 2 B-Unit Table button to open the table using the lookup editor. Assign ITSI to all ITSI indexes, ES to all Enterprise Security indexes, and CORE to everything else. A shared index can be split using the perc_ownership field.
You can also do this with a search, for example:
| inputlookup chargeback_index_2_bunit_csv_lookup
| fields index_name
| dedup index_name
| eval biz_unit = case( match(index_name, "sec|anomaly|^net|^endpoint|notable|audit_summary|cim_modactions|gia_summary|risk|threat_activity|wineventlog"), "ES", index_name in("akamai") OR match(index_name, "itsi|perf|^app|^os|^aws|metrics"), "ITSI", true(), "CORE")
| sort index_name
| fillnull value=100 perc_ownership
| table index_name, biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email, perc_ownership
| outputlookup chargeback_index_2_bunit_csv_lookup
- Start using the Storage dashboard to review the usage split by ITSI, ES if applicable and CORE.
- Open the 4.1 panel and select ITSI from the dropdown as shown in the screenshot below. You should see two overlay lines, one for the entire Ingest entitlement (in this example, 200 GB) and another for just ITSI (in this example, 100 GB). The graph shows that ITSI exceeded both the ITSI daily ingest license and the daily ingest license from Saturday, April 15 to Wednesday, April 19.
This documentation applies to the following versions of Splunk® App for Chargeback: current