Splunk® App for Chargeback

Use the Splunk App for Chargeback


Configure Splunk App for Chargeback

Job Scheduling

Watch this video for a quick app setup intro: Splunk App for Chargeback setup

After the app is installed and opened for the first time, click "Other Dashboards" in the app menu, open the App Setup and Configuration dashboard, and then click "App Setup Screen" to open it.

This screenshot shows the Chargeback App Setup and Configuration screen.

Follow this flowchart to assist in enabling the app jobs for the first time: This image shows a flowchart of enabling app jobs for Chargeback App.

Splunk Cloud Platform customers

The steps below correspond to the numbers in the flowchart.

Step 1: Follow the installation instructions in the installation section

Step 2: Make sure the chargeback_summary index exists and is set to 400 days of retention

Step 3:

  1. Open the App Setup and Configuration dashboard
  2. Click on the App Setup Screen button
  3. Click on the Cloud Setup tab, enable all jobs, and proceed to Step 4

Step 4:

  1. Click on the Global Settings tab
  2. Adjust the 6 Weight Macros if needed
  3. Adjust the chargeback_summary index name if it differs from the default
  4. Adjust the currency unit if needed
  5. Adjust the unit of measurement if needed

Step 5: Click the pink Save button at the bottom of the screen to complete the setup. Allow a few seconds for the dashboard to save the settings, then close the Setup Screen tab in your browser.

Splunk Enterprise customers

Step 1: Follow the installation instructions in the installation section

Step 2: Verify that the chargeback_summary index exists and is set to at least 400 days of retention
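
To confirm the index and its retention from search on Splunk Enterprise, a quick check like the following can be used (the field names are those returned by the data/indexes REST endpoint; 400 days is 34,560,000 seconds). Run it where the index is defined, or drop splunk_server=local to include your search peers:

    | rest /services/data/indexes splunk_server=local
    | search title=chargeback_summary
    | table title, frozenTimePeriodInSecs, maxTotalDataSizeMB, homePath, coldPath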

Step 3:

  1. Open the App Setup and Configuration dashboard
  2. Click on the App Setup Screen button
  3. Click on the Splunk Enterprise Setup tab and enable all jobs
  4. Click on Step 1 to run the discovery job for the first time
  5. Click on Step 2 and copy/paste the values into Macros 3, 4 & 5
  6. Adjust Macros 1 & 2 per the instructions under Description

Step 4:

  1. Click on the Global Settings tab
  2. Adjust the 6 Weight Macros if needed
  3. Adjust the chargeback_summary index name if it differs from the default
  4. Adjust the currency unit if needed
  5. Adjust the unit of measurement if needed

Step 5: Click the pink Save button at the bottom of the screen to complete the setup. Allow a few seconds for the dashboard to save the settings, then close the Setup Screen tab in your browser.

Configure App Identities and 8 Enrichment Principles

Follow this flowchart to assist in the configuration process:
This image shows a flowchart of configuration steps for Chargeback App.

Splunk Cloud Platform and Enterprise customers

These steps correspond to the numbers in the flowchart.

Step 1:

  1. Go to the App Setup and Configuration dashboard
  2. Click on the App Jobs tab
  3. Open the Chargeback Custom Identities Gen Job (Edit and customize this Job) and configure it to collect the information below about users in your organization who are using Splunk today, or who have separated from the organization and left behind scheduled jobs. It is best to use a reliable repository, preferably one that is indexed daily or weekly, that contains this information.
  4. This screenshot shows Chargeback Custom Identities Gen Job.

  5. Configure a custom identity data store
  6. Map these fields as described below (a minimal mapping sketch also appears after this numbered list)
  7. This screenshot shows Chargeback App custom identities.

    1. Username (user_name)
    2. Business Unit (biz_unit)
    3. Business Division (biz_division)
    4. Business Department (biz_dep)
    5. Name (Full Name) (name)
    6. User Type (Service, Employee, Contractor or Vendor) (type)
    7. User Status in the organization (Active OR Separated) (status)
    8. Title in the organization (title)
    9. E-Mail Address (email)
    10. Manager Name (manager)
    11. Manager ID (manager_id)


    The important fields for each user are biz_unit and biz_dep. If you don't have this information in your data, biz_division and biz_dep will default to the same value as biz_unit. Here is a complete example of the logic and how it works:

    Scenario 1: Joe works in IT as an admin. Joe's B-Unit is ITOps, but his department is missing and we don't know his division. His info should be configured like this:

    user_name   biz_unit   biz_division   biz_dep
    Joe         ITOps      ITOps          ITOps

    Scenario 2: We know Joe's department but not his division. His info should be configured like this:

    user_name   biz_unit   biz_division   biz_dep
    Joe         ITOps      ITOps          Sys Admins

    Scenario 3: We know everything about Joe. His info should be configured like this:

    user_name   biz_unit   biz_division             biz_dep
    Joe         ITOps      Global Infrastructures   Sys Admins

    This line in the code will default the values for these important fields:

    | eval biz_division = if(isnull(biz_division) OR biz_division="", biz_unit, biz_division), biz_dep = if(isnull(biz_dep) OR biz_dep="", biz_unit, biz_dep)

  8. Save and run the Chargeback Custom Identities Gen Job (Edit and customize this Job) for the first time to produce the custom identities KV Store.
  9. Log in to every standalone Search Head and Search Head Cluster in your stack and create the following Job:
    • Configure this job on every Search Head in your stack, including your Inputs Data Manager (IDM) instance
    • Job Name: chargeback_rest_identities_summary_data_gen
    • Job Schedule: Edit the Job and Schedule it with the following Cron Expression: 18 3 * * *
    • Job Search:

      | rest /servicesNS/-/-/authentication/users splunk_server=local timeout=0
      | rename title As user_name, realname As name, roles As splunk_role_map, type As auth_type
      | eval Splunk_Instance = lower(splunk_server), user_name = lower(trim(user_name)), email = lower(email)
      | stats Last(name) As name, Last(email) As email, Values(splunk_role_map) As splunk_role_map, Values(auth_type) As auth_type By Splunk_Instance, user_name
      | eval _time = now()
      | foreach splunk_role_map, auth_type [ eval <<FIELD>>=mvjoin(mvsort(mvdedup('<<FIELD>>')), "|") ]
      | table _time, Splunk_Instance, user_name, name, email, splunk_role_map, auth_type
      | collect index=chargeback_summary source=chargeback_rest_identities_summary_data sourcetype=stash testmode=false
      | stats count

    • Finish by running the job for the first time on the Search Head or Search Head Cluster you configured it on.
  10. Run the Splunk Cloud Stack Search Head Info Job for the first time and make sure it produces data. The job creates the chargeback_cloud_stack_info_csv_lookup used in later jobs to populate the label field.
  11. Run the Chargeback Identities - Summary Index - Gen Job - From REST on the same SH or SHC where you installed the app.
    1. Run the Chargeback Identities - KV Store - Gen Job - From Summary Index Job to create the chargeback_rest_identities_kv_store_lookup KV Store Lookup for the first time.
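
For reference, here is a minimal sketch of the field mapping and defaulting logic described in the custom identities step above. The source index and its field names (hr_employee_feed, employee_id, and so on) are hypothetical placeholders for your own repository; keep whatever outputlookup destination the Chargeback Custom Identities Gen Job already uses:

    index=hr_employee_feed earliest=-7d@d
    | rename employee_id As user_name, business_unit As biz_unit, division As biz_division, department As biz_dep, full_name As name
    | eval user_name = lower(trim(user_name)), email = lower(email)
    | eval biz_division = if(isnull(biz_division) OR biz_division="", biz_unit, biz_division), biz_dep = if(isnull(biz_dep) OR biz_dep="", biz_unit, biz_dep)
    | table user_name, biz_unit, biz_division, biz_dep, name, type, status, title, email, manager, manager_id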

Step 2: Go to Sample Jobs and Reports and open Chargeback Tracker Job Dry Run
This screenshot shows Chargeback Tracker Job Dry Run.
Step 3:

  1. Run the update_user_2_bunit_table macro to add unclassified users to the list. At the end of the search, insert | `update_user_2_bunit_table(user)` and run it over the past 24 hours or, preferably, the last 7 days (see the sketch after this list).
  2. Repeat for apps, roles, jobs and hosts (search heads):
    • | `update_app_2_bunit_table(app)`
    • | `update_role_2_bunit_table(role)`
    • | `update_job_2_bunit_table(job)`
    • | `update_host_2_bunit_table(host)`
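
One way to append these macros without editing the report in place is to recall the dry run report with the savedsearch command. This is a minimal sketch that assumes the report is saved under the name shown in this app and that its results include a user field; depending on how the report was saved, you may need to adjust the time range inside the report itself to cover the last 7 days:

    | savedsearch "Chargeback Tracker Job Dry Run"
    | `update_user_2_bunit_table(user)`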

This screenshot shows Chargeback Tracker Job Summary Dry Run
Step 4: Optionally, you can watch a quick video on Splunkbase here: [1] that walks you through the initial setup and configuration process.

  1. Click on the Enrichment Tables tab
  2. From the Business Units / Departments Enrichment Priority Logic - Foundation or Extended Tables, click the gear icon to open any of the tables for editing
  3. Adjust the B-Unit and Department information
  4. Save and rerun the dry run search to test your work

This image shows principles one through four of the 8 Enrichment Principles for Chargeback App.
This image shows principles five through eight of the 8 Enrichment Principles for Chargeback App.

  • Custom User to B-Unit mapping, specifically for service accounts, Splunk local accounts, and any other accounts not in the main user repository.
  • Index to B-Unit mapping; an index can be shared across multiple B-Units using the perc_ownership field (see the sketch after this list).
  • Use the following examples to auto-populate any of the enrichment lookups:
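
For illustration, here is a minimal sketch of how a shared index can be split across two B-Units with perc_ownership. The index name and B-Unit values are hypothetical; to persist rows like these, append them to chargeback_index_2_bunit_csv_lookup with | outputlookup append=true:

    | makeresults count=2
    | streamstats count as row
    | eval index_name = "shared_web_logs", biz_unit = if(row=1, "ITOps", "Security"), perc_ownership = if(row=1, 60, 40)
    | table index_name, biz_unit, perc_ownership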

This screenshot shows enrichment lookups on the Troubleshooting tab of Chargeback App.
Set up the entitlement and bunit allocation tables:

  1. For Splunk Cloud Platform customers, the entitlement column updates once per day; you only need to update the yearly cost column.

  2. This screenshot shows Chargeback entitlements.

  3. You can use SRU Showback or SRU Reports to get an estimate for the SRU/DDAS/DDAA amounts for this table.

  4. This screenshot shows Chargeback bunit allocations.


    Here is a complete screenshot example from the landing dashboard:
    This screenshot shows a complete example from the landing dashboard.


Additional information for Splunk Cloud Customers

  • The entitlement field in the chargeback_entitlements_csv_lookup table for types SVC, DDAS, DDAA and INGEST gets updated automatically once per day for Splunk Cloud Platform customers via this scheduled job: Splunk Entitlements - Gen Job
  • The DDAS entitlement gets updated automatically for both Splunk Cloud INGEST and Workload pricing customers.
  • Splunk Cloud customers using DDSS instead of DDAA will need to update the entitlement for DDSS in this table manually. Enter the amount in units of 500 GB blocks; for example, if your DDSS or S3 bucket size is 20 TB, enter 41, which represents 41 units of 500 GB blocks (see the conversion sketch after this list).
  • DDAS and DDAA are stored in units of 500 GB blocks; that is, 1 unit of DDAS = 500 GB of disk space. INGEST is stored in GB, not blocks.
  • The SmartStore entitlement type is for Splunk Enterprise Customers only.
  • Open the chargeback_entitlements_csv table and enter the yearly cost for the SVC, DDAS, and DDAA entitlements. If you are unsure, contact your Splunk account team.
  • The INGEST entitlement is just a placeholder to populate the amount owned, which is used to calculate the amount of DDAS storage units included in the INGEST entitlement.
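
To convert a DDSS bucket size into 500 GB blocks, as in the 20 TB example above, a quick calculation looks like this (assuming 1 TB = 1024 GB and rounding up):

    | makeresults
    | eval ddss_size_tb = 20
    | eval ddss_blocks = ceiling((ddss_size_tb * 1024) / 500)
    | table ddss_size_tb, ddss_blocks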

Additional information for Splunk Enterprise Customers

  • Splunk Enterprise customers may use the following job to automatically populate the chargeback_entitlements_csv lookup table: Splunk Entitlements - Gen Job
  • For Splunk Enterprise customers with SmartStore, the amount is estimated and stored in the table in 500 GB blocks.
  • The entitlement amount for INGEST is estimated using the last week of daily usage; a manual entry in the table overrides the job estimation.
  • DDAS, which is a Splunk Cloud term, is used for Splunk Enterprise customers to store the amount of storage across all indexers in units of 500 GB blocks. The logic used is the maximum capacity of all partitions across all indexers in the cluster. This information comes from index=_introspection sourcetype=splunk_disk_objects component=Partitions, and the field used is data.capacity (see the sketch after this list). When SmartStore is in play, this number is used to calculate the cost of the cache storage available to the indexers, and the SmartStore amount and its associated cost are added to the cost of maintaining the cache to get a true representation of the complete cost of operating Splunk with SmartStore. When SmartStore is not in play, DDAS is the only storage type considered for chargeback.
  • Splunk Enterprise customers on a vCPU license should use the SVC entitlement to store the number of vCPUs owned. The vCPU amount must be entered manually; if you are unsure, contact your Splunk account team. The yearly cost for the vCPU license should be in your contract; if you are unable to find it, your Splunk account team can help.
  • Click below to open the "chargeback_entitlements_csv" table and enter the yearly cost for SVC, DDAS and SmartStore entitlements.
  • The INGEST entitlement is just a placeholder for Splunk Cloud customers with a volume-based license to populate the amount of daily ingestion they are licensed for; no other calculations are made using this field. Only the amounts in types DDAS, SmartStore & vCPU are used in the Chargeback and Showback dashboards for Splunk Enterprise customers.
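
As a rough companion to the DDAS bullet above, the sketch below estimates indexer storage capacity in 500 GB blocks from introspection data. The aggregation shown (maximum data.capacity per partition per host, summed across hosts) and the assumption that data.capacity is reported in MB are interpretations rather than the app's exact logic, so treat the result as an estimate and compare it with what the app's own jobs produce:

    index=_introspection sourcetype=splunk_disk_objects component=Partitions earliest=-24h@h
    | stats max(data.capacity) as capacity_mb by host, data.mount_point
    | stats sum(capacity_mb) as total_capacity_mb
    | eval ddas_blocks = ceiling(total_capacity_mb / 1024 / 500)
    | table total_capacity_mb, ddas_blocks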

Configuring the chargeback_bunit_enrichment_priority_order Macro

Edit the macro and rearrange the 8 macros in it if you wish to alter the default enrichment priority order. To enable any of the disabled macros, remove the `chargeback_comment` wrapper around it.


This screenshot shows Chargeback bunit enrichment priority order.

Track data ingestion for premium apps

Prerequisites

Steps

  1. On the Splunk Enterprise Setup tab, enable the following jobs:
    • chargeback_enrichment_lookup_backup_gen
    • chargeback_internal_ingestion_tracker
    • chargeback_onprem_entitlements_csv_lookup_gen
    • chargeback_onprem_stack_info_csv_lookup_gen
  2. On the home dashboard, select App Jobs and run Splunk Enterprise Estimated Entitlements - [Daily]. This will automatically discover the Ingest license and populate other fields in the Splunk Entitlements table.
  3. Select the Enrichment Tables tab and open both Splunk Entitlements and B-Unit Entitlement Allocations. The screenshot shows the content of both the Splunk Entitlements and B-Unit allocations tables for CORE, ES, and ITSI.
  4. The Splunk Entitlements table should have the INGEST license already populated. Adjust if the license is incorrect and save the table. The screenshot shows an example 200GB volume-based license.
  5. In the B-Unit Entitlement Allocations table, create 3 rows and enter CORE with the entire Splunk daily ingest license in GB. Fill in the ITSI and ES licenses per your contract and save the table.
    The screenshot shows an example breakdown of the volume-based license.
  6. From the app homepage, select App Jobs and then select Chargeback App Ingestion Summary Tracker [Hourly] to run the job.
  7. Update line 1 from earliest=-2h@h latest=-1h@h to earliest=-30d@d latest=-1h@h and run the job, then immediately send it to the background. This will backfill 30 days' worth of data.
  8. Select Troubleshooting / Reports from the homepage and enter Index 2 B-Unit in the Enrichment Type field. Select Submit and run the Index 2 B-Unit Generator using the REST API - Example 2 - [Splunk Cloud & Enterprise] search with the last 2 comments removed. This will populate the chargeback_index_2_bunit_csv_lookup lookup table with a list of all defined indexes; for example, the final line changes from:
    | `chargeback_comment("outputlookup chargeback_index_2_bunit_csv_lookup CreateInApp=true Create_Empty=true Override_If_Empty=false")`
    To
    | outputlookup chargeback_index_2_bunit_csv_lookup CreateInApp=true Create_Empty=true Override_If_Empty=false
    
  9. Select Storage to open the storage dashboard. Select the Index 2 B-Unit Table button to open the table using the lookup editor. Assign ITSI to all ITSI indexes, ES to all Enterprise Security indexes, and CORE to everything else. A shared index can be split using the perc_ownership field.
    The screenshot shows an example of a completed business unit entitlement allocations table.
    You can also do this with search, for example:
    | inputlookup chargeback_index_2_bunit_csv_lookup 
    | fields index_name 
    | dedup index_name 
    | eval biz_unit = case(
        match(index_name, "sec|anomaly|^net|^endpoint|notable|audit_summary|cim_modactions|gia_summary|risk|threat_activity|wineventlog"), "ES", 
        index_name in("akamai") OR match(index_name, "itsi|perf|^app|^os|^aws|metrics"), "ITSI",
        true(), "CORE") 
    | sort index_name 
    | fillnull value=100 perc_ownership
    | table index_name, biz_unit, biz_division, biz_dep, biz_desc, biz_owner, biz_email, perc_ownership
    | outputlookup chargeback_index_2_bunit_csv_lookup
    
  10. Start using the Storage dashboard to review the usage split by ITSI, ES (if applicable), and CORE. The screenshot shows an example search you can use to automatically populate the Index 2 B-Unit table above.
  11. Open the 4.1 panel and select ITSI from the dropdown as shown in the screenshot below. You should see two overlay lines, one for the entire Ingest entitlement (in this example, 200 GB) and another for just ITSI (in this example, 100 GB). The graph shows that ITSI exceeded both the ITSI daily ingest license and the overall daily ingest license from Saturday, April 15 to Wednesday, April 19. The screenshot shows daily ingestion details broken down by Core, Enterprise Security, and ITSI from the first tab of the Storage dashboard.