Get started guide phase 3: Scaled rollout
After completing the Get started guide phase 2: Initial rollout, you are ready for phase 3, scaled rollout. In this final phase, you establish repeatable observability practices using automation, data management, detectors, and dashboards. The following sections cover the primary setup steps for the scaled rollout phase.
To get a high-level overview of the entire getting started journey for Splunk Observability Cloud, see Get started guide for Splunk Observability Cloud admins.
Note
This guide is for Splunk Observability Cloud users with the admin role.
To increase usage across all user teams and establish repeatable observability practices through automation, data management, detectors, and dashboards, complete the following tasks:
Note
Work closely with your Splunk Sales Engineer or Splunk Customer Success Manager as you get started. They can help you fine-tune your Splunk Observability Cloud journey and provide best practices, training, and workshop advice.
Add Splunk Observability Cloud to your deployment pipeline
After completing the initial rollout phase, you have deployed a Collector instance with limited configuration. For the scaled rollout, you can expand your Collector pipelines with more components and services.
See Get started: Understand and use the Collector for an overview of the available options to install, configure, and use the Splunk Distribution of the OpenTelemetry Collector.
See Process your data with pipelines to learn how data is processed in Collector pipelines.
See the Collector components documentation to see the available components you can add to the Collector configuration.
You can also use other ingestion methods, like the following:
To send data using the Splunk Observability Cloud REST APIs, see Send metrics, traces, and events using Splunk Observability Cloud REST APIs. For a minimal example, see the sketch after this list.
To send metrics using client libraries, see SignalFlow client libraries.
For information about using the upstream Collector, see Send telemetry using the OpenTelemetry Collector Contrib project.
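For example, here's a minimal Python sketch that sends one gauge datapoint through the ingest REST API. The realm, token, metric name, and dimensions are placeholder values for illustration; substitute your own:

```python
import requests

REALM = "us0"                        # placeholder: your organization's realm
ACCESS_TOKEN = "YOUR_INGEST_TOKEN"   # placeholder: an access token with ingest authorization

# Post a single gauge datapoint to the ingest endpoint.
response = requests.post(
    f"https://ingest.{REALM}.signalfx.com/v2/datapoint",
    headers={"X-SF-Token": ACCESS_TOKEN, "Content-Type": "application/json"},
    json={
        "gauge": [
            {
                "metric": "queue.depth",  # hypothetical metric name
                "value": 42,
                "dimensions": {"service": "checkout", "environment": "prod"},
            }
        ]
    },
)
response.raise_for_status()
```

The same payload format also accepts counter and cumulative_counter arrays, so a single request can carry multiple metric types.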
Automate the token rotation process
Because tokens expire after 1 year, automate token rotation using an API call. When the API rotates a token, it creates a new token, and the old token continues to work until the grace period you specify ends. Wherever the old token is in use, automate replacing it with the new token before the grace period ends.
For example, you can use the API to rotate the token that a Kubernetes cluster uses to ingest metrics and trace data. When you use the API to generate a new token, you can store the new token directly in the secret in the Kubernetes cluster as part of the automation.
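Here's a minimal Python sketch of that automation. It assumes the token rotation endpoint POST /v2/token/{name}/rotate with a graceful query parameter expressed in seconds; verify the exact signature and response fields against the Splunk Observability Cloud API reference:

```python
import requests

REALM = "us0"                       # placeholder: your organization's realm
USER_API_TOKEN = "YOUR_API_TOKEN"   # placeholder: a user API access token with admin rights
TOKEN_NAME = "k8s-cluster-ingest"   # hypothetical name of the token to rotate

# Rotate the token, keeping the old secret valid for a 7-day grace period.
response = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/token/{TOKEN_NAME}/rotate",
    headers={"X-SF-TOKEN": USER_API_TOKEN},
    params={"graceful": 7 * 24 * 3600},  # grace period in seconds (assumed unit)
)
response.raise_for_status()
new_secret = response.json()["secret"]  # assumed field name for the new token value

# As part of the same automation, update every place the old token is in use,
# for example by writing new_secret into the Kubernetes secret that the
# Collector reads its access token from.
```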
To learn more, see the following topics:
Use metrics pipeline management tools to reduce cardinality of metric time series (MTS)
As metrics data usage and cardinality grow in Splunk Infrastructure Monitoring, your cost increases. Use metrics pipeline management (MPM) tools within Splunk Infrastructure Monitoring to streamline storage and processing and reduce your overall monitoring cost. With MPM, you can make the following optimizations:
Streamline storage and processing to create a multitier metric analytics platform.
Analyze reports to identify where to optimize usage.
Use rule-based metrics aggregation and filtering on dimensions to reduce MTS volume.
Drop dimensions that are not needed.
You can configure these rules through the user interface, the API, and Terraform.
For comprehensive documentation on MPM, see Introduction to metrics pipeline management.
Review metric names and ingested data
To prepare for a successful scaled deployment, consider your naming conventions for tokens and custom metrics in Splunk Observability Cloud. A consistent, hierarchical naming convention for metrics makes it easier to find metrics, identify usage, and create charts and alerts across a range of hosts and nodes.
See Naming conventions for metrics and dimensions for guidance on creating a naming convention for your organization.
After bringing in metrics data, review the names and volume of the metrics each team is ingesting. Make sure the ingested data matches the naming convention for dimensions and properties.
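For illustration only, a hierarchical convention might follow a pattern like the following. The pattern and names are hypothetical examples, not a Splunk requirement:

```
<team>.<service>.<component>.<measurement>

payments.checkout.cart.latency
payments.checkout.cart.errors
inventory.sync.database.connections
```

A pattern like this lets a team match a whole family of metrics with a wildcard, such as payments.checkout.*, when building charts and detectors.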
Build custom dashboards and detectors
Dashboards are groupings of charts that visualize metrics. Use dashboards to give your team actionable insight into your system at a glance. Use detectors to monitor your streaming data against conditions that you specify, and to keep users informed when those conditions are met.
Build custom dashboards
Splunk Observability Cloud automatically adds built-in dashboards for each integration you use after it ingests 50,000 data points. Review these built-in dashboards when they are available. See View dashboards in Splunk Observability Cloud and Dashboards available.
Learn how to create and customize dashboards. Make sure your teams can complete these tasks:
Clone, share, and mirror dashboards.
Use dashboard filters and dashboard variables.
Add text notes and event feeds to your dashboards.
Use data links to dynamically link a dashboard to another dashboard or external system such as Splunk APM, the Splunk platform, or a custom URL.
For comprehensive documentation on these tasks, see Dashboards in Splunk Observability Cloud.
Build custom detectors
Splunk Observability Cloud also automatically adds the AutoDetect detectors that correspond to the integrations you are using. You can copy the AutoDetect detectors and customize them. See Use and customize AutoDetect alerts and detectors.
Create custom detectors to trigger alerts that address your use cases. See Introduction to alerts and detectors in Splunk Observability Cloud.
You can create advanced detectors that extend the basic list of alert conditions, for example with additional firing and alert-clearing conditions, or by comparing 2 functions using the population_comparison function.
See the library of SignalFlow detector examples on GitHub.
To get started with SignalFlow, see Analyze data using SignalFlow in the developer guide.
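To make this concrete, here's a hedged Python sketch that creates a simple detector through the REST API, with the detection logic expressed in SignalFlow. The endpoint and payload follow the public detector API model, but the metric name, threshold, and email address are placeholders; verify the request shape against the API reference before relying on it:

```python
import requests

REALM = "us0"                      # placeholder: your organization's realm
USER_API_TOKEN = "YOUR_API_TOKEN"  # placeholder: a user API access token

# SignalFlow program: fire when mean CPU utilization per host
# stays above 80 for 5 minutes.
program = (
    "cpu = data('cpu.utilization').mean(by=['host'])\n"
    "detect(when(cpu > 80, '5m')).publish('cpu_high')"
)

response = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": USER_API_TOKEN, "Content-Type": "application/json"},
    json={
        "name": "High CPU utilization",
        "programText": program,
        "rules": [
            {
                "detectLabel": "cpu_high",  # must match the publish() label
                "severity": "Critical",
                "notifications": [
                    {"type": "Email", "email": "oncall@example.com"}  # placeholder
                ],
            }
        ],
    },
)
response.raise_for_status()
```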
Onboard all users and teams
The final step of the scaled rollout phase is to onboard all users and teams and to configure who can view and modify various aspects of Splunk Observability Cloud.
See Manage users and teams to get started managing users, teams, and roles.
If you haven't already done so, turn on enhanced security to identify team managers and control who can view and modify dashboards and detectors. See Turn on enhanced team security.
Assign team-specific notifications for alerts triggered by the detectors that you set up. Team-specific notifications give your teams different escalation methods for their alerts. See Manage team notifications in Splunk Observability Cloud.
Optional and advanced configurations
Consider these optional and advanced configurations to customize your setup as they apply to your organization.
Use global data links to link properties to relevant resources
Create global data links to link Splunk Observability Cloud dashboards to other dashboards, external systems, custom URLs, or Splunk Cloud Platform logs. To learn more, see Link metadata to related resources using global data links.
Analyze and troubleshoot usage, limits, and throttles
To analyze and troubleshoot usage, make sure you know how to complete the following tasks:
Understand the difference between host-based and MTS-based subscriptions in Infrastructure Monitoring.
Understand the difference between host-based and traces-analyzed-per-minute (TAPM) subscriptions in APM.
Understand per-product system limits.
Read available reports, such as monthly and hourly usage reports, dimension reports, and custom metric reports.
To learn more, see the following topics:
Education resources
Before you start scaling up your use of the OpenTelemetry agents, consider the OpenTelemetry sizing guidelines. This is especially important on platforms such as Kubernetes, where various autoscaling services can cause sudden growth. For details about the sizing guidelines, see Sizing and scaling.
Coordinate with your Splunk Sales Engineer to register for the Splunk Observability Cloud workshop. See Splunk Observability Cloud Workshops.
To begin creating a training curriculum for your Splunk Observability Cloud end users, see Curated training for end users.