
Databricks receiver

The Databricks receiver allows the Splunk Distribution of OpenTelemetry Collector to collect metrics from Databricks using the Databricks API. Use this integration to view and monitor the health of your Databricks clusters. The receiver also generates metrics from the Spark subsystem running in a Databricks instance. The supported pipeline type is metrics.

Benefits

After you configure the integration, you can access the following features:

Get started

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the Databricks receiver as described in the next section.

  3. Restart the Collector.

Sample configurations

To activate the Databricks receiver, add databricks to the receivers section of your configuration file with all the mandatory fields, as shown in the following example:

databricks:
  instance_name: my-instance
  endpoint: https://dbr.example.net
  token: abc123
  spark_org_id: 1234567890
  spark_endpoint: https://spark.example.net
  spark_ui_port: 40001
  collection_interval: 10s

To complete the configuration, include the receiver in the metrics pipeline of the service section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers: [databricks]
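Putting the two snippets together, a minimal end-to-end configuration might look like the following sketch. The signalfx exporter and its settings shown here are assumptions for illustration; use whichever exporter your deployment already defines.

```yaml
receivers:
  databricks:
    instance_name: my-instance
    endpoint: https://dbr.example.net
    token: abc123
    spark_org_id: 1234567890
    spark_endpoint: https://spark.example.net
    spark_ui_port: 40001

exporters:
  # Assumed exporter for illustration; substitute your own.
  signalfx:
    access_token: <your-access-token>
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [databricks]
      exporters: [signalfx]
```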

Spark subsystem configuration

To obtain the Spark settings for the receiver configuration, run the following Scala notebook and copy the values into your configuration:

%scala
val sparkOrgId = spark.conf.get("spark.databricks.clusterUsageTags.clusterOwnerOrgId")
val sparkEndpoint = dbutils.notebook.getContext.apiUrl.get
val sparkUiPort = spark.conf.get("spark.ui.port")
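To transfer these values into the receiver configuration, it can help to render them under the YAML key names the receiver expects. The following self-contained sketch uses hardcoded placeholder values; in a notebook you would pass in the sparkOrgId, sparkEndpoint, and sparkUiPort values produced by the cell above.

```scala
// Illustrative helper: formats the three Spark values as the YAML
// keys used in the Databricks receiver configuration.
object RenderSparkConfig {
  def render(orgId: String, endpoint: String, uiPort: String): String =
    s"""spark_org_id: $orgId
       |spark_endpoint: $endpoint
       |spark_ui_port: $uiPort""".stripMargin

  def main(args: Array[String]): Unit =
    // Placeholder values; replace with the notebook outputs.
    println(render("1234567890", "https://spark.example.net", "40001"))
}
```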

Settings

The following settings are available for the Databricks receiver:

  • instance_name (required): A string representing the name of the instance. This value is set as the databricks.instance.name resource attribute.

  • endpoint (required): The protocol (http or https), host name, and port for the Databricks API, without a trailing slash.

  • token (required): An access token for authenticating to the Databricks API. For more information, see Authentication using Databricks personal access tokens on the Databricks documentation site.

  • spark_org_id (required): The Spark org ID. See Spark subsystem configuration for how to find this value.

  • spark_endpoint (required): The Spark API endpoint, composed of the protocol, host name, and, optionally, the port. See Spark subsystem configuration for how to find this value.

  • spark_ui_port (required): The Spark UI port, usually 40001. See Spark subsystem configuration for how to find this value.

  • collection_interval (optional): How often this receiver fetches information from the Databricks API. Must be a string readable by Go's time.ParseDuration, such as 10s or 1m. The default value is 30s.

  • max_results (optional): The maximum number of items to return per API call. If set explicitly, the value must be greater than 0 and less than or equal to 25. The default value is 25, which is also the maximum.
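For example, if the defaults poll more often than you need, you might lengthen the collection interval and reduce the page size. The values below are illustrative only:

```yaml
databricks:
  instance_name: my-instance
  endpoint: https://dbr.example.net
  token: abc123
  spark_org_id: 1234567890
  spark_endpoint: https://spark.example.net
  spark_ui_port: 40001
  collection_interval: 1m
  max_results: 10
```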

Metrics

The following metrics, resource attributes, and attributes are available.

Activate or deactivate specific metrics

You can activate or deactivate specific metrics by setting the enabled field in the metrics section for each metric. For example:

receivers:
  samplereceiver:
    metrics:
      metric-one:
        enabled: true
      metric-two:
        enabled: false

The following is an example of host metrics receiver configuration with activated metrics:

receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true

Note

Deactivated metrics aren't sent to Splunk Observability Cloud.

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers

Available to customers and free trial users

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

To learn about even more support options, see Splunk Customer Success.