Splunk® Data Stream Processor

Connect to Data Sources and Destinations with DSP



On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator for which end-of-life has been announced. Gravity has been replaced with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.

Configure Pulsar to be exposed through a load balancer

In a 1.4.x processing cluster, DSP uses the open-source Apache Pulsar bundle. To configure a topic and use it for ingesting data on the processing cluster, follow these steps.

  1. Enable the proxy.
    dsp config set pulsar proxy_enabled=true
  2. Deploy your changes.
    dsp deploy pulsar
  3. Validate your changes.
    kubectl get pods -n pulsar
    kubectl get svc -n pulsar
  4. Change the service type to LoadBalancer.
    kubectl patch svc pulsar-proxy -n pulsar -p '{"spec": {"type": "LoadBalancer"}}'
  5. If you do not have a public IP enabled for the cluster, patch the service to use the node IP as an external IP for the load balancer. Look up the IP with the first command, then substitute it for <IP> in the patch command.
    curl ifconfig.me
    kubectl patch svc pulsar-proxy -n pulsar -p '{"spec": {"type": "LoadBalancer", "externalIPs":["<IP>"]}}'
  6. Create a topic, confirm that it appears in the topic list, and set a retention policy on the namespace.
    ./pulsar-admin topics create-partitioned-topic persistent://public/default/direct-partitionedmay23 -p 2
    ./pulsar-admin topics list public/default | wc -l 
    ./pulsar-admin namespaces set-retention public/default -s -1 -t 1h
  7. Retrieve the TLS certificate, private key, and CA certificate from the broker secret.
    kubectl get secret -n pulsar pulsar-tls-broker -o json | jq -r '.data."tls.crt"' | base64 --decode > my-cert.pem
    kubectl get secret -n pulsar pulsar-tls-broker -o json | jq -r '.data."tls.key"' | base64 --decode > my-key.pem 
    kubectl get secret -n pulsar pulsar-tls-broker -o json | jq -r '.data."ca.crt"' | base64 --decode > ca-cert.cert
  8. Create an Apache Pulsar connection in your DSP cluster. See Create a DSP connection to Apache Pulsar for more information.
  9. Create a pipeline and begin ingesting data. A sketch for sending a test message to the new topic from outside the cluster follows this list.
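The following is a minimal sketch for sending a test message to the new topic through the exposed proxy, using the Python pulsar-client library and the certificate files retrieved in step 7. The pulsar+ssl service URL, the 6651 TLS port, and the use of TLS client authentication are assumptions based on a typical Pulsar proxy deployment, not DSP-specific requirements; substitute the external IP from step 5 for <IP> and adjust the settings to match your cluster.

# send_test_message.py: a hypothetical verification script, not part of DSP.
# Requires the Python client library: pip install pulsar-client
import pulsar

# External IP of the pulsar-proxy LoadBalancer service from step 5, and the
# default Pulsar TLS port (adjust if your proxy listens on a different port).
SERVICE_URL = 'pulsar+ssl://<IP>:6651'

client = pulsar.Client(
    SERVICE_URL,
    # TLS client authentication with the certificate and key from step 7.
    authentication=pulsar.AuthenticationTLS('my-cert.pem', 'my-key.pem'),
    # CA certificate from step 7, used to verify the broker certificate.
    tls_trust_certs_file_path='ca-cert.cert',
)

# The partitioned topic created in step 6.
producer = client.create_producer('persistent://public/default/direct-partitionedmay23')
producer.send(b'test message sent through the load balancer')

client.close()

If the message is accepted, the topic is reachable from outside the cluster, and the connection and pipeline from steps 8 and 9 can read the data that you send to it.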
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.2, 1.4.3

