Known issues for DSP
This version of the Splunk Data Stream Processor has the following known issues and workarounds.
If no issues appear here, no issues have yet been reported.
Date filed | Issue number | Description |
---|---|---|
2021-12-03 | DSP-43606 | Server can be attacked by slow clients during TLS handshake |
2021-08-04 | DSP-41040 | Checkpoint cleanup for pull-based connectors causes performance issues. |
2021-06-01 | DSP-38591 | Subseconds are not sent to Firehose |
2021-02-23 | DSP-34031 | In HEC sink, unhandled exception causes pipeline restarts |
2020-12-15 | DSP-31160 | HEC sink leaks memory when ACK is disabled |
2020-05-06 | DSP-20937 | Upgrading directly from 1.0.0 to 1.1.0 fails. Workaround: Upgrade to 1.0.1 first, then upgrade to 1.1.0. |
2020-02-10 | DSP-17563 | Unable to upgrade a DSP cluster if the hosts have 6GB or less of free space on the root volume. Workaround: Choose a different directory to write temporary files to during the upgrade process. Run the following command in your cluster: export TMPDIR=/<directory-on-larger-volume> |
2020-01-30 | DSP-17242 | lsdc GET connectors returns lsdc-query-error after installation. Workaround: Delete the collect-service and gc pods in the lsdc namespace so that they can restart with the correct credentials. |
2019-11-11 | DSP-14679 | Splunk Enterprise Sink changes numeric value in body to an array of numbers |
2019-11-11 | DSP-14567 | Topic ingest-default-input may be created with incorrect configuration during installation |
2019-11-08 | DSP-14655 | After updating the UI certificate and deploying, nginx does not automatically restart. Workaround: Manually delete the dsp-reverse-proxy pod, and it will pick up the new configuration. |
2019-11-08 | DSP-14677 | Splunk Enterprise Sink does not provide a valid empty "event" entry in HEC event JSON when the "body" field is empty in a DSP record. Workaround: Ensure that the body field of DSP records is present and not empty when sending records into the DSP Splunk Enterprise sink functions (write_splunk_enterprise and write_index). |
2019-11-05 | DSP-14603 | New installations of DSP v1.0.0 on multi-node clusters create only one Kafka replica and partition. Workaround: Manually reconfigure the Kafka topic to have a replication factor of 3, and increase the number of partitions. |
2019-11-05 | DSP-14507 | Unable to launch pods due to "cannot allocate memory" caused by leaked cgroups. Workaround: If your host system is running systemd v231 or earlier (generally CentOS 7 or RHEL 7), there is a bug where cgroup resources are not cleaned up. This error manifests as the following on pod creation: Warning FailedCreatePodContainer 3s (x2 over 15s) kubelet, 1.2.3.4 unable to ensure pod container exists: failed to create container for [kubepods ...] : mkdir /sys/fs/cgroup/memory/kubepods/...: cannot allocate memory. Workaround 1: Upgrade systemd to v232 or later. Workaround 2: Add the following kernel parameter and reboot the affected machines: grubby --args=cgroup.memory=nokmem --update-kernel /boot/vmlinuz-<kernel_image> Replace <kernel_image> as appropriate for your kernel. |
2019-10-31 | DSP-14570 | Possible data loss when using the S3, CloudWatchMetrics, or Azure connectors. |
2019-10-30 | DSP-14699, SCP-18898 | scloud ingest post-metrics command does not work. Workaround: Use cURL to ingest metrics events instead. |
2019-10-29 | DSP-14566, DSP-14614 | Login after logout fails to redirect the user. Workaround: If you log out and then try to log back in, the redirect does not work properly. Navigate to the app URL directly instead. |
2019-10-29 | DSP-14767 | K8S_PIPELINES_DATA_SPLUNKD_SSL_CERT_BASE64 cannot be altered by deployments. Workaround: SSL certificate validation cannot be enabled directly. If SSL is enabled on the HEC endpoint, run the following on the master node of your cluster: ./set-config K8S_PIPELINES_DATA_SPLUNKD_SSL_VALIDATION_ENABLED false Then run ./deploy after setting the variable. This turns off SSL certificate validation but still uses HTTPS to send events to Splunk Enterprise. |
2019-10-28 | DSP-14702 | Some streaming functions attempt to resolve their record schema before it is determined. Workaround: If you see an unexpected "[input] argument must be a stream, found <something>" error, add a Normalize function (one that does not change the input schema) before the failing function. |
2019-10-28 | DSP-14705 | Aggregations with higher Flink job parallelism than group-by values do not send data. Workaround: Pipelines that perform aggregations shuffle data between subtasks; if one aggregation subtask receives no data during its time window, none of the time windows close. Either: (1) If your pipeline reads from Apache Kafka, set the consumer property "dsp.flink.parallelism=1" on the connection or source function, though this could dramatically reduce your throughput. (2) Send more data with different values for the field you group by. |
2019-10-25 | DSP-14645 | The DSP UI changes certain floats to ints. Workaround: Any floating-point value whose decimal part is zero (such as 1.0 or 2.0) must be cast to a float. For example, instead of div(5, 2.0), use div(5, cast(2, "float")). Floating-point values with non-zero decimal parts (such as 2.5) are *not* affected and can be used the expected way. |
2019-10-24 | DSP-14721 | Collect Service occasionally fails to retain environment variables on installation. Workaround: If you see a "pq: password authentication failed for user" error, do the following: From a node in your cluster, run "sudo kubectl -n lsdc get pods" to list the pods in the lsdc namespace. Then delete the pods that begin with collect-service-* by running "sudo kubectl -n lsdc delete pods -l app=collect-service". |
2019-10-16 | DSP-14573 | If one node is removed from the cluster, MinIO may become read-only, causing pipeline checkpointing to fail. Workaround: No workaround at this time. MinIO does not yet support expanding its cluster. |
2019-09-30 | DSP-13537 | Flink checkpoint files do not get cleaned up in Minio |
2019-08-07 | DSP-12232 | Streams API returns HTTP 500s when receiving any 400- or 500-series HTTP response from the identity service. |
2019-07-16 | DSP-11196 | Binary values are incorrectly represented as Base64. Workaround: Preview displays binary data as Base64-encoded strings. If the data is a string type that is Base64-encoded, the icon at the top of the column shows a lower-case "a". If the data is a bytes type, the icon shows a capital "B". Because the data is Base64-encoded, you can decode it back to raw bytes for further inspection. |
2019-05-31 | DSP-8731 | DSP does not yet support LZ4 compression in reads and writes to Kafka |
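The TMPDIR workaround for DSP-17563 above can be sketched as follows. The directory path is a hypothetical example; substitute any directory on a volume with more than 6GB free before starting the upgrade:

```shell
# Hypothetical example: point temporary files at a directory on a larger volume.
# Replace the path with a directory on a volume that has enough free space.
export TMPDIR="$HOME/dsp-upgrade-tmp"

# Create the directory if it does not already exist.
mkdir -p "$TMPDIR"

# Confirm the variable is set before running the upgrade.
echo "Temporary files will be written to: $TMPDIR"
```

Because TMPDIR is an environment variable, it must be exported in the same shell session that runs the upgrade, or it will not take effect.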
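The pod-deletion workaround for DSP-14721 above can be sketched as a small script. This is an untested sketch that assumes sudo and kubectl access on a cluster node; the KUBECTL variable defaults to a dry run that only prints the commands, so set it to the real binary before relying on it:

```shell
# Dry-run sketch of the DSP-14721 workaround. Change KUBECTL to "sudo kubectl"
# on a real cluster node; the default below only echoes the commands.
KUBECTL="echo sudo kubectl"

# List the pods in the lsdc namespace to confirm the collect-service pods are failing.
$KUBECTL -n lsdc get pods

# Delete the collect-service pods; Kubernetes recreates them, and the new pods
# pick up the correct credentials.
$KUBECTL -n lsdc delete pods -l app=collect-service
```

The label selector (-l app=collect-service) targets all collect-service-* pods at once, so you do not need to copy individual pod names from the listing.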
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.0.0