Create a pipeline using the Canvas Builder
The Canvas Pipeline Builder lets you build a pipeline incrementally using graphical user interface elements. You can then configure your pipeline using SPL2 (Search Processing Language 2) expressions. Use the Canvas Builder if you want:
- Assistance building a pipeline, with a UI for configuring each of your functions.
- To create a pipeline with many branches and see a more visual representation of that pipeline.
The pipeline toggle button is currently a beta feature, and repeatedly toggling between the two builders can lead to unexpected results. If you are editing an active pipeline, using the toggle button can lead to data loss.
Create a new pipeline using the Canvas Builder
1. From the Data Stream Processor home page, click Build pipeline.
2. Click a data source or a template. The canvas opens. If you selected a template, skip to step 10.
3. Add a name and a description to your pipeline: click the More options menu, and then select Update Pipeline Metadata. If you are creating a data pipeline for a different application, include the name of the application in the pipeline name or use the description field to note which application depends on this pipeline.
4. Configure your source function. If you are using a connection-based function, create a connection before using this function. See Get data in using a connector. In addition, some source functions have unique pipeline requirements. See the Unique pipeline requirements for specific data sources chapter in the left navigation bar.
5. Click the + icon to add a function. A navigation bar appears on the right.
6. Select a function from the navigation bar. For a full list of available functions, see the Data Stream Processor Function Reference manual.
7. Enter the function attributes.
8. Once at least one downstream function is defined, click the branch icon to send your data stream to a different downstream function.
9. End your pipeline with a sink function.
10. Click Validate to validate your pipeline. If any functions are invalid, the first invalid function in your pipeline is highlighted in red, and all subsequent functions are in an unknown state denoted by a question mark. Select the function and click the Preview Results tab. A sample of 100 events is sent through your pipeline; use this sample to check and correct the function output.
11. Click Save.
12. (Optional) Click Activate to activate your pipeline. If this is the first time you are activating your pipeline, do not enable any of the optional Activate settings.
13. Click the Data Management tab to go to the Data Management page. After saving your pipeline, use the Data Management page to edit, delete, clone, or activate your pipeline.
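Behind the scenes, the pipeline you assemble in these steps corresponds to a single SPL2 statement. The following is a minimal sketch of what such a statement can look like; the function names and arguments shown here (`splunk_firehose`, `index`, the `source_type` field) are illustrative placeholders only, and your actual source and sink functions depend on your data sources and connections.

```spl2
| from splunk_firehose()
| eval st=lower(source_type)
| into index("", "main");
```

Each Canvas function you add maps to one function in the statement, read left to right from source to sink.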
Edit a pipeline
You can edit your pipelines from the Data Management page.
1. Click the Data Management tab to go to the Data Management page.
2. Click Edit on the pipeline that you want to edit. If the pipeline is currently active, clicking Edit takes you to a copy of the active pipeline. Any changes you make are made on that copy and don't affect the active version of your pipeline. If the pipeline is currently inactive, your changes are made directly to the pipeline.
3. Click Save to save changes to your pipeline.
4. (Optional) Click Activate or Activate with changes to activate your pipeline. If you were working on a copy of an active pipeline, this updates your currently active pipeline to the latest version.
   If you run into issues while reactivating your pipeline, you might need to update your activation checkpoint. See Using activation checkpoints to activate your pipeline.
5. Click the Data Management tab to return to the Data Management page. After saving your pipeline, use the Data Management page to edit, delete, clone, or activate your pipeline.
6. (Optional) If you were editing a pipeline that is currently active, click the More Options menu in the pipelines listing table and select Upgrade to latest. This updates your pipeline to the latest version. This might impact performance, but no data is lost during the process.
7. (Optional) To see an overview of the current version of an active pipeline, click the name of the pipeline.
Differences and inconsistent behaviors between the Canvas Builder and the SPL2 Pipeline Builder
You can toggle between the Canvas Builder and the SPL2 Pipeline Builder. The two builders differ in the following ways.
| Canvas Builder | SPL2 Pipeline Builder |
| --- | --- |
| Accepts SPL2 expressions as input. | Accepts SPL2 statements as input. SPL2 statements support variable assignments and must be terminated by a semicolon. |
| For source and sink functions, optional arguments that are left blank are automatically filled in with their default values. | Optional arguments in source and sink functions are positional: to specify an optional argument, you must provide all arguments that come before it, and once you omit an optional argument, you must omit all subsequent ones. |
| When using a connector, you can select the connection name for the connection-id argument from a dropdown in the UI. | When using a connector, the connection-id must be explicitly passed as an argument. You can view the connection-id of your connections on the Connections Management page. |
| Source and sink functions have implied | Source and sink functions require explicit |
| You can stop a preview session at any time in the Canvas Builder by clicking Stop Preview. | Once you click Build, the resulting preview session cannot be stopped. To stop the preview session, switch over to the Canvas Builder. |
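To illustrate the expression-versus-statement distinction in the first row: in the Canvas Builder, you enter only an expression into a single function's UI, for example `lower(source_type)` in an Eval function. In the SPL2 Pipeline Builder, the same logic must be part of a complete, semicolon-terminated statement. The sketch below is illustrative only; the source and sink function names and arguments are placeholders, not a prescription for your pipeline.

```spl2
| from splunk_firehose()
| eval st=lower(source_type)
| into index("", "main");
```

Note the trailing semicolon: omitting it is valid for a Canvas expression but makes an SPL2 statement invalid.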
Toggling between the SPL2 Pipeline Builder and the Canvas Builder (BETA)
The toggle button is currently a beta feature. Repeated toggles between the Canvas Builder and the SPL2 Pipeline Builder may produce unexpected results. The following table describes specific use cases where unexpected results have been observed.
If you are editing an active pipeline, toggling between the Canvas Builder and the SPL2 Pipeline Builder can lead to data loss because the pipeline state cannot be restored. Do not use the pipeline toggle if you are editing an active pipeline.
| Use case | Unexpected result |
| --- | --- |
| Variable assignments | Variable names are changed and statements may be reordered. |
| Renamed function names | Functions are reverted back to their default names. |
| A source or sink function with optional arguments left blank | Optional arguments are set to their default values when toggling from the Canvas Builder to the SPL2 Pipeline Builder. |
| Comments | Comments are stripped out when toggling between builders. |
| A stats or aggregate with trigger function that uses an evaluation scalar function within the aggregation function. | Each function is called separately when toggling from the SPL2 Pipeline Builder to the Canvas Builder. For example, if your |
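As an example of the variable-assignment case in the first row, a statement that binds a stream to a variable might look like the following hedged sketch. All names here are hypothetical placeholders, including the variable `$source_events` and the source and sink functions; the point is only the shape of the assignment.

```spl2
$source_events = | from splunk_firehose();

| from $source_events
| eval st=lower(source_type)
| into index("", "main");
```

After toggling to the Canvas Builder and back, a variable such as `$source_events` may be given a different generated name, and the two statements may appear in a different order.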
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0