Splunk® Machine Learning Toolkit

User Guide



Predict Numeric Fields

The Predict Numeric Fields assistant uses regression algorithms to predict numeric values. Regression models are useful for determining how much certain peripheral factors contribute to a particular metric result. After the regression model is computed, you can use the values of those peripheral factors to predict the metric result.
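As a rough sketch of what such a regression model does, consider the following scikit-learn example (the Machine Learning Toolkit builds on scikit-learn). The data and field values here are illustrative toys, not real Splunk results.

```python
# A minimal sketch of regression-based prediction: fit a model on known
# peripheral factors, then predict the metric for unseen values.
# All numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Two peripheral factors (features) and the metric they drive:
# target = 5 * factor1 + 1 * factor2, with no noise.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([7.0, 11.0, 19.0, 23.0])

model = LinearRegression().fit(X, y)

# Predict the metric for unseen peripheral values.
print(round(model.predict([[5.0, 6.0]])[0], 1))  # 31.0 for this noiseless data
```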

This visualization, from the Predict Numeric Fields showcase example with the Server Power Consumption data in the Splunk Machine Learning Toolkit, is a scatter plot of the actual versus predicted results.


The Predict Numeric Fields assistant supports several regression algorithms, including Linear Regression (the default) and Random Forest Regressor.

Fit a model to predict a numeric field


  • For information about Preprocessing, see Preprocessing.
  • If you are not sure which algorithm to choose, start with the default algorithm, Linear Regression, or see Algorithms.


  1. Run a search.
  2. (Optional) To add preprocessing steps, click + Add a step.
  3. Select an algorithm from the Algorithm drop-down menu.
  4. Select a field from the Field to predict drop-down menu.
  5. When you select the Field to predict, the Fields to use for predicting drop-down menu populates with a list of fields that you can include in your model.

  6. Select a combination of fields from the Fields to use for predicting drop-down menu. In the server power showcase example, the drop-down menu contains a list of all the fields that could be used to predict ac_power using the Linear Regression algorithm.
  7. Split your data into training and testing data.
  8. Fit the model with the training data, and then compare your results against the testing data to validate the fit. The default split is 50/50, and the data is divided randomly into two groups.

  9. (Optional) Depending on the algorithm you selected, the toolkit may show additional fields to include in your model.
  10. To get information about a field, hover over it to see a tooltip. The example below shows the optional N estimators field from the Random Forest Regressor algorithm.

  11. Type the name of the model in Save the model as.
  12. You must specify a name for the model in order to fit it on a schedule or schedule an alert. This name and the settings you select are saved to the history in the Load Existing Settings tab.

  13. Click Fit Model.
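
The split-then-fit workflow in steps 7 and 8 can be sketched in scikit-learn terms, assuming the assistant's default 50/50 random split. The synthetic data below stands in for search results; field names and values are illustrative.

```python
# Sketch of the split-then-fit workflow: divide the data randomly into
# two equal groups, fit on one, and validate against the other.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 3))            # fields used for predicting
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2]    # numeric field to predict

# Default split is 50/50, divided randomly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Fit on the training half, then validate against the testing half.
model = LinearRegression().fit(X_train, y_train)
r_squared = model.score(X_test, y_test)
print(round(r_squared, 3))  # close to 1.0 for this noiseless linear data
```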

Interpret and validate

After you fit the model, review the prediction results and visualizations to see how well the model predicted the numeric field. You can use the following methods to evaluate your predictions.

Charts and results

Actual vs. Predicted Scatter Plot: This visualization plots the predicted values (yellow line) against the raw actual values (blue dots) for the predicted field. The yellow line shows the perfect result, which generally isn't attainable, but the closer the points are to the line, the better the model. Hover over the blue dots to see actual values.
Residuals Histogram: A histogram that shows the difference between the actual values (yellow line) and the predicted values (blue bars). Hover over the blue bars to see the residual error (the difference between the actual and predicted result) and the sample count (the number of results with that error). In a perfect world all the residuals would be zero. In reality, the residuals usually fall on a bell curve that is ideally clustered tightly around zero.
R² Statistic: This statistic indicates how well the model explains the variability of the result. A value of 1 (100%) means the model fits perfectly. The closer the value is to 1, the better the result.
Root Mean Squared Error: The root mean squared error describes the variability of the result; it is essentially the standard deviation of the residuals. The formula takes the difference between actual and predicted values, squares it, takes the average, and then takes the square root. This value can be arbitrarily large and gives you an idea of how close or far off the model is. These values only make sense within one dataset and shouldn't be compared across datasets.
Fit Model Parameters Summary: This summary displays the coefficients associated with each variable in the regression model. A relatively high coefficient value shows a high association of that variable with the result. A negative value shows a negative correlation.
Actual vs. Predicted Overlay: This chart shows the actual values against the predicted values, in sequence.
Residuals: The residuals show the difference between predicted and actual values, in sequence.
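
Both the R² statistic and the root mean squared error can be computed directly from the residuals. Here is a small worked example with made-up actual and predicted values:

```python
# Computing the residuals, RMSE, and R² statistic by hand for a small,
# made-up set of actual vs. predicted values.
import math

actual = [10.0, 12.0, 14.0, 16.0]
predicted = [9.0, 12.5, 13.5, 17.0]

# Residual: difference between each actual and predicted value.
residuals = [a - p for a, p in zip(actual, predicted)]

# RMSE: square the differences, take the average, then the square root.
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))

# R²: 1 minus the residual sum of squares over the total sum of squares.
mean_actual = sum(actual) / len(actual)
ss_res = sum(r * r for r in residuals)
ss_tot = sum((a - mean_actual) ** 2 for a in actual)
r_squared = 1.0 - ss_res / ss_tot

print(round(rmse, 3), round(r_squared, 3))  # 0.791 0.875
```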

Refine the model

After you validate the model, refine the model by adjusting the fields used to predict the numeric field and fit the model:

  1. Remove fields that might generate a distraction.
  2. Add more fields.
  3. In the Load Existing Settings tab, which displays a history of models you have fitted, sort by the R2 statistic to see which combination of fields yielded the best results.
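
The refinement loop can be simulated outside the toolkit: fit several candidate field combinations and compare their R² statistics, much like sorting the Load Existing Settings history. The field names and data below are made up for illustration.

```python
# Comparing field combinations by held-out R² and picking the best one.
# Synthetic data: y depends on "cpu" and "disk", not on "unrelated".
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 200
data = {
    "cpu": rng.uniform(0, 100, n),
    "disk": rng.uniform(0, 100, n),
    "unrelated": rng.uniform(0, 100, n),
}
y = 0.8 * data["cpu"] + 0.3 * data["disk"] + rng.normal(scale=2.0, size=n)

scores = {}
for fields in (["cpu"], ["cpu", "disk"], ["unrelated"]):
    X = np.column_stack([data[f] for f in fields])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=7)
    scores[tuple(fields)] = LinearRegression().fit(X_tr, y_tr).score(X_te, y_te)

best = max(scores, key=scores.get)
print(best)  # the combination with the highest R²
```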

Deploy the model

After you validate and refine the model, deploy it.

  1. Click the icon to the right of Fit Model to schedule model training.
  2. Set up a regular interval to fit the model, for example, once a week.

  3. (Optional) To access the model, click Scheduled Jobs > Scheduled Training in the menu.
  4. Click Open in Search to open a new Search tab.
  5. The new tab shows the search query that uses all of the data, not just the training set.

  6. Click Show SPL to see the search query that was used to fit the model.
  7. For example, you could use this same query on a different data set.

  8. Click Schedule Alert to set up an alert that triggers when the predicted value meets a threshold you specify.
  9. After you save the alert, you can access it from the Scheduled Jobs > Alerts menu.
  10. For more information about alerts, see Getting started with alerts in the Splunk Enterprise Alerting Manual.

Last modified on 11 April, 2018

This documentation applies to the following versions of Splunk® Machine Learning Toolkit: 2.4.0, 3.0.0, 3.1.0
