Splunk® App for Data Science and Deep Learning

Use the Splunk App for Data Science and Deep Learning

Using the Deep Learning Text Classification Assistant

The Deep Learning Text Classification Assistant lets you develop deep-learning-based text classification solutions. This Assistant is available in both English and Japanese.

Text classification is the task of categorizing a given text into one or more predefined classes. Deep, bidirectional, pre-trained language models have shown superior text classification performance over conventional methods. This Assistant trains a classifier based on your customized text data and defined target classes.

The Deep Learning Text Classification Assistant uses the Bidirectional Encoder Representations from Transformers (BERT) model as the base model for text classification tasks. Text can be a sentence or a dialog. The BERT model is a Transformer-based language model pre-trained on large unlabeled datasets to learn high-level representations of natural language sequences, and can be flexibly fine-tuned for various downstream tasks, including text classification. To perform text classification, the Assistant adds a classification head on top of the BERT model that consists of two linear layers with a non-linear ReLU activation in between.

To learn more about the Bidirectional Encoder Representations from Transformers (BERT) model, see https://arxiv.org/pdf/1810.04805.pdf

Based on the BERT model, the Deep Learning Text Classification Assistant guides you through the following model development phases:

  • A fine-tuning stage to create text classification models with customizable data inputs.
  • An evaluation stage to validate the performances of the fine-tuned text classification models.
  • A management stage for the model files.

Prerequisites

To use the Deep Learning Text Classification Assistant successfully, the following is required:

  • You must have a Docker container up and running.
  • You must have a transformers-GPU or a transformers-CPU development container up and running. This development container must include the transformers_summarization.ipynb notebook.
  • For improved performance, run the transformers-GPU container on a GPU machine. The minimum GPU machine specs are as follows:
    • 1 GPU
    • 4 vCPU
    • 16 GB RAM
    • 200 GB Disk Storage

Before you begin

You can configure a macro for the index name storing your Splunk App for Data Science and Deep Learning (DSDL) Docker logs. This enables you to track progress at the fine-tuning stage. You can still fine-tune a model without the DSDL Docker logging, but any fine-tuning progress tracking is disabled. To learn how to configure DSDL Docker logging, see Configure the Splunk App for Data Science and Deep Learning.

Running the fit command for fine-tuning can be time consuming. The run time varies greatly depending on the amount of data and the number of epochs. For example, a use case with 5,000 utterances running for 10 epochs can take approximately 5,000 seconds, or about 83 minutes. Larger data amounts and higher epoch numbers increase these run times.

On the search head, the fit command terminates if it runs longer than the max_fit_time setting. The fit process keeps running in the DSDL container and tries to finish building the model, but under these circumstances the stub model cannot pass its parameters to the model in the DSDL container. Start by fitting a small amount of data, such as fewer than 1,000 utterances for fewer than 5 epochs, to make sure the stub model properly passes its parameters to the DSDL container.
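For example, a small trial fit might be sketched as follows. The algo value transformers_classification and the epochs, batch_size, and model name values shown here are illustrative assumptions; the Assistant's Review Fine-Tuning SPL step generates the exact SPL for your configuration:

| inputlookup classification_en.csv
| fields - TITLE ID
| rename * as cat1_*, cat1_ABSTRACT as text
| head 1000
| fit MLTKContainer algo=transformers_classification epochs=3 batch_size=4 * into app:my_trial_model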

You can improve the performance of the fit command in DSDL by increasing the max_fit_time to at least 7200 for the MLTKContainer algorithm. To learn how to change this value, see Edit default settings in the MLTK app in the Splunk Machine Learning Toolkit User Guide.
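If you manage settings in configuration files instead of the UI, this value lives in mlspl.conf. The following is a sketch only; the stanza name for the MLTKContainer algorithm is an assumption, so verify it against your local mlspl.conf before editing:

[MLTKContainer]
max_fit_time = 7200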

Fine-tuning stage

Use the fine-tuning stage to define your customized data, configure the fine-tuning parameters, and fine-tune the base model for the text classification task.

Select the Fine-tune your Transformers BERT model panel to start fine-tuning a text classification model. The Assistant guides you through the next steps.

(Screenshot: the first screen after selecting the Deep Learning Text Classification Assistant from the main navigation menu, showing three panels: Fine-tune your Transformers BERT model, Evaluate your model, and Manage your model. The Fine-tune your Transformers BERT model panel is highlighted.)

Prepare training data

You must provide at least 5,000 events in your training data to get satisfactory results.

Training data must contain a text field with the field name text, and one or more class fields that contain 1 or 0, indicating whether or not the text belongs to the class. Because the fine-tuner treats all fields other than the text field as class input fields, make sure you remove all unnecessary fields before proceeding.

In cases where some classes belong to the same category, rename the class fields with prefixes such as cat1_*, cat2_*, and catn_* depending on the category that class belongs to.

The Assistant displays the following example SPL in the data input field. You can explore the Assistant using the provided example prior to working with your own data:

| inputlookup classification_en.csv
| fields - TITLE ID
| rename * as cat1_*, cat1_ABSTRACT as text
| head 10

When working with your own training data, input it to the Assistant as a lookup table.

Your training data must use the exact field name text for the text field to ensure successful fine-tuning.
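For example, if your own training data lives in a lookup named my_training_data.csv (a hypothetical name) with a review column holding the text and a spam column holding 1 or 0 labels, preparation might look like this sketch:

| inputlookup my_training_data.csv
| rename review as text, spam as cat1_spam
| fields text cat1_spam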

When ready, click the search icon on the right side of the search bar. The search results are generated and the Fine-Tuning Settings panel appears at the bottom of the page, leading you to the next step in the fine-tuning process.

Set fine-tuning settings

In this step, select the language, base model, and training parameters for your model:

  1. Select the language from the drop-down menu. The default language is English (en).
  2. Select the base model for fine-tuning from the drop-down menu. The drop-down menu includes the bert_classification_en base model and other models you have created. The fine-tuning is performed on the selected base model.
  3. Set your training parameters. See the following table for parameter descriptions:
    Parameter name Type Description Default value
    Target model name string The name of the output model after fine-tuning. This cannot be the same name as the base model. tbd
    Batch size integer The batch size of training data during fine-tuning. Reduce the batch size when memory size is limited. 4
    Max epoch integer The maximum training epochs for the fine-tuning. 10
  4. Select Review Fine-Tuning SPL. Use the resulting review to examine your settings before you run the fine-tuning.

Run fine-tuning SPL

You can assess the fine-tuning on the training data by selecting a score type from the Score drop-down menu. The default score type is Confusion Matrix. Accuracy, Precision, and Recall are also supported. Scoring results are displayed at the bottom of the page. For all score types, values closer to 1 indicate better performance.

You can also choose a category prefix to display in the results. The default category prefix is cat1.
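Scoring like this can also be expressed directly in SPL with the Machine Learning Toolkit score command. In the following sketch, cat1_ClassA and predicted_cat1_ClassA are hypothetical field names standing in for one of your class fields and its prediction:

... | score confusion_matrix cat1_ClassA against predicted_cat1_ClassA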

Begin the fine-tuning process by selecting Run Fine-Tuning SPL. A new panel appears to display the estimated fine-tuning process duration.

If you have configured DSDL Docker logs, then the logging information appears with the training process duration and includes the following values:

Value Description
done_epoch Number of finished training epochs.
elapsed_time Duration of the past epoch.
training_loss Value of training loss. The reduction of training loss indicates successful fine-tuning progress.
valid_loss Value of validation loss. Validation loss should decrease along with the training loss. A growing validation loss indicates overfitting during training.
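With DSDL Docker logging configured, you can chart these values yourself. This is a sketch; replace dsdl_container_logs with the index name you configured in the macro:

index=dsdl_container_logs training_loss=*
| table _time done_epoch elapsed_time training_loss valid_loss
| sort + done_epoch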

When the fine-tuning is finished, a Fine-Tuning Results panel appears at the bottom of the page. This panel displays the following fields:

Field Description
text_snip A shortened version of the input text.
cat* Ground truth of the classification target class.
predicted_cat* The predicted class of the text.

When you are happy with the fine-tuning outcome, select Done. You can then copy the fine-tuning SPL into a scheduled search to run the fine-tuning process periodically to fit your business needs.

Evaluation stage

Use the evaluation stage to score the model performance on customized input data. Select the Evaluate your model panel to start evaluating a text classification model. The Assistant guides you through the next steps.

Prepare test data

Preparation of the test data is similar to the training data preparation in the fine-tuning stage. The test data must contain a text field with the field name text, and one or more class fields that contain 1 or 0, indicating whether or not the text belongs to the class.
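For example, building on the example lookup from the fine-tuning stage, a held-out test set might be sketched as follows, where the tail command simply selects a different slice of the data than the training example:

| inputlookup classification_en.csv
| fields - TITLE ID
| rename * as cat1_*, cat1_ABSTRACT as text
| tail 10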

Choose the Evaluation settings

Select the language and the model you want to evaluate from the dropdown menus at the bottom of the page. Select Review Evaluation SPL to move on to the next step.

Run the Evaluation SPL

Begin by selecting a score type from the Score drop-down menu. The default score type is Confusion Matrix. Accuracy, Precision, and Recall are also supported. Scoring results are displayed at the bottom of the page. For all score types, values closer to 1 indicate better performance.

You can also choose a category prefix to display in the results. The default category prefix is cat1.

Select Run Evaluation SPL to start the model evaluation process. A new panel appears displaying the estimated evaluation duration. Once the evaluation is finished you can view the Evaluation Results panel. The panel displays the following fields:

Field name Field content
text_snip A shortened version of the input text.
cat* Ground truth of the classification target class.
predicted_cat* The predicted class of the text.
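Evaluation ultimately applies the fine-tuned model to the test data. In DSDL this is typically done with the apply command; in the following sketch, my_finetuned_model is a hypothetical name, and the exact SPL, including how the model is referenced, comes from the Review Evaluation SPL step:

| inputlookup classification_en.csv
| fields - TITLE ID
| rename * as cat1_*, cat1_ABSTRACT as text
| tail 10
| apply app:my_finetuned_model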

Select Done to complete the evaluation stage. You can then copy the evaluation SPL into a scheduled search to run the evaluation process to fit your business needs.

Management stage

Use the management stage to retrieve information on your existing model files in the containers, and to remove any unused model files.

Transformers models can be large. Monitor the available storage space in your fine-tuning progress panel and free up space by removing unnecessary models.

Use the Manage your models panel to access the management interface for the text classification models. The management interface displays information for all existing classification models in the order of language, class, model, size, and container. The class information indicates whether a model is a base model or an inheritor (fine-tuned model).

You can remove any inheritor model by selecting Delete.

Use the Delete action with caution as model deletion cannot be undone.

Last modified on 11 July, 2024

This documentation applies to the following versions of Splunk® App for Data Science and Deep Learning: 5.1.0, 5.1.1, 5.1.2

