
Using the Deep Learning Text Summarization Assistant

The Deep Learning Text Summarization Assistant lets you develop deep-learning-based text summarization solutions. This Assistant is available in both English and Japanese.

Text summarization produces a shorter version of a text document while preserving its important information. Advances in deep neural networks have made transformer-based models capable of generating abstractive summaries that are informative, read like natural language, and achieve state-of-the-art performance.

The base model provided by the Deep Learning Text Summarization Assistant offers basic text summarization capabilities. The Assistant leverages the Text-to-Text Transfer Transformer (T5) model as the base model for text summarization tasks. The T5 model is a transformer-based language model pre-trained on large unlabeled datasets to learn high-level representations of natural language sequences. The T5 model can be fine-tuned for various downstream tasks, including text summarization.

To learn more about the Text-to-Text Transfer Transformer (T5) model, see https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html.

Built on the T5 model, the Deep Learning Text Summarization Assistant guides you through the following model development stages:

  • A fine-tuning stage to create text summarization models with customizable data inputs.
  • An evaluation stage to validate the performance of the fine-tuned text summarization models.
  • A management stage to retrieve information on existing model files and remove unused model files.

Prerequisites

To use the Deep Learning Text Summarization Assistant successfully, you must meet the following requirements:

  • You must have a Docker container up and running.
  • You must have a transformers-GPU or a transformers-CPU development container up and running. This development container must include the transformers_summarization.ipynb notebook.
  • For improved performance, run the transformers-GPU container on a GPU machine. The minimum GPU machine specifications are as follows:
    • 1 GPU
    • 4 vCPU
    • 16 GB RAM
    • 200 GB disk storage

Before you begin

You can configure a macro for the index name storing your Splunk App for Data Science and Deep Learning (DSDL) Docker logs. This enables you to track progress at the fine-tuning stage. You can still fine-tune a model without the DSDL Docker logging, but any fine-tuning progress tracking is disabled. To learn how to configure DSDL Docker logging, see Configure the Splunk App for Data Science and Deep Learning.

Running the fit command for fine-tuning can be time consuming. The run time varies greatly with the amount of data and the number of epochs. For example, a use case with 5,000 utterances running for 10 epochs can take approximately 5,000 seconds, or about 83 minutes. Larger data volumes and higher epoch counts increase these run times.

On the search head, the fit command terminates if it runs longer than max_fit_time, but the fit process keeps running in the DSDL container and tries to finish building the model. Under these circumstances, the stub model cannot pass parameters to the model in the DSDL container. Start by fitting a small amount of data, such as fewer than 1,000 utterances for fewer than 5 epochs, to confirm that the stub model properly passes parameters to the DSDL container.
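For example, a first test run might limit the Assistant's bundled example dataset before fitting, as in the following sketch. The fit portion of the SPL is generated for you in the fine-tuning stage:

| inputlookup customer_support_en | head 1000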

You can give long-running fit commands in DSDL more time to complete by increasing max_fit_time to at least 7200 seconds for the MLTKContainer algorithm. To learn how to change this value, see Edit default settings in the MLTK app in the Splunk Machine Learning Toolkit User Guide.
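If you manage configuration files directly, this setting corresponds to the max_fit_time attribute in mlspl.conf. The following is a minimal sketch that assumes an algorithm-specific [MLTKContainer] stanza; verify the stanza name against your MLTK version before applying it:

[MLTKContainer]
max_fit_time = 7200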

Fine-tuning stage

Use the fine-tuning stage to define your customized data, configure the fine-tuning parameters, and fine-tune the base model for the text summarization task.

Select Fine-tune your Transformers T5 model to start fine-tuning a text summarization model. The Assistant guides you through the next steps.

The Text Summarization Assistant landing page shows three panels: Fine-tune your Transformers T5 model, Evaluate your model, and Manage your models.

Prepare training data

You must provide at least 5,000 events in your training data to achieve satisfactory results.

Prepare your training data so that text and summary fields are paired:

  • The text field is for the text data from which you want to extract summaries.
  • The summary field is for the summary of the corresponding text as the ground-truth of the summarization task.

The Assistant displays the following example SPL in the data input field. You can explore the Assistant using the provided example prior to working with your own data:

| inputlookup customer_support_en | head 3

When working with your own training data, input it to the Assistant as a lookup table.

Your training data must use the exact field names text for the text field and summary for the summary field to ensure successful fine-tuning.
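If the fields in your lookup have different names, you can rename them in the base search. The lookup file and original field names in this sketch are hypothetical:

| inputlookup my_support_tickets.csv | rename ticket_body as text, ticket_abstract as summary | table text summary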

When ready, select the search icon on the right side of the search bar. The search results are generated and the Fine-Tuning Settings panel appears at the bottom of the page, leading you to the next step in the fine-tuning process.

Set fine-tuning settings

In this step, select the language, base model, and training parameters for your model:

  1. Select the language from the drop-down menu. The default language is English (en).
  2. Select the base model for fine-tuning from the drop-down menu. The drop-down menu includes the t5_summarization_en base model and other models you have created. The fine-tuning is performed on the selected base model.
  3. Set your training parameters. See the following table for parameter descriptions:
    Parameter name | Type | Description | Default value
    Target model name | string | The name of the output model after fine-tuning. Add to or edit this name to suit your needs and naming conventions. This name cannot be the same as the base model name. | The name of the selected base model plus an underscore.
    Batch size | integer | The batch size of the training data during fine-tuning. Reduce the batch size when memory is limited. | 4
    Max epoch | integer | The maximum number of training epochs for the fine-tuning. | 10
    Beam size | integer | Configuration for beam search during model inference. A higher value can produce better results but reduces computational speed. | 1
  4. Select Review Fine-Tuning SPL. Use the resulting review to examine your settings before you run the fine-tuning.
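For orientation, the generated SPL follows the MLTKContainer fit pattern sketched below. The algo value corresponds to the Assistant's notebook, while the parameter names and the target model name shown here are illustrative assumptions. Always run the exact SPL produced by Review Fine-Tuning SPL:

| inputlookup customer_support_en
| fit MLTKContainer algo=transformers_summarization base_model=t5_summarization_en lang=en batch_size=4 max_epochs=10 beam_size=1 text summary into app:t5_summarization_en_finetuned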

Run fine-tuning SPL

Begin the fine-tuning process by selecting Run Fine-Tuning SPL. A new panel appears to display the estimated fine-tuning process duration.

If you have configured DSDL Docker logs, then the logging information appears with the training process duration and includes the following values:

Value Description
done_epoch Number of epochs finished training.
elapsed_time Duration of the most recently completed epoch.
training_loss Value of the training loss. A decreasing training loss indicates that fine-tuning is progressing successfully.
valid_loss Value of the validation loss. Validation loss typically decreases along with the training loss. A validation loss that keeps growing indicates that the training is overfitting.
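You can also monitor these values outside the Assistant by searching the Docker log index directly. The index name in this sketch is a placeholder for the index that your DSDL Docker logging configuration writes to:

index=your_dsdl_docker_index done_epoch=* | table _time, done_epoch, elapsed_time, training_loss, valid_loss | sort - _time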

When the fine-tuning is finished, a Fine-Tuning Results panel appears at the bottom of the page. This panel displays the following fields:

Field Description
summary The ground-truth summary.
extracted_summary The generated summarization result.
rouge_score The evaluated score of the results. The field value indicates how closely extracted_summary reproduces summary, where 1.0 is best and 0 is worst.

When you are satisfied with the fine-tuning outcome, select Done. You can then copy the fine-tuning SPL and use it in a scheduled search to run the fine-tuning process periodically to fit your business needs.
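As a sketch, a weekly scheduled search defined in savedsearches.conf might look like the following. The stanza name and schedule are illustrative, and the search value is the SPL you copied from the Assistant:

[Weekly summarization fine-tuning]
search = <fine-tuning SPL copied from the Assistant>
cron_schedule = 0 2 * * 0
enableSched = 1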

Evaluation stage

Use the evaluation stage to score the model performance on customized input data. Select the Evaluate your model panel to start evaluating a text summarization model. The Assistant guides you through the next steps.

Prepare test data

Test data preparation is similar to training data preparation in the fine-tuning stage. Pair the text and summary data, using those exact field names, and enter the data in the search bar.

Choose evaluation settings

Select the language and the model you want to evaluate from the drop-down menus at the bottom of the page. Select Review Evaluation SPL to move on to the next step.

Run the evaluation SPL

Select Run Evaluation SPL to start the model evaluation process. A new panel appears displaying the estimated evaluation duration.
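The generated evaluation SPL broadly follows the MLTK apply pattern sketched below. The lookup and model names here are assumptions; use the exact SPL produced by Review Evaluation SPL:

| inputlookup customer_support_en | apply app:t5_summarization_en_finetuned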

If you have configured DSDL Docker logs, the logging information appears along with the evaluation progress and includes the following fields:

Field name Field content
max_apply Number of input utterances for summarization evaluation.
done_apply Number of finished utterances.
elapsed_time Processing time for each utterance.

Once the evaluation is finished, you can view the Evaluation Results panel. The panel displays the following fields:

Field name Field content
summary The ground-truth summary.
extracted_summary The generated summarization result.
rouge_score The evaluated score of the results. This field indicates how closely extracted_summary reproduces summary, where 1.0 is best and 0 is worst. The average rouge_score is also displayed.
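If you reuse the evaluation SPL in your own searches, you can reproduce the average score with standard SPL statistics, for example:

... | stats avg(rouge_score) as avg_rouge_score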

Select Done to complete the evaluation stage. You can then copy the evaluation SPL and use it in a scheduled search to run the evaluation process to fit your business needs.

Management stage

Use the management stage to retrieve information on your existing model files in the containers, and to remove any unused model files.

Transformers models can be large. Monitor the available storage space in your fine-tuning progress panel and free up space by removing unnecessary models.

Use the Manage your models panel to access the management interface for the text summarization models. The management interface displays information about all existing summarization models, ordered by language, class, model, size, and container. The class information indicates whether a model is a base model or an inheritor (fine-tuned model).

You can remove any inheritor model by selecting Delete.

Use the Delete action with caution, as model deletion cannot be undone.

