MixSumm: Topic-based Data Augmentation using LLMs for Low-resource Extractive Text Summarization
Abstract
Low-resource extractive text summarization is a vital but heavily underexplored area of research. Prior literature either focuses on abstractive text summarization or prompts a large language model (LLM) like GPT-3 directly to generate summaries. In this work, we propose MixSumm for low-resource extractive text summarization. Specifically, MixSumm prompts an open-source LLM, LLaMA-3-70b, to generate documents that mix information from multiple topics, as opposed to generating documents without mixup, and then trains a summarization model on the generated dataset. We use ROUGE scores and L-Eval, a reference-free LLaMA-3-based evaluation method, to measure the quality of generated summaries. We conduct extensive experiments on a challenging text summarization benchmark comprising the TweetSumm, WikiHow, and ArXiv/PubMed datasets and show that our LLM-based data augmentation framework outperforms recent prompt-based approaches for low-resource extractive summarization. Additionally, our results demonstrate effective knowledge distillation from LLaMA-3-70b to a small BERT-based extractive summarizer.
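The topic-mixing augmentation step described above can be illustrated with a minimal sketch. The prompt wording, the function name `build_mixup_prompt`, and the seed-document placeholders below are all hypothetical assumptions, not the paper's actual prompt; the sketch only shows the general shape of asking an LLM to generate a document that blends information from two topics, which is the distinction the abstract draws against single-topic generation.

```python
def build_mixup_prompt(topic_a: str, topic_b: str,
                       examples_a: list[str], examples_b: list[str]) -> str:
    """Build a hypothetical prompt asking an LLM (e.g. LLaMA-3-70b) to
    generate a new document mixing information from two topics, along
    with an extractive summary to supervise a small summarizer.

    This is an illustrative sketch, not the prompt used in MixSumm.
    """
    # Show a few seed documents per topic so the model grounds its generation.
    seeds_a = "\n".join(f"- {doc}" for doc in examples_a)
    seeds_b = "\n".join(f"- {doc}" for doc in examples_b)
    return (
        f"Here are example documents about '{topic_a}':\n{seeds_a}\n\n"
        f"Here are example documents about '{topic_b}':\n{seeds_b}\n\n"
        f"Write a new document that mixes information from both "
        f"'{topic_a}' and '{topic_b}'. Then list the 3 sentences from "
        f"your document that best summarize it (extractive summary)."
    )


# Usage: the returned string would be sent to the LLM; its output
# (document + selected sentences) becomes a synthetic training pair
# for a small BERT-based extractive summarizer.
prompt = build_mixup_prompt(
    "customer support chats", "software installation guides",
    ["Agent resolved a billing issue over chat."],
    ["Step-by-step guide to installing the driver."],
)
```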