arXiv:2404.03555

From News to Summaries: Building a Hungarian Corpus for Extractive and Abstractive Summarization

Published on Apr 4, 2024

Abstract

Training summarization models requires substantial amounts of training data. However, for less-resourced languages such as Hungarian, openly available models and datasets are notably scarce. To address this gap, our paper introduces HunSum-2, an open-source Hungarian corpus suitable for training abstractive and extractive summarization models. The dataset is assembled from segments of the Common Crawl corpus and undergoes thorough cleaning, preprocessing, and deduplication. In addition to abstractive summarization, we generate sentence-level labels for extractive summarization using sentence similarity. We train baseline models for both extractive and abstractive summarization using the collected dataset. To demonstrate the effectiveness of the trained models, we perform both quantitative and qualitative evaluation. Our dataset, models, and code are publicly available, encouraging replication, further research, and real-world applications across various domains.
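
The abstract states that sentence-level labels for extractive summarization are generated using sentence similarity. The snippet below is a minimal, hypothetical sketch of such a labeling step: it scores each article sentence by its TF-IDF cosine similarity to the reference (abstractive) summary and marks the top-k sentences as extractive targets. The function name, the TF-IDF representation, and the top_k cutoff are illustrative assumptions, not the authors' exact procedure, whose similarity measure may differ.

# Hypothetical sketch: derive extractive labels from an abstractive summary
# via sentence similarity. Assumes a TF-IDF/cosine scheme; the HunSum-2
# pipeline may use a different similarity measure or selection rule.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def label_sentences(article_sentences, summary_sentences, top_k=3):
    """Mark the article sentences most similar to the reference summary."""
    vectorizer = TfidfVectorizer()
    # Fit on all sentences so the article and summary share one vocabulary.
    vectorizer.fit(article_sentences + summary_sentences)
    article_vecs = vectorizer.transform(article_sentences)
    summary_vecs = vectorizer.transform(summary_sentences)

    # For each article sentence, keep its best similarity to any summary sentence.
    scores = cosine_similarity(article_vecs, summary_vecs).max(axis=1)

    # Label the top_k highest-scoring sentences as extractive targets (1), rest 0.
    ranked = set(scores.argsort()[::-1][:top_k])
    return [1 if i in ranked else 0 for i in range(len(article_sentences))]


if __name__ == "__main__":
    article = [
        "A new Hungarian summarization corpus was released.",
        "It was built from Common Crawl news segments.",
        "The weather in Budapest was sunny.",
    ]
    summary = ["A Hungarian news summarization corpus built from Common Crawl."]
    print(label_sentences(article, summary, top_k=2))  # e.g. [1, 1, 0]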

