---
language:
- es
size_categories:
- n<1K
task_categories:
- summarization
pretty_name: Resumen Noticias Clickbait
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: titular
    dtype: string
  - name: respuesta
    dtype: string
  - name: pregunta
    dtype: string
  - name: texto
    dtype: string
  - name: idioma
    dtype: string
  - name: periodo
    dtype: string
  - name: tarea
    dtype: string
  - name: registro
    dtype: string
  - name: dominio
    dtype: string
  - name: país_origen
    dtype: string
  splits:
  - name: train
    num_bytes: 5440051
    num_examples: 700
  - name: validation
    num_bytes: 462364
    num_examples: 50
  - name: test
    num_bytes: 782440
    num_examples: 100
  download_size: 3417692
  dataset_size: 6684855
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
tags:
- summarization
- clickbait
- news
---

<p align="center">
<img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="width: 50%;">
</p>
<h1 align="center">NoticIA: A Clickbait Article Summarization Dataset in Spanish</h1>


We present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with a high-quality, single-sentence generative summary written by humans.

- 📖 Dataset Card in Spanish: https://huggingface.co/datasets/somosnlp/NoticIA-it/blob/main/README_es.md

+
## Dataset Details
|
71 |
+
|
72 |
+
### Dataset Description
|
73 |
+
|
74 |
+
We define a clickbait article as one that seeks to attract the reader's attention through curiosity. For this purpose, the headline poses a question or an incomplete, sensationalist, exaggerated, or misleading statement. The answer to the question raised in the headline usually does not appear until the end of the article, preceded by a large amount of irrelevant content. The goal is for the user to enter the website through the headline and then scroll to the end of the article, viewing as much advertising as possible. Clickbait articles tend to be of low quality and provide no value to the reader beyond the initial curiosity. This phenomenon undermines public trust in news sources and negatively affects the advertising revenue of legitimate content creators, who could see their web traffic reduced.
|
75 |
+
|
76 |
+
We introduce NoticIA, a dataset consisting of 850 Spanish news articles with clickbait headlines, each paired with high-quality, single-sentence generative summaries written by humans. This task demands advanced skills in text comprehension and summarization, challenging the ability of models to infer and connect various pieces of information to satisfy the user's informational curiosity generated by the clickbait headline.
|
77 |
+
|
78 |
+
The project is inspired by the X/Twitter account [@ahorrandoclick1](https://x.com/ahorrandoclick1). [@ahorrandoclick1](https://x.com/ahorrandoclick1) has 300,000 followers, demonstrating the great value of summarizing clickbait news articles. However, creating these summaries manually is a labor-intensive task, and the number of clickbait news articles published greatly exceeds the number of summaries one person can perform. Therefore, there is a need for automatic summarization of clickbait news articles. Additionally, as mentioned earlier, this is an ideal task for analyzing the text comprehension capabilities of a language model in Spanish.
|
79 |
+
|
80 |
+
The following Figure illustrates examples of clickbait headlines from our dataset, together with the human-written summaries.
|
81 |
+
|
82 |
+
<p align="center">
|
83 |
+
<img src="https://raw.githubusercontent.com/ikergarcia1996/NoticIA/main/assets/examples.png" style="width: 100%;">
|
84 |
+
</p>
|
85 |
+
|
86 |
+
|
87 |
+
- **Curated by:** [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139)
|
88 |
+
- **Funded by:** SomosNLP, HuggingFace, Argilla, [HiTZ Zentroa](https://www.hitz.eus/)
|
89 |
+
- **Language(s) (NLP):** es-ES
|
90 |
+
- **License:** apache-2.0
|
91 |
+
- **Web Page**: [Github](https://github.com/ikergarcia1996/NoticIA)
|
92 |
+
|
93 |
+
### Dataset Sources
|
94 |
+
|
95 |
+
- **💻 Repository:** https://github.com/ikergarcia1996/NoticIA
|
96 |
+
- **📖 Paper:** [NoticIA: A Clickbait Article Summarization Dataset in Spanish](https://arxiv.org/abs/2404.07611)
|
97 |
+
- **🤖 Pre Trained Models** [https://huggingface.co/collections/Iker/noticia-and-clickbaitfighter-65fdb2f80c34d7c063d3e48e](https://huggingface.co/collections/Iker/noticia-and-clickbaitfighter-65fdb2f80c34d7c063d3e48e)
|
98 |
+
- **🔌 Demo:** https://huggingface.co/spaces/somosnlp/NoticIA-demo
|
99 |
+
- **Video presentation (Spanish):** https://youtu.be/xc60K_NzUgk?si=QMqk6OzQZfKP1EUS
|
100 |
+
- **🐱💻 Hackathon #Somos600M**: https://somosnlp.org/hackathon
|
101 |
+
|
102 |
+
|
103 |
+
## Uses
|
104 |
+
|
105 |
+
This dataset has been compiled for use in scientific research. Specifically, for use in the evaluation of language models in Spanish.
|
106 |
+
Commercial use of this dataset is subject to the licenses of each news and media outlet. If you want to make commercial use of the dataset you will need to have
|
107 |
+
the express permission of the media from which the news has been obtained.
|
108 |
+
|
109 |
+
|
110 |
+
### Direct Use
|
111 |
+
|
112 |
+
- Evaluation of Language Models in Spanish.
|
113 |
+
- Instruction-Tuning of Spanish Language Models
|
114 |
+
- Develop new datasets on top of our data
|
115 |
+
- Any other academic research purpose.
|
116 |
+
|
117 |
+
### Out-of-Scope Use
|
118 |
+
|
119 |
+
We expressly prohibit the use of these data for two use cases that we consider to be that may be harmful: The training of models that generate sensational headlines or clickbait, and the training of models that generate articles or news automatically.
|
120 |
+
|
121 |
+
|
122 |
+
## Dataset Structure
|
123 |
+
|
124 |
+
The dataset is ready to be used to evaluate language models. For this aim, we have developed a *prompt* that makes use of the news headline and text.
|
125 |
+
The prompt is as follows:
|
126 |
+
```python
|
127 |
+
def clickbait_prompt(
|
128 |
+
headline: str,
|
129 |
+
body: str,
|
130 |
+
) -> str:
|
131 |
+
"""
|
132 |
+
Generate the prompt for the model.
|
133 |
+
Args:
|
134 |
+
headline (`str`):
|
135 |
+
The headline of the article.
|
136 |
+
body (`str`):
|
137 |
+
The body of the article.
|
138 |
+
Returns:
|
139 |
+
`str`: The formatted prompt.
|
140 |
+
"""
|
141 |
+
return (
|
142 |
+
f"Ahora eres una Inteligencia Artificial experta en desmontar titulares sensacionalistas o clickbait. "
|
143 |
+
f"Tu tarea consiste en analizar noticias con titulares sensacionalistas y "
|
144 |
+
f"generar un resumen de una sola frase que revele la verdad detrás del titular.\n"
|
145 |
+
f"Este es el titular de la noticia: {headline}\n"
|
146 |
+
f"El titular plantea una pregunta o proporciona información incompleta. "
|
147 |
+
f"Debes buscar en el cuerpo de la noticia una frase que responda lo que se sugiere en el título. "
|
148 |
+
f"Responde siempre que puedas parafraseando el texto original. "
|
149 |
+
f"Usa siempre las mínimas palabras posibles. "
|
150 |
+
f"Recuerda responder siempre en Español.\n"
|
151 |
+
f"Este es el cuerpo de la noticia:\n"
|
152 |
+
f"{body}\n"
|
153 |
+
)
|
154 |
+
```
|
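For illustration, here is a minimal sketch of how this function maps onto the dataset fields described later in this card (`titular` and `texto`); note that each example also ships the resulting prompt precomputed in the `pregunta` field:

```python
from datasets import load_dataset

# Rebuild the prompt for the first test example from its headline and body.
dataset = load_dataset("somosnlp/NoticIA-it", split="test")
prompt = clickbait_prompt(headline=dataset[0]["titular"], body=dataset[0]["texto"])
print(prompt)
```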

The expected output of the model is the summary. Below is an example of how to evaluate `gemma-2b` on our dataset:

```python
from transformers import pipeline
from datasets import load_dataset

generator = pipeline(model="google/gemma-2b-it", device_map="auto")
dataset = load_dataset("somosnlp/NoticIA-it", split="test")

# The "pregunta" field holds the ready-made prompt for each example.
outputs = generator(dataset[0]["pregunta"], return_full_text=False, max_length=4096)
print(outputs)
```
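Generations can then be scored against the human-written summaries in the `respuesta` field. The following is a rough sketch, not the evaluation protocol prescribed by the paper; it assumes the `evaluate` library and uses ROUGE as an off-the-shelf overlap metric:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

generator = pipeline(model="google/gemma-2b-it", device_map="auto")
dataset = load_dataset("somosnlp/NoticIA-it", split="test")

# Generate a summary for every test example from its precomputed prompt.
predictions = [
    generator(example["pregunta"], return_full_text=False, max_length=4096)[0]["generated_text"]
    for example in dataset
]

# Compare generations against the human-written reference summaries.
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=dataset["respuesta"]))
```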

The dataset includes the following fields:
- **ID**: ID of the example.
- **Titular (headline)**: Headline of the article.
- **Respuesta (response)**: Summary written by a human.
- **Pregunta (question)**: Prompt ready to be used as input to a language model.
- **Texto (text)**: Text of the article, obtained from the HTML.
- **Idioma (language)**: ISO code of the language. In the case of Spanish, it also includes the geographic variant ("Mexican Spanish" = es_mx, "Ecuadorian Spanish" = es_ec, ...).
- **Tarea (task)**: Task of the example. Every example has the task `resumen` (`summary`).
- **Registro (language register)**: `coloquial`, `medio`, or `culto` (`colloquial`, `medium`, or `educated`).
- **Dominio (domain)**: The domain (`prensa`, `press`) and the subdomain.
- **País de origen (country of origin)**: Country of origin of the data.

*The Idioma (language), Registro (language register), Dominio (domain), and País de origen (country of origin) labels have been automatically generated using GPT-3.5-Turbo.*

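As a quick sanity check, this short sketch loads a split and inspects the fields of one example:

```python
from datasets import load_dataset

dataset = load_dataset("somosnlp/NoticIA-it", split="validation")

# Print every field of the first example, truncating long values for readability.
for field, value in dataset[0].items():
    text = str(value)
    print(f"{field}: {text[:100]}{'...' if len(text) > 100 else ''}")
```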
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

We have compiled clickbait news articles using the timeline of the X/Twitter user [@ahorrandoclick1](https://x.com/ahorrandoclick1). To do this, we extracted the URLs of the news articles mentioned by the user. Additionally, we have added about 100 clickbait news articles chosen by us. The following image shows the sources of the news articles in the dataset.

<p align="center">
<img src="https://raw.githubusercontent.com/ikergarcia1996/NoticIA/main/assets/noticia_dataset.png" style="width: 50%;">
</p>

We have classified each of the news articles based on the category to which it belongs. As can be seen, our dataset includes a wide variety of categories.

<p align="center">
<img src="https://raw.githubusercontent.com/ikergarcia1996/NoticIA/main/assets/categories_distribution_spanish.png" style="width: 50%;">
</p>


#### Annotation process

Although [@ahorrandoclick1](https://x.com/ahorrandoclick1) provides summaries of clickbait news, these summaries do not follow any guidelines, and in many cases they do not refer to the text but are rather of the style *"This is advertising"* or *"They still haven't realized that..."*. Therefore, we have manually written the summaries for all 850 news articles. To do this, we defined strict annotation guidelines, available at the following link: [https://huggingface.co/spaces/Iker/ClickbaitAnnotation/blob/main/guidelines.py](https://huggingface.co/spaces/Iker/ClickbaitAnnotation/blob/main/guidelines.py).
The dataset has been annotated by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139), and this process took approximately 40 hours.

### Dataset Statistics
We have divided the dataset into three splits, which facilitates the training of models. As can be seen in the following table, the summaries are extremely concise:
they respond to the clickbait headline using the fewest words possible.

|                                       | Train | Validation | Test | Total |
|---------------------------------------|-------|------------|------|-------|
| Number of articles                    | 700   | 50         | 100  | 850   |
| Average number of words in headlines  | 16    | 17         | 17   | 17    |
| Average number of words in news text  | 544   | 663        | 549  | 552   |
| Average number of words in summaries  | 12    | 11         | 11   | 12    |

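The word counts above can be approximated directly from the data. A minimal sketch, using whitespace tokenization (so the numbers may differ slightly from the table):

```python
from datasets import load_dataset

def avg_words(values):
    """Average whitespace-separated word count over a list of strings."""
    return sum(len(v.split()) for v in values) / len(values)

for split in ("train", "validation", "test"):
    dataset = load_dataset("somosnlp/NoticIA-it", split=split)
    print(
        f"{split}: {len(dataset)} articles, "
        f"headlines ~{avg_words(dataset['titular']):.0f} words, "
        f"text ~{avg_words(dataset['texto']):.0f} words, "
        f"summaries ~{avg_words(dataset['respuesta']):.0f} words"
    )
```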
#### Who are the annotators?

- [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/): PhD student at HiTZ, the Basque Center for Language Technology
- [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139): Postdoctoral research fellow at HiTZ, the Basque Center for Language Technology

### Annotation Validation
To validate the dataset, the 100 summaries from the test set were annotated by both annotators. This data is available here: https://huggingface.co/datasets/Iker/NoticIA_Human_Validation
The overall agreement between the annotators was high: they provided exactly the same answer in 26% of the cases, and in 48% of the cases they provided responses that partially shared information (the same response but with some variation in the words used).
This demonstrates that it was easy for humans to find the information referred to by the headline. We also identified cases where the annotators provided different but equally valid responses; these account for 18% of the cases.
Lastly, we identified 8 cases of disagreement. In 3 cases, one of the annotators wrote an incorrect summary,
likely due to fatigue after annotating multiple examples. In the remaining 5 cases, the disagreement was due to contradictory information in the article and
different interpretations of this information. In these cases, determining the correct summary is subject to the reader's interpretation.

Regarding the guidelines, overall they were not ambiguous, although the request to use the minimum number of words needed to produce a
valid summary was sometimes interpreted differently by the annotators: for example, the minimum length could be understood as just answering the question in the headline, or as the shortest well-formed sentence.


+
## Bias, Risks, and Limitations
|
245 |
+
|
246 |
+
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
|
247 |
+
|
248 |
+
<!-- Aquí podéis mencionar los posibles sesgos heredados según el origen de los datos y de las personas que lo han anotado, hablar del balance de las categorías representadas, los esfuerzos que habéis hecho para intentar mitigar sesgos y riesgos. -->
|
249 |
+
|
250 |
+
[More Information Needed]
|
251 |
+
|
252 |
+
### Recommendations
|
253 |
+
|
254 |
+
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.
|
255 |
+
|
256 |
+
Example:
|
257 |
+
|
258 |
+
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
|
259 |
+
|
260 |
+
[More Information Needed]
|
261 |
+
|
262 |
+
## License
|
263 |
+
|
264 |
+
<!-- Indicar bajo qué licencia se libera el dataset explicando, si no es apache 2.0, a qué se debe la licencia más restrictiva (i.e. herencia de los datos utilizados). -->
|
265 |
+
|
266 |
+
## Citation
|
267 |
+
|
268 |
+
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
|
269 |
+
|
270 |
+
**BibTeX:**
|
271 |
+
|
272 |
+
[More Information Needed]
|
273 |
+
|
274 |
+
<!--
|
275 |
+
|
276 |
+
Aquí tenéis un ejemplo de cita de un dataset que podéis adaptar:
|
277 |
+
|
278 |
+
```
|
279 |
+
@software{benallal2024cosmopedia,
|
280 |
+
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
|
281 |
+
title = {Cosmopedia},
|
282 |
+
month = February,
|
283 |
+
year = 2024,
|
284 |
+
url = {https://huggingface.co/datasets/HuggingFaceTB/cosmopedia}
|
285 |
+
}
|
286 |
+
```
|
287 |
+
|
288 |
+
- benallal2024cosmopedia -> nombre + año + nombre del dataset
|
289 |
+
- author: lista de miembros del equipo
|
290 |
+
- title: nombre del dataset
|
291 |
+
- year: año
|
292 |
+
- url: enlace al dataset
|
293 |
+
|
294 |
+
-->
|
295 |
+
|
296 |
+
## Glossary [optional]
|
297 |
+
|
298 |
+
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
|
299 |
+
|
300 |
+
## More Information
|
301 |
+
|
302 |
+
<!-- Indicar aquí que el marco en el que se desarrolló el proyecto, en esta sección podéis incluir agradecimientos y más información sobre los miembros del equipo. Podéis adaptar el ejemplo a vuestro gusto. -->
|
303 |
+
|
304 |
+
This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. The dataset was created using `distilabel` by Argilla and endpoints sponsored by HuggingFace.
|
305 |
+
|
306 |
+
**Team:** [More Information Needed]
|
307 |
+
|
308 |
+
<!--
|
309 |
+
- [Name 1](Link to Hugging Face profile)
|
310 |
+
- [Name 2](Link to Hugging Face profile)
|
311 |
+
-->
|
312 |
+
|
313 |
+
## Contact [optional]
|
314 |
+
|
315 |
+
<!-- Email de contacto para´posibles preguntas sobre el dataset. -->
|
316 |
+
|