KennethEnevoldsen committed
Added paper outline
- paper/paper.md +136 -0
- paper/references.bib +25 -0
paper/paper.md
ADDED
@@ -0,0 +1,136 @@
# Danish DynaWord: Moving from one-shot datasets to continuously developed datasets

Authors:

This is the list of authors to be invited for co-authorship:

CHC:
- Kenneth Enevoldsen
- Jan Kostkan
- Per
- Kristoffer Nielbo
- Marton
- Martin (good thoughts on CI)

Alexandra:
- Dan Nielsen
- Rasmus
- Peter
- Kristian
- Torben

DFM:
- Bolette Pedersen (or someone from her group)
- Desmond
- Peter

Danish Royal Library? Other organizations that are important to include?

# Abstract

In this work we introduce DynaWord, an argument for moving toward continuously developed datasets as opposed to the current release-and-forget datasets.
As an example, we release Danish DynaWord.

The dataset is available at: LINK

# Introduction

Current datasets
While creating a current

Current methods for dataset creation tackle only a small subset of languages [@joshiStateFateLinguistic2020].
In this project we specifically choose to focus on the low- to mid-resource language Danish (dan). We see two reasons for doing this:

- The DynaWord approach is most likely to be beneficial for low- to mid-resource languages (class 2-4; @joshiStateFateLinguistic2020), which have contributors able and willing to contribute, whereas high-resource languages (class 5; @joshiStateFateLinguistic2020) could likely sustain multiple DynaWord projects targeting specific domains.
- not only for Danish b

While it is in theory possible to open a PR on an existing dataset, this practice is rare; instead we often see improvements on existing datasets published separately (see e.g. [@pascal_alie_kenneth_et_paper], [@that_guy_that_added_langauge_tag_to_a_dataset]). These derivative works rarely get as many downloads as the original.

Contrasting this approach with code development, where it is common practice to create PRs to continually improve the codebase, makes the dataset development landscape seem immature and inefficient.

## Related work

### Existing approaches in dataset development

Large projects like OSCAR [@OSCAR], HPLT [@hplt], and FineWeb [@fineweb] release iterative versions of datasets derived from Common Crawl [@commoncrawl].
These approaches make it hard for contributors to join, and they silo dataset development within a few institutions. Furthermore, the focus on
Common Crawl ignores other valuable resources, such as public APIs, and comes with a slew of ethical and legal concerns [@missing] which affect not only the usefulness of the datasets but also the models derived from them.
While such resources, e.g. individual datasets derived from APIs, would be expensive for individual groups to collect, as each rarely offers enough data to be worth the time, opening the collection up to a community makes these approaches more viable.

Opening up the development pipeline also increases openness around the dataset collection. ADD SOMETHING on inclusion here.

Read up on fineweb!!! (I assume they do some CI)

Other successful open-source projects: the dependency treebank project [@dep_treebank], ...

Existing projects on openly licensed data [@elutherAI]

We note that our approach is complementary to existing projects such as FineWeb.

### Continuous Integration

Do we need a section on this?

### Danish and Scandinavian Datasets

Lacunae of Danish [@cite]
Danish Gigaword [@dagw]
Swedish gigaword? [@swedish]
NCC [@ncc_kummervold]

Existing benchmarks covering the Scandinavian languages, such as ScandEval [@scandeval; @scandeval2] and SEB [@seb], argue that it is reasonable to evaluate on the Scandinavian languages jointly.

# Methods

## Continuous Integration

Our approach to continuous integration: how contributors submit new data, and what we test for.

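To make this concrete, here is a minimal sketch of the kind of quality check such a CI suite might run on every PR; the repository id, the column names, and the license allow-list are assumptions for illustration, not the final configuration:

```python
# Sketch of a CI data-quality test (hypothetical repository id and column names).
from datasets import load_dataset

def test_dataset_quality():
    ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")

    # Documents must contain actual text, not just whitespace.
    assert all(t.strip() for t in ds["text"])

    # Document ids must be unique, so a PR cannot silently introduce duplicates.
    assert len(ds["id"]) == len(set(ds["id"]))

    # Every document must carry a license from an agreed allow-list.
    allowed = {"cc0-1.0", "cc-by-4.0", "cc-by-sa-4.0"}
    assert set(ds["license"]) <= allowed
```

A check like this shifts the reviewer's burden from inspecting the data itself to inspecting the diff plus a green CI run.
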
# Results

## Dataset collection

Current collection:

| Source          | Date       | Domain         | License | Size |
| --------------- | ---------- | -------------- | ------- | ---- |
| **Legal**       |            |                |         |      |
| Retsinformation | date range | Legal, Written |         | 188M |
| ...             |            |                |         |      |
| **Total**       |            |                |         |      |

For a description of each dataset we refer to the public repository.
<!-- we could also include -->

# Conclusion

## Dataset delivery

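As a sketch of what delivery could look like for the end user; the repository id below is a placeholder until the final LINK in the abstract is fixed:

```python
# Loading Danish DynaWord from the Hugging Face Hub
# (placeholder repository id; see LINK in the abstract for the final location).
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")
print(ds)                   # features and number of rows
print(ds[0]["text"][:200])  # assumes a "text" column, as in most pre-training corpora
```
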
# Limitations

- Is Danish too limited? Should we consider multilingual sources: Scandinavian, Germanic, English?

- Size:
  - The size is currently limited; if it grows too large, development becomes problematic.
  - This is still far smaller than what could be extracted from CC.

- Only Danish: While developing CI for datasets is by no means new [@missing], doing so for open pre-training datasets in an open, collaborative fashion has not been tested at a larger scale. Once the approach has been validated, we plan to host a collaboration together with Hugging Face to develop these dataset sources.

- Huggingface datasets as a development platform for datasets: Throughout this work it was clear to many of the developers that minor changes (e.g. filtering out a few bad examples) were both hard to create PRs for and hard to review, often requiring the reviewer to simply trust that the contributor did what was stated in the commit message (see the sketch after this list). While previous projects have tackled this issue using human-readable formats [@dep_treebank], given the scope of the dataset this would quickly become inefficient.
This lack of clarity increases the likelihood of dataset attacks such as dataset poisoning [@missing]. We expect to see both interface development and software development to detect and prevent such attacks.

- Machine generated content within training data: Not

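The sketch referenced in the platform limitation above: a typical "minor" cleaning change. The column name and threshold are hypothetical; the point is that on binary parquet shards a reviewer only sees changed file hashes, not which documents were removed:

```python
# A "minor" cleaning PR: drop documents that are too short to be useful.
# Column name and threshold are hypothetical examples.
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-dynaword", split="train")
cleaned = ds.filter(lambda ex: len(ex["text"].split()) >= 10)
print(f"Removed {len(ds) - len(cleaned)} documents")  # all a reviewer can easily verify
```
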
# Ethical and Environmental Considerations

Environmental:
- A common codebase leads to less duplication of datasets and reduces the storage required.
- Continual CI running on large datasets could be a concern. Currently our tests use a total of XXX CO2-eq (estimated using codecarbon; a sketch follows below). However, we have already seen people training [@fineweb] and evaluating LLMs to approximate dataset quality, and such workflows could quickly increase the CO2 consumption.

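A minimal sketch of how the estimate above could be produced with codecarbon; wrapping the test suite in a tracker is an assumption about the setup, not a description of the current pipeline:

```python
# Estimating the CO2-eq footprint of a CI run with codecarbon.
# The pytest invocation is a hypothetical stand-in for the dataset test suite.
import subprocess

from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="danish-dynaword-ci")
tracker.start()
try:
    subprocess.run(["pytest", "tests/"], check=True)
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-eq
print(f"CI run emitted approximately {emissions_kg:.4f} kg CO2-eq")
```
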
paper/references.bib
ADDED
@@ -0,0 +1,25 @@
@article{joshiStateFateLinguistic2020,
  title    = {The {State} and {Fate} of {Linguistic} {Diversity} and {Inclusion} in the {NLP} {World}},
  url      = {http://arxiv.org/abs/2004.09095},
  abstract = {Language technologies contribute to promoting multilingualism and linguistic diversity around the world. However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications. In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time. Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the "language agnostic" status of current models and systems. Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind.},
  urldate  = {2021-03-20},
  journal  = {arXiv:2004.09095 [cs]},
  author   = {Joshi, Pratik and Santy, Sebastin and Budhiraja, Amar and Bali, Kalika and Choudhury, Monojit},
  month    = jan,
  year     = {2021},
  note     = {arXiv: 2004.09095},
  keywords = {Computer Science - Computation and Language},
}

@inproceedings{dagw,
  title     = {The {{Danish Gigaword}} Corpus},
  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics ({{NoDaLiDa}})},
  author    = {{Str{\o}mberg-Derczynski}, Leon and Ciosici, Manuel and Baglini, Rebekah and Christiansen, Morten H. and Dalsgaard, Jacob Aarup and Fusaroli, Riccardo and Henrichsen, Peter Juel and Hvingelby, Rasmus and Kirkedal, Andreas and Kjeldsen, Alex Speed and Ladefoged, Claus and Nielsen, Finn Aarup and Madsen, Jens and Petersen, Malte Lau and Rystr{\o}m, Jonathan Hvithamar and Varab, Daniel},
  year      = {2021},
  pages     = {413--421},
  publisher = {Link{\"o}ping University Electronic Press, Sweden},
  address   = {Reykjavik, Iceland (Online)},
  abstract  = {Danish language technology has been hindered by a lack of broad-coverage corpora at the scale modern NLP prefers. This paper describes the Danish Gigaword Corpus, the result of a focused effort to provide a diverse and freely-available one billion word corpus of Danish text. The Danish Gigaword corpus covers a wide array of time periods, domains, speakers' socio-economic status, and Danish dialects.},
}