Dataset metadata: Tasks: Text Generation · Formats: parquet · Sub-tasks: language-modeling · Languages: Danish · Size: 1M - 10M

KennethEnevoldsen committed:

Added tests to validate datasheets and dataset structure
- data/adl/adl.md +2 -0
- data/botxt/botxt.md +5 -36
- data/dannet/dannet.md +6 -22
- data/depbank/depbank.md +5 -30
- data/ep/ep.md +5 -33
- data/ft/ft.md +7 -35
- data/gutenberg/gutenberg.md +10 -22
- data/hest/hest.md +3 -0
- data/jvj/jvj.md +3 -0
- data/naat/naat.md +20 -30
- data/nordjyllandnews/nordjyllandnews.md +10 -0
- data/relig/relig.md +5 -35
- data/retsinformationdk/retsinformationdk.md +5 -27
- data/retspraksis/retspraksis.md +5 -35
- data/skat/skat.md +5 -34
- data/spont/spont.md +5 -37
- data/synne/synne.md +5 -34
- data/tv2r/tv2r.md +5 -25
- data/wiki/wiki.md +5 -24
- data/wikibooks/wikibooks.md +5 -25
- data/wikisource/wikisource.md +5 -37
- pyproject.toml +1 -0
- tests/readme_parsing.py +25 -0
- tests/test_dataset_schema.py +95 -48
- uv.lock +64 -0
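The diffs below insert paired `<!-- START-… -->`/`<!-- END-… -->` markers into every datasheet, which the new tests presumably validate. As a minimal sketch of such a check (the actual contents of `tests/readme_parsing.py` are not shown in this diff; the function and marker list here are illustrative assumptions):

```python
# Hypothetical sketch of a datasheet-marker check; the real test file
# (tests/readme_parsing.py) is not shown in this commit's diff.
MARKER_PAIRS = [
    ("<!-- START-SHORT DESCRIPTION -->", "<!-- END-SHORT DESCRIPTION -->"),
    ("<!-- START-SAMPLE -->", "<!-- END-SAMPLE -->"),
]

def has_required_markers(markdown: str) -> bool:
    """Return True if every START marker appears and precedes its END marker."""
    for start, end in MARKER_PAIRS:
        s, e = markdown.find(start), markdown.find(end)
        if s == -1 or e == -1 or e < s:
            return False
    return True

card = """# Dataset Card for DanNet

<!-- START-SHORT DESCRIPTION -->
DanNet is a Danish WordNet.
<!-- END-SHORT DESCRIPTION -->

<!-- START-SAMPLE -->
<!-- END-SAMPLE -->
"""
print(has_required_markers(card))  # True
```

Markers like these let a script regenerate the enclosed region (short description, sample) while tests only need to assert the delimiters are present and ordered.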
data/adl/adl.md CHANGED

@@ -16,7 +16,9 @@ task_ids:
 
 ## Dataset Description
 
+<!-- START-SHORT DESCRIPTION -->
 Danish literature from 1700-2023 stemming for the Archive for Danish Literature (ADL).
+<!-- END-SHORT DESCRIPTION -->
 
 
 <!-- START-DESC-STATS -->
data/botxt/botxt.md CHANGED

@@ -1,5 +1,5 @@
 ---
-pretty_name: Bornholmsk
+pretty_name: Bornholmsk
 language:
 - da
 license: cc0-1.0

@@ -16,7 +16,9 @@ task_ids:
 
 ## Dataset Description
 
+<!-- START-SHORT DESCRIPTION -->
 The Bornholmsk Ordbog Dictionary Project
+<!-- END-SHORT DESCRIPTION -->
 
 Fictional texts of various kinds written in Bornholmsk, the dialect spoken on the Danish island of Bornholm (The language code for Bornholmsk under IETF BCP-47 is da-bornholm), have been digitized (OCR’ed and proofread) by volunteers working within the recently resumed Bornholmsk Ordbog dictionary project (Kjeldsen, 2019). Most of the material included is written by Otto J. Lund in the period 1930-48 (novels, short stories, and poems). The Bornholmsk subcorpus, which in its present state amounts to circa 400 K words, also includes folk stories published by J. P. Kuhre in 1938, and by K. M. Kofoed in 1935, fictional letters by various authors published in the 1930s, as well as poems by Alfred Jensen published in 1948 and various other texts from the same period. The non-standardized orthography varies considerably from source to source. The Bornholmsk part of the Danish Gigaword is a significantly extended dataset, well beyond that studied in earlier NLP work on the dialect [(Derczynski and Kjeldsen, 2019)](https://aclanthology.org/W19-6138/).
 

@@ -32,43 +34,10 @@ Fictional texts of various kinds written in Bornholmsk, the dialect spoken on th
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Ræua-Lârs
-
-Ræua-Lârs å hans Konna, Stina, bode uda',
-    'source': 'botxt',
-    'id': 'botxt_0000040',
-    'added': '2024-05-16',
-    'created': '2000-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Other',
-        'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-        'source-pretty': 'Bornholmsk (Danish dialect)'
-    }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
-
-
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-CC0 1.0 Universal
-</p>
-</details>
 
 ## Additional Information
 
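The datasheets above document a shared record schema (`id`, `text`, `source`, `added`, `created`, `metadata`). A minimal sketch of validating one record against those fields — the field names come from the diffs, while the helper itself is illustrative and not part of this repository:

```python
# Field names taken from the datasheets in this commit; the validator is a
# hypothetical sketch, not the repo's actual test code.
REQUIRED_FIELDS = {"id", "text", "source", "added", "created", "metadata"}

def validate_record(record: dict) -> list:
    """Return a sorted list of problems; an empty list means the record is well-formed."""
    problems = sorted(f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys())
    if not isinstance(record.get("metadata", {}), dict):
        problems.append("metadata must be a mapping")
    return problems

sample = {
    "text": "Ræua-Lârs å hans Konna, Stina, bode uda",
    "source": "botxt",
    "id": "botxt_0000040",
    "added": "2024-05-16",
    "created": "2000-01-01, 2022-01-01",
    "metadata": {"domain": "Other", "source-pretty": "Bornholmsk (Danish dialect)"},
}
print(validate_record(sample))  # []
```

A check like this is what the commit's `tests/test_dataset_schema.py` plausibly automates across all sub-datasets, though its exact assertions are not shown here.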
data/dannet/dannet.md CHANGED

@@ -2,7 +2,7 @@
 pretty_name: DanNet
 language:
 - da
-license:
+license: other
 license_name: DanNet 1.0 License
 size_categories:
 - 10k-100k

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for DanNet
 
+<!-- START-SHORT DESCRIPTION -->
 [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet.
+<!-- END-SHORT DESCRIPTION -->
+
 
 A WordNet is a lexico-semantic network which show the meaning and the relation between words through named connections. It can be considered a machine-readable dictionary.
 

@@ -33,29 +36,10 @@ A WordNet is a lexico-semantic network which show the meaning and the relation b
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Når fodboldholdet fra 1. division i Ikast spiller ',
-    'source': 'dannet',
-    'id': 'dannet_46506',
-    'added': '2020-09-24',
-    'created': '2000-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'dannet',
-        'license': 'Commercial Use of DanNet [...]',
-        'source-pretty': 'DanNet (Danish WordNet)'
-    }
-}
-```
 
-
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
 ## License Information
 <details>
CHANGED
@@ -14,7 +14,10 @@ task_ids:
|
|
14 |
---
|
15 |
# Dataset Card for Danish Dependency Treebank
|
16 |
|
|
|
17 |
The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
|
|
|
|
|
18 |
|
19 |
The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromman, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
|
20 |
|
@@ -34,37 +37,9 @@ While the dataset was initially intended as a rich annotation, this corpora only
|
|
34 |
|
35 |
## Dataset Sturcture
|
36 |
An example from the dataset looks as follows.
|
37 |
-
```yaml
|
38 |
-
{
|
39 |
-
'text': 'H.L. Hansen var en usædvanmlig og frodig personlig',
|
40 |
-
'source': 'depbank',
|
41 |
-
'id': 'depbank_0375',
|
42 |
-
'added': '2024-05-16',
|
43 |
-
'created': '2000-01-01, 2022-01-01',
|
44 |
-
'metadata': {
|
45 |
-
'domain': 'Other',
|
46 |
-
'license': 'Attribution-ShareAlike 4.0 International',
|
47 |
-
'source-pretty': 'Danish Dependency Treebank'
|
48 |
-
}
|
49 |
-
}
|
50 |
-
```
|
51 |
|
52 |
-
|
53 |
-
|
54 |
-
- **id**: source-specific identifier.
|
55 |
-
- **text**: textual content of the document.
|
56 |
-
- **source**: source of the data.
|
57 |
-
- **added**: timestamp ai2 acquired this data.
|
58 |
-
- **created**": timestamp when original document was created (best-guess if not available)
|
59 |
-
- **metadata**: source-specific metadata.
|
60 |
-
|
61 |
-
## License Information
|
62 |
-
<details>
|
63 |
-
<summary>Creative Commons Attribution Share Alike 4.0</summary>
|
64 |
-
<p>
|
65 |
-
Attribution-ShareAlike 4.0 International
|
66 |
-
</p>
|
67 |
-
</details>
|
68 |
|
69 |
|
70 |
## Additional Information
|
|
|
14 |
---
|
15 |
# Dataset Card for Danish Dependency Treebank
|
16 |
|
17 |
+
<!-- START-SHORT DESCRIPTION -->
|
18 |
The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT).
|
19 |
+
<!-- END-SHORT DESCRIPTION -->
|
20 |
+
|
21 |
|
22 |
The Danish UD treebank has been converted from the Danish Dependency Treebank (Buch-Kromman, 2003) into Universal Dependencies (UD). It consists of 5,512 sentences (100k words). The Danish source texts and the Danish part-of-speech tags were created by the PAROLE-DK project (Keson 1998) by the Danish Society for Language and Literature.
|
23 |
|
|
|
37 |
|
38 |
## Dataset Sturcture
|
39 |
An example from the dataset looks as follows.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
40 |
|
41 |
+
<!-- START-SAMPLE -->
|
42 |
+
<!-- END-SAMPLE -->
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
43 |
|
44 |
|
45 |
## Additional Information
|
data/ep/ep.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for European Parliament
 
+<!-- START-SHORT DESCRIPTION -->
 The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/).
+<!-- END-SHORT DESCRIPTION -->
+
 
 The europarl is a corpus of parallel text in 11 languages from the proceedings of the European Parliament, which are published on the web. This corpus has found widespread use in the NLP community. It was initially intended as training data for statistical machine translation.
 

@@ -33,41 +36,10 @@ The europarl is a corpus of parallel text in 11 languages from the proceedings o
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'TALER 6703: Jeg har stemt for henstillingen om god',
-    'source': 'ep',
-    'id': 'ep_07-02-01-008',
-    'added': '2019-11-20',
-    'created': '2004-01-01, 2009-01-01',
-    'metadata': {
-        'domain': 'Conversation',
-        'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-        'source-pretty': 'European Parliament'
-    }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
-
-
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-CC0 1.0 Universal
-</p>
-</details>
 
 
 ## Additional Information
data/ft/ft.md CHANGED

@@ -1,5 +1,5 @@
 ---
-pretty_name: Folketinget
+pretty_name: Folketinget
 language:
 - da
 license: cc0-1.0

@@ -12,11 +12,14 @@ task_categories:
 task_ids:
 - language-modeling
 ---
-# Dataset Card for Folketinget
+# Dataset Card for Folketinget
 
 ## Dataset Description
 
+<!-- START-SHORT DESCRIPTION -->
 This dataset consists of records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
+<!-- END-SHORT DESCRIPTION -->
+
 
 All records have a transcript produced by commercial Automatic Speech Recognition (ASR) followed by postediting by linguists employed by Folketinget for intelligibility, i.e., edit out dysfluencies, restarts, repairs, and mistakes. The transcript is, therefore, not a representation of spoken Danish but rather information content.
 

@@ -34,41 +37,10 @@ In the parliament hall, one speaker at a time addresses members of the parliamen
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'TALER 50: Mødet er åbnet. I dag er der følgende an',
-    'source': 'ft',
-    'id': 'ft_20121M100',
-    'added': '2021-03-28',
-    'created': '2009-01-01, 2019-01-01',
-    'metadata': {
-        'domain': 'Conversation',
-        'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-        'source-pretty': 'Folketinget (Danish Parliament)'
-    }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
-
-
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-CC0 1.0 Universal
-</p>
-</details>
 
 ## Additional Information
 
data/gutenberg/gutenberg.md CHANGED

@@ -2,7 +2,7 @@
 pretty_name: Gutenberg
 language:
 - da
-license:
+license: other
 license_name: Gutenberg License
 size_categories:
 - 1-10k

@@ -16,7 +16,10 @@ task_ids:
 
 ## Dataset Description
 
+<!-- START-SHORT DESCRIPTION -->
 This dataset contains the Danish subsection from Project [Gutenberg](https://www.gutenberg.org).
+<!-- END-SHORT DESCRIPTION -->
+
 
 Project Gutenberg is an online library of free eBooks. Project Gutenberg was the first provider of free electronic books, or eBooks.
 

@@ -32,34 +35,18 @@ Project Gutenberg is an online library of free eBooks. Project Gutenberg was the
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Afskriverens bemærkninger: Åbenlyse trykfejl er re [...]',
-    'source': 'gutenberg',
-    'id': 'gutenberg_43899',
-    'added': '2020-09-12',
-    'created': '1700-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Wiki & Books',
-        'license': ' [...] THE FULL PROJECT GUTENBERG LICENSE [...]',
-        'source-pretty': 'Gutenberg'
-    }
-}
-```
 
-
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
 ## License Information
+
 <details>
 <summary>Gutenberg License</summary>
 <p>
+
+```
 *** START: FULL LICENSE ***
 
 THE FULL PROJECT GUTENBERG LICENSE

@@ -384,6 +371,7 @@ This Web site includes information about Project Gutenberg-tm,
 including how to make donations to the Project Gutenberg Literary
 Archive Foundation, how to help produce our new eBooks, and how to
 subscribe to our email newsletter to hear about new eBooks.
+```
 
 </p>
 </details>
data/hest/hest.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Hestenettet
 
+<!-- START-SHORT DESCRIPTION -->
 Extracts from www.heste-nettet.dk a Danish debate forum.
+<!-- END-SHORT DESCRIPTION -->
+
 
 The forum have been in use since 1997 and it is used as a debate forum covering a wide range of everyday topics.
 
data/jvj/jvj.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Johannes V. Jensen
 
+<!-- START-SHORT DESCRIPTION -->
 The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen).
+<!-- END-SHORT DESCRIPTION -->
+
 
 
 
data/naat/naat.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for NAAT
 
+<!-- START-SHORT DESCRIPTION -->
 A dataset of Danish speeches from 1930-2022.
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 

@@ -30,38 +33,25 @@ A dataset of Danish speeches from 1930-2022.
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Naar jeg i aften sender min nytaarshilsen til det ',
-    'source': 'naat',
-    'id': 'naat_1958kongfrederikix',
-    'added': '2020-02-11',
-    'created': '1930-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Conversation',
-        'license': 'Creative Commons Legal Code
 
-
-
-
-
-```
 
-## Data Fields
 
-
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
-
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
+
+## Additional Information
 
+### Citation Information
+
+This dataset was initially published as part of the [Danish gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
+
+> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
+
+```bash
+@inproceedings{dagw,
+    title = {{The Danish Gigaword Corpus}},
+    author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+    year = 2021,
+    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+    publisher = {NEALT}
+}
+```
data/nordjyllandnews/nordjyllandnews.md CHANGED

@@ -15,7 +15,10 @@ task_ids:
 
 # Dataset Card for Nordjylland News
 
+<!-- START-SHORT DESCRIPTION -->
 Articles from Danish Newspaper [TV2 Nord](https://www.tv2nord.dk).
+<!-- END-SHORT DESCRIPTION -->
+
 
 The data is derived from the Huggingface dataset [alexandrainst/nordjylland-news-summarization](https://huggingface.co/datasets/alexandrainst/nordjylland-news-summarization) originally intended for text summarization.
 

@@ -30,6 +33,13 @@ The data is derived from the Huggingface dataset [alexandrainst/nordjylland-news
 
 <!-- END-DESC-STATS -->
 
+## Dataset Sturcture
+An example from the dataset looks as follows.
+
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
+
+
 ## Additional Information
 
 
data/relig/relig.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Religious texts
 
+<!-- START-SHORT DESCRIPTION -->
 Danish religious text from the 1700-2022.
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 

@@ -30,42 +33,9 @@ Danish religious text from the 1700-2022.
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Salomos Højsang
-Kys mig, giv mig Kys af din mund t',
-    'source': 'relig',
-    'id': 'relig_SON',
-    'added': '2020-09-14',
-    'created': '1700-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Wiki & Books',
-        'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-        'source-pretty': 'Religious texts'
-    }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/retsinformationdk/retsinformationdk.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for retsinformation.dk (Danish legal information)
 
+<!-- START-SHORT DESCRIPTION -->
 [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) is the official legal information system of Denmark.
+<!-- END-SHORT DESCRIPTION -->
+
 
 It serves as a central repository for Danish legislation, administrative regulations, and other legally binding documents. The platform ensures transparency and public access to laws and legal materials. The sites includes:
 

@@ -38,34 +41,9 @@ It serves as a central repository for Danish legislation, administrative regulat
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'Den fulde tekst Pressenævnets kendelse i sag nr. 1',
-    'source': 'retsinformationdk',
-    'id': 'retsinformationdk_173889',
-    'added': '2019-11-22',
-    'created': '2000-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Legal',
-        'license': 'Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states
-
-§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret.
-
-Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler.
-',
-        'source-pretty': 'retsinformation.dk (Danish legal information)'
-    }
-}
-```
-
-## Data Fields
 
-
-
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
 ## License Information
 <details>
data/retspraksis/retspraksis.md CHANGED

@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for retspraksis
 
+<!-- START-SHORT DESCRIPTION -->
 [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) refers to case law or judicial practice in Denmark.
+<!-- END-SHORT DESCRIPTION -->
+
 
 It encompasses the body of legal decisions made by Danish courts, which play a significant role in interpreting and applying the law.
 

@@ -33,42 +36,9 @@ It encompasses the body of legal decisions made by Danish courts, which play a s
 
 ## Dataset Sturcture
 An example from the dataset looks as follows.
-```yaml
-{
-    'text': 'højesterets dom
-afsagt tor',
-    'source': 'retspraksis',
-    'id': 'retspraksis_517',
-    'added': '2020-09-24',
-    'created': '2000-01-01, 2022-01-01',
-    'metadata': {
-        'domain': 'Legal',
-        'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-        'source-pretty': 'retspraksis (Danish legal information)'
-    }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**": timestamp when original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/skat/skat.md
CHANGED
@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for skat.dk
 
+<!-- START-SHORT DESCRIPTION -->
 Skat is the Danish tax authority. This dataset contains content from its website skat.dk.
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 
@@ -30,41 +33,9 @@ Skat is the Danish tax authority. This dataset contains content from its website
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Andelsboligforeningers levering af brugsrettighede',
-  'source': 'skat',
-  'id': 'skat_SKM2010.712.SKAT',
-  'added': '2020-10-01',
-  'created': '2000-01-01, 2022-01-01',
-  'metadata': {
-    'domain': 'Legal',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Skat (Danish tax authority)'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/spont/spont.md
CHANGED
@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Spontaneous speech
 
+<!-- START-SHORT DESCRIPTION -->
 A corpus of conversational data originally collected as part of research projects at Aarhus University.
+<!-- END-SHORT DESCRIPTION -->
+
 
 The conversational corpus included originates from interdisciplinary research conducted within the [Interacting Minds Centre](https://interactingminds.au.dk), and [the Puzzle of Danish project](https://projects.au.dk/the-puzzle-of-danish/) at Aarhus University. Transcribed Danish speech is generally a rare kind of data, and spontaneous speech especially so; these manually transcribed conversations thus form a valuable resource. Spontaneous and pseudo-spontaneous conversations come from various contexts, e.g., getting to know each other, solving a puzzle together, or making joint decisions. The participants have agreed on releasing anonymized transcripts of their conversations. All conversations involve two speakers, sometimes conversing face-to-face, sometimes via a chat tool. Speech is transcribed post-hoc by native speakers. Studies published relying on this data include [Fusaroli et al. (2012)](https://journals.sagepub.com/doi/10.1177/0956797612436816), [Dideriksen et al. (2019)](https://pure.au.dk/ws/portalfiles/portal/167670567/Dideriksen_et_al..pdf), and [Tylén et al. (2016)](https://pure.au.dk/ws/portalfiles/portal/101787937/The_Social_Route_To_Abstraction.pdf).
 
@@ -32,44 +35,9 @@ The conversational corpus included originates from interdisciplinary research co
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Taler 6: mm
-Taler 7: er du klar?
-Taler 6: ja
-Taler',
-  'source': 'spont',
-  'id': 'spont_PuzzleOfDanish132',
-  'added': '2020-01-21',
-  'created': '2019-01-01, 2020-01-01',
-  'metadata': {
-    'domain': 'Conversation',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Spontaneous speech'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/synne/synne.md
CHANGED
@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for synnejysk Forening
 
+<!-- START-SHORT DESCRIPTION -->
 Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk.
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 
@@ -30,41 +33,9 @@ Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk),
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Mangeægskage Hent printvenligt dokument her – Klik',
-  'source': 'synne',
-  'id': 'synne_forening_0140',
-  'added': '2020-06-26',
-  'created': '2000-01-01, 2022-01-01',
-  'metadata': {
-    'domain': 'Other',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Synderjysk (Danish dialect)'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
 
-
-
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
data/tv2r/tv2r.md
CHANGED
@@ -16,7 +16,10 @@ task_ids:
 
 ## Dataset Description
 
+<!-- START-SHORT DESCRIPTION -->
 This dataset includes contemporary Danish newswire articles published between 2010 and 2019.
+<!-- END-SHORT DESCRIPTION -->
+
 
 It contains articles of regional interest, written following editorial standards. This section’s value is in both its temporal variation, covering a decade of events, and its spatial variation, covering many local events across most of Denmark (TV2 Bornholm is excluded). As a result of local event coverage, the section contains many locally relevant named entities, which might otherwise not be present in a dataset of national news.
 
@@ -32,32 +35,9 @@ It contains articles of regional interest, written following editorial standards
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Storken er landet
-02 april 2017 kl. 17.58
-Søndag a',
-  'source': 'tv2r',
-  'id': 'tv2r_92548',
-  'added': '2019-11-13',
-  'created': '2015-01-01, 2020-01-01',
-  'metadata': {
-    'domain': 'News',
-    'license': 'The owner of this content is TV2 Regionerne, Denmark.
-Creative Commons Attribution 4.0 International',
-    'source-pretty': 'TV 2 Radio (Danish news)'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 ## License Information
 <details>
data/wiki/wiki.md
CHANGED
@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Wikipedia
 
+<!-- START-SHORT DESCRIPTION -->
 The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page).
+<!-- END-SHORT DESCRIPTION -->
+
 
 You can read more about Wikipedia on their [about](https://en.wikipedia.org/wiki/Wikipedia:About) page.
 
@@ -32,31 +35,9 @@ You can read more about Wikipedia on their [about](https://en.wikipedia.org/wik
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Vimoutiers er en kommune i departementet Orne i Ba',
-  'source': 'wiki',
-  'id': 'wiki_366127',
-  'added': '2021-03-28',
-  'created': '2019-01-01, 2021-01-01',
-  'metadata': {
-    'domain': 'Wiki & Books',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Wikipedia'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 ## License Information
 <details>
data/wikibooks/wikibooks.md
CHANGED
@@ -15,7 +15,10 @@ task_ids:
 
 # Dataset Card for Wikibooks
 
+<!-- START-SHORT DESCRIPTION -->
 The Danish Subsection of [**Wikibooks**](https://www.wikibooks.org).
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 
@@ -31,32 +34,9 @@ The Danish Subsection of [**Wikibooks**](https://www.wikibooks.org).
 
 ## Dataset Structure
 An example from the dataset looks as follows.
-```yaml
-{
-  'text': 'Spilinfo.
-Spillet er lavet af Blizzard Entertainme',
-  'source': 'wikibooks',
-  'id': 'wikibooks_1125',
-  'added': '2021-03-28',
-  'created': '2019-01-01, 2021-01-01',
-  'metadata': {
-    'domain': 'Wiki & Books',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Wikibooks'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
 
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 ## License Information
 <details>
data/wikisource/wikisource.md
CHANGED
@@ -14,7 +14,10 @@ task_ids:
 ---
 # Dataset Card for Wikisource
 
+<!-- START-SHORT DESCRIPTION -->
 The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page).
+<!-- END-SHORT DESCRIPTION -->
+
 
 ## Dataset Description
 
@@ -31,43 +34,8 @@ The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page).
 ## Dataset Structure
 An example from the dataset looks as follows.
 
-```yaml
-{
-  'text': '<poem>
-Kæmpehøjen.
-Jeg har stået på mindets ',
-  'source': 'wikisource',
-  'id': 'wikisource_4804',
-  'added': '2021-03-28',
-  'created': '1700-01-01, 2022-01-01',
-  'metadata': {
-    'domain': 'Wiki & Books',
-    'license': 'Creative Commons Legal Code
-
-CC0 1.0 Universal',
-    'source-pretty': 'Wikisource'
-  }
-}
-```
-
-## Data Fields
-
-- **id**: source-specific identifier.
-- **text**: textual content of the document.
-- **source**: source of the data.
-- **added**: timestamp ai2 acquired this data.
-- **created**: timestamp when the original document was created (best-guess if not available)
-- **metadata**: source-specific metadata.
-
-## License Information
-<details>
-<summary>Creative Commons Zero v1.0 Universal</summary>
-<p>
-Creative Commons Legal Code
-
-CC0 1.0 Universal
-</p>
-</details>
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
 
 
 ## Additional Information
pyproject.toml
CHANGED
@@ -11,6 +11,7 @@ dependencies = [
     "matplotlib>=3.10.0",
     "numpy>=2.2.0",
     "plotnine>=0.14.3",
+    "pydantic>=2.10.4",
     "pytest>=8.3.4",
     "ruff>=0.8.3",
     "seaborn>=0.13.2",
tests/readme_parsing.py
ADDED
@@ -0,0 +1,25 @@
+from pathlib import Path
+from typing import Any
+
+import yaml
+
+
+def read_frontmatter_and_body(file_path: Path) -> tuple[dict[str, Any], str]:
+    with file_path.open("r") as f:
+        content = f.read()
+    if content.startswith("---"):
+        end_idx = content.find("---", 3)
+        if end_idx != -1:
+            frontmatter = content[3:end_idx].strip()
+            return yaml.safe_load(frontmatter), content[end_idx:]
+    raise ValueError(f"No frontmatter found in file: {file_path}")
+
+
+def get_tag_idx(readme: str, tag: str):
+    tag_start = f"<!-- START-{tag} -->"
+    tag_end = f"<!-- END-{tag} -->"
+    start_idx = readme.find(tag_start)
+    end_idx = readme.find(tag_end)
+    if end_idx != -1 and start_idx != -1 and start_idx < end_idx:
+        return start_idx, end_idx
+    raise ValueError(f"tag ({tag}) not found in readme")
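The `<!-- START-… -->`/`<!-- END-… -->` tag convention enforced by `get_tag_idx` can be exercised in isolation; a minimal sketch that inlines the same search logic so it runs without the repo on the path (the example `body` string is illustrative):

```python
def get_tag_idx(readme: str, tag: str):
    # Same logic as tests/readme_parsing.py: locate the paired HTML comments
    # and require that the START marker precedes the END marker.
    tag_start = f"<!-- START-{tag} -->"
    tag_end = f"<!-- END-{tag} -->"
    start_idx = readme.find(tag_start)
    end_idx = readme.find(tag_end)
    if end_idx != -1 and start_idx != -1 and start_idx < end_idx:
        return start_idx, end_idx
    raise ValueError(f"tag ({tag}) not found in readme")


# Illustrative card body using the tag convention from the dataset cards above.
body = (
    "intro\n"
    "<!-- START-SHORT DESCRIPTION -->\n"
    "Danish legal texts.\n"
    "<!-- END-SHORT DESCRIPTION -->\n"
)
start, end = get_tag_idx(body, "SHORT DESCRIPTION")
inner = body[start + len("<!-- START-SHORT DESCRIPTION -->"):end].strip()
print(inner)  # -> Danish legal texts.
```

The indices bracket the tagged region, so tooling can rewrite the text between the markers (e.g. auto-generated stats) without touching the rest of the card.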
tests/test_dataset_schema.py
CHANGED
@@ -1,47 +1,94 @@
-
-# from typing import Any
+from datetime import date
 from pathlib import Path
+from typing import Any, Literal
 
+import pytest
+from datasets import load_dataset
+from pydantic import AfterValidator, BaseModel, BeforeValidator
+from typing_extensions import Annotated
+
+from .readme_parsing import get_tag_idx, read_frontmatter_and_body
+
+main_readme = Path(__file__).parent.parent / "README.md"
+
+frontmatter, _ = read_frontmatter_and_body(main_readme)
+DATASET_NAMES = [
+    cfg["config_name"]
+    for cfg in frontmatter["configs"]
+    if cfg["config_name"] != "default"
+]
+
+
+def ensure_tuple(created: str | tuple) -> tuple:
+    if isinstance(created, str):
+        return tuple(created.split(", "))
+    return created
+
+
+def validate_sample_metadata(metadata: dict[str, Any]) -> dict[str, Any]:
+    if "source-pretty" not in metadata:
+        raise ValueError("'source-pretty' should be in metadata dict.")
+    return metadata
+
+
+class SampleSchema(BaseModel):
+    text: str
+    source: str
+    id: str
+    added: date  # date.fromisoformat
+    created: Annotated[tuple[date, date], BeforeValidator(ensure_tuple)]
+    license: str  # TODO: should probably be a literal
+    domain: str  # TODO: convert to literal
+    metadata: Annotated[dict[str, Any], AfterValidator(validate_sample_metadata)]
+
+
+@pytest.mark.parametrize("dataset_name", DATASET_NAMES)
+def test_sample_schema(repo_path: Path, dataset_name: str):
+    """Ensure that the dataset samples follow the correct schema."""
+    ds = load_dataset(
+        str(repo_path.resolve()), dataset_name, split="train", streaming=True
+    )
+    sample = next(iter(ds))
+    SampleSchema(**sample)
+
+
+class FrontmatterSchema(BaseModel):
+    pretty_name: str
+    language: list[Literal["da"]]
+    license: Literal["cc0-1.0", "other", "cc-by-sa-4.0"]
+
+
+@pytest.mark.parametrize("dataset_name", DATASET_NAMES)
+def test_dataset_readme(repo_path: Path, dataset_name: str):
+    """Tests that the dataset frontmatter and markdown follow the correct format."""
+    readme = repo_path / "data" / dataset_name / f"{dataset_name}.md"
+
+    frontmatter, body = read_frontmatter_and_body(readme)
+    frontmatter_validated = FrontmatterSchema(**frontmatter)
+
+    # ensure tags:
+    tags = ["SHORT DESCRIPTION", "DESC-STATS"]
+    for tag in tags:
+        get_tag_idx(body, tag)
+
+    h2_headings = {line for line in body.splitlines() if line.startswith("## ")}
+
+    # ensure description of underspecified licenses
+    if frontmatter_validated.license == "other":
+        assert "## License Information" in h2_headings
+
+    # required headings
+    req_h2_headings = ["## Dataset Description", "## Additional Information"]
+    for req_h2 in req_h2_headings:
+        assert req_h2 in h2_headings
+
+
+@pytest.mark.parametrize("dataset_name", DATASET_NAMES)
+def test_dataset_folder_structure(repo_path: Path, dataset_name: str):
     """tests that the dataset folder structure is as follows.
 
     dataset_name
@@ -50,10 +97,10 @@
 
     If there is a python file, there should at least be one called `create.py`, but there can be additional.
     """
+    path = repo_path / "data" / dataset_name
+
+    assert (path / f"{path.name}.parquet").exists()
+    assert (path / f"{path.name}.md").exists()
 
+    if any(p.name.endswith(".py") for p in path.glob("*")):
+        assert (path / "create.py").exists()
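In `SampleSchema` above, the `created` field arrives from the parquet files as a single string such as `'2020-01-01, 2022-01-01'`; the `ensure_tuple` BeforeValidator splits it into a pair before pydantic coerces each half into a `date`. The same round trip, sketched with only the standard library (the `parse_created` helper is illustrative, not part of the test suite):

```python
from datetime import date


def ensure_tuple(created):
    # Mirrors the BeforeValidator in tests/test_dataset_schema.py:
    # split a "start, end" string into a 2-tuple, pass tuples through unchanged.
    if isinstance(created, str):
        return tuple(created.split(", "))
    return created


def parse_created(created):
    # Coerce each half to a date, as pydantic does for tuple[date, date].
    start, end = (date.fromisoformat(part) for part in ensure_tuple(created))
    return start, end


start, end = parse_created("2020-01-01, 2022-01-01")
print(start.year, end.year)  # -> 2020 2022
```

Keeping the raw value as a string and normalizing it in a validator lets the same schema accept both the serialized form and an already-parsed tuple.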
uv.lock
CHANGED
@@ -69,6 +69,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/ec/6a/bc7e17a3e87a2985d3e8f4da4cd0f481060eb78fb08596c42be62c90a4d9/aiosignal-1.3.2-py2.py3-none-any.whl", hash = "sha256:45cde58e409a301715980c2b01d0c28bdde3770d8290b5eb2173759d9acb31a5", size = 7597 },
 ]
 
+[[package]]
+name = "annotated-types"
+version = "0.7.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/ee/67/531ea369ba64dcff5ec9c3402f9f51bf748cec26dde048a2f973a4eea7f5/annotated_types-0.7.0.tar.gz", hash = "sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89", size = 16081 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl", hash = "sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53", size = 13643 },
+]
+
 [[package]]
 name = "appnope"
 version = "0.1.4"
@@ -255,6 +264,7 @@ dependencies = [
     { name = "matplotlib" },
     { name = "numpy" },
     { name = "plotnine" },
+    { name = "pydantic" },
     { name = "pytest" },
     { name = "ruff" },
     { name = "seaborn" },
@@ -270,6 +280,7 @@ requires-dist = [
     { name = "matplotlib", specifier = ">=3.10.0" },
     { name = "numpy", specifier = ">=2.2.0" },
     { name = "plotnine", specifier = ">=0.14.3" },
+    { name = "pydantic", specifier = ">=2.10.4" },
     { name = "pytest", specifier = ">=8.3.4" },
     { name = "ruff", specifier = ">=0.8.3" },
     { name = "seaborn", specifier = ">=0.13.2" },
@@ -1068,6 +1079,59 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc", size = 117552 },
 ]
 
+[[package]]
+name = "pydantic"
+version = "2.10.4"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "annotated-types" },
+    { name = "pydantic-core" },
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/70/7e/fb60e6fee04d0ef8f15e4e01ff187a196fa976eb0f0ab524af4599e5754c/pydantic-2.10.4.tar.gz", hash = "sha256:82f12e9723da6de4fe2ba888b5971157b3be7ad914267dea8f05f82b28254f06", size = 762094 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/f3/26/3e1bbe954fde7ee22a6e7d31582c642aad9e84ffe4b5fb61e63b87cd326f/pydantic-2.10.4-py3-none-any.whl", hash = "sha256:597e135ea68be3a37552fb524bc7d0d66dcf93d395acd93a00682f1efcb8ee3d", size = 431765 },
+]
+
+[[package]]
+name = "pydantic-core"
+version = "2.27.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "typing-extensions" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/fc/01/f3e5ac5e7c25833db5eb555f7b7ab24cd6f8c322d3a3ad2d67a952dc0abc/pydantic_core-2.27.2.tar.gz", hash = "sha256:eb026e5a4c1fee05726072337ff51d1efb6f59090b7da90d30ea58625b1ffb39", size = 413443 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/d6/74/51c8a5482ca447871c93e142d9d4a92ead74de6c8dc5e66733e22c9bba89/pydantic_core-2.27.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9e0c8cfefa0ef83b4da9588448b6d8d2a2bf1a53c3f1ae5fca39eb3061e2f0b0", size = 1893127 },
+    { url = "https://files.pythonhosted.org/packages/d3/f3/c97e80721735868313c58b89d2de85fa80fe8dfeeed84dc51598b92a135e/pydantic_core-2.27.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:83097677b8e3bd7eaa6775720ec8e0405f1575015a463285a92bfdfe254529ef", size = 1811340 },
+    { url = "https://files.pythonhosted.org/packages/9e/91/840ec1375e686dbae1bd80a9e46c26a1e0083e1186abc610efa3d9a36180/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:172fce187655fece0c90d90a678424b013f8fbb0ca8b036ac266749c09438cb7", size = 1822900 },
+    { url = "https://files.pythonhosted.org/packages/f6/31/4240bc96025035500c18adc149aa6ffdf1a0062a4b525c932065ceb4d868/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:519f29f5213271eeeeb3093f662ba2fd512b91c5f188f3bb7b27bc5973816934", size = 1869177 },
+    { url = "https://files.pythonhosted.org/packages/fa/20/02fbaadb7808be578317015c462655c317a77a7c8f0ef274bc016a784c54/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:05e3a55d124407fffba0dd6b0c0cd056d10e983ceb4e5dbd10dda135c31071d6", size = 2038046 },
+    { url = "https://files.pythonhosted.org/packages/06/86/7f306b904e6c9eccf0668248b3f272090e49c275bc488a7b88b0823444a4/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9c3ed807c7b91de05e63930188f19e921d1fe90de6b4f5cd43ee7fcc3525cb8c", size = 2685386 },
+    { url = "https://files.pythonhosted.org/packages/8d/f0/49129b27c43396581a635d8710dae54a791b17dfc50c70164866bbf865e3/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6fb4aadc0b9a0c063206846d603b92030eb6f03069151a625667f982887153e2", size = 1997060 },
+    { url = "https://files.pythonhosted.org/packages/0d/0f/943b4af7cd416c477fd40b187036c4f89b416a33d3cc0ab7b82708a667aa/pydantic_core-2.27.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:28ccb213807e037460326424ceb8b5245acb88f32f3d2777427476e1b32c48c4", size = 2004870 },
+    { url = "https://files.pythonhosted.org/packages/35/40/aea70b5b1a63911c53a4c8117c0a828d6790483f858041f47bab0b779f44/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:de3cd1899e2c279b140adde9357c4495ed9d47131b4a4eaff9052f23398076b3", size = 1999822 },
+    { url = "https://files.pythonhosted.org/packages/f2/b3/807b94fd337d58effc5498fd1a7a4d9d59af4133e83e32ae39a96fddec9d/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:220f892729375e2d736b97d0e51466252ad84c51857d4d15f5e9692f9ef12be4", size = 2130364 },
|
1115 |
+
{ url = "https://files.pythonhosted.org/packages/fc/df/791c827cd4ee6efd59248dca9369fb35e80a9484462c33c6649a8d02b565/pydantic_core-2.27.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a0fcd29cd6b4e74fe8ddd2c90330fd8edf2e30cb52acda47f06dd615ae72da57", size = 2158303 },
|
1116 |
+
{ url = "https://files.pythonhosted.org/packages/9b/67/4e197c300976af185b7cef4c02203e175fb127e414125916bf1128b639a9/pydantic_core-2.27.2-cp312-cp312-win32.whl", hash = "sha256:1e2cb691ed9834cd6a8be61228471d0a503731abfb42f82458ff27be7b2186fc", size = 1834064 },
|
1117 |
+
{ url = "https://files.pythonhosted.org/packages/1f/ea/cd7209a889163b8dcca139fe32b9687dd05249161a3edda62860430457a5/pydantic_core-2.27.2-cp312-cp312-win_amd64.whl", hash = "sha256:cc3f1a99a4f4f9dd1de4fe0312c114e740b5ddead65bb4102884b384c15d8bc9", size = 1989046 },
|
1118 |
+
{ url = "https://files.pythonhosted.org/packages/bc/49/c54baab2f4658c26ac633d798dab66b4c3a9bbf47cff5284e9c182f4137a/pydantic_core-2.27.2-cp312-cp312-win_arm64.whl", hash = "sha256:3911ac9284cd8a1792d3cb26a2da18f3ca26c6908cc434a18f730dc0db7bfa3b", size = 1885092 },
|
1119 |
+
{ url = "https://files.pythonhosted.org/packages/41/b1/9bc383f48f8002f99104e3acff6cba1231b29ef76cfa45d1506a5cad1f84/pydantic_core-2.27.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:7d14bd329640e63852364c306f4d23eb744e0f8193148d4044dd3dacdaacbd8b", size = 1892709 },
|
1120 |
+
{ url = "https://files.pythonhosted.org/packages/10/6c/e62b8657b834f3eb2961b49ec8e301eb99946245e70bf42c8817350cbefc/pydantic_core-2.27.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:82f91663004eb8ed30ff478d77c4d1179b3563df6cdb15c0817cd1cdaf34d154", size = 1811273 },
|
1121 |
+
{ url = "https://files.pythonhosted.org/packages/ba/15/52cfe49c8c986e081b863b102d6b859d9defc63446b642ccbbb3742bf371/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71b24c7d61131bb83df10cc7e687433609963a944ccf45190cfc21e0887b08c9", size = 1823027 },
|
1122 |
+
{ url = "https://files.pythonhosted.org/packages/b1/1c/b6f402cfc18ec0024120602bdbcebc7bdd5b856528c013bd4d13865ca473/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fa8e459d4954f608fa26116118bb67f56b93b209c39b008277ace29937453dc9", size = 1868888 },
|
1123 |
+
{ url = "https://files.pythonhosted.org/packages/bd/7b/8cb75b66ac37bc2975a3b7de99f3c6f355fcc4d89820b61dffa8f1e81677/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ce8918cbebc8da707ba805b7fd0b382816858728ae7fe19a942080c24e5b7cd1", size = 2037738 },
|
1124 |
+
{ url = "https://files.pythonhosted.org/packages/c8/f1/786d8fe78970a06f61df22cba58e365ce304bf9b9f46cc71c8c424e0c334/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eda3f5c2a021bbc5d976107bb302e0131351c2ba54343f8a496dc8783d3d3a6a", size = 2685138 },
|
1125 |
+
{ url = "https://files.pythonhosted.org/packages/a6/74/d12b2cd841d8724dc8ffb13fc5cef86566a53ed358103150209ecd5d1999/pydantic_core-2.27.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8086fa684c4775c27f03f062cbb9eaa6e17f064307e86b21b9e0abc9c0f02e", size = 1997025 },
|
1126 |
+
{ url = "https://files.pythonhosted.org/packages/a0/6e/940bcd631bc4d9a06c9539b51f070b66e8f370ed0933f392db6ff350d873/pydantic_core-2.27.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8d9b3388db186ba0c099a6d20f0604a44eabdeef1777ddd94786cdae158729e4", size = 2004633 },
|
1127 |
+
{ url = "https://files.pythonhosted.org/packages/50/cc/a46b34f1708d82498c227d5d80ce615b2dd502ddcfd8376fc14a36655af1/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7a66efda2387de898c8f38c0cf7f14fca0b51a8ef0b24bfea5849f1b3c95af27", size = 1999404 },
|
1128 |
+
{ url = "https://files.pythonhosted.org/packages/ca/2d/c365cfa930ed23bc58c41463bae347d1005537dc8db79e998af8ba28d35e/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:18a101c168e4e092ab40dbc2503bdc0f62010e95d292b27827871dc85450d7ee", size = 2130130 },
|
1129 |
+
{ url = "https://files.pythonhosted.org/packages/f4/d7/eb64d015c350b7cdb371145b54d96c919d4db516817f31cd1c650cae3b21/pydantic_core-2.27.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ba5dd002f88b78a4215ed2f8ddbdf85e8513382820ba15ad5ad8955ce0ca19a1", size = 2157946 },
|
1130 |
+
{ url = "https://files.pythonhosted.org/packages/a4/99/bddde3ddde76c03b65dfd5a66ab436c4e58ffc42927d4ff1198ffbf96f5f/pydantic_core-2.27.2-cp313-cp313-win32.whl", hash = "sha256:1ebaf1d0481914d004a573394f4be3a7616334be70261007e47c2a6fe7e50130", size = 1834387 },
|
1131 |
+
{ url = "https://files.pythonhosted.org/packages/71/47/82b5e846e01b26ac6f1893d3c5f9f3a2eb6ba79be26eef0b759b4fe72946/pydantic_core-2.27.2-cp313-cp313-win_amd64.whl", hash = "sha256:953101387ecf2f5652883208769a79e48db18c6df442568a0b5ccd8c2723abee", size = 1990453 },
|
1132 |
+
{ url = "https://files.pythonhosted.org/packages/51/b2/b2b50d5ecf21acf870190ae5d093602d95f66c9c31f9d5de6062eb329ad1/pydantic_core-2.27.2-cp313-cp313-win_arm64.whl", hash = "sha256:ac4dbfd1691affb8f48c2c13241a2e3b60ff23247cbcf981759c768b6633cf8b", size = 1885186 },
|
1133 |
+
]
|
1134 |
+
|
1135 |
[[package]]
|
1136 |
name = "pygments"
|
1137 |
version = "2.18.0"
|