KennethEnevoldsen committed
Commit 3e28a50 · unverified · Parent(s): ca7fb7b

Added opensubtitles

README.md CHANGED
@@ -5,6 +5,10 @@ configs:
   data_files:
   - split: train
     path: 'data/*/*.parquet'
+- config_name: opensubtitles
+  data_files:
+  - split: train
+    path: data/opensubtitles/*.parquet
 - config_name: retsinformationdk
   data_files:
   - split: train
@@ -265,36 +269,39 @@ Below follows a brief overview of the sources in the corpus along with their ind
 
 
 
+
 <!-- START-MAIN TABLE -->
-| Source | Description | N. Tokens | License |
-| :--- | :--- | :--- | :--- |
-| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
-| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
-| [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
-| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
-| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
-| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
-| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
-| [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
-| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
-| [wiki] | The Danish subsection of [wikipeadia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
-| [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
-| [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
-| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
-| [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
-| [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
-| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
-| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
-| [naat] | A dataset of Danish speeches from 1930-2022 | 286.68K | [CC-0] |
-| [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
-| [ft] | This dataset consists of records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
-| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
-| **Total** | | 1.57B | |
+| Source | Description | N. Tokens | License |
+|:---|:---|:---|:---|
+| [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk), the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
+| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
+| [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
+| [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
+| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.89M | [CC-0] |
+| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
+| [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
+| [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
+| [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
+| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
+| [wiki] | The Danish subsection of [Wikipedia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
+| [wikibooks] | The Danish subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
+| [nordjyllandnews] | Articles from the Danish newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
+| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
+| [retspraksis] | Case law or judicial practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
+| [relig] | Danish religious texts from 1700-2022 | 1.24M | [CC-0] |
+| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
+| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
+| [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
+| [botxt] | The Bornholmsk Ordbog Dictionary Project | 847.97K | [CC-0] |
+| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
+| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
+| **Total** | | 1.84B | |
 
 [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
 [hest]: data/hest/hest.md
 [spont]: data/spont/spont.md
 [tv2r]: data/tv2r/tv2r.md
+[opensubtitles]: data/opensubtitles/opensubtitles.md
 [ep]: data/ep/ep.md
 [gutenberg]: data/gutenberg/gutenberg.md
 [depbank]: data/depbank/depbank.md
@@ -321,6 +328,7 @@ Below follows a brief overview of the sources in the corpus along with their ind
 [DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
 <!-- END-MAIN TABLE -->
 
+
 You can learn more about each dataset by pressing
 
 <!-- ### Quality Control
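
With the new `opensubtitles` config registered above, the subset can be loaded on its own. A minimal sketch (not part of the commit; the repository id is an assumption based on the project name, adjust as needed):

```py
from datasets import load_dataset

# Load only the OpenSubtitles subset via the config added above.
ds = load_dataset("danish-foundation-models/danish-dynaword", "opensubtitles", split="train")
print(ds[0]["source"])  # "opensubtitles"
```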
data/adl/adl.md CHANGED
@@ -34,7 +34,7 @@ Danish literature from 1700-2023 from the Archive for Danish Literature (ADL).
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/botxt/botxt.md CHANGED
@@ -35,7 +35,7 @@ Fictional texts of various kinds written in Bornholmsk, the dialect spoken on th
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/dannet/dannet.md CHANGED
@@ -37,7 +37,7 @@ A WordNet is a lexico-semantic network which show the meaning and the relation b
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/depbank/depbank.md CHANGED
@@ -38,7 +38,7 @@ While the dataset was initially intended as a rich annotation, this corpora only
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/ep/ep.md CHANGED
@@ -37,7 +37,7 @@ The europarl is a corpus of parallel text in 11 languages from the proceedings o
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/ft/ft.md CHANGED
@@ -20,7 +20,7 @@ source_datasets:
 ## Dataset Description
 
 <!-- START-SHORT DESCRIPTION -->
-This dataset consists of records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
+Records from all meetings of The Danish parliament (Folketinget) in the parliament hall.
 <!-- END-SHORT DESCRIPTION -->
 
 
@@ -38,7 +38,7 @@ In the parliament hall, one speaker at a time addresses members of the parliamen
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/gutenberg/gutenberg.md CHANGED
@@ -36,7 +36,7 @@ Project Gutenberg is an online library of free eBooks. Project Gutenberg was the
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/hest/hest.md CHANGED
@@ -38,7 +38,7 @@ Its inclusion as training data for large language models have multiple times rea
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/jvj/jvj.md CHANGED
@@ -38,7 +38,7 @@ The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikiped
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/naat/naat.md CHANGED
@@ -18,7 +18,7 @@ source_datasets:
 # Dataset Card for NAAT
 
 <!-- START-SHORT DESCRIPTION -->
-A dataset of Danish speeches from 1930-2022.
+Danish speeches from 1930-2022.
 <!-- END-SHORT DESCRIPTION -->
 
 
@@ -34,7 +34,7 @@ A dataset of Danish speeches from 1930-2022.
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/nordjyllandnews/nordjyllandnews.md CHANGED
@@ -35,7 +35,7 @@ The data is derived from the Huggingface dataset [alexandrainst/nordjylland-news
 
 <!-- END-DESC-STATS -->
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/opensubtitles/create.py ADDED
@@ -0,0 +1,123 @@
+from pathlib import Path
+from typing import cast
+
+import pandas as pd
+import spacy
+from datasets import Dataset, load_dataset
+
+# KCE: mail from Leon
+sample_to_redact = {
+    # Der kommer en dag
+    "opensub_6726481",
+    "opensub_6732371",
+    # Kollektivet
+    "opensub_6645818",
+    # Flaskepost fra P
+    "opensub_6666922",
+    "opensub_6720216",
+    "opensub_6958711",
+    # Fasandræberne
+    "opensub_6036947",
+    "opensub_6008622",
+    # En du elsker
+    "opensub_5828376",
+    "opensub_5828378",
+    # En chance til
+    "opensub_6177523",
+    # Lev stærkt
+    "opensub_6467655",
+    # Nymphomaniac
+    "opensub_5604391",
+    "opensub_5748340",
+    "opensub_5748494",
+    "opensub_5629516",
+    # Kvinden i buret
+    "opensub_5636248",
+    "opensub_5514603",
+    "opensub_5504932",
+    # Den skaldede frisør
+    "opensub_5084880",
+    "opensub_5031826",
+    # Jagten
+    "opensub_6929419",
+    "opensub_4885548",
+    # Melancholia
+    "opensub_4421330",
+    "opensub_4406991",
+    "opensub_4418817",
+    # Ambassadøren
+    "opensub_4557721",
+    # Antichrist
+    "opensub_5511502",
+    "opensub_3938655",
+    "opensub_3636940",
+    "opensub_3564521",
+    "opensub_3562215",
+    # En kongelig affære
+    "opensub_4725493",
+    "opensub_4725160",
+    "opensub_4725159",
+    "opensub_4916871",
+    "opensub_5186746",
+    # Brødre
+    "opensub_233943",
+    "opensub_87475",
+}
+
+column_order = [
+    "text",
+    "source",
+    "id",
+    "added",
+    "created",
+    "license",
+    "domain",
+    "metadata",
+]
+
+
+def convert_sample(example):
+    text = example["text"]
+    if example["doc_id"] in sample_to_redact:
+        nlp = spacy.blank("da")
+        doc = nlp(text)
+        text = doc[:200].text  # keep only the first 200 tokens
+
+    new_example = dict(
+        text_new=text,  # use the (possibly redacted) text
+        id=example["doc_id"],
+        source="opensubtitles",
+        domain="Conversation",
+        license="Creative Commons Legal Code\n\nCC0 1.0 Universal",
+        added="2025-01-02",
+        created="1920-01-01, 2018-01-01",  # assuming v2018
+        metadata={"source-pretty": "OpenSubtitles"},
+    )
+
+    return new_example
+
+
+def main():
+    ds = load_dataset("DDSC/partial-danish-gigaword-no-twitter", split="train")
+    ds = cast(Dataset, ds)
+    ds = ds.filter(lambda x: x["source"] == "opensub", num_proc=4)
+    ds = ds.map(convert_sample, num_proc=4)
+    ds = ds.select_columns(column_order[1:] + ["text_new"])
+    ds = ds.rename_columns({"text_new": "text"})
+    # ensure order
+    ds = ds.select_columns(column_order)
+
+    df = ds.to_pandas()
+    df = cast(pd.DataFrame, df)
+    dedup_df = df.drop_duplicates(keep="first", subset=["text"])
+    print("N. duplicates: ", df.shape[0] - dedup_df.shape[0])  # 2422
+
+    ds = ds.select(dedup_df.index)
+    assert len(set(ds["text"])) == len(ds)
+
+    save_path = Path(__file__).parent / "opensubtitles.parquet"
+    ds.to_parquet(save_path)
+
+
+if __name__ == "__main__":
+    main()
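
The script writes `opensubtitles.parquet` next to itself. A quick sanity check of the output (a sketch, not part of the commit; the path assumes the repo root as working directory):

```py
from datasets import load_dataset

# Re-load the parquet file produced by create.py and re-verify deduplication.
ds = load_dataset("parquet", data_files="data/opensubtitles/opensubtitles.parquet", split="train")
assert len(set(ds["text"])) == len(ds)
print(ds)  # expected num_rows: 29820, per descriptive_stats.json
```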
data/opensubtitles/descriptive_stats.json ADDED
@@ -0,0 +1 @@
+{"number_of_samples": 29820, "average_document_length": 26326.385915492956, "number_of_tokens": 271893009, "language": "dan, dansk, Danish", "revision": "ca7fb7b62951c15a2d67bdfcc910af2d8ff058c8"}
data/opensubtitles/opensubtitles.md ADDED
@@ -0,0 +1,124 @@
+---
+pretty_name: OpenSubtitles
+language:
+- da
+license: cc0-1.0
+license_name: CC-0
+task_categories:
+- text-generation
+- fill-mask
+task_ids:
+- language-modeling
+source_datasets:
+- DDSC/partial-danish-gigaword-no-twitter
+---
+
+# Dataset Card for OpenSubtitles
+
+<!-- START-SHORT DESCRIPTION -->
+Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles).
+<!-- END-SHORT DESCRIPTION -->
+
+
+## Dataset Description
+
+<!-- START-DESC-STATS -->
+- **Language**: dan, dansk, Danish
+- **Number of samples**: 29.82K
+- **Number of tokens (Llama 3)**: 271.89M
+- **Average document length (characters)**: 26326.39
+<!-- END-DESC-STATS -->
+
+
+## Dataset Structure
+An example from the dataset looks as follows.
+
+<!-- START-SAMPLE -->
+<!-- END-SAMPLE -->
+
+### Additional Processing
+
+For copyright reasons, additional documents have been redacted by truncating them to their first 200 tokens (see `create.py`). These include:
+
+```py
+{
+    # Der kommer en dag
+    "opensub_6726481",
+    "opensub_6732371",
+    # Kollektivet
+    "opensub_6645818",
+    # Flaskepost fra P
+    "opensub_6666922",
+    "opensub_6720216",
+    "opensub_6958711",
+    # Fasandræberne
+    "opensub_6036947",
+    "opensub_6008622",
+    # En du elsker
+    "opensub_5828376",
+    "opensub_5828378",
+    # En chance til
+    "opensub_6177523",
+    # Lev stærkt
+    "opensub_6467655",
+    # Nymphomaniac
+    "opensub_5604391",
+    "opensub_5748340",
+    "opensub_5748494",
+    "opensub_5629516",
+    # Kvinden i buret
+    "opensub_5636248",
+    "opensub_5514603",
+    "opensub_5504932",
+    # Den skaldede frisør
+    "opensub_5084880",
+    "opensub_5031826",
+    # Jagten
+    "opensub_6929419",
+    "opensub_4885548",
+    # Melancholia
+    "opensub_4421330",
+    "opensub_4406991",
+    "opensub_4418817",
+    # Ambassadøren
+    "opensub_4557721",
+    # Antichrist
+    "opensub_5511502",
+    "opensub_3938655",
+    "opensub_3636940",
+    "opensub_3564521",
+    "opensub_3562215",
+    # En kongelig affære
+    "opensub_4725493",
+    "opensub_4725160",
+    "opensub_4725159",
+    "opensub_4916871",
+    "opensub_5186746",
+    # Brødre
+    "opensub_233943",
+    "opensub_87475",
+}
+```
+
+We have additionally removed duplicate entries from the original dataset.
+
+## Additional Information
+
+
+### Citation Information
+
+This dataset was initially published as part of the [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite and reference it if you use this dataset:
+
+> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
+
+```bibtex
+@inproceedings{dagw,
+    title = {{The Danish Gigaword Corpus}},
+    author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
+    year = 2021,
+    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
+    publisher = {NEALT}
+}
+```
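
The redaction described above keeps a short excerpt rather than dropping the documents entirely; a minimal standalone sketch of that step as done in `create.py` (the helper name is illustrative):

```py
import spacy

nlp = spacy.blank("da")  # blank Danish pipeline: tokenizer only, no trained models needed

def redact(text: str, max_tokens: int = 200) -> str:
    # Keep only the first `max_tokens` tokens of a redacted document.
    return nlp(text)[:max_tokens].text
```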
data/opensubtitles/opensubtitles.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fbb41ace407b6e87fc1faf11e304493d24b9474c00f6230daf4406d4c818e6f
+size 496784101
data/relig/relig.md CHANGED
@@ -34,7 +34,7 @@ Danish religious text from the 1700-2022.
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/retsinformationdk/retsinformationdk.md CHANGED
@@ -42,7 +42,7 @@ It serves as a central repository for Danish legislation, administrative regulat
 
 <!-- END-DESC-STATS -->
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/retspraksis/retspraksis.md CHANGED
@@ -36,7 +36,7 @@ It encompasses the body of legal decisions made by Danish courts, which play a s
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/skat/skat.md CHANGED
@@ -33,7 +33,7 @@ Skat is the Danish tax authority. This dataset contains content from its website
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/spont/spont.md CHANGED
@@ -35,7 +35,7 @@ The conversational corpus included originates from interdisciplinary research co
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/synne/synne.md CHANGED
@@ -33,7 +33,7 @@ Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk),
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/tv2r/tv2r.md CHANGED
@@ -35,7 +35,7 @@ It contains articles of regional interest, written following editorial standards
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/wiki/wiki.md CHANGED
@@ -35,7 +35,7 @@ You can read more about wikipeadia on their [about](https://en.wikipedia.org/wik
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/wikibooks/wikibooks.md CHANGED
@@ -34,7 +34,7 @@ The Danish Subsection of [Wikibooks](https://www.wikibooks.org).
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
data/wikisource/wikisource.md CHANGED
@@ -33,7 +33,7 @@ The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page).
 <!-- END-DESC-STATS -->
 
 
-## Dataset Sturcture
+## Dataset Structure
 An example from the dataset looks as follows.
 
 <!-- START-SAMPLE -->
pyproject.toml CHANGED
@@ -3,19 +3,22 @@ name = "danish-dynaword"
 version = "1.0.4"
 description = "project code for the danish dynaword project"
 readme = "README.md"
-requires-python = ">=3.12"
+requires-python = ">=3.12,<3.13"
 dependencies = [
-    "datasets>=3.0.0",
-    "gitpython>=3.1.43",
-    "ipykernel>=6.29.5",
-    "matplotlib>=3.10.0",
+    # for commands
+    "datasets>=3.0.0",       # loading and validating datasets
+    "gitpython>=3.1.43",     # interacting with git
     "numpy>=2.2.0",
-    "plotnine>=0.14.3",
-    "pydantic>=2.10.4",
+    "pydantic>=2.10.4",      # validating schemas
+    "tabulate>=0.9.0",       # creating md table
+    "tomlkit>=0.13.2",       # reading toml
+    "transformers>=4.47.1",  # tokenization
+    # create.py
+    "spacy>=3.8.3",          # only needed for opensubtitles/create.py
+    # development
+    "ipykernel>=6.29.5",
+    # test
     "pytest>=8.3.4",
+    # formatting
     "ruff>=0.8.3",
-    "seaborn>=0.13.2",
-    "tabulate>=0.9.0",
-    "tomlkit>=0.13.2",
-    "transformers>=4.47.1",
 ]
src/git_utilities.py CHANGED
@@ -11,16 +11,21 @@ def get_current_revision(repo_path: Path = repo_path) -> str:
     return commit_id
 
 
-def get_latest_revision(path: Path, repo_path=repo_path) -> str:
+def get_latest_revision(path: Path, repo_path=repo_path) -> str | None:
     repo = Repo(repo_path)
 
     if not path.exists():
         raise ValueError("path does not exist.")
 
-    last_commit_for_file = next(repo.iter_commits(paths=path, max_count=1))
-    return last_commit_for_file.hexsha
+    try:
+        last_commit_for_file = next(repo.iter_commits(paths=path, max_count=1))
+        return last_commit_for_file.hexsha
+    except StopIteration:
+        return None
 
 
-def check_is_ancestor(ancestor_rev: str, rev: str, repo_path=repo_path) -> bool:
+def check_is_ancestor(ancestor_rev: str, rev: str | None, repo_path=repo_path) -> bool:
+    if rev is None:  # e.g. when changes have not been committed yet
+        return False
     repo = Repo(repo_path)
     return repo.is_ancestor(repo.commit(ancestor_rev), repo.commit(rev))
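
A hedged usage sketch of the updated helpers (import path and arguments are illustrative, not part of the commit):

```py
from pathlib import Path

from git_utilities import check_is_ancestor, get_latest_revision

# For a path with no commit history yet, the latest revision is now None...
latest = get_latest_revision(Path("data/opensubtitles"))

# ...and check_is_ancestor returns False for a None revision instead of raising.
is_ancestor = check_is_ancestor(ancestor_rev="ca7fb7b", rev=latest)
```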
src/tests/test_load.py CHANGED
@@ -10,6 +10,7 @@ def test_dataset_loads(repo_path: Path):
     sample = next(iter(ds))
     assert isinstance(sample, dict)
 
+
 # TODO
 # def test_all_datasets_in_yaml(repo_path: Path, readme_yaml_header: dict[str, Any]):
 #     configs = readme_yaml_header["configs"]
src/update_descriptive_statistics.py CHANGED
@@ -211,7 +211,9 @@ def create_main_table(repo_path: Path = repo_path) -> pd.DataFrame:
         dataset / "descriptive_stats.json"
     )
 
-    short_description = get_tag_content(body, tag="SHORT DESCRIPTION").strip()[:-1]  # to exclude "."
+    short_description = get_tag_content(body, tag="SHORT DESCRIPTION").strip()[
+        :-1
+    ]  # to exclude "."
     license, license_name = frontmatter["license"], frontmatter["license_name"]
 
     table["Source"] += [f"[{dataset.name}]"]
uv.lock CHANGED
The diff for this file is too large to render. See raw diff