Tasks: Text Generation
Sub-tasks: language-modeling
Formats: parquet
Languages: Danish
Size: 1M - 10M
License:

KennethEnevoldsen committed: update opensubtitles
Files changed:
- README.md (+51 −29)
- data/opensubtitles/create.py (+2 −2)
- data/opensubtitles/descriptive_stats.json (+1 −1)
- data/opensubtitles/opensubtitles.md (+4 −0)
- data/opensubtitles/opensubtitles.parquet (+2 −2)
- descriptive_stats.json (+1 −1)
- src/update_descriptive_statistics.py (+21 −13)
- uv.lock (+1 −1)
README.md CHANGED
````diff
@@ -152,6 +152,11 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
 ## Dataset Description
 
 
+
+
+
+
+
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
 - **Number of samples**: 576.59K
@@ -160,6 +165,11 @@ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->
 <!-- END-DESC-STATS -->
 
 
+
+
+
+
+
 ### Dataset Summary
 
 The Danish dynaword is a continually developed collection of Danish free-form text datasets from various domains. It is intended to be continually updated with new data sources. If you would like to contribute a dataset see the [contribute section](#contributing-to-the-dataset)
@@ -213,6 +223,11 @@ Each entry in the dataset consists of a single text with associated metadata
 
 
 
+
+
+
+
+
 <!-- START-SAMPLE -->
 ```py
 {
@@ -246,6 +261,11 @@ An entry in the dataset consists of the following fields:
 
 
 
+
+
+
+
+
 ### Data Splits
 
 The entire corpus is provided in the `train` split.
@@ -282,62 +302,63 @@ Below follows a brief overview of the sources in the corpus along with their ind
 
 
 
+
 <!-- START-MAIN TABLE -->
 | Source | Description | N. Tokens | License |
 |:--------------------|:-----------------------------------------------------------------------------------------------------------------------------|:------------|:-----------------------|
+| [opensubtitles] | Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) | 271.89M | [CC-0] |
 | [retsinformationdk] | [retsinformation.dk](https://www.retsinformation.dk) (legal-information.dk) the official legal information system of Denmark | 516.54M | [Danish Copyright Law] |
-| [
+| [ep] | The Danish subsection of [Europarl](https://aclanthology.org/2005.mtsummit-papers.11/) | 100.89M | [CC-0] |
+| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
+| [wikisource] | The Danish subsection of [Wikisource](https://en.wikisource.org/wiki/Main_Page) | 5.34M | [CC-0] |
 | [spont] | Conversational samples collected as a part of research projects at Aarhus University | 1.56M | [CC-0] |
 | [tv2r] | Contemporary Danish newswire articles published between 2010 and 2019 | 21.67M | [CC-BY-SA 4.0] |
-| [
-| [
+| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
+| [hest] | Samples from the Danish debate forum www.heste-nettet.dk | 389.33M | [CC-0] |
+| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
+| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
+| [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
+| [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
+| [jvj] | The works of the Danish author and poet, [Johannes V. Jensen](https://da.wikipedia.org/wiki/Johannes_V._Jensen) | 3.55M | [CC-BY-SA 4.0] |
 | [gutenberg] | The Danish subsection from Project [Gutenberg](https://www.gutenberg.org) | 6.76M | [Gutenberg License] |
+| [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
 | [depbank] | The Danish subsection of the [Universal Dependencies Treebank](https://github.com/UniversalDependencies/UD_Danish-DDT) | 185.45K | [CC-BY-SA 4.0] |
-| [
-| [
+| [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
+| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
 | [wiki] | The Danish subsection of [wikipeadia](https://en.wikipedia.org/wiki/Main_Page) | 122.00M | [CC-0] |
-| [wikibooks] | The Danish Subsection of [Wikibooks](https://www.wikibooks.org) | 6.24M | [CC-0] |
 | [nordjyllandnews] | Articles from the Danish Newspaper [TV2 Nord](https://www.tv2nord.dk) | 37.91M | [CC-0] |
-| [adl] | Danish literature from 1700-2023 from the Archive for Danish Literature (ADL) | 58.49M | [CC-0] |
-| [retspraksis] | Case law or judical practice in Denmark derived from [Retspraksis](https://da.wikipedia.org/wiki/Retspraksis) | 57.08M | [CC-0] |
 | [relig] | Danish religious text from the 1700-2022 | 1.24M | [CC-0] |
-| [dannet] | [DanNet](https://cst.ku.dk/projekter/dannet) is a Danish WordNet | 1.52M | [DanNet 1.0 License] |
-| [synne] | Dataset collected from [synnejysk forening's website](https://www.synnejysk.dk), covering the Danish dialect sønderjysk | 52.51K | [CC-0] |
-| [naat] | Danish speeches from 1930-2022 | 286.68K | [CC-0] |
-| [botxt] | The Bornholmsk Ordbog Dictionary Projec | 847.97K | [CC-0] |
-| [ft] | Records from all meetings of The Danish parliament (Folketinget) in the parliament hall | 114.09M | [CC-0] |
-| [skat] | Skat is the Danish tax authority. This dataset contains content from its website skat.dk | 122.12M | [CC-0] |
 | **Total** | | 1.84B | |
 
+[opensubtitles]: data/opensubtitles/opensubtitles.md
 [retsinformationdk]: data/retsinformationdk/retsinformationdk.md
-[
+[ep]: data/ep/ep.md
+[ft]: data/ft/ft.md
+[wikisource]: data/wikisource/wikisource.md
 [spont]: data/spont/spont.md
 [tv2r]: data/tv2r/tv2r.md
-[
-[
+[adl]: data/adl/adl.md
+[hest]: data/hest/hest.md
+[skat]: data/skat/skat.md
+[dannet]: data/dannet/dannet.md
+[retspraksis]: data/retspraksis/retspraksis.md
+[wikibooks]: data/wikibooks/wikibooks.md
+[jvj]: data/jvj/jvj.md
 [gutenberg]: data/gutenberg/gutenberg.md
+[botxt]: data/botxt/botxt.md
 [depbank]: data/depbank/depbank.md
-[
-[
+[naat]: data/naat/naat.md
+[synne]: data/synne/synne.md
 [wiki]: data/wiki/wiki.md
-[wikibooks]: data/wikibooks/wikibooks.md
 [nordjyllandnews]: data/nordjyllandnews/nordjyllandnews.md
-[adl]: data/adl/adl.md
-[retspraksis]: data/retspraksis/retspraksis.md
 [relig]: data/relig/relig.md
-[dannet]: data/dannet/dannet.md
-[synne]: data/synne/synne.md
-[naat]: data/naat/naat.md
-[botxt]: data/botxt/botxt.md
-[ft]: data/ft/ft.md
-[skat]: data/skat/skat.md
 
 
 [CC-0]: https://creativecommons.org/publicdomain/zero/1.0/legalcode.en
 [CC-BY-SA 4.0]: https://creativecommons.org/licenses/by-sa/4.0/deed.en
 [Danish Copyright Law]: ./data/retsinformationdk/retsinformationdk.md#license-information
-[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
 [DanNet 1.0 License]: ./data/dannet/dannet.md#license-information
+[Gutenberg License]: ./data/gutenberg/gutenberg.md#license-information
 <!-- END-MAIN TABLE -->
 
 
@@ -352,6 +373,7 @@ Below follows a brief overview of the sources in the corpus along with their ind
 
 
 
+
 You can learn more about each dataset by pressing
 
 <!-- ### Quality Control
````
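As a quick sanity check, the per-source token counts in the main table should add up to its stated 1.84B total. A minimal sketch (counts transcribed from the table; K-suffixed rows converted to millions):

```python
# Per-source token counts from the main README table, in millions of tokens.
token_counts_m = {
    "opensubtitles": 271.89, "retsinformationdk": 516.54, "ep": 100.89,
    "ft": 114.09, "wikisource": 5.34, "spont": 1.56, "tv2r": 21.67,
    "adl": 58.49, "hest": 389.33, "skat": 122.12, "dannet": 1.52,
    "retspraksis": 57.08, "wikibooks": 6.24, "jvj": 3.55, "gutenberg": 6.76,
    "botxt": 0.84797, "depbank": 0.18545, "naat": 0.28668, "synne": 0.05251,
    "wiki": 122.00, "nordjyllandnews": 37.91, "relig": 1.24,
}

total_b = sum(token_counts_m.values()) / 1000  # millions -> billions
print(f"{total_b:.2f}B")  # 1.84B, matching the table's Total row
```

The sum (≈1,839.59M) also agrees with the `number_of_tokens` of 1,839,599,769 recorded in the top-level `descriptive_stats.json` below.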
data/opensubtitles/create.py CHANGED
```diff
@@ -76,7 +76,7 @@ column_order = [
 ]
 
 
-def convert_sample(example):
+def convert_sample(example: dict) -> dict:
     text = example["text"]
     if example["doc_id"] in sample_to_redact:
         nlp = spacy.blank("da")
@@ -84,7 +84,7 @@ def convert_sample(example):
         text = doc[:200].text  # first 200 words
 
     new_example = dict(
-        text_new=
+        text_new=text,
         id=example["doc_id"],
         source="opensubtitles",
         domain="Conversation",
    )
```
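The fix above passes the (possibly redacted) `text` through as `text_new`. The redaction step keeps only the first 200 tokens of flagged documents; a simplified stand-in sketch, with whitespace splitting in place of `spacy.blank("da")` tokenization and a hypothetical `doc_id` in `sample_to_redact`:

```python
# Sketch of the redaction logic in create.py: documents whose doc_id is
# flagged are cut to their first 200 tokens before conversion. The real
# script tokenizes with spacy.blank("da"); str.split() stands in here.
sample_to_redact = {"opensub_12345"}  # hypothetical flagged doc_id

def convert_sample(example: dict) -> dict:
    text = example["text"]
    if example["doc_id"] in sample_to_redact:
        tokens = text.split()
        text = " ".join(tokens[:200])  # first 200 words
    return dict(
        text_new=text,
        id=example["doc_id"],
        source="opensubtitles",
        domain="Conversation",
    )

long_doc = {"doc_id": "opensub_12345", "text": "ord " * 500}
print(len(convert_sample(long_doc)["text_new"].split()))  # 200
```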
data/opensubtitles/descriptive_stats.json CHANGED
```diff
@@ -1 +1 @@
-{"number_of_samples": 29820, "average_document_length": 26326.385915492956, "number_of_tokens": 271893009, "language": "dan, dansk, Danish", "revision": "
+{"number_of_samples": 29820, "average_document_length": 26326.385915492956, "number_of_tokens": 271893009, "language": "dan, dansk, Danish", "revision": "689399275f507812fa878f784339887182bb46e0"}
```
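The numeric fields in `descriptive_stats.json` can be recomputed from the raw documents. A minimal sketch of the presumed computation (`average_document_length` as mean character count; the tokenizer used by the real pipeline is not shown in this diff, so a whitespace word count stands in for `number_of_tokens`):

```python
import statistics

def descriptive_stats(documents: list[str]) -> dict:
    # Mirrors the numeric fields of descriptive_stats.json; token counting
    # is simplified to whitespace words for illustration.
    return {
        "number_of_samples": len(documents),
        "average_document_length": statistics.mean(len(d) for d in documents),
        "number_of_tokens": sum(len(d.split()) for d in documents),
    }

docs = ["hej med dig", "jeg hedder Kenneth"]
print(descriptive_stats(docs))
# {'number_of_samples': 2, 'average_document_length': 14.5, 'number_of_tokens': 6}
```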
data/opensubtitles/opensubtitles.md CHANGED
````diff
@@ -33,6 +33,7 @@ Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/v
 
 
 
+
 <!-- START-DESC-STATS -->
 - **Language**: dan, dansk, Danish
 - **Number of samples**: 29.82K
@@ -52,6 +53,7 @@ Danish subsection of [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/v
 
 
 
+
 ## Dataset Structure
 An example from the dataset looks as follows.
 
@@ -64,6 +66,7 @@ An example from the dataset looks as follows.
 
 
 
+
 <!-- START-SAMPLE -->
 ```py
 {
@@ -104,6 +107,7 @@ An entry in the dataset consists of the following fields:
 
 
 
+
 ### Additional Processing
 
 Due to copyright concern additional documents have been removed due to copyright concerns. These include:
````
data/opensubtitles/opensubtitles.parquet CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size 
+oid sha256:1c80228f2095281e8e1ce2339a071873299dee2912f83706bf271ea782a94b39
+size 496269823
```
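The parquet file is stored via Git LFS, so the diff above only changes the three-line pointer file (spec version, content hash, byte size), not the data itself. A small sketch parsing such a pointer, assuming the standard `key value` line format shown:

```python
def parse_lfs_pointer(pointer: str) -> dict:
    # Git LFS pointer files are "key value" lines; the oid carries its hash
    # algorithm as a prefix, e.g. "sha256:<hex digest>".
    fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1c80228f2095281e8e1ce2339a071873299dee2912f83706bf271ea782a94b39
size 496269823
"""
info = parse_lfs_pointer(pointer)
print(info["size"] / 1e6)  # payload is roughly 496 MB
```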
descriptive_stats.json CHANGED
```diff
@@ -1 +1 @@
-{"number_of_samples": 576589, "average_document_length": 9385.201002794018, "number_of_tokens": 1839599769, "language": "dan, dansk, Danish", "revision": "
+{"number_of_samples": 576589, "average_document_length": 9385.201002794018, "number_of_tokens": 1839599769, "language": "dan, dansk, Danish", "revision": "689399275f507812fa878f784339887182bb46e0"}
```
src/update_descriptive_statistics.py CHANGED
```diff
@@ -227,7 +227,12 @@ def create_parser():
 
 
 def create_main_table(repo_path: Path = repo_path) -> tuple[pd.DataFrame, str, str]:
-
+    frontmatter, _ = read_frontmatter_and_body(repo_path / "README.md")
+    datasets = [
+        cfg["config_name"]
+        for cfg in frontmatter["configs"]
+        if cfg["config_name"] != "default"
+    ]
 
     table = {
         "Source": [],
@@ -243,10 +248,11 @@ def create_main_table(repo_path: Path = repo_path) -> tuple[pd.DataFrame, str, s
     )
 
     for dataset in datasets:
-
+        dataset_path = repo_path / "data" / dataset
+        readme_path = dataset_path / f"{dataset_path.name}.md"
         frontmatter, body = read_frontmatter_and_body(readme_path)
         desc_stats = DescriptiveStatsOverview.from_disk(
-
+            dataset_path / "descriptive_stats.json"
         )
 
         short_description = get_tag_content(body, tag="SHORT DESCRIPTION").strip()[
@@ -254,14 +260,14 @@ def create_main_table(repo_path: Path = repo_path) -> tuple[pd.DataFrame, str, s
         ] # to exclude "."
         license, license_name = frontmatter["license"], frontmatter["license_name"]
 
-        table["Source"] += [f"[{
+        table["Source"] += [f"[{dataset_path.name}]"]
         readme_references += (
-            f"[{
+            f"[{dataset_path.name}]: data/{dataset_path.name}/{dataset_path.name}.md\n"
         )
 
         table["License"] += [f"[{license_name}]"]
         if license == "other":
-            license_references += f"[{license_name}]: ./data/{
+            license_references += f"[{license_name}]: ./data/{dataset_path.name}/{dataset_path.name}.md#license-information\n"
         table["Description"] += [short_description]
         table["N. Tokens"] += [desc_stats.number_of_tokens]
 
@@ -297,17 +303,19 @@ def main(
 ) -> None:
     logging.basicConfig(level=logging_level)
 
-    if dataset:
+    if dataset and dataset != "default":
         dataset_path = repo_path / "data" / dataset
-        update_statitics(
+        update_statitics(dataset_path, dataset_path.name, force=force)
         return
 
-
-
-
+    if dataset is None:
+        datasets = (repo_path / "data").glob("*")
+        for dataset_path in datasets:
+            update_statitics(dataset_path, dataset_path.name, force=force)
 
-
-
+    if dataset is None or dataset == "default":
+        update_statitics(repo_path, "default", "README.md", force=force)
+        update_main_table(repo_path)
 
 
 if __name__ == "__main__":
```
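The reworked `create_main_table` derives the dataset list from the README frontmatter's `configs` section instead of hard-coding it. A toy illustration of that filter, with a made-up frontmatter dict standing in for the result of `read_frontmatter_and_body(repo_path / "README.md")`:

```python
# Hypothetical frontmatter, shaped like the `configs` list in a Hugging Face
# dataset card; real values come from read_frontmatter_and_body.
frontmatter = {
    "configs": [
        {"config_name": "default", "data_files": "data/*/*.parquet"},
        {"config_name": "opensubtitles", "data_files": "data/opensubtitles/*.parquet"},
        {"config_name": "retsinformationdk", "data_files": "data/retsinformationdk/*.parquet"},
    ]
}

datasets = [
    cfg["config_name"]
    for cfg in frontmatter["configs"]
    if cfg["config_name"] != "default"  # "default" is the union config, not a source
]
print(datasets)  # ['opensubtitles', 'retsinformationdk']
```

This keeps the main table in sync with whatever configs the card declares, which is why `main` now also special-cases the `"default"` config when deciding what to update.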
uv.lock CHANGED
```diff
@@ -240,7 +240,7 @@ wheels = [
 
 [[package]]
 name = "danish-dynaword"
-version = "1.0.
+version = "1.0.5"
 source = { virtual = "." }
 dependencies = [
     { name = "datasets" },
```