KennethEnevoldsen committed on
Commit 87eb2bb · unverified · 1 Parent(s): 07513e2

updated name of the dataset

Files changed (2):
  1. README.md +58 -26
  2. pyproject.toml +2 -2
README.md CHANGED
@@ -99,60 +99,95 @@ task_categories:
  - text-generation
  task_ids:
  - language-modeling
- pretty_name: Danish Gigaword
  language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
  ---

- # Danish Gigaword 2

- *Version*: 2.0.0

- *License*: See the respective dataset

  ## Table of Contents
- - [Danish Gigaword 2](#danish-gigaword-2)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Loading the dataset](#loading-the-dataset)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
  - [Additional Information](#additional-information)
- - [Contributing the dataset](#contributing-the-dataset)
  - [Citation Information](#citation-information)

  ## Dataset Description

- This is iteration on the Danish Gigaword corpus. It is intended to be continually updated with new data sources.

  ### Dataset Summary

- The Danish Gigaword Corpus contains text spanning several domains and forms.

  ### Loading the dataset

  ```py
  from datasets import load_dataset

- name = "danish-foundation-models/danish-gigaword"
  ds = load_dataset(name, split = "train")
  sample = ds[1] # see "Data Instances" below

- # or load by streaming the data
  ds = load_dataset(name, split = "train", streaming=True)
- sample = next(iter(ds))
  ```

  ## Dataset Structure

- The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.

  ### Data Instances

@@ -192,6 +227,14 @@ The entire corpus is provided in the `train` split.

  ## Dataset Creation

  ### Source Data

  Below follows a brief overview of the sources in the corpus along with their individual license.
@@ -225,24 +268,13 @@ Below follows a brief overview of the sources in the corpus along with their ind
  [Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information

  ## Additional Information

- ### Contributing the dataset

  We welcome contributions to the dataset such as new sources, better data filtering and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md)

  ### Citation Information

- The original version of Danish Gigawords was created as a part of the following publication.
-
- > Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
-
- ```
- @inproceedings{dagw,
-   title = {{The Danish Gigaword Corpus}},
-   author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
-   year = 2021,
-   booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
-   publisher = {NEALT}
- }
- ```
 
  - text-generation
  task_ids:
  - language-modeling
+ pretty_name: Danish Dynaword
  language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
  ---

+ <!--
+ readme structure is inspired by:
+ https://github.com/huggingface/datasets/blob/main/templates/README_guide.md -->

+ # 🧨 Danish Dynaword
+
+ |              |                                        |
+ | ------------ | -------------------------------------- |
+ | **Language** | dan, dansk, Danish |
+ | **License**  | Permissive; see the respective dataset |
+ | **Models**   | For models trained using this data, see [danish-foundation-models](https://huggingface.co/danish-foundation-models) |
+ | **Contact**  | If you have questions about this project, please create an issue [here](https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/discussions) |
 
  ## Table of Contents
+ - [🧨 Danish Dynaword](#-danish-dynaword)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Loading the dataset](#loading-the-dataset)
+ - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Annotations](#annotations)
  - [Source Data](#source-data)
  - [Additional Information](#additional-information)
+ - [Contributing to the dataset](#contributing-to-the-dataset)
  - [Citation Information](#citation-information)

  ## Dataset Description

  ### Dataset Summary

+ Danish Dynaword is a continually developed collection of Danish free-form text datasets from various domains, intended to be continually updated with new data sources. If you would like to contribute a dataset, see the [contributing section](#contributing-to-the-dataset).
+

  ### Loading the dataset

  ```py
  from datasets import load_dataset

+ name = "danish-foundation-models/danish-dynaword"
  ds = load_dataset(name, split = "train")
  sample = ds[1] # see "Data Instances" below
+ ```

+ or load it by streaming the data:
+ ```py
  ds = load_dataset(name, split = "train", streaming=True)
+ dataset_iter = iter(ds)
+ sample = next(dataset_iter)
+ ```
+
+ You can also load a single subset at a time:
+ ```py
+ ds = load_dataset(name, "adl", split = "train")
+ ```
+
+ As Danish Dynaword is continually expanded and curated, you can make sure that you get the same dataset every time by specifying a revision:
+ ```py
+ ds = load_dataset(name, revision="{desired revision}")
  ```

+ ### Languages
+ This dataset includes the following languages:
+
+ - dan-Latn
+ - dan-Latn-bornholm
+ - dan-Latn-synnejyl
+
+ Language is denoted using [BCP-47](https://en.wikipedia.org/wiki/IETF_language_tag), with the language code from ISO 639-3 and the script code from ISO 15924; the last element denotes the regional variant.
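The tag layout described above can be illustrated with a short sketch. This is plain string handling for the three-part tags used here, not a full BCP-47 parser:

```python
# Minimal sketch: split the BCP-47-style tags used above into their parts.
# Illustrative only; a full BCP-47 parser handles many more subtag types.
def split_tag(tag: str) -> dict:
    parts = tag.split("-")
    info = {"language": parts[0]}  # ISO 639-3 code, e.g. "dan"
    if len(parts) > 1:
        info["script"] = parts[1]  # ISO 15924 code, e.g. "Latn"
    if len(parts) > 2:
        info["variant"] = parts[2]  # regional variant, e.g. "bornholm"
    return info

print(split_tag("dan-Latn-bornholm"))
# → {'language': 'dan', 'script': 'Latn', 'variant': 'bornholm'}
```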
+
  ## Dataset Structure

+ The dataset contains text from different sources, which are thoroughly defined in [Source Data](#source-data).

  ### Data Instances

  ## Dataset Creation

+ ### Curation Rationale
+
+ These datasets were collected and curated with the intention of making large quantities of Danish text data available. While the collection was motivated by the development of language models, it is likely to have multiple other uses, such as examining language development and differences across domains.
+
+ ### Annotations
+
+ This data generally contains no annotations beyond the metadata attached to each sample, such as which domain it belongs to.
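As a sketch of how such per-sample metadata can be used, one might tally samples per domain. The `source` field name here is a hypothetical placeholder; the actual schema is listed under "Data Fields":

```python
from collections import Counter

# Toy samples standing in for dataset rows; the "source" metadata field
# is an assumption for illustration, not the confirmed schema.
samples = [
    {"text": "…", "source": "adl"},
    {"text": "…", "source": "retsinformationdk"},
    {"text": "…", "source": "adl"},
]

# Count how many samples each source contributes.
counts = Counter(sample["source"] for sample in samples)
print(counts.most_common())  # → [('adl', 2), ('retsinformationdk', 1)]
```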
+
  ### Source Data

  Below follows a brief overview of the sources in the corpus along with their individual license.

  [Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information

+
  ## Additional Information

+ ### Contributing to the dataset

  We welcome contributions to the dataset such as new sources, better data filtering and so on. To get started on contributing please see [the contribution guidelines](CONTRIBUTING.md)

  ### Citation Information

+ This version expands upon existing dataset sources such as [Danish Gigaword](https://huggingface.co/danish-foundation-models). We recommend that you cite the source of each dataset when using it.
pyproject.toml CHANGED
@@ -1,7 +1,7 @@
  [project]
- name = "danish-gigaword-2"
  version = "1.0.2"
- description = "project code for the danish gigaword 2 project"
  readme = "README.md"
  requires-python = ">=3.13"
  dependencies = [
  [project]
+ name = "danish-dynaword"
  version = "1.0.2"
+ description = "project code for the danish dynaword project"
  readme = "README.md"
  requires-python = ">=3.13"
  dependencies = [