---
license: other
configs:
- config_name: default
data_files:
- split: train
path: 'data/*/*.parquet'
- config_name: retsinformationdk
data_files:
- split: train
path: data/retsinformationdk/*.parquet
- config_name: ep
data_files:
- split: train
path: data/ep/*.parquet
- config_name: ft
data_files:
- split: train
path: data/ft/*.parquet
- config_name: wikisource
data_files:
- split: train
path: data/wikisource/*.parquet
- config_name: spont
data_files:
- split: train
path: data/spont/*.parquet
- config_name: tv2r
data_files:
- split: train
path: data/tv2r/*.parquet
- config_name: adl
data_files:
- split: train
path: data/adl/*.parquet
- config_name: hest
data_files:
- split: train
path: data/hest/*.parquet
- config_name: skat
data_files:
- split: train
path: data/skat/*.parquet
- config_name: dannet
data_files:
- split: train
path: data/dannet/*.parquet
- config_name: retspraksis
data_files:
- split: train
path: data/retspraksis/*.parquet
- config_name: wikibooks
data_files:
- split: train
path: data/wikibooks/*.parquet
- config_name: jvj
data_files:
- split: train
path: data/jvj/*.parquet
- config_name: gutenberg
data_files:
- split: train
path: data/gutenberg/*.parquet
- config_name: botxt
data_files:
- split: train
path: data/botxt/*.parquet
- config_name: depbank
data_files:
- split: train
path: data/depbank/*.parquet
- config_name: naat
data_files:
- split: train
path: data/naat/*.parquet
- config_name: synne
data_files:
- split: train
path: data/synne/*.parquet
- config_name: wiki
data_files:
- split: train
path: data/wiki/*.parquet
- config_name: relig
data_files:
- split: train
path: data/relig/*.parquet
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Gigaword
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
# Danish Gigaword 2
*Version*: 2.0.0
*License*: Varies by source (see [Source Data](#source-data))
## Table of Contents
- [Danish Gigaword 2](#danish-gigaword-2)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Loading the dataset](#loading-the-dataset)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Contributing the dataset](#contributing-the-dataset)
- [Citation Information](#citation-information)
## Dataset Description
This is the second version of the Danish Gigaword corpus. It is intended to be continually updated with new data sources and is currently a work in progress.
### Dataset Summary
The Danish Gigaword Corpus contains text spanning several domains and forms.
### Loading the dataset
```py
from datasets import load_dataset

name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train")
sample = ds[1]  # see "Data Instances" below

# or stream the dataset instead of downloading it in full
ds = load_dataset(name, split="train", streaming=True)
sample = next(iter(ds))
```
## Dataset Structure
The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.
### Data Instances
Each entry in the dataset consists of a single text with associated metadata:
```py
{
'text': 'Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.',
'source': 'wiki',
'id': 'wiki_366127',
'added': '2021-03-28',
'created': '2019-01-01, 2021-01-01',
'metadata':
{'domain': 'Wiki & Books',
'license': 'Creative Commons Legal Code\n\nCC0 1.0 Universal', 'source-pretty': 'Wikipedia'
}
}
```
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `id` (`str`): A unique identifier for each document.
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range during which the document was originally created.
- `metadata/license` (`str`): The license of the document; licenses vary according to the source.
- `metadata/domain` (`str`): The domain of the source.
- `metadata/source-pretty` (`str`): The long-form version of the short source name.
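Note that `created` packs the range into a single comma-separated string. A minimal sketch of how downstream code might split it into proper dates (the `parse_created` helper is illustrative, not part of the dataset):

```python
from datetime import date


def parse_created(created: str) -> tuple[date, date]:
    """Split the comma-separated 'created' range into (start, end) dates."""
    start, end = (date.fromisoformat(part.strip()) for part in created.split(","))
    return start, end


# format taken from the example under "Data Instances"
start, end = parse_created("2019-01-01, 2021-01-01")
```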
### Data Splits
The entire corpus is provided in the `train` split.
## Dataset Creation
### Source Data
Below follows a brief overview of the sources in the corpus along with their individual licenses.
| Source | License |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| adl | Creative Commons Legal Code 1.0 Universal |
| botxt | Creative Commons Legal Code 1.0 Universal |
| dannet | [dannet license](https://cst.ku.dk/projekter/dannet/license.txt) |
| depbank | Attribution-ShareAlike 4.0 International |
| ep | Creative Commons Legal Code 1.0 Universal |
| ft | Creative Commons Legal Code 1.0 Universal |
| gutenberg | [gutenberg license](https://www.gutenberg.org/policy/license.html) |
| hest | Creative Commons Legal Code 1.0 Universal |
| jvj | Attribution-ShareAlike 4.0 International |
| naat | Creative Commons Legal Code 1.0 Universal |
| relig | Creative Commons Legal Code 1.0 Universal |
| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." (In brief: laws, administrative regulations, court decisions and similar official documents are not subject to copyright; this does not, however, apply to works appearing as independent contributions within such documents.) |
| retspraksis | Creative Commons Legal Code 1.0 Universal |
| skat | Creative Commons Legal Code 1.0 Universal |
| spont | Creative Commons Legal Code 1.0 Universal |
| synne | Creative Commons Legal Code 1.0 Universal |
| tv2r | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
| wiki | Creative Commons Legal Code 1.0 Universal |
| wikibooks | Creative Commons Legal Code 1.0 Universal |
| wikisource | Creative Commons Legal Code 1.0 Universal |
These sources correspond to the following top-level domains in the dataset:
```python
# mapping from domain to top-level domain
domain_mapping_dict = {
"retsinformationdk": "Legal",
"skat": "Legal",
"retspraksis": "Legal",
"hest": "Social Media",
"cc": "Web",
"adl": "Wiki & Books",
"botxt": "Other",
"danavis": "News",
"dannet": "dannet",
"depbank": "Other",
"ep": "Conversation",
"ft": "Conversation",
"gutenberg": "Wiki & Books",
"jvj": "Wiki & Books",
"naat": "Conversation",
"opensub": "Conversation",
"relig": "Wiki & Books",
"spont": "Conversation",
"synne": "Other",
"tv2r": "News",
"wiki": "Wiki & Books",
"wikibooks": "Wiki & Books",
"wikisource": "Wiki & Books",
"twfv19": "Social Media", # not present in this version of the dataset
}
```
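For per-domain statistics it can help to invert this mapping and group the sources under each top-level domain. A small sketch, shown here with an excerpt of the dictionary above:

```python
from collections import defaultdict

# excerpt of domain_mapping_dict above; the full dictionary works identically
domain_mapping_dict = {
    "retsinformationdk": "Legal",
    "skat": "Legal",
    "hest": "Social Media",
    "wiki": "Wiki & Books",
}

# invert: top-level domain -> list of source names
sources_by_domain = defaultdict(list)
for source, domain in domain_mapping_dict.items():
    sources_by_domain[domain].append(source)
```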
And the following mapping translates between the short form and the long form of the source name
```python
# mapping from domain to its long name format
longname_mapping_dict = {
"retsinformationdk": "retsinformation.dk (Danish legal information)",
"skat": "Skat (Danish tax authority)",
"retspraksis": "retspraksis (Danish legal information)",
"hest": "Hestenettet (Danish debate forum)",
"cc": "Common Crawl",
"adl": "Archive for Danish Literature",
"botxt": "Bornholmsk (Danish dialect)",
"danavis": "Danish daily newspapers",
"dannet": "DanNet (Danish WordNet)",
"depbank": "Danish Dependency Treebank",
"ep": "European Parliament",
"ft": "Folketinget (Danish Parliament)",
"gutenberg": "Gutenberg",
"jvj": "Johannes V. Jensen (Danish author/poet)",
"naat": "NAAT",
"opensub": "Open Subtitles",
"relig": "Religious texts",
"spont": "Spontaneous speech",
"synne": "Synderjysk (Danish dialect)",
"tv2r": "TV 2 Radio (Danish news)",
"wiki": "Wikipedia",
"wikibooks": "Wikibooks",
"wikisource": "Wikisource",
"twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)", # not present in this version of the dataset
}
```
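Together, the two mappings reconstruct the `metadata` dictionary shown under [Data Instances](#data-instances). A sketch using abbreviated copies of both dictionaries (the `source_info` name is ours, chosen for illustration):

```python
# abbreviated copies of the two mappings above
domain_mapping_dict = {"wiki": "Wiki & Books", "hest": "Social Media"}
longname_mapping_dict = {"wiki": "Wikipedia", "hest": "Hestenettet (Danish debate forum)"}

# combine into per-source metadata, mirroring the 'metadata' field of each document
source_info = {
    source: {"domain": domain, "source-pretty": longname_mapping_dict[source]}
    for source, domain in domain_mapping_dict.items()
}
```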
## Additional Information
### Contributing the dataset
We welcome contributions to the dataset, such as new sources and improved data filtering. To get started, please see [the contribution guidelines](CONTRIBUTING.md).
### Citation Information
The original version of Danish Gigaword was created as part of the following publication.
> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
```bibtex
@inproceedings{dagw,
title = {{The Danish Gigaword Corpus}},
author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
year = 2021,
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
publisher = {NEALT}
}
```
<!--
Todo:
add tests
- unique ids
- valid metadata
add ci:
- summary statistics
- tables
prettify:
- license as independent column
- ensure pretty_name is standard
- potentially remove some columns
-->