---
license: other
configs:
- config_name: default
  data_files:
  - split: train
    path: 'data/*/*.parquet'
- config_name: retsinformationdk
  data_files:
  - split: train
    path: data/retsinformationdk/*.parquet
- config_name: ep
  data_files:
  - split: train
    path: data/ep/*.parquet
- config_name: ft
  data_files:
  - split: train
    path: data/ft/*.parquet
- config_name: wikisource
  data_files:
  - split: train
    path: data/wikisource/*.parquet
- config_name: spont
  data_files:
  - split: train
    path: data/spont/*.parquet
- config_name: tv2r
  data_files:
  - split: train
    path: data/tv2r/*.parquet
- config_name: adl
  data_files:
  - split: train
    path: data/adl/*.parquet
- config_name: hest
  data_files:
  - split: train
    path: data/hest/*.parquet
- config_name: skat
  data_files:
  - split: train
    path: data/skat/*.parquet
- config_name: dannet
  data_files:
  - split: train
    path: data/dannet/*.parquet
- config_name: retspraksis
  data_files:
  - split: train
    path: data/retspraksis/*.parquet
- config_name: wikibooks
  data_files:
  - split: train
    path: data/wikibooks/*.parquet
- config_name: jvj
  data_files:
  - split: train
    path: data/jvj/*.parquet
- config_name: gutenberg
  data_files:
  - split: train
    path: data/gutenberg/*.parquet
- config_name: botxt
  data_files:
  - split: train
    path: data/botxt/*.parquet
- config_name: depbank
  data_files:
  - split: train
    path: data/depbank/*.parquet
- config_name: naat
  data_files:
  - split: train
    path: data/naat/*.parquet
- config_name: synne
  data_files:
  - split: train
    path: data/synne/*.parquet
- config_name: wiki
  data_files:
  - split: train
    path: data/wiki/*.parquet
- config_name: relig
  data_files:
  - split: train
    path: data/relig/*.parquet
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- da
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Danish Gigaword
size_categories:
- 1M<n<10M
language_bcp47:
- da
- da-bornholm
- da-synnejyl
---
# Danish Gigaword 2

*Version*: 2.0.0

*License*: See the license of the individual source datasets under [Source Data](#source-data)

## Table of Contents
- [Danish Gigaword 2](#danish-gigaword-2)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Loading the dataset](#loading-the-dataset)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Source Data](#source-data)
  - [Additional Information](#additional-information)
    - [Contributing to the dataset](#contributing-to-the-dataset)
    - [Citation Information](#citation-information)

## Dataset Description
This dataset is an iteration of the Danish Gigaword corpus. It is intended to be continually updated with new data sources.
### Dataset Summary

The Danish Gigaword Corpus contains text spanning several domains and forms.
### Loading the dataset | |
```py | |
from datasets import load_dataset | |
name = "danish-foundation-models/danish-gigaword" | |
ds = load_dataset(name, split = "train") | |
sample = ds[1] # see "Data Instances" below | |
# or load by streaming the data | |
ds = load_dataset(name, split = "train", streaming=True) | |
sample = next(iter(ds)) | |
``` | |
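
Each source can also be loaded on its own, using the config names defined in the YAML header above (they match the source names under [Source Data](#source-data)); for example:

```py
from datasets import load_dataset

# Load a single source, e.g. the Danish Wikipedia subset ("wiki").
ds = load_dataset("danish-foundation-models/danish-gigaword", "wiki", split="train")
```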
## Dataset Structure

The dataset contains text from different sources which are thoroughly defined in [Source Data](#source-data). See the [homepage](https://gigaword.dk) or [paper](https://aclanthology.org/2021.nodalida-main.46.pdf) for more information.
### Data Instances

Each entry in the dataset consists of a single text with associated metadata:
```py
{
    "text": "SAMLEDE VÆRKER\n\nJEPPE AAKJÆR GYLDENDALSKE BOGHANDEL...",
    "source": "adl",
    "id": "adl_aakjaer06val",
    "added": "2020-09-14",
    "created": "1700-01-01, 2022-01-01",
    "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
    "domain": "Wiki & Books",
    "metadata": {"source-pretty": "Archive for Danish Literature"},
}
```
### Data Fields

An entry in the dataset consists of the following fields:

- `text` (`str`): The content of the document.
- `source` (`str`): The source of the document (see [Source Data](#source-data)).
- `id` (`str`): A unique identifier for each document.
- `added` (`str`): The date when the document was added to this collection.
- `created` (`str`): The date range within which the document was originally created.
- `license` (`str`): The license of the document. The licenses vary according to the source.
- `domain` (`str`): The domain of the source.
- `metadata/source-pretty` (`str`): The long-form version of the short-form source name.
- `metadata/*`: Potentially additional metadata.
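
To make the fields concrete, here is a minimal sketch of reading them from a streamed sample (the values in the comments are illustrative):

```py
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-gigaword", split="train", streaming=True)
sample = next(iter(ds))

# Top-level fields described above.
print(sample["source"])   # e.g. "adl"
print(sample["license"])  # the license of this particular document

# Nested metadata, e.g. the long-form source name.
print(sample["metadata"]["source-pretty"])  # e.g. "Archive for Danish Literature"
```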
### Data Splits

The entire corpus is provided in the `train` split.
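
Since only a `train` split is shipped, a held-out set has to be derived locally if one is needed; a minimal sketch using the built-in splitting in `datasets` (the 1% fraction and the seed are illustrative):

```py
from datasets import load_dataset

ds = load_dataset("danish-foundation-models/danish-gigaword", split="train")

# Hold out 1% of the documents for evaluation.
splits = ds.train_test_split(test_size=0.01, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```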
## Dataset Creation

### Source Data

Below follows a brief overview of the sources in the corpus along with their individual licenses.

| Source            | License                                                  |
| ----------------- | -------------------------------------------------------- |
| adl               | Creative Commons Legal Code 1.0 Universal                |
| botxt             | Creative Commons Legal Code 1.0 Universal                |
| dannet            | [dannet license]                                         |
| depbank           | Attribution-ShareAlike 4.0 International                 |
| ep                | Creative Commons Legal Code 1.0 Universal                |
| ft                | Creative Commons Legal Code 1.0 Universal                |
| gutenberg         | [gutenberg license]                                      |
| hest              | Creative Commons Legal Code 1.0 Universal                |
| jvj               | Attribution-ShareAlike 4.0 International                 |
| naat              | Creative Commons Legal Code 1.0 Universal                |
| relig             | Creative Commons Legal Code 1.0 Universal                |
| retsinformationdk | [Other (Danish Law)]                                     |
| retspraksis       | Creative Commons Legal Code 1.0 Universal                |
| skat              | Creative Commons Legal Code 1.0 Universal                |
| spont             | Creative Commons Legal Code 1.0 Universal                |
| synne             | Creative Commons Legal Code 1.0 Universal                |
| tv2r              | [Custom, Creative Commons Attribution 4.0 International] |
| wiki              | Creative Commons Legal Code 1.0 Universal                |
| wikibooks         | Creative Commons Legal Code 1.0 Universal                |
| wikisource        | Creative Commons Legal Code 1.0 Universal                |
[Custom, Creative Commons Attribution 4.0 International]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/tv2r/tv2r.md#license-information
[gutenberg license]: https://www.gutenberg.org/policy/license.html
[dannet license]: https://cst.ku.dk/projekter/dannet/license.txt
[Other (Danish Law)]: https://huggingface.co/datasets/danish-foundation-models/danish-gigaword-2/blob/main/data/retsinformationdk/retsinformationdk.md#license-information
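
Because licensing differs per source, downstream users may want to restrict the corpus to a subset of sources; a minimal sketch (the whitelist below is purely illustrative, consult the table above before choosing one):

```py
from datasets import load_dataset

# Illustrative whitelist; pick sources based on the license table above.
allowed_sources = {"adl", "botxt", "hest", "wiki"}

ds = load_dataset("danish-foundation-models/danish-gigaword", split="train")
subset = ds.filter(lambda row: row["source"] in allowed_sources)
```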
## Additional Information

### Contributing to the dataset

We welcome contributions to the dataset, such as new sources and better data filtering. To get started on contributing, please see [the contribution guidelines](CONTRIBUTING.md).
### Citation Information

The original version of Danish Gigaword was created as part of the following publication.

> Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).
```
@inproceedings{dagw,
    title = {{The Danish Gigaword Corpus}},
    author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
    year = 2021,
    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
    publisher = {NEALT}
}
```