---
license: other
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*/*.parquet
  - config_name: retsinformationdk
    data_files:
      - split: train
        path: data/retsinformationdk/*.parquet
  - config_name: ep
    data_files:
      - split: train
        path: data/ep/*.parquet
  - config_name: ft
    data_files:
      - split: train
        path: data/ft/*.parquet
  - config_name: wikisource
    data_files:
      - split: train
        path: data/wikisource/*.parquet
  - config_name: spont
    data_files:
      - split: train
        path: data/spont/*.parquet
  - config_name: tv2r
    data_files:
      - split: train
        path: data/tv2r/*.parquet
  - config_name: adl
    data_files:
      - split: train
        path: data/adl/*.parquet
  - config_name: hest
    data_files:
      - split: train
        path: data/hest/*.parquet
  - config_name: skat
    data_files:
      - split: train
        path: data/skat/*.parquet
  - config_name: dannet
    data_files:
      - split: train
        path: data/dannet/*.parquet
  - config_name: retspraksis
    data_files:
      - split: train
        path: data/retspraksis/*.parquet
  - config_name: wikibooks
    data_files:
      - split: train
        path: data/wikibooks/*.parquet
  - config_name: jvj
    data_files:
      - split: train
        path: data/jvj/*.parquet
  - config_name: gutenberg
    data_files:
      - split: train
        path: data/gutenberg/*.parquet
  - config_name: botxt
    data_files:
      - split: train
        path: data/botxt/*.parquet
  - config_name: depbank
    data_files:
      - split: train
        path: data/depbank/*.parquet
  - config_name: naat
    data_files:
      - split: train
        path: data/naat/*.parquet
  - config_name: synne
    data_files:
      - split: train
        path: data/synne/*.parquet
  - config_name: wiki
    data_files:
      - split: train
        path: data/wiki/*.parquet
  - config_name: relig
    data_files:
      - split: train
        path: data/relig/*.parquet
annotations_creators:
  - no-annotation
language_creators:
  - crowdsourced
language:
  - da
multilinguality:
  - monolingual
source_datasets:
  - original
task_categories:
  - text-generation
task_ids:
  - language-modeling
pretty_name: Danish Gigaword
language_bcp47:
  - da
  - da-bornholm
  - da-synnejyl
---

# Danish Gigaword 2

Version: 2.0.0

License: Varies by source; see the Source Data section below.


## Dataset Description

This dataset is the second version of the Danish Gigaword corpus. It will be continually updated with new data sources and is currently a work in progress.

### Dataset Summary

The Danish Gigaword Corpus contains text spanning several domains and forms.

### Loading the dataset

```python
from datasets import load_dataset

name = "danish-foundation-models/danish-gigaword"
ds = load_dataset(name, split="train")
sample = ds[1]  # see "Data Instances" below

# or stream the data without downloading it all at once
ds = load_dataset(name, split="train", streaming=True)
sample = next(iter(ds))
```

## Dataset Structure

The dataset contains text from different sources which are thoroughly defined in Source Data. See the homepage or paper for more information.

### Data Instances

Each entry in the dataset consists of a single text with associated metadata:

```python
{
    "text": "Vimoutiers er en kommune i departementet Orne i Basse-Normandie regionen i det nordvestlige Frankrig.\nCykelløbet Paris-Camembert slutter i Vimoutiers.\nHistorie.\nDen 14. juni 1944, under invasionen i Normandiet blev Vimoutiers bombarderet af allierede styrker. Landsbyen blev ødelagt og 220 civile dræbt.\nPersonligheder.\nPolitikeren Joseph Laniel (1889-1975) var født i Vomoutiers.",
    "source": "wiki",
    "id": "wiki_366127",
    "added": "2021-03-28",
    "created": "2019-01-01, 2021-01-01",
    "metadata": {
        "domain": "Wiki & Books",
        "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
        "source-pretty": "Wikipedia",
    },
}
```

### Data Fields

An entry in the dataset consists of the following fields:

- `text` (str): The content of the document.
- `source` (str): The source of the document (see Source Data).
- `id` (str): A unique identifier for each document.
- `added` (str): The date the document was added to this collection.
- `created` (str): The date range in which the document was originally created.
- `metadata/license` (str): The license of the document. Licenses vary by source.
- `metadata/domain` (str): The domain of the source.
- `metadata/source-pretty` (str): The long-form version of the short source name.
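As a quick sanity check, an entry can be validated against the documented schema. A minimal sketch, using the (abbreviated) entry from "Data Instances"; the helper `has_expected_schema` is hypothetical, not part of the dataset:

```python
# Entry copied (abbreviated with "...") from "Data Instances".
sample = {
    "text": "Vimoutiers er en kommune i departementet Orne ...",
    "source": "wiki",
    "id": "wiki_366127",
    "added": "2021-03-28",
    "created": "2019-01-01, 2021-01-01",
    "metadata": {
        "domain": "Wiki & Books",
        "license": "Creative Commons Legal Code\n\nCC0 1.0 Universal",
        "source-pretty": "Wikipedia",
    },
}

EXPECTED_FIELDS = {"text", "source", "id", "added", "created", "metadata"}
EXPECTED_METADATA_FIELDS = {"domain", "license", "source-pretty"}

def has_expected_schema(entry: dict) -> bool:
    """Return True if the entry exposes exactly the documented fields."""
    return (
        set(entry) == EXPECTED_FIELDS
        and isinstance(entry["metadata"], dict)
        and set(entry["metadata"]) == EXPECTED_METADATA_FIELDS
    )

print(has_expected_schema(sample))  # True
```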

### Data Splits

The entire corpus is provided in the train split.

## Dataset Creation

### Source Data

Below is a brief overview of the sources in the corpus along with their individual licenses.

| Source | License |
| ------ | ------- |
| adl | Creative Commons Legal Code 1.0 Universal |
| botxt | Creative Commons Legal Code 1.0 Universal |
| dannet | dannet license |
| depbank | Attribution-ShareAlike 4.0 International |
| ep | Creative Commons Legal Code 1.0 Universal |
| ft | Creative Commons Legal Code 1.0 Universal |
| gutenberg | gutenberg license |
| hest | Creative Commons Legal Code 1.0 Universal |
| jvj | Attribution-ShareAlike 4.0 International |
| naat | Creative Commons Legal Code 1.0 Universal |
| relig | Creative Commons Legal Code 1.0 Universal |
| retsinformationdk | Danish Copyright law at https://www.retsinformation.dk/forms/r0710.aspx?id=164796 states "§ 9. Love, administrative forskrifter, retsafgørelser og lignende offentlige aktstykker er ikke genstand for ophavsret. Stk. 2. Bestemmelsen i stk. 1 gælder ikke for værker, der fremtræder som selvstændige bidrag i de i stk. 1 nævnte aktstykker. Sådanne værker må dog gengives i forbindelse med aktstykket. Retten til videre udnyttelse afhænger af de i øvrigt gældende regler." (in short: laws, administrative regulations, judicial decisions, and similar official documents are not subject to copyright, though independent contributions within them are) |
| retspraksis | Creative Commons Legal Code 1.0 Universal |
| skat | Creative Commons Legal Code 1.0 Universal |
| spont | Creative Commons Legal Code 1.0 Universal |
| synne | Creative Commons Legal Code 1.0 Universal |
| tv2r | The owner of this content is TV2 Regionerne, Denmark. Creative Commons Attribution 4.0 International |
| wiki | Creative Commons Legal Code 1.0 Universal |
| wikibooks | Creative Commons Legal Code 1.0 Universal |
| wikisource | Creative Commons Legal Code 1.0 Universal |
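Since the license differs per source, downstream users may want to keep only documents from, say, CC0-licensed sources. A sketch of how such a filter could look; the `source_license` dict below is copied from the table above (non-CC0 entries abbreviated), and applying the filter to the loaded dataset via the `datasets` library's `Dataset.filter` is shown only as a comment:

```python
CC0 = "Creative Commons Legal Code 1.0 Universal"

# License per source, copied from the table above.
source_license = {
    "adl": CC0,
    "botxt": CC0,
    "dannet": "dannet license",
    "depbank": "Attribution-ShareAlike 4.0 International",
    "ep": CC0,
    "ft": CC0,
    "gutenberg": "gutenberg license",
    "hest": CC0,
    "jvj": "Attribution-ShareAlike 4.0 International",
    "naat": CC0,
    "relig": CC0,
    "retsinformationdk": "Danish Copyright law (see table above)",
    "retspraksis": CC0,
    "skat": CC0,
    "spont": CC0,
    "synne": CC0,
    "tv2r": "Creative Commons Attribution 4.0 International",
    "wiki": CC0,
    "wikibooks": CC0,
    "wikisource": CC0,
}

cc0_sources = {s for s, lic in source_license.items() if lic == CC0}

# With the dataset loaded as `ds` (see "Loading the dataset"), one could run:
# ds_cc0 = ds.filter(lambda row: row["source"] in cc0_sources)
print(len(cc0_sources))  # 14
```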

These sources correspond to the following top-level domains in the dataset:

```python
# mapping from source to top-level domain
domain_mapping_dict = {
    "retsinformationdk": "Legal",
    "skat": "Legal",
    "retspraksis": "Legal",
    "hest": "Social Media",
    "cc": "Web",
    "adl": "Wiki & Books",
    "botxt": "Other",
    "danavis": "News",
    "dannet": "dannet",
    "depbank": "Other",
    "ep": "Conversation",
    "ft": "Conversation",
    "gutenberg": "Wiki & Books",
    "jvj": "Wiki & Books",
    "naat": "Conversation",
    "opensub": "Conversation",
    "relig": "Wiki & Books",
    "spont": "Conversation",
    "synne": "Other",
    "tv2r": "News",
    "wiki": "Wiki & Books",
    "wikibooks": "Wiki & Books",
    "wikisource": "Wiki & Books",
    "twfv19": "Social Media",  # not present in this version of the dataset
}
```
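Inverting this mapping gives, for each top-level domain, the sources it covers. A small self-contained sketch; the dict below repeats the mapping above, restricted to the sources present in this version of the dataset:

```python
from collections import defaultdict

# Mapping from source to top-level domain, restricted to sources present
# in this version of the dataset ("cc", "danavis", "opensub" and "twfv19"
# are omitted).
domain_mapping_dict = {
    "retsinformationdk": "Legal",
    "skat": "Legal",
    "retspraksis": "Legal",
    "hest": "Social Media",
    "adl": "Wiki & Books",
    "botxt": "Other",
    "dannet": "dannet",
    "depbank": "Other",
    "ep": "Conversation",
    "ft": "Conversation",
    "gutenberg": "Wiki & Books",
    "jvj": "Wiki & Books",
    "naat": "Conversation",
    "relig": "Wiki & Books",
    "spont": "Conversation",
    "synne": "Other",
    "tv2r": "News",
    "wiki": "Wiki & Books",
    "wikibooks": "Wiki & Books",
    "wikisource": "Wiki & Books",
}

# Group sources by their top-level domain.
sources_by_domain: dict[str, list[str]] = defaultdict(list)
for source, domain in domain_mapping_dict.items():
    sources_by_domain[domain].append(source)

print(sorted(sources_by_domain["Legal"]))  # ['retsinformationdk', 'retspraksis', 'skat']
```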

The following mapping translates between the short-form and long-form source names:

```python
# mapping from source to its long name format
longname_mapping_dict = {
    "retsinformationdk": "retsinformation.dk (Danish legal information)",
    "skat": "Skat (Danish tax authority)",
    "retspraksis": "retspraksis (Danish legal information)",
    "hest": "Hestenettet (Danish debate forum)",
    "cc": "Common Crawl",
    "adl": "Archive for Danish Literature",
    "botxt": "Bornholmsk (Danish dialect)",
    "danavis": "Danish daily newspapers",
    "dannet": "DanNet (Danish WordNet)",
    "depbank": "Danish Dependency Treebank",
    "ep": "European Parliament",
    "ft": "Folketinget (Danish Parliament)",
    "gutenberg": "Gutenberg",
    "jvj": "Johannes V. Jensen (Danish author/poet)",
    "naat": "NAAT",
    "opensub": "Open Subtitles",
    "relig": "Religious texts",
    "spont": "Spontaneous speech",
    "synne": "Synderjysk (Danish dialect)",
    "tv2r": "TV 2 Radio (Danish news)",
    "wiki": "Wikipedia",
    "wikibooks": "Wikibooks",
    "wikisource": "Wikisource",
    "twfv19": "Twitter Folketingsvalget 2019 (Danish election tweets)",  # not present in this version of the dataset
}
```

## Additional Information

### Contributing to the dataset

We welcome contributions to the dataset, such as new sources or improved data filtering. To get started, please see the contribution guidelines.

### Citation Information

The original version of Danish Gigaword was created as part of the following publication.

Derczynski, L., Ciosici, M. R., et al. (2021). The Danish Gigaword Corpus. In Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021).

```bibtex
@inproceedings{dagw,
  title = {{The Danish Gigaword Corpus}},
  author = {Leon Derczynski and Manuel R. Ciosici and Rebekah Baglini and Morten H. Christiansen and Jacob Aarup Dalsgaard and Riccardo Fusaroli and Peter Juel Henrichsen and Rasmus Hvingelby and Andreas Kirkedal and Alex Speed Kjeldsen and Claus Ladefoged and Finn Årup Nielsen and Jens Madsen and Malte Lau Petersen and Jonathan Hvithamar Rystrøm and Daniel Varab},
  year = 2021,
  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics},
  publisher = {NEALT}
}
```