---
license: other
task_categories:
  - text-classification
pretty_name: OpenLID
size_categories:
  - 100M<n<1B
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
---

# Dataset Card for OpenLID (v2)

## Dataset Description

OpenLID-v2 is an updated version of the OpenLID dataset (see the CHANGELOG).

### Dataset Summary

The OpenLID-v2 dataset covers 200 language varieties and is designed for training language identification models. The majority of the source datasets were derived from news sites, Wikipedia, or religious texts, though some come from other domains (e.g. transcribed conversations, literature, or social media). A sample of each language in each source was manually audited to check that it was in the attested language (see the paper for details).

### Supported Tasks

This dataset is intended for training high-coverage language identification models. The language variety labels are compatible with the FLORES+ evaluation benchmark. We provide a script to prepare the OpenLID-v2 dataset for training a language identification model.
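The preparation script in this repository is the canonical way to get the data into training shape. Purely as an illustration, the sketch below shows how the rows could be written out as fastText-style training lines (`__label__<label> <text>`), a common format for language identification training; the dataset ID, output file name, and label convention are assumptions for this example, not part of the repository.

```python
# Illustrative sketch only: the repository's scripts/ directory contains the
# actual preparation pipeline. The dataset ID, output file name, and fastText
# label convention here are assumptions made for the example.
from datasets import load_dataset

ds = load_dataset("laurievb/OpenLID-v2", split="train", streaming=True)

with open("openlid-train.txt", "w", encoding="utf-8") as out:
    for row in ds:
        # fastText expects one example per line: "__label__<label> <text>"
        text = " ".join(row["text"].split())  # keep each example on a single line
        out.write(f"__label__{row['language']} {text}\n")
```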

### Languages

The dataset includes 200 language varieties with widely varying amounts of data: the largest class (English) contains 7.5 million lines, while the smallest (Yiddish) contains 923. The mean number of lines per language variety is 581,033.
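To inspect the class distribution yourself, a sketch like the following counts lines per language label and computes the mean lines per variety (the dataset ID is an assumption, and streaming over the full corpus is slow):

```python
# Sketch: per-language line counts and the mean lines per variety.
# The dataset ID is an assumption; iterating the whole corpus takes a while.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("laurievb/OpenLID-v2", split="train", streaming=True)
counts = Counter(row["language"] for row in ds)

print(counts.most_common(3))               # largest classes
print(sum(counts.values()) / len(counts))  # mean lines per language variety
```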

## Dataset Structure

### Data Instances

Each entry in the dataset consists of a line of data (text), a language label consisting of an ISO 639-3 language code plus an ISO 15924 script code (language), and a tag indicating the source (source).

```json
{
  "text": "¿Serás exaltada hasta el cielo?",
  "language": "spa_Latn",
  "source": "lti"
}
```
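A minimal sketch of reading a record with the `datasets` library (the dataset ID is an assumption):

```python
# Minimal sketch (dataset ID assumed). Streaming avoids downloading every
# parquet shard just to inspect a single record.
from datasets import load_dataset

ds = load_dataset("laurievb/OpenLID-v2", split="train", streaming=True)
record = next(iter(ds))
print(record["text"], record["language"], record["source"])
```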

### Data Splits

Only a train split is provided. The language variety labels are compatible with the FLORES+ evaluation benchmark.

## Dataset Creation

### Curation Rationale

Recent work has found that existing language identification algorithms perform poorly in practice compared to their test performance. The problem is particularly acute for low-resource languages: Kreutzer et al. (2022) found a positive Spearman rank correlation between data quality and language size for all of the LID-filtered multilingual datasets they studied. In addition, for a significant fraction of the language corpora they studied, less than half of the sentences were in the correct language. They point out that such low-quality data not only leads to poor performance in downstream tasks, but also contributes to 'representation washing', where the community is given a false view of the actual progress of low-resource natural language processing.

There are several open language identification models offering quick classification and high language coverage (e.g. CLD3, No Language Left Behind). However, to the best of our knowledge, none of the commonly-used scalable language identification systems make their training data public. OpenLID aims to address this gap by curating and combining sources of open training data for language identification and by auditing a sample of all languages in each source to check reliability.

OpenLID-v2 improves on OpenLID by updating the preprocessing script (particularly sentence segmentation), adding additional data to some underperforming languages, and changing the language variety labels for compatibility with FLORES+.

### Source Data

The majority of the source datasets were derived from news sites, Wikipedia, or religious texts, though some come from other domains (e.g. transcribed conversations, literature, or social media). Source and licensing information is available in the licenses directory in this repository.

#### Initial Data Collection and Normalisation

Our initial aim with OpenLID was to cover the same languages present in the FLORES-200 Evaluation Benchmark so that we could use this dataset for evaluation. However, during the curation process, we decided to exclude three languages (Akan, Modern Standard Arabic in Latin script, and Minangkabau in Arabic script). Further information on these design decisions is available in the OpenLID v1 paper.

Two of the authors carried out a manual audit of a random sample of all data sources and languages: one a native Bulgarian speaker (able to read Cyrillic and Latin scripts and Chinese characters), and the other a native English speaker (able to read Latin, Arabic and Hebrew scripts). For languages we knew, we checked that the sample was in the expected language. For unfamiliar languages in a script we could read, we compared the sample to the Universal Declaration of Human Rights or, failing that, to a sample of text from Wikipedia. We compared features of the text that are commonly used in language identification algorithms and are easy for humans to identify: similar diacritics, word lengths, common words, loan words matching the right cultural background, similar suffixes and prefixes, and vowel/consonant patterns. For scripts we could not read, we checked that all lines of the sample matched the script in the Universal Declaration of Human Rights.

We kept preprocessing minimal so that the process was as language agnostic as possible. The preprocessing script can be found in the scripts directory in this repository.
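The script in the scripts directory is the authoritative reference. Purely as a hypothetical illustration of the kind of minimal, language-agnostic cleaning described here, a filter might look like the sketch below; the normalisation form, length threshold, and deduplication rule are assumptions, not the authors' actual choices.

```python
# Hypothetical illustration of minimal, language-agnostic cleaning.
# This is NOT the repository's preprocessing script (see scripts/); the
# normalisation form, length threshold, and deduplication rule are assumptions.
import unicodedata
from typing import Iterable, Iterator, Optional


def clean_line(line: str) -> Optional[str]:
    line = unicodedata.normalize("NFC", line)  # canonical Unicode form
    line = " ".join(line.split())              # collapse whitespace
    return line if len(line) >= 3 else None    # drop near-empty lines


def preprocess(lines: Iterable[str]) -> Iterator[str]:
    seen = set()
    for raw in lines:
        cleaned = clean_line(raw)
        if cleaned and cleaned not in seen:    # exact-duplicate removal
            seen.add(cleaned)
            yield cleaned
```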

### Contributions

To contribute additional data to this dataset, please follow the contribution guidelines here.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset covers a number of under-served languages. This makes it a potentially useful resource, but because the amount of data and the range of domains covered are limited, care must be taken not to overclaim performance or coverage.

### Discussion of Biases

Our work aims to broaden natural language processing coverage by allowing practitioners to identify relevant data in more languages. However, we note that language identification is inherently a normative activity that risks excluding minority dialects, scripts, or entire microlanguages from a macrolanguage. Choosing which languages to cover may reinforce power imbalances, as only some groups gain access to language processing technologies.

In addition, errors in language identification can have a significant impact on downstream performance, particularly (as is often the case) when a system is used as a 'black box'. The performance of our classifier is not equal across languages, which could lead to worse downstream performance for particular groups. We mitigate this by providing per-class metrics.
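Per-class metrics of the kind referred to here can be computed with, for example, scikit-learn; the sketch below uses placeholder label lists, and the choice of library is an assumption rather than the project's actual evaluation code.

```python
# Sketch: per-class precision, recall, and F1 for LID predictions.
# `gold` and `pred` are placeholder label lists (e.g. from a FLORES+ evaluation).
from sklearn.metrics import classification_report

gold = ["spa_Latn", "eng_Latn", "yid_Hebr"]
pred = ["spa_Latn", "eng_Latn", "heb_Hebr"]

print(classification_report(gold, pred, zero_division=0))
```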

## Additional Information

The dataset was curated from the sources listed below by Laurie Burchell, Nikolay Bogoychev and Jaume Zaragoza-Bernabeu.

### Licensing Information

License considerations for each source are available in the licenses directory in this repository. All of the licences permit open use for non-commercial purposes.

If you view any part of this dataset as a violation of intellectual property rights, please let us know and we will remove it.

### Citation Information

If you use this dataset, please cite all of the authors of the source datasets listed in the citation file, as well as the OpenLID paper:

```bibtex
@inproceedings{burchell-etal-2023-open,
    title = "An Open Dataset and Model for Language Identification",
    author = "Burchell, Laurie  and
      Birch, Alexandra  and
      Bogoychev, Nikolay  and
      Heafield, Kenneth",
    editor = "Rogers, Anna  and
      Boyd-Graber, Jordan  and
      Okazaki, Naoaki",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-short.75",
    doi = "10.18653/v1/2023.acl-short.75",
    pages = "865--879",
    abstract = "Language identification (LID) is a fundamental step in many natural language processing pipelines. However, current LID systems are far from perfect, particularly on lower-resource languages. We present a LID model which achieves a macro-average F1 score of 0.93 and a false positive rate of 0.033{\%} across 201 languages, outperforming previous work. We achieve this by training on a curated dataset of monolingual data, which we audit manually to ensure reliability. We make both the model and the dataset available to the research community. Finally, we carry out detailed analysis into our model{'}s performance, both in comparison to existing open models and by language class.",
}
```