Update README.md
An attempt at filling out the full [Dataset Card](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md?plain=1) for the UNER v1 dataset.
README.md
CHANGED
@@ -560,6 +560,152 @@ dataset_info:

# Dataset Card for Universal NER

### Dataset Summary

Universal NER (UNER) is an open, community-driven initiative aimed at creating gold-standard benchmarks for Named Entity Recognition (NER) across multiple languages.
The primary objective of UNER is to offer high-quality, cross-lingually consistent annotations, thereby standardizing and advancing multilingual NER research.
UNER v1 includes 19 datasets with named entity annotations, uniformly structured across 13 diverse languages.

### Supported Tasks and Leaderboards

- `token-classification`: The dataset can be used to train token classification models for NER. Some pre-trained models released as part of the UNER v1 release can be found at https://huggingface.co/universalner.
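
For a quick start, the data can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only: the `universalner/universal_ner` repository id and the `en_pud` configuration name are assumptions (the latter inferred from the `UNER_English-PUD` subset shown further down), so consult the dataset page for the authoritative list of configurations.

```python
# Illustrative loading sketch; the repo id and config name are assumptions,
# see the dataset page for the exact configuration names.
from datasets import load_dataset

uner_en = load_dataset("universalner/universal_ner", "en_pud")
example = uner_en["test"][0]
print(example["tokens"])
print(example["ner_tags"])
```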

### Languages

The dataset contains data in the following languages:

- Cebuano (`ceb`)
- Danish (`da`)
- German (`de`)
- English (`en`)
- Croatian (`hr`)
- Portuguese (`pt`)
- Maghrebi Arabic French (`qaf`)
- Russian (`ru`)
- Slovak (`sk`)
- Serbian (`sr`)
- Swedish (`sv`)
- Tagalog (`tl`)
- Chinese (`zh`)

## Dataset Structure

### Data Instances

An example from the `UNER_English-PUD` test set looks as follows:

```json
{
  "idx": "n01016-0002",
  "text": "Several analysts have suggested Huawei is best placed to benefit from Samsung's setback.",
  "tokens": [
    "Several", "analysts", "have", "suggested", "Huawei",
    "is", "best", "placed", "to", "benefit",
    "from", "Samsung", "'s", "setback", "."
  ],
  "ner_tags": [
    "O", "O", "O", "O", "B-ORG",
    "O", "O", "O", "O", "O",
    "O", "B-ORG", "O", "O", "O"
  ],
  "annotator": "blvns"
}
```

### Data Fields

- `idx`: the ID uniquely identifying the sentence (instance), if available.
- `text`: the full text of the sentence (instance).
- `tokens`: the text of the sentence (instance) split into tokens. Note that this tokenization is inherited from Universal Dependencies.
- `ner_tags`: the NER tag associated with each of the `tokens`.
- `annotator`: the annotator who provided the `ner_tags` for this particular instance.
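
As a quick sanity check of these fields, tokens and tags can be printed side by side (a minimal sketch, reusing the `example` instance from the loading snippet above):

```python
# Pair each token with its NER tag; `example` is a single dataset instance.
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag}")
```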

### Data Splits

TBD

## Dataset Creation

### Curation Rationale

TBD

### Source Data

#### Initial Data Collection and Normalization

We selected the Universal Dependencies (UD) corpora as the default base texts for annotation due to their extensive language coverage, pre-existing data collection, cleaning, tokenization, and permissive licensing.
This choice accelerates our process by providing a robust foundation.
By adding another annotation layer to the already detailed UD annotations, we facilitate verification within our project and enable comprehensive multilingual research across the entire NLP pipeline.
Given that UD annotations operate at the word level, we adopted the BIO annotation schema (specifically IOB2).
In this schema, words forming the beginning (B) or inside (I) of an entity of type X ∈ {PER, LOC, ORG} are tagged B-X or I-X, while all other words receive an O tag.
To maintain consistency, we preserve UD's original tokenization.
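
To make the schema concrete, the following helper (illustrative only, not part of the UNER tooling) decodes a sequence of IOB2 tags into `(start, end, type)` entity spans over token indices:

```python
# Illustrative IOB2 decoder: collect (start, end_exclusive, type) spans.
# A minimal sketch of the tagging schema, not part of the UNER release.
def iob2_to_spans(tags):
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        # Close the open span when the current tag does not continue it.
        if start is not None and tag != f"I-{etype}":
            spans.append((start, i, etype))
            start, etype = None, None
        # A B- tag always opens a new span.
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    if start is not None:  # flush a span that runs to the end
        spans.append((start, len(tags), etype))
    return spans

tags = ["O", "O", "O", "O", "B-ORG", "O", "O", "O", "O", "O",
        "O", "B-ORG", "O", "O", "O"]
print(iob2_to_spans(tags))  # -> [(4, 5, 'ORG'), (11, 12, 'ORG')]
```

Applied to the `UNER_English-PUD` instance above, this recovers the two ORG spans covering "Huawei" and "Samsung".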

Although UD serves as the default data source for UNER, the project is not restricted to UD corpora, particularly for languages not currently represented in UD.
The primary requirement for inclusion in the UNER corpus is adherence to the UNER tagging guidelines.
Additionally, we are open to converting existing NER efforts on UD treebanks to align with UNER.
In this initial release, we have included four datasets transferred from other manual annotation efforts on UD sources (for DA, HR, ARABIZI, and SR).

#### Who are the source language producers?

This information can be found on a per-dataset basis for each of the source Universal Dependencies datasets.

### Annotations

#### Annotation process

The data has been annotated by volunteers from the multilingual NLP community, following the UNER tagging guidelines.

#### Who are the annotators?

For the initial UNER annotation effort, we recruited volunteers from the multilingual NLP community via academic networks and social media.
The annotators were coordinated through a Slack workspace, with all contributors working on a voluntary basis.
We assume that annotators are either native speakers of the language they annotate or possess a high level of proficiency, although no formal language tests were conducted.
The selection of the 13 dataset languages in the first UNER release was driven by the availability of annotators.
As the project evolves, we anticipate the inclusion of additional languages and datasets as more annotators become available.

### Personal and Sensitive Information

TBD

## Considerations for Using the Data

### Social Impact of Dataset

TBD

### Discussion of Biases

TBD

### Other Known Limitations

TBD

## Additional Information

### Dataset Curators

TBD

### Licensing Information

UNER v1 is released under the terms of the [Creative Commons Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license.

### Citation Information

If you use this dataset, please cite the corresponding [paper](https://aclanthology.org/2024.naacl-long.243):

```bibtex
@inproceedings{mayhew2024universal,
  title={Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark},
  author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
  booktitle={Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
  year={2024},
  url={https://aclanthology.org/2024.naacl-long.243/}
}
```