ccasimiro committed
Commit 7e1bcf7
1 parent: fe284f2

Upload dataset

Files changed (5)
  1. README.md +160 -0
  2. ancora-ca-ner.py +122 -0
  3. dev.conll +0 -0
  4. test.conll +0 -0
  5. train.conll +0 -0
README.md ADDED
---
languages:
- ca
---

# Named Entities from AnCora Corpus

## BibTeX citation

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

## Digital Object Identifier (DOI) and access to dataset files

https://doi.org/10.5281/zenodo.4529299

## Introduction

This is a dataset for Named Entity Recognition (NER) derived from the <a href="http://clic.ub.edu/corpus/">AnCora corpus</a> and adapted for machine learning and language model evaluation purposes.

Since multiwords (including named entities) in the original AnCora corpus are aggregated into a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"), we split them to align with the word-per-line format and added conventional <a href="https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)">Begin-Inside-Outside (IOB) tags</a> to mark and classify named entities. We did not filter out the different categories of NEs from AnCora (weak and strong). We made 6 minor edits by hand.

The AnCora corpus is used under the [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.

This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB).

### Supported Tasks and Leaderboards

Named Entity Recognition, Language Modelling

### Languages

CA - Catalan

### Directory structure

* dev.conll
* test.conll
* train.conll

## Dataset Structure

### Data Instances

Three two-column files, one for each split.

### Data Fields

Every file has two columns: the word form or punctuation symbol in the first one, and the corresponding IOB tag in the second one.

### Example:
<pre>
Fundació B-ORG
Privada I-ORG
Fira I-ORG
de I-ORG
Manresa I-ORG
ha O
fet O
un O
balanç O
de O
l' O
activitat O
del O
Palau B-LOC
Firal I-LOC
</pre>
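A file in this layout can be read back into sentences with a few lines of Python. This is a minimal sketch, not part of the dataset itself: the function name `read_conll` is illustrative, and it assumes the tab-separated columns and blank-line sentence boundaries that the loading script expects.

```python
def read_conll(lines):
    """Parse two-column (token<TAB>tag) lines into sentences.

    Blank lines separate sentences; returns a list of
    (tokens, tags) pairs.
    """
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # blank line: sentence boundary
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
        else:
            token, tag = line.split("\t")
            tokens.append(token)
            tags.append(tag)
    if tokens:  # flush the last sentence
        sentences.append((tokens, tags))
    return sentences

sample = "Fundació\tB-ORG\nPrivada\tI-ORG\n\nha\tO\n"
print(read_conll(sample.splitlines(True)))
```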

### Data Splits

One file for each split: train, development (used for evaluation) and test.

## Dataset Creation

### Methodology

We adapted the NER labels from the AnCora corpus to a word-per-line format.
Since multiwords in the original AnCora corpus are aggregated into a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"), we split them to align with this format and added conventional <a href="https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)">Begin-Inside-Outside (IOB) tags</a> to mark and classify named entities. We did not filter out the different categories of NEs from AnCora (weak and strong). We made 6 minor edits by hand.
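The splitting step described above can be sketched as follows. This is an illustrative reconstruction, not the actual conversion script: the helper name `split_multiword` and its signature are assumptions.

```python
def split_multiword(item, entity_type):
    """Split an underscore-joined multiword into (token, IOB tag) pairs.

    The first token receives a B- tag and the rest I- tags.
    """
    tokens = item.split("_")
    tags = ["B-" + entity_type] + ["I-" + entity_type] * (len(tokens) - 1)
    return list(zip(tokens, tags))

print(split_multiword("Ajuntament_de_Barcelona", "ORG"))
# → [('Ajuntament', 'B-ORG'), ('de', 'I-ORG'), ('Barcelona', 'I-ORG')]
```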

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

AnCora consists of a Catalan corpus (AnCora-CA) and a Spanish corpus (AnCora-ES), each of 500,000 tokens (some multi-word). The corpora are annotated for linguistic phenomena at different levels.
The AnCora corpus is mainly based on newswire texts. For more information, refer to Taulé, M., M.A. Martí, M. Recasens (2008). "AnCora: Multilevel Annotated Corpora for Catalan and Spanish", Proceedings of the 6th International Conference on Language Resources and Evaluation. http://www.lrec-conf.org/proceedings/lrec2008/pdf/35_paper.pdf

#### Who are the source language producers?

The Catalan AnCora corpus is compiled from articles from the following news outlets: <a href="https://www.efe.com">EFE</a>, <a href="https://www.acn.cat">ACN</a>, <a href="https://www.elperiodico.cat/ca/">El Periodico</a>.

### Annotations

#### Annotation process

We adapted the NER labels from the AnCora corpus to a token-per-line, two-column format.

#### Who are the annotators?

The original annotators of the AnCora corpus.

### Dataset Curators

Carlos Rodríguez and Carme Armentano, from BSC-CNS, did the conversion and curation.

### Personal and Sensitive Information

No personal or sensitive information is included.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Contact

Carlos Rodríguez-Penagos ([email protected]) and Carme Armentano-Oller ([email protected])

## License

<a rel="license" href="https://creativecommons.org/licenses/by/4.0/"><img alt="Attribution 4.0 International License" style="border-width:0" src="https://chriszabriskie.com/img/cc-by.png" width="100"/></a><br />This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
ancora-ca-ner.py ADDED
# Loading script for the AnCora NER dataset.
import datasets

logger = datasets.logging.get_logger(__name__)

_CITATION = """ """

_DESCRIPTION = """AnCora Catalan NER.
This is a dataset for Named Entity Recognition (NER) from the AnCora corpus adapted for
machine learning and language model evaluation purposes.
Since multiwords (including named entities) in the original AnCora corpus are aggregated as
a single lexical item using underscores (e.g. "Ajuntament_de_Barcelona"),
we split them to align with the word-per-line format, and added conventional Begin-Inside-Outside (IOB)
tags to mark and classify named entities.
We did not filter out the different categories of NEs from AnCora (weak and strong).
We made 6 minor edits by hand.
The AnCora corpus is used under the CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/) license.
This dataset was developed by BSC TeMU as part of the AINA project, and to enrich the Catalan Language Understanding Benchmark (CLUB).
"""

_HOMEPAGE = """https://zenodo.org/record/4762031"""

_URL = "https://huggingface.co/datasets/bsc/ancora-ca-ner/resolve/main/"
_TRAINING_FILE = "train.conll"
_DEV_FILE = "dev.conll"
_TEST_FILE = "test.conll"


class AncoraCaNerConfig(datasets.BuilderConfig):
    """Builder config for the AncoraCaNer dataset."""

    def __init__(self, **kwargs):
        """BuilderConfig for AncoraCaNer.
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(AncoraCaNerConfig, self).__init__(**kwargs)


class AncoraCaNer(datasets.GeneratorBasedBuilder):
    """AncoraCaNer dataset."""

    BUILDER_CONFIGS = [
        AncoraCaNerConfig(
            name="AncoraCaNer",
            version=datasets.Version("2.0.0"),
            description="AncoraCaNer dataset",
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "B-LOC",
                                "B-MISC",
                                "B-ORG",
                                "B-PER",
                                "I-LOC",
                                "I-MISC",
                                "I-ORG",
                                "I-PER",
                                "O",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        logger.info("⏳ Generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            guid = 0
            tokens = []
            ner_tags = []
            for line in f:
                if line.startswith("-DOCSTART-") or line == "" or line == "\n":
                    if tokens:
                        yield guid, {
                            "id": str(guid),
                            "tokens": tokens,
                            "ner_tags": ner_tags,
                        }
                        guid += 1
                        tokens = []
                        ner_tags = []
                else:
                    # AncoraCaNer columns are tab-separated
                    splits = line.split("\t")
                    tokens.append(splits[0])
                    ner_tags.append(splits[1].rstrip())
            # last example, if the file does not end with a blank line
            if tokens:
                yield guid, {
                    "id": str(guid),
                    "tokens": tokens,
                    "ner_tags": ner_tags,
                }
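Because `ner_tags` is declared as a `ClassLabel` feature, the built dataset stores tags as integer indices into the `names` list above. A pure-Python sketch of that encoding and decoding (no `datasets` dependency; the helper names `str2int` and `int2str` mirror the `ClassLabel` method names, and the label list is copied from `_info`):

```python
NER_LABELS = ["B-LOC", "B-MISC", "B-ORG", "B-PER",
              "I-LOC", "I-MISC", "I-ORG", "I-PER", "O"]

def str2int(tags):
    """Map IOB tag strings to ClassLabel-style integer ids."""
    return [NER_LABELS.index(t) for t in tags]

def int2str(ids):
    """Map integer ids back to IOB tag strings."""
    return [NER_LABELS[i] for i in ids]

print(str2int(["B-ORG", "I-ORG", "O"]))  # → [2, 6, 8]
print(int2str([0, 8]))                   # → ['B-LOC', 'O']
```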
dev.conll ADDED
The diff for this file is too large to render. See raw diff
 
test.conll ADDED
train.conll ADDED