beki committed
Commit 676c910
1 Parent(s): 629e7d2

Update README.md

Files changed (1)
  1. README.md +244 -1
README.md CHANGED
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2003
pretty_name: CoNLL-2003
train-eval-index:
- config: conll2003
  task: token-classification
  task_id: entity_extraction
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    tokens: tokens
    ner_tags: tags
  metrics:
  - type: seqeval
    name: seqeval
---

# Dataset Card for "privy-small"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 9.78 MB
- **Total amount of disk used:** 14.41 MB

### Dataset Summary

The shared task of CoNLL-2003 concerns language-independent named entity recognition. The task concentrates on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.

The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word is placed on a separate line, and there is an empty line after each sentence. The first item on each line is a word, the second a part-of-speech (POS) tag, the third a syntactic chunk tag, and the fourth the named entity tag. The chunk tags and the named entity tags have the format I-TYPE, meaning that the word is inside a phrase of type TYPE. Only when two phrases of the same type immediately follow each other does the first word of the second phrase receive the tag B-TYPE, to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2 tagging scheme, whereas the original dataset uses IOB1.

For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
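
To make the tagging-scheme note concrete, the snippet below is a minimal, hypothetical sketch (not part of the dataset tooling) of how IOB1 tags from the original CoNLL-2003 release can be converted to the IOB2 scheme used here, where every entity opens with a `B-` tag:

```python
def iob1_to_iob2(tags):
    """Convert a sentence's IOB1 tags to IOB2 (every entity must start with B-)."""
    converted = []
    for i, tag in enumerate(tags):
        if tag.startswith("I-"):
            prev = converted[i - 1] if i > 0 else "O"
            # In IOB1 an I- tag may open an entity; in IOB2 it must continue one of the same type.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        converted.append(tag)
    return converted

print(iob1_to_iob2(["I-ORG", "I-ORG", "O", "I-PER"]))
# ['B-ORG', 'I-ORG', 'O', 'B-PER']
```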

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### conll2003

- **Size of downloaded dataset files:** 4.63 MB
- **Size of the generated dataset:** 9.78 MB
- **Total amount of disk used:** 14.41 MB

An example of 'train' looks as follows.

```
{
    "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
    "id": "0",
    "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
    "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```

The original data files contain `-DOCSTART-` lines that act as boundaries between documents; these lines carry no sentence content and are filtered out in this version of the dataset.
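
As a rough sketch (assuming the `conll2003` configuration described in this card is available through the 🤗 `datasets` library), such an instance can be loaded like this:

```python
from datasets import load_dataset

# Load the configuration named in the card metadata above.
dataset = load_dataset("conll2003")

example = dataset["train"][0]
print(example["tokens"][:5])    # first tokens of the sentence shown above
print(example["ner_tags"][:5])  # the matching integer NER labels
```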

### Data Fields

The data fields are the same among all splits.

#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
 'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
 'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
 'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
 'WP': 44, 'WP$': 45, 'WRB': 46}
```

- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
 'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
 'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```

- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:

```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
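
If the splits are loaded with the 🤗 `datasets` library, the integer labels can be mapped back to these tag names through the column's `ClassLabel` feature; a minimal sketch, assuming the `conll2003` configuration above:

```python
from datasets import load_dataset

dataset = load_dataset("conll2003")

# The *_tags columns are sequences of ClassLabel values, so the tagsets above
# can be recovered from the feature metadata rather than hard-coded.
ner_names = dataset["train"].features["ner_tags"].feature.names
print({name: idx for idx, name in enumerate(ner_names)})

example = dataset["train"][0]
print([ner_names[i] for i in example["ner_tags"]])  # e.g. ['O', 'B-ORG', 'I-ORG', 'O', ...]
```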

### Data Splits

| name      | train | validation | test |
|-----------|------:|-----------:|-----:|
| conll2003 | 14041 |       3250 | 3453 |
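
As a quick sanity check (again assuming the dataset loads through the 🤗 `datasets` library), the split sizes in the table can be reproduced as follows:

```python
from datasets import load_dataset

dataset = load_dataset("conll2003")
print({split: ds.num_rows for split, ds in dataset.items()})
# expected: {'train': 14041, 'validation': 3250, 'test': 3453}
```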

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:

> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.

The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):

> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.

### Citation Information

```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F. and
      De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```

### Contributions

Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset.