system HF staff committed on
Commit 79488eb
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +206 -0
  3. counter.py +165 -0
  4. dataset_infos.json +1 -0
  5. dummy/1.0.0/dummy_data.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,206 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - ur
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ - text-scoring
+ task_ids:
+ - semantic-similarity-scoring
+ - topic-classification
+ ---
+
+ # Dataset Card for counter
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** http://ucrel.lancs.ac.uk/textreuse/counter.php
+ - **Repository:** [More Information Needed]
+ - **Paper:** https://link.springer.com/article/10.1007%2Fs10579-016-9367-2
+ - **Leaderboard:** [More Information Needed]
+ - **Point of Contact:** [email protected]
+
+ ### Dataset Summary
+
+ The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at document level with three levels of reuse: wholly derived, partially derived and non derived.
+
+ ### Supported Tasks and Leaderboards
+
+ `other:text-reuse`
+
+ ### Languages
+
+ Urdu (`ur`)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Here is one example from the dataset:
+
+ ```
+ {"derived": {
+ "body": "میر پور(وقت نیوز) بنگلہ دیش نے 5 میچوں کی سیریز کےآ خری میچ میں بھی فتح حاصل کر کے سیریز میں وائٹ واش کر دیا،زمبابوے ایک میچ بھی نہ جیت سکا۔آخری میچ میں زمبابوے کے 129 رنز کا ہدف بنگال ٹائیگرز نے 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔بنگلہ دیش کے شیر بنگلہ سٹیڈیم میر پور میں کھیلے گئے آخری ایک روزہ میچ میں زمبابوے کے کپتان چکمبورا نے ٹاس جیت کے بینٹگ کا فیصلہ کیا جو ان کی ٹیم کیلئے ڈراؤنا خواب ثابت ہوا اور پوری ٹیم 30 اوورز میں 128 رنز بنا کر پویلین لوٹ گئی زمبابوے کی پہلی وکٹ 16 رنز پر گری جب سکندر رضا صرف 9 رنز بنا کر مشرقی مرتضی کی بال پر آؤٹ ہوئے اس کے بعد مساکد ازااور سباندا کی پارٹنرشپنے ٹیم کا سکور95 رنز تک پہنچا دیا ۔مساکدازا 52 رنز بنا کر جبیر الحسن کا شکار بنے جبکہ سباندا نے 37 رنز کی اننگز کھیلی اس کے بعد کئی بھی زمبابوے کا کھلاڑی جم کر نہ کھیل سکا۔بنگال ٹائیگرز کی جانب سے عمدہ باؤلنگ کے نتیجے میں کپتان چکمبورا سمیت 8 کھلاڑی ڈبل فیگر کراس نہ کر سکے ۔بنگلہ دیش کی جانب سے ایک روزہ میچوں میں ڈیبیو کرنے والے تیج السلام نے اپنے پہلے ہی میچ میں ہیٹرک کی اسلام نے 7 اوورز میں صرف 14 رنز دئے اور چار کھلاڑیوں کع آؤٹ کیا جبکہ شکیب الحسن نے 30 رنز دیکر 3 اور جبیر الحسن نے41 رنز دیکر2 کھلاڑیوں کو پویلین کی راہ دکھائی ۔ 128 رنز کے جواب میں بنگال ٹائیگرز نے بیٹنگ شروع کی مشکلات کا سامنا رہا ان کے بھی ابتدائی 3 کھلاڑی 47 رنز پر پویلین لوٹ گئے۔ تمیم اقبال 10، انعام الحق8 رنز بنا کر آؤٹ ہوئے،آل راؤنڈر شکیب الحسن بغیر کوئی رنز بنائیپویلین لوٹ گئے وکٹ کیپر مشفق الرحیم صرف 11 رنز بنا کر چتارہ کا شکار بن گئے۔محمد اللہ نے51 رنز کی میچ وننگ اننگز کھیلی جبکہ صابر رحمٰن13 رنز بنا کر ناٹ آؤٹ رہے۔ زمبابوے کی جانب سے چتارہ نے 3 اور پنیا نگارا نے 2 کھلاڑیوں کو آؤٹ کیا ۔فتح کے ساتھ بنگلہ دیش نے سیریز میں وائٹ واش کر دیا۔زمبابوے کی ٹیم کوئی میچ نہ جیت سکی،تیج السلام کو میچ کا بہترین ایوارڈ دیا گیا جبکہ سیریز کا بہترین کھلاڑی مشفق الرحیم کو قرار دیا گیا۔",
+ "classification": 1, # partially_derived
+ "domain": 1, # sports
+ "filename": "0001p.xml",
+ "headline": "بنگلہ دیش کا زمبابوے کا ون ڈے سیریز میں 5-0 سے وائٹ واش",
+ "newsdate": "02.12.14",
+ "newspaper": "daily_waqt",
+ "number_of_words_with_swr": 265,
+ "total_number_of_sentences": 13,
+ "total_number_of_words": 393},
+ "source": {
+ "body": "ڈھاکہ ۔ یکم دسمبر (اے پی پی) بنگلہ دیش نے زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا۔ سیریز کے پانچویں اور آخری ون ڈے میچ میں بنگال ٹائیگرز نے زمبابوے کو 5 وکٹوں سے شکست دے دی، مہمان ٹیم پہلے بیٹنگ کرتے ہوئے 128 رنز پر ڈھیر ہوگئی۔ تیج الاسلام نے کیریئر کے پہلے ون ڈے میچ میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی، انہوں نے 4 کھلاڑیوں کو آؤٹ کیا۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیلی۔ تفصیلات کے مطابق پیر کو شیر بنگلہ نیشنل سٹیڈیم، میرپور میں پانچویں اور آخری ون ڈے میچ میں زمبابوے کے کپتان ایلٹن چگمبورا نے ٹاس جیت کر پہلے بیٹنگ کا فیصلہ کیا جو غلط ثابت ہوا۔ زمبابوے کی پوری ٹیم ڈیبیو ون ڈے کھیلنے والے نوجوان لیفٹ آرم سپنر تیج الاسلام اور شکیب الحسن کی تباہ کن باؤلنگ کے باعث 30 اوورز میں 128 رنز پر ڈھیر ہوگئی۔ ہیملٹن ماساکڈزا 52 اور ووسی سبانڈا 37 رنز کے ساتھ نمایاں رہے، ان کے علاوہ کوئی بھی بلے باز دوہرا ہندسہ عبور نہ کر سکا۔ اپنا پہلا ون ڈے کھیلنے والے تیج الاسلام نے 11 رنز کے عوض 4 وکٹیں حاصل کیں جس میں شاندار ہیٹ ٹرک بھی شامل ہے، اس طرح وہ ڈیبیو میں ہیٹ ٹرک کرنے والے دنیا کے پہلے باؤلر بن گئے ہیں۔ شکیب الحسن نے تین اور زبیر حسین نے دو وکٹیں حاصل کیں۔ جواب میں بنگلہ دیش نے ہدف 24.3 اوورز میں 5 وکٹوں کے نقصان پر حاصل کر لیا۔ محمد اللہ نے 51 رنز کی ناقابل شکست اننگز کھیل کر ٹیم کی فتح میں اہم کردار ادا کیا۔ زمبابوے کی جانب سے ٹینڈائی چتارا نے تین اور تناشے پینگارا نے دو وکٹیں حاصل کیں۔",
+ "classification": 1, # partially_derived
+ "domain": 1, # sports
+ "filename": "0001.xml",
+ "headline": "بنگال ٹائیگرز نے کمزور زمبابوے کو ٹیسٹ کے بعد ون ڈے سیریز میں بھی وائٹ واش کر دیا، پانچویں اور آخری ون ڈے میچ میں بنگلہ دیش 5 وکٹوں سے فتح یاب، تیج الاسلام نے ڈیبیو ون ڈے میں ہیٹ ٹرک کرکے نئی تاریخ رقم کر دی",
+ "newsdate": "01.12.14",
+ "newspaper": "APP",
+ "number_of_words_with_swr": 245,
+ "total_number_of_sentences": 15,
+ "total_number_of_words": 352}}
+ ```
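+
+ The integer values of ```domain``` and ```classification``` in this example are `ClassLabel` features. Below is a minimal sketch of mapping them back to their label names (assuming the dataset is loaded under its Hub name, `counter`):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("counter", split="train")
+ example = ds[0]
+
+ # ClassLabel features store integers; int2str recovers the label names.
+ classification = ds.features["derived"]["classification"]
+ domain = ds.features["derived"]["domain"]
+ print(classification.int2str(example["derived"]["classification"]))  # e.g. "partially_derived"
+ print(domain.int2str(example["derived"]["domain"]))  # e.g. "sports"
+ ```
+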
+ ### Data Fields
+
+ ```source```: The source document
+
+ ```derived```: The derived document
+
+ For each pair of source and derived documents, we have the following fields (see the sketch after this list):
+
+ ```filename (str)```: Name of the file in the dataset
+
+ ```headline (str)```: Headline of the news item
+
+ ```body (str)```: Main text of the news item
+
+ ```total_number_of_words (int)```: Number of words in the document
+
+ ```total_number_of_sentences (int)```: Number of sentences in the document
+
+ ```number_of_words_with_swr (int)```: Number of words after stop word removal
+
+ ```newspaper (str)```: The newspaper in which the news item was published
+
+ ```newsdate (str)```: The date on which the news item was published, in DD.MM.YY format
+
+ ```domain (int)```: The category of the news item, one of: "business", "sports", "national", "foreign", "showbiz"
+
+ ```classification (int)```: The class of reuse, one of: Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND)
+
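+ A minimal sketch of iterating over these fields (it assumes ```newsdate``` always follows the DD.MM.YY format described above):
+
+ ```python
+ from datetime import datetime
+
+ from datasets import load_dataset
+
+ ds = load_dataset("counter", split="train")
+ for pair in ds.select(range(3)):
+     derived = pair["derived"]
+     # newsdate is a DD.MM.YY string, e.g. "02.12.14".
+     published = datetime.strptime(derived["newsdate"], "%d.%m.%y").date()
+     print(derived["filename"], published, derived["total_number_of_words"])
+ ```
+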
+ ### Data Splits
+
+ A single train split with 600 pairs of documents.
+
+ The corpus is composed of two main document types: (1) source documents and (2) derived documents. There are 1200 documents in total: 600 news agency articles (source documents) and 600 newspaper stories (derived documents). The corpus contains 275,387 words (tokens) in total, 21,426 unique words and 10,841 sentences. The average length of a source document is 227 words, while for derived documents it is 254 words.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Our main intention was to develop a standard benchmark resource for the evaluation of existing systems available for text reuse detection in general, and specifically for the Urdu language. To generate a corpus with realistic examples, we opted for the field of journalism. In journalism, the same news story is published in different newspapers in different forms. It is a standard practice followed by all newspapers (reporters and editors) to reuse (verbatim or modified) a news story released by a news agency.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The COUNTER corpus consists of news articles (source documents) released by five news agencies in Pakistan, i.e. Associated Press of Pakistan (APP), International News Network (INN), Independent News Pakistan (INP), News Network International (NNI) and South Asian News Agency (SANA). The corresponding news stories (derived documents) were extracted from nine daily published, large-circulation national newspapers of the All Pakistan Newspapers Society (APNS), which subscribe to these news agencies.
+ These include Nawa-e-Waqt, Daily Dunya, Express, Jang, Daily Waqt, Daily Insaf, Daily Aaj, Daily Islam and Daily Pakistan. All of them are part of the mainstream national press: long-established dailies with total circulation figures of over four million. News agency texts (source documents) were provided (in electronic form) by the news agencies on a daily basis when they released the news. Newspaper stories (derived documents) were collected by three volunteers over a period of six months (from July to December 2014). National, Foreign, Business, Sports and Showbiz were the domains targeted for data collection.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ The corpus has been annotated at the document level with three classes of reuse, i.e. Wholly Derived (WD), Partially Derived (PD) and Non Derived (ND).
+ The derived collection contains documents with various degrees of text reuse. Some of the newspaper stories (derived documents) are rewritten (either verbatim or paraphrased) from the news agency's text (source document), while others have been written by the journalists independently on their own. In the former case, source-derived document pairs are tagged as either Wholly Derived (WD) or Partially Derived (PD), depending on the volume of text reused from the news agency's text in creating the newspaper article. In the latter case, they are tagged as Non Derived (ND), as the journalists have not reused anything from the news agency's text but have developed and documented the story based on their own observations and findings.
+
+ The annotations were carried out in three phases: (1) training, (2) annotation, and (3) conflict resolution. During the training phase, annotators A and B manually annotated 60 document pairs, following a preliminary version of the annotation guidelines. A detailed meeting was held afterwards to discuss the problems and disagreements. It was observed that the highest number of disagreements were between PD and ND cases, as both annotators found it difficult to distinguish between these two classes; the difficulty lies in judging the threshold at which a text is so heavily paraphrased, or has so much new information added, that it becomes independently written (ND). Following the discussion, the annotation guidelines were slightly revised, and the first 60 annotation results were saved. In the annotation phase, the remaining 540 document pairs were manually examined by the two annotators (A and B). Both were asked to judge and classify (at document level) whether a document (newspaper story), depending on the volume of text rewritten from the source (news agency article), falls into one of the following categories:
+
+ - **Wholly Derived (WD)**: The news agency text is the only source for the reused newspaper text, which means it is a verbatim copy of the source. In this case, most of the reused text is a word-for-word copy of the source text.
+ - **Partially Derived (PD)**: The newspaper text has been derived from more than one news agency, or most of the text was paraphrased by the editor when rewriting from the news agency source. In this case, most parts of the derived document contain paraphrased text or new facts and figures added by the journalist's own findings.
+ - **Non Derived (ND)**: The news agency text has not been used in the production of the newspaper text (though words may still co-occur in both documents); it has completely different facts and figures, or is heavily paraphrased from the news agency's copy. In this case, the derived document is independently written and contains substantially more new text.
+
+ #### Who are the annotators?
+
+ The annotations were performed by three annotators (A, B and C), who were native Urdu speakers and experts in paraphrasing mechanisms. All three were graduates, experienced in text annotation, with an advanced level of Urdu.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @Article{Sharjeel2016,
+ author="Sharjeel, Muhammad
+ and Nawab, Rao Muhammad Adeel
+ and Rayson, Paul",
+ title="COUNTER: corpus of Urdu news text reuse",
+ journal="Language Resources and Evaluation",
+ year="2016",
+ pages="1--27",
+ issn="1574-0218",
+ doi="10.1007/s10579-016-9367-2",
+ url="http://dx.doi.org/10.1007/s10579-016-9367-2"
+ }
+ ```
counter.py ADDED
@@ -0,0 +1,165 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from __future__ import absolute_import, division, print_function
+
+ import xml.etree.ElementTree as ET
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @Article{Sharjeel2016,
+ author="Sharjeel, Muhammad
+ and Nawab, Rao Muhammad Adeel
+ and Rayson, Paul",
+ title="COUNTER: corpus of Urdu news text reuse",
+ journal="Language Resources and Evaluation",
+ year="2016",
+ pages="1--27",
+ issn="1574-0218",
+ doi="10.1007/s10579-016-9367-2",
+ url="http://dx.doi.org/10.1007/s10579-016-9367-2"
+ }
+ """
+
+ _DESCRIPTION = """\
+ The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at document level with three levels of reuse: wholly derived, partially derived and non derived.
+ """
+
+ _HOMEPAGE = "http://ucrel.lancs.ac.uk/textreuse/counter.php"
+
+ _LICENSE = (
+     "The corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. "
+ )
+
+ _DOWNLOAD_URL = "http://ucrel.lancs.ac.uk/textreuse/COUNTER.zip"
+
+ _NUM_EXAMPLES = 600
+
+ _CLASS_NAME_MAP = {"WD": "wholly_derived", "PD": "partially_derived", "ND": "not_derived"}
+
+
+ class Counter(datasets.GeneratorBasedBuilder):
+     """Corpus of Urdu News Text Reuse"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     def _info(self):
+         # Source and derived documents share the same schema.
+         features = datasets.Features(
+             {
+                 "source": {
+                     "filename": datasets.Value("string"),
+                     "headline": datasets.Value("string"),
+                     "body": datasets.Value("string"),
+                     "total_number_of_words": datasets.Value("int64"),
+                     "total_number_of_sentences": datasets.Value("int64"),
+                     "number_of_words_with_swr": datasets.Value("int64"),
+                     "newspaper": datasets.Value("string"),
+                     "newsdate": datasets.Value("string"),
+                     "domain": datasets.ClassLabel(
+                         names=[
+                             "business",
+                             "sports",
+                             "national",
+                             "foreign",
+                             "showbiz",
+                         ]
+                     ),
+                     "classification": datasets.ClassLabel(
+                         names=["wholly_derived", "partially_derived", "not_derived"]
+                     ),
+                 },
+                 "derived": {
+                     "filename": datasets.Value("string"),
+                     "headline": datasets.Value("string"),
+                     "body": datasets.Value("string"),
+                     "total_number_of_words": datasets.Value("int64"),
+                     "total_number_of_sentences": datasets.Value("int64"),
+                     "number_of_words_with_swr": datasets.Value("int64"),
+                     "newspaper": datasets.Value("string"),
+                     "newsdate": datasets.Value("string"),
+                     "domain": datasets.ClassLabel(
+                         names=[
+                             "business",
+                             "sports",
+                             "national",
+                             "foreign",
+                             "showbiz",
+                         ]
+                     ),
+                     "classification": datasets.ClassLabel(
+                         names=["wholly_derived", "partially_derived", "not_derived"]
+                     ),
+                 },
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_DOWNLOAD_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"data_dir": data_dir},
+             )
+         ]
+
+     def _generate_examples(self, data_dir):
+         """Yields examples."""
+
+         def parse_file(file):
+             # Each XML document stores the metadata as attributes of the root
+             # element and the text in <headline> and <body> children.
+             tree = ET.parse(file)
+             root = tree.getroot()
+             attributes = root.attrib
+             headline = root.find("headline").text
+             body = root.find("body").text
+             parsed = {
+                 "filename": attributes["filename"],
+                 "headline": headline,
+                 "body": body,
+                 "total_number_of_words": int(attributes["totalnoofwords"]),
+                 "total_number_of_sentences": int(attributes["totalnoofsentences"]),
+                 "number_of_words_with_swr": int(attributes["noofwordswithSWR"]),
+                 "newspaper": attributes["newspaper"],
+                 "newsdate": attributes["newsdate"],
+                 "domain": attributes["domain"],
+                 "classification": _CLASS_NAME_MAP[attributes["classification"]],
+             }
+             return parsed
+
+         base_path = Path(data_dir)
+         base_path = base_path / "COUNTER"
+         # Source news-agency articles are named NNNN.xml; the derived
+         # newspaper story for each pair is NNNNp.xml.
+         files = sorted(base_path.glob(r"[0-9][0-9][0-9][0-9].xml"))
+         for _id, file in enumerate(files):
+             example = {}
+             with file.open(encoding="utf-8") as f:
+                 source = parse_file(f)
+                 example["source"] = source
+
+             derived_file = base_path / (file.stem + "p" + file.suffix)
+             with derived_file.open(encoding="utf-8") as f:
+                 derived = parse_file(f)
+                 example["derived"] = derived
+             yield _id, example
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": " The COrpus of Urdu News TExt Reuse (COUNTER) corpus contains 1200 documents with real examples of text reuse from the field of journalism. It has been manually annotated at document level with three levels of reuse: wholly derived, partially derived and non derived.\n", "citation": "@Article{Sharjeel2016,\nauthor=\"Sharjeel, Muhammad\nand Nawab, Rao Muhammad Adeel\nand Rayson, Paul\",\ntitle=\"COUNTER: corpus of Urdu news text reuse\",\njournal=\"Language Resources and Evaluation\",\nyear=\"2016\",\npages=\"1--27\",\nissn=\"1574-0218\",\ndoi=\"10.1007/s10579-016-9367-2\",\nurl=\"http://dx.doi.org/10.1007/s10579-016-9367-2\"\n", "homepage": "http://ucrel.lancs.ac.uk/textreuse/counter.php", "license": "The corpus is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. ", "features": {"source": {"filename": {"dtype": "string", "id": null, "_type": "Value"}, "headline": {"dtype": "string", "id": null, "_type": "Value"}, "body": {"dtype": "string", "id": null, "_type": "Value"}, "total_number_of_words": {"dtype": "int64", "id": null, "_type": "Value"}, "total_number_of_sentences": {"dtype": "int64", "id": null, "_type": "Value"}, "number_of_words_with_swr": {"dtype": "int64", "id": null, "_type": "Value"}, "newspaper": {"dtype": "string", "id": null, "_type": "Value"}, "newsdate": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"num_classes": 5, "names": ["business", "sports", "national", "foreign", "showbiz"], "names_file": null, "id": null, "_type": "ClassLabel"}, "classification": {"num_classes": 3, "names": ["wholly_derived", "partially_derived", "not_derived"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "derived": {"filename": {"dtype": "string", "id": null, "_type": "Value"}, "headline": {"dtype": "string", "id": null, "_type": "Value"}, "body": {"dtype": "string", "id": null, "_type": "Value"}, "total_number_of_words": {"dtype": "int64", "id": null, "_type": "Value"}, "total_number_of_sentences": {"dtype": "int64", "id": null, "_type": "Value"}, "number_of_words_with_swr": {"dtype": "int64", "id": null, "_type": "Value"}, "newspaper": {"dtype": "string", "id": null, "_type": "Value"}, "newsdate": {"dtype": "string", "id": null, "_type": "Value"}, "domain": {"num_classes": 5, "names": ["business", "sports", "national", "foreign", "showbiz"], "names_file": null, "id": null, "_type": "ClassLabel"}, "classification": {"num_classes": 3, "names": ["wholly_derived", "partially_derived", "not_derived"], "names_file": null, "id": null, "_type": "ClassLabel"}}}, "post_processed": null, "supervised_keys": null, "builder_name": "counter", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2598872, "num_examples": 600, "dataset_name": "counter"}}, "download_checksums": {"http://ucrel.lancs.ac.uk/textreuse/COUNTER.zip": {"num_bytes": 1356306, "checksum": "c6df7ad79e03952801155d8e9b2d3a5fea2e2d8231d40f1f238082ce2c28e59e"}}, "download_size": 1356306, "post_processing_size": null, "dataset_size": 2598872, "size_in_bytes": 3955178}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b33668da5e6b77b9a97bcef281dd35e35ad34c54832bdf19e1159cd50cd13a07
+ size 20621