Modalities: Text
Formats: parquet
Languages: Catalan
ArXiv: 2107.07903
Libraries: Datasets, pandas
License: cc-by-nc-nd-4.0

parquet-converter committed on
Commit 6965b89
1 Parent(s): c7cdb32

Update parquet files

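This commit replaces the JSON splits and the Python loading script with auto-converted Parquet files, so the dataset no longer needs to execute `teca.py`. A minimal loading sketch, assuming the repository id `projecte-aina/teca` taken from the loading script below:

```python
# A sketch: load the Parquet-converted TECA dataset.
# Split names follow the teca/teca-{train,validation,test}.parquet
# files introduced by this commit.
from datasets import load_dataset

ds = load_dataset("projecte-aina/teca")
print(ds)  # expected: DatasetDict with train, validation and test splits
```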
.gitattributes DELETED
@@ -1,30 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- test.json filter=lfs diff=lfs merge=lfs -text
- train.json filter=lfs diff=lfs merge=lfs -text
- dev.json filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,184 +0,0 @@
- ---
- YAML tags:
-
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - ca
- license:
- - cc-by-nc-nd-4.0
- multilinguality:
- - monolingual
- pretty_name: teca
- size_categories:
- - unknown
- source_datasets: []
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
-
- ---
-
- # Dataset Card for TE-ca
-
-
- ## Dataset Description
-
- - **Website:** https://zenodo.org/record/4761458
-
- - **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
-
- - **Point of Contact:** [Carlos Rodríguez-Penagos]([email protected]) and [Carme Armentano-Oller]([email protected])
-
-
- ### Dataset Summary
-
- TE-ca is a textual entailment dataset in Catalan. It contains 21,163 pairs of premises and hypotheses, annotated according to their inference relation (entailment, contradiction or neutral).
-
- This dataset was developed by [BSC TeMU](https://temu.bsc.es/) as part of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/), to enrich the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
-
- ### Supported Tasks and Leaderboards
-
- Textual entailment, text classification, language modelling
-
- ### Languages
-
- The dataset is in Catalan (`ca-CA`).
-
- ## Dataset Structure
-
- ### Data Instances
-
- Three JSON files, one for each split.
-
- ### Example:
-
- <pre>
-
- {
-   "id": 3247,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
-   "label": "0"
- },
- {
-   "id": 2825,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "Les persones migrades seran acollides a Marràqueix",
-   "label": "1"
- },
- {
-   "id": 2431,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
-   "label": "2"
- },
- </pre>
-
- ### Data Fields
-
- - premise: text
- - hypothesis: text related to the premise
- - label: relation between premise and hypothesis:
-   * 0: entailment
-   * 1: neutral
-   * 2: contradiction
-
- ### Data Splits
-
- * dev.json: 2116 examples
- * test.json: 2117 examples
- * train.json: 16930 examples
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- We created this dataset to contribute to the development of language models in Catalan, a low-resource language.
-
- ### Source Data
-
- Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from the [VilaWeb](https://www.vilaweb.cat) news site.
-
- #### Initial Data Collection and Normalization
-
- 12,000 sentences from the BSC [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349), together with 6,200 headlines from the Catalan news site [VilaWeb](https://www.vilaweb.cat), were chosen at random. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned three hypotheses (one per entailment category) from a team of native annotators.
-
- Some sentence pairs were excluded because of inconsistencies.
-
- #### Who are the source language producers?
-
- The Catalan Textual Corpus consists of several corpora gathered from web crawling and public corpora. More information can be found [here](https://doi.org/10.5281/zenodo.4519349).
-
- [VilaWeb](https://www.vilaweb.cat) is a Catalan online news site.
-
- ### Annotations
-
- #### Annotation process
-
- We commissioned three hypotheses (one per entailment category) from a team of annotators.
-
- #### Who are the annotators?
-
- The annotators are a team of native-speaker collaborators from two independent companies.
-
- ### Personal and Sensitive Information
-
- No personal or sensitive information is included.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- We hope this dataset contributes to the development of language models in Catalan, a low-resource language.
-
- ### Discussion of Biases
-
- [N/A]
-
- ### Other Known Limitations
-
- [N/A]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
-
- This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
-
-
- ### Licensing Information
-
- This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
-
- ### Citation Information
-
- ```
- @inproceedings{armengol-estape-etal-2021-multilingual,
-     title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
-     author = "Armengol-Estap{\'e}, Jordi and
-       Carrino, Casimiro Pio and
-       Rodriguez-Penagos, Carlos and
-       de Gibert Bonet, Ona and
-       Armentano-Oller, Carme and
-       Gonzalez-Agirre, Aitor and
-       Melero, Maite and
-       Villegas, Marta",
-     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.findings-acl.437",
-     doi = "10.18653/v1/2021.findings-acl.437",
-     pages = "4933--4946",
- }
- ```
-
- [DOI](https://doi.org/10.5281/zenodo.4529183)
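Given the label mapping above (0 entailment, 1 neutral, 2 contradiction), a hedged sketch of decoding one example, assuming the `projecte-aina/teca` repo id and that the Parquet conversion preserved the `ClassLabel` feature metadata; note the Hub names the dev split `validation`:

```python
# A sketch: read one TE-ca example and decode its integer label.
from datasets import load_dataset

ds = load_dataset("projecte-aina/teca", split="validation")
names = ds.features["label"].names  # ['entailment', 'neutral', 'contradiction']
example = ds[0]
print(example["premise"])
print(example["hypothesis"])
print(names[example["label"]])  # e.g. 'entailment' for label 0
```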
readme.md DELETED
@@ -1,165 +0,0 @@
- ---
- YAML tags:
- - copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
- ---
-
- # Dataset Card for TECA
-
-
- ## Dataset Description
-
- - **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
-
- - **Point of Contact:** Carlos Rodríguez-Penagos ([email protected]) and Carme Armentano-Oller ([email protected])
-
-
- ### Dataset Summary
-
- TECA is a textual entailment dataset in Catalan. It contains 21,163 pairs of premises and hypotheses, annotated according to their inference relation (entailment, contradiction or neutral).
-
- This dataset was developed by BSC TeMU as part of the AINA project and is intended as part of the Catalan Language Understanding Benchmark (CLUB).
-
- ### Supported Tasks and Leaderboards
-
- Textual entailment, text classification, language modelling
-
- ### Languages
-
- CA - Catalan
-
- ## Dataset Structure
-
- ### Data Instances
-
- Three JSON files, one for each split.
-
- ### Example:
-
- <pre>
-
- {
-   "id": 3247,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "S'acorden unes recomanacions per les persones migrades a Marràqueix",
-   "label": "0"
- },
- {
-   "id": 2825,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "Les persones migrades seran acollides a Marràqueix",
-   "label": "1"
- },
- {
-   "id": 2431,
-   "premise": "L'ONU adopta a Marràqueix un pacte no vinculant per les migracions",
-   "hypothesis": "L'acord impulsat per l'ONU lluny de tancar-se",
-   "label": "2"
- },
- </pre>
-
- ### Data Fields
-
- - premise: text
- - hypothesis: text related to the premise
- - label: relation between premise and hypothesis:
-   * 0: entailment
-   * 1: neutral
-   * 2: contradiction
-
- ### Data Splits
-
- * dev.json: 2116 examples
- * test.json: 2117 examples
- * train.json: 16930 examples
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- Some sentence pairs were excluded because of inconsistencies.
-
- ### Source Data
-
- Source sentences are extracted from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349) and from the [VilaWeb](https://www.vilaweb.cat) news site.
-
- #### Initial Data Collection and Normalization
-
- 12,000 sentences from the BSC Catalan Textual Corpus, together with 6,200 headlines from the Catalan news site VilaWeb, were chosen at random. We filtered them by different criteria, such as length and stand-alone intelligibility. For each selected text, we commissioned three hypotheses (one per entailment category) from a team of native annotators.
-
- #### Who are the source language producers?
-
- The Catalan Textual Corpus consists of several corpora gathered from web crawling and public corpora. More information [here](https://doi.org/10.5281/zenodo.4519349).
- [VilaWeb](https://www.vilaweb.cat) is a Catalan online news site.
-
- ### Annotations
-
- #### Annotation process
-
- We commissioned three hypotheses (one per entailment category) from a team of annotators.
-
- #### Who are the annotators?
-
- The annotators are a team of native-speaker collaborators from two independent companies.
-
- ### Personal and Sensitive Information
-
- No personal or sensitive information is included.
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- Casimiro Pio Carrino, Carlos Rodríguez and Carme Armentano, from BSC-CNS.
-
- ### Licensing Information
-
- This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
-
- ### Citation Information
-
- ```
- @inproceedings{armengol-estape-etal-2021-multilingual,
-     title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
-     author = "Armengol-Estap{\'e}, Jordi and
-       Carrino, Casimiro Pio and
-       Rodriguez-Penagos, Carlos and
-       de Gibert Bonet, Ona and
-       Armentano-Oller, Carme and
-       Gonzalez-Agirre, Aitor and
-       Melero, Maite and
-       Villegas, Marta",
-     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.findings-acl.437",
-     doi = "10.18653/v1/2021.findings-acl.437",
-     pages = "4933--4946",
- }
- ```
-
- [DOI](https://doi.org/10.5281/zenodo.4529183)
-
- ### Funding
-
- This work was funded by the [Catalan Ministry of the Vice-presidency, Digital Policies and Territory](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of the [Aina project](https://politiquesdigitals.gencat.cat/ca/tic/aina-el-projecte-per-garantir-el-catala-en-lera-digital/).
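As a sanity check on the Data Splits figures repeated in both cards, the 80/10/10 stratified split accounts for every pair:

```python
# A sketch: the split sizes from the card sum back to the full dataset.
train, dev, test = 16930, 2116, 2117
total = train + dev + test
print(total)               # 21163 pairs, as stated in the summary
print(round(total * 0.8))  # 16930 -> the 80% train share
print(total - train)       # 4233, halved into 2116 dev + 2117 test
```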
splitter.py DELETED
@@ -1,41 +0,0 @@
- import json
- import pandas as pd
- from sklearn.model_selection import train_test_split
-
- # both files downloaded from https://zenodo.org/record/4621378
- path_to_teca1 = 'dataset_te1.json'
- path_to_teca2 = 'dataset_te_vilaweb.json'
-
- # load data into pandas dataframes
- teca1 = pd.read_json(path_to_teca1)  # Shape: (14997, 4)
- teca2 = pd.read_json(path_to_teca2)  # Shape: (6166, 4)
- teca = pd.concat([teca1, teca2])  # Shape: (21163, 4)
-
- # remove the "id" column; remaining columns: ['premise', 'hypothesis', 'label']
- teca.drop(['id'], axis=1, inplace=True)
-
- # shuffle rows
- teca = teca.sample(frac=1).reset_index(drop=True)
-
- # stratified split with hardcoded percentages: 80% train, 10% dev, 10% test
- train, dev_test = train_test_split(teca, test_size=0.2, random_state=42, stratify=teca['label'])
- dev, test = train_test_split(dev_test, test_size=0.5, random_state=42, stratify=dev_test['label'])
-
- # report some stats
- print('### VALUE COUNTS TECA ###')
- print(teca['label'].value_counts())
- print('### VALUE COUNTS TRAIN ###')
- print(train['label'].value_counts())
- print('### VALUE COUNTS DEV ###')
- print(dev['label'].value_counts())
- print('### VALUE COUNTS TEST ###')
- print(test['label'].value_counts())
- print('train shape:', train.shape[0], ', dev shape:', dev.shape[0], ', test shape:', test.shape[0])
-
- # save train/dev/test sets as JSON files
- sets = {'train': train, 'dev': dev, 'test': test}
- for key in sets:
-     set_dict = sets[key].to_dict('records')
-     json_content = {"version": '1.0.1', "data": set_dict}
-     with open(key + '.json', 'w') as f:
-         json.dump(json_content, f)
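The script wraps each split in a {"version", "data"} object rather than a bare record array, so reading the output back requires going through the "data" key. A quick sketch of verifying one split against the sizes in the card:

```python
# A sketch: read back a split written by splitter.py and check its size.
import json

with open('dev.json') as f:
    dev = json.load(f)

print(dev['version'])          # '1.0.1'
print(len(dev['data']))        # expected 2116 examples, per the dataset card
print(dev['data'][0].keys())   # dict_keys(['premise', 'hypothesis', 'label'])
```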
splitter_with_ids.py DELETED
@@ -1,42 +0,0 @@
- import json
- import pandas as pd
- from sklearn.model_selection import train_test_split
-
- # both files downloaded from https://zenodo.org/record/4621378
- path_to_teca1 = 'dataset_te1.json'
- path_to_teca2 = 'dataset_te_vilaweb.json'
-
- teca1 = pd.read_json(path_to_teca1)  # Shape: (14997, 4)
- teca2 = pd.read_json(path_to_teca2)  # Shape: (6166, 4)
-
- # prefix each id with its source corpus so ids stay unique after concatenation
- teca1['id'] = 'te1_' + teca1['id'].astype(str)
- teca2['id'] = 'vila_' + teca2['id'].astype(str)
-
- teca = pd.concat([teca1, teca2])  # Shape: (21163, 4)
- # teca.drop(['id'], axis=1, inplace=True)  # would leave columns: ['premise', 'hypothesis', 'label']
- teca = teca.sample(frac=1).reset_index(drop=True)  # shuffle rows
-
- print('### VALUE COUNTS TECA ###')
- print(teca['label'].value_counts())
-
- # stratified split with hardcoded percentages: 80% train, 10% dev, 10% test
- train, dev_test = train_test_split(teca, test_size=0.2, random_state=42, stratify=teca['label'])
- dev, test = train_test_split(dev_test, test_size=0.5, random_state=42, stratify=dev_test['label'])
-
- print('### VALUE COUNTS TRAIN ###')
- print(train['label'].value_counts())
- print('### VALUE COUNTS DEV ###')
- print(dev['label'].value_counts())
- print('### VALUE COUNTS TEST ###')
- print(test['label'].value_counts())
- print('train shape:', train.shape[0], ', dev shape:', dev.shape[0], ', test shape:', test.shape[0])
-
- print(train.head())
-
- sets = {'train': train, 'dev': dev, 'test': test, 'full': teca}
-
- for key in sets:
-     set_dict = sets[key].to_dict('records')
-     json_content = {"version": '1.0.1', "data": set_dict}
-     with open(key + '.json', 'w') as f:
-         json.dump(json_content, f)
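Because this variant prefixes each id with its source ('te1_' or 'vila_'), provenance survives the shuffle and split. A hedged sketch of counting examples per source in the written train split:

```python
# A sketch: count examples per source corpus using the id prefixes
# ('te1_' for the Catalan Textual Corpus subset, 'vila_' for VilaWeb).
import json
import pandas as pd

with open('train.json') as f:
    train = pd.DataFrame(json.load(f)['data'])

train['source'] = train['id'].str.split('_').str[0]
# expected roughly 80% of each source (14997 te1, 6166 vila), though the
# split is stratified by label rather than by source
print(train['source'].value_counts())
```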
teca.py DELETED
@@ -1,116 +0,0 @@
- # Loading script for the TECA dataset.
- import json
- import datasets
-
- logger = datasets.logging.get_logger(__name__)
-
- _CITATION = """
- @inproceedings{armengol-estape-etal-2021-multilingual,
-     title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
-     author = "Armengol-Estap{\'e}, Jordi and
-       Carrino, Casimiro Pio and
-       Rodriguez-Penagos, Carlos and
-       de Gibert Bonet, Ona and
-       Armentano-Oller, Carme and
-       Gonzalez-Agirre, Aitor and
-       Melero, Maite and
-       Villegas, Marta",
-     booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
-     month = aug,
-     year = "2021",
-     address = "Online",
-     publisher = "Association for Computational Linguistics",
-     url = "https://aclanthology.org/2021.findings-acl.437",
-     doi = "10.18653/v1/2021.findings-acl.437",
-     pages = "4933--4946",
- }
- """
-
- _DESCRIPTION = """
- TECA consists of two subsets of textual entailment in Catalan, *catalan_TE1* and *vilaweb_TE*, which contain 14997 and 6166 pairs of premises and hypotheses, annotated according to the inference relation they have (entailment, contradiction or neutral). This dataset was developed by BSC TeMU as part of the AINA project and is intended as part of the Catalan Language Understanding Benchmark (CLUB).
- """
-
- _HOMEPAGE = """https://zenodo.org/record/4621378"""
-
- # TODO: upload datasets to github
- _URL = "https://huggingface.co/datasets/projecte-aina/teca/resolve/main/"
- _TRAINING_FILE = "train.json"
- _DEV_FILE = "dev.json"
- _TEST_FILE = "test.json"
-
-
- class tecaConfig(datasets.BuilderConfig):
-     """Builder config for the TECA dataset."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for TECA.
-
-         Args:
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(tecaConfig, self).__init__(**kwargs)
-
-
- class teca(datasets.GeneratorBasedBuilder):
-     """TECA Dataset."""
-
-     BUILDER_CONFIGS = [
-         tecaConfig(
-             name="teca",
-             version=datasets.Version("1.0.1"),
-             description="teca dataset",
-         ),
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "premise": datasets.Value("string"),
-                     "hypothesis": datasets.Value("string"),
-                     "label": datasets.features.ClassLabel(
-                         names=["entailment", "neutral", "contradiction"]
-                     ),
-                 }
-             ),
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         urls_to_download = {
-             "train": f"{_URL}{_TRAINING_FILE}",
-             "dev": f"{_URL}{_DEV_FILE}",
-             "test": f"{_URL}{_TEST_FILE}",
-         }
-         downloaded_files = dl_manager.download_and_extract(urls_to_download)
-
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
-         ]
-
-     def _generate_examples(self, filepath):
-         """This function returns the examples in the raw (text) form."""
-         logger.info("generating examples from = %s", filepath)
-         with open(filepath, encoding="utf-8") as f:
-             data_dict = json.load(f)
-             for id_, article in enumerate(data_dict["data"]):
-                 original_id = article["id"]
-                 premise = article["premise"]
-                 hypothesis = article["hypothesis"]
-                 label = article["label"]
-                 yield id_, {
-                     "id": original_id,
-                     "premise": premise,
-                     "hypothesis": hypothesis,
-                     "label": label,
-                 }
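The `_info()` method declares `label` as a `ClassLabel`, so integer labels and label names are interchangeable. A small standalone sketch of that mapping, using only the public `datasets` API:

```python
# A sketch: the label feature declared in _info(), reproduced standalone.
import datasets

label = datasets.ClassLabel(names=["entailment", "neutral", "contradiction"])
print(label.str2int("contradiction"))  # 2
print(label.int2str(0))                # 'entailment'
```

Note also that `_split_generators` maps dev.json to `datasets.Split.VALIDATION`, which is why the converted file below is named teca-validation.parquet.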
test.json → teca/teca-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fe100977ffa0bf228cc0a032f26872374e031c928e0fa4692ddf617690afc83b
- size 509308
+ oid sha256:5e380e9b9a36483c878ad08fcfa40c8ac13aa2f4d266a60944d082b5c2435466
+ size 275652
train.json → teca/teca-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1977a676bb22fdada80241c01dd6a8a52313535be25c6f4ef387d25b8fa2829c
- size 4100267
+ oid sha256:7c301e743feb1981f924c7aaefbab041a9538979f07ede1352592c662a787fb2
+ size 2182124
dev.json → teca/teca-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c46b5888a4fd7eb14225dd0db7074e40f22d51c5832903b58f14c44d582072f7
- size 513528
+ oid sha256:e1438dd4dccde0bc94a70b11d19a410bf3175b04726fdd6b2258faa161a31356
+ size 277855
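The diffs above change only Git LFS pointer files (oid and size), not inline data; the Parquet payloads come out at roughly half the size of the original JSON. A hedged sketch of reading one converted split straight into pandas, assuming huggingface_hub's hf:// fsspec integration and the file path introduced by this commit:

```python
# A sketch: read a converted Parquet split directly with pandas.
# Assumes `pip install pandas pyarrow huggingface_hub` for hf:// support.
import pandas as pd

df = pd.read_parquet("hf://datasets/projecte-aina/teca/teca/teca-validation.parquet")
print(df.shape)  # expected (2116, 4): id, premise, hypothesis, label
```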