LuisVasquezBSC committed on
Commit
e2e53ad
·
verified ·
1 Parent(s): 6fcc445

Upload folder using huggingface_hub

Browse files
Files changed (6)
  1. OLD/README.md +156 -0
  2. OLD/WikiCAT_ca.py +89 -0
  3. OLD/dev_ca.json +0 -0
  4. OLD/train_ca.json +0 -0
  5. dev.json +0 -0
  6. train.json +0 -0
OLD/README.md ADDED
@@ -0,0 +1,156 @@
+ ---
+ annotations_creators:
+ - automatically-generated
+ language_creators:
+ - found
+ language:
+ - ca
+ license:
+ - cc-by-sa-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: wikicat_ca
+ size_categories:
+ - unknown
+ source_datasets: []
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ ---
+
+ # WikiCAT_ca: Catalan Text Classification Dataset
+
+ ## Dataset Description
+ - **Homepage:** [Projecte AINA](https://projecteaina.cat/tech/)
+ - **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
+ - **Point of Contact:** [email protected]
+
+ **Repository**
+
+ https://github.com/TeMU-BSC/WikiCAT
+
+ ### Dataset Summary
+
+ WikiCAT_ca is a Catalan corpus for thematic text classification tasks. It was created automatically from Wikipedia and Wikidata sources, and contains 13,201 articles from the Catalan Viquipèdia classified under 13 different categories.
+
+ This dataset was developed by BSC TeMU as part of the AINA project, and is intended as an evaluation of the capability of language technologies to generate useful synthetic corpora.
+
+ This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a> license.
+
+ ### Supported Tasks and Leaderboards
+
+ Text classification, language modelling.
+
+ ### Languages
+
+ The dataset is in Catalan (ca-ES).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Two JSON files, one for each split.
+
+ ### Data Fields
+
+ We use a simple schema with the article text and its associated label, without further metadata.
+
+ #### Example
+
+ <pre>
+ {"version": "1.1.0",
+ "data":
+ [
+ {
+ 'sentence': ' Celsius és conegut com l\'inventor de l\'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com "fred" col·locant-lo (...)',
+ 'label': 'Ciència'
+ },
+ ...
+ ]
+ }
+ </pre>
+
+ #### Labels
+
+ 'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment', 'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques', 'Música', 'Política', 'Religió'
+
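+ The `label` feature is a `datasets.ClassLabel`, so each category name maps to a stable integer id. A minimal sketch of that mapping (names taken from the loading script):
+
+ <pre>
+ from datasets import ClassLabel
+
+ labels = ClassLabel(names=[
+     'Ciència_i_Tecnologia', 'Dret', 'Economia', 'Enginyeria', 'Entreteniment',
+     'Esport', 'Filosofia', 'Història', 'Humanitats', 'Matemàtiques',
+     'Música', 'Política', 'Religió',
+ ])
+
+ print(labels.str2int('Esport'))  # 5
+ print(labels.int2str(0))         # 'Ciència_i_Tecnologia'
+ </pre>
+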
+ ### Data Splits
+
+ * dev_ca.json: 2,484 label-document pairs
+ * train_ca.json: 9,907 label-document pairs
+
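+ A minimal loading sketch with the `datasets` library (assuming the `projecte-aina/WikiCAT_ca` Hub id used in the loading script; recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based datasets):
+
+ <pre>
+ from datasets import load_dataset
+
+ # Downloads the train and validation splits defined in WikiCAT_ca.py.
+ ds = load_dataset("projecte-aina/WikiCAT_ca")
+
+ print(ds)                                   # split names and sizes
+ print(ds["train"][0]["text"][:100])         # start of one article
+ print(ds["train"].features["label"].names)  # the 13 category names
+ </pre>
+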
+
+ ## Dataset Creation
+
+ ### Methodology
+
+ Starting “Category” pages are chosen to represent the topics in each language.
+
+ For each category, we extract its main pages as well as its first-level subcategories and the individual pages under them.
+ For each page, the "summary" provided by Wikipedia is also extracted as the representative text, as sketched below.
+
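+ For illustration only, a minimal sketch of this kind of extraction against the public MediaWiki APIs (the category `Categoria:Ciència` is a hypothetical example; this is not the exact pipeline used to build the corpus):
+
+ <pre>
+ import requests
+
+ API = "https://ca.wikipedia.org/w/api.php"
+
+ def category_members(category):
+     """List the pages and subcategories directly under a category."""
+     params = {"action": "query", "list": "categorymembers",
+               "cmtitle": category, "cmlimit": "500", "format": "json"}
+     return requests.get(API, params=params).json()["query"]["categorymembers"]
+
+ def page_summary(title):
+     """Fetch the introductory extract ("summary") of a page."""
+     url = "https://ca.wikipedia.org/api/rest_v1/page/summary/" + title.replace(" ", "_")
+     return requests.get(url).json().get("extract", "")
+
+ for member in category_members("Categoria:Ciència"):
+     if member["ns"] == 0:  # namespace 0 = regular articles
+         print(member["title"], "->", page_summary(member["title"])[:80])
+ </pre>
+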
+ ### Curation Rationale
+
+ [N/A]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The source data are the thematic categories of the different-language Wikipedias.
+
+ #### Who are the source language producers?
+
+ [N/A]
+
+ ### Annotations
+
+ #### Annotation process
+
+ Automatic annotation.
+
+ #### Who are the annotators?
+
+ [N/A]
+
+ ### Personal and Sensitive Information
+
+ No personal or sensitive information is included.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
+
+ ### Discussion of Biases
+
+ We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
+
+ ### Other Known Limitations
+
+ [N/A]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
+
+ This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
+
+ ### Licensing Information
+
+ This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/">Attribution-ShareAlike 4.0 International</a> license.
+
+ ### Contributions
+
+ [N/A]
OLD/WikiCAT_ca.py ADDED
@@ -0,0 +1,89 @@
+ # Loading script for the WikiCAT_ca dataset.
+ import json
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """
+ """
+
+ _DESCRIPTION = """
+ WikiCAT_ca: Catalan text classification dataset built from the Viquipèdia.
+ """
+
+ _HOMEPAGE = """ """
+
+ # TODO: upload datasets to github
+ _URL = "https://huggingface.co/datasets/projecte-aina/WikiCAT_ca/raw/main/"
+ _TRAINING_FILE = "train_ca.json"
+ _DEV_FILE = "dev_ca.json"
+ # _TEST_FILE = "test.json"
+
+
+ class wikiCAT_caConfig(datasets.BuilderConfig):
+     """Builder config for the WikiCAT_ca dataset."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for WikiCAT_ca.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(wikiCAT_caConfig, self).__init__(**kwargs)
+
+
+ class wikiCAT_ca(datasets.GeneratorBasedBuilder):
+     """WikiCAT_ca dataset."""
+
+     BUILDER_CONFIGS = [
+         wikiCAT_caConfig(
+             name="wikiCAT_ca",
+             version=datasets.Version("1.1.0"),
+             description="wikiCAT_ca",
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(
+                         names=[
+                             "Ciència_i_Tecnologia", "Dret", "Economia", "Enginyeria",
+                             "Entreteniment", "Esport", "Filosofia", "Història",
+                             "Humanitats", "Matemàtiques", "Música", "Política", "Religió",
+                         ]
+                     ),
+                 }
+             ),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators for the train and validation splits."""
+         urls_to_download = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             # "test": f"{_URL}{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             # datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields (id, example) pairs read from a split's JSON file."""
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             wikicat_ca = json.load(f)
+         for id_, article in enumerate(wikicat_ca["data"]):
+             # The raw JSON stores the category under the "target" key,
+             # which is exposed as the "label" feature.
+             yield id_, {
+                 "text": article["text"],
+                 "label": article["target"],
+             }
OLD/dev_ca.json ADDED
The diff for this file is too large to render. See raw diff
 
OLD/train_ca.json ADDED
The diff for this file is too large to render. See raw diff
 
dev.json ADDED
The diff for this file is too large to render. See raw diff
 
train.json ADDED
The diff for this file is too large to render. See raw diff