parquet-converter committed on
Commit 0193477
1 Parent(s): d628ab3

Update parquet files

.gitattributes DELETED
@@ -1,42 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- *.csv filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,148 +0,0 @@
- ---
- annotations_creators:
- - found
- language:
- - id
- language_creators:
- - found
- license:
- - cc-by-4.0
- multilinguality:
- - monolingual
- pretty_name: Indonews
- size_categories:
- - 1K<n<10K
- source_datasets:
- - original
- tags:
- - news
- - news-classification
- - indonesia
- task_categories:
- - text-classification
- task_ids:
- - multi-class-classification
- ---
-
- # Indonesian News Categorization
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- Indonews: Multiclass News Categorization scraped from popular news portals in Indonesia.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed]
-
- ### Languages
-
- [More Information Needed]
-
- ## Dataset Structure
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "This dataset is built as a playground for beginner to make a use case for creating sentiment analysis model.\n", "citation": "", "homepage": "https://github.com/jakartaresearch", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "indonews", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7800430, "num_examples": 6127, "dataset_name": "indonews"}, "validation": {"name": "validation", "num_bytes": 3334930, "num_examples": 2627, "dataset_name": "indonews"}}, "download_checksums": {"https://media.githubusercontent.com/media/jakartaresearch/hf-datasets/main/indonews/indonews/train.csv": {"num_bytes": 7797798, "checksum": "73338405e1e5ba0138b530179ed9744e877a0a5b72231f227c8d9fa5d8e94a5b"}, "https://media.githubusercontent.com/media/jakartaresearch/hf-datasets/main/indonews/indonews/validation.csv": {"num_bytes": 3333629, "checksum": "f9d8109ab57a51a68f598f58f4294e8785aca71a6b3f2de619040f01a6494c60"}}, "download_size": 11131427, "post_processing_size": null, "dataset_size": 11135360, "size_in_bytes": 22266787}}
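The removed info file records the split sizes that the parquet files now carry. As a sketch, those totals can be recovered from the blob with plain `json`; the excerpt below is trimmed to the `splits` section and uses only values that appear in the file above:

```python
import json

# Trimmed excerpt of the "splits" section of the deleted dataset_infos.json.
info = json.loads("""
{"default": {"splits": {
  "train": {"num_examples": 6127, "num_bytes": 7800430},
  "validation": {"num_examples": 2627, "num_bytes": 3334930}}}}
""")

splits = info["default"]["splits"]
# Total examples across both splits: 6127 + 2627
total = sum(s["num_examples"] for s in splits.values())
```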
 
 
default/indonews-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61c73b66ae2b646ae5f735079e97df5a9c67eafa48b4ff4383adcf8c218fee4c
+ size 4556456
default/indonews-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb45dd02f88b665d85c47ef44a706902f076615299a3679e1d37ab141898daeb
+ size 1952530
indonews.py DELETED
@@ -1,75 +0,0 @@
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- # TODO: Address all TODOs and remove all explanatory comments
- """Indonews: Multiclass News Categorization scraped from popular news portals in Indonesia."""
-
-
- import csv
- import json
- import os
-
- import datasets
-
- _DESCRIPTION = """\
- This dataset is built as a playground for beginners to build a text-classification use case.
- """
-
- _HOMEPAGE = "https://github.com/jakartaresearch"
-
- # TODO: Add link to the official dataset URLs here
- # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
- # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _TRAIN_URL = "https://media.githubusercontent.com/media/jakartaresearch/hf-datasets/main/indonews/indonews/train.csv"
- _VAL_URL = "https://media.githubusercontent.com/media/jakartaresearch/hf-datasets/main/indonews/indonews/validation.csv"
-
-
- # TODO: Name of the dataset usually matches the script name, with CamelCase instead of snake_case
- class Indonews(datasets.GeneratorBasedBuilder):
-     """Indonews: Multiclass News Categorization scraped from popular news portals in Indonesia."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "text": datasets.Value("string"),
-                 "label": datasets.Value("string"),
-             }
-         )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-         )
-
-     def _split_generators(self, dl_manager):
-         train_path = dl_manager.download_and_extract(_TRAIN_URL)
-         val_path = dl_manager.download_and_extract(_VAL_URL)
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": val_path}),
-         ]
-
-     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
-     def _generate_examples(self, filepath):
-         """Generate examples."""
-         with open(filepath, encoding="utf-8") as csv_file:
-             csv_reader = csv.reader(csv_file, delimiter=",")
-             next(csv_reader)  # skip the header row
-             for id_, row in enumerate(csv_reader):
-                 text, label = row
-                 yield id_, {"text": text, "label": label}
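The `_generate_examples` loop above is the only logic the conversion discards: it reads each CSV row past the header into a `{"text", "label"}` record. A self-contained sketch of that loop on an in-memory CSV, useful for comparing CSV rows against the new parquet rows (`generate_examples` here is a hypothetical stand-alone version, not part of the original file):

```python
import csv
import io


def generate_examples(csv_text: str):
    """Replicate the deleted script's _generate_examples loop on an in-memory CSV."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=",")
    next(reader)  # skip the header row, as the script did
    for id_, (text, label) in enumerate(reader):
        yield id_, {"text": text, "label": label}
```

After this commit, none of that is needed at load time: the parquet files carry the same `text`/`label` records directly.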