parquet-converter committed
Commit 21c3089 · 1 parent: 4d0aa96

Update parquet files
.gitattributes DELETED
@@ -1,44 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- DBPEDIA_test.csv filter=lfs diff=lfs merge=lfs -text
- DBPEDIA_val.csv filter=lfs diff=lfs merge=lfs -text
- DBPEDIA_train.csv filter=lfs diff=lfs merge=lfs -text
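The deleted .gitattributes routed every matching file through the Git LFS filter (`filter=lfs diff=lfs merge=lfs -text`), by extension or, for the three DBPEDIA CSVs, by exact name. A minimal sketch of that routing logic, using Python's `fnmatch` as a rough stand-in for Git's pattern matching (it does not handle `**` patterns like `saved_model/**/*`, so this is an illustration, not a reimplementation):

```python
from fnmatch import fnmatch

# A subset of the patterns from the deleted .gitattributes; each carried
# "filter=lfs diff=lfs merge=lfs -text" and routed matches through Git LFS.
LFS_PATTERNS = [
    "*.parquet", "*.pkl", "*.zip", "*.wav",
    "DBPEDIA_test.csv", "DBPEDIA_val.csv", "DBPEDIA_train.csv",
]

def is_lfs_tracked(filename: str) -> bool:
    """Approximate check: does any LFS pattern match this file name?"""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(is_lfs_tracked("csv-test.parquet"))  # True
print(is_lfs_tracked("README.md"))         # False
```

With the attributes file deleted, new files in this repository no longer pick up the LFS filter automatically; the converted Parquet files above keep their existing LFS pointers.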
DBPEDIA_test.csv → DeveloperOats--DBPedia_Classes/csv-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8e26d4c21c691bf1d485215915d1f1a73fb9555ab944df79829d01d2887ed2c6
- size 40283390
+ oid sha256:545b1390b19883c7bdc6c3ebb1750b762b18d98b4fa975548c9f8455ff3d386e
+ size 25398301
DBPEDIA_train.csv → DeveloperOats--DBPedia_Classes/csv-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7de9a39d091339d24ba0104c5c9701fae9808002b68882d2e5a7d28072ce9ff8
- size 159631250
+ oid sha256:29fc42490bd429855ae36dcb78563a4e592fd111ac82c97684af3518dd35d401
+ size 100655720
DBPEDIA_val.csv → DeveloperOats--DBPedia_Classes/csv-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7855379e809a27938c8e010589420e52100fddfb9d33663c68dea873b91faaca
- size 23774046
+ oid sha256:8b8ffb6dd2de1c0aa04fa9e72d0d1b32b4ab223a527afae2f49440a5ae40a66d
+ size 14970053
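Each of the renames above swaps one Git LFS pointer for another: the checked-in file is a three-line text stub recording the spec version, the SHA-256 of the real blob, and its byte size. A minimal parser sketch (the function name is my own, not part of any LFS tooling), applied to the new `csv-test.parquet` pointer shown above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version / oid / size) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # size is a byte count
    return fields

# The new pointer for csv-test.parquet, exactly as shown in the diff above.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:545b1390b19883c7bdc6c3ebb1750b762b18d98b4fa975548c9f8455ff3d386e\n"
    "size 25398301\n"
)
print(pointer["size"])  # 25398301
```

Because only the pointer lives in Git history, each conversion here changes just these two fields; the old CSV blobs remain addressable by their former oids in LFS storage.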
README.md DELETED
@@ -1,43 +0,0 @@
- ---
- annotations_creators: []
- language:
- - en
- language_creators: []
- license:
- - cc0-1.0
- multilinguality:
- - monolingual
- pretty_name: 'DBpedia'
- size_categories:
- - 1M<n<10M
- source_datasets: []
- tags: []
- task_categories:
- - text-classification
- task_ids:
- - topic-classification
- ---
-
- About Dataset
-
- DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in Wikipedia.
- This is an extract of the data (after cleaning, kernel included) that provides taxonomic, hierarchical categories ("classes") for 342,782 Wikipedia articles. There are 3 levels, with 9, 70 and 219 classes respectively.
- A version of this dataset is a popular baseline for NLP/text classification tasks. This version of the dataset is much tougher, especially if the L2/L3 levels are used as the targets.
-
- This is an excellent benchmark for hierarchical multiclass/multilabel text classification.
- Some example approaches are included as code snippets.
- Content
-
- DBPedia dataset with multiple levels of hierarchy/classes, as a multiclass dataset.
- Original DBPedia ontology (triplets data): https://wiki.dbpedia.org/develop/datasets
- Listing of the class tree/taxonomy: http://mappings.dbpedia.org/server/ontology/classes/
- Acknowledgements
-
- Thanks to the Wikimedia foundation for creating Wikipedia, DBPedia and associated open-data goodness!
-
- Thanks to my colleagues at SparkBeyond (https://www.sparkbeyond.com) for pointing me towards the taxonomical version of this dataset (as opposed to the classic 14-class version).
- Inspiration
-
- Try different NLP models.
- See also https://www.kaggle.com/datasets/danofer/dbpedia-classes
- Compare to the SOTA in Text Classification on DBpedia - https://paperswithcode.com/sota/text-classification-on-dbpedia
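Taken together, the three renames replace the CSV splits with Parquet files under `DeveloperOats--DBPedia_Classes/`. A small sketch totalling the byte counts recorded in the LFS pointers of this commit, to show what the conversion saved:

```python
# Byte sizes from the LFS pointers in this commit: (CSV before, Parquet after).
SPLITS = {
    "train":      (159631250, 100655720),
    "validation": (23774046, 14970053),
    "test":       (40283390, 25398301),
}

csv_total = sum(old for old, _ in SPLITS.values())
parquet_total = sum(new for _, new in SPLITS.values())
saved = csv_total - parquet_total

for name, (old, new) in SPLITS.items():
    print(f"{name:>10}: {old:>12,} -> {new:>12,} bytes")
print(f"total saved: {saved:,} bytes ({saved / csv_total:.0%})")
```

The Parquet versions are roughly a third smaller than the CSVs, which is typical of Parquet's columnar encoding and compression on text-heavy tabular data.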