Tasks: Text Classification
Modalities: Text
Formats: csv
Sub-tasks: topic-classification
Languages: English
Size: 100K - 1M
License:
# About Dataset

DBpedia (from "DB" for "database") is a project that aims to extract structured content from the information created in Wikipedia.

This dataset is an extract of that data (after cleaning; a kernel is included) providing taxonomic, hierarchical categories ("classes") for 342,782 Wikipedia articles. There are three levels, with 9, 70, and 219 classes respectively.

A version of this dataset is a popular baseline for NLP/text-classification tasks. This version is considerably harder, especially if the L2/L3 levels are used as the targets.

This makes it an excellent benchmark for hierarchical multiclass/multilabel text classification. Some example approaches are included as code snippets below.
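As a starting point, here is a minimal flat baseline using scikit-learn: TF-IDF features plus logistic regression on the level-1 labels. The file and column names (`DBPEDIA_train.csv`, `DBPEDIA_val.csv` with columns `text`, `l1`, `l2`, `l3`) are assumptions for illustration; adjust them to the CSVs actually shipped with this dataset.

```python
# Flat baseline sketch: TF-IDF + logistic regression on the level-1 ("l1") labels.
# File and column names below are assumed; adapt them to the actual csv files.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

train = pd.read_csv("DBPEDIA_train.csv")
val = pd.read_csv("DBPEDIA_val.csv")

model = make_pipeline(
    TfidfVectorizer(max_features=100_000, ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(train["text"], train["l1"])

pred = model.predict(val["text"])
print("L1 validation accuracy:", accuracy_score(val["l1"], pred))
```

Swapping the target column from `l1` to `l2` or `l3` in the same pipeline is an easy way to see how much harder the finer-grained levels are.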
## Content

DBPedia dataset with multiple levels of hierarchy/classes, as a multiclass dataset.

Original DBPedia ontology (triplets data): https://wiki.dbpedia.org/develop/datasets

Listing of the class tree/taxonomy: http://mappings.dbpedia.org/server/ontology/classes/
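A quick way to sanity-check the hierarchy described above is to count the distinct classes at each level. As in the sketch above, the file and column names are assumptions; adjust them to the CSVs in this repository.

```python
# Count distinct classes per level; the counts should be close to the 9 / 70 / 219
# noted above (rare classes may be missing from a single split).
# File and column names ("DBPEDIA_train.csv"; text, l1, l2, l3) are assumed here.
import pandas as pd

df = pd.read_csv("DBPEDIA_train.csv")
for level in ["l1", "l2", "l3"]:
    print(level, df[level].nunique(), "classes")

# Peek at one branch of the tree: the level-2 classes under the most frequent level-1 class.
top = df["l1"].value_counts().index[0]
print(top, "->", sorted(df.loc[df["l1"] == top, "l2"].unique())[:10])
```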
## Acknowledgements

Thanks to the Wikimedia Foundation for creating Wikipedia, DBPedia and the associated open-data goodness!

Thanks to my colleagues at SparkBeyond (https://www.sparkbeyond.com) for pointing me towards the taxonomic version of this dataset (as opposed to the classic 14-class version).
## Inspiration

Try different NLP models; a quick zero-shot sketch is included below.

Compare to the SOTA in Text Classification on DBpedia: https://paperswithcode.com/sota/text-classification-on-dbpedia
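For example, a pretrained zero-shot classifier gives predictions over the level-1 classes without any training on this dataset. The example sentence and the hard-coded label list below are illustrative only; in practice, read the actual L1 label set from the csv.

```python
# Zero-shot sketch with a pretrained NLI model (no fine-tuning on this dataset).
# The candidate labels are an illustrative subset; use the real L1 labels from the csv.
from transformers import pipeline

candidate_labels = ["Agent", "Place", "Species", "Work", "Event"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The Amazon River is the largest river by discharge volume of water in the world.",
    candidate_labels=candidate_labels,
)
print(result["labels"][0], round(result["scores"][0], 3))
```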