Datasets: yangwang825
yangwang825 committed: Update README.md

README.md CHANGED
@@ -23,16 +23,16 @@ dataset_info:
           '9': rock
   splits:
   - name: train
-    num_bytes: 586664927
+    num_bytes: 586664927
     num_examples: 443
   - name: validation
-    num_bytes: 260793810
+    num_bytes: 260793810
     num_examples: 197
   - name: test
-    num_bytes: 383984112
+    num_bytes: 383984112
     num_examples: 290
   download_size: 1230811404
-  dataset_size: 1231442849
+  dataset_size: 1231442849
 configs:
 - config_name: default
   data_files:
@@ -42,4 +42,57 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+task_categories:
+- audio-classification
+tags:
+- audio
+- multiclass
+- music
 ---
+
+# GTZAN Music Genre Classification
+
+GTZAN consists of 100 30-second recording excerpts in each of 10 genre categories and is the most widely used public dataset in music information retrieval (MIR) research.
+Following Kereliuk et al. (2015), we use the "fault-filtered" partitioning of GTZAN, which was constructed by hand and contains 443/197/290 excerpts in the train/validation/test splits.
+This version of the database can be downloaded from [here](https://www.kaggle.com/datasets/carlthome/gtzan-genre-collection).
+
+## Citations
+
+```bibtex
+@article{kereliuk2015deep,
+  title={Deep learning and music adversaries},
+  author={Kereliuk, Corey and Sturm, Bob L and Larsen, Jan},
+  journal={IEEE Transactions on Multimedia},
+  volume={17},
+  number={11},
+  pages={2059--2071},
+  year={2015},
+  publisher={IEEE}
+}
+```
+
+```bibtex
+@article{sturm2014state,
+  title={The state of the art ten years after a state of the art: Future research in music information retrieval},
+  author={Sturm, Bob L},
+  journal={Journal of New Music Research},
+  volume={43},
+  number={2},
+  pages={147--172},
+  year={2014},
+  publisher={Taylor \& Francis}
+}
+```
+
+```bibtex
+@article{tzanetakis2002musical,
+  title={Musical genre classification of audio signals},
+  author={Tzanetakis, George and Cook, Perry},
+  journal={IEEE Transactions on Speech and Audio Processing},
+  volume={10},
+  number={5},
+  pages={293--302},
+  year={2002},
+  publisher={IEEE}
+}
+```