---
license: mit
datasets:
- ccmusic-database/erhu_playing_tech
language:
- en
metrics:
- accuracy
pipeline_tag: audio-classification
tags:
- music
- art
---
# Intro
The Erhu Performance Technique Recognition Model is a deep-learning audio analysis tool that automatically identifies the techniques used in erhu performance. By analyzing the acoustic characteristics of erhu recordings, the model recognizes 11 basic playing techniques: split bow, pad bow, overtone, continuous bow, glissando, big glissando, strike bow, pizzicato, throw bow, staccato bow, and vibrato. Through time-frequency conversion, feature extraction, and pattern recognition, it can accurately categorize the complex techniques of erhu playing, providing efficient technical support for music information retrieval, music education, and research on the art of erhu performance. Beyond enriching research in music acoustics, the model also opens a new path for the preservation and innovation of traditional music.
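The time-frequency conversion step mentioned above typically turns each recording into a log-mel spectrogram before classification. As an illustration only (the parameter values here, such as `n_fft=1024` and `n_mels=128`, are assumptions for the sketch, not the model's documented settings), a minimal NumPy version looks like this:

```python
import numpy as np

def hz_to_mel(f):
    # Standard log-compression mel formula
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(y, sr=22050, n_fft=1024, hop=512, n_mels=128):
    # Frame the signal, apply a Hann window, take the power spectrum,
    # then project onto the mel filterbank and convert to decibels
    n_frames = 1 + (len(y) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack(
        [y[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return 10.0 * np.log10(np.maximum(mel, 1e-10))

# Example: a 1-second synthetic tone standing in for an erhu recording
sr = 22050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * 440.0 * t)
S = log_mel_spectrogram(y, sr=sr)
print(S.shape)  # (frames, mel bands): (42, 128)
```

In practice a library such as librosa is usually used for this step; the resulting spectrogram can then be treated as an image input for the Swin-T classifier shown in the Results section.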
## Demo
<https://huggingface.co/spaces/ccmusic-database/erhu-playing-tech>
## Usage
```python
from modelscope import snapshot_download

# Download the model snapshot from ModelScope to a local directory
model_dir = snapshot_download('ccmusic-database/erhu_playing_tech')
```
## Maintenance
```bash
git clone [email protected]:ccmusic-database/erhu_playing_tech
cd erhu_playing_tech
```
## Results
A demo result of fine-tuning Swin Transformer (Swin-T) on mel spectrograms:
<style>
#erhu td {
vertical-align: middle !important;
text-align: center;
}
#erhu th {
text-align: center;
}
</style>
<table id="erhu">
<tr>
<th>Loss curve</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/erhu_playing_tech/resolve/master/swin_t_mel_2024-07-29_01-14-31/loss.jpg"></td>
</tr>
<tr>
<th>Training and validation accuracy</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/erhu_playing_tech/resolve/master/swin_t_mel_2024-07-29_01-14-31/acc.jpg"></td>
</tr>
<tr>
<th>Confusion matrix</th>
<td><img src="https://www.modelscope.cn/models/ccmusic-database/erhu_playing_tech/resolve/master/swin_t_mel_2024-07-29_01-14-31/mat.jpg"></td>
</tr>
</table>
## Dataset
<https://huggingface.co/datasets/ccmusic-database/erhu_playing_tech>
## Mirror
<https://www.modelscope.cn/models/ccmusic-database/erhu_playing_tech>
## Evaluation
<https://github.com/monetjoe/ccmusic_eval>
## Cite
```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author       = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
month = {mar},
year = {2024},
publisher = {HuggingFace},
version = {1.2},
url = {https://huggingface.co/ccmusic-database}
}
```