Update README.md
README.md (changed)
@@ -56,11 +56,15 @@ emotion2vec+ is a series of foundational models for speech emotion recognition (
 ![](emotion2vec+radar.png)
 
 This version (emotion2vec_plus_seed) is a seed model trained on academic data, and currently supports the following categories:
-
-
-
-
-
+0: angry
+1: disgusted
+2: fearful
+3: happy
+4: neutral
+5: other
+6: sad
+7: surprised
+8: unknown
 
 # Model Card
 GitHub Repo: [emotion2vec](https://github.com/ddlBoJack/emotion2vec)

@@ -76,14 +80,12 @@ emotion2vec+ large|[Link](https://modelscope.cn/models/iic/emotion2vec_plus_larg
 
 We offer 3 versions of emotion2vec+, each derived from the data of its predecessor. If you need a model focusing on speech emotion representation, refer to [emotion2vec: universal speech emotion representation model](https://huggingface.co/emotion2vec/emotion2vec).
 
-- emotion2vec+ seed: Fine-tuned with academic speech emotion data
+- emotion2vec+ seed: Fine-tuned with academic speech emotion data from [EmoBox](https://github.com/emo-box/EmoBox)
 - emotion2vec+ base: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the base size model (~90M)
 - emotion2vec+ large: Fine-tuned with filtered large-scale pseudo-labeled data to obtain the large size model (~300M)
 
 The iteration process is illustrated below, culminating in the training of the emotion2vec+ large model with 40k out of 160k hours of speech emotion data. Details of data engineering will be announced later.
 
-![](emotion2vec+data.png)
-
 # Installation
 
 `pip install -U funasr modelscope`
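For context, here is a minimal inference sketch showing how the installed packages and the nine labels added above are typically used together. It assumes FunASR's `AutoModel` interface and the ModelScope model id `iic/emotion2vec_plus_seed`; the `generate()` keyword arguments, the placeholder `test.wav` path, and the exact output layout are assumptions that may differ between FunASR releases, so treat this as a sketch rather than the model card's official snippet.

```python
# Minimal inference sketch (assumptions: FunASR's AutoModel API and the
# ModelScope id "iic/emotion2vec_plus_seed"; check the FunASR docs for the
# exact signature in your installed version).
from funasr import AutoModel

# Load the seed model; weights are fetched from ModelScope on first use.
model = AutoModel(model="iic/emotion2vec_plus_seed")

wav_file = "test.wav"  # placeholder: a 16 kHz mono speech clip
res = model.generate(
    wav_file,
    granularity="utterance",   # one prediction per utterance
    extract_embedding=False,   # return class scores, skip the embeddings
)

# res is expected to be a list with one dict per input, holding parallel
# "labels" and "scores" lists over the 9 categories (angry ... unknown).
labels, scores = res[0]["labels"], res[0]["scores"]
top = max(zip(labels, scores), key=lambda pair: pair[1])
print(f"predicted emotion: {top[0]} (score {top[1]:.3f})")
```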