---
license: apache-2.0
---

# C<sup>2</sup>SER-LLM

For more information, please refer to the GitHub repository: [C2SER](https://github.com/zxzhao0/C2SER).

## Introduction

As presented in our paper "Steering Language Model to Stable Speech Emotion Recognition via Contextual Perception and Chain of Thought", C<sup>2</sup>SER employs a CoT training approach to incentivize reasoning capability. This approach decomposes the SER task into sequential steps: first perceiving the speech content and speaking style, then inferring the emotion with the assistance of prior context. This structured method imitates human thinking and reduces the possibility of hallucinations. To further enhance stability and prevent error propagation, especially in longer thought chains, C<sup>2</sup>SER introduces self-distillation, transferring knowledge from the explicit CoT to an implicit CoT.
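
The sketch below illustrates the two ideas from the paragraph above. The prompt wording, the `self_distillation_loss` helper, and the `alpha`/`tau` hyperparameters are illustrative assumptions, not the exact recipe from the paper or repository; they only show how an explicit-CoT teacher distribution can be distilled into an implicit-CoT student alongside the usual emotion-classification loss.

```python
# Illustrative sketch only: prompts, names, and hyperparameters are placeholders,
# not the official C2SER interface (see the GitHub repository for that).
import torch
import torch.nn.functional as F

# Explicit CoT: the SER task is decomposed into sequential perception steps
# before the emotion is inferred, with prior context available as a condition.
EXPLICIT_COT_PROMPT = (
    "Context: {context}\n"
    "Step 1 - Transcribe the speech content.\n"
    "Step 2 - Describe the speaking style (e.g., pitch, speed, energy).\n"
    "Step 3 - Infer the speaker's emotion from the content, style, and context.\n"
)

# Implicit CoT: the student answers directly, without writing out the chain.
IMPLICIT_PROMPT = "Context: {context}\nWhat is the speaker's emotion?\n"


def self_distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=2.0):
    """Blend cross-entropy on emotion labels with a KL term that pulls the
    implicit-CoT student toward the explicit-CoT teacher's softened distribution."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    return alpha * ce + (1.0 - alpha) * kl


if __name__ == "__main__":
    # Toy usage with random logits over a 5-way emotion set (shapes only).
    student = torch.randn(4, 5)
    teacher = torch.randn(4, 5)
    labels = torch.tensor([0, 3, 1, 4])
    print(self_distillation_loss(student, teacher, labels))
```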