---
license: apache-2.0
pipeline_tag: audio-text-to-text
---
## 1. Step-Audio-Chat
This repository contains the Multimodal Large Language Model (LLM) component of Step-Audio. It is a 130-billion-parameter multimodal LLM responsible for understanding and generating human speech. The model is designed to seamlessly integrate functions such as speech recognition, semantic understanding, dialogue management, voice cloning, and speech generation.
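The snippet below is only an illustrative sketch of text-side usage, under the assumption that the checkpoint exposes a Transformers-compatible interface via `trust_remote_code`; the model ID, dtype, and generation settings are assumptions, and the authoritative speech-in/speech-out inference pipeline is the one provided in the Step-Audio repository referenced at the end of this card.

```python
# Illustrative sketch only -- the authoritative inference pipeline lives in the
# Step-Audio repository. This assumes (does not confirm) that the checkpoint
# can be loaded via Hugging Face Transformers with trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stepfun-ai/Step-Audio-Chat"  # assumed Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 130B-parameter model requires multiple GPUs
)

# Text-only round trip; real speech-in/speech-out use pairs this LLM with the
# separate audio tokenizer and TTS components of Step-Audio.
inputs = tokenizer("说一个绕口令", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```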
## 2. Examples
### Clone audio

| role | prompt wav | clone wav |
|---|---|---|
| 于谦 (Yu Qian) | (audio) | (audio) |
| 李雪琴 (Li Xueqin) | (audio) | (audio) |
### Speed control

| prompt | response |
|---|---|
| Human: 说一个绕口令 (Say a tongue twister)<br>Assistant: 吃葡萄不吐葡萄皮,不吃葡萄倒吐葡萄皮 (Eat grapes without spitting out the skins; don't eat grapes yet spit out the skins)<br>Human: 哎,你能把这个绕口令说的再快一点吗? (Hey, can you say this tongue twister a bit faster?) | (audio) |
| Human: 说一个绕口令 (Say a tongue twister)<br>Assistant: 吃葡萄不吐葡萄皮,不吃葡萄倒吐葡萄皮 (Eat grapes without spitting out the skins; don't eat grapes yet spit out the skins)<br>Human: 哎,你能把这个绕口令说的再快一点吗? (Hey, can you say this tongue twister a bit faster?)<br>Assistant: 吃葡萄不吐葡萄皮,不吃葡萄倒吐葡萄皮<br>Human: 呃,你再用非常非常慢的速度说一遍的。(Uh, now say it again very, very slowly.) | (audio) |
### High EQ (emotional control & tone control)

| prompt | response |
|---|---|
| Human: 你这语气又不撒娇又不卖萌的,要不你撒个娇卖个萌吧。 (Your tone is neither coy nor cute. How about acting a bit coy and cute?) | (audio) |
| Human: 怎么办?我感觉我的人生很失败。 (What should I do? I feel like my life is a failure.) | (audio) |
| Human: 小跃。你真的是。特别厉害。 (Xiao Yue, you really are amazing.) | (audio) |
### Multilingual (e.g., Chinese, English, Japanese)

| prompt | response |
|---|---|
| Human: What did the speaker mean when they said, it's raining cats and dogs?<br>Assistant: When they say "It's raining cats and dogs," it just means it's raining really hard. The speaker isn't literally saying cats and dogs are falling from the sky! It's just a fun way to describe heavy rain. | (audio) |
| Human: こんにちは。(Hello.)<br>Assistant: こんにちは!何か手伝いましょうか? (Hello! Is there anything I can help you with?) | (audio) |
### Rap & Vocal

| prompt | response |
|---|---|
| Human: 唱一段rap (Sing a rap) | (audio) |
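The multi-turn examples above (e.g. the speed-control dialogue) follow an ordinary user/assistant turn structure. As a purely hypothetical sketch of how such a dialogue could be represented programmatically, note that the actual chat template and special tokens are defined by the Step-Audio inference code, not by this example:

```python
# Hypothetical message layout for the speed-control dialogue shown above.
# The real chat template / special tokens are defined by the Step-Audio
# inference code; this only illustrates the turn structure.
messages = [
    {"role": "user", "content": "说一个绕口令"},  # "Say a tongue twister"
    {"role": "assistant", "content": "吃葡萄不吐葡萄皮,不吃葡萄倒吐葡萄皮"},
    {"role": "user", "content": "哎,你能把这个绕口令说的再快一点吗?"},  # "Say it faster"
]

# If the tokenizer ships a chat template, the prompt could be built with:
# prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
```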
## 3. Evaluation

### 3.1 LLM judge metrics (GPT-4o) on StepEval-Audio-360

| Model | Factuality (% ↑) | Relevance (% ↑) | Chat Score ↑ |
|---|---|---|---|
| GLM4-Voice | 54.7 | 66.4 | 3.49 |
| Qwen2-Audio | 22.6 | 26.3 | 2.27 |
| Moshi* | 1.0 | 0 | 1.49 |
| Step-Audio-Chat | 66.4 | 75.2 | 4.11 |
*Note: Moshi is marked with "*" and its results should be considered for reference only.
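Factuality and Relevance are reported as percentages and Chat Score as an averaged rating. Purely as an illustration of how such judge outputs could be aggregated, the sketch below uses invented field names; the actual StepEval-Audio-360 judging prompts and rubric are defined by the Step-Audio authors.

```python
# Illustration only: aggregating hypothetical per-sample GPT-4o judge outputs
# into percentage metrics and a mean chat score. The real StepEval-Audio-360
# protocol is defined by the Step-Audio authors.
from statistics import mean

# Hypothetical per-sample judgments: binary factuality/relevance flags
# plus a 1-5 chat-quality rating.
judgments = [
    {"factual": True,  "relevant": True,  "chat_score": 4},
    {"factual": False, "relevant": True,  "chat_score": 3},
    {"factual": True,  "relevant": False, "chat_score": 2},
]

factuality = 100 * mean(j["factual"] for j in judgments)   # % judged factual
relevance = 100 * mean(j["relevant"] for j in judgments)   # % judged relevant
chat_score = mean(j["chat_score"] for j in judgments)      # average 1-5 rating

print(f"Factuality {factuality:.1f}%  Relevance {relevance:.1f}%  Chat Score {chat_score:.2f}")
```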
### 3.2 Public Test Set

| Model | Llama Question | Web Questions | TriviaQA* | ComplexBench | HSK-6 |
|---|---|---|---|---|---|
| GLM4-Voice | 64.7 | 32.2 | 39.1 | 66.0 | 74.0 |
| Moshi | 62.3 | 26.6 | 22.8 | - | - |
| Freeze-Omni | 72.0 | 44.7 | 53.9 | - | - |
| LUCY | 59.7 | 29.3 | 27.0 | - | - |
| MinMo | 78.9 | 55.0 | 48.3 | - | - |
| Qwen2-Audio | 52.0 | 27.0 | 37.3 | 54.0 | - |
| Step-Audio-Chat | 81.0 | 75.1 | 58.0 | 74.0 | 86.0 |
*Note: Results on the TriviaQA dataset, marked with "*", should be considered for reference only.
### 3.3 Audio instruction following

| Category | Instruction Following (GLM-4-Voice) | Instruction Following (Step-Audio) | Audio Quality (GLM-4-Voice) | Audio Quality (Step-Audio) |
|---|---|---|---|---|
| Languages | 1.9 | 3.8 | 2.9 | 3.3 |
| Role-playing | 3.8 | 4.2 | 3.2 | 3.6 |
| Singing / RAP | 2.1 | 2.4 | 2.4 | 4.0 |
| Voice Control | 3.6 | 4.4 | 3.3 | 4.1 |
## 4. More information
For more information, please refer to our repository: Step-Audio.