koichisaito committed
Commit: 5647581
Parent(s): 5ac2da8
Update README.md

README.md CHANGED
@@ -8,16 +8,13 @@ pipeline_tag: text-to-audio
 # SoundCTM
 This repository is for the official checkpoint of ["SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation"](https://arxiv.org/abs/2405.18503)
 
-**
+**PDF:** https://arxiv.org/pdf/2405.18503
 
-Training and inference codes are available [here](https://github.com/sony/soundctm)
+**Codebase:** Training and inference codes are available [here](https://github.com/sony/soundctm)
 
-**Audio Demo Samples
+**Audio Demo Samples:** Audio samples are available [here](https://koichi-saito-sony.github.io/soundctm/).
 
-
-
-**Abstract**
-Sound content is an indispensable element for multimedia works such as video games, music, and films. Recent high-quality diffusion-based sound generation models can serve as valuable tools for the creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitioning between high-quality 1-step sound generation and superior sound quality through multi-step generation. This allows creators to initially control sounds with 1-step samples before refining them through multi-step generation. While CTM fundamentally achieves flexible 1-step and multi-step generation, its impressive performance heavily depends on an additional pretrained feature extractor and an adversarial loss, which are expensive to train and not always available in other domains. Thus, we reframe CTM's training framework and introduce a novel feature distance by utilizing the teacher's network for a distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. We also propose training-free controllable frameworks for SoundCTM, leveraging its flexible sampling capability. SoundCTM achieves both promising 1-step and multi-step real-time sound generation without using any extra off-the-shelf networks. Furthermore, we demonstrate SoundCTM's capability of controllable sound generation in a training-free manner.
+**Abstract:** Sound content is an indispensable element for multimedia works such as video games, music, and films. Recent high-quality diffusion-based sound generation models can serve as valuable tools for the creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitioning between high-quality 1-step sound generation and superior sound quality through multi-step generation. This allows creators to initially control sounds with 1-step samples before refining them through multi-step generation. While CTM fundamentally achieves flexible 1-step and multi-step generation, its impressive performance heavily depends on an additional pretrained feature extractor and an adversarial loss, which are expensive to train and not always available in other domains. Thus, we reframe CTM's training framework and introduce a novel feature distance by utilizing the teacher's network for a distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. We also propose training-free controllable frameworks for SoundCTM, leveraging its flexible sampling capability. SoundCTM achieves both promising 1-step and multi-step real-time sound generation without using any extra off-the-shelf networks. Furthermore, we demonstrate SoundCTM's capability of controllable sound generation in a training-free manner.
 
 **Citation details:**
 ```
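
The abstract mentions that conditional and unconditional student models are trained simultaneously and interpolated at inference. Below is a minimal sketch of what such an interpolation could look like, assuming a simple weighted combination of the two students' outputs; every name (`student_cond`, `student_uncond`, `nu`) and the exact weighting form are illustrative assumptions, not the repository's actual implementation. See the linked codebase for the real inference code.

```python
import torch

# Hypothetical illustration only: the function and argument names here are
# assumptions for this sketch, not the soundctm repository's API.
def interpolate_students(student_cond, student_uncond, x_t, t, s, text_emb, nu=1.5):
    """Blend text-conditional and unconditional student predictions at inference."""
    out_uncond = student_uncond(x_t, t, s)         # unconditional jump from time t to s
    out_cond = student_cond(x_t, t, s, text_emb)   # text-conditional jump from time t to s
    # Weighted combination of the two students; nu > 1 extrapolates past the
    # conditional prediction, similar in spirit to classifier-free guidance.
    return nu * out_cond + (1.0 - nu) * out_uncond

# Toy usage with stand-in callables; real student networks come from the linked codebase.
x_t = torch.randn(1, 8, 256)
cond = lambda x, t, s, emb: 0.9 * x
uncond = lambda x, t, s: 0.8 * x
sample = interpolate_students(cond, uncond, x_t, t=1.0, s=0.0, text_emb=None)
```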