Update model card for Hear-Your-Click
#1
by nielsr (HF Staff) · opened
README.md
CHANGED
@@ -1,30 +1,95 @@
---
license: mit
pipeline_tag: video-to-audio
---

# Hear-Your-Click: Interactive Object-Specific Video-to-Audio Generation

This repository contains the official model for **Hear-Your-Click**, an interactive framework for object-specific video-to-audio (V2A) generation. It lets users generate sound for a specific object in a video simply by clicking on it in a frame, addressing the limitations of methods that rely only on global video information in complex scenes.

**[📚 Paper](https://huggingface.co/papers/2507.04959)** | **[💻 GitHub Repository](https://github.com/SynapGrid/Hear-Your-Click-2024)**

<p align="center">
  <img src="https://github.com/user-attachments/assets/2ca49ab5-80ca-42c4-b9a5-9dc7959ac358">
</p>

## About Hear-Your-Click

Hear-Your-Click introduces several key innovations to improve V2A generation:
- **Object-aware Contrastive Audio-Visual Fine-tuning (OCAV)** with a **Mask-guided Visual Encoder (MVE)**, which extracts object-level visual features aligned with audio.
- Two tailored data augmentation strategies, **Random Video Stitching (RVS)** and **Mask-guided Loudness Modulation (MLM)**, which sharpen the model's sensitivity to segmented objects (a sketch of the MLM idea follows this list).
- A new evaluation metric, the **CAV score**, designed to measure audio-visual correspondence more accurately.
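
For intuition, here is a minimal, hypothetical sketch of the MLM idea: scale the paired audio's loudness by how much of the frame the selected object's mask covers, so audio energy correlates with the object's visual prominence. Function and argument names are illustrative, not the repo's actual API.

```python
import numpy as np

def mask_guided_loudness_modulation(
    waveform: np.ndarray,   # mono audio, shape (num_samples,)
    masks: np.ndarray,      # per-frame binary object masks, shape (T, H, W)
    min_gain: float = 0.3,  # floor so audio is attenuated, never silenced
) -> np.ndarray:
    """Scale audio loudness by the object's average mask coverage."""
    # Average fraction of pixels the object occupies across all frames, in [0, 1].
    coverage = float(masks.mean())
    # Map coverage to a gain in [min_gain, 1] and apply it to the waveform.
    gain = min_gain + (1.0 - min_gain) * coverage
    return waveform * gain
```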

This framework offers more precise control and significantly improves generation performance across various metrics.

## Installation

To set up the Hear-Your-Click environment, follow these steps:

1. **Clone the repository**:
   ```bash
   git clone https://github.com/SynapGrid/Hear-Your-Click-2024.git
   cd Hear-Your-Click-2024
   ```

2. **(Optional) Create a Conda environment**:
   ```bash
   conda create -n hyc python=3.9.11
   conda activate hyc
   ```

3. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```
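
   Optionally, sanity-check the environment. This assumes `requirements.txt` pulls in PyTorch (the card does not enumerate its contents) and that you will run inference on GPU, as the command further below does:
   ```python
   # Confirm PyTorch can see your CUDA devices before launching the GPU demo.
   import torch

   print(torch.__version__)
   print(torch.cuda.is_available(), torch.cuda.device_count())
   ```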

## Model Checkpoints

1. **Download the model weights** and place them in `./hyc_inference/inference/ckpt/`:
   * [epoch=000059.ckpt](https://drive.google.com/file/d/1QX24gEmN-cG03NlO0zT1geK1eUgOqDtk/view?usp=drive_link)
   * [epoch_10.pt](https://drive.google.com/file/d/15tbqXR-99QNg-Il6wxPD66q4EM4UkVvJ/view?usp=drive_link)
   * [eval_classifier.ckpt](https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/eval_classifier.ckpt)
   * [double_guidance_classifier.ckpt](https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/double_guidance_classifier.ckpt)

   You can use `gdown` and `wget` for convenient downloading:
   ```bash
   pip install gdown

   cd ./hyc_inference/inference/ckpt

   gdown https://drive.google.com/uc?id=1QX24gEmN-cG03NlO0zT1geK1eUgOqDtk
   gdown https://drive.google.com/uc?id=15tbqXR-99QNg-Il6wxPD66q4EM4UkVvJ

   wget https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/eval_classifier.ckpt
   wget https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/double_guidance_classifier.ckpt
   ```
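
   For the two Hugging Face-hosted classifiers, `huggingface_hub` is an alternative to `wget`. Note that `local_dir` preserves the repo's `diff_foley_ckpt/` subfolder, so move the files up into `ckpt/` afterwards if the loader expects them there (an assumption based on the layout above):
   ```python
   from huggingface_hub import hf_hub_download

   for fname in ("eval_classifier.ckpt", "double_guidance_classifier.ckpt"):
       # Files land under ./hyc_inference/inference/ckpt/diff_foley_ckpt/
       hf_hub_download(
           repo_id="SimianLuo/Diff-Foley",
           filename=f"diff_foley_ckpt/{fname}",
           local_dir="./hyc_inference/inference/ckpt",
       )
   ```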

2. **Download additional model weights** and place them in `./checkpoints`:
   * [clap_clip.pt](https://github.com/MCR-PEFT/C-MCR/blob/main/checkpoints/clap_clip.pt)
   * [laion_clap_fullset_fusion.pt](https://huggingface.co/lukewys/laion_clap/blob/main/630k-fusion-best.pt) (the linked file is named `630k-fusion-best.pt`)
   * [clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) (a programmatic fetch is sketched below)
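
   For the CLIP backbone, mirroring the whole repo locally is one option; that the code expects this exact directory layout is an assumption, so adjust `local_dir` if it resolves the model by hub name instead:
   ```python
   from huggingface_hub import snapshot_download

   # Download every file of openai/clip-vit-base-patch32 into ./checkpoints/.
   snapshot_download(
       repo_id="openai/clip-vit-base-patch32",
       local_dir="./checkpoints/clip-vit-base-patch32",
   )
   ```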

## Inference Command

Launch the inference demo with the following command, where `--sam_model_type vit_b` selects the smallest Segment Anything backbone, presumably used to turn your clicks into object masks:

```bash
python app.py --device cuda:0,1 --sam_model_type vit_b
```
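
If startup fails, a common cause is a checkpoint in the wrong place. A quick check, assuming the layout described above (the `./checkpoints` filenames are inferred from the link names and may differ):

```python
from pathlib import Path

# Expected checkpoint locations per the download steps above.
expected = [
    "hyc_inference/inference/ckpt/epoch=000059.ckpt",
    "hyc_inference/inference/ckpt/epoch_10.pt",
    "hyc_inference/inference/ckpt/eval_classifier.ckpt",
    "hyc_inference/inference/ckpt/double_guidance_classifier.ckpt",
    "checkpoints/clap_clip.pt",
    "checkpoints/laion_clap_fullset_fusion.pt",
]
missing = [p for p in expected if not Path(p).exists()]
print("all checkpoints present" if not missing else f"missing: {missing}")
```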

## Citation

If you find this work useful for your research or applications, please cite our paper:

```bibtex
@misc{liang2025hearyourclickinteractivevideotoaudiogeneration,
  title={Hear-Your-Click: Interactive Video-to-Audio Generation via Object-aware Contrastive Audio-Visual Fine-tuning},
  author={Yingshan Liang and Keyu Fan and Zhicheng Du and Yiran Wang and Qingyang Shi and Xinyu Zhang and Jiasheng Lu and Peiwu Qin},
  year={2025},
  eprint={2507.04959},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.04959},
}
```