seawolf2357 committed
Commit 5692579 (verified)
1 Parent(s): 88c8bd6

Update README.md

Files changed (1)
  1. README.md +1 -110
README.md CHANGED
@@ -1,115 +1,6 @@
  ---
  emoji: 🎥
- title: 'Self Forcing Wan 2.1 '
  short_description: Real-time video generation
  sdk: gradio
  ---
- <p align="center">
- <h1 align="center">Self Forcing</h1>
- <h3 align="center">Bridging the Train-Test Gap in Autoregressive Video Diffusion</h3>
- </p>
- <p align="center">
- <p align="center">
- <a href="https://www.xunhuang.me/">Xun Huang</a><sup>1</sup>
-
- <a href="https://zhengqili.github.io/">Zhengqi Li</a><sup>1</sup>
-
- <a href="https://guandehe.github.io/">Guande He</a><sup>2</sup>
-
- <a href="https://mingyuanzhou.github.io/">Mingyuan Zhou</a><sup>2</sup>
-
- <a href="https://research.adobe.com/person/eli-shechtman/">Eli Shechtman</a><sup>1</sup><br>
- <sup>1</sup>Adobe Research <sup>2</sup>UT Austin
- </p>
- <h3 align="center"><a href="https://arxiv.org/abs/2506.08009">Paper</a> | <a href="https://self-forcing.github.io">Website</a> | <a href="https://huggingface.co/gdhe17/Self-Forcing/tree/main">Models (HuggingFace)</a></h3>
- </p>
-
- ---
-
- Self Forcing trains autoregressive video diffusion models by **simulating the inference process during training**, performing autoregressive rollout with KV caching. It resolves the train-test distribution mismatch and enables **real-time, streaming video generation on a single RTX 4090** while matching the quality of state-of-the-art diffusion models.
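-
- The mechanism can be illustrated with a toy sketch (illustrative only; every name below is hypothetical and nothing is taken from this codebase): a tiny attention model is rolled forward on its own outputs while its KV cache grows, and the loss is backpropagated through the rollout, so training sees the same distribution as inference.
-
- ```
- import torch
- import torch.nn as nn
-
- # Toy sketch of the Self Forcing idea, NOT the repo's implementation:
- # generate each step conditioned on the model's OWN previous outputs
- # (rather than ground-truth context), reusing cached keys/values, and
- # backpropagate through the rollout.
- class ToyGenerator(nn.Module):
-     def __init__(self, dim=32):
-         super().__init__()
-         self.to_kv = nn.Linear(dim, 2 * dim)
-         self.to_q = nn.Linear(dim, dim)
-         self.out = nn.Linear(dim, dim)
-
-     def forward(self, frame, kv_cache):
-         k, v = self.to_kv(frame).chunk(2, dim=-1)
-         kv_cache.append((k, v))  # grow the KV cache with this step
-         keys = torch.stack([k for k, _ in kv_cache], dim=1)
-         vals = torch.stack([v for _, v in kv_cache], dim=1)
-         q = self.to_q(frame).unsqueeze(1)
-         attn = torch.softmax(q @ keys.transpose(1, 2) / keys.shape[-1] ** 0.5, dim=-1)
-         return self.out((attn @ vals).squeeze(1))
-
- model = ToyGenerator()
- opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
-
- frame, kv_cache, rollout = torch.randn(2, 32), [], []
- for _ in range(4):                  # autoregressive rollout, as at inference
-     frame = model(frame, kv_cache)  # condition on the model's own output
-     rollout.append(frame)
- loss = torch.stack(rollout).pow(2).mean()  # stand-in for the actual (e.g. DMD) loss
- loss.backward()                     # gradients flow through the whole rollout
- opt.step()
- ```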
-
- ---
-
-
- https://github.com/user-attachments/assets/7548c2db-fe03-4ba8-8dd3-52d2c6160739
-
-
- ## Requirements
- We tested this repo on the following setup:
- * NVIDIA GPU with at least 24 GB of memory (RTX 4090, A100, and H100 are tested).
- * Linux operating system.
- * 64 GB RAM.
-
- Other hardware setups could also work but haven't been tested.
-
- ## Installation
- Create a conda environment and install dependencies:
- ```
- conda create -n self_forcing python=3.10 -y
- conda activate self_forcing
- pip install -r requirements.txt
- pip install flash-attn --no-build-isolation
- python setup.py develop
- ```
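-
- Optionally, sanity-check the new environment before running anything. This snippet is not part of the repo; it only assumes the dependencies installed above:
- ```
- # Optional sanity check (not part of the repo): confirm the CUDA build,
- # visible GPU memory (>= 24 GB recommended), and that flash-attn imports.
- import torch
- import flash_attn
-
- print(torch.__version__, torch.cuda.is_available())
- props = torch.cuda.get_device_properties(0)
- print(props.name, f"{props.total_memory / 1e9:.0f} GB")
- print(flash_attn.__version__)
- ```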
-
- ## Quick Start
- ### Download checkpoints
- ```
- huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir-use-symlinks False --local-dir wan_models/Wan2.1-T2V-1.3B
- huggingface-cli download gdhe17/Self-Forcing checkpoints/self_forcing_dmd.pt --local-dir .
- ```
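-
- If you prefer to script the downloads, the same files can be fetched through the `huggingface_hub` Python API (a sketch equivalent to the CLI commands above, which already rely on this package):
- ```
- from huggingface_hub import hf_hub_download, snapshot_download
-
- # Equivalent of the two CLI commands above.
- snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B", local_dir="wan_models/Wan2.1-T2V-1.3B")
- hf_hub_download(repo_id="gdhe17/Self-Forcing", filename="checkpoints/self_forcing_dmd.pt", local_dir=".")
- ```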
-
- ### GUI demo
- ```
- python demo.py
- ```
- Note:
- * **Our model works better with long, detailed prompts**, since it was trained on such prompts. We will integrate prompt extension into the codebase (similar to [Wan2.1](https://github.com/Wan-Video/Wan2.1/tree/main?tab=readme-ov-file#2-using-prompt-extention)) in the future. For now, we recommend using third-party LLMs (such as GPT-4o) to extend your prompt before providing it to the model.
- * You may want to adjust the FPS so playback is smooth on your device.
- * Speed can be improved by enabling `torch.compile` (see the sketch below), [TAEHV-VAE](https://github.com/madebyollin/taehv/), or FP8 Linear layers, although the latter two options may sacrifice quality. We recommend using `torch.compile` if possible and enabling TAEHV-VAE if a further speedup is needed.
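-
- As a rough illustration of the `torch.compile` option, here is a minimal, self-contained sketch with a toy module (not the demo's actual generator); the first call pays a one-time compilation cost, after which forward passes run the optimized graph:
- ```
- import torch
- import torch.nn as nn
-
- # Toy example (not the demo's model): torch.compile wraps a module so
- # that repeated forward passes run through an optimized compiled graph.
- net = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
- net = torch.compile(net)  # first call compiles; later calls are faster
- x = torch.randn(8, 64)
- print(net(x).shape)  # torch.Size([8, 64])
- ```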
-
- ### CLI Inference
- Example inference command using the chunk-wise autoregressive checkpoint trained with DMD:
- ```
- python inference.py \
-     --config_path configs/self_forcing_dmd.yaml \
-     --output_folder videos/self_forcing_dmd \
-     --checkpoint_path checkpoints/self_forcing_dmd.pt \
-     --data_path prompts/MovieGenVideoBench_extended.txt \
-     --use_ema
- ```
- Other config files and corresponding checkpoints can be found in the [configs](configs) folder and our [Hugging Face repo](https://huggingface.co/gdhe17/Self-Forcing/tree/main/checkpoints).
-
- ## Training
- ### Download text prompts and the ODE-initialized checkpoint
- ```
- huggingface-cli download gdhe17/Self-Forcing checkpoints/ode_init.pt --local-dir .
- huggingface-cli download gdhe17/Self-Forcing vidprom_filtered_extended.txt --local-dir prompts
- ```
- Note: our training algorithm (except for the GAN version) is data-free (**no video data is needed**). For now, we directly provide the ODE initialization checkpoint; we will add instructions on how to perform ODE initialization (identical to the process described in the [CausVid](https://github.com/tianweiy/CausVid) repo) in the future.
-
- ### Self Forcing Training with DMD
- ```
- torchrun --nnodes=8 --nproc_per_node=8 --rdzv_id=5235 \
-     --rdzv_backend=c10d \
-     --rdzv_endpoint $MASTER_ADDR \
-     train.py \
-     --config_path configs/self_forcing_dmd.yaml \
-     --logdir logs/self_forcing_dmd \
-     --disable-wandb
- ```
- Our training run uses 600 iterations and completes in under 2 hours on 64 H100 GPUs. With gradient accumulation, it should be possible to reproduce the results in less than 16 hours on 8 H100 GPUs.
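-
- A minimal sketch of the gradient-accumulation idea (toy model and loss, not the repo's trainer): losses from several micro-batches are scaled and backpropagated before a single optimizer step, emulating the larger per-step batch of the 64-GPU run:
- ```
- import torch
- import torch.nn as nn
-
- # Toy gradient accumulation (not the repo's trainer): accumulate grads
- # over `accum_steps` micro-batches, then take one optimizer step.
- model = nn.Linear(16, 1)
- opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
- accum_steps = 8  # effective batch = micro-batch size * accum_steps
-
- for step in range(2):  # two optimizer steps, for illustration
-     opt.zero_grad(set_to_none=True)
-     for _ in range(accum_steps):
-         x = torch.randn(4, 16)  # micro-batch
-         loss = model(x).pow(2).mean() / accum_steps  # scale to keep the mean
-         loss.backward()  # grads accumulate in .grad across micro-batches
-     opt.step()
- ```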
-
- ## Acknowledgements
- This codebase is built on top of the open-source implementation of [CausVid](https://github.com/tianweiy/CausVid) by [Tianwei Yin](https://tianweiy.github.io/) and the [Wan2.1](https://github.com/Wan-Video/Wan2.1) repo.
-
- ## Citation
- If you find this codebase useful for your research, please cite our paper:
- ```
- @article{huang2025selfforcing,
-     title={Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion},
-     author={Huang, Xun and Li, Zhengqi and He, Guande and Zhou, Mingyuan and Shechtman, Eli},
-     journal={arXiv preprint arXiv:2506.08009},
-     year={2025}
- }
- ```
 
  ---
  emoji: 🎥
+ title: 'VEO3 Real-Time'
  short_description: Real-time video generation
  sdk: gradio
  ---