---
license: apache-2.0
---

## Open-Sora: Democratizing Efficient Video Production for All

We design and implement **Open-Sora**, an initiative dedicated to **efficiently** producing high-quality video. We hope to make the model,
tools, and all details accessible to everyone. By embracing **open-source** principles,
Open-Sora not only democratizes access to advanced video generation techniques, but also offers a
streamlined and user-friendly platform that simplifies the complexities of video generation.
With Open-Sora, our goal is to foster innovation, creativity, and inclusivity within the field of content creation.

GitHub: https://github.com/hpcaitech/Open-Sora

## 🔆 Reports

- **[Tech Report of Open-Sora 2.0](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/paper/Open_Sora_2_tech_report.pdf)**
- **[Step-by-step guide to training or finetuning your own model](docs/train.md)**
- **[Step-by-step guide to training and evaluating a video autoencoder](docs/ae.md)**
- **[Explore the high-compression video autoencoder](docs/hcae.md)**
- Reports of previous versions (best viewed in the corresponding branch):
  - [Open-Sora 1.3](docs/report_04.md): shift-window attention, unified spatial-temporal VAE, etc.
  - [Open-Sora 1.2](docs/report_03.md), [Tech Report](https://arxiv.org/abs/2412.20404): rectified flow, 3D VAE, score condition, evaluation, etc.
  - [Open-Sora 1.1](docs/report_02.md): multi-resolution/length/aspect-ratio, image/video conditioning/editing, data preprocessing, etc.
  - [Open-Sora 1.0](docs/report_01.md): architecture, captioning, etc.

📍 Since Open-Sora is under active development, we maintain separate branches for different versions. The latest version is [main](https://github.com/hpcaitech/Open-Sora). Older versions include: [v1.0](https://github.com/hpcaitech/Open-Sora/tree/opensora/v1.0), [v1.1](https://github.com/hpcaitech/Open-Sora/tree/opensora/v1.1), [v1.2](https://github.com/hpcaitech/Open-Sora/tree/opensora/v1.2), [v1.3](https://github.com/hpcaitech/Open-Sora/tree/opensora/v1.3).

## Quickstart

### Installation

```bash
# create a virtual env and activate it (conda as an example)
conda create -n opensora python=3.10
conda activate opensora

# download the repo
git clone https://github.com/hpcaitech/Open-Sora
cd Open-Sora

# ensure torch >= 2.4.0
pip install -v .  # for development mode, `pip install -v -e .`
pip install xformers==0.0.27.post2 --index-url https://download.pytorch.org/whl/cu121  # install xformers according to your CUDA version
pip install flash-attn --no-build-isolation
```
47
+
48
+ Optionally, you can install flash attention 3 for faster speed.
49
+
50
+ ```bash
51
+ git clone https://github.com/Dao-AILab/flash-attention # 4f0640d5
52
+ cd flash-attention/hopper
53
+ python setup.py install
54
+ ```

### Model Download

Our 11B model supports 256px and 768px resolution. Both T2V and I2V are supported by a single model. 🤗 [Hugging Face](https://huggingface.co/hpcai-tech/Open-Sora-v2) 🤖 [ModelScope](https://modelscope.cn/models/luchentech/Open-Sora-v2).

Download from Hugging Face:

```bash
pip install "huggingface_hub[cli]"
huggingface-cli download hpcai-tech/Open-Sora-v2 --local-dir ./ckpts
```

Download from ModelScope:

```bash
pip install modelscope
modelscope download luchentech/Open-Sora-v2 --local_dir ./ckpts
```

### Text-to-Video Generation

Our model is optimized for image-to-video generation, but it can also be used for text-to-video generation. To generate high-quality videos, we build a text-to-image-to-video pipeline with the help of the Flux text-to-image model. For 256x256 resolution:

```bash
# generate a video from a single prompt
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea"

# generate videos from prompts in a CSV file
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --dataset.data-path assets/texts/example.csv
```
85
+
86
+ For 768x768 resolution:
87
+
88
+ ```bash
89
+ # One GPU
90
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_768px.py --save-dir samples --prompt "raining, sea"
91
+
92
+ # Multi-GPU with colossalai sp
93
+ torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_768px.py --save-dir samples --prompt "raining, sea"
94
+ ```

You can adjust the aspect ratio of the generated video with `--aspect_ratio` and its length with `--num_frames`. Candidate values for `--aspect_ratio` include `16:9`, `9:16`, `1:1`, and `2.39:1`. Candidate values for `--num_frames` should be of the form `4k+1` and less than 129.
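
As a quick sanity check (a standalone sketch, not part of the repository's tooling), the valid frame counts implied by the `4k+1` rule can be enumerated:

```python
# Enumerate valid --num_frames values: of the form 4k + 1 and less than 129.
def valid_num_frames(limit: int = 129) -> list[int]:
    """Return frame counts below `limit` that satisfy n = 4k + 1."""
    return [n for n in range(1, limit) if n % 4 == 1]

frames = valid_num_frames()
print(frames[:4], "...", frames[-1])  # [1, 5, 9, 13] ... 125
```

So the largest admissible value is 125; 129 itself is excluded by the strict upper bound.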
97
+
98
+ You can also run direct text-to-video by:
99
+
100
+ ```bash
101
+ # One GPU for 256px
102
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --prompt "raining, sea"
103
+ # Multi-GPU for 768px
104
+ torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/768px.py --prompt "raining, sea"
105
+ ```
106
+
107
+ ### Image-to-Video Generation
108
+
109
+ Given a prompt and a reference image, you can generate a video with the following command:
110
+
111
+ ```bash
112
+ # 256px
113
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --cond_type i2v_head --prompt "A plump pig wallows in a muddy pond on a rustic farm, its pink snout poking out as it snorts contentedly. The camera captures the pig's playful splashes, sending ripples through the water under the midday sun. Wooden fences and a red barn stand in the background, framed by rolling green hills. The pig's muddy coat glistens in the sunlight, showcasing the simple pleasures of its carefree life." --ref assets/texts/i2v.png
114
+
115
+ # 256px with csv
116
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/256px.py --cond_type i2v_head --dataset.data-path assets/texts/i2v.csv
117
+
118
+ # Multi-GPU 768px
119
+ torchrun --nproc_per_node 8 --standalone scripts/diffusion/inference.py configs/diffusion/inference/768px.py --cond_type i2v_head --dataset.data-path assets/texts/i2v.csv
120
+ ```
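
If you want to build batch files like `i2v.csv` programmatically, a minimal sketch follows. The column names `text` and `ref` are illustrative assumptions, not the confirmed schema — mirror the header of the bundled `assets/texts/i2v.csv` for the format the inference script actually expects:

```python
import csv

# Illustrative rows: a prompt plus a reference-image path per sample.
# NOTE: the column names "text" and "ref" are assumptions; copy the
# header from assets/texts/i2v.csv in the repository.
rows = [
    {"text": "raining, sea", "ref": "assets/texts/i2v.png"},
    {"text": "a pig wallows in a muddy pond", "ref": "assets/texts/i2v.png"},
]

with open("my_i2v.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["text", "ref"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting file can then be passed via `--dataset.data-path my_i2v.csv` as in the commands above.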
121
+
122
+ ## Advanced Usage
123
+
124
+ ### Motion Score
125
+
126
+ During training, we provide motion score into the text prompt. During inference, you can use the following command to generate videos with motion score (the default score is 4):
127
+
128
+ ```bash
129
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea" --motion-score 4
130
+ ```
131
+
132
+ We also provide a dynamic motion score evaluator. After setting your OpenAI API key, you can use the following command to evaluate the motion score of a video:
133
+
134
+ ```bash
135
+ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea" --motion-score dynamic
136
+ ```

| Score | 1 | 4 | 7 |
| ----- | --- | --- | --- |
| | <img src="https://github.com/hpcaitech/Open-Sora-Demo/blob/main/demo/v2.0/motion_score_1.gif" width=""> | <img src="https://github.com/hpcaitech/Open-Sora-Demo/blob/main/demo/v2.0/motion_score_4.gif" width=""> | <img src="https://github.com/hpcaitech/Open-Sora-Demo/blob/main/demo/v2.0/motion_score_7.gif" width=""> |

### Prompt Refine

We use ChatGPT to refine the prompt. This feature is available for both text-to-video and image-to-video generation:

```bash
export OPENAI_API_KEY=sk-xxxx
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea" --refine-prompt True
```

### Reproducibility

To make the results reproducible, you can set the random seed with:

```bash
torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_256px.py --save-dir samples --prompt "raining, sea" --sampling_option.seed 42 --seed 42
```

Use `--num-sample k` to generate `k` samples for each prompt.

## Computational Efficiency

We test the computational efficiency of text-to-video generation on H100/H800 GPUs. For 256x256, we use ColossalAI's tensor parallelism; for 768x768, we use ColossalAI's sequence parallelism. All runs use 50 sampling steps. Results are presented as $\color{blue}{\text{total time (s)}}/\color{red}{\text{peak GPU memory (GB)}}$:

| Resolution | 1x GPU | 2x GPUs | 4x GPUs | 8x GPUs |
| ---------- | -------------------------------------- | ------------------------------------- | ------------------------------------- | ------------------------------------- |
| 256x256    | $\color{blue}{60}/\color{red}{52.5}$   | $\color{blue}{40}/\color{red}{44.3}$  | $\color{blue}{34}/\color{red}{44.3}$  |                                       |
| 768x768    | $\color{blue}{1656}/\color{red}{60.3}$ | $\color{blue}{863}/\color{red}{48.3}$ | $\color{blue}{466}/\color{red}{44.3}$ | $\color{blue}{276}/\color{red}{44.3}$ |
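
For intuition, the 768x768 timings above can be turned into speedup and parallel-efficiency figures relative to the single-GPU run (a small sketch over the table's numbers, nothing more):

```python
# Total time (s) for 768x768 from the table above, keyed by GPU count.
times_768 = {1: 1656, 2: 863, 4: 466, 8: 276}

def scaling(times: dict[int, float]) -> dict[int, tuple[float, float]]:
    """Return {gpus: (speedup, parallel efficiency)} vs. the 1-GPU run."""
    base = times[1]
    return {g: (base / t, base / (t * g)) for g, t in times.items()}

for g, (speedup, eff) in scaling(times_768).items():
    print(f"{g} GPU(s): speedup {speedup:.2f}x, efficiency {eff:.0%}")
```

At 8 GPUs this works out to a 6.00x speedup, i.e. 75% parallel efficiency.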

## Evaluation

On [VBench](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard), Open-Sora 2.0 significantly narrows the gap with OpenAI's Sora, reducing it from 4.52% (Open-Sora 1.2) to 0.69%.

![VBench](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/v2_vbench.png)

Human preference results show our model is on par with HunyuanVideo 14B and Step-Video 30B.

![Win Rate](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/v2_winrate.png)
179
+
180
+ ## Contribution
181
+
182
+ Thanks goes to these wonderful contributors:
183
+
184
+ <a href="https://github.com/hpcaitech/Open-Sora/graphs/contributors">
185
+ <img src="https://contrib.rocks/image?repo=hpcaitech/Open-Sora" />
186
+ </a>
187
+
188
+ If you wish to contribute to this project, please refer to the [Contribution Guideline](./CONTRIBUTING.md).

## Acknowledgement

Here we list only a few of the projects we build upon. For other works and datasets, please refer to our report.

- [ColossalAI](https://github.com/hpcaitech/ColossalAI): A powerful large-model parallel acceleration and optimization system.
- [DiT](https://github.com/facebookresearch/DiT): Scalable Diffusion Models with Transformers.
- [OpenDiT](https://github.com/NUS-HPC-AI-Lab/OpenDiT): An acceleration framework for DiT training. We adopt valuable acceleration strategies for training from OpenDiT.
- [PixArt](https://github.com/PixArt-alpha/PixArt-alpha): An open-source DiT-based text-to-image model.
- [Flux](https://github.com/black-forest-labs/flux): A powerful text-to-image generation model.
- [Latte](https://github.com/Vchitect/Latte): An attempt to efficiently train DiT for video.
- [HunyuanVideo](https://github.com/Tencent/HunyuanVideo/tree/main?tab=readme-ov-file): An open-source text-to-video model.
- [StabilityAI VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original): A powerful image VAE model.
- [DC-AE](https://github.com/mit-han-lab/efficientvit): Deep Compression AutoEncoder for image compression.
- [CLIP](https://github.com/openai/CLIP): A powerful text-image embedding model.
- [T5](https://github.com/google-research/text-to-text-transfer-transformer): A powerful text encoder.
- [LLaVA](https://github.com/haotian-liu/LLaVA): A powerful image captioning model based on [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [Yi-34B](https://huggingface.co/01-ai/Yi-34B).
- [PLLaVA](https://github.com/magic-research/PLLaVA): A powerful video captioning model.
- [MiraData](https://github.com/mira-space/MiraData): A large-scale video dataset with long durations and structured captions.
210
+
211
+ ## Citation
212
+
213
+ ```bibtex
214
+ @software{opensora,
215
+ author = {Zangwei Zheng and Xiangyu Peng and Tianji Yang and Chenhui Shen and Shenggui Li and Hongxin Liu and Yukun Zhou and Tianyi Li and Yang You},
216
+ title = {Open-Sora: Democratizing Efficient Video Production for All},
217
+ month = {March},
218
+ year = {2024},
219
+ url = {https://github.com/hpcaitech/Open-Sora}
220
+ }
221
+ ```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=hpcaitech/Open-Sora&type=Date)](https://star-history.com/#hpcaitech/Open-Sora&Date)