lev1 committed on
Commit 5a94e7e · 1 Parent(s): 38cb44d

text improvement

Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -20,7 +20,7 @@ with gr.Blocks(css='style.css') as demo:
     Text2Video-Zero
     </h1>
     <h2 style="font-weight: 450; font-size: 1rem; margin-top: 0.5rem; margin-bottom: 0.5rem">
-    We built <b>Text2Video-Zero, a first zero-shot text-to-video synthesis diffusion framework, that enables low cost yet high-quality and consistent video generation with only pre-trained text-to-image diffusion models without any training on videos or optimization!
+    We built <b>Text2Video-Zero</b>, a first zero-shot text-to-video synthesis diffusion framework, that enables low cost yet high-quality and consistent video generation with only pre-trained text-to-image diffusion models without any training on videos or optimization!
     Text2Video-Zero also naturally supports cool derivative works of pre-trained text-to-image models such as Instruct Pix2Pix, ControlNet and DreamBooth, and based on which we present Video Instruct Pix2Pix, Pose Conditional, Edge Conditional and, Edge Conditional and DreamBooth Specialized applications.
     We hope our Text2Video-Zero will further democratize AI and empower creativity of everyone by unleashing the zero-shot video generation and editing capacity of the amazing text-to-image models and encourages future research!
     </h2>
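
For context, a minimal sketch of how a header like this can be rendered inside the demo named in the hunk header. Only `with gr.Blocks(css='style.css') as demo:` is visible in the diff; the use of gr.HTML and the trimmed header text below are assumptions for illustration, not the app's actual code.

import gradio as gr

# Minimal sketch: rendering the corrected header inside the Blocks context shown
# in the hunk header. gr.HTML is an assumption; app.py may build this differently.
with gr.Blocks(css='style.css') as demo:
    gr.HTML(
        """
        <h1>Text2Video-Zero</h1>
        <h2 style="font-weight: 450; font-size: 1rem; margin-top: 0.5rem; margin-bottom: 0.5rem">
          We built <b>Text2Video-Zero</b>, a first zero-shot text-to-video synthesis
          diffusion framework.  <!-- closing </b> is the tag added by this commit -->
        </h2>
        """
    )

if __name__ == '__main__':
    demo.launch()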