jadechoghari committed ed88271 (1 parent: 221ffac): Update README.md
Files changed: README.md (+31, -10)
It builds on [this paper](https://arxiv.org/abs/2312.10656).

```python
from diffusers import DiffusionPipeline

# load the pretrained VidToMe pipeline
pipeline = DiffusionPipeline.from_pretrained(
    "jadechoghari/VidToMe",
    trust_remote_code=True,
    custom_pipeline="jadechoghari/VidToMe",
    sd_version="depth",
    device="cuda",
    float_precision="fp16"
)

# specify the input video and output directory
input_path = "flamingo.mp4"   # input video file
work_dir = "output/flamingo"  # directory for saving results

# set prompts for inversion and generation
inversion_prompt = "flamingos standing in the water near a tree."
generation_prompt = {"origami": "rainbow-colored origami flamingos standing in the water near a tree."}

# additional control and parameters
control_type = "none"  # no extra control; use "depth" if needed
negative_prompt = ""

# run the video editing pipeline
generated_images = pipeline(
    video_path=input_path,          # path to the input video
    video_prompt=inversion_prompt,  # prompt describing the source video (inversion)
    edit_prompt=generation_prompt,  # edit prompt(s) for generation
    control_type=control_type       # control type (e.g., "none", "depth")
)
```
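The pipeline call above returns the edited frames. As a minimal sketch of saving them under `work_dir` — assuming the output is a list of PIL images, which this model card does not confirm — you could write:

```python
import os
from PIL import Image

# Stand-in frames for illustration; in practice, use the `generated_images`
# returned by the pipeline call above (assumed here to be PIL images).
generated_images = [Image.new("RGB", (64, 64), (255, 0, 0)) for _ in range(3)]

work_dir = "output/flamingo"
os.makedirs(work_dir, exist_ok=True)

# save each edited frame as a numbered PNG inside the output directory
for i, frame in enumerate(generated_images):
    frame.save(os.path.join(work_dir, f"frame_{i:04d}.png"))
```

The numbered PNGs can then be assembled back into a video with any standard tool.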

#### Note: For finer control, create a configuration file and follow the instructions in the GitHub repository.

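As a rough illustration of what such a configuration might collect — with the caveat that the actual schema is defined in the GitHub repository, and every key below is a hypothetical stand-in, not the project's real config format:

```python
# Hypothetical configuration gathering the parameters from the example above.
# The real schema lives in the VidToMe GitHub repository; these keys are
# illustrative assumptions only.
config = {
    "input_path": "flamingo.mp4",
    "work_dir": "output/flamingo",
    "sd_version": "depth",
    "float_precision": "fp16",
    "inversion_prompt": "flamingos standing in the water near a tree.",
    "generation": {
        "prompt": {"origami": "rainbow-colored origami flamingos standing in the water near a tree."},
        "negative_prompt": "",
        "control_type": "none",
    },
}
```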
## Applications:
- Zero-shot video editing for content creators
- Video transformation using natural language prompts
- Memory-optimized video generation for longer or complex sequences

**Model Authors:**
- [Xirui Li](https://github.com/lixirui142)
- Chao Ma
- Xiaokang Yang
- Ming-Hsuan Yang