---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-T2V-14B
pipeline_tag: text-to-video
tags:
- text-to-video
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    The video shows a coastal village hit by a t5un@m1 realistic tsunami.
    Buildings are flooded, boats are capsized, and debris litters the streets,
    while people are seen evacuating the area.
  output:
    url: example_videos/tsunami1.mp4
- text: >-
    The video is of a beach with many boats and jet skis. The water recedes
    and then a t5un@m1 realistic tsunami rushes in, sweeping away all the
    objects on the beach. The sky is sunny.
  output:
    url: example_videos/tsunami2.mp4
- text: >-
    The video shows a t5un@m1 realistic tsunami wave crashing into a coastal
    area. The wave is large and powerful, and it destroys everything in its
    path. There are several boats and buildings in the path of the wave. The
    water is a murky brown color.
  output:
    url: example_videos/tsunami3.mp4
- text: >-
    The video shows a coastal city after a t5un@m1 realistic tsunami. The city
    is flooded with water, and there are many damaged buildings and cars. The
    sky is a dark gray.
  output:
    url: example_videos/tsunami4.mp4
---
This LoRA is trained on the Wan2.1 14B T2V model and allows you to generate videos of realistic tsunamis!
The key trigger phrase is: t5un@m1 realistic tsunami
For prompting, see the example prompts above; that style of prompting works well with this LoRA.
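If you prefer to run inference outside ComfyUI, below is a minimal sketch using the Hugging Face diffusers library. It assumes the Diffusers-format base checkpoint `Wan-AI/Wan2.1-T2V-14B-Diffusers` and a hypothetical local LoRA path and file name; adjust paths, resolution, and frame count to your setup.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Load the Diffusers-format Wan2.1 T2V 14B base model (assumed repo id).
# Keeping the VAE in float32 is the commonly recommended setting for Wan.
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load this LoRA; the directory and file name here are placeholders.
pipe.load_lora_weights("path/to/lora_dir", weight_name="tsunami_lora.safetensors")

# Include the trigger phrase "t5un@m1 realistic tsunami" in the prompt.
prompt = (
    "The video shows a coastal village hit by a t5un@m1 realistic tsunami. "
    "Buildings are flooded, boats are capsized, and debris litters the streets, "
    "while people are seen evacuating the area."
)

frames = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "tsunami.mp4", fps=16)
```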
This LoRA works with a modified version of Kijai's Wan Video Wrapper workflow. The main modification is adding a Wan LoRA node connected to the base model.
See the Downloads section above for the modified workflow.
The model weights are available in Safetensors format. See the Downloads section above.
Training was done using the diffusion-pipe training toolkit.
Special thanks to Kijai for the ComfyUI Wan Video Wrapper and tdrussell for the training scripts!