---
license: apache-2.0
datasets:
  - TempoFunk/webvid-10M
language:
  - en
tags:
  - text-to-video
base_model:
  - ali-vilab/text-to-video-ms-1.7b
---

# caT text to video

Conditionally augmented text-to-video model. It uses pre-trained weights from the ModelScope text-to-video model, augmented with temporal conditioning transformers that extend generated clips and create smooth transitions between them. Prompt interpolation is also supported, so the scene can change while a clip is being extended.

This project was trained at home as a hobby.
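
The caT-specific temporal conditioning and prompt interpolation are implemented in this repository and driven through the `run.py` interface described below. For orientation only, here is a minimal sketch of loading the base `ali-vilab/text-to-video-ms-1.7b` pipeline with Hugging Face diffusers; it does not exercise caT's conditioning code, and the prompt, frame count, and output path are illustrative assumptions.

```python
# Sketch only: loads the stock ModelScope base model that caT builds on,
# not the caT-augmented pipeline (use run.py for that).
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "ali-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # optional: lowers VRAM use at some speed cost

result = pipe("a corgi running on a beach", num_inference_steps=25, num_frames=16)
frames = result.frames[0]  # recent diffusers returns a batch of videos; older versions return the frames directly
export_to_video(frames, "clip.mp4")
```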

## Installation

### Clone the Repository

```bash
git clone https://github.com/motexture/caT-text-to-video.git
cd caT-text-to-video
python3 -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
python3 run.py
```

Once `run.py` starts, visit the URL it provides in your browser to interact with the interface and start generating videos.