## 📣 Updates

- `2025/07/08` 🔥🔥 Our latest work, [Tora2](https://ali-videoai.github.io/Tora2_page/), has been accepted by ACM MM25. Tora2 builds on Tora with design improvements, enabling enhanced appearance and motion customization for multiple entities.
- `2025/05/24` We open-sourced a LoRA-finetuned model of [Wan](https://github.com/Wan-Video/Wan2.1). It turns objects in an image into fluffy toys. Check it out: https://github.com/alibaba/wan-toy-transform
- `2025/01/06` 🔥🔥 We released Tora Image-to-Video, including inference code and model weights.
- `2024/12/13` SageAttention2 and model compilation are supported in the diffusers version. Tested on an A10, these approaches speed up each inference step after the first by approximately 52%.
- `2024/12/09` 🔥🔥 The diffusers version of Tora and the corresponding model weights are released. Inference VRAM requirements are reduced to around 5 GiB. Please refer to [this document](diffusers-version/README.md) for details.