Update README.md
README.md
CHANGED
@@ -42,10 +42,6 @@ Pretrained text-to-image Model of [HunYuan-DiT](https://github.com/Tencent/HunyuanDiT)
 ### Acknowledgement
 Skip-DiT has been greatly inspired by the following amazing works and teams: [DeepCache](https://arxiv.org/abs/2312.00858), [Latte](https://github.com/Vchitect/Latte), [DiT](https://github.com/facebookresearch/DiT), and [HunYuan-DiT](https://github.com/Tencent/HunyuanDiT). We thank all the contributors for open-sourcing their work.
 
-### License
-The code and model weights are licensed under [LICENSE](./class-to-image/LICENSE).
-
-
 ### Visualization
 #### Text-to-Video
 [Text-to-Video visualization image]