ZhangYuanhan committed: Update README.md
README.md
CHANGED
```diff
@@ -6,7 +6,10 @@ colorTo: red
 sdk: static
 pinned: false
 ---
-
+- **[2024-10]** 🎬🎬 We present `LLaVA-Video`, a family of open large multimodal models (LMMs) designed specifically for advanced video understanding. We're excited to open-source LLaVA-Video-178K, a high-quality, synthetic dataset curated for video instruction tuning.
+
+[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://github.com/LLaVA-VL/LLaVA-NeXT)
+
 - **[2024-08]** 🤞🤞 We present `LLaVA-OneVision`, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series.
 
 [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-08-05-llava-onevision/)
```