ZhangYuanhan committed
Commit 5064629 · verified · 1 Parent(s): 40e4cf7

Update README.md

Files changed (1):
  1. README.md +4 -1
README.md CHANGED

@@ -6,7 +6,10 @@ colorTo: red
 sdk: static
 pinned: false
 ---
-
+ - **[2024-10]** 🎬🎬 We present `LLaVA-Video`, a family of open large multimodal models (LMMs) designed specifically for advanced video understanding. We're excited to open-source LLaVA-Video-178K, a high-quality, synthetic dataset curated for video instruction tuning.
+
+ [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://github.com/LLaVA-VL/LLaVA-NeXT)
+
  - **[2024-08]** 🤞🤞 We present `LLaVA-OneVision`, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series.

  [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-08-05-llava-onevision/)