Update README.md
README.md CHANGED
@@ -6,6 +6,10 @@ colorTo: red
 sdk: static
 pinned: false
 ---
+- **[2024-10]** 🔥🔥 We present `LLaVA-Critic`, the first open-source large multimodal model serving as a generalist evaluator for assessing LMM-generated responses across diverse multimodal tasks and scenarios.
+
+[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-10-03-llava-critic/)
+
 - **[2024-10]** 🎬🎬 We present `LLaVA-Video`, a family of open large multimodal models (LMMs) designed specifically for advanced video understanding. We're excited to open-source LLaVA-Video-178K, a high-quality, synthetic dataset curated for video instruction tuning.
 
 [GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://github.com/LLaVA-VL/LLaVA-NeXT)
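For context, the `colorTo: red`, `sdk: static`, and `pinned: false` lines in the hunk above belong to the YAML configuration header that a Hugging Face Space README carries between `---` markers; the announcements added by this commit sit in the Markdown body immediately after the closing `---`. Below is a minimal sketch of such a header: only the three fields visible in the diff come from this commit, while `title`, `emoji`, and `colorFrom` are assumed placeholders.

```yaml
---
title: LLaVA-NeXT   # assumed placeholder; the real title is not shown in the diff
emoji: 🌋           # assumed placeholder
colorFrom: yellow   # assumed placeholder; pairs with colorTo for the card gradient
colorTo: red        # from the hunk header above
sdk: static         # from the diff: the Space serves static HTML
pinned: false       # from the diff: not pinned on the owner's profile
---
```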