---
title: README
emoji:
colorFrom: blue
colorTo: red
sdk: static
pinned: false
---
- **[2024-06]** 🚀🚀 We release `LongVA`, a long-context multimodal model with state-of-the-art performance on video understanding tasks.
[GitHub](https://github.com/EvolvingLMMs-Lab/LongVA) | [Blog](https://lmms-lab.github.io/posts/longva/)
- **[2024-06]** 🎬🎬 `lmms-eval` has been upgraded to `v0.2`, adding video evaluation support for video models such as LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more.
[GitHub](https://github.com/EvolvingLMMs-Lab/lmms-eval) | [Blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/)
- **[2024-05]** 🚀🚀 We release `LLaVA-NeXT Video`, a video model that achieves state-of-the-art results, reaching the performance level of Google's Gemini on diverse video understanding tasks.
[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/)
- **[2024-05]** 🚀🚀 We release `LLaVA-NeXT`, achieving state-of-the-art, near-GPT-4V performance on multiple multimodal benchmarks. The LLaVA model family now scales up to 72B and 110B parameters.
[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/)
- **[2024-03]** We release `lmms-eval`, a toolkit for holistic evaluation across 50+ multimodal datasets and 10+ models; a sketch of a typical invocation follows below.
[GitHub](https://github.com/EvolvingLMMs-Lab/lmms-eval) | [Blog](https://lmms-lab.github.io/posts/lmms-eval-0.1/)
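
As a rough illustration of how `lmms-eval` is driven, the sketch below shows a representative command-line run. The model name, task name, checkpoint, and flags are assumptions drawn from the project's documentation and may differ across versions; see the GitHub README for authoritative usage.

```bash
# Minimal sketch of an lmms-eval run (names and flags may vary by version).
# Install from source: pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git

# Evaluate a LLaVA checkpoint on the MME benchmark across 8 GPUs,
# logging per-sample outputs for later inspection.
accelerate launch --num_processes=8 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```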