---
license: apache-2.0
---
|
|
|
# Video-LLaVA-Seg
|
|
|
[Project](https://ali2500.github.io/vicas-project/) | [arXiv](https://arxiv.org/abs/2412.09754)
|
|
|
This is the official baseline model for the ViCaS dataset. This checkpoint is the pretrained model, optimized for video captioning on a subset of WebVid10M and Panda70M. The final model, finetuned on ViCaS, is hosted [here](https://huggingface.co/fun-research/Video-LLaVA-Seg).
|
|
|
For details about setting up the model, refer to the [Video-LLaVA-Seg GitHub repo](https://github.com/Ali2500/Video-LLaVA-Seg/tree/main).
|
|
|
For details about downloading the dataset and running the benchmark evaluation, refer to the [ViCaS GitHub repo](https://github.com/Ali2500/ViCaS/tree/main).