---
inference: false
license: apache-2.0
---

# LLaVA-Hound Model Card

## Model details

**Model type:**

LLaVA-Hound is an open-source video large multimodal model, fine-tuned on video instruction-following data and built on a large language model.

This model is fine-tuned on **image instruction** and **video caption** data, starting from the **ShareGPTVideo/LLaVA-Hound-Pretrain** checkpoint.

Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)

**Model date:**

Trained on March 15, 2024.

**Paper or resources for more information:**

https://github.com/RifleZhang/LLaVA-Hound-DPO

## License

The model is subject to the [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) license.

**Where to send questions or comments about the model:**

https://github.com/RifleZhang/LLaVA-Hound-DPO/issues

## Intended use

**Primary intended uses:**

Video (and image) instruction-following.
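
For reference, below is a minimal sketch of fetching the released checkpoint for use with the inference code in the repository above, assuming the `huggingface_hub` client; the exact model repo id under the ShareGPTVideo organization is an assumption and should be verified on the Hub.

```python
# Minimal sketch (not the official instructions): download the checkpoint,
# then run it with the inference code from
# https://github.com/RifleZhang/LLaVA-Hound-DPO.
# NOTE: the repo_id below is an assumption; verify the exact model id
# under the ShareGPTVideo organization on the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="ShareGPTVideo/LLaVA-Hound-SFT")  # hypothetical id
print(f"Checkpoint downloaded to: {local_dir}")
```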

**Primary intended users:**

Researchers in artificial intelligence, large multimodal models, and related fields.

## Training dataset

ShareGPTVideo dataset.
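
Similarly, a hedged sketch for pulling the training data from the Hub; the dataset repo id and file patterns below are assumptions to be checked against the ShareGPTVideo organization.

```python
# Sketch: download (part of) the ShareGPTVideo training data.
# NOTE: repo_id and allow_patterns are assumptions; check the ShareGPTVideo
# organization on the Hugging Face Hub for the actual dataset repositories.
from huggingface_hub import snapshot_download

data_dir = snapshot_download(
    repo_id="ShareGPTVideo/train_video_and_instruction",  # hypothetical dataset id
    repo_type="dataset",
    allow_patterns=["*.json"],  # e.g., instruction files only; hypothetical pattern
)
print(f"Dataset files downloaded to: {data_dir}")
```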

## Evaluation

Follow the instructions in the repository README: https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/README.md