---
language:
- en
license: cc-by-4.0
tags:
- llava
datasets:
- taesiri/video-game-question-answering
- taesiri/video-game-question-answering-mixtral-8x7b-instruct-v0-1
inference: false
pipeline_tag: image-text-to-text
---
<br>
<br>

# LLaVA-VideoGameVQA - Work In Progress - Model Card
## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.

**Model date:**
LLaVA-v1.5-13B-LoRA was trained in December 2023.
**LoRA Weights:**
- [Checkpoint 1](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-1) trained on 28K question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
- [Checkpoint 5](https://huggingface.co/taesiri/llava-videogame-qa-lora-wip/tree/main/lora-checkpoints-5) trained on 74K question-answering pairs. Base Model: `liuhaotian/llava-v1.5-13b`
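
These checkpoints are LoRA adapters only, so they need to be loaded on top of the base model. Below is a minimal, untested sketch of one way to do this with the upstream LLaVA codebase (https://github.com/haotian-liu/LLaVA); the `snapshot_download` call and the explicit `model_name` string are illustrative assumptions, not part of this repository.

```python
# Minimal sketch (untested): apply a LoRA checkpoint from this repo to the base model.
# Assumes the LLaVA codebase (github.com/haotian-liu/LLaVA) and huggingface_hub are installed.
from huggingface_hub import snapshot_download
from llava.model.builder import load_pretrained_model

# Download only the desired LoRA checkpoint folder from this repository.
repo_dir = snapshot_download(
    "taesiri/llava-videogame-qa-lora-wip",
    allow_patterns=["lora-checkpoints-5/*"],
)
lora_path = f"{repo_dir}/lora-checkpoints-5"

# load_pretrained_model merges the LoRA weights into the base model when the
# supplied model_name contains both "llava" and "lora" and model_base is set.
# The model_name below is an illustrative label chosen to satisfy that check.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=lora_path,
    model_base="liuhaotian/llava-v1.5-13b",
    model_name="llava-videogame-qa-lora",
)
```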