---
license: cc-by-sa-4.0
language:
- ko
library_name: transformers
tags:
- LLaVA
- KoLLaVA
- Synatra
- CLIP
---
|
|
|
# KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)

This model is a large multimodal model (LMM) that combines the LLM [Synatra](https://huggingface.co/maywell/Synatra-7B-v0.3-dpo) with the visual encoder of CLIP ([clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336)), trained on a [Korean visual-instruction dataset](https://huggingface.co/datasets/tabtoyou/KoLLaVA-Instruct-612k).
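To illustrate how such an LMM wires the two parts together, here is a minimal NumPy sketch of the LLaVA-style projection step that maps CLIP image features into the LLM's token-embedding space. The dimensions are assumptions for illustration (clip-vit-large-patch14-336 yields 24×24 = 576 patch tokens of width 1024; a 7B LLM such as Synatra typically uses 4096-dim hidden states), and the random weights stand in for the trained projector; see the KoLLaVA repository for the actual implementation.

```python
import numpy as np

# Illustrative dimensions (assumptions, not taken from the model weights):
# clip-vit-large-patch14-336: (336 / 14)^2 = 576 patch tokens, 1024-dim each.
# A 7B LLM such as Synatra: 4096-dim token embeddings.
num_patches, clip_dim, llm_dim = 576, 1024, 4096

rng = np.random.default_rng(0)
image_features = rng.standard_normal((num_patches, clip_dim))

# The projector (a linear layer in LLaVA v1, a small MLP in v1.5) maps
# CLIP features into the LLM's embedding space. Random weights here
# stand in for the trained projector.
W = rng.standard_normal((clip_dim, llm_dim)) / np.sqrt(clip_dim)
visual_tokens = image_features @ W

# These 576 projected "visual tokens" are then concatenated with the text
# token embeddings and processed by the LLM like ordinary tokens.
print(visual_tokens.shape)  # (576, 4096)
```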
|
|
|
Detailed code is available in the [KoLLaVA GitHub repository](https://github.com/tabtoyou/KoLLaVA).
|
|
|
# **License**
|
|
|
This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-sa/4.0/) (**cc-by-sa-4.0**) use, for services under **5K MAU**.

The "Model" (i.e. the base model, derivatives, and merges/mixes) is completely free to use for non-commercial purposes, as long as the **cc-by-sa-4.0** license included in any parent repository and the non-commercial-use clause remain in effect, regardless of other models' licenses.

If your service has over **5K MAU**, contact me for license approval.
|