https://huggingface.co/tabtoyou/KoLLaVA-v1.5-Synatra-7b/tree/main
#2 by sampoo00 - opened

README.md CHANGED
@@ -12,8 +12,8 @@ tags:
 ---
 
 # **KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)**
 
-This model is a large multimodal model (LMM) that combines the LLM
-](https://huggingface.co/openai/clip-vit-large-patch14-336)), trained on Korean visual-instruction dataset
+This model is a large multimodal model (LMM) that combines the LLM ([Synatra](https://huggingface.co/maywell/Synatra-7B-v0.3-dpo)) with the visual encoder of CLIP ([clip-vit-large-patch14-336
+](https://huggingface.co/openai/clip-vit-large-patch14-336)), trained on a [Korean visual-instruction dataset](https://huggingface.co/datasets/tabtoyou/KoLLaVA-Instruct-612k).
 
 Detail codes are available at [KoLLaVA github repository](https://github.com/tabtoyou/KoLLaVA)
 
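For context on the architecture the updated description names, below is a minimal sketch that loads the two building blocks (the Synatra-7B language model and the CLIP ViT-L/14 336px visual encoder) with the Hugging Face transformers API. It only illustrates the components; the projector weights and multimodal wiring that make up KoLLaVA itself live in the [KoLLaVA github repository](https://github.com/tabtoyou/KoLLaVA), so this is not the repository's own loading path.

```python
# Minimal sketch (not the official KoLLaVA loading code): load the two components
# the README describes, using the Hugging Face transformers library.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    CLIPImageProcessor,
    CLIPVisionModel,
)

# Language model backbone: Synatra-7B
llm = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-dpo")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-dpo")

# Visual encoder: CLIP ViT-L/14 at 336px resolution
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

# Following LLaVA, KoLLaVA adds a projector that maps CLIP image features into the
# LLM's embedding space and fine-tunes on the Korean visual-instruction dataset;
# see https://github.com/tabtoyou/KoLLaVA for the full model and inference code.
```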