toshi456 committed on
Commit
eef6a62
1 Parent(s): 7b47f1c

Update README.md

Files changed (1): README.md (+28 -3)
README.md CHANGED
@@ -1,3 +1,28 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - turing-motors/LLaVA-Pretrain-JA
+ language:
+ - ja
+ ---
+
+ # ConvLLaVA-JP Model Card
+ This is a pretrained checkpoint; you can use it to instruction-tune your multimodal models.
+
+ Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP).
+
+ ## Model details
+ **Model type:**
+ ConvLLaVA-JP is a vision-language model that can converse about input images.<br>
+ It is an LVLM trained with [laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. It supports input of 768 x 768 high-resolution images.
+
+ ## Training dataset
+ - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)
+
+ ## Acknowledgement
+ - [ConvLLaVA](https://arxiv.org/abs/2405.15738)
+ - [LLM-jp](https://llm-jp.nii.ac.jp/)
+ - [Open CLIP](https://github.com/mlfoundations/open_clip)
+
+ ## License
+ Apache-2.0
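
The updated model card states that the CLIP-ConvNeXt image encoder takes 768 x 768 input. As a minimal sketch of the image preprocessing such an encoder expects (PIL + NumPy), assuming the standard OpenAI CLIP normalization constants — the checkpoint's actual image-processor config may differ, so treat the mean/std values here as illustrative:

```python
from PIL import Image
import numpy as np

def preprocess(image: Image.Image, size: int = 768) -> np.ndarray:
    """Resize an image to size x size and normalize it CLIP-style."""
    # Resize to the 768 x 768 resolution the model card mentions.
    image = image.convert("RGB").resize((size, size), Image.BICUBIC)
    # Scale pixel values to [0, 1].
    arr = np.asarray(image, dtype=np.float32) / 255.0
    # Assumed CLIP mean/std -- verify against the model's processor config.
    mean = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
    std = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)
    arr = (arr - mean) / std
    # HWC -> CHW, then add a batch dimension.
    return arr.transpose(2, 0, 1)[None]

img = Image.new("RGB", (1024, 640), color=(128, 128, 128))
batch = preprocess(img)
print(batch.shape)  # (1, 3, 768, 768)
```

The resulting array has the batch-first, channels-first layout that PyTorch-style vision towers typically consume.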