Update README.md
README.md CHANGED
```diff
@@ -1,6 +1,5 @@
 ---
 license: bsd-3-clause
-inference: false
 language:
 - en
 pipeline_tag: visual-question-answering
@@ -19,7 +18,7 @@ LoViM is an open-source Vision-Languagde model trained by initializing from Inst
 It composes of an EVA-CLIP vision encoder, a Q-Former, a projection layer and an auto-regressive language model, based on the decoder only transformer architecture.
 
 **Model date:**
-
+LoViM_Vicuna was trained in July 2023.
 
 **Paper or resources for more information:**
 https://project page
```
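For orientation, the context line of the second hunk describes the architecture: an EVA-CLIP vision encoder, a Q-Former, a projection layer, and a decoder-only language model. Below is a schematic sketch of that composition; the module interfaces, hidden sizes, and the BLIP-2-style prepending of visual tokens are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn as nn

class LoViMSketch(nn.Module):
    """Schematic of the composition named in the card: vision encoder ->
    Q-Former -> projection -> decoder-only LLM. Illustrative only."""

    def __init__(self, vision_encoder, q_former, language_model,
                 qformer_dim=768, llm_dim=4096):  # assumed hidden sizes
        super().__init__()
        self.vision_encoder = vision_encoder   # e.g. EVA-CLIP ViT
        self.q_former = q_former               # compresses image features into query tokens
        self.projection = nn.Linear(qformer_dim, llm_dim)  # aligns queries to the LLM embedding size
        self.language_model = language_model   # decoder-only LLM (Vicuna, per the model name)

    def forward(self, pixel_values, text_embeds):
        image_feats = self.vision_encoder(pixel_values)   # patch-level visual features
        query_tokens = self.q_former(image_feats)         # (batch, num_queries, qformer_dim)
        visual_embeds = self.projection(query_tokens)     # (batch, num_queries, llm_dim)
        # Prepend the visual tokens to the text prompt and decode auto-regressively.
        inputs_embeds = torch.cat([visual_embeds, text_embeds], dim=1)
        return self.language_model(inputs_embeds=inputs_embeds)
```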
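Since the change also drops `inference: false` from the metadata, the card's `pipeline_tag: visual-question-answering` becomes the task the Hub advertises. As a minimal sketch, this is how a user would query a VQA-tagged checkpoint through the stock transformers pipeline; the repository ID is a placeholder, and a custom architecture like this one may need dedicated loading code rather than the generic pipeline.

```python
from transformers import pipeline

# Placeholder repo ID; the card does not state the actual Hub path.
vqa = pipeline(
    task="visual-question-answering",  # matches the card's pipeline_tag
    model="your-org/LoViM_Vicuna",
)

result = vqa(
    image="https://example.com/sample.jpg",  # accepts a URL, local path, or PIL image
    question="What is shown in the image?",
)
print(result)  # e.g. [{"score": 0.87, "answer": "..."}]
```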