gordonhubackup committed
Commit 0fd8031 · Parent: 9848a57
Files changed (1)
README.md +3 −3
README.md CHANGED
@@ -21,13 +21,13 @@ It composes of an EVA-CLIP vision encoder, a Q-Former, a projection layer and an
 LoViM_Vicuna was trained in July 2023.
 
 **Paper or resources for more information:**
-https://project page
+https://gordonhu608.github.io/lovim/
 
 **License:**
 BSD 3-Clause License
 
 **Where to send questions or comments about the model:**
-https://github.com/
+https://github.com/mlpc-ucsd/LoViM
 
 ## Intended use
 **Primary intended uses:**
@@ -46,4 +46,4 @@ For zero-shot evaluation on general image task, we selected Nocaps, Flickr30K, V
 
 For zero-shot evaluation on text-rich image OCR task, we selected ST-VQA, OCR-VQA, Text-VQA, and Doc-VQA.
 
-More detials are in our github, https://github.com/
+More details are on our GitHub: https://github.com/mlpc-ucsd/LoViM
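
The hunk context above quotes the model card's architecture description: an EVA-CLIP vision encoder, a Q-Former, a projection layer, and an LLM. As a rough illustration of how such a pipeline is typically wired, the sketch below has learnable query tokens cross-attend to frozen image features, with the result projected into the LLM's embedding space. Every class name, dimension, and the single-attention-layer stand-in for the Q-Former are assumptions for illustration, not LoViM's actual implementation.

```python
import torch
import torch.nn as nn

class LoViMSketch(nn.Module):
    """Hypothetical sketch of the vision-to-LLM bridge described in the card."""

    def __init__(self, vision_dim=1408, qformer_dim=768, llm_dim=4096, num_queries=32):
        super().__init__()
        # Learnable query tokens that read out image information (Q-Former role).
        self.query_tokens = nn.Parameter(torch.zeros(1, num_queries, qformer_dim))
        # A single cross-attention layer standing in for the full Q-Former stack.
        self.qformer = nn.MultiheadAttention(
            embed_dim=qformer_dim, num_heads=8,
            kdim=vision_dim, vdim=vision_dim, batch_first=True,
        )
        # Projection layer mapping Q-Former outputs into the LLM embedding space.
        self.projection = nn.Linear(qformer_dim, llm_dim)

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, vision_dim), e.g. EVA-CLIP output.
        queries = self.query_tokens.expand(patch_features.size(0), -1, -1)
        fused, _ = self.qformer(queries, patch_features, patch_features)
        # The projected tokens would be prepended to the LLM's text embeddings.
        return self.projection(fused)

patches = torch.randn(2, 257, 1408)   # placeholder for vision-encoder features
print(LoViMSketch()(patches).shape)   # torch.Size([2, 32, 4096])
```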