qingshan777 committed on
Commit 84b55d0
1 Parent(s): 4f407e9

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -34,21 +34,21 @@ We propose a MLLM based on Inner-Adaptor Architecture (IAA). IAA demonstrates th
 ### Main Results on General Multimodal Benchmarks.

 <p align="center">
- <img src="https://github.com/360CVGroup/Inner-Adaptor-Architecture/blob/main/iaa/mmresult.png" width=90%/>
+ <img src="mmresult.png" width=90%/>
 </p>

 ### Results on Visual Grounding Benchmarks.
 <!-- grounding_re -->

 <p align="center">
- <img src="https://github.com/360CVGroup/Inner-Adaptor-Architecture/blob/main/iaa/grounding_re.png" width=90%/>
+ <img src="grounding_re.png" width=90%/>
 </p>

 ### Comparison on text-only question answering.
 <!-- grounding_re -->

 <p align="center">
- <img src="https://github.com/360CVGroup/Inner-Adaptor-Architecture/blob/main/iaa/NLPresult.png" width=90%/>
+ <img src="NLPresult.png" width=90%/>
 </p>

 ## Quick Start 🤗