wzk1015 committed on
Commit 2fbe186 (verified) · 1 Parent(s): 1f1c12d

Update README.md

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -19,7 +19,7 @@ tags:
 
 This repository contains the instruction-tuned Mono-InternVL-2B model, which has 1.8B activated parameters (3B in total). It is built upon [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b).
 
-Please refer to our [**paper**](https://huggingface.co/papers/2410.08202), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for introduction and usage.
+Please refer to our [**paper (v1)**](https://huggingface.co/papers/2410.08202), [**paper (v1.5)**](https://arxiv.org/abs/2507.12566), [**project page**](https://internvl.github.io/blog/2024-10-10-Mono-InternVL/) and [**GitHub repository**](https://github.com/OpenGVLab/mono-internvl) for introduction and usage.
 
 
 
@@ -28,11 +28,19 @@ Please refer to our [**paper**](https://huggingface.co/papers/2410.08202), [**pr
 If you find this project useful in your research, please consider citing:
 
 ```BibTeX
-@article{luo2024mono,
+@article{mono_internvl_v1,
   title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
   author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
   journal={arXiv preprint arXiv:2410.08202},
   year={2024}
 }
+
+
+@article{mono_internvl_v1.5,
+  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
+  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
+  journal={arXiv preprint arXiv:2507.12566},
+  year={2025}
+}
 ```
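
For context on the model this README describes, here is a minimal loading sketch, not the authors' official snippet: it assumes the Hugging Face repo id `OpenGVLab/Mono-InternVL-2B` and the `trust_remote_code` loading path with an InternVL-style `chat()` interface; the GitHub repository linked above carries the full example, including image preprocessing.

```python
# Minimal loading sketch, NOT the authors' official example.
# Assumptions: repo id "OpenGVLab/Mono-InternVL-2B"; InternVL-style remote
# code exposing model.chat(); see the linked GitHub repo for the complete
# example with image preprocessing and generation settings.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/Mono-InternVL-2B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # bf16: 1.8B activated (3B total) params fit on a single modern GPU
    trust_remote_code=True,
).eval().cuda()

# Text-only round trip; InternVL-style chat() typically accepts
# pixel_values=None for pure-text turns (check the model card for the
# exact signature and for multimodal inputs).
response = model.chat(tokenizer, None, "Hello, who are you?",
                      dict(max_new_tokens=64, do_sample=False))
print(response)
```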