Yirany committed
Commit da54884
1 Parent(s): 4520826

Update README.md

Files changed (1)
  1. README.md +10 -33
README.md CHANGED
@@ -179,44 +179,21 @@ Please see the info about MiniCPM-V 2.0 [here](https://huggingface.co/openbmb/Mi
  * As an LLM, MiniCPM-Llama3-V 2.5 generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by MiniCPM-Llama3-V 2.5 does not represent the views and positions of the model developers.
  * We will not be liable for any problems arising from the use of the MiniCPM-V open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.

- ## Other Multimodal Projects from Our Team
+ ## Key Techniques and Other Multimodal Projects
+
+ 👏 Welcome to explore key techniques of MiniCPM-V 2.6 and other multimodal projects of our team:

  [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)

  ## Citation

- If you find our work helpful, please consider citing the following papers
+ If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!

  ```bib
- @article{yu2023rlhf,
- title={Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback},
- author={Yu, Tianyu and Yao, Yuan and Zhang, Haoye and He, Taiwen and Han, Yifeng and Cui, Ganqu and Hu, Jinyi and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong and others},
- journal={arXiv preprint arXiv:2312.00849},
- year={2023}
- }
- @article{viscpm,
- title={Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages},
- author={Jinyi Hu and Yuan Yao and Chongyi Wang and Shan Wang and Yinxu Pan and Qianyu Chen and Tianyu Yu and Hanghao Wu and Yue Zhao and Haoye Zhang and Xu Han and Yankai Lin and Jiao Xue and Dahai Li and Zhiyuan Liu and Maosong Sun},
- journal={arXiv preprint arXiv:2308.12038},
- year={2023}
- }
- @article{xu2024llava-uhd,
- title={{LLaVA-UHD}: an LMM Perceiving Any Aspect Ratio and High-Resolution Images},
- author={Xu, Ruyi and Yao, Yuan and Guo, Zonghao and Cui, Junbo and Ni, Zanlin and Ge, Chunjiang and Chua, Tat-Seng and Liu, Zhiyuan and Huang, Gao},
- journal={arXiv preprint arXiv:2403.11703},
- year={2024}
+ @article{yao2024minicpmv,
+ title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
+ author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and Chen, Qianyu and Zhou, Huarong and Zou, Zhensheng and Zhang, Haoye and Hu, Shengding and Zheng, Zhi and Zhou, Jie and Cai, Jie and Han, Xu and Zeng, Guoyang and Li, Dahai and Liu, Zhiyuan and Sun, Maosong},
+ journal={arXiv preprint arXiv:2408.01800},
+ year={2024},
  }
- @article{yu2024rlaifv,
- title={RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness},
- author={Yu, Tianyu and Zhang, Haoye and Yao, Yuan and Dang, Yunkai and Chen, Da and Lu, Xiaoman and Cui, Ganqu and He, Taiwen and Liu, Zhiyuan and Chua, Tat-Seng and Sun, Maosong},
- journal={arXiv preprint arXiv:2405.17220},
- year={2024},
- }
- @article{yao2024minicpmvgpt4vlevelmllm,
- title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
- author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and Chen, Qianyu and Zhou, Huarong and Zou, Zhensheng and Zhang, Haoye and Hu, Shengding and Zheng, Zhi and Zhou, Jie and Cai, Jie and Han, Xu and Zeng, Guoyang and Li, Dahai and Liu, Zhiyuan and Sun, Maosong},
- journal={arXiv preprint arXiv:2408.01800},
- year={2024},
- url={https://arxiv.org/abs/2408.01800},
- }
- ```
+ ```