Update README.md
README.md
@@ -82,4 +82,17 @@ We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA be
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.

## About us

This project is maintained by the [MILVLG](https://github.com/MILVLG) group at Hangzhou Dianzi University (HDU), led by Prof. Zhou Yu and Jun Yu, and is mainly developed by Zhenwei Shao and Xuecheng Ouyang. We hope our model may serve as a strong baseline to inspire future research on MSLMs, as well as their derivative applications on mobile devices and robots.

## Citation

If you use our model or refer to our work in your studies, please cite:

```bibtex
@misc{imp2024,
  author = {Shao, Zhenwei and Yu, Zhou and Ouyang, Xuecheng and Yu, Jun},
  title = {Imp-v1: An empirical study of multimodal small language models},
  year = {2024},
  url = {https://huggingface.co/MILVLG/imp-v1-3b}
}
```