Update README.md
README.md CHANGED
# π IMP
The π IMP project aims to provide a family of strong multimodal `small` language models (MSLMs). Our `imp-v0-3b` is a strong MSLM with only **3B** parameters, which is built upon a small yet powerful SLM, [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B), and a powerful visual encoder, [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on the [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) training set.
As shown in the table below, `imp-v0-3b` significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
We release our model weights and provide an example below to run our model. A detailed technical report and the corresponding training/evaluation code will be released soon on our [GitHub repo](https://github.com/MILVLG/imp). We will continually improve our model and release new versions to further improve performance :)
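To give a rough sense of what running the model looks like, here is a minimal, illustrative sketch using the Hugging Face `transformers` remote-code path. The repository id `MILVLG/imp-v0-3b`, the `image_preprocess` helper, and the `images=` argument to `generate` are assumptions for illustration rather than the model's confirmed interface; please refer to the official example and the GitHub repo for the exact usage.

```python
# Minimal, illustrative sketch (not the official example).
# Assumptions: the checkpoint id, the `image_preprocess` helper, and the
# `images=` kwarg to `generate` are placeholders; the real interface is
# defined by the model's remote code.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MILVLG/imp-v0-3b"  # assumed repo id for illustration

# trust_remote_code=True lets transformers load the model's custom code.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# A LLaVA-style prompt with an <image> placeholder token.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: <image>\nWhat is in this picture? ASSISTANT:"
)
image = Image.open("example.jpg")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# Hypothetical helper exposed by the remote code to turn a PIL image into a tensor.
image_tensor = model.image_preprocess(image)

output_ids = model.generate(
    input_ids,
    images=image_tensor,  # assumed kwarg for passing the visual input
    max_new_tokens=100,
    use_cache=True,
)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```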
We perform evaluation on 8 commonly-used benchmarks to validate the effectiveness of our model.

| [LLaVA-Phi](https://arxiv.org/pdf/2401.02330.pdf) | 3B | 71.40 | - | 35.90 | 68.40 | 48.60 | 85.00 | 1335.1 | 59.80 | 28.9 |
| [MobileVLM](https://huggingface.co/mtgv/MobileVLM-3B) | 3B | - | 59.00 | - | 61.00 | 47.50 | 84.90 | 1288.9 | 59.60 | - |
| [MC-LLaVA-3b](https://huggingface.co/visheratin/MC-LLaVA-3b) | 3B | 64.24 | 49.6 | 24.88 | - | 38.59 | 80.59 | - | - | - |
| **Imp-v0 (ours)** | 3B | **79.45** | 58.55 | **50.09** | **69.96** | **59.38** | **88.02** | 1434 | **66.49** | **33.1** |
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.
## About us
This project is maintained by the [MILVLG](https://github.com/MILVLG) group at Hangzhou Dianzi University (HDU), led by Prof. Zhou Yu and Jun Yu, and mainly developed by Zhenwei Shao and Xuecheng Ouyang. We hope our model may serve as a strong baseline to inspire future research on MSLMs and their derivative applications on mobile devices and robots.