Commit 1496057 (1 parent: 5d76321)
Oyoy1235 committed

update readme

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
 
 ## Introduction
 
-The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-3B-Phi2` is a strong MSLM with only **3B** parameters, which is build upon a small yet powerful SLM [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B) and a powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on 1M mixed dataset.
+The Imp project aims to provide a family of highly capable yet lightweight LMMs. Our `Imp-v1.5-3B-Phi2` is a strong lightweight LMM with only **3B** parameters, built upon [Phi-2](https://huggingface.co/microsoft/phi-2) (2.7B) and the powerful visual encoder [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) (0.4B), and trained on a 1M mixed dataset.
 
 As shown in the table below, `Imp-v1.5-3B-Phi2` significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
 
@@ -60,7 +60,7 @@ print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True
 ```
 
 ## Model evaluation
-We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing MSLMs of similar model sizes.
+We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing lightweight LMMs of similar model sizes.
 
 | Models | Size | VQAv2 | GQA | SQA(IMG) | TextVQA | POPE | MME(P) | MMB | MMB-CN | MM-Vet |
 |:------:|:----:|:-----:|:---:|:--------:|:-------:|:----:|:------:|:---:|:------:|:------:|
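
For reference, the decoding call quoted in the second hunk header slices the prompt tokens off `output_ids` before decoding, so only the newly generated answer is printed. Below is a minimal sketch of that pattern, assuming the standard `transformers` trust-remote-code flow; the repo id, prompt template, and `image_preprocess` helper are illustrative assumptions, not taken from this commit, so consult the model card for the exact interface.

```python
# Minimal usage sketch, assuming the standard transformers trust_remote_code flow.
# The repo id, prompt format, and image_preprocess helper below are assumptions
# for illustration only; check the model card for the actual interface.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MILVLG/Imp-v1.5-3B-Phi2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = ("A chat between a curious user and an artificial intelligence assistant. "
          "USER: <image>\nWhat is in this picture? ASSISTANT:")
image = Image.open("example.jpg")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
image_tensor = model.image_preprocess(image)  # assumed helper exposed by the repo's custom code

output_ids = model.generate(
    input_ids,
    images=image_tensor,
    max_new_tokens=100,
    use_cache=True,
)[0]

# Decode only the tokens generated after the prompt, as in the line quoted in the hunk header.
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```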