Tags: Text Generation · Transformers · Safetensors · imp · custom_code
Oyoy1235 committed
Commit 5843de5
Parents (2): 090ee1c d6b23f9

Merge branch 'main' of https://huggingface.co/MILVLG/Imp-v0-3b into main

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -49,7 +49,7 @@ print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True
  ```

  ## Model evaluation
- We perform evaluation on 9 commonly-used benchmarks to validate the effectiveness of our model, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks.
+ We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing MSLMs with similar sizes.

  | Models | Size | VQAv2 | GQA |VisWiz | SQA (IMG) | TextVQA | POPE | MME | MMB |MM-Vet|
  |:--------:|:-----:|:----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|
@@ -65,4 +65,4 @@ We perform evaluation on 9 commonly-used benchmarks to validate the effectivenes
  This project is licensed under the Apache License 2.0 - see the [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) file for details.

  ## About us
- This project is maintained by the [MILVLG](https://github.com/MILVLG)@Hangzhou Dianzi University (HDU) led by Prof. Zhou Yu and Jun Yu, and mainly developed by Zhenwei Shao and Xuecheng Ouyang. We hope our model may serve as a strong baseline to inspire future research on MSLM, as well as its derivative applications on mobile devices and robots.
+ This project is maintained by the [MILVLG](https://github.com/MILVLG)@Hangzhou Dianzi University (HDU) led by Prof. Zhou Yu and Jun Yu, and is mainly developed by Zhenwei Shao and Xuecheng Ouyang. We hope our model may serve as a strong baseline to inspire future research on MSLM, as well as its derivative applications on mobile devices and robots.
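For context, the first hunk header above ends with the README's generation-decoding line. Below is a minimal, hypothetical sketch of how that line is typically used with this repository: the model id and `trust_remote_code=True` follow from this page (the custom_code tag), but the README's actual example is multimodal and also passes an image, so treat everything except the final `print` line as an assumption rather than the repository's verbatim usage snippet.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The custom_code tag on this repo implies custom modeling code,
# hence trust_remote_code=True when loading (assumption from the page tags).
model_id = "MILVLG/Imp-v0-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Hypothetical text-only prompt; the real README example also supplies an image.
input_ids = tokenizer("Describe the image.", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=100)[0]

# The line referenced by the hunk header: slice off the prompt tokens so
# only the newly generated continuation is decoded and printed.
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True))
```

Slicing with `input_ids.shape[1]` works because `generate` returns the prompt tokens followed by the new tokens, so everything past the prompt length is the model's answer.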