Update README.md
README.md
@@ -51,7 +51,7 @@ print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True
 ## Model evaluation
 We conduct evaluation on 9 commonly-used benchmarks, including 5 academic VQA benchmarks and 4 popular MLLM benchmarks, to compare our Imp model with LLaVA (7B) and existing MSLMs with similar sizes.
 
-| Models | Size | VQAv2 | GQA |
+| Models | Size | VQAv2 | GQA |VizWiz | SQA (IMG) | TextVQA | POPE | MME | MMB |MM-Vet|
 |:--------:|:-----:|:----:|:----:|:-------------:|:--------:|:-----:|:----:|:-------:|:-------:|:-------:|
 | [LLaVA-v1.5-lora](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 7B |79.10 | **63.00** |47.80 | 68.40 |58.20| 86.40 | **1476.9** | 66.10 |30.2|
 | [TinyGPT-V](https://huggingface.co/Tyrannosaurus/TinyGPT-V) | 3B | - | 33.60 | 24.80 | - | - | -| - | - |-|