bingwork committed
Commit 238e6ac · 1 Parent(s): 59d9108
Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -7,11 +7,25 @@ pipeline_tag: image-to-text

MMAlaya2 fine-tunes 20 LoRA modules based on the InternVL-Chat-V1-5 model. These fine-tuned LoRA modules are then merged with the InternVL-Chat-V1-5 model using the PEFT model merging method, TIES.
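
To make the adapter setup concrete, here is a minimal sketch of attaching a single LoRA module to InternVL-Chat-V1-5 with PEFT. The `target_modules` list and the LoRA hyperparameters are illustrative assumptions, not the exact configuration used to train MMAlaya2's 20 adapters.

```python
# Minimal sketch: attach one LoRA adapter to InternVL-Chat-V1-5's language model with PEFT.
# The target_modules and hyperparameters are assumptions for illustration only.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    # Assumed projection names in the InternLM2 language tower.
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],
    task_type="CAUSAL_LM",
)

# One adapter is trained per MMBench category; this step is repeated for all 20 categories.
model = get_peft_model(base.language_model, lora_config)
model.print_trainable_parameters()
```

Each category-specific adapter is saved separately and later combined by the TIES merge described below.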
 
- You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/pull/399/files) (PR still in preparation).
+ You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py).
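
With the inference code now part of VLMEvalKit, the model can be driven through VLMEvalKit's generic VLM interface. The snippet below is a sketch: the registry key `MMAlaya2` and the demo image path are assumptions, so check `vlmeval/vlm/mmalaya.py` and `vlmeval/config.py` for the actual registered name.

```python
# Sketch of running MMAlaya2 through VLMEvalKit's generic VLM interface.
# The registry key "MMAlaya2" and the demo image are assumptions; the
# authoritative name is whatever vlmeval/config.py registers for mmalaya.py.
from vlmeval.config import supported_VLM

model = supported_VLM["MMAlaya2"]()  # instantiate the registered wrapper
response = model.generate(["demo.jpg", "What scene is shown in this image?"])
print(response)
```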
 
The [MMBench](https://mmbench.opencompass.org.cn/) benchmark contains 20 categories in the `mmbench_dev_cn_20231003.tsv` dataset. For each category, we first use CoT (Chain of Thought) consistency with the InternVL-Chat-V1-5 model to prepare the training dataset. For specific categories like nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze the bad cases made by the InternVL-Chat-V1-5 model. We then prepare images and QA text from online sources to address these issues.
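
One way to realize the CoT (Chain of Thought) consistency check described above is a self-consistency filter: sample several chain-of-thought answers from InternVL-Chat-V1-5 and keep a candidate training sample only when the answers agree with each other and with the reference label. The sketch below illustrates that idea; the helper `ask_internvl`, the sample count, and the agreement threshold are hypothetical and not the actual data pipeline.

```python
# Illustrative sketch of CoT self-consistency filtering for category training data.
# `ask_internvl` is a hypothetical helper that queries InternVL-Chat-V1-5 with a CoT
# prompt and returns the final option letter; 3-of-4 agreement is an assumed threshold.
from collections import Counter

def cot_consistent(image_path: str, question: str, reference: str,
                   ask_internvl, n_samples: int = 4, min_agree: int = 3) -> bool:
    answers = [ask_internvl(image_path, question, temperature=0.7)
               for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    # Keep the sample only if the sampled answers are self-consistent
    # and the majority answer matches the reference label.
    return count >= min_agree and top_answer == reference
```
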
- After fine-tuning the 20 LoRAs, they are merged with the InternVL-Chat-V1-5 model using the TIES method. The average score on the mmbench_test_cn_20231003.tsv benchmark reached 82.1, which is higher than the 80.7 score of the InternVL-Chat-V1-5 model. This places it in the top 4, matching the performance of GPT-4. We found this result noteworthy. This score was obtained from the uploaded inference file `submission.xlsx`. The online leaderboard average score is still being prepared. As a result, we are sharing this model publicly.
 
7
 
8
  MMAlaya2 fine-tunes 20 LoRA modules based on the InternVL-Chat-V1-5 model. These fine-tuned LoRA modules are then merged with the InternVL-Chat-V1-5 model using the PEFT model merging method, TIES.
9
 
10
+ You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py).
11
 
12
  The [MMBench](https://mmbench.opencompass.org.cn/) benchmark contains 20 categories in the `mmbench_dev_cn_20231003.tsv` dataset. For each category, we first use CoT (Chain of Thought) consistency with the InternVL-Chat-V1-5 model to prepare the training dataset. For specific categories like nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze the bad cases made by the InternVL-Chat-V1-5 model. We then prepare images and QA text from online sources to address these issues.
13
 
14
+ After fine-tuning the 20 LoRAs, they are merged with the InternVL-Chat-V1-5 model using the TIES method.
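
PEFT exposes TIES merging through `add_weighted_adapter`, which combines several LoRA adapters into one and lets you fold the result back into the base weights. The sketch below assumes the adapters target the language model and uses illustrative adapter paths, uniform weights, and `density=0.2`; the actual MMAlaya2 merge settings may differ.

```python
# Sketch: merge category-specific LoRA adapters with TIES using PEFT, then fold the
# merged adapter back into the base weights. Adapter paths, uniform weights, and
# density=0.2 are assumptions for illustration only.
import torch
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

adapters = ["nature_relation", "image_emotion", "image_scene"]  # ... up to 20 categories
model = PeftModel.from_pretrained(base.language_model, "lora/nature_relation",
                                  adapter_name="nature_relation")
for name in adapters[1:]:
    model.load_adapter(f"lora/{name}", adapter_name=name)

model.add_weighted_adapter(
    adapters=adapters,
    weights=[1.0] * len(adapters),
    adapter_name="ties_merged",
    combination_type="ties",
    density=0.2,  # fraction of weights kept per adapter before the sign election
)
model.set_adapter("ties_merged")
merged = model.merge_and_unload()  # bake the merged adapter into the base model
```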
+
+ Thank you to the OpenCompass MMBench team for updating the [leaderboard](https://mmbench.opencompass.org.cn/leaderboard) on August 29, 2024. We collected the ranks and scores from the leaderboard for reference; for example, "7/82.1" indicates a 7th-place finish with a score of 82.1 on that benchmark. We list GPT-4o (0513, detail-high) because it is the best-performing GPT-4o variant on MMBench Test (CN).
+
+ | Model | MMBench Test (CN) | MMBench v1.1 Test (CN) | CCBench dev | MMBench Test | MMBench v1.1 Test |
+ | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
+ | GPT-4o (0513, detail-high) | 4/82.1 | 5/81.5 | 7/71.2 | 4/83.4 | 5/83.0 |
+ | MMAlaya2 | 7/82.1 | 8/79.7 | 8/70.0 | 9/82.5 | 9/80.6 |
+ | InternVL-Chat-V1.5 | 14/80.7 | 15/79.1 | 9/69.8 | 11/82.3 | 10/80.3 |
+
+ The average score on MMBench Test (CN) reached 82.1, surpassing the InternVL-Chat-V1-5 model's 80.7 by 1.4 points and matching the score of the 4th-ranked GPT-4o (0513, detail-high). Scores on the other four benchmarks (MMBench v1.1 Test (CN), CCBench dev, MMBench Test, and MMBench v1.1 Test) also improve on InternVL-Chat-V1-5 by 0.2 to 0.6 points, bringing MMAlaya2 closer to GPT-4o's performance.
+
+ We found these results noteworthy, so we are sharing the model publicly.
+
 
  # License