MMAlaya2 fine-tunes 20 LoRA modules on top of the InternVL-Chat-V1-5 model and then merges them back into the base model using TIES, a model merging method available in PEFT.
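For reference, PEFT exposes TIES merging through `add_weighted_adapter`. The sketch below illustrates the general flow rather than our exact merging script; the adapter paths, adapter names, weights, and `density` value are assumptions.

```python
# Minimal sketch: merge several LoRA adapters with TIES via PEFT.
# Adapter paths/names and the density value are hypothetical placeholders.
from peft import PeftModel
from transformers import AutoModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5", trust_remote_code=True
)

# Load the first adapter, then attach the rest under distinct names.
adapter_paths = [f"loras/category_{i}" for i in range(20)]  # hypothetical paths
model = PeftModel.from_pretrained(base, adapter_paths[0], adapter_name="lora_0")
for i, path in enumerate(adapter_paths[1:], start=1):
    model.load_adapter(path, adapter_name=f"lora_{i}")

# TIES trims low-magnitude weight deltas (density), resolves sign
# conflicts across adapters, and averages what remains.
names = [f"lora_{i}" for i in range(20)]
model.add_weighted_adapter(
    adapters=names,
    weights=[1.0] * len(names),
    adapter_name="ties_merged",
    combination_type="ties",
    density=0.2,  # keep the top 20% of each adapter's deltas (assumed value)
)
model.set_adapter("ties_merged")

# Fold the active merged adapter into the base weights and save.
merged = model.merge_and_unload()
merged.save_pretrained("MMAlaya2")
```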
You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py#L8) (PR still in preparation).
The [MMBench](https://mmbench.opencompass.org.cn/) benchmark contains 20 categories in the `mmbench_dev_cn_20231003.tsv` dataset. For each category, we first build the training dataset using CoT (Chain of Thought) consistency with the InternVL-Chat-V1-5 model. For categories where the base model struggles, such as nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze its failure cases and collect additional images and QA text from online sources to address them.
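The exact CoT-consistency procedure is not spelled out above; a common reading is self-consistency filtering: sample several chain-of-thought responses per question and keep only examples whose sampled answers agree (and, here, match the label). The sketch below illustrates that reading; `sample_answer`, the sample count, and the agreement threshold are all hypothetical.

```python
# Sketch of CoT-consistency filtering for training-data preparation.
# The sampling count, agreement threshold, and label check are assumptions,
# not the exact recipe used for MMAlaya2.
from collections import Counter
from typing import Callable

def cot_consistent_examples(
    examples: list[dict],
    sample_answer: Callable[[str, str], str],  # (image, question) -> option letter
    n_samples: int = 5,
    min_agree: int = 4,
) -> list[dict]:
    """Keep examples whose sampled CoT answers agree and match the label."""
    kept = []
    for ex in examples:  # each ex: {"image": ..., "question": ..., "answer": ...}
        answers = [
            sample_answer(ex["image"], ex["question"]) for _ in range(n_samples)
        ]
        top, count = Counter(answers).most_common(1)[0]
        if count >= min_agree and top == ex["answer"]:
            kept.append(ex)
    return kept
```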
After fine-tuning, the 20 LoRAs are merged into InternVL-Chat-V1-5 using the TIES method. The merged model reaches an average score of 82.2 on the `mmbench_test_cn_20231003.tsv` benchmark, computed from the uploaded inference file `submission.xlsx`; the online leaderboard average score is still being prepared. We found this result noteworthy, so we are sharing the model publicly.
# License