bingwork committed
Commit 27b4305 · 1 Parent(s): c0abc3d
Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -7,7 +7,7 @@ pipeline_tag: image-to-text
 
 MMAlaya2 fine-tunes 20 LoRA modules based on the InternVL-Chat-V1-5 model. These fine-tuned LoRA modules are then merged with the InternVL-Chat-V1-5 model using the PEFT model merging method, TIES.
 
-You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/mmalaya.py#L8) (PR still in preparation).
+You can find the inference code [here](https://github.com/open-compass/VLMEvalKit/pull/399/files) (PR still in preparation).
 
 The [MMBench](https://mmbench.opencompass.org.cn/) benchmark contains 20 categories in the `mmbench_dev_cn_20231003.tsv` dataset. For each category, we first use CoT (Chain of Thought) consistency with the InternVL-Chat-V1-5 model to prepare the training dataset. For specific categories like nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze the bad cases made by the InternVL-Chat-V1-5 model. We then prepare images and QA text from online sources to address these issues.
 
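The TIES merge described in the README maps onto PEFT's `add_weighted_adapter` API. Below is a minimal sketch of that step, assuming one LoRA checkpoint per MMBench category; the adapter paths, the uniform weights, and `density=0.2` are illustrative placeholders, not the values actually used for MMAlaya2.

```python
# Minimal sketch of the merge the README describes: 20 LoRA adapters combined
# into InternVL-Chat-V1-5 via PEFT's TIES combination. Adapter paths, weights,
# and the density value are illustrative assumptions, not the released recipe.
import torch
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Hypothetical adapter checkpoints, one per MMBench category.
adapter_dirs = [f"./lora/category_{i:02d}" for i in range(20)]
adapter_names = [f"cat_{i:02d}" for i in range(20)]

# Attach the first adapter, then load the remaining 19 under distinct names.
model = PeftModel.from_pretrained(base, adapter_dirs[0], adapter_name=adapter_names[0])
for path, name in zip(adapter_dirs[1:], adapter_names[1:]):
    model.load_adapter(path, adapter_name=name)

# TIES: trim low-magnitude LoRA updates (keep a `density` fraction), resolve
# sign conflicts, then sum the surviving deltas into one merged adapter.
model.add_weighted_adapter(
    adapters=adapter_names,
    weights=[1.0] * len(adapter_names),
    adapter_name="ties_merged",
    combination_type="ties",
    density=0.2,  # assumed value; TIES requires a density to be set
)
model.set_adapter("ties_merged")

# Optionally fold the merged adapter into the base weights for deployment:
# merged_model = model.merge_and_unload()
```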
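The README does not spell out how "CoT consistency" is applied when building the training set. One plausible reading, sketched below purely as an assumption, is to sample several chain-of-thought answers from InternVL-Chat-V1-5 per MMBench question and keep only items whose sampled answers agree, with the majority answer also matching the gold label. The `generate_cot_answer` helper, the sample count, and the agreement threshold are hypothetical.

```python
# Hypothetical CoT-consistency filter for building the per-category training
# set; this is an assumed reading of the README, not the released pipeline.
from collections import Counter


def generate_cot_answer(model, image, question, options, temperature=0.7):
    """Hypothetical wrapper: prompt InternVL-Chat-V1-5 to reason step by step
    and return only its final option letter ('A', 'B', 'C', or 'D')."""
    raise NotImplementedError("depends on the model's chat interface")


def cot_consistent_samples(model, records, n_samples=5, min_agreement=0.8):
    """Keep records whose repeated CoT samples converge on the gold answer."""
    kept = []
    for rec in records:  # rec: {"image", "question", "options", "answer"}
        answers = [
            generate_cot_answer(model, rec["image"], rec["question"], rec["options"])
            for _ in range(n_samples)
        ]
        majority, count = Counter(answers).most_common(1)[0]
        # Require both self-consistency and correctness of the majority vote.
        if majority == rec["answer"] and count / n_samples >= min_agreement:
            kept.append(rec)
    return kept
```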