---
license: apache-2.0
pipeline_tag: image-to-text
---

# MMAlaya2
MMAlaya2 fine-tunes 20 LoRA modules on top of the InternVL-Chat-V1-5 model. The fine-tuned LoRA modules are then merged back into InternVL-Chat-V1-5 using TIES, a PEFT model-merging method.
You can find the inference code here (PR still in preparation).
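As a rough illustration of the merging step described above, the sketch below combines several LoRA adapters into the base model with PEFT's TIES combination. The adapter directory names, the three-adapter subset, and the `density` value are illustrative assumptions, not the exact configuration used for MMAlaya2.

```python
# Minimal sketch: merging multiple LoRA adapters into InternVL-Chat-V1-5 with
# PEFT's TIES combination. Paths, adapter names, and hyperparameters are assumed.
import torch
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Hypothetical subset of the 20 per-category LoRAs.
adapter_names = ["image_scene", "image_emotion", "action_recognition"]
model = PeftModel.from_pretrained(base, "loras/image_scene", adapter_name=adapter_names[0])
for name in adapter_names[1:]:
    model.load_adapter(f"loras/{name}", adapter_name=name)

# TIES: trim low-magnitude deltas (density) and resolve sign conflicts before
# summing the adapters into a single merged adapter.
model.add_weighted_adapter(
    adapters=adapter_names,
    weights=[1.0] * len(adapter_names),
    adapter_name="merged_ties",
    combination_type="ties",
    density=0.2,  # fraction of parameters kept per adapter (assumed value)
)
model.set_adapter("merged_ties")

# Optionally fold the merged adapter into the base weights for deployment.
merged = model.merge_and_unload()
```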
The MMBench benchmark contains 20 categories in the mmbench_dev_cn_20231003.tsv dataset. For each category, we first use CoT (Chain-of-Thought) consistency with the InternVL-Chat-V1-5 model to prepare the training data. For specific categories such as nature_relation, image_emotion, image_scene, action_recognition, and image_style, we analyze the InternVL-Chat-V1-5 model's failure cases and collect additional images and QA text from online sources to address them.
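The following is a hedged sketch of the CoT-consistency idea: sample several chain-of-thought responses per question and keep only items whose final answers agree. The `generate_fn` callable, the prompt suffix, and the dictionary fields stand in for whatever interface the InternVL-Chat-V1-5 inference code exposes; they are assumptions for illustration only.

```python
import re
from collections import Counter

def extract_option(text):
    """Pull the last A/B/C/D option letter mentioned in a model response."""
    matches = re.findall(r"\b([ABCD])\b", text)
    return matches[-1] if matches else None

def cot_consistent_samples(generate_fn, questions, n_samples=5, threshold=0.8):
    """Keep (question, answer) pairs whose sampled CoT answers mostly agree.

    generate_fn(image, prompt) is a placeholder for the model call; it should
    return the model's free-form chain-of-thought response as a string.
    """
    kept = []
    for q in questions:
        prompt = q["prompt"] + "\nLet's think step by step."
        answers = [extract_option(generate_fn(q["image"], prompt)) for _ in range(n_samples)]
        answers = [a for a in answers if a is not None]
        if not answers:
            continue
        option, count = Counter(answers).most_common(1)[0]
        # Only keep questions where the majority answer is sufficiently consistent.
        if count / n_samples >= threshold:
            kept.append({"image": q["image"], "prompt": q["prompt"], "answer": option})
    return kept
```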
After fine-tuning the 20 LoRAs, we merge them into the InternVL-Chat-V1-5 model with TIES. The merged model reaches an average score of 82.1 on the mmbench_test_cn_20231003.tsv benchmark, higher than the 80.7 of InternVL-Chat-V1-5, placing it in the top 4 and matching the performance of GPT-4. This score was computed from the uploaded inference file submission.xlsx; the average score on the online leaderboard is still being prepared. In the meantime, we are sharing the model publicly.
## License
This project is released under the MIT license, aligning with the InternVL-Chat-V1-5 model's license. However, InternLM2 is licensed under the Apache-2.0 license.
## Citation
If you find this project useful in your research, please consider citing:
```bibtex
@misc{datacanvas2024mmalaya2,
    author = {DataCanvas Ltd.},
    title = {MMAlaya2},
    year = {2024},
    howpublished = {\url{https://huggingface.co/DataCanvas/MMAlaya2}},
}
```