Fine-tuning on my own dataset


Hello,
thank you for making this great approach publicly available!
I have a question about fine-tuning M3D-LaMed-Phi-3-4B on my own image-report and VQA dataset. How do I specify in the scripts pretrain.sh and finetune_lora.sh that I want to fine-tune M3D-LaMed-Phi-3-4B (rather than starting from Phi-3-mini-4k-instruct)? Can I just set --model_name_or_path /path/to/M3D-LaMed-Phi-3-4B in those scripts?
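For concreteness, this is the change I have in mind. It is only a minimal sketch, assuming the scripts launch a Python entrypoint; train.py below is a hypothetical stand-in, and every flag other than --model_name_or_path is omitted and would stay exactly as in the repo's pretrain.sh / finetune_lora.sh:

```bash
#!/usr/bin/env bash
# Hypothetical sketch, not the repo's actual script: "train.py" stands in
# for whatever entrypoint pretrain.sh / finetune_lora.sh actually launch,
# and all other flags from those scripts are omitted here.

# Point --model_name_or_path at the released M3D-LaMed checkpoint instead
# of the base Phi-3-mini-4k-instruct model:
python train.py \
    --model_name_or_path /path/to/M3D-LaMed-Phi-3-4B
```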

My second question: if --pretrain_mm_mlp_adapter is not specified in finetune_lora.sh but --model_name_or_path /path/to/M3D-LaMed-Phi-3-4B is, will the existing mm_mlp_adapter weights of M3D-LaMed-Phi-3-4B then be used?
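Again as a sketch under the same assumptions (hypothetical train.py entrypoint, all other flags omitted), this is the setup I am asking about:

```bash
#!/usr/bin/env bash
# Sketch for the second question: --pretrain_mm_mlp_adapter is deliberately
# NOT passed. My assumption (which I would like to confirm) is that the
# mm_mlp_adapter weights already bundled in the M3D-LaMed-Phi-3-4B
# checkpoint are then loaded.
python train.py \
    --model_name_or_path /path/to/M3D-LaMed-Phi-3-4B
    # no --pretrain_mm_mlp_adapter flag here
```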

Best regards, Moritz
