Fine-tuned idefics for inference?
I followed the fine-tuning tutorial and pushed the model to the Hugging Face Hub.
How can I use this fine-tuned version for inference?
Hi @baldesco,
assuming you are following the tutorial, then you saved some LoRA parameters.
You can load the trained model together with its LoRA parameters simply by calling `AutoModel.from_pretrained(PATH_TO_YOUR_MODEL_ON_THE_HUB)`.
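To make the loading step concrete, here is a minimal sketch. It assumes the tutorial pushed only the LoRA adapter weights to the Hub and loads them on top of the base checkpoint via `peft`; `user/idefics2-8b-qlora` is a placeholder for your own repo id.

```python
def load_finetuned(
    base_id: str = "HuggingFaceM4/idefics2-8b",
    adapter_id: str = "user/idefics2-8b-qlora",  # placeholder: your Hub repo
):
    """Load the base IDEFICS2 model, then attach the fine-tuned LoRA adapter.

    Sketch under the assumption that the Hub repo contains adapter weights
    only (the usual result of pushing a QLoRA fine-tune).
    """
    # Imports kept local so the helper can be defined without GPU/network.
    from peft import PeftModel
    from transformers import AutoModelForVision2Seq

    base = AutoModelForVision2Seq.from_pretrained(base_id)
    return PeftModel.from_pretrained(base, adapter_id)


if __name__ == "__main__":
    model = load_finetuned()
    model.eval()
```

If your repo instead contains the full merged weights, passing its id straight to `from_pretrained` (as in the answer above) is enough.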
Thank you @VictorSanh for the answer.
Yes, I followed the tutorial, so I am using QLoRA.
In the sample code for using idefics2 for inference, this is how the model is loaded:

```python
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
).to(DEVICE)
```
So I have 2 questions:
- For the `processor`, should I leave the original idefics model, or should I also point to my weights?
- For the `model`, you mention `AutoModel.from_pretrained`, but the snippet above uses `AutoModelForVision2Seq.from_pretrained`. Are these two the same, or when should I use each?
Thank you
1/ Either way; they are equivalent.
2/ `AutoModelForVision2Seq` is indeed safer. If you want to be really safe (i.e. avoid any mismatch in the auto-mapping), I would even recommend `Idefics2ForConditionalGeneration.from_pretrained`.
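Putting the recommendation above together, a hedged end-to-end inference sketch might look like this. `user/idefics2-8b-qlora` is a placeholder for your Hub repo, `example.jpg` is a stand-in image path, and the dtype/device choices are assumptions to adjust for your hardware.

```python
def build_messages(question: str) -> list:
    """One image plus one text turn, in the chat format the IDEFICS2
    processor expects for apply_chat_template."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]


if __name__ == "__main__":
    import torch
    from PIL import Image
    from transformers import AutoProcessor, Idefics2ForConditionalGeneration

    MODEL_ID = "user/idefics2-8b-qlora"  # placeholder: your fine-tuned repo
    DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

    # Per answer 1/, loading the processor from either the base model or
    # your repo is equivalent, assuming the tutorial pushed the processor too.
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = Idefics2ForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to(DEVICE)

    image = Image.open("example.jpg")  # stand-in image path
    prompt = processor.apply_chat_template(
        build_messages("What is in this image?"), add_generation_prompt=True
    )
    inputs = processor(text=prompt, images=[image], return_tensors="pt").to(DEVICE)
    generated = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

The prompt-building helper is pure Python, so it can be reused regardless of how the model itself was loaded.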