Fine-tuning for image classification using LoRA and 🤗 PEFT
We provide a notebook (image_classification_peft_lora.ipynb) showing how to use LoRA from 🤗 PEFT to fine-tune an image classification model while training only 0.7% of the model's original trainable parameters.
LoRA adds low-rank "update matrices" to certain blocks in the underlying model (in this case the attention blocks) and ONLY trains those matrices during fine-tuning. During inference, these update matrices are merged with the original model parameters. For more details, check out the original LoRA paper.
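The idea above can be sketched numerically. This is a minimal illustration of one LoRA-adapted weight matrix, not the PEFT implementation: the dimensions and rank are hypothetical, and the parameter fraction shown depends on them (it will not match the notebook's 0.7% figure, which is computed over the whole model).

```python
import numpy as np

# Hypothetical dimensions for one attention projection matrix.
d_in, d_out, r = 768, 768, 8  # r is the LoRA rank, with r << d

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Low-rank "update matrices": only these are trained.
A = rng.standard_normal((r, d_in)) * 0.01  # down-projection
B = np.zeros((d_out, r))                   # up-projection, zero-initialized

# During fine-tuning the layer computes W @ x + B @ (A @ x);
# at inference the update is merged into the base weight:
W_merged = W + B @ A

# Fraction of this layer's parameters that LoRA actually trains.
lora_params = A.size + B.size
frac = lora_params / W.size
print(f"LoRA trains {frac:.2%} of this layer's parameters")
```

Because B starts at zero, the merged weight initially equals the pretrained weight, so fine-tuning begins from the original model's behavior; the zero-initialization of B matching a random A is the standard LoRA initialization scheme.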