# Using Parameter Efficient Fine-Tuning on Llama 3 with 8B Parameters on One Intel® Gaudi® 2 AI Accelerator
This example fine-tunes the Llama 3 8B model using Parameter Efficient Fine-Tuning (PEFT) and then runs inference on a text prompt. It uses the Llama3-8B model with two task examples from the Optimum Habana library, available through the Hugging Face model repository. The Optimum Habana library is optimized for deep learning training and inference on first-gen Gaudi and Gaudi 2 and offers tasks such as text generation, language modeling, question answering, and more. For all the examples and models, please refer to the Optimum Habana GitHub.
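As a setup sketch, the library and its task examples can typically be obtained as follows (the package and repository names are as published by Hugging Face; the exact library version you need depends on your SynapseAI/Gaudi software release, so treat the commands as illustrative):

```shell
# Install the Optimum Habana library (match the version to your
# SynapseAI / Gaudi driver release)
pip install optimum-habana

# Clone the repository to get the task examples, including language modeling
git clone https://github.com/huggingface/optimum-habana.git
cd optimum-habana/examples/language-modeling
pip install -r requirements.txt
```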
Specifically, it fine-tunes the Llama3-8B model with PEFT on the timdettmers/openassistant-guanaco dataset using the language-modeling task in Optimum Habana.
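For intuition, PEFT methods such as LoRA (the adapter approach supported by the Optimum Habana language-modeling example) freeze the base model weights and train only a small low-rank update. The sketch below illustrates the underlying math in plain Python with toy matrix sizes and made-up values, not actual model weights:

```python
# LoRA idea behind PEFT: instead of updating a full weight matrix W (d x d),
# train two small matrices A (r x d) and B (d x r), with r << d, and use
# W_eff = W + (alpha / r) * (B @ A) at inference time.

def matmul(X, Y):
    # Naive matrix multiply, sufficient for these tiny illustrative matrices
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update: only r * 2d parameters are trained
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: d = 2, rank r = 1 (values are hypothetical)
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
A = [[1.0, 2.0]]               # r x d trainable matrix
B = [[0.5], [0.25]]            # d x r trainable matrix
W_eff = lora_effective_weight(W, A, B, alpha=1.0, r=1)
print(W_eff)  # [[1.5, 1.0], [0.25, 1.5]]
```

Because only A and B are updated during fine-tuning, the number of trainable parameters is a small fraction of the full model, which is what makes PEFT practical on a single Gaudi 2 accelerator.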