QLoRA adapter weights for Llama-2-7b fine-tuned on the Code Alpaca dataset
Fine-Tuning on Predibase
This model was fine-tuned using Predibase, the first low-code AI platform for engineers. I fine-tuned the base Llama-2-7b model using LoRA with 4-bit quantization on a single T4 GPU, which cost approximately $3 to train on Predibase. Try out our free Predibase trial here.
The dataset and training parameters are borrowed from https://github.com/sahil280114/codealpaca, and all of these parameters, including the DeepSpeed configuration, can be used directly with Ludwig, the open-source LLM toolkit that Predibase is built on.
Co-trained by: Infernaught
How To Use The Model
To use these weights:
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

# Load the base model in 4-bit precision
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", load_in_4bit=True)

# Load the adapter config and wrap the base model with the fine-tuned QLoRA weights
config = PeftConfig.from_pretrained("arnavgrg/codealpaca-qlora")
model = PeftModel.from_pretrained(model, "arnavgrg/codealpaca-qlora")
Prompt Template:
Below is an instruction that describes a task, paired with an input
that provides further context. Write a response that appropriately
completes the request.
### Instruction: {instruction}
### Input: {input}
### Response:
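For a quick end-to-end check, here is a minimal generation sketch that fills in the template above and decodes a completion. The tokenizer choice, example instruction, and generation settings are illustrative assumptions, not part of the original card:

from transformers import AutoTokenizer

# The base model's tokenizer is used here; the adapter repo reuses it.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Fill in the prompt template (the instruction below is a hypothetical example)
prompt = (
    "Below is an instruction that describes a task, paired with an input\n"
    "that provides further context. Write a response that appropriately\n"
    "completes the request.\n\n"
    "### Instruction: Write a Python function that reverses a string.\n"
    "### Input: \n"
    "### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generation settings are illustrative defaults, not tuned recommendations
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))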
Training procedure
The following bitsandbytes quantization config was used during training (an equivalent BitsAndBytesConfig sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
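If you want to reproduce this setup when loading the base model yourself, the settings above map onto a transformers BitsAndBytesConfig roughly as follows. This is a sketch, not the original training code:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the settings listed above: 4-bit NF4 quantization, double quantization, fp16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
)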
Framework versions
- PEFT 0.4.0