zaidawan/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Adapter
Are we supposed to train the LoRA adapter using an AWQ base model?
Or, if the LoRA adapter is trained on a bnb int4 or fp16 base model, can I use that adapter with an AWQ base model without losing much in terms of performance and accuracy?
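For reference, here is roughly what I'm trying to do: a minimal sketch of attaching this adapter (trained against the bnb-4bit base) to an AWQ checkpoint with PEFT. The AWQ repo id below is just an example stand-in, and this assumes the adapter's target module names line up with the AWQ model's layers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Example AWQ base checkpoint (stand-in; any AWQ-quantized Llama 3.1 8B Instruct repo)
base_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"
# The adapter from this repo, which was trained on the bnb-4bit base
adapter_id = "zaidawan/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Adapter"

# Loading an AWQ checkpoint requires the autoawq package to be installed
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter on top of the AWQ-quantized base
model = PeftModel.from_pretrained(model, adapter_id)
```

If this loads without shape or module-name mismatches, the remaining question is whether the accuracy holds up, since the adapter weights were optimized against bnb-4bit dequantization, not AWQ's.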