# Fine-tuned Llama-3.2-1B Model

This is a fine-tuned version of meta-llama/Llama-3.2-1B, trained with PEFT using LoRA adapters. The adapter hyperparameters are listed below.
## Model Details
- Base model: meta-llama/Llama-3.2-1B
- Fine-tuning method: PEFT with LoRA
- Rank (r): 8
- Alpha: 16
- Target modules: q_proj, v_proj
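
These hyperparameters correspond roughly to the following `peft` `LoraConfig`. This is a sketch for reference, not the exact training configuration; any value not listed above (dropout, task type) is an assumption.

```python
from peft import LoraConfig

# Reconstructed from the hyperparameters above; lora_dropout and task_type
# are assumptions, as the card does not state them.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention query/value projections
    lora_dropout=0.05,                    # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
```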
## Usage
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the fine-tuned model (base weights plus the LoRA adapter) and tokenizer
model = AutoPeftModelForCausalLM.from_pretrained("MangoLassi/llama-3.2-1b-finetuned")
tokenizer = AutoTokenizer.from_pretrained("MangoLassi/llama-3.2-1b-finetuned")

# Generate text
prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
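
If you want to deploy without a `peft` dependency, the LoRA adapter can optionally be merged into the base weights. A minimal sketch using `peft`'s `merge_and_unload`; the output directory name is just an example:

```python
# Optional: merge the adapter into the base model so the result behaves
# like a plain transformers model (no peft required at load time).
merged_model = model.merge_and_unload()
merged_model.save_pretrained("llama-3.2-1b-merged")  # example path
tokenizer.save_pretrained("llama-3.2-1b-merged")
```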