LLAMA 3 Story Point Estimator - mule - mulestudio
This model is fine-tuned on issue descriptions from the mule project and evaluated on the mulestudio project for story point estimation.
Model Details
Base Model: LLAMA 3.2 1B (meta-llama/Llama-3.2-1B)
Training Project: mule
Test Project: mulestudio
Task: Story Point Estimation (Regression)
Architecture: PEFT (LoRA)
Input: Issue titles
Output: Story point estimation (continuous value)
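The repository can be loaded directly with transformers (see Usage below). If you prefer to attach the LoRA adapter to the base model explicitly, the sketch below shows one way to do it; it assumes the repository exposes the adapter weights and that you have access to the gated meta-llama/Llama-3.2-1B base model.

```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

# Base model with a single-output head for regression
# (num_labels=1 is assumed, consistent with the continuous output above).
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1
)
# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(
    base, "DEVCamiloSepulveda/00-LLAMA3SP-mule-mulestudio"
)
model.eval()
```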
Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the tokenizer and the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/00-LLAMA3SP-mule-mulestudio")
model = AutoModelForSequenceClassification.from_pretrained("DEVCamiloSepulveda/00-LLAMA3SP-mule-mulestudio")
model.eval()

# Prepare the input text (truncated/padded to the 20-token training length)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get the story point prediction (a single continuous value)
with torch.no_grad():
    outputs = model(**inputs)
story_points = outputs.logits.item()
```
Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Sequence length: 20 tokens
- Best training epoch: 0 (of 20 epochs)
- Batch size: 32
- Training time: 18.973 seconds
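The card only reports the hyperparameters above; LoRA rank, alpha, dropout, and target modules are not documented. A minimal sketch of what an equivalent fine-tuning setup could look like, with assumed values marked:

```python
from peft import LoraConfig, get_peft_model, TaskType
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B", num_labels=1  # single-value regression head
)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,               # assumed rank; not reported on this card
    lora_alpha=16,     # assumed scaling; not reported on this card
    lora_dropout=0.1,  # assumed dropout; not reported on this card
)
model = get_peft_model(base, lora_config)
# Reported settings: batch size 32, up to 20 epochs, 20-token inputs.
```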
Evaluation Results
- Mean Absolute Error (MAE) on the mulestudio test set: 3.566
- Median Absolute Error (MdAE) on the mulestudio test set: 2.399
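MAE and MdAE are the mean and the median of the absolute differences between predicted and actual story points. A small sketch with illustrative numbers:

```python
import numpy as np

actual = np.array([3.0, 5.0, 8.0, 2.0])     # illustrative true story points
predicted = np.array([4.2, 3.1, 6.5, 2.4])  # illustrative model outputs

errors = np.abs(predicted - actual)
mae = errors.mean()       # Mean Absolute Error
mdae = np.median(errors)  # Median Absolute Error
print(f"MAE={mae:.3f}, MdAE={mdae:.3f}")
```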