---
license: llama3.2
language:
- en
base_model: meta-llama/Llama-3.2-1B
pipeline_tag: text-classification
library_name: peft
tags:
- regression
- story-point-estimation
- software-engineering
datasets:
- mule
- titanium
metrics:
- mae
- mdae
model-index:
- name: llama-3.2-1b-story-point-estimation
  results:
  - task:
      type: regression
      name: Story Point Estimation
    dataset:
      name: titanium Dataset
      type: titanium
      split: test
    metrics:
    - type: mae
      value: 3.505
      name: Mean Absolute Error (MAE)
    - type: mdae
      value: 2.195
      name: Median Absolute Error (MdAE)
---

# LLAMA 3 Story Point Estimator - mule - titanium

This model is fine-tuned on issue descriptions from mule and tested on titanium for story point estimation.

## Model Details
- Base Model: LLAMA 3.2 1B
- Training Project: mule
- Test Project: titanium
- Task: Story Point Estimation (Regression)
- Architecture: PEFT (LoRA)
- Input: Issue titles
- Output: Story point estimate (continuous value)

## Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftConfig, PeftModel

# Load the PEFT adapter configuration
config = PeftConfig.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-mule-titanium")

# Load tokenizer and base model, then attach the adapter
tokenizer = AutoTokenizer.from_pretrained("DEVCamiloSepulveda/000-LLAMA3SP-mule-titanium")
base_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    num_labels=1,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(base_model, "DEVCamiloSepulveda/000-LLAMA3SP-mule-titanium")

# Prepare input text (the model was trained with a 20-token sequence length)
text = "Your issue description here"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Get prediction
outputs = model(**inputs)
story_points = outputs.logits.item()
```
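
The model returns a single continuous value per input. Below is a short sketch of batch inference that reuses the `tokenizer` and `model` loaded above; the example titles are made up, and rounding to the nearest whole story point is an assumption rather than part of the original setup:

```python
import torch

# Hypothetical issue titles (illustrative only)
titles = [
    "Add OAuth2 support to the HTTP connector",
    "Fix memory leak in batch processing module",
]

# Tokenize the batch with the same settings used above (20-token sequences)
batch = tokenizer(titles, return_tensors="pt", truncation=True, max_length=20, padding="max_length")

# Run the regression head without tracking gradients
with torch.no_grad():
    logits = model(**batch).logits.squeeze(-1)

# Raw continuous estimates; rounding to whole points is an assumption
estimates = [round(v) for v in logits.float().tolist()]
print(estimates)
```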

## Training Details
- Fine-tuning method: LoRA (Low-Rank Adaptation); see the configuration sketch below
- Sequence length: 20 tokens
- Best training epoch: 0 / 20 epochs
- Batch size: 32
- Training time: 18.795 seconds
- Mean Absolute Error (MAE): 3.505
- Median Absolute Error (MdAE): 2.195
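
For reference, here is a minimal sketch of how the LoRA fine-tune described above could be set up with the PEFT library. The rank, alpha, dropout, and target modules are assumptions for illustration; the card does not report them:

```python
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base model with a single-output regression head (num_labels=1)
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    num_labels=1,
    torch_dtype=torch.float16,
)

# Hypothetical LoRA hyperparameters: r, lora_alpha, lora_dropout, and
# target_modules are assumptions, not values reported for this model
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```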

### Framework versions

- PEFT 0.14.0