# Reflect-One: Fine-tuned Llama 3.1 for Educational Reflection Feedback
This model is a fine-tuned version of Meta's Llama 3.1 8B, specifically optimized for providing thoughtful, constructive feedback on student reflections in interdisciplinary project-based learning environments.
## Model Details
- Base Model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
- Training Type: Supervised Fine-Tuning (SFT) using QLoRA
- Context Length: 4096 tokens
- Training Date: March 2025
- License: Llama 3.1 Community License
## Intended Use
- Primary Use: Providing constructive feedback on student reflections in interdisciplinary projects
- Intended Users: Educational institutions, instructors, and learning platforms
- Out-of-Scope Use: Any application outside educational feedback contexts
## Training Details

### Fine-tuning Approach
- QLoRA fine-tuning with 4-bit quantization
- LoRA rank: 16
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training batch size: 8
- Gradient accumulation steps: 4
- Learning rate: 3e-4
- Training epochs: 3
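As a rough illustration, the hyperparameters above map onto a PEFT/TRL configuration like the sketch below. Values the card does not state (LoRA alpha, dropout, output path) are assumptions, not the actual training setup.

```python
# Illustrative sketch of the QLoRA configuration described above, using PEFT + TRL.
# lora_alpha, lora_dropout, and output_dir are assumptions not stated in this card.
from peft import LoraConfig
from trl import SFTConfig

lora_config = LoraConfig(
    r=16,               # LoRA rank from this card
    lora_alpha=16,      # assumption: not stated in the card
    lora_dropout=0.0,   # assumption: not stated in the card
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

training_args = SFTConfig(
    per_device_train_batch_size=8,  # training batch size from this card
    gradient_accumulation_steps=4,  # effective batch size of 32
    learning_rate=3e-4,
    num_train_epochs=3,
    output_dir="reflect_one_sft",   # hypothetical output path
)
```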
### Training Data
The model was fine-tuned on a curated dataset of student reflections and expert feedback, focusing on:
- Interdisciplinary project work
- Professional identity development
- Stakeholder collaboration
- Problem-solving in ambiguous situations
## Performance and Limitations
### Capabilities
- Provides structured, constructive feedback aligned with learning outcomes
- Maintains a consistent narrative style
- Identifies key learning moments in student reflections
- Suggests areas for deeper reflection
### Limitations
- Limited to English-language feedback
- Should not be used as the sole source of feedback
- May not capture course-specific technical requirements
- Limited by the 4096-token context window
## Ethical Considerations
This model inherits the ethical considerations of the base Llama 3.1 model and adds specific considerations for educational use:
- Should be used as a supplementary tool, not a replacement for human instructors
- Feedback should be reviewed for appropriateness and accuracy
- Model outputs should be transparent to students
- Privacy considerations for student data must be maintained
## Versions Available

Two versions of the model are available:

- `htigenai/reflect_one`: 16-bit version for higher precision
- `htigenai/reflect_one_4bit`: 4-bit quantized version for efficient deployment
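As a minimal sketch, the 4-bit variant can be loaded with a bitsandbytes quantization config; the compute dtype below is an assumption, so adjust it for your hardware:

```python
# Minimal sketch: load the 4-bit variant with bitsandbytes quantization.
# bnb_4bit_compute_dtype is an assumption, not something this card specifies.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("htigenai/reflect_one_4bit")
model = AutoModelForCausalLM.from_pretrained(
    "htigenai/reflect_one_4bit",
    quantization_config=bnb_config,
    device_map="auto",
)
```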
## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "htigenai/reflect_one"  # or "htigenai/reflect_one_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Format the input following the Llama 3.1 chat template
input_text = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a reflective instructor for ISBEP.<|eot_id|><|start_header_id|>user<|end_header_id|>

[Student reflection here]<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""

# Generate feedback
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.7,
    do_sample=True,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
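If the fine-tuned tokenizer retained the base model's chat template (an assumption worth verifying via `tokenizer.chat_template`), the same prompt can be built programmatically rather than by hand:

```python
# Alternative: build the Llama 3.1 prompt via the tokenizer's chat template.
# Assumes the fine-tuned tokenizer ships the base model's chat template.
messages = [
    {"role": "system", "content": "You are a reflective instructor for ISBEP."},
    {"role": "user", "content": "[Student reflection here]"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # appends the assistant header for generation
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512, temperature=0.7, do_sample=True)
```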
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{reflect_one_2025,
  title={Reflect-One: Fine-tuned Llama 3.1 for Educational Reflection Feedback},
  author={Oliveira, M.J.B. and Ruijten-Dodoiu, P.},
  year={2025},
  note={Unpublished}
}
```