# LLAMA-3.2-3B-Alpaca_en_LORA_SFT
This model is a LoRA fine-tuned version of meta-llama/Llama-3.2-3B-Instruct, trained on the alpaca_en_demo dataset by Sri Santh M for development purposes.
It achieves the following results on the evaluation set:
- Loss: 1.0510
## Model Description
This model is optimized for instruction following, text generation, and identity-customization use cases. It builds on the capabilities of the LLaMA-3.2-3B-Instruct base model, refined with lightweight LoRA fine-tuning via PEFT (Parameter-Efficient Fine-Tuning).
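A minimal loading sketch follows. It assumes the LoRA weights are published as the adapter repository named on this card and that `peft`, `transformers`, and `accelerate` are installed.

```python
# Minimal sketch: load the LoRA adapter on top of the base model.
# Assumes the adapter lives in the repository named on this card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_id = "SriSanth2345/LLAMA-3.2-3B-Alpaca_en_LORA_SFT"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16-capable hardware
    device_map="auto",           # requires accelerate
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```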
## Intended Uses
- Instruction-following tasks.
- Conversational AI and question-answering applications.
- Text summarization and content generation.
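As an illustration of the instruction-following use case, the sketch below reuses the `model` and `tokenizer` from the loading example above; the prompt and generation settings are illustrative, not tuned recommendations.

```python
# Usage sketch: instruction-following inference, continuing from the
# loading example above. Prompt and generation settings are illustrative.
import torch

messages = [
    {"role": "user", "content": "Summarize the benefits of LoRA fine-tuning."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```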
## Training and Evaluation Data
The model was fine-tuned using the alpaca_en_demo dataset, which is designed for instruction-tuned task completion. This dataset includes diverse English-language tasks for demonstrating instruction-following capabilities.
- Dataset link: alpaca_en_demo
Further details on the dataset:
- Source: zhiman-ai.
- Size: Small-scale, development-focused dataset.
- Purpose: Designed to emulate instruction-tuned datasets like Alpaca, with a subset of English-language prompts and responses.
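For reference, Alpaca-style datasets store each example as an instruction/input/output triple; the record below is hypothetical and only illustrates the schema.

```python
# Hypothetical Alpaca-style record illustrating the instruction /
# input / output schema; not an actual example from alpaca_en_demo.
example_record = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "The movie exceeded all of my expectations.",
    "output": "Positive",
}
```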
## Training Procedure
### Hyperparameters
- Learning rate: 0.0001
- Train batch size: 1
- Eval batch size: 1
- Gradient accumulation steps: 8
- Total effective batch size: 8
- Optimizer: AdamW (torch)
- Betas: (0.9, 0.999)
- Epsilon: 1e-08
- Learning rate scheduler: cosine with 10% warmup
- Number of epochs: 3.0
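As a reference point, the settings above map onto Hugging Face `TrainingArguments` roughly as sketched below; this mirrors only the optimizer and scheduler configuration, not the full LoRA training pipeline, and the output path is a hypothetical placeholder.

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# Only optimizer/scheduler settings are mirrored; the actual run used
# LoRA via PEFT.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3.2-3b-alpaca-lora-sft",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,  # effective batch size: 1 x 8 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,  # 10% warmup
    num_train_epochs=3.0,
)
```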
### Frameworks and Libraries
- PEFT: 0.12.0
- Transformers: 4.46.1
- PyTorch: 2.4.0
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Training Results
- Evaluation loss: 1.0510
- Evaluation was limited to the scope of this dataset; broader testing is recommended for downstream applications.
## Additional Information
- Author: Sri Santh M
- Purpose: Fine-tuned from LLaMA-3.2-3B-Instruct for development and experimentation.
This model serves as an experimental proof of concept for lightweight LoRA fine-tuning with PEFT and can be adapted further to specific tasks or use cases.