# llama-3.1-8b-instruct-limo

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct. Fine-tuning was performed with Low-Rank Adaptation (LoRA) on the LIMO dataset to enhance the model's reasoning capabilities, following the approach described in the paper LIMO: Less is More for Reasoning.

This repo contains the merged model weights. The LoRA adapter version can be found here.
## Model description
- Base Model: meta-llama/Llama-3.1-8B-Instruct
- Fine-Tuning Dataset: GAIR/LIMO
- Fine-Tuning Method: Low-Rank Adaptation (LoRA)
- Library Used: peft
- License: Apache 2.0
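
If you would rather start from the LoRA adapter release than these merged weights, the adapter can be applied to the base model with peft. A minimal sketch follows; the adapter repo id below is an assumption, so use the id from the adapter link above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "meta-llama/Llama-3.1-8B-Instruct"

# Load the base instruct model and its tokenizer
base = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Attach the LoRA adapter (repo id is illustrative -- see the adapter link above)
model = PeftModel.from_pretrained(base, "t83714/llama-3.1-8b-instruct-limo-lora")

# Optionally fold the adapter into the base weights for plain-transformers inference
model = model.merge_and_unload()
```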
## Usage

To use this model for text generation, follow the steps below.
### Installation

Ensure you have the necessary libraries installed (`accelerate` is required for `device_map="auto"`):

```bash
pip install torch transformers accelerate
```
### Generating Text

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "t83714/llama-3.1-8b-instruct-limo"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How much is (2+5)x5/7"

# Tokenize the input and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the output (max_length bounds prompt + completion tokens)
output = model.generate(**inputs, max_length=8000)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
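
Because the base model is an instruct (chat) model, prompts are usually formatted with the tokenizer's chat template rather than passed as raw text. A minimal sketch of the same query using `apply_chat_template` (the generation settings here are illustrative):

```python
messages = [{"role": "user", "content": "How much is (2+5)x5/7"}]

# Build the chat-formatted prompt and move it to the model's device
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens bounds only the generated continuation
output = model.generate(inputs, max_new_tokens=2048)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```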
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 15
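
These values map onto a standard peft + transformers training setup. A minimal sketch, assuming typical LoRA settings (the rank, alpha, dropout, and target modules below are assumptions, as they are not listed above):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype="auto"
)

# LoRA settings (r, alpha, dropout, target modules) are illustrative assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters listed above
training_args = TrainingArguments(
    output_dir="llama-3.1-8b-instruct-limo-lora",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    num_train_epochs=15,
)
```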
### Framework versions
- PEFT 0.12.0
- Transformers 4.49.0
- PyTorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
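
To reproduce this environment, the listed versions can be pinned. The exact PyTorch install command depends on your platform; the CUDA 12.4 build is typically installed from the PyTorch wheel index:

```bash
pip install peft==0.12.0 transformers==4.49.0 datasets==3.3.2 tokenizers==0.21.0
pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu124
```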
## Acknowledgment
This model is trained based on the work of Ye et al. (2025). If you use this model, please also consider citing their paper:
```bibtex
@misc{ye2025limoreasoning,
  title={LIMO: Less is More for Reasoning},
  author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
  year={2025},
  eprint={2502.03387},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.03387},
}
```