llama-3.1-8b-instruct-limo-lora

This model is a fine-tuned version of the meta-llama/Llama-3.1-8B-Instruct model. Fine-tuning was performed with Low-Rank Adaptation (LoRA) on the LIMO dataset to enhance the model's reasoning capabilities, following the approach of the paper LIMO: Less is More for Reasoning.
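
For context, a LoRA adapter for a causal language model is typically set up with the peft library roughly as below. This is a minimal sketch only: the rank, alpha, dropout, and target modules shown are illustrative assumptions, not the exact settings used to train this adapter.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA configuration; the actual rank/alpha/target modules
# used for this adapter may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the low-rank adapter weights are trainable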

Model description

Usage

To use this model for text generation, follow the steps below:

Installation

Ensure you have the necessary libraries installed:

pip install torch transformers peft

Generating Text

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model_name = "meta-llama/Llama-3.1-8B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

prompt = "How much is (2+5)x5/7"

# Tokenize the input (move tensors to the same device as the model)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate the output (max_length caps prompt plus generated tokens combined)
output = model.generate(**inputs, max_length=8000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
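
Because the base model is an instruct (chat) model, prompts are usually formatted with the tokenizer's chat template before generation. The following is a minimal sketch reusing the model and tokenizer loaded above; the max_new_tokens value is an arbitrary cap chosen for illustration.

# Format the question as a chat message and apply the model's chat template
messages = [{"role": "user", "content": "How much is (2+5)x5/7"}]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode the assistant's reply
chat_output = model.generate(chat_inputs, max_new_tokens=1024)
print(tokenizer.decode(chat_output[0], skip_special_tokens=True))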

Merge the adapter and export the merged model

from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model/")
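
The merged directory can then be loaded as a standalone Transformers checkpoint. The sketch below also saves the tokenizer next to the merged weights so the export is self-contained; ./merged-model/ is simply the output directory used above.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Save the tokenizer alongside the merged weights
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer.save_pretrained("./merged-model/")

# Reload the merged model without peft
merged_model = AutoModelForCausalLM.from_pretrained(
    "./merged-model/", torch_dtype="auto", device_map="auto"
)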

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • num_epochs: 15
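
As a rough illustration (not the actual training script, which is not included here), these hyperparameters map onto a Hugging Face TrainingArguments configuration as follows; the output_dir is a placeholder.

from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters onto TrainingArguments
training_args = TrainingArguments(
    output_dir="./llama-3.1-8b-instruct-limo-lora",  # placeholder output path
    learning_rate=5e-06,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    num_train_epochs=15,
)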

Framework versions

  • PEFT 0.12.0
  • Transformers 4.49.0
  • PyTorch 2.6.0+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0

Acknowledgment

This model was trained based on the work of Ye et al. (2025). If you use this model, please consider citing their paper:

@misc{ye2025limoreasoning,
      title={LIMO: Less is More for Reasoning}, 
      author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
      year={2025},
      eprint={2502.03387},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.03387}, 
}