
Built with Axolotl

The axolotl config used for training (axolotl version 0.9.1.post1):

base_model: meta-llama/Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
gradient_accumulation_steps: 2
micro_batch_size: 8
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
load_in_8bit: true
load_in_4bit: false
adapter: lora
lora_model_dir: null
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- v_proj
- k_proj
datasets:
- path: /workspace/FinLoRA/data/train/headline_train.jsonl
  type:
    system_prompt: ''
    field_system: system
    field_instruction: context
    field_output: target
    format: '[INST] {instruction} [/INST]'
    no_input_format: '[INST] {instruction} [/INST]'
dataset_prepared_path: null
val_set_size: 0.02
output_dir: /workspace/FinLoRA/lora/axolotl-output/headline_llama_3_1_8b_8bits_r8_rslora
peft_use_dora: false
peft_use_rslora: true
sequence_len: 4096
sample_packing: false
pad_to_sequence_len: false
wandb_project: finlora_models
wandb_entity: null
wandb_watch: gradients
wandb_name: headline_llama_3_1_8b_8bits_r8_rslora
wandb_log_model: 'false'
bf16: auto
tf32: false
gradient_checkpointing: true
resume_from_checkpoint: null
logging_steps: 500
flash_attention: false
deepspeed: deepspeed_configs/zero1.json
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
chat_template: llama3
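
For reference, the adapter block of this config roughly corresponds to the following PEFT LoraConfig. This is a minimal sketch only; Axolotl builds its own configuration from the YAML, so the object below is illustrative rather than what the training run actually instantiated.

```python
from peft import LoraConfig

# Illustrative PEFT equivalent of the adapter settings in the YAML above.
# peft_use_rslora: true corresponds to rank-stabilized scaling, i.e. the
# adapter output is scaled by lora_alpha / sqrt(r) instead of lora_alpha / r.
lora_config = LoraConfig(
    r=8,                                      # lora_r
    lora_alpha=16,                            # lora_alpha
    lora_dropout=0.05,                        # lora_dropout
    use_rslora=True,                          # peft_use_rslora
    target_modules=["q_proj", "v_proj", "k_proj"],
    task_type="CAUSAL_LM",
)
```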

workspace/FinLoRA/lora/axolotl-output/headline_llama_3_1_8b_8bits_r8_rslora

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the /workspace/FinLoRA/data/train/headline_train.jsonl dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0577

Model description

More information needed

Intended uses & limitations

More information needed
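
A minimal inference sketch with transformers + PEFT is shown below. Assumptions: the adapter is loaded from the local output_dir path given in the config (a Hub repo id would work the same way), bf16 base weights fit on the available GPU, and the prompt is a hypothetical stand-in for the actual instruction text stored in the headline dataset's "context" field.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
# Assumption: adapter path taken from output_dir in the config above.
adapter_path = "/workspace/FinLoRA/lora/axolotl-output/headline_llama_3_1_8b_8bits_r8_rslora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()

# Hypothetical prompt; the real instruction format comes from headline_train.jsonl.
messages = [{"role": "user", "content": "Does the following headline indicate a price increase? Answer Yes or No.\nHeadline: ..."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```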

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 3
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 48
  • total_eval_batch_size: 24
  • optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 4.0
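
The total batch sizes follow from the values above: total_train_batch_size = micro_batch_size × gradient_accumulation_steps × num_devices = 8 × 2 × 3 = 48, while evaluation does not accumulate gradients, so total_eval_batch_size = 8 × 3 = 24.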

Training results

Training Loss   Epoch    Step   Validation Loss
No log          0.0006      1   7.2775
No log          0.2504    420   0.0538
0.1779          0.5007    840   0.0480
0.0475          0.7511   1260   0.0431
0.0404          1.0012   1680   0.0423
0.0344          1.2516   2100   0.0431
0.0328          1.5019   2520   0.0505
0.0328          1.7523   2940   0.0472
0.0301          2.0024   3360   0.0415
0.0282          2.2528   3780   0.0551
0.0222          2.5031   4200   0.0517
0.0216          2.7535   4620   0.0478
0.0201          3.0036   5040   0.0500
0.0201          3.2539   5460   0.0606
0.0135          3.5043   5880   0.0593
0.0117          3.7547   6300   0.0577

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0.dev20250319+cu128
  • Datasets 3.5.1
  • Tokenizers 0.21.1