Uploaded model

  • Developed by: LimYeri
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Training Settings

The following hyperparameters were used during SFT; a sketch of the corresponding training script follows the list:

  • num_epochs: 3
  • learning_rate: 1e-4
  • max_seq_length: None
  • optimizer: adamw_8bit
  • lr_scheduler_type: linear
  • warmup_ratio: 0.03
  • weight_decay: 0.01
  • lora_rank: 16
  • lora_alpha: 16
  • lora_dropout: 0
  • gradient_checkpointing: true
  • fp16: not is_bfloat16_supported()
  • bf16: is_bfloat16_supported()
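
These settings map onto a standard Unsloth + TRL fine-tuning script. The sketch below is a reconstruction under stated assumptions, not the exact script used for this model: the dataset path, its text field, the LoRA target_modules, and a concrete max_seq_length (the card lists None) are all placeholders.

```python
# Minimal SFT sketch matching the hyperparameters above.
from unsloth import FastLanguageModel, is_bfloat16_supported
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 2048  # assumption; the card lists max_seq_length: None

# Load the 4-bit base model this card names as the finetuning source.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters with the rank/alpha/dropout listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                        # lora_rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=[             # assumption: Unsloth's usual Llama projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    use_gradient_checkpointing=True,
)

# Hypothetical placeholder dataset; substitute the actual SFT data.
dataset = load_dataset("json", data_files="sft_data.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption about the dataset schema
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        num_train_epochs=3,
        learning_rate=1e-4,
        optim="adamw_8bit",
        lr_scheduler_type="linear",
        warmup_ratio=0.03,
        weight_decay=0.01,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()
```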