---
library_name: transformers
license: other
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
  - llama-factory
  - full
  - generated_from_trainer
model-index:
  - name: distilabel-reasoning-R1-Llama-70B-ja-train
    results: []
---

# distilabel-reasoning-R1-Llama-70B-ja-train

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on the distilabel-reasoning-R1-Llama-70B-ja-train dataset. It achieves the following results on the evaluation set:

- Loss: 0.4519
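
For reference, here is a minimal inference sketch (not part of the original card), assuming the checkpoint is loaded from the lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese repository that the upload command below targets:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo name matches the huggingface-cli upload target shown below.
model_id = "lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs accelerate
)

# Build a chat prompt; this relies on the chat template shipped with the uploaded tokenizer.
messages = [
    {"role": "user", "content": "地球が丸いことをどうやって確かめられますか？"}  # "How can we verify the Earth is round?"
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```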

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

Training was run with LLaMA-Factory using the following config (the `/root/reasoning_train.yaml` passed to `llamafactory-cli` below):

```yaml
### model
model_name_or_path: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: /root/LLaMA-Factory/examples/deepspeed/ds_z2_config.json

### dataset
dataset: distilabel-reasoning-R1-Llama-70B-ja-train
template: qwen
cutoff_len: 4500
overwrite_cache: true
preprocessing_num_workers: 16
packing: true

### output
output_dir: /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/distilabel-reasoning-R1-Llama-70B-ja-train
logging_steps: 1
save_steps: 0.99999
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 1.0e-5
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
ddp_timeout: 180000000

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 0.1
```

The dataset registration, training launch, and upload steps:

```bash
# Register the training dataset with LLaMA-Factory (ShareGPT-formatted data on the Hub).
echo '{
  "distilabel-reasoning-R1-Llama-70B-ja-train": {
    "hf_hub_url": "lightblue/distilabel-reasoning-R1-Llama-70B-ja-train",
    "formatting": "sharegpt"
  }
}' > /root/LLaMA-Factory/data/dataset_info.json

# Run full SFT with the config above.
cd /root/LLaMA-Factory && llamafactory-cli train /root/reasoning_train.yaml

# Drop intermediate checkpoints, then upload the final model to the Hub.
rm -r /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/distilabel-reasoning-R1-Llama-70B-ja-train/checkpoint*
huggingface-cli upload lightblue/DeepSeek-R1-Distill-Qwen-7B-Japanese /root/train_outputs/DeepSeek-R1-Distill-Qwen-7B/distilabel-reasoning-R1-Llama-70B-ja-train
```
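
To sanity-check the data before training, one can inspect it straight from the Hub; a small sketch (assuming the repository exposes a `train` split; the exact column layout is whatever the repo provides in ShareGPT style):

```python
from datasets import load_dataset

# Assumption: the dataset repo has a "train" split.
ds = load_dataset("lightblue/distilabel-reasoning-R1-Llama-70B-ja-train", split="train")
print(ds)      # row count and column names
print(ds[0])   # first ShareGPT-style conversation
```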

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8 (derivation sketched below)
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.0
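
For reference, the reported total batch sizes follow directly from the per-device settings; a quick sketch of the arithmetic:

```python
# Effective batch size = per-device batch size * number of GPUs * gradient accumulation steps.
per_device_train_batch_size = 1
num_devices = 8
gradient_accumulation_steps = 1

total_train_batch_size = per_device_train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 8
```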

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.766         | 0.1087 | 5    | 0.5912          |
| 0.5873        | 0.2174 | 10   | 0.5282          |
| 0.3868        | 0.3261 | 15   | 0.4958          |
| 0.5101        | 0.4348 | 20   | 0.4761          |
| 0.4085        | 0.5435 | 25   | 0.4644          |
| 0.5561        | 0.6522 | 30   | 0.4578          |
| 0.4683        | 0.7609 | 35   | 0.4542          |
| 0.5055        | 0.8696 | 40   | 0.4526          |
| 0.5359        | 0.9783 | 45   | 0.4519          |
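
The validation loss decreases monotonically and begins to flatten toward the end of the single epoch. A minimal matplotlib sketch that plots the curve using the values copied from the table above:

```python
import matplotlib.pyplot as plt

# Values copied verbatim from the training-results table above.
steps = [5, 10, 15, 20, 25, 30, 35, 40, 45]
val_loss = [0.5912, 0.5282, 0.4958, 0.4761, 0.4644, 0.4578, 0.4542, 0.4526, 0.4519]

plt.plot(steps, val_loss, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.title("Eval loss on distilabel-reasoning-R1-Llama-70B-ja-train")
plt.grid(True)
plt.show()
```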

### Framework versions

- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3