---
license: gemma
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- llama-duo/synth_summarize_dataset_dedup
model-index:
- name: gemma2b-summarize-gpt4o-256k
  results: []
---
# gemma2b-summarize-gpt4o-256k
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the [llama-duo/synth_summarize_dataset_dedup](https://huggingface.co/datasets/llama-duo/synth_summarize_dataset_dedup) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4823
## Model description
More information needed
## Intended uses & limitations
More information needed
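
Pending more detail from the authors, note that this repository ships a PEFT adapter rather than full model weights, so inference requires loading the adapter on top of the base model. Below is a minimal sketch using the standard `peft`/`transformers` APIs; the adapter repo id is an assumption inferred from the model name and the dataset's `llama-duo` namespace, so adjust it to the actual repository.

```python
# Minimal inference sketch. Assumptions: standard peft/transformers APIs;
# the adapter repo id below is inferred from the model name and may differ.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "llama-duo/gemma2b-summarize-gpt4o-256k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)  # apply the adapter
model.eval()

prompt = "Summarize the following text:\n<your text here>"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```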
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
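
For reference, here is a hedged sketch of how the values above map onto `transformers.TrainingArguments`. This is not the authors' exact training script; the Adam betas and epsilon listed above are the `TrainingArguments` defaults, and the total batch sizes follow from the per-device values across 3 GPUs.

```python
# Configuration sketch only: reconstructs the hyperparameters listed above.
# Not the authors' exact training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma2b-summarize-gpt4o-256k",
    learning_rate=2e-4,
    per_device_train_batch_size=8,  # x 3 GPUs x 4 accumulation steps = 96 total
    per_device_eval_batch_size=8,   # x 3 GPUs = 24 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=15,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```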
### Training results
| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.1964        | 0.9974  | 292  | 2.4892          |
| 1.0954        | 1.9983  | 585  | 2.4542          |
| 1.0621        | 2.9991  | 878  | 2.4533          |
| 1.0523        | 4.0     | 1171 | 2.4547          |
| 1.0188        | 4.9974  | 1463 | 2.4524          |
| 1.0119        | 5.9983  | 1756 | 2.4544          |
| 1.0028        | 6.9991  | 2049 | 2.4655          |
| 0.9914        | 8.0     | 2342 | 2.4685          |
| 0.9813        | 8.9974  | 2634 | 2.4743          |
| 0.9756        | 9.9983  | 2927 | 2.4803          |
| 0.9815        | 10.9991 | 3220 | 2.4823          |
| 0.9657        | 12.0    | 3513 | 2.4844          |
| 0.9694        | 12.9974 | 3805 | 2.4820          |
| 0.968         | 13.9983 | 4098 | 2.4824          |
| 0.9728        | 14.9616 | 4380 | 2.4823          |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1