# ROE_Patent_Breeze7B_V2
This model is a fine-tuned version of MediaTek-Research/Breeze-7B-Instruct-v1_0 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.0290
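
Assuming this is the standard per-token cross-entropy loss, it corresponds to an evaluation perplexity of roughly exp(1.0290) ≈ 2.80.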
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 18
- mixed_precision_training: Native AMP
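
A minimal sketch of these settings as `transformers.TrainingArguments`, assuming a standard `Trainer` setup; `output_dir` is hypothetical and the model/dataset wiring is omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ROE_Patent_Breeze7B_V2",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size: 1 x 8 = 8
    seed=42,
    adam_beta1=0.9,                 # Adam betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,              # note: a plain "constant" schedule ignores warmup;
                                    # "constant_with_warmup" is the type that applies it
    num_train_epochs=18,
    fp16=True,                      # Native AMP mixed precision
)
```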
### Training results
| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.2509        | 0.9697  | 12   | 1.2524          |
| 1.1785        | 1.9394  | 24   | 1.1963          |
| 1.0903        | 2.9899  | 37   | 1.1486          |
| 1.0958        | 3.9596  | 49   | 1.1173          |
| 1.0397        | 4.9293  | 61   | 1.0942          |
| 0.9761        | 5.9798  | 74   | 1.0738          |
| 0.9634        | 6.9495  | 86   | 1.0573          |
| 0.9075        | 8.0     | 99   | 1.0463          |
| 0.9133        | 8.9697  | 111  | 1.0375          |
| 0.8696        | 9.9394  | 123  | 1.0337          |
| 0.8935        | 10.9899 | 136  | 1.0294          |
| 0.8618        | 11.9596 | 148  | 1.0261          |
| 0.826         | 12.9293 | 160  | 1.0235          |
| 0.8184        | 13.9798 | 173  | 1.0226          |
| 0.7991        | 14.9495 | 185  | 1.0289          |
| 0.7906        | 16.0    | 198  | 1.0220          |
| 0.7722        | 16.9697 | 210  | 1.0245          |
| 0.7722        | 17.4545 | 216  | 1.0290          |
### Framework versions
- PEFT 0.13.3.dev0
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
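
Since PEFT appears in the framework versions, the checkpoint is presumably a PEFT (e.g. LoRA) adapter on top of the base model. A minimal loading sketch, assuming the adapter is published as `allen0909/ROE_Patent_Breeze7B_V2` and the base model's tokenizer is used; generation settings are illustrative:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Load the adapter; the base model is resolved from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "allen0909/ROE_Patent_Breeze7B_V2",
    torch_dtype=torch.float16,  # matches the fp16 (Native AMP) training setup
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0")

prompt = "..."  # your patent-domain prompt here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```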