|
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: petals-team/falcon-rw-1b
model-index:
- name: GenAI-task2-ModelD-DS
  results: []
---
|
|
|
|
|
|
# GenAI-task2-ModelD-DS |
|
|
|
This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b); the training dataset is not documented in this card.

It achieves the following results on the evaluation set:

- Loss: 0.6983
|
|
|
## Model description |
|
|
|
Per the tags above, this repository contains a PEFT adapter for [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b), a 1B-parameter causal language model, trained with supervised fine-tuning (SFT) via TRL. Only the adapter weights are stored here; the base model is downloaded separately at load time. The adapter type and configuration (e.g., rank and target modules for a LoRA adapter) are not recorded in this card; consult the repository's `adapter_config.json` for details.
|
|
|
## Intended uses & limitations |
|
|
|
Because the training data and task are undocumented, the intended use cannot be stated precisely; treat this as an experimental SFT adapter. The limitations of the 1B-parameter base model (limited capacity and factual recall, largely English training data) should be expected to carry over, and outputs should be evaluated on your own data before any production use.
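
A minimal loading sketch for inference. The repository id below is a placeholder assumed from the model name above; substitute the actual Hub path of this adapter:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Hypothetical repo id; replace with the actual Hub path of this adapter.
adapter_id = "your-namespace/GenAI-task2-ModelD-DS"

# AutoPeftModelForCausalLM reads adapter_config.json, downloads the base
# model (petals-team/falcon-rw-1b), and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate
)
tokenizer = AutoTokenizer.from_pretrained("petals-team/falcon-rw-1b")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```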
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4 (train_batch_size × gradient_accumulation_steps)
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
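
As a sketch, these values map onto `transformers.TrainingArguments` roughly as follows; `output_dir` and anything not in the list above are assumptions, not taken from this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="GenAI-task2-ModelD-DS",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 2 * 2 = 4
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=2,
    adam_beta1=0.9,     # Adam betas and epsilon as listed above
    adam_beta2=0.999,   # (these are also the Trainer defaults)
    adam_epsilon=1e-8,
)
```

In a TRL SFT run, these arguments would then be passed to `trl.SFTTrainer` together with the base model, tokenizer, training dataset, and a PEFT adapter config.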
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5185 | 0.0316 | 20 | 1.4331 |
| 2.0381 | 0.0631 | 40 | 1.4158 |
| 2.0446 | 0.0947 | 60 | 1.3606 |
| 1.5993 | 0.1263 | 80 | 1.2881 |
| 1.7903 | 0.1579 | 100 | 1.2838 |
| 1.2226 | 0.1894 | 120 | 1.1627 |
| 1.4407 | 0.2210 | 140 | 1.1587 |
| 1.5104 | 0.2526 | 160 | 1.1219 |
| 1.1543 | 0.2841 | 180 | 1.0469 |
| 1.5322 | 0.3157 | 200 | 1.0498 |
| 1.0461 | 0.3473 | 220 | 0.9775 |
| 1.2949 | 0.3788 | 240 | 0.9830 |
| 1.3357 | 0.4104 | 260 | 0.9445 |
| 1.0266 | 0.4420 | 280 | 0.9118 |
| 1.3746 | 0.4736 | 300 | 0.9135 |
| 0.9231 | 0.5051 | 320 | 0.8550 |
| 1.21 | 0.5367 | 340 | 0.8641 |
| 1.3771 | 0.5683 | 360 | 0.8333 |
| 0.885 | 0.5998 | 380 | 0.8256 |
| 1.3633 | 0.6314 | 400 | 0.8445 |
| 0.8467 | 0.6630 | 420 | 0.7880 |
| 1.1924 | 0.6946 | 440 | 0.8053 |
| 1.152 | 0.7261 | 460 | 0.7812 |
| 0.8539 | 0.7577 | 480 | 0.7842 |
| 1.1079 | 0.7893 | 500 | 0.7932 |
| 0.7215 | 0.8208 | 520 | 0.7558 |
| 0.993 | 0.8524 | 540 | 0.7734 |
| 1.0678 | 0.8840 | 560 | 0.7496 |
| 0.8093 | 0.9155 | 580 | 0.7520 |
| 1.185 | 0.9471 | 600 | 0.7628 |
| 0.7553 | 0.9787 | 620 | 0.7391 |
| 1.0549 | 1.0103 | 640 | 0.7356 |
| 0.7007 | 1.0418 | 660 | 0.7312 |
| 1.1089 | 1.0734 | 680 | 0.7379 |
| 0.7699 | 1.1050 | 700 | 0.7222 |
| 0.808 | 1.1365 | 720 | 0.7227 |
| 0.995 | 1.1681 | 740 | 0.7198 |
| 0.684 | 1.1997 | 760 | 0.7142 |
| 0.9129 | 1.2313 | 780 | 0.7163 |
| 0.7775 | 1.2628 | 800 | 0.7110 |
| 0.8643 | 1.2944 | 820 | 0.7135 |
| 0.9359 | 1.3260 | 840 | 0.7096 |
| 0.728 | 1.3575 | 860 | 0.7108 |
| 0.9421 | 1.3891 | 880 | 0.7130 |
| 0.7606 | 1.4207 | 900 | 0.7042 |
| 0.9158 | 1.4522 | 920 | 0.7077 |
| 0.9677 | 1.4838 | 940 | 0.7045 |
| 0.6616 | 1.5154 | 960 | 0.7023 |
| 0.9689 | 1.5470 | 980 | 0.7024 |
| 0.8237 | 1.5785 | 1000 | 0.7010 |
| 0.8537 | 1.6101 | 1020 | 0.7034 |
| 1.0436 | 1.6417 | 1040 | 0.7014 |
| 0.6457 | 1.6732 | 1060 | 0.6999 |
| 0.8927 | 1.7048 | 1080 | 0.7000 |
| 0.7719 | 1.7364 | 1100 | 0.6991 |
| 0.7837 | 1.7680 | 1120 | 0.6989 |
| 1.0018 | 1.7995 | 1140 | 0.6988 |
| 0.6091 | 1.8311 | 1160 | 0.6984 |
| 0.9807 | 1.8627 | 1180 | 0.6984 |
| 0.8018 | 1.8942 | 1200 | 0.6983 |
| 0.7864 | 1.9258 | 1220 | 0.6983 |
| 0.8791 | 1.9574 | 1240 | 0.6983 |
| 0.8781 | 1.9890 | 1260 | 0.6983 |
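
The validation loss falls steadily through the first epoch (1.4331 at step 20 to about 0.74 by the end of epoch 1) and plateaus around 0.698 during the second. Assuming the reported loss is standard token-level cross-entropy in nats, the final value corresponds to a perplexity of about exp(0.6983) ≈ 2.01:

```python
import math

# Final validation loss from the table above (token-level cross-entropy, in nats).
final_eval_loss = 0.6983
perplexity = math.exp(final_eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # perplexity ≈ 2.01
```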
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.10.0
- Transformers 4.40.0
- PyTorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1