# flan-t5-xl-absa-multitask-laptops
This model is a fine-tuned version of [ybelkada/flan-t5-xl-sharded-bf16](https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16). The training dataset is not documented in this card, though the model name suggests multitask aspect-based sentiment analysis (ABSA) on laptop reviews. It achieves the following results on the evaluation set:
- Loss: 0.0986
## Model description
More information needed
## Intended uses & limitations
More information needed
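Since usage is not documented in this card, here is a minimal inference sketch assuming the standard Transformers seq2seq API. The model id comes from this card; the input text and any ABSA task prefix are placeholders, not the format the model was necessarily trained on.

```python
# Minimal inference sketch. The prompt format is a placeholder, since the
# actual ABSA task prefixes used in training are not documented in this card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Shakhovak/flan-t5-xl-absa-multitask-laptops"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The battery life is great, but the keyboard feels cheap."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```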
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch reproducing them appears after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
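As referenced above, this is a sketch of the configuration using `Seq2SeqTrainingArguments`. Only the listed values are taken from the card; `output_dir` is hypothetical, and the Adam betas and epsilon shown happen to match the Transformers defaults.

```python
# Training-configuration sketch built from the hyperparameter list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-xl-absa-multitask-laptops",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,          # as listed; also the Transformers default
    adam_beta2=0.999,        # as listed; also the Transformers default
    adam_epsilon=1e-8,       # as listed; also the Transformers default
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```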
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7611 | 0.32 | 200 | 3.2781 |
| 1.4428 | 0.63 | 400 | 0.4469 |
| 0.4548 | 0.95 | 600 | 0.2874 |
| 0.3146 | 1.26 | 800 | 0.2316 |
| 0.2675 | 1.58 | 1000 | 0.2096 |
| 0.2438 | 1.9 | 1200 | 0.1935 |
| 0.2244 | 2.21 | 1400 | 0.1714 |
| 0.2127 | 2.53 | 1600 | 0.1587 |
| 0.1927 | 2.84 | 1800 | 0.1541 |
| 0.1787 | 3.16 | 2000 | 0.1467 |
| 0.1715 | 3.48 | 2200 | 0.1350 |
| 0.1625 | 3.79 | 2400 | 0.1357 |
| 0.1579 | 4.11 | 2600 | 0.1304 |
| 0.1522 | 4.42 | 2800 | 0.1222 |
| 0.1417 | 4.74 | 3000 | 0.1204 |
| 0.1399 | 5.06 | 3200 | 0.1234 |
| 0.1303 | 5.37 | 3400 | 0.1211 |
| 0.1326 | 5.69 | 3600 | 0.1093 |
| 0.1241 | 6.0 | 3800 | 0.1090 |
| 0.1212 | 6.32 | 4000 | 0.1127 |
| 0.1189 | 6.64 | 4200 | 0.1045 |
| 0.124 | 6.95 | 4400 | 0.1077 |
| 0.1152 | 7.27 | 4600 | 0.1024 |
| 0.1141 | 7.58 | 4800 | 0.1008 |
| 0.1072 | 7.9 | 5000 | 0.1043 |
| 0.1146 | 8.21 | 5200 | 0.1011 |
| 0.1071 | 8.53 | 5400 | 0.0996 |
| 0.1149 | 8.85 | 5600 | 0.0990 |
| 0.1088 | 9.16 | 5800 | 0.1003 |
| 0.1064 | 9.48 | 6000 | 0.0988 |
| 0.1049 | 9.79 | 6200 | 0.0986 |
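The card does not state the training-set size, but it can be estimated from the step/epoch ratio in the log above, assuming one optimizer step per batch (i.e. no gradient accumulation, which the card does not mention):

```python
# Back-of-the-envelope estimate of training-set size from the log above:
# 6200 steps correspond to 9.79 epochs, so one epoch is ~633 steps,
# which at batch size 16 implies roughly 10,000 training examples.
steps, epochs, batch_size = 6200, 9.79, 16
steps_per_epoch = steps / epochs             # ~633
print(round(steps_per_epoch * batch_size))   # ~10133
```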
### Framework versions
- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2