---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CIS6930_DAAGR_T5_Emo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CIS6930_DAAGR_T5_Emo
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3253
- Train Accuracy: 0.9647
- Validation Loss: 0.4468
- Validation Accuracy: 0.9495
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
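Until the intended task is documented, the sketch below only shows how a checkpoint like this is typically loaded and queried with the TensorFlow classes from Transformers. The repo id, the example input, and the generation settings are assumptions for illustration, not part of this card.

```python
# Minimal inference sketch. Assumptions: the repo id below is a placeholder,
# and the exact prompt format depends on how the training data was prepared.
from transformers import AutoTokenizer, TFT5ForConditionalGeneration

repo_id = "your-namespace/CIS6930_DAAGR_T5_Emo"  # hypothetical Hub path; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFT5ForConditionalGeneration.from_pretrained(repo_id)

# Tokenize an example input and generate a short output sequence.
inputs = tokenizer("I finally got the job I wanted!", return_tensors="tf")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```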
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam
  - learning_rate: 0.001
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-07
  - amsgrad: False
  - weight_decay: None
  - clipnorm: None
  - clipvalue: None
  - global_clipnorm: None
  - use_ema: False
  - ema_momentum: 0.99
  - ema_overwrite_frequency: None
  - jit_compile: True
  - is_legacy_optimizer: False
- training_precision: float32
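These settings correspond to the default (non-legacy) Keras Adam optimizer in TensorFlow 2.11. A sketch of how they might be re-created for a comparable fine-tuning run, assuming the usual Transformers/Keras workflow rather than the exact original training script:

```python
# Sketch of re-creating the listed optimizer configuration. Assumption: the
# model was compiled in the standard Keras way; this is not the original script.
import tensorflow as tf
from transformers import TFT5ForConditionalGeneration

model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    jit_compile=True,  # matches jit_compile: True above (TF 2.11 non-legacy Adam)
)

# TF models from Transformers compute their own loss when labels are passed,
# so they are commonly compiled without an explicit loss function.
model.compile(optimizer=optimizer)
```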
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4976 | 0.9412 | 0.4567 | 0.9459 | 0 |
| 0.4359 | 0.9482 | 0.4462 | 0.9474 | 1 |
| 0.4228 | 0.9502 | 0.4406 | 0.9484 | 2 |
| 0.4131 | 0.9517 | 0.4370 | 0.9488 | 3 |
| 0.4050 | 0.9528 | 0.4349 | 0.9493 | 4 |
| 0.3981 | 0.9539 | 0.4335 | 0.9496 | 5 |
| 0.3914 | 0.9548 | 0.4327 | 0.9498 | 6 |
| 0.3851 | 0.9558 | 0.4328 | 0.9500 | 7 |
| 0.3794 | 0.9565 | 0.4328 | 0.9501 | 8 |
| 0.3738 | 0.9574 | 0.4321 | 0.9502 | 9 |
| 0.3685 | 0.9582 | 0.4328 | 0.9502 | 10 |
| 0.3632 | 0.9589 | 0.4340 | 0.9502 | 11 |
| 0.3582 | 0.9597 | 0.4343 | 0.9501 | 12 |
| 0.3531 | 0.9605 | 0.4363 | 0.9501 | 13 |
| 0.3482 | 0.9612 | 0.4381 | 0.9501 | 14 |
| 0.3436 | 0.9619 | 0.4390 | 0.9500 | 15 |
| 0.3391 | 0.9626 | 0.4396 | 0.9500 | 16 |
| 0.3340 | 0.9633 | 0.4438 | 0.9499 | 17 |
| 0.3297 | 0.9640 | 0.4454 | 0.9498 | 18 |
| 0.3253 | 0.9647 | 0.4468 | 0.9495 | 19 |
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.11.0
- Datasets 2.11.0
- Tokenizers 0.13.2
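Pinning these versions is the safest way to reproduce the environment; newer releases will usually also load the checkpoint. A quick check that the installed versions line up:

```python
# Quick environment check against the versions listed above. Assumption: newer
# releases typically remain compatible with this checkpoint.
import datasets
import tensorflow as tf
import tokenizers
import transformers

print("Transformers", transformers.__version__)  # expected 4.27.4
print("TensorFlow  ", tf.__version__)            # expected 2.11.0
print("Datasets    ", datasets.__version__)      # expected 2.11.0
print("Tokenizers  ", tokenizers.__version__)    # expected 0.13.2
```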