---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: JuliusFx/dyu-fr-t5-small_v8
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JuliusFx/dyu-fr-t5-small_v8
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results at the final epoch (epoch 95):
- Train Loss: 1.9057
- Validation Loss: 2.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
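
The card does not document the task, but the model name and the t5-small base suggest a Dyula-to-French text-to-text setup. Below is a minimal TensorFlow inference sketch; the model ID comes from this card, while the task assumption, the placeholder input, and the generation settings are illustrative only.

```python
# Minimal inference sketch (assumptions: TensorFlow weights are available for this
# checkpoint and the task is Dyula-to-French translation, which the card does not confirm).
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "JuliusFx/dyu-fr-t5-small_v8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Replace the placeholder with real source-language text.
inputs = tokenizer("source-language sentence goes here", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```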
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch for rebuilding the optimizer follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
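
The sketch below rebuilds the reported optimizer with the `AdamWeightDecay` class from `transformers`; that this was the implementation used is an assumption inferred from the optimizer name in the dict above, and all values are copied from that dict.

```python
# Rebuild the optimizer reported above (assumes transformers' TF AdamWeightDecay
# implementation; hyperparameter values are copied from this card).
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# Typical Keras usage: model.compile(optimizer=optimizer) -- transformers TF models
# compute their loss internally when labels are included in the inputs.
```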
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.1481 | 3.2663 | 0 |
| 3.0205 | 3.2024 | 1 |
| 2.9712 | 3.1559 | 2 |
| 2.9209 | 3.1465 | 3 |
| 2.8848 | 3.1125 | 4 |
| 2.8512 | 3.1014 | 5 |
| 2.8239 | 3.0771 | 6 |
| 2.7965 | 3.0641 | 7 |
| 2.7743 | 3.0431 | 8 |
| 2.7505 | 3.0327 | 9 |
| 2.7325 | 3.0072 | 10 |
| 2.7153 | 3.0060 | 11 |
| 2.6904 | 2.9950 | 12 |
| 2.6750 | 2.9895 | 13 |
| 2.6554 | 2.9700 | 14 |
| 2.6400 | 2.9632 | 15 |
| 2.6220 | 2.9534 | 16 |
| 2.6059 | 2.9505 | 17 |
| 2.5913 | 2.9536 | 18 |
| 2.5779 | 2.9485 | 19 |
| 2.5624 | 2.9349 | 20 |
| 2.5469 | 2.9307 | 21 |
| 2.5341 | 2.9224 | 22 |
| 2.5223 | 2.9114 | 23 |
| 2.5093 | 2.8996 | 24 |
| 2.4995 | 2.9065 | 25 |
| 2.4855 | 2.8974 | 26 |
| 2.4706 | 2.8926 | 27 |
| 2.4589 | 2.9075 | 28 |
| 2.4521 | 2.8921 | 29 |
| 2.4380 | 2.9055 | 30 |
| 2.4243 | 2.8930 | 31 |
| 2.4131 | 2.8871 | 32 |
| 2.4065 | 2.8894 | 33 |
| 2.3911 | 2.8890 | 34 |
| 2.3833 | 2.8757 | 35 |
| 2.3724 | 2.8778 | 36 |
| 2.3628 | 2.8874 | 37 |
| 2.3556 | 2.8687 | 38 |
| 2.3441 | 2.8653 | 39 |
| 2.3321 | 2.8794 | 40 |
| 2.3203 | 2.8827 | 41 |
| 2.3118 | 2.8778 | 42 |
| 2.3027 | 2.8955 | 43 |
| 2.2903 | 2.8778 | 44 |
| 2.2821 | 2.8751 | 45 |
| 2.2760 | 2.8655 | 46 |
| 2.2592 | 2.8763 | 47 |
| 2.2534 | 2.8643 | 48 |
| 2.2466 | 2.8716 | 49 |
| 2.2363 | 2.8728 | 50 |
| 2.2279 | 2.8688 | 51 |
| 2.2225 | 2.8822 | 52 |
| 2.2133 | 2.8690 | 53 |
| 2.2025 | 2.8551 | 54 |
| 2.1937 | 2.8605 | 55 |
| 2.1863 | 2.8441 | 56 |
| 2.1776 | 2.8576 | 57 |
| 2.1732 | 2.8435 | 58 |
| 2.1640 | 2.8448 | 59 |
| 2.1530 | 2.8422 | 60 |
| 2.1438 | 2.8640 | 61 |
| 2.1360 | 2.8648 | 62 |
| 2.1302 | 2.8689 | 63 |
| 2.1213 | 2.8787 | 64 |
| 2.1170 | 2.8816 | 65 |
| 2.1016 | 2.8655 | 66 |
| 2.0986 | 2.8713 | 67 |
| 2.0892 | 2.8776 | 68 |
| 2.0876 | 2.8912 | 69 |
| 2.0722 | 2.8901 | 70 |
| 2.0678 | 2.8549 | 71 |
| 2.0607 | 2.8883 | 72 |
| 2.0544 | 2.8681 | 73 |
| 2.0481 | 2.8637 | 74 |
| 2.0358 | 2.8739 | 75 |
| 2.0347 | 2.8705 | 76 |
| 2.0232 | 2.8724 | 77 |
| 2.0225 | 2.8619 | 78 |
| 2.0096 | 2.8687 | 79 |
| 2.0038 | 2.8561 | 80 |
| 1.9969 | 2.8560 | 81 |
| 1.9873 | 2.8755 | 82 |
| 1.9880 | 2.8745 | 83 |
| 1.9758 | 2.8648 | 84 |
| 1.9711 | 2.8808 | 85 |
| 1.9635 | 2.8721 | 86 |
| 1.9512 | 2.8739 | 87 |
| 1.9526 | 2.8836 | 88 |
| 1.9442 | 2.8862 | 89 |
| 1.9364 | 2.8969 | 90 |
| 1.9311 | 2.8948 | 91 |
| 1.9234 | 2.9150 | 92 |
| 1.9154 | 2.9048 | 93 |
| 1.9057 | 2.9040 | 94 |
| 1.9057 | 2.9043 | 95 |
### Framework versions
- Transformers 4.38.2
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
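
To reproduce this environment, the pins below mirror the versions listed above (a sketch; any additional dependencies the training run required are not listed on the card).

```text
transformers==4.38.2
tensorflow==2.15.0
datasets==2.18.0
tokenizers==0.15.2
```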