---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-xsmall
tags:
- generated_from_trainer
model-index:
- name: capricious-gnu-139
  results: []
---

# capricious-gnu-139

This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1831
- Hamming Loss: 0.0661
- Zero One Loss: 0.4537
- Jaccard Score: 0.4105
- Hamming Loss Optimised: 0.0659
- Hamming Loss Threshold: 0.6135
- Zero One Loss Optimised: 0.4087
- Zero One Loss Threshold: 0.4316
- Jaccard Score Optimised: 0.3479
- Jaccard Score Threshold: 0.3462

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.0943791435964314e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9

### Training results

| Training Loss | Epoch | Step | Validation Loss | Hamming Loss | Zero One Loss | Jaccard Score | Hamming Loss Optimised | Hamming Loss Threshold | Zero One Loss Optimised | Zero One Loss Threshold | Jaccard Score Optimised | Jaccard Score Threshold |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-------------:|:-------------:|:----------------------:|:----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|
| 0.4159        | 1.0   | 100  | 0.3376          | 0.1123       | 1.0           | 1.0           | 0.1123                 | 0.9000                 | 1.0                     | 0.9000                  | 1.0                     | 0.9000                  |
| 0.3121        | 2.0   | 200  | 0.2841          | 0.0932       | 0.8113        | 0.8087        | 0.0931                 | 0.4416                 | 0.6963                  | 0.1641                  | 0.6101                  | 0.1642                  |
| 0.2602        | 3.0   | 300  | 0.2338          | 0.092        | 0.785         | 0.7819        | 0.0765                 | 0.3980                 | 0.6113                  | 0.3139                  | 0.5072                  | 0.2086                  |
| 0.2174        | 4.0   | 400  | 0.2063          | 0.0712       | 0.5975        | 0.5703        | 0.0698                 | 0.4494                 | 0.5363                  | 0.3378                  | 0.4363                  | 0.2553                  |
| 0.1896        | 5.0   | 500  | 0.1967          | 0.0694       | 0.5813        | 0.5551        | 0.0661                 | 0.4552                 | 0.4513                  | 0.3622                  | 0.3900                  | 0.2346                  |
| 0.1726        | 6.0   | 600  | 0.1910          | 0.07         | 0.4988        | 0.4614        | 0.0695                 | 0.5944                 | 0.4400                  | 0.4036                  | 0.3569                  | 0.3149                  |
| 0.1618        | 7.0   | 700  | 0.1861          | 0.0679       | 0.475         | 0.4339        | 0.0651                 | 0.5430                 | 0.4237                  | 0.4130                  | 0.3652                  | 0.3483                  |
| 0.1522        | 8.0   | 800  | 0.1845          | 0.0683       | 0.4712        | 0.4328        | 0.0663                 | 0.5807                 | 0.4337                  | 0.4266                  | 0.3585                  | 0.3310                  |
| 0.1484        | 9.0   | 900  | 0.1831          | 0.0661       | 0.4537        | 0.4105        | 0.0659                 | 0.6135                 | 0.4087                  | 0.4316                  | 0.3479                  | 0.3462                  |

### Framework versions

- Transformers 4.45.1
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
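The metric set above (Hamming loss, zero-one loss, Jaccard score, each with a tuned decision threshold) indicates a multi-label classification fine-tune, where each label gets an independent sigmoid probability that is then thresholded. A minimal inference sketch under that assumption — the checkpoint path and input text are placeholders, and the 0.6135 cut-off is simply the Hamming-loss-optimised threshold reported in this card:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: substitute the actual Hub repo id or local checkpoint directory.
MODEL_PATH = "capricious-gnu-139"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

texts = ["example input text"]  # placeholder input
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits

# Multi-label decoding: independent sigmoid per label, then threshold.
# 0.6135 is the Hamming-loss-optimised threshold reported above;
# 0.5 is the conventional default.
probs = torch.sigmoid(logits)
predictions = (probs >= 0.6135).int()
print(predictions)
```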
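The reported metric names match the standard scikit-learn multi-label metrics; the sketch below shows how such numbers are typically computed. The indicator matrices are illustrative, and the mapping to scikit-learn is an assumption — the card does not state which library produced the figures. Note also that the Jaccard column decreases as training improves, which suggests it is reported as a distance (1 − similarity) rather than a raw `jaccard_score`.

```python
import numpy as np
from sklearn.metrics import hamming_loss, jaccard_score, zero_one_loss

# Illustrative multi-label indicator matrices (n_samples x n_labels).
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

print("Hamming loss:", hamming_loss(y_true, y_pred))    # fraction of individual label errors
print("Zero-one loss:", zero_one_loss(y_true, y_pred))  # fraction of samples not matched exactly
print("Jaccard score:", jaccard_score(y_true, y_pred, average="samples"))  # per-sample IoU, averaged
```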
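The hyperparameters listed under "Training procedure" map directly onto `transformers.TrainingArguments`. A reconstruction sketch, assuming the standard `Trainer` API was used — `output_dir` is a placeholder, and the "Adam" line in auto-generated cards of this vintage usually corresponds to the Trainer's default AdamW optimizer:

```python
from transformers import TrainingArguments

# Hyperparameters as reported in this card; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="capricious-gnu-139",
    learning_rate=5.0943791435964314e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=2024,
    adam_beta1=0.9,          # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=9,
)
```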