---
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_keras_callback
model-index:
- name: Kikia26/FineTunePubMedBertWithTensorflowKeras2
  results: []
---

# Kikia26/FineTunePubMedBertWithTensorflowKeras2

This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0693
- Validation Loss: 0.3774
- Train Precision: 0.6399
- Train Recall: 0.7384
- Train F1: 0.6856
- Train Accuracy: 0.9030
- Epoch: 19

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 1.5823     | 0.9047          | 0.0             | 0.0          | 0.0      | 0.7808         | 0     |
| 0.9053     | 0.6998          | 0.5303          | 0.0738       | 0.1296   | 0.8106         | 1     |
| 0.6980     | 0.5341          | 0.7038          | 0.3861       | 0.4986   | 0.8591         | 2     |
| 0.5206     | 0.4613          | 0.6213          | 0.5295       | 0.5718   | 0.8753         | 3     |
| 0.4110     | 0.4201          | 0.6292          | 0.5549       | 0.5897   | 0.8836         | 4     |
| 0.3260     | 0.3918          | 0.6306          | 0.5907       | 0.6100   | 0.8937         | 5     |
| 0.2682     | 0.3682          | 0.5989          | 0.6709       | 0.6328   | 0.8985         | 6     |
| 0.2240     | 0.3445          | 0.6355          | 0.6730       | 0.6537   | 0.9041         | 7     |
| 0.1891     | 0.3593          | 0.5736          | 0.7152       | 0.6366   | 0.8913         | 8     |
| 0.1672     | 0.3609          | 0.5721          | 0.7278       | 0.6407   | 0.8908         | 9     |
| 0.1456     | 0.3594          | 0.5940          | 0.7131       | 0.6481   | 0.8969         | 10    |
| 0.1310     | 0.3519          | 0.6437          | 0.7089       | 0.6747   | 0.9073         | 11    |
| 0.1103     | 0.3531          | 0.6322          | 0.7215       | 0.6739   | 0.9030         | 12    |
| 0.1014     | 0.3814          | 0.6065          | 0.7511       | 0.6711   | 0.8964         | 13    |
| 0.0945     | 0.3668          | 0.6494          | 0.7384       | 0.6910   | 0.9049         | 14    |
| 0.0880     | 0.3704          | 0.6510          | 0.7321       | 0.6892   | 0.9038         | 15    |
| 0.0836     | 0.3762          | 0.6377          | 0.7426       | 0.6862   | 0.9001         | 16    |
| 0.0709     | 0.3765          | 0.6354          | 0.7426       | 0.6848   | 0.9020         | 17    |
| 0.0755     | 0.3791          | 0.6347          | 0.7405       | 0.6835   | 0.9022         | 18    |
| 0.0693     | 0.3774          | 0.6399          | 0.7384       | 0.6856   | 0.9030         | 19    |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.15.0
- Tokenizers 0.15.0
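
## Reconstructing the optimizer (illustrative sketch)

The optimizer entry under "Training hyperparameters" is the serialized Keras config of Transformers' `AdamWeightDecay` driven by a `PolynomialDecay` learning-rate schedule (with `power=1.0` this is a plain linear decay from 5e-05 to 0 over 200 steps). The snippet below is a minimal sketch of how an equivalent optimizer could be rebuilt; it is reconstructed from the recorded config, not taken from the original training script.

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Linear decay from 5e-5 to 0 over 200 steps
# (PolynomialDecay with power=1.0 is a linear schedule).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5,
    decay_steps=200,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with decoupled weight decay, as shipped with 🤗 Transformers;
# the remaining values mirror the recorded configuration above.
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)

# model.compile(optimizer=optimizer)  # then train with model.fit(...) as usual
```

A similar optimizer/schedule pair can also be produced with `transformers.create_optimizer(init_lr=5e-5, num_train_steps=200, num_warmup_steps=0, weight_decay_rate=0.01)`, though that helper additionally excludes layer-norm and bias parameters from weight decay by default.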
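
## Example usage (illustrative sketch)

The card does not state the downstream task, but the per-epoch precision/recall/F1 metrics reported by the Keras callback are typical of a token-classification (NER-style) fine-tune. Under that assumption, a minimal inference sketch might look like the following; the example sentence is purely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

model_id = "Kikia26/FineTunePubMedBertWithTensorflowKeras2"

# Assumes a token-classification head; adjust if the model was trained for a different task.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

# Group sub-word predictions back into whole-entity spans.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

# Hypothetical biomedical sentence; replace with your own text.
print(ner("Metformin is commonly prescribed for type 2 diabetes."))
```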