---
library_name: transformers
license: apache-2.0
base_model: microsoft/resnet-50
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: Resneteau-50-2024_09_23-batch-size32_freeze
    results: []
---

Resneteau-50-2024_09_23-batch-size32_freeze

This model is a fine-tuned version of microsoft/resnet-50 on an unspecified dataset. It achieves the following results on the evaluation set (a loading/usage sketch follows the list):

  • Loss: 0.1906
  • F1 Micro: 0.6954
  • F1 Macro: 0.4462
  • Accuracy: 0.1827
  • Learning Rate: 0.0001
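
The checkpoint can be loaded with the standard `transformers` image-classification API. The repository id below is an assumption (it simply mirrors the run name), and the sigmoid/threshold decoding assumes a multi-label head, which the metric profile suggests but the card does not confirm; adjust both to the actual setup.

```python
# Minimal inference sketch. The repository id is an assumption; substitute
# the actual location of this checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "groderg/Resneteau-50-2024_09_23-batch-size32_freeze"  # hypothetical

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a multi-label head: sigmoid probabilities with a 0.5 threshold
# rather than a single argmax prediction.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```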

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 400
  • mixed_precision_training: Native AMP
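
These settings map roughly onto the standard `transformers` `TrainingArguments`. This is a sketch under assumptions, not the original training script: the frozen backbone is inferred from the `_freeze` suffix in the run name, and the dataset is not specified in this card.

```python
# Sketch of a training configuration matching the hyperparameters above.
import torch
from transformers import AutoModelForImageClassification, TrainingArguments

model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")

# Assumption: freeze the convolutional backbone so only the head trains,
# as suggested by the "_freeze" suffix in the run name.
for param in model.base_model.parameters():
    param.requires_grad = False

args = TrainingArguments(
    output_dir="Resneteau-50-2024_09_23-batch-size32_freeze",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=400,
    lr_scheduler_type="linear",
    fp16=torch.cuda.is_available(),  # Native AMP mixed precision on GPU
    eval_strategy="epoch",
)
# `args` would then be passed to transformers.Trainer together with the
# (unspecified) train/eval datasets and a compute_metrics function.
```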

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-------------:|
| No log        | 1.0   | 273  | 0.2460          | 0.5802   | 0.2267   | 0.0877   | 0.001         |
| 0.2786        | 2.0   | 546  | 0.2217          | 0.6412   | 0.3160   | 0.1369   | 0.001         |
| 0.2786        | 3.0   | 819  | 0.2117          | 0.6596   | 0.3581   | 0.1486   | 0.001         |
| 0.231         | 4.0   | 1092 | 0.2049          | 0.6674   | 0.3831   | 0.1618   | 0.001         |
| 0.231         | 5.0   | 1365 | 0.2016          | 0.6707   | 0.3965   | 0.1677   | 0.001         |
| 0.2206        | 6.0   | 1638 | 0.2002          | 0.6720   | 0.4076   | 0.1677   | 0.001         |
| 0.2206        | 7.0   | 1911 | 0.1976          | 0.6752   | 0.4142   | 0.1746   | 0.001         |
| 0.2157        | 8.0   | 2184 | 0.1971          | 0.6824   | 0.4281   | 0.1764   | 0.001         |
| 0.2157        | 9.0   | 2457 | 0.1961          | 0.6845   | 0.4300   | 0.1764   | 0.001         |
| 0.2127        | 10.0  | 2730 | 0.1944          | 0.6763   | 0.4264   | 0.1805   | 0.001         |
| 0.2117        | 11.0  | 3003 | 0.1940          | 0.6902   | 0.4391   | 0.1781   | 0.001         |
| 0.2117        | 12.0  | 3276 | 0.1945          | 0.6939   | 0.4523   | 0.1729   | 0.001         |
| 0.2107        | 13.0  | 3549 | 0.1936          | 0.6908   | 0.4461   | 0.1795   | 0.001         |
| 0.2107        | 14.0  | 3822 | 0.1931          | 0.6916   | 0.4424   | 0.1781   | 0.001         |
| 0.2105        | 15.0  | 4095 | 0.1935          | 0.6936   | 0.4431   | 0.1809   | 0.001         |
| 0.2105        | 16.0  | 4368 | 0.1931          | 0.6896   | 0.4429   | 0.1805   | 0.001         |
| 0.2086        | 17.0  | 4641 | 0.1931          | 0.6953   | 0.4411   | 0.1819   | 0.001         |
| 0.2086        | 18.0  | 4914 | 0.1908          | 0.6984   | 0.4490   | 0.1857   | 0.001         |
| 0.2101        | 19.0  | 5187 | 0.1925          | 0.6879   | 0.4428   | 0.1812   | 0.001         |
| 0.2101        | 20.0  | 5460 | 0.1913          | 0.6797   | 0.4357   | 0.1774   | 0.001         |
| 0.2088        | 21.0  | 5733 | 0.1915          | 0.6958   | 0.4381   | 0.1823   | 0.001         |
| 0.2084        | 22.0  | 6006 | 0.1919          | 0.7039   | 0.4535   | 0.1826   | 0.001         |
| 0.2084        | 23.0  | 6279 | 0.1926          | 0.6907   | 0.4363   | 0.1798   | 0.001         |
| 0.2083        | 24.0  | 6552 | 0.1919          | 0.6953   | 0.4544   | 0.1805   | 0.001         |
| 0.2083        | 25.0  | 6825 | 0.1919          | 0.6962   | 0.4466   | 0.1781   | 0.0001        |
| 0.2076        | 26.0  | 7098 | 0.1912          | 0.6943   | 0.4418   | 0.1823   | 0.0001        |
| 0.2076        | 27.0  | 7371 | 0.1912          | 0.6972   | 0.4500   | 0.1809   | 0.0001        |
| 0.2081        | 28.0  | 7644 | 0.1915          | 0.6944   | 0.4454   | 0.1857   | 0.0001        |
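
The combination of a fairly high micro-F1 with a much lower accuracy is consistent with a multi-label evaluation where accuracy means exact-match (subset) accuracy. As an illustration only (the actual metric code is not part of this card, and the 0.5 threshold is an assumption), a compute_metrics function producing these columns could look like:

```python
# Hedged sketch of a compute_metrics function consistent with the columns
# above; the multi-label setup and threshold are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1 / (1 + np.exp(-logits))      # sigmoid, assuming multi-label targets
    preds = (probs > 0.5).astype(int)
    return {
        "f1_micro": f1_score(labels, preds, average="micro", zero_division=0),
        "f1_macro": f1_score(labels, preds, average="macro", zero_division=0),
        "accuracy": accuracy_score(labels, preds),  # exact-match (subset) accuracy
    }
```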

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1