git-base-pokemon

This model is a fine-tuned version of microsoft/git-base, trained on a custom dataset loaded via the imagefolder format. It achieves the following results on the evaluation set:

  • Loss: 0.1817
  • Wer Score: 9.0938

Model description

More information needed

Intended uses & limitations

More information needed
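
The card leaves this section blank, but checkpoints fine-tuned from microsoft/git-base are typically used for image captioning. Below is a minimal, hedged sketch of inference; the checkpoint path and the image file are placeholders, and the processor is loaded from the base model since the fine-tune started there:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

ckpt = "git-base-pokemon"  # placeholder: local path or Hub id of this checkpoint
processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained(ckpt)

image = Image.open("pokemon.png").convert("RGB")  # placeholder image file
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# Generate a caption autoregressively from the image features.
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```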

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments equivalent is sketched after this list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
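
These settings correspond to the Hugging Face Trainer API; as a hedged reconstruction (not the author's actual script), they map onto TrainingArguments roughly as follows. The output_dir is an assumption not recorded in the card, and the Adam betas/epsilon shown are also the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="git-base-pokemon",   # assumption: not recorded in the card
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,   # gives the total train batch size of 2
    lr_scheduler_type="linear",
    num_train_epochs=50,
    adam_beta1=0.9,                  # matches the card; also the default
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```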

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.3974 | 0.7 | 50 | 4.5248 | 4.5234 |
| 2.2794 | 1.4 | 100 | 0.4021 | 5.1680 |
| 0.1697 | 2.1 | 150 | 0.1398 | 1.5039 |
| 0.0816 | 2.8 | 200 | 0.1458 | 9.9570 |
| 0.0556 | 3.5 | 250 | 0.1417 | 2.5234 |
| 0.043 | 4.2 | 300 | 0.1448 | 12.8086 |
| 0.0285 | 4.9 | 350 | 0.1469 | 7.3867 |
| 0.021 | 5.59 | 400 | 0.1505 | 13.0312 |
| 0.0205 | 6.29 | 450 | 0.1499 | 6.3281 |
| 0.0179 | 6.99 | 500 | 0.1527 | 13.0234 |
| 0.0157 | 7.69 | 550 | 0.1552 | 6.3047 |
| 0.015 | 8.39 | 600 | 0.1571 | 6.7656 |
| 0.015 | 9.09 | 650 | 0.1579 | 10.2305 |
| 0.0137 | 9.79 | 700 | 0.1585 | 11.4219 |
| 0.0132 | 10.49 | 750 | 0.1598 | 5.8320 |
| 0.0132 | 11.19 | 800 | 0.1591 | 12.0508 |
| 0.013 | 11.89 | 850 | 0.1612 | 7.9492 |
| 0.0117 | 12.59 | 900 | 0.1621 | 8.1758 |
| 0.0123 | 13.29 | 950 | 0.1632 | 12.9961 |
| 0.0125 | 13.99 | 1000 | 0.1613 | 10.2031 |
| 0.0116 | 14.69 | 1050 | 0.1642 | 5.7930 |
| 0.0112 | 15.38 | 1100 | 0.1636 | 6.1719 |
| 0.0112 | 16.08 | 1150 | 0.1652 | 7.2422 |
| 0.0107 | 16.78 | 1200 | 0.1644 | 12.9961 |
| 0.0108 | 17.48 | 1250 | 0.1661 | 5.0117 |
| 0.0109 | 18.18 | 1300 | 0.1658 | 7.3242 |
| 0.0108 | 18.88 | 1350 | 0.1691 | 6.0547 |
| 0.0101 | 19.58 | 1400 | 0.1690 | 6.9141 |
| 0.0103 | 20.28 | 1450 | 0.1692 | 7.1680 |
| 0.0107 | 20.98 | 1500 | 0.1702 | 12.3281 |
| 0.0099 | 21.68 | 1550 | 0.1708 | 10.75 |
| 0.0103 | 22.38 | 1600 | 0.1714 | 9.5586 |
| 0.0101 | 23.08 | 1650 | 0.1713 | 12.9805 |
| 0.0098 | 23.78 | 1700 | 0.1712 | 11.4883 |
| 0.0095 | 24.48 | 1750 | 0.1711 | 9.3320 |
| 0.0096 | 25.17 | 1800 | 0.1738 | 8.6523 |
| 0.0097 | 25.87 | 1850 | 0.1717 | 11.5078 |
| 0.0091 | 26.57 | 1900 | 0.1735 | 7.9570 |
| 0.0092 | 27.27 | 1950 | 0.1729 | 9.8242 |
| 0.0093 | 27.97 | 2000 | 0.1721 | 10.5078 |
| 0.0087 | 28.67 | 2050 | 0.1732 | 9.3906 |
| 0.009 | 29.37 | 2100 | 0.1760 | 8.0664 |
| 0.009 | 30.07 | 2150 | 0.1769 | 10.5312 |
| 0.0086 | 30.77 | 2200 | 0.1743 | 10.8555 |
| 0.0087 | 31.47 | 2250 | 0.1772 | 10.2188 |
| 0.0089 | 32.17 | 2300 | 0.1757 | 11.6016 |
| 0.0088 | 32.87 | 2350 | 0.1765 | 8.9297 |
| 0.0082 | 33.57 | 2400 | 0.1754 | 9.6484 |
| 0.0082 | 34.27 | 2450 | 0.1770 | 12.3711 |
| 0.0084 | 34.97 | 2500 | 0.1761 | 10.1523 |
| 0.0076 | 35.66 | 2550 | 0.1774 | 9.1055 |
| 0.0077 | 36.36 | 2600 | 0.1788 | 8.7852 |
| 0.0079 | 37.06 | 2650 | 0.1782 | 11.8086 |
| 0.0071 | 37.76 | 2700 | 0.1784 | 10.5234 |
| 0.0075 | 38.46 | 2750 | 0.1789 | 8.8828 |
| 0.0072 | 39.16 | 2800 | 0.1796 | 8.5664 |
| 0.0071 | 39.86 | 2850 | 0.1804 | 9.5391 |
| 0.0069 | 40.56 | 2900 | 0.1796 | 9.4062 |
| 0.0068 | 41.26 | 2950 | 0.1797 | 8.9883 |
| 0.0067 | 41.96 | 3000 | 0.1809 | 10.5273 |
| 0.0062 | 42.66 | 3050 | 0.1801 | 10.4531 |
| 0.0062 | 43.36 | 3100 | 0.1803 | 7.2188 |
| 0.0063 | 44.06 | 3150 | 0.1808 | 8.7930 |
| 0.0058 | 44.76 | 3200 | 0.1804 | 10.5156 |
| 0.0057 | 45.45 | 3250 | 0.1807 | 11.1328 |
| 0.0059 | 46.15 | 3300 | 0.1812 | 8.6875 |
| 0.0055 | 46.85 | 3350 | 0.1811 | 10.2773 |
| 0.0053 | 47.55 | 3400 | 0.1814 | 10.0391 |
| 0.0054 | 48.25 | 3450 | 0.1817 | 8.5391 |
| 0.0053 | 48.95 | 3500 | 0.1818 | 8.9688 |
| 0.005 | 49.65 | 3550 | 0.1817 | 9.0938 |
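
The Wer Score column presumably reports the word error rate of the generated captions against the reference captions, logged at each evaluation. A hedged sketch of computing the same metric with the evaluate library, using made-up example strings:

```python
import evaluate

wer = evaluate.load("wer")

# Hypothetical caption pairs purely for illustration.
predictions = ["a drawing of a blue pokemon with wings"]
references = ["a drawing of a small blue pokemon with large wings"]

# Word error rate: word-level edit distance over reference length; can exceed 1.0.
print(wer.compute(predictions=predictions, references=references))
```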

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3
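
When trying to reproduce these results, matching the versions above may matter; here is a small sketch that checks the local environment against the pins listed in this card:

```python
import datasets, tokenizers, torch, transformers

# Expected versions copied from the list above.
expected = {
    transformers: "4.30.2",
    torch: "2.0.1+cu118",
    datasets: "2.13.1",
    tokenizers: "0.13.3",
}
for module, version in expected.items():
    status = "OK" if module.__version__ == version else "MISMATCH"
    print(f"{module.__name__}: {module.__version__} (expected {version}) [{status}]")
```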