practica_2_model2

This model is a fine-tuned version of facebook/detr-resnet-50 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3766
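Since facebook/detr-resnet-50 is an object-detection model, the fine-tuned checkpoint can be exercised with the transformers object-detection pipeline. A minimal sketch, assuming the checkpoint is available on the Hub as seayala/practica_2_model2 (the repo id of this card) and that the label set is whatever the training data defined:

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned DETR checkpoint (repo id taken from this card).
detector = pipeline("object-detection", model="seayala/practica_2_model2")

# Any RGB image works; a blank canvas just keeps the example self-contained.
image = Image.new("RGB", (640, 480))

# Each detection is a dict with "score", "label", and a pixel-space "box".
results = detector(image, threshold=0.5)
for r in results:
    print(f'{r["label"]}: {r["score"]:.2f} at {r["box"]}')
```

On a blank image the list will typically be empty; pass a real photograph to see detections above the chosen confidence threshold.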

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 100
  • mixed_precision_training: Native AMP

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 17   | 0.9447          |
| No log        | 2.0   | 34   | 0.9111          |
| 1.0504        | 3.0   | 51   | 0.7995          |
| 1.0504        | 4.0   | 68   | 0.7310          |
| 1.0504        | 5.0   | 85   | 0.6850          |
| 0.7808        | 6.0   | 102  | 0.6665          |
| 0.7808        | 7.0   | 119  | 0.6733          |
| 0.7808        | 8.0   | 136  | 0.5836          |
| 0.6128        | 9.0   | 153  | 0.5371          |
| 0.6128        | 10.0  | 170  | 0.5584          |
| 0.6128        | 11.0  | 187  | 0.5198          |
| 0.504         | 12.0  | 204  | 0.4960          |
| 0.504         | 13.0  | 221  | 0.4524          |
| 0.504         | 14.0  | 238  | 0.5036          |
| 0.4324        | 15.0  | 255  | 0.4910          |
| 0.4324        | 16.0  | 272  | 0.4838          |
| 0.4324        | 17.0  | 289  | 0.4706          |
| 0.3837        | 18.0  | 306  | 0.4151          |
| 0.3837        | 19.0  | 323  | 0.4629          |
| 0.3837        | 20.0  | 340  | 0.4957          |
| 0.3792        | 21.0  | 357  | 0.3997          |
| 0.3792        | 22.0  | 374  | 0.5092          |
| 0.3792        | 23.0  | 391  | 0.4898          |
| 0.3622        | 24.0  | 408  | 0.4457          |
| 0.3622        | 25.0  | 425  | 0.4580          |
| 0.3622        | 26.0  | 442  | 0.4016          |
| 0.3735        | 27.0  | 459  | 0.4088          |
| 0.3735        | 28.0  | 476  | 0.4354          |
| 0.3735        | 29.0  | 493  | 0.4236          |
| 0.3106        | 30.0  | 510  | 0.4381          |
| 0.3106        | 31.0  | 527  | 0.3804          |
| 0.3106        | 32.0  | 544  | 0.3955          |
| 0.3048        | 33.0  | 561  | 0.4086          |
| 0.3048        | 34.0  | 578  | 0.4342          |
| 0.3048        | 35.0  | 595  | 0.3663          |
| 0.2901        | 36.0  | 612  | 0.4711          |
| 0.2901        | 37.0  | 629  | 0.4031          |
| 0.2901        | 38.0  | 646  | 0.3623          |
| 0.2798        | 39.0  | 663  | 0.4112          |
| 0.2798        | 40.0  | 680  | 0.3581          |
| 0.2798        | 41.0  | 697  | 0.3723          |
| 0.2669        | 42.0  | 714  | 0.4367          |
| 0.2669        | 43.0  | 731  | 0.3548          |
| 0.2669        | 44.0  | 748  | 0.3789          |
| 0.2486        | 45.0  | 765  | 0.3873          |
| 0.2486        | 46.0  | 782  | 0.3687          |
| 0.2486        | 47.0  | 799  | 0.3990          |
| 0.2401        | 48.0  | 816  | 0.3698          |
| 0.2401        | 49.0  | 833  | 0.3697          |
| 0.2432        | 50.0  | 850  | 0.4309          |
| 0.2432        | 51.0  | 867  | 0.3439          |
| 0.2432        | 52.0  | 884  | 0.3556          |
| 0.2319        | 53.0  | 901  | 0.3514          |
| 0.2319        | 54.0  | 918  | 0.3402          |
| 0.2319        | 55.0  | 935  | 0.3706          |
| 0.2179        | 56.0  | 952  | 0.3511          |
| 0.2179        | 57.0  | 969  | 0.3479          |
| 0.2179        | 58.0  | 986  | 0.4039          |
| 0.2192        | 59.0  | 1003 | 0.3684          |
| 0.2192        | 60.0  | 1020 | 0.3531          |
| 0.2192        | 61.0  | 1037 | 0.3475          |
| 0.2087        | 62.0  | 1054 | 0.3374          |
| 0.2087        | 63.0  | 1071 | 0.3321          |
| 0.2087        | 64.0  | 1088 | 0.3440          |
| 0.2001        | 65.0  | 1105 | 0.3741          |
| 0.2001        | 66.0  | 1122 | 0.3493          |
| 0.2001        | 67.0  | 1139 | 0.4159          |
| 0.1992        | 68.0  | 1156 | 0.3376          |
| 0.1992        | 69.0  | 1173 | 0.3495          |
| 0.1992        | 70.0  | 1190 | 0.3530          |
| 0.1899        | 71.0  | 1207 | 0.3254          |
| 0.1899        | 72.0  | 1224 | 0.3796          |
| 0.1899        | 73.0  | 1241 | 0.3437          |
| 0.1861        | 74.0  | 1258 | 0.3489          |
| 0.1861        | 75.0  | 1275 | 0.3398          |
| 0.1861        | 76.0  | 1292 | 0.3929          |
| 0.1896        | 77.0  | 1309 | 0.3382          |
| 0.1896        | 78.0  | 1326 | 0.3609          |
| 0.1896        | 79.0  | 1343 | 0.3501          |
| 0.1859        | 80.0  | 1360 | 0.3392          |
| 0.1859        | 81.0  | 1377 | 0.3448          |
| 0.1859        | 82.0  | 1394 | 0.3621          |
| 0.1808        | 83.0  | 1411 | 0.3525          |
| 0.1808        | 84.0  | 1428 | 0.3553          |
| 0.1808        | 85.0  | 1445 | 0.3645          |
| 0.1843        | 86.0  | 1462 | 0.3306          |
| 0.1843        | 87.0  | 1479 | 0.3627          |
| 0.1843        | 88.0  | 1496 | 0.3497          |
| 0.1743        | 89.0  | 1513 | 0.3892          |
| 0.1743        | 90.0  | 1530 | 0.3410          |
| 0.1743        | 91.0  | 1547 | 0.3568          |
| 0.1689        | 92.0  | 1564 | 0.3409          |
| 0.1689        | 93.0  | 1581 | 0.3505          |
| 0.1689        | 94.0  | 1598 | 0.3536          |
| 0.1675        | 95.0  | 1615 | 0.3792          |
| 0.1675        | 96.0  | 1632 | 0.3321          |
| 0.1675        | 97.0  | 1649 | 0.3489          |
| 0.1734        | 98.0  | 1666 | 0.3374          |
| 0.1734        | 99.0  | 1683 | 0.3417          |
| 0.1689        | 100.0 | 1700 | 0.3766          |

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0
Model size: 41.6M parameters (F32 tensors, Safetensors format)