---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: delivery_truck_classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 1.0
---

# delivery_truck_classification

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0290
- Accuracy: 1.0
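
This card does not include usage code, so here is a minimal inference sketch using the `transformers` image-classification pipeline. The repo id and the local image path are assumptions for illustration.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (repo id assumed from this card's name).
classifier = pipeline(
    "image-classification",
    model="JEdward7777/delivery_truck_classification",
)

# Classify a local image; "truck.jpg" is a placeholder path.
for pred in classifier("truck.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```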

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
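
The training data itself is not documented here, but the `imagefolder` builder in the `datasets` library constructs a dataset from class-named subdirectories of images. A minimal sketch, assuming a local directory layout; the `data_dir` path is a placeholder.

```python
from datasets import load_dataset

# "imagefolder" infers labels from a directory tree such as:
#   delivery_trucks/train/<class_name>/<image files>
dataset = load_dataset("imagefolder", data_dir="delivery_trucks")
print(dataset["train"].features)  # an "image" column plus a "label" ClassLabel
```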

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
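
For reference, these values roughly correspond to a `TrainingArguments` configuration like the sketch below; the output directory is a placeholder, and anything not listed above follows library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="delivery_truck_classification",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=60,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults in transformers.
)
```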

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.94  | 4    | 0.0290          | 1.0      |
| No log        | 1.94  | 8    | 0.0290          | 1.0      |
| No log        | 2.94  | 12   | 0.0290          | 1.0      |
| No log        | 3.94  | 16   | 0.0290          | 1.0      |
| 0.2595        | 4.94  | 20   | 0.0290          | 1.0      |
| 0.2595        | 5.94  | 24   | 0.0290          | 1.0      |
| 0.2595        | 6.94  | 28   | 0.0290          | 1.0      |
| 0.2595        | 7.94  | 32   | 0.0290          | 1.0      |
| 0.2595        | 8.94  | 36   | 0.0290          | 1.0      |
| 0.2679        | 9.94  | 40   | 0.0290          | 1.0      |
| 0.2679        | 10.94 | 44   | 0.0290          | 1.0      |
| 0.2679        | 11.94 | 48   | 0.0290          | 1.0      |
| 0.2679        | 12.94 | 52   | 0.0290          | 1.0      |
| 0.2679        | 13.94 | 56   | 0.0290          | 1.0      |
| 0.275         | 14.94 | 60   | 0.0290          | 1.0      |
| 0.275         | 15.94 | 64   | 0.0290          | 1.0      |
| 0.275         | 16.94 | 68   | 0.0290          | 1.0      |
| 0.275         | 17.94 | 72   | 0.0290          | 1.0      |
| 0.275         | 18.94 | 76   | 0.0290          | 1.0      |
| 0.248         | 19.94 | 80   | 0.0290          | 1.0      |
| 0.248         | 20.94 | 84   | 0.0290          | 1.0      |
| 0.248         | 21.94 | 88   | 0.0290          | 1.0      |
| 0.248         | 22.94 | 92   | 0.0290          | 1.0      |
| 0.248         | 23.94 | 96   | 0.0290          | 1.0      |
| 0.2669        | 24.94 | 100  | 0.0290          | 1.0      |
| 0.2669        | 25.94 | 104  | 0.0290          | 1.0      |
| 0.2669        | 26.94 | 108  | 0.0290          | 1.0      |
| 0.2669        | 27.94 | 112  | 0.0290          | 1.0      |
| 0.2669        | 28.94 | 116  | 0.0290          | 1.0      |
| 0.2589        | 29.94 | 120  | 0.0290          | 1.0      |
| 0.2589        | 30.94 | 124  | 0.0290          | 1.0      |
| 0.2589        | 31.94 | 128  | 0.0290          | 1.0      |
| 0.2589        | 32.94 | 132  | 0.0290          | 1.0      |
| 0.2589        | 33.94 | 136  | 0.0290          | 1.0      |
| 0.278         | 34.94 | 140  | 0.0290          | 1.0      |
| 0.278         | 35.94 | 144  | 0.0290          | 1.0      |
| 0.278         | 36.94 | 148  | 0.0290          | 1.0      |
| 0.278         | 37.94 | 152  | 0.0290          | 1.0      |
| 0.278         | 38.94 | 156  | 0.0290          | 1.0      |
| 0.273         | 39.94 | 160  | 0.0290          | 1.0      |
| 0.273         | 40.94 | 164  | 0.0290          | 1.0      |
| 0.273         | 41.94 | 168  | 0.0290          | 1.0      |
| 0.273         | 42.94 | 172  | 0.0290          | 1.0      |
| 0.273         | 43.94 | 176  | 0.0290          | 1.0      |
| 0.2535        | 44.94 | 180  | 0.0290          | 1.0      |
| 0.2535        | 45.94 | 184  | 0.0290          | 1.0      |
| 0.2535        | 46.94 | 188  | 0.0290          | 1.0      |
| 0.2535        | 47.94 | 192  | 0.0290          | 1.0      |
| 0.2535        | 48.94 | 196  | 0.0290          | 1.0      |
| 0.2762        | 49.94 | 200  | 0.0290          | 1.0      |
| 0.2762        | 50.94 | 204  | 0.0290          | 1.0      |
| 0.2762        | 51.94 | 208  | 0.0290          | 1.0      |
| 0.2762        | 52.94 | 212  | 0.0290          | 1.0      |
| 0.2762        | 53.94 | 216  | 0.0290          | 1.0      |
| 0.2798        | 54.94 | 220  | 0.0290          | 1.0      |
| 0.2798        | 55.94 | 224  | 0.0290          | 1.0      |
| 0.2798        | 56.94 | 228  | 0.0290          | 1.0      |
| 0.2798        | 57.94 | 232  | 0.0290          | 1.0      |
| 0.2798        | 58.94 | 236  | 0.0290          | 1.0      |
| 0.2709        | 59.94 | 240  | 0.0290          | 1.0      |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2