---
license: other
base_model: sayeed99/segformer-b3-fashion
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b3-fashion-finetuned-polo-segments-v1.4
  results: []
---


# segformer-b3-fashion-finetuned-polo-segments-v1.4

This model is a fine-tuned version of [sayeed99/segformer-b3-fashion](https://huggingface.co/sayeed99/segformer-b3-fashion) on the sshk/polo-badges-segmentation dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0547
- Mean Iou: 0.7482
- Mean Accuracy: 0.9206
- Overall Accuracy: 0.9823
- Accuracy Unlabeled: nan
- Accuracy Collar: 0.8807
- Accuracy Polo: 0.9847
- Accuracy Lines-cuff: 0.7598
- Accuracy Lines-chest: 0.9230
- Accuracy Human: 0.9823
- Accuracy Background: 0.9929
- Accuracy Tape: nan
- Iou Unlabeled: nan
- Iou Collar: 0.8276
- Iou Polo: 0.9580
- Iou Lines-cuff: 0.6735
- Iou Lines-chest: 0.8230
- Iou Human: 0.9680
- Iou Background: 0.9872
- Iou Tape: 0.0
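
The reported mean IoU and mean accuracy are consistent with an unweighted average of the per-class values above, with `nan` entries excluded but the untrained `tape` class (IoU 0.0) included. A quick sanity check, with the values copied from the list above:

```python
import math

# Per-class evaluation values from the list above; nan marks a class that the
# metric excluded from the mean (as mean_iou-style metrics typically do).
iou = {"unlabeled": math.nan, "collar": 0.8276, "polo": 0.9580,
       "lines-cuff": 0.6735, "lines-chest": 0.8230, "human": 0.9680,
       "background": 0.9872, "tape": 0.0}
acc = {"unlabeled": math.nan, "collar": 0.8807, "polo": 0.9847,
       "lines-cuff": 0.7598, "lines-chest": 0.9230, "human": 0.9823,
       "background": 0.9929, "tape": math.nan}

def nanmean(values):
    """Average the values, skipping nan entries."""
    vals = [v for v in values if not math.isnan(v)]
    return sum(vals) / len(vals)

print(round(nanmean(iou.values()), 4))  # 0.7482, the reported Mean Iou
print(round(nanmean(acc.values()), 4))  # 0.9206, the reported Mean Accuracy
```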

## Model description

This is a SegFormer-B3 semantic segmentation model, fine-tuned from the fashion-domain checkpoint [sayeed99/segformer-b3-fashion](https://huggingface.co/sayeed99/segformer-b3-fashion) to segment polo shirts into parts. It predicts a per-pixel label over eight classes: unlabeled, collar, polo (shirt body), lines-cuff, lines-chest, human, background, and tape.

## Intended uses & limitations

The model is intended for part-level segmentation of polo shirts, in the domain covered by the sshk/polo-badges-segmentation dataset. Known limitations:

- The `tape` class was never learned: its IoU is 0.0 at every logged step. This is also why the mean IoU drops from ~0.87 to ~0.74 at epoch 15, the point from which `tape` is counted in the average.
- The model has only been evaluated on the sshk/polo-badges-segmentation evaluation set; performance on other garment types or image domains is untested.
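
A minimal inference sketch, assuming the standard `transformers` SegFormer API (`SegformerImageProcessor` / `SegformerForSemanticSegmentation`). The repo ids in the commented usage are illustrative, since this card does not state the checkpoint's Hub path; the post-processing helper itself is generic:

```python
import torch
import torch.nn.functional as F

def logits_to_mask(logits: torch.Tensor, size: tuple) -> torch.Tensor:
    """Upsample SegFormer logits (B, num_labels, H/4, W/4) to the original
    image size and take a per-pixel argmax to obtain a class-id mask."""
    upsampled = F.interpolate(logits, size=size, mode="bilinear",
                              align_corners=False)
    return upsampled.argmax(dim=1)  # (B, H, W) integer class ids

# Typical usage (downloads weights from the Hub; repo ids are illustrative):
#   from PIL import Image
#   from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
#   processor = SegformerImageProcessor.from_pretrained("sayeed99/segformer-b3-fashion")
#   model = SegformerForSemanticSegmentation.from_pretrained("path/to/this-checkpoint")
#   image = Image.open("polo.jpg")
#   inputs = processor(images=image, return_tensors="pt")
#   with torch.no_grad():
#       mask = logits_to_mask(model(**inputs).logits, image.size[::-1])
```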

## Training and evaluation data

The model was fine-tuned and evaluated on the sshk/polo-badges-segmentation dataset. No further details about the dataset splits or annotation process are available in this card.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
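
The list above maps directly onto `transformers.TrainingArguments`; a sketch of that configuration is below. The `output_dir` is illustrative, and the Adam betas/epsilon and linear LR schedule shown in the list are the `Trainer` defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; everything not listed here
# (Adam betas=(0.9, 0.999), epsilon=1e-08, linear schedule) is the default.
training_args = TrainingArguments(
    output_dir="segformer-b3-fashion-finetuned-polo-segments-v1.4",  # illustrative
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```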

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Collar | Accuracy Polo | Accuracy Lines-cuff | Accuracy Lines-chest | Accuracy Human | Accuracy Background | Accuracy Tape | Iou Unlabeled | Iou Collar | Iou Polo | Iou Lines-cuff | Iou Lines-chest | Iou Human | Iou Background | Iou Tape |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:---------------:|:-------------:|:-------------------:|:--------------------:|:--------------:|:-------------------:|:-------------:|:-------------:|:----------:|:--------:|:--------------:|:---------------:|:---------:|:--------------:|:--------:|
| 0.206         | 2.5   | 20   | 0.1808          | 0.5772   | 0.6083        | 0.9552           | nan                | 0.6933          | 0.9875        | 0.0                 | 0.0304               | 0.9818         | 0.9566              | nan           | nan           | 0.6406     | 0.9100   | 0.0            | 0.0276          | 0.9299    | 0.9553         | nan      |
| 0.0873        | 5.0   | 40   | 0.0882          | 0.7806   | 0.8207        | 0.9768           | nan                | 0.8457          | 0.9848        | 0.2359              | 0.8896               | 0.9780         | 0.9904              | nan           | nan           | 0.7808     | 0.9460   | 0.2351         | 0.7783          | 0.9605    | 0.9827         | nan      |
| 0.0648        | 7.5   | 60   | 0.0712          | 0.8502   | 0.8900        | 0.9794           | nan                | 0.8586          | 0.9880        | 0.6659              | 0.8584               | 0.9796         | 0.9892              | nan           | nan           | 0.8059     | 0.9499   | 0.6054         | 0.7918          | 0.9642    | 0.9842         | nan      |
| 0.0607        | 10.0  | 80   | 0.0631          | 0.8556   | 0.8957        | 0.9806           | nan                | 0.8586          | 0.9856        | 0.7087              | 0.8477               | 0.9829         | 0.9907              | nan           | nan           | 0.8055     | 0.9539   | 0.6394         | 0.7834          | 0.9659    | 0.9856         | nan      |
| 0.057         | 12.5  | 100  | 0.0605          | 0.8661   | 0.9135        | 0.9815           | nan                | 0.8708          | 0.9818        | 0.7296              | 0.9224               | 0.9855         | 0.9908              | nan           | nan           | 0.8148     | 0.9570   | 0.6577         | 0.8144          | 0.9669    | 0.9859         | nan      |
| 0.0458        | 15.0  | 120  | 0.0573          | 0.7446   | 0.9169        | 0.9819           | nan                | 0.8925          | 0.9838        | 0.7505              | 0.9009               | 0.9792         | 0.9949              | nan           | nan           | 0.8244     | 0.9581   | 0.6600         | 0.8164          | 0.9669    | 0.9863         | 0.0      |
| 0.0413        | 17.5  | 140  | 0.0587          | 0.7428   | 0.9196        | 0.9818           | nan                | 0.8818          | 0.9820        | 0.7483              | 0.9299               | 0.9823         | 0.9932              | nan           | nan           | 0.8217     | 0.9571   | 0.6671         | 0.7997          | 0.9673    | 0.9869         | 0.0      |
| 0.0449        | 20.0  | 160  | 0.0542          | 0.7468   | 0.9202        | 0.9826           | nan                | 0.8850          | 0.9833        | 0.7516              | 0.9248               | 0.9842         | 0.9925              | nan           | nan           | 0.8270     | 0.9590   | 0.6678         | 0.8179          | 0.9688    | 0.9873         | 0.0      |
| 0.0394        | 22.5  | 180  | 0.0558          | 0.7468   | 0.9208        | 0.9819           | nan                | 0.8934          | 0.9853        | 0.7528              | 0.9207               | 0.9808         | 0.9919              | nan           | nan           | 0.8298     | 0.9564   | 0.6657         | 0.8214          | 0.9672    | 0.9869         | 0.0      |
| 0.0472        | 25.0  | 200  | 0.0549          | 0.7474   | 0.9185        | 0.9823           | nan                | 0.8792          | 0.9854        | 0.7531              | 0.9186               | 0.9828         | 0.9922              | nan           | nan           | 0.8274     | 0.9577   | 0.6681         | 0.8233          | 0.9681    | 0.9871         | 0.0      |
| 0.0452        | 27.5  | 220  | 0.0547          | 0.7482   | 0.9217        | 0.9823           | nan                | 0.8837          | 0.9846        | 0.7622              | 0.9247               | 0.9823         | 0.9927              | nan           | nan           | 0.8287     | 0.9580   | 0.6733         | 0.8221          | 0.9681    | 0.9871         | 0.0      |
| 0.0392        | 30.0  | 240  | 0.0547          | 0.7482   | 0.9206        | 0.9823           | nan                | 0.8807          | 0.9847        | 0.7598              | 0.9230               | 0.9823         | 0.9929              | nan           | nan           | 0.8276     | 0.9580   | 0.6735         | 0.8230          | 0.9680    | 0.9872         | 0.0      |


### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1