---
base_model: nvidia/mit-b0
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-SixrayKnife8-19-2024
  results: []
---

# segformer-b0-finetuned-segments-SixrayKnife8-19-2024

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the saad7489/SixraygunTest dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6355
- Mean Iou: 0.5008
- Mean Accuracy: 0.7954
- Overall Accuracy: 0.7906
- Accuracy Bkg: nan
- Accuracy Knife: 0.7186
- Accuracy Gun: 0.8722
- Iou Bkg: 0.0
- Iou Knife: 0.6915
- Iou Gun: 0.8110

## Model description

This is a SegFormer semantic-segmentation model built on the lightweight MiT-b0 encoder and fine-tuned on a SIXray-derived dataset of X-ray security-screening images. It predicts a per-pixel label over three classes: `bkg` (background), `knife`, and `gun`.
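
A minimal inference sketch, assuming the checkpoint is published on the Hub; the repo id and image path below are placeholders:

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
repo_id = "saad7489/segformer-b0-finetuned-segments-SixrayKnife8-19-2024"

processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("scan.png").convert("RGB")  # placeholder X-ray image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of label ids
```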

## Intended uses & limitations

The model is intended for experimentation with prohibited-item (knife and gun) segmentation in X-ray security imagery. Limitations worth noting: the background class is not meaningfully scored in the reported metrics (accuracy `nan`, IoU 0.0), the fine-tuning run was short (10 epochs, 140 optimization steps), and the model has not been validated for operational security screening.

## Training and evaluation data

The model was fine-tuned and evaluated on the [saad7489/SixraygunTest](https://huggingface.co/datasets/saad7489/SixraygunTest) dataset from the Hugging Face Hub, which is annotated with three segmentation labels: `bkg`, `knife`, and `gun`.
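
A sketch of pulling the dataset from the Hub before preprocessing; the split layout, column names, and label ids below are assumptions and should be checked against the dataset itself:

```python
from datasets import load_dataset

# Dataset id taken from this card; inspect splits/columns before wiring up training.
ds = load_dataset("saad7489/SixraygunTest")
print(ds)

# Label ids assumed from the metric names above (bkg / knife / gun).
id2label = {0: "bkg", 1: "knife", 2: "gun"}
label2id = {v: k for k, v in id2label.items()}
```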

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
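
The hyperparameters above roughly correspond to a `TrainingArguments`/`Trainer` setup along these lines; this is a sketch, not the actual training script, and the output directory, label ids, and evaluation cadence are assumptions (the 20-step cadence is inferred from the results table):

```python
from transformers import SegformerForSemanticSegmentation, Trainer, TrainingArguments

# Label mapping assumed from the metrics reported in this card (bkg / knife / gun).
id2label = {0: "bkg", 1: "knife", 2: "gun"}
label2id = {v: k for k, v in id2label.items()}

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
)

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-SixrayKnife8-19-2024",
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas/epsilon are the library defaults
    num_train_epochs=10,
    eval_strategy="steps",        # evaluate every 20 steps (inferred, not stated)
    eval_steps=20,
    logging_steps=20,
    remove_unused_columns=False,
)

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_ds,  # preprocessed train/eval splits (not shown here)
#     eval_dataset=eval_ds,
# )
# trainer.train()
```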

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bkg | Accuracy Knife | Accuracy Gun | Iou Bkg | Iou Knife | Iou Gun |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:--------------:|:------------:|:-------:|:---------:|:-------:|
| 0.7945        | 1.4286 | 20   | 0.7932          | 0.5023   | 0.8446        | 0.8389           | nan          | 0.7531         | 0.9361       | 0.0     | 0.7186    | 0.7883  |
| 0.7385        | 2.8571 | 40   | 0.7324          | 0.5150   | 0.8445        | 0.8404           | nan          | 0.7787         | 0.9103       | 0.0     | 0.7375    | 0.8074  |
| 0.7139        | 4.2857 | 60   | 0.7152          | 0.5033   | 0.8256        | 0.8200           | nan          | 0.7358         | 0.9155       | 0.0     | 0.7072    | 0.8027  |
| 0.7405        | 5.7143 | 80   | 0.6747          | 0.4953   | 0.7972        | 0.7917           | nan          | 0.7078         | 0.8866       | 0.0     | 0.6785    | 0.8075  |
| 0.6666        | 7.1429 | 100  | 0.6442          | 0.4937   | 0.7919        | 0.7860           | nan          | 0.6964         | 0.8874       | 0.0     | 0.6723    | 0.8089  |
| 0.6357        | 8.5714 | 120  | 0.6210          | 0.4957   | 0.7874        | 0.7823           | nan          | 0.7059         | 0.8688       | 0.0     | 0.6794    | 0.8076  |
| 0.6548        | 10.0   | 140  | 0.6355          | 0.5008   | 0.7954        | 0.7906           | nan          | 0.7186         | 0.8722       | 0.0     | 0.6915    | 0.8110  |
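
The `nan` background accuracy and 0.0 background IoU suggest that background pixels were excluded from scoring (e.g. remapped to an ignore index). Metrics in this format are commonly computed with the `mean_iou` metric from the `evaluate` library; a minimal sketch with toy label maps, not the card's actual evaluation code:

```python
import evaluate
import numpy as np

# Toy predicted / reference label maps (0 = bkg, 1 = knife, 2 = gun) -- placeholders.
predictions = [np.array([[1, 1, 0], [2, 2, 0]])]
references = [np.array([[1, 0, 0], [2, 2, 2]])]

mean_iou = evaluate.load("mean_iou")
results = mean_iou.compute(
    predictions=predictions,
    references=references,
    num_labels=3,
    ignore_index=255,   # label id to ignore during scoring (an assumption)
    reduce_labels=False,
)
print(results["mean_iou"], results["per_category_iou"])
```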


### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1