smartgmin committed
Commit eede340
1 parent: c8bb577

Upload TFViTForImageClassification

Files changed (3):
  1. README.md +65 -0
  2. config.json +34 -0
  3. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,65 @@
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Entrnal_eyes_data_4class_resize_224_model
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# Entrnal_eyes_data_4class_resize_224_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results after the final training epoch:
- Train Loss: 0.0823
- Train Accuracy: 0.9261
- Train Top-3-accuracy: 0.9972
- Validation Loss: 0.2588
- Validation Accuracy: 0.9299
- Validation Top-3-accuracy: 0.9974
- Epoch: 6

## Model description

More information needed

## Intended uses & limitations

More information needed

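No usage guidance ships with this card, so here is a minimal inference sketch. The repo id is inferred from the committer and model name (an assumption), and because this commit includes no preprocessor_config.json, the image processor is borrowed from the base checkpoint.

```python
import tensorflow as tf
from PIL import Image
from transformers import TFViTForImageClassification, ViTImageProcessor

# Repo id assumed from the committer and model name (hypothetical).
repo_id = "smartgmin/Entrnal_eyes_data_4class_resize_224_model"

# No preprocessor_config.json is included in this commit, so reuse the base
# model's image processor (224x224 resize plus ViT normalization).
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = TFViTForImageClassification.from_pretrained(repo_id)

image = Image.open("fundus.jpg").convert("RGB")  # placeholder input path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])  # cataract / diabetic_retinopathy / glaucoma / normal
```
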
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch reconstructing this setup follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 651, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

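The serialized config above is what `transformers.create_optimizer` emits for these values (AdamWeightDecay, linear PolynomialDecay from 3e-5 to 0 over 651 steps, no warmup, weight decay 0.01). The training script itself is not part of this commit, so the following is only a plausible reconstruction:

```python
import tensorflow as tf
from transformers import TFViTForImageClassification, create_optimizer

# AdamWeightDecay with a linear decay from 3e-5 to 0 over 651 steps,
# matching the serialized optimizer config above.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=651,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)

# A 4-label head on the same base checkpoint (labels as in config.json).
model = TFViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=4
)

# Metrics mirroring the logged columns; compiling without an explicit loss
# lets the transformers TF model fall back to its internal loss.
model.compile(
    optimizer=optimizer,
    metrics=[
        tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
        tf.keras.metrics.SparseTopKCategoricalAccuracy(k=3, name="top-3-accuracy"),
    ],
)
```
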
### Training results

| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.7993     | 0.6130         | 0.9518               | 0.5184          | 0.7611              | 0.9833                    | 0     |
| 0.3482     | 0.8052         | 0.9881               | 0.3126          | 0.8382              | 0.9913                    | 1     |
| 0.2260     | 0.8597         | 0.9929               | 0.2990          | 0.8739              | 0.9942                    | 2     |
| 0.1576     | 0.8861         | 0.9949               | 0.2597          | 0.8954              | 0.9956                    | 3     |
| 0.1191     | 0.9041         | 0.9960               | 0.2642          | 0.9106              | 0.9964                    | 4     |
| 0.0933     | 0.9167         | 0.9967               | 0.2598          | 0.9216              | 0.9970                    | 5     |
| 0.0823     | 0.9261         | 0.9972               | 0.2588          | 0.9299              | 0.9974                    | 6     |


### Framework versions

- Transformers 4.44.2
- TensorFlow 2.15.1
- Datasets 3.0.0
- Tokenizers 0.19.1
config.json ADDED
@@ -0,0 +1,34 @@
{
  "_name_or_path": "google/vit-base-patch16-224-in21k",
  "architectures": [
    "ViTForImageClassification"
  ],
  "attention_probs_dropout_prob": 0.0,
  "encoder_stride": 16,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_size": 768,
  "id2label": {
    "0": "cataract",
    "1": "diabetic_retinopathy",
    "2": "glaucoma",
    "3": "normal"
  },
  "image_size": 224,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "cataract": "0",
    "diabetic_retinopathy": "1",
    "glaucoma": "2",
    "normal": "3"
  },
  "layer_norm_eps": 1e-12,
  "model_type": "vit",
  "num_attention_heads": 12,
  "num_channels": 3,
  "num_hidden_layers": 12,
  "patch_size": 16,
  "qkv_bias": true,
  "transformers_version": "4.44.2"
}
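The id2label map defines the four-class head. A quick sanity check that the config loads as expected (repo id assumed, as above):

```python
from transformers import ViTConfig

# Repo id assumed from the committer and model name (hypothetical).
config = ViTConfig.from_pretrained("smartgmin/Entrnal_eyes_data_4class_resize_224_model")

# transformers normalizes id2label keys to ints on load; note the file's
# label2id values are strings ("0".."3"), which is tolerated.
print(config.id2label)    # {0: 'cataract', 1: 'diabetic_retinopathy', 2: 'glaucoma', 3: 'normal'}
print(config.num_labels)  # 4
```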
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31d21472fc2e20ede8f639effc397b85f0711d53f17c47ddb51ad1f7f75d7c67
size 343475896
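What is committed here is a Git LFS pointer, not the weights themselves; the actual tf_model.h5 (343,475,896 bytes, roughly 328 MiB) lives in LFS storage and is resolved on download. A sketch of fetching it with huggingface_hub (repo id assumed):

```python
from huggingface_hub import hf_hub_download

# Repo id assumed from the committer and model name (hypothetical).
path = hf_hub_download(
    repo_id="smartgmin/Entrnal_eyes_data_4class_resize_224_model",
    filename="tf_model.h5",
)
print(path)  # local cache path to the 343,475,896-byte weights file
```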