BadreddineHug committed on
Commit 724b7ab · 1 Parent(s): 2257ef1

End of training

README.md ADDED
@@ -0,0 +1,79 @@
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LayoutLM_2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# LayoutLM_2

This model is a fine-tuned version of [BadreddineHug/LayoutLM_1](https://huggingface.co/BadreddineHug/LayoutLM_1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4785
- Precision: 0.6599
- Recall: 0.7638
- F1: 0.7080
- Accuracy: 0.9097

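The card does not show how the checkpoint is meant to be called. Below is a minimal inference sketch, assuming this is a LayoutLMv3-style token classifier whose repo ships a processor config and whose processor runs its own OCR (which needs `pytesseract`/Tesseract installed); the processor type and the `document.png` input are assumptions, not taken from the card.

```python
# Minimal inference sketch (assumptions noted above), not the author's documented usage.
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

model_id = "BadreddineHug/LayoutLM_2"
processor = AutoProcessor.from_pretrained(model_id)  # LayoutLMv3 processors apply OCR by default
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")  # hypothetical scanned page
encoding = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

# One predicted label id per token, mapped through the model's label names.
predicted_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```
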
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500

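As a rough guide for reproduction, the values above map onto `transformers.TrainingArguments` as in the sketch below; the output directory and the 100-step evaluation cadence are assumptions inferred from the results table, not stated in the card.

```python
# Hedged reconstruction of the reported hyperparameters; everything not listed
# in the card (output_dir, eval cadence) is an illustrative assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="LayoutLM_2",          # assumed output directory
    learning_rate=1e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    max_steps=1500,                   # "training_steps: 1500"
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",      # assumed: the table reports metrics every 100 steps
    eval_steps=100,
)
# These arguments would then be passed to transformers.Trainer together with the
# model, datasets, and a compute_metrics function, none of which the card documents.
```
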
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 3.7 | 100 | 0.4266 | 0.6597 | 0.7480 | 0.7011 | 0.9110 |
| No log | 7.41 | 200 | 0.4415 | 0.6575 | 0.7559 | 0.7033 | 0.9084 |
| No log | 11.11 | 300 | 0.4478 | 0.6575 | 0.7559 | 0.7033 | 0.9084 |
| No log | 14.81 | 400 | 0.4481 | 0.6690 | 0.7638 | 0.7132 | 0.9123 |
| 0.0237 | 18.52 | 500 | 0.4551 | 0.6644 | 0.7638 | 0.7106 | 0.9097 |
| 0.0237 | 22.22 | 600 | 0.4542 | 0.6736 | 0.7638 | 0.7159 | 0.9097 |
| 0.0237 | 25.93 | 700 | 0.4536 | 0.6783 | 0.7638 | 0.7185 | 0.9123 |
| 0.0237 | 29.63 | 800 | 0.4662 | 0.6644 | 0.7638 | 0.7106 | 0.9097 |
| 0.0237 | 33.33 | 900 | 0.4716 | 0.6486 | 0.7559 | 0.6982 | 0.9071 |
| 0.0146 | 37.04 | 1000 | 0.4644 | 0.6577 | 0.7717 | 0.7101 | 0.9097 |
| 0.0146 | 40.74 | 1100 | 0.4732 | 0.6599 | 0.7638 | 0.7080 | 0.9097 |
| 0.0146 | 44.44 | 1200 | 0.4727 | 0.6667 | 0.7717 | 0.7153 | 0.9110 |
| 0.0146 | 48.15 | 1300 | 0.4774 | 0.6531 | 0.7559 | 0.7007 | 0.9097 |
| 0.0146 | 51.85 | 1400 | 0.4780 | 0.6599 | 0.7638 | 0.7080 | 0.9097 |
| 0.0128 | 55.56 | 1500 | 0.4785 | 0.6599 | 0.7638 | 0.7080 | 0.9097 |

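The per-checkpoint precision/recall/F1/accuracy above are the entity-level scores typically produced by a seqeval-based `compute_metrics` callback; a sketch of such a callback is below, using the `evaluate` library (with the `seqeval` package installed) and a hypothetical BIO label list, since the card does not name the label set.

```python
# Sketch of a seqeval-style compute_metrics callback; label_list is hypothetical.
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-FIELD", "I-FIELD"]  # placeholder labels, not from the card

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special/padded tokens, which the data collator labels with -100.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```
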
### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:85d81b3369861eb45c77d93b51eb3378067e5ff873cf331441357030a7537d8d
+oid sha256:28056b5e8dfe2e6a6a7348e290870599ccd8dc78b55e9d26a982d318e2d706ea
 size 503789233
runs/Aug26_18-10-58_a211cf088ab3/events.out.tfevents.1693073468.a211cf088ab3.4821.3 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0f27e6d1c68a2b9ec601f5f121ef56d0fee1802a78b476871710b96fe20b2747
-size 12417
+oid sha256:7d8c5af8db3e34664fdfe928827b853cd2fbb57a56d9b8ef033f1fb29b3eeff6
+size 12771