liwii committed on
Commit 9e86dfc · verified · 1 Parent(s): 94b4baa

factual-consistency-classification-ja

Files changed (2):
  1. README.md +33 -13
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [line-corporation/line-distilbert-base-japanese](https://huggingface.co/line-corporation/line-distilbert-base-japanese) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9166
- - Accuracy: 0.4707
+ - Loss: 0.8034
+ - Accuracy: 0.6230
  
  ## Model description
  
@@ -44,22 +44,42 @@ The following hyperparameters were used during training:
  - distributed_type: tpu
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 10
+ - num_epochs: 30
  
  ### Training results
  
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 306 | 1.0543 | 0.2891 |
- | 1.0487 | 2.0 | 612 | 1.0134 | 0.3301 |
- | 1.0487 | 3.0 | 918 | 0.9846 | 0.3457 |
- | 0.9961 | 4.0 | 1224 | 0.9620 | 0.3965 |
- | 0.963 | 5.0 | 1530 | 0.9462 | 0.4258 |
- | 0.963 | 6.0 | 1836 | 0.9326 | 0.4688 |
- | 0.9401 | 7.0 | 2142 | 0.9254 | 0.4707 |
- | 0.9401 | 8.0 | 2448 | 0.9208 | 0.4688 |
- | 0.9295 | 9.0 | 2754 | 0.9176 | 0.4707 |
- | 0.9232 | 10.0 | 3060 | 0.9166 | 0.4707 |
+ | No log | 1.0 | 306 | 1.0539 | 0.2891 |
+ | 1.0502 | 2.0 | 612 | 1.0074 | 0.3203 |
+ | 1.0502 | 3.0 | 918 | 0.9738 | 0.3711 |
+ | 0.9895 | 4.0 | 1224 | 0.9452 | 0.4453 |
+ | 0.9483 | 5.0 | 1530 | 0.9245 | 0.4766 |
+ | 0.9483 | 6.0 | 1836 | 0.9041 | 0.5566 |
+ | 0.918 | 7.0 | 2142 | 0.8945 | 0.5117 |
+ | 0.918 | 8.0 | 2448 | 0.8853 | 0.5 |
+ | 0.9002 | 9.0 | 2754 | 0.8786 | 0.4922 |
+ | 0.884 | 10.0 | 3060 | 0.8658 | 0.5352 |
+ | 0.884 | 11.0 | 3366 | 0.8614 | 0.5176 |
+ | 0.8697 | 12.0 | 3672 | 0.8467 | 0.5938 |
+ | 0.8697 | 13.0 | 3978 | 0.8429 | 0.5801 |
+ | 0.8648 | 14.0 | 4284 | 0.8386 | 0.5703 |
+ | 0.8571 | 15.0 | 4590 | 0.8311 | 0.5996 |
+ | 0.8571 | 16.0 | 4896 | 0.8289 | 0.5879 |
+ | 0.8478 | 17.0 | 5202 | 0.8285 | 0.5762 |
+ | 0.8468 | 18.0 | 5508 | 0.8193 | 0.6152 |
+ | 0.8468 | 19.0 | 5814 | 0.8192 | 0.5957 |
+ | 0.8439 | 20.0 | 6120 | 0.8165 | 0.5996 |
+ | 0.8439 | 21.0 | 6426 | 0.8157 | 0.5918 |
+ | 0.8396 | 22.0 | 6732 | 0.8120 | 0.6055 |
+ | 0.8354 | 23.0 | 7038 | 0.8103 | 0.6055 |
+ | 0.8354 | 24.0 | 7344 | 0.8091 | 0.6035 |
+ | 0.8362 | 25.0 | 7650 | 0.8055 | 0.6152 |
+ | 0.8362 | 26.0 | 7956 | 0.8055 | 0.6074 |
+ | 0.8334 | 27.0 | 8262 | 0.8045 | 0.6211 |
+ | 0.8325 | 28.0 | 8568 | 0.8037 | 0.6191 |
+ | 0.8325 | 29.0 | 8874 | 0.8034 | 0.6230 |
+ | 0.833 | 30.0 | 9180 | 0.8034 | 0.6230 |
  
  
  ### Framework versions
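The hyperparameters listed in the hunk above map fairly directly onto `transformers.TrainingArguments`. A minimal sketch, assuming the Hugging Face `Trainer` API was used; only the settings visible in this diff (Adam betas/epsilon, linear LR schedule, 30 epochs, TPU training) come from the model card, and the remaining values are placeholders rather than the author's actual configuration:

```python
# Hypothetical reproduction sketch: only the settings visible in this diff
# (Adam betas/epsilon, linear LR schedule, 30 epochs, TPU training) come
# from the model card; every other value is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="factual-consistency-classification-ja",
    num_train_epochs=30,             # raised from 10 to 30 in this commit
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    learning_rate=5e-5,              # placeholder: not shown in this hunk
    per_device_train_batch_size=16,  # placeholder: not shown in this hunk
    tpu_num_cores=8,                 # assumption: the card lists distributed_type: tpu
)

# A Trainer would then consume these arguments together with the (unspecified)
# dataset and a line-distilbert-base-japanese classification head:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```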
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd717c6204324760639f39260180a6740d996ae18f8d997b232b562aa3f30cab
+ oid sha256:338f2d144537ab887447cb85ca42e66d70ca222e5a2a051570a756dcae72807e
  size 274758641
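With the updated weights, the checkpoint can be loaded like any other `AutoModelForSequenceClassification` fine-tune. A minimal inference sketch, assuming the repository id is `liwii/factual-consistency-classification-ja` (inferred from the commit title, not stated on this page) and that the classifier scores (source, claim) sentence pairs; label meanings should be read from the checkpoint's `id2label` config:

```python
# Hypothetical inference sketch; the repo id and the (source, claim)
# pairing are assumptions, not stated in this commit.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "liwii/factual-consistency-classification-ja"  # inferred from the commit title

# The LINE DistilBERT base model ships a custom Japanese tokenizer, so
# trust_remote_code=True (plus fugashi, unidic-lite and sentencepiece)
# is typically required to load it.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Assumed input format: a (source, claim) sentence pair.
source = "LINEはメッセージングアプリを提供している。"
claim = "LINEは検索エンジンのみを提供している。"
inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Class meanings come from the checkpoint's id2label config.
print(torch.softmax(logits, dim=-1))
```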