ishikaibm committed
Commit a5f3e08 (verified)
Parent(s): 6517b32

Model save
README.md CHANGED
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # peft_ft_random
 
-This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 8.3484
-- Accuracy: 35231.7155
+- Loss: 8.4980
+- Accuracy: -5564.7596
 
 ## Model description
 
@@ -51,13 +51,12 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy   |
-|:-------------:|:-----:|:----:|:---------------:|:----------:|
-| No log        | 1.0   | 1    | 9.2515          | 27671.5006 |
-| No log        | 2.0   | 2    | 8.3332          | 33462.3978 |
-| No log        | 3.0   | 3    | 7.9205          | 37044.5995 |
-| No log        | 4.0   | 4    | 7.7660          | 38733.4134 |
-| No log        | 5.0   | 5    | 8.3484          | 35231.7155 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy    |
+|:-------------:|:-----:|:----:|:---------------:|:-----------:|
+| No log        | 0.8   | 2    | 10.1306         | 48758.8852  |
+| No log        | 2.0   | 5    | 9.1696          | 68666.5148  |
+| No log        | 2.8   | 7    | 8.8054          | -22131.0614 |
+| 9.5148        | 4.0   | 10   | 8.4980          | -5564.7596  |
 
 
 ### Framework versions
@@ -65,5 +64,5 @@ The following hyperparameters were used during training:
 - PEFT 0.11.1
 - Transformers 4.41.1
 - Pytorch 2.3.0+cu121
-- Datasets 2.19.1
+- Datasets 2.19.2
 - Tokenizers 0.19.1
adapter_1/adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "query",
-    "value"
+    "value",
+    "query"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
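The only change to `adapter_config.json` is the ordering of `target_modules`; PEFT matches modules by name, so `["query", "value"]` and `["value", "query"]` select the same layers. A minimal stdlib-only sketch (the JSON fragments below are hypothetical excerpts limited to the fields shown in the diff):

```python
import json

# Hypothetical minimal excerpts of the two adapter_config.json revisions;
# only the fields visible in the diff above are included.
old_cfg = json.loads(
    '{"target_modules": ["query", "value"], "task_type": "CAUSAL_LM", "use_dora": false}'
)
new_cfg = json.loads(
    '{"target_modules": ["value", "query"], "task_type": "CAUSAL_LM", "use_dora": false}'
)

# PEFT treats target_modules as an unordered collection, so comparing the
# two lists as sets shows the configs are semantically equivalent.
same_targets = set(old_cfg["target_modules"]) == set(new_cfg["target_modules"])
print(same_targets)  # True
```

In other words, this part of the commit is a cosmetic reserialization, not a functional change to which BERT attention projections the LoRA adapters attach to.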
adapter_1/adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e9ecb7a73023e49dec77e40200b905e6152b4fd069ab619436f8109a9c0f84e
+oid sha256:cdee2675981a3b2b140f9f80514e33be5a64e60f9cf389e189330391cfc82d74
 size 9443984
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0d997fe8ac5f1c0f849cb68e473410b909b23ac1c00c4b8e2878981d6259f93d
+oid sha256:310cfdf3ac11194ea72a604335f03d3b9121724758aefadd42e47ebec41c94fd
 size 9443984
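Both safetensors files are stored as Git LFS pointers, so each diff only swaps the `oid sha256:` line while the `size` stays at 9443984 bytes. A small sketch parsing such a pointer (the sample text mirrors the new pointer for `adapter_model.safetensors` shown above; the helper name is an illustration, not a git-lfs API):

```python
# A Git LFS pointer is a tiny text file of "key value" lines; the real
# binary blob lives in LFS storage, addressed by the sha256 oid.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:310cfdf3ac11194ea72a604335f03d3b9121724758aefadd42e47ebec41c94fd
size 9443984
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(pointer_text)
print(ptr["size"])  # 9443984
print(ptr["oid"].startswith("sha256:"))  # True
```

Because only the oid changes, the commit replaces the adapter weights with a new blob of identical byte size, which is what you would expect from re-saving LoRA adapters of the same shape.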