skarsa committed (verified)
Commit a2c673c · 1 Parent(s): 676a733

Training in progress, step 24

README.md CHANGED
@@ -5,14 +5,14 @@ base_model: roberta-base
 tags:
 - generated_from_trainer
 model-index:
- - name: babe_source_subsamples_model_alpha_0_005_idx_1
+ - name: babe_source_subsamples_model_alpha_0_001_idx_3
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

- # babe_source_subsamples_model_alpha_0_005_idx_1
+ # babe_source_subsamples_model_alpha_0_001_idx_3

 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.

@@ -34,10 +34,10 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 2e-05
- - train_batch_size: 32
- - eval_batch_size: 32
+ - train_batch_size: 64
+ - eval_batch_size: 64
 - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 3

@@ -47,7 +47,7 @@ The following hyperparameters were used during training:

 ### Framework versions

- - Transformers 4.48.3
- - Pytorch 2.6.0+cu124
+ - Transformers 4.47.0
+ - Pytorch 2.5.1+cu121
 - Datasets 3.2.0
 - Tokenizers 0.21.0
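The hyperparameter hunk above is the substantive change in this commit. A minimal sketch, assuming the standard `transformers.TrainingArguments` API, of how the new values would be expressed; `output_dir` is a hypothetical name reused from the model-index entry, not something the commit confirms:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="babe_source_subsamples_model_alpha_0_001_idx_3",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",      # AdamW with betas=(0.9, 0.999) by default
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```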
config.json CHANGED
@@ -21,7 +21,7 @@
   "position_embedding_type": "absolute",
   "problem_type": "single_label_classification",
   "torch_dtype": "float32",
- "transformers_version": "4.48.3",
+ "transformers_version": "4.47.0",
   "type_vocab_size": 1,
   "use_cache": true,
   "vocab_size": 50265
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:f67a64b12d08de8920e142d1298bd8490fec6ecdadab20b25171241b1941d7b5
+ oid sha256:9502edf0caaf4a7022d9e79d7e0914c0c084049aa94e856128c0d913fbe66617
 size 498612824
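What the repo stores is a Git LFS pointer: the `oid` is the SHA-256 of the real weights file and `size` its byte count. A small sketch, assuming the resolved file has been downloaded to a local path of the same name, of checking a copy against this pointer:

```python
import hashlib
from pathlib import Path

# Hypothetical local copy of the resolved LFS object.
blob = Path("model.safetensors").read_bytes()
assert len(blob) == 498612824, "size mismatch with LFS pointer"
digest = hashlib.sha256(blob).hexdigest()
assert digest == "9502edf0caaf4a7022d9e79d7e0914c0c084049aa94e856128c0d913fbe66617"
```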
runs/Feb11_11-02-15_b5b9d8deecd5/events.out.tfevents.1739271739.b5b9d8deecd5.31.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c8766be8245a0242d1c71f2d46f7a6465d49ccc1ce9ea7ad77beaeb123e812a
+ size 5164
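The added file is a TensorBoard event log (also stored as an LFS pointer). A hedged sketch, assuming the `tensorboard` package is installed and the run directory has been downloaded locally, of listing what it logged:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# The run directory name comes from this commit; loading it locally is assumed.
acc = EventAccumulator("runs/Feb11_11-02-15_b5b9d8deecd5")
acc.Reload()
print(acc.Tags())  # available scalar/tensor tags logged so far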
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2045bbdcd9742daa08719b39a0ae86f89527e43ffd7faaca055b7dd5ff4a273b
+ oid sha256:1badb60797ec58ea86e63f83e84d220b79793cb3fe13cb6e2b2b59ff040cb76f
 size 5432
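`training_args.bin` is the Trainer's pickled `TrainingArguments` object. A minimal, assumed sketch of inspecting it; this requires a compatible `transformers` install (4.47.0 per this commit) so the pickle can resolve:

```python
import torch

# weights_only=False is needed because this is an arbitrary pickled object.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size)
```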