charlescao2023 committed
Commit 79b887e
Parent(s): eede48d

Upload 7 files


Upload bert-base-cased fine-tuned for classification on the SST-2 dataset

README.md ADDED
@@ -0,0 +1,87 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ - fnet-bert-base-comparison
+ datasets:
+ - glue
+ metrics:
+ - accuracy
+ model-index:
+ - name: bert-base-cased-finetuned-sst2
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: GLUE SST2
+       type: glue
+       args: sst2
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.9231651376146789
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-cased-sst2
+
+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3649
+ - Accuracy: 0.9232
+
+ The model was fine-tuned in order to compare [google/fnet-base](https://huggingface.co/google/fnet-base), introduced in [this paper](https://arxiv.org/abs/2105.03824), against [bert-base-cased](https://huggingface.co/bert-base-cased).
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ This model was trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
+
+ ```bash
+ #!/usr/bin/bash
+
+ python ../run_glue.py \
+   --model_name_or_path bert-base-cased \
+   --task_name sst2 \
+   --do_train \
+   --do_eval \
+   --max_seq_length 512 \
+   --per_device_train_batch_size 16 \
+   --learning_rate 2e-5 \
+   --num_train_epochs 3 \
+   --output_dir bert-base-cased-finetuned-sst2 \
+   --push_to_hub \
+   --hub_strategy all_checkpoints \
+   --logging_strategy epoch \
+   --save_strategy epoch \
+   --evaluation_strategy epoch
+ ```
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Accuracy | Validation Loss |
+ |:-------------:|:-----:|:-----:|:--------:|:---------------:|
+ | 0.233         | 1.0   | 4210  | 0.9174   | 0.2841          |
+ | 0.1261        | 2.0   | 8420  | 0.9278   | 0.3310          |
+ | 0.0768        | 3.0   | 12630 | 0.9232   | 0.3649          |
+
+ ### Framework versions
+
+ - Transformers 4.11.0.dev0
+ - Pytorch 1.9.0
+ - Datasets 1.12.1
+ - Tokenizers 0.10.3
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "_name_or_path": "bert-base-cased",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "finetuning_task": "sst2",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "negative",
+     "1": "positive"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "negative": 0,
+     "positive": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.11.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
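
The id2label and label2id maps above are what turn the classifier's two output logits into the strings "negative" and "positive". A minimal sketch of that mapping, assuming the uploaded files have been downloaded to a local directory (here ./bert-base-cased-finetuned-sst2):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "./bert-base-cased-finetuned-sst2"  # assumed local copy of the uploaded files
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

inputs = tokenizer("A gripping, beautifully acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2): one score per SST-2 label

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # "negative" or "positive", per config.json
```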
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4dbfeca23d6951df35b7d3db36125283bb99ee81c7f38c6f301185257e6a0624
+ size 433331373
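
Note that these three lines are a Git LFS pointer rather than the weights themselves: the oid is the SHA-256 digest of the actual ~433 MB checkpoint. A small sketch for verifying that a downloaded pytorch_model.bin matches the pointer:

```python
import hashlib

# oid taken from the LFS pointer above
EXPECTED_OID = "4dbfeca23d6951df35b7d3db36125283bb99ee81c7f38c6f301185257e6a0624"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so the full checkpoint is never held in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("pytorch_model.bin")
    print("OK" if actual == EXPECTED_OID else f"Mismatch: {actual}")
```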
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-cased", "tokenizer_class": "BertTokenizer"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff