avinasht committed on
Commit
bba5c03
1 Parent(s): 79381d2

Acc 0.8751560549313359, F1 0.8749961858131386, augmented with roberta-base.csv, fine-tuned on ProsusAI/finbert

Files changed (4)
  1. README.md +90 -0
  2. config.json +37 -0
  3. model.safetensors +3 -0
  4. training_args.bin +3 -0
README.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ base_model: ProsusAI/finbert
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: finbert_roberta-base
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # finbert_roberta-base
+
+ This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.7907
+ - Accuracy: 0.9033
+ - F1: 0.9034
+ - Precision: 0.9036
+ - Recall: 0.9033
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 1000
+ - num_epochs: 25
+
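The linear scheduler with 1,000 warmup steps ramps the learning rate from 0 up to 1e-4, then decays it linearly back to 0 over the remaining steps (2,275 total: 25 epochs of 91 steps each, per the training log). A minimal sketch of that schedule, assuming the standard Transformers linear-with-warmup behavior:

```python
def lr_at_step(step, base_lr=1e-4, warmup_steps=1000, total_steps=2275):
    """Learning rate at a given optimizer step under linear warmup + linear decay.

    Matches lr_scheduler_type: linear with lr_scheduler_warmup_steps: 1000.
    """
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase.
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

With these settings the peak rate of 1e-4 is reached partway through epoch 11 (step 1,000), which lines up with the accuracy jump visible around that epoch in the results table.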
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 0.8094 | 1.0 | 91 | 0.7239 | 0.6942 | 0.6824 | 0.6887 | 0.6942 |
+ | 0.439 | 2.0 | 182 | 0.4112 | 0.8471 | 0.8476 | 0.8527 | 0.8471 |
+ | 0.274 | 3.0 | 273 | 0.3978 | 0.8612 | 0.8596 | 0.8623 | 0.8612 |
+ | 0.2002 | 4.0 | 364 | 0.4319 | 0.8409 | 0.8399 | 0.8430 | 0.8409 |
+ | 0.123 | 5.0 | 455 | 0.4685 | 0.8674 | 0.8661 | 0.8685 | 0.8674 |
+ | 0.1251 | 6.0 | 546 | 0.4734 | 0.8690 | 0.8684 | 0.8689 | 0.8690 |
+ | 0.124 | 7.0 | 637 | 0.5604 | 0.8580 | 0.8574 | 0.8610 | 0.8580 |
+ | 0.0738 | 8.0 | 728 | 0.5583 | 0.8534 | 0.8546 | 0.8604 | 0.8534 |
+ | 0.1268 | 9.0 | 819 | 0.5665 | 0.8534 | 0.8524 | 0.8537 | 0.8534 |
+ | 0.0425 | 10.0 | 910 | 0.5959 | 0.8549 | 0.8561 | 0.8626 | 0.8549 |
+ | 0.1037 | 11.0 | 1001 | 0.4439 | 0.8752 | 0.8742 | 0.8760 | 0.8752 |
+ | 0.0762 | 12.0 | 1092 | 0.5998 | 0.8674 | 0.8668 | 0.8686 | 0.8674 |
+ | 0.0523 | 13.0 | 1183 | 0.5525 | 0.8783 | 0.8785 | 0.8792 | 0.8783 |
+ | 0.0291 | 14.0 | 1274 | 0.6588 | 0.8752 | 0.8747 | 0.8756 | 0.8752 |
+ | 0.0311 | 15.0 | 1365 | 0.6065 | 0.8830 | 0.8833 | 0.8839 | 0.8830 |
+ | 0.0146 | 16.0 | 1456 | 0.7469 | 0.8705 | 0.8701 | 0.8706 | 0.8705 |
+ | 0.0145 | 17.0 | 1547 | 0.6748 | 0.8861 | 0.8864 | 0.8872 | 0.8861 |
+ | 0.0013 | 18.0 | 1638 | 0.7708 | 0.8814 | 0.8815 | 0.8816 | 0.8814 |
+ | 0.0105 | 19.0 | 1729 | 0.8126 | 0.8908 | 0.8910 | 0.8918 | 0.8908 |
+ | 0.0025 | 20.0 | 1820 | 0.7727 | 0.8939 | 0.8938 | 0.8957 | 0.8939 |
+ | 0.0014 | 21.0 | 1911 | 0.8088 | 0.8939 | 0.8942 | 0.8958 | 0.8939 |
+ | 0.0015 | 22.0 | 2002 | 0.7766 | 0.9033 | 0.9033 | 0.9034 | 0.9033 |
+ | 0.0001 | 23.0 | 2093 | 0.7907 | 0.9033 | 0.9034 | 0.9036 | 0.9033 |
+ | 0.0002 | 24.0 | 2184 | 0.7945 | 0.9033 | 0.9034 | 0.9036 | 0.9033 |
+ | 0.0002 | 25.0 | 2275 | 0.7954 | 0.9033 | 0.9034 | 0.9036 | 0.9033 |
+
+
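In the results table, F1, precision, and recall track accuracy almost exactly, which is typical of support-weighted averaging over the three sentiment classes. A minimal sketch of a support-weighted F1, assuming that is the averaging the Trainer metrics use here:

```python
def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to each class's support."""
    labels = sorted(set(y_true))
    n = len(y_true)
    total = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        support = sum(1 for t in y_true if t == c)
        total += f1 * support / n
    return total
```

Because weighting by support pulls the average toward the majority classes, weighted F1 can sit within a few thousandths of accuracy, exactly as seen from epoch 22 onward.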
+ ### Framework versions
+
+ - Transformers 4.37.0
+ - Pytorch 2.1.2
+ - Datasets 2.1.0
+ - Tokenizers 0.15.1
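At inference time the model emits three logits that map to sentiment labels via the id2label table in the accompanying config.json (0: positive, 1: negative, 2: neutral). A minimal, framework-independent sketch of that post-processing step:

```python
import math

# Class-index-to-label mapping, copied from the id2label field of config.json.
ID2LABEL = {0: "positive", 1: "negative", 2: "neutral"}

def predict_label(logits):
    """Softmax the three class logits and return (label, confidence)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return ID2LABEL[idx], probs[idx]
```

In a real pipeline the logits would come from the fine-tuned `BertForSequenceClassification` checkpoint; the mapping logic above is the same either way.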
config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "_name_or_path": "ProsusAI/finbert",
+ "architectures": [
+ "BertForSequenceClassification"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "id2label": {
+ "0": "positive",
+ "1": "negative",
+ "2": "neutral"
+ },
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "label2id": {
+ "negative": 1,
+ "neutral": 2,
+ "positive": 0
+ },
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "problem_type": "single_label_classification",
+ "torch_dtype": "float32",
+ "transformers_version": "4.37.0",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d1c563598c83c4ed2d94806e486c17abce8f5ebcfe87f4e469cd566ff5fefd0b
+ size 437961724
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5679de29ba5f61809913f83d61399a8e5c236dc7a5caae4d5cbc2712fb06a721
+ size 4664
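Both model.safetensors and training_args.bin are checked in as Git LFS pointer files rather than the binaries themselves: three `key value` lines giving the spec version, the sha256 object id, and the size in bytes. A small sketch of reading such a pointer:

```python
def parse_lfs_pointer(text):
    """Split a git-lfs pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>", e.g. "size 437961724".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields
```

The `size` field shows the actual weights are about 438 MB, consistent with a float32 BERT-base checkpoint (~110M parameters at 4 bytes each).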