Porameht committed
Commit a205588 · verified · 1 Parent(s): 966d01d

bert-base-multilingual-cased-intent-booking

Files changed (1)
  1. README.md +13 -20
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  license: apache-2.0
- base_model: google-bert/bert-base-uncased
+ base_model: google-bert/bert-base-multilingual-cased
  tags:
  - generated_from_trainer
  metrics:
@@ -10,22 +10,22 @@ metrics:
  - precision
  - recall
  model-index:
- - name: bert-base-uncased-intent-booking
+ - name: bert-base-multilingual-cased-intent-booking
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # bert-base-uncased-intent-booking
+ # bert-base-multilingual-cased-intent-booking

- This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
+ This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.1507
- - Accuracy: 0.1937
- - F1: 0.1640
- - Precision: 0.2456
- - Recall: 0.1937
+ - Loss: 0.2687
+ - Accuracy: 0.9144
+ - F1: 0.9101
+ - Precision: 0.9295
+ - Recall: 0.9144

  ## Model description

@@ -51,22 +51,15 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 64
- - num_epochs: 10
+ - num_epochs: 3

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
- | 2.3281 | 1.0 | 65 | 2.2466 | 0.1532 | 0.0892 | 0.0818 | 0.1532 |
- | 2.2842 | 2.0 | 130 | 2.2080 | 0.1757 | 0.1264 | 0.1989 | 0.1757 |
- | 2.241 | 3.0 | 195 | 2.1896 | 0.1847 | 0.1334 | 0.1353 | 0.1847 |
- | 2.2074 | 4.0 | 260 | 2.1692 | 0.1577 | 0.1349 | 0.3182 | 0.1577 |
- | 2.1731 | 5.0 | 325 | 2.1473 | 0.1847 | 0.1354 | 0.1976 | 0.1847 |
- | 2.1499 | 6.0 | 390 | 2.1371 | 0.1847 | 0.1455 | 0.2236 | 0.1847 |
- | 2.111 | 7.0 | 455 | 2.1510 | 0.1757 | 0.1511 | 0.2523 | 0.1757 |
- | 2.0868 | 8.0 | 520 | 2.1421 | 0.1892 | 0.1549 | 0.3178 | 0.1892 |
- | 2.0654 | 9.0 | 585 | 2.1348 | 0.2027 | 0.1896 | 0.4106 | 0.2027 |
- | 2.0593 | 10.0 | 650 | 2.1305 | 0.1982 | 0.1814 | 0.3549 | 0.1982 |
+ | 2.0801 | 1.0 | 65 | 1.1259 | 0.7613 | 0.7482 | 0.8102 | 0.7613 |
+ | 0.6191 | 2.0 | 130 | 0.3180 | 0.9144 | 0.9157 | 0.9225 | 0.9144 |
+ | 0.2196 | 3.0 | 195 | 0.1772 | 0.9595 | 0.9593 | 0.9619 | 0.9595 |


  ### Framework versions
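
The updated card describes an intent-classification model fine-tuned from google-bert/bert-base-multilingual-cased. A minimal inference sketch with the transformers `pipeline` API; the repository id `Porameht/bert-base-multilingual-cased-intent-booking` is assumed from the repository this commit belongs to, and the example query and label names depend on the (unspecified) fine-tuning dataset:

```python
from transformers import pipeline

# Assumed repo id: the repository this commit was pushed to.
classifier = pipeline(
    "text-classification",
    model="Porameht/bert-base-multilingual-cased-intent-booking",
)

# Booking-style example query; the returned label names come from the fine-tuning dataset.
print(classifier("I'd like to book a table for two tomorrow evening."))
```

The hyperparameters listed in the card map directly onto transformers `TrainingArguments`. A sketch covering only the values visible in these hunks; learning rate and batch size are not shown in the diff, so the library defaults are left untouched:

```python
from transformers import TrainingArguments

# Only hyperparameters visible in the diff are set here; learning rate and
# batch size fall outside the shown hunks, so transformers defaults apply.
training_args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-intent-booking",
    num_train_epochs=3,           # changed from 10 to 3 in this commit
    lr_scheduler_type="linear",
    warmup_steps=64,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```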