w11wo committed
Commit e5ec579
1 Parent(s): c3a52f0

Update README.md

Files changed (1)
  1. README.md +36 -34
README.md CHANGED
@@ -8,59 +8,57 @@ metrics:
  - f1
  - precision
  - recall
  model-index:
  - name: indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta

- This model is a fine-tuned version of [indobenchmark/indobert-lite-base-p1](https://huggingface.co/indobenchmark/indobert-lite-base-p1) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.5005
- - Accuracy: 0.6545
- - F1: 0.6524
- - Precision: 0.6615
- - Recall: 0.6577

- ## Model description

- More information needed

- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 64
- - eval_batch_size: 64
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 5

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
- | 0.4808 | 1.0 | 1803 | 0.4418 | 0.7683 | 0.7593 | 0.7904 | 0.7554 |
- | 0.4529 | 2.0 | 3606 | 0.4343 | 0.7738 | 0.7648 | 0.7893 | 0.7619 |
- | 0.4263 | 3.0 | 5409 | 0.4383 | 0.7861 | 0.7828 | 0.7874 | 0.7807 |
- | 0.398 | 4.0 | 7212 | 0.4456 | 0.7792 | 0.7767 | 0.7792 | 0.7756 |
- | 0.3772 | 5.0 | 9015 | 0.4499 | 0.7711 | 0.7674 | 0.7700 | 0.7661 |
-

  ### Framework versions

@@ -68,3 +66,7 @@ The following hyperparameters were used during training:
  - Pytorch 2.1.2+cu121
  - Datasets 2.16.1
  - Tokenizers 0.15.0

  - f1
  - precision
  - recall
+ language:
+ - ind
+ datasets:
+ - indonli
+ - MoritzLaurer/multilingual-NLI-26lang-2mil7
+ - LazarusNLP/multilingual-NLI-26lang-2mil7-id
+ widget:
+ - text: Andi tersenyum karena mendapat hasil baik. </s></s> Andi sedih.
  model-index:
  - name: indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta
  results: []
  ---

+ # IndoBERT Lite Base IndoNLI Multilingual NLI Distil mDeBERTa

+ IndoBERT Lite Base IndoNLI Multilingual NLI Distil mDeBERTa is a natural language inference (NLI) model based on the [ALBERT](https://arxiv.org/abs/1909.11942) architecture. It was initialized from the pre-trained [indobenchmark/indobert-lite-base-p1](https://huggingface.co/indobenchmark/indobert-lite-base-p1) model, then fine-tuned on [`IndoNLI`](https://github.com/ir-nlp-csui/indonli) and the [Indonesian subsets](https://huggingface.co/datasets/LazarusNLP/multilingual-NLI-26lang-2mil7-id) of [MoritzLaurer/multilingual-NLI-26lang-2mil7](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7), while being distilled from [MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7).
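+
+ A minimal inference sketch with the 🤗 `text-classification` pipeline follows. The `LazarusNLP` namespace in the repo id is an assumption, and the premise/hypothesis input mirrors the `</s></s>` separator format of the widget example above:
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repo id; adjust the namespace to wherever the checkpoint is published.
+ model_id = "LazarusNLP/indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta"
+ nli = pipeline("text-classification", model=model_id)
+
+ premise = "Andi tersenyum karena mendapat hasil baik."  # "Andi smiled because he got a good result."
+ hypothesis = "Andi sedih."  # "Andi is sad."
+
+ # Join premise and hypothesis with the same separator as the widget example.
+ print(nli(f"{premise} </s></s> {hypothesis}"))
+ # Expected: an NLI label (e.g. contradiction) with a confidence score.
+ ```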

+ ## Evaluation Results

+ | | `dev` Acc. | `test_lay` Acc. | `test_expert` Acc. |
+ | --------- | :--------: | :-------------: | :----------------: |
+ | `IndoNLI` | 78.60 | 74.69 | 65.55 |
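+
+ Here `dev`, `test_lay`, and `test_expert` are the IndoNLI validation split and its lay- and expert-annotated test splits. A small loading sketch (split and column names follow the `indonli` dataset on the 🤗 Hub):
+
+ ```python
+ from datasets import load_dataset
+
+ # DatasetDict with train / validation / test_lay / test_expert splits.
+ indonli = load_dataset("indonli")
+
+ # Each example is a premise/hypothesis pair with an NLI label.
+ print(indonli["test_expert"][0])
+ ```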

+ ## Model

+ | Model | #params | Arch. | Training/Validation data (text) |
+ | ---------------------------------------------------------------- | ------- | ----------- | ---------------------------------- |
+ | `indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta` | 11.7M | ALBERT Base | `IndoNLI`, Multilingual NLI (`id`) |
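+
+ As a quick sanity check, the parameter count can be reproduced by loading the checkpoint (the repo id is the same assumption as in the inference sketch above):
+
+ ```python
+ from transformers import AutoModelForSequenceClassification
+
+ model_id = "LazarusNLP/indobert-lite-base-p1-indonli-multilingual-nli-distil-mdeberta"  # assumed
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ print(f"{model.num_parameters() / 1e6:.1f}M parameters")  # ~11.7M
+ ```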

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - `learning_rate`: `2e-05`
+ - `train_batch_size`: `64`
+ - `eval_batch_size`: `64`
+ - `seed`: `42`
+ - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
+ - `lr_scheduler_type`: linear
+ - `num_epochs`: `5`
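+
+ In 🤗 `Trainer` terms, these correspond roughly to the following `TrainingArguments` (a sketch under the assumption of a standard `Trainer` setup; `output_dir` is a placeholder):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Mirrors the hyperparameters listed above; everything else is left at defaults.
+ training_args = TrainingArguments(
+     output_dir="outputs",  # placeholder path
+     learning_rate=2e-5,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     seed=42,
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=5,
+ )
+ ```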

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
+ | :-----------: | :---: | :---: | :-------------: | :------: | :----: | :-------: | :----: |
+ | 0.4808 | 1.0 | 1803 | 0.4418 | 0.7683 | 0.7593 | 0.7904 | 0.7554 |
+ | 0.4529 | 2.0 | 3606 | 0.4343 | 0.7738 | 0.7648 | 0.7893 | 0.7619 |
+ | 0.4263 | 3.0 | 5409 | 0.4383 | 0.7861 | 0.7828 | 0.7874 | 0.7807 |
+ | 0.398 | 4.0 | 7212 | 0.4456 | 0.7792 | 0.7767 | 0.7792 | 0.7756 |
+ | 0.3772 | 5.0 | 9015 | 0.4499 | 0.7711 | 0.7674 | 0.7700 | 0.7661 |
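+
+ Distillation from the mDeBERTa teacher is part of the training setup described above. The exact objective is not specified, so the following is only an illustrative sketch of a common formulation (hard-label cross-entropy mixed with temperature-scaled KL divergence against the teacher's logits):
+
+ ```python
+ import torch.nn.functional as F
+
+ def distillation_loss(student_logits, teacher_logits, labels,
+                       temperature=2.0, alpha=0.5):
+     """Illustrative only: mixes hard-label CE with soft-label KL."""
+     ce = F.cross_entropy(student_logits, labels)
+     kl = F.kl_div(
+         F.log_softmax(student_logits / temperature, dim=-1),
+         F.softmax(teacher_logits / temperature, dim=-1),
+         reduction="batchmean",
+     ) * temperature ** 2
+     return alpha * ce + (1 - alpha) * kl
+ ```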
 
  ### Framework versions

  - Pytorch 2.1.2+cu121
  - Datasets 2.16.1
  - Tokenizers 0.15.0
+
+ ## References
+
+ [1] Mahendra, R., Aji, A. F., Louvan, S., Rahman, F., & Vania, C. (2021, November). [IndoNLI: A Natural Language Inference Dataset for Indonesian](https://arxiv.org/abs/2110.14566). _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.