Update README.md
README.md
CHANGED
@@ -1,36 +1,53 @@
 ---
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
 model-index:
 - name: code_mixed_ijebert
   results: []
+language:
+- id
+- jv
+- en
+pipeline_tag: fill-mask
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
-
-##
-
+# Code-mixed IJEBERT
+
+## About
+Code-mixed IJEBERT is a masked language model pre-trained on code-mixed Indonesian-Javanese-English tweets.
+The model is based on [BERT](https://huggingface.co/bert-base-multilingual-cased) and trained with
+Hugging Face's [Transformers](https://huggingface.co/transformers) library.
+
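+A minimal fill-mask usage sketch. The Hub model ID below is an assumption based on this repository's name; replace it with the actual ID. BERT's mask token is `[MASK]`:
+
+```python
+from transformers import pipeline
+
+# Hypothetical Hub ID; point this at the actual repository.
+unmasker = pipeline("fill-mask", model="code_mixed_ijebert")
+
+# A code-mixed Indonesian-Javanese-English example with one masked token.
+print(unmasker("proud banget sama [MASK] kalian semua"))
+```
+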
+## Pre-training Data
+The Twitter data was collected from January 2022 until January 2023 using 8,698 random keyword phrases.
+To make sure the retrieved data are code-mixed, we used keyword phrases that mix Indonesian, Javanese, and English words.
+The following are a few examples of the keyword phrases:
+- travelling terus
+- proud koncoku
+- great kalian semua
+- chattingane ilang
+- baru aja launching
+We acquired 40,788,384 raw tweets and applied first-stage pre-processing steps such as the following (see the sketch after this list):
+- remove duplicate tweets,
+- remove tweets shorter than 5 tokens,
+- remove multiple spaces,
+- convert emoticons,
+- convert all tweets to lower case.
+After the first-stage pre-processing, we obtained 17,385,773 tweets.
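+A minimal sketch of the first stage. The card does not specify the emoticon-conversion rules, so the map below is an illustrative placeholder:
+
+```python
+import re
+
+# Illustrative emoticon map; the actual conversion rules are not specified here.
+EMOTICONS = {":)": ":smile:", ":(": ":sad:"}
+
+def first_stage(tweets):
+    """Collapse spaces, lowercase, convert emoticons, drop short tweets, dedupe."""
+    seen, kept = set(), []
+    for tweet in tweets:
+        tweet = re.sub(r"\s+", " ", tweet).strip().lower()  # remove multiple spaces, lower case
+        for emoticon, name in EMOTICONS.items():
+            tweet = tweet.replace(emoticon, name)           # convert emoticons
+        if len(tweet.split()) < 5 or tweet in seen:         # length filter and deduplication
+            continue
+        seen.add(tweet)
+        kept.append(tweet)
+    return kept
+
+print(first_stage(["Proud   koncoku :)  sukses terus rek", "short tweet"]))
+```
+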
+In the second-stage pre-processing, we (see the sketch after this list):
+- split the tweets into sentences,
+- remove sentences shorter than 4 tokens,
+- convert `@username` mentions to `@USER`,
+- convert URLs to `HTTPURL`.
+Finally, we obtained 28,121,693 sentences for the pre-training task.
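+A matching sketch of the second stage. The sentence splitter used is not specified, so a naive punctuation-based split stands in for it; mentions and URLs are masked before splitting so URL-internal periods do not break sentences:
+
+```python
+import re
+
+def second_stage(tweets):
+    """Mask mentions/URLs, split into sentences, drop short sentences."""
+    sentences = []
+    for tweet in tweets:
+        tweet = re.sub(r"@\w+", "@USER", tweet)             # convert @username to @USER
+        tweet = re.sub(r"https?://\S+", "HTTPURL", tweet)   # convert URLs to HTTPURL
+        # Naive split on sentence-ending punctuation (placeholder splitter).
+        for sent in re.split(r"(?<=[.!?])\s+", tweet):
+            if len(sent.split()) >= 4:                      # keep sentences with >= 4 tokens
+                sentences.append(sent)
+    return sentences
+
+print(second_stage(["proud koncoku @budi keren banget! sukses terus ya."]))
+```
+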
+
+## Model
+| Model name           | #params | Arch. | Size of training data | Size of validation data |
+|----------------------|---------|-------|-----------------------|-------------------------|
+| `code-mixed-ijebert` |         | BERT  | 2.24 GB of text       | 249 MB of text          |
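+The `#params` cell is left blank in the source; rather than guessing, it can be computed by loading the checkpoint and counting parameters (the Hub ID below is again an assumption):
+
+```python
+from transformers import AutoModelForMaskedLM
+
+# Hypothetical Hub ID; point this at the actual repository.
+model = AutoModelForMaskedLM.from_pretrained("code_mixed_ijebert")
+print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
+```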
 
 ### Training hyperparameters
 
@@ -52,4 +69,4 @@ The following hyperparameters were used during training:
 - Transformers 4.26.0
 - Pytorch 1.12.0+cu102
 - Datasets 2.9.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1