fathan committed
Commit 7d983c7
1 Parent(s): 6f9500c

Update README.md

Files changed (1):
  1. README.md +40 -23
README.md CHANGED
@@ -1,36 +1,53 @@
  ---
  tags:
  - generated_from_trainer
- metrics:
- - accuracy
  model-index:
  - name: code_mixed_ijebert
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # code_mixed_ijebert
-
- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 3.0559
- - Accuracy: 0.4859
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure

  ### Training hyperparameters
@@ -52,4 +69,4 @@ The following hyperparameters were used during training:
  - Transformers 4.26.0
  - Pytorch 1.12.0+cu102
  - Datasets 2.9.0
- - Tokenizers 0.12.1
 
  ---
  tags:
  - generated_from_trainer
  model-index:
  - name: code_mixed_ijebert
    results: []
+ language:
+ - id
+ - jv
+ - en
+ pipeline_tag: fill-mask
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

+ # Code-mixed IJEBERT
+
+ ## About
+ Code-mixed IJEBERT is a pre-trained masked language model for code-mixed Indonesian-Javanese-English tweet data.
+ The model is trained from the multilingual [BERT](https://huggingface.co/bert-base-multilingual-cased) model using
+ Hugging Face's [Transformers](https://huggingface.co/transformers) library.
+
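Since the card tags the model as `fill-mask`, a minimal usage sketch with Transformers might look as follows. The repo id `fathan/code_mixed_ijebert` is an assumption inferred from the model name and uploader, and the example sentence is purely illustrative:

```python
from transformers import pipeline

# Assumed repo id (inferred from the model name and uploader);
# replace with the actual checkpoint path on the Hub.
fill_mask = pipeline("fill-mask", model="fathan/code_mixed_ijebert")

# [MASK] is the standard mask token for BERT-style models.
for pred in fill_mask("aku lagi [MASK] banget sama kerjaan iki"):
    print(f"{pred['token_str']}: {pred['score']:.4f}")
```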
+ ## Pre-training Data
+ The Twitter data is collected from January 2022 until January 2023. The tweets are collected using 8,698 random keyword phrases.
+ To make sure the retrieved data are code-mixed, we use keyword phrases that contain code-mixed Indonesian, Javanese, or English words.
+ The following are a few examples of the keyword phrases:
+
+ - travelling terus ("always travelling")
+ - proud koncoku ("proud of my friends")
+ - great kalian semua ("great, all of you")
+ - chattingane ilang ("the chats disappeared")
+ - baru aja launching ("just launched")
+
+ We acquire 40,788,384 raw tweets and apply first-stage pre-processing tasks such as the following (a code sketch appears after the list):
+
+ - remove duplicate tweets,
+ - remove tweets with a token length of less than 5,
+ - remove multiple spaces,
+ - convert emoticons,
+ - convert all tweets to lower case.
+
+ After the first-stage pre-processing, we obtain 17,385,773 tweets.
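A minimal sketch of the first stage, assuming plain Python and the `emoji` package for emoticon conversion (the card does not name the actual tooling):

```python
import re

import emoji  # assumed choice for emoticon conversion; the card does not name a tool

def first_stage_clean(tweets: list[str]) -> list[str]:
    seen: set[str] = set()
    cleaned = []
    for tweet in tweets:
        text = tweet.lower()                      # convert all tweets to lower case
        text = emoji.demojize(text)               # convert emoticons/emoji to text aliases
        text = re.sub(r"\s+", " ", text).strip()  # remove multiple spaces
        if len(text.split()) < 5:                 # remove tweets with token length < 5
            continue
        if text in seen:                          # remove duplicate tweets
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned
```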
+ In the second-stage pre-processing, we perform the following tasks (again sketched in code below):
+
+ - split the tweets into sentences,
+ - remove sentences with a token length of less than 4,
+ - convert ‘@username’ to ‘@USER’,
+ - convert URLs to ‘HTTPURL’.
+
+ Finally, we have 28,121,693 sentences for our pre-training task.
+
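A matching sketch of the second stage; the sentence splitter and the mention/URL regexes are assumptions, as the card does not specify them:

```python
import re

def second_stage_clean(tweets: list[str]) -> list[str]:
    sentences = []
    for tweet in tweets:
        # Naive split on terminal punctuation; the authors' splitter is not stated.
        for sent in re.split(r"(?<=[.!?])\s+", tweet):
            sent = re.sub(r"@\w+", "@USER", sent)            # convert '@username' to '@USER'
            sent = re.sub(r"https?://\S+", "HTTPURL", sent)  # convert URLs to HTTPURL
            if len(sent.split()) >= 4:                       # keep sentences with >= 4 tokens
                sentences.append(sent)
    return sentences
```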
+ ## Model
+
+ | Model name           | #params | Arch. | Size of training data | Size of validation data |
+ |----------------------|---------|-------|-----------------------|-------------------------|
+ | `code-mixed-ijebert` |         | BERT  | 2.24 GB of text       | 249 MB of text          |
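The #params column is left blank on the card; if needed, the count can be read off the checkpoint itself (same assumed repo id as above):

```python
from transformers import AutoModelForMaskedLM

# Assumed repo id; replace with the actual checkpoint path.
model = AutoModelForMaskedLM.from_pretrained("fathan/code_mixed_ijebert")
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```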
 
  ### Training hyperparameters

@@ -52,4 +69,4 @@ The following hyperparameters were used during training:
  - Transformers 4.26.0
  - Pytorch 1.12.0+cu102
  - Datasets 2.9.0
+ - Tokenizers 0.12.1