fathan committed
Commit
af1f0d1
1 Parent(s): 2855852

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -16,7 +16,7 @@ widget:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# IJEBERT: IndoBERTweet-base
+# Indojave: IndoBERTweet-base
 
 ## About
 This is a pre-trained masked language model for code-mixed Indonesian-Javanese-English tweet data.
@@ -53,7 +53,7 @@ This pretraining data will not be released to the public due to Twitter policy.
 ## Model
 | Model name                                | Architecture | Size of training data | Size of validation data |
 |-------------------------------------------|--------------|-----------------------|-------------------------|
-| `ijebertweet-codemixed-indobertweet-base` | BERT         | 2.24 GB of text       | 249 MB of text          |
+| `indojave-codemixed-indobertweet-base`    | BERT         | 2.24 GB of text       | 249 MB of text          |
 
 ## Evaluation Results
 We trained the model for 3 epochs (296K total steps) over 4 days.
@@ -67,15 +67,15 @@ The following are the results obtained from the training:
 ### Load model and tokenizer
 ```python
 from transformers import AutoTokenizer, AutoModel
-tokenizer = AutoTokenizer.from_pretrained("fathan/ijebert-codemixed-indobertweet-base")
-model = AutoModel.from_pretrained("fathan/ijebert-codemixed-indobertweet-base")
+tokenizer = AutoTokenizer.from_pretrained("fathan/indojave-codemixed-indobertweet-base")
+model = AutoModel.from_pretrained("fathan/indojave-codemixed-indobertweet-base")
 ```
 ### Masked language model
 ```python
 from transformers import pipeline
 
-pretrained_model = "fathan/ijebert-codemixed-indobertweet-base"
+pretrained_model = "fathan/indojave-codemixed-indobertweet-base"
 
 fill_mask = pipeline(
     "fill-mask",