Go Inoue committed
Commit: 9e69a8d
Parent(s): db96104
Update README.md
README.md CHANGED:

````diff
@@ -6,12 +6,12 @@ widget:
 - text: "الهدف من الحياة هو [MASK] ."
 ---
 
-# CAMeLBERT-
+# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks
 
 ## Model description
 
-**CAMeLBERT** is a BERT
-The details are described in the paper *"The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models."*
+**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
+The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
 We release eight models with different sizes and variants as follows:
 
 ||Model|Variant|Size|#Word|
@@ -25,7 +25,7 @@ We release eight models with different sizes and variants as follows:
 ||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B|
 |✔|`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M|
 
-This model card describes `bert-base-camelbert-msa-sixteenth
+This model card describes **CAMeLBERT-MSA-sixteenth** (`bert-base-camelbert-msa-sixteenth`), a model pre-trained on a sixteenth of the full MSA dataset.
 
 ## Intended uses
 You can use the released model for either masked language modeling or next sentence prediction.
@@ -60,6 +60,8 @@ You can use this model directly with a pipeline for masked language modeling:
 'token_str': 'المعرفة'}]
 ```
 
+*Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually.
+
 Here is how to use this model to get the features of a given text in PyTorch:
 ```python
 from transformers import AutoTokenizer, AutoModel
````
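The third hunk shows only the tail of the README's fill-mask example (the final `'token_str': 'المعرفة'}]` entry). For context, here is a minimal sketch of what that pipeline usage typically looks like with the `transformers` fill-mask API; the `CAMeL-Lab/bert-base-camelbert-msa-sixteenth` hub ID is an assumption inferred from the model name in the table, not something shown in this diff:

```python
# Hedged sketch of the masked-language-modeling pipeline usage the README
# describes. The hub ID below is assumed from the table's model name;
# replace it with the actual repository ID if it differs.
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="CAMeL-Lab/bert-base-camelbert-msa-sixteenth",  # assumed hub ID
)

# The widget sentence from the README front matter:
# "The goal of life is [MASK]." (Arabic)
for prediction in unmasker("الهدف من الحياة هو [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 4))
```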
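Similarly, the last hunk ends right at the `AutoTokenizer, AutoModel` import, so the rest of the feature-extraction snippet is not visible in this diff. A minimal sketch of how that pattern usually continues, under the same assumed hub ID:

```python
# Hedged sketch of the PyTorch feature-extraction usage; the diff only shows
# the import line, so everything past it follows the standard pattern rather
# than the committed README text.
from transformers import AutoTokenizer, AutoModel

model_id = "CAMeL-Lab/bert-base-camelbert-msa-sixteenth"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "مرحبا يا عالم."  # "Hello, world."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)

# output[0] is the last hidden state, shape (batch, seq_len, hidden_size);
# positional indexing works across transformers versions (>=3.5.0 per the note).
last_hidden_state = output[0]
print(last_hidden_state.shape)
```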
|