Update README.md
README.md
@@ -21,11 +21,15 @@ BERTweet-FA is a transformer-based model trained on 20665964 Persian tweets. The
 
 The Training Data
 ---
-The first version of the model was trained on the "Large Scale Colloquial Persian Dataset" containing more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
+The first version of the model was trained on the "[Large Scale Colloquial Persian Dataset](https://iasbs.ac.ir/~ansari/lscp/)" containing more than 20 million tweets in Farsi, gathered by Khojasteh et al. and published in 2020.
 
 Evaluation
 ---
 
 | Training Loss | Epoch | Step |
 |:-------------:|:-----:|:-----:|
-| 0.0036 | 1.0 | 322906 |
+| 0.0036 | 1.0 | 322906 |
+
+Contributors
+---
+- Arman Malekzadeh, PhD Student in AI @ Sharif University of Technology [LinkedIn](https://www.linkedin.com/in/arman-malekzadeh/) [GitHub](https://github.com/arm-on)
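The README this diff updates describes BERTweet-FA as a BERT-style masked-language model on the Hugging Face Hub, so a short usage sketch may help readers; the snippet below is an illustration, not part of the diff. The Hub ID `arm-on/BERTweet-FA`, the fill-mask task, and the `[MASK]` token are assumptions inferred from the contributor's GitHub handle and the model's BERT-style architecture.

```python
# Minimal sketch: loading BERTweet-FA for masked-token prediction.
# Assumption: the repository ID "arm-on/BERTweet-FA" and the [MASK] token
# are inferred, not confirmed by this diff; adjust them to the actual model card.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="arm-on/BERTweet-FA")

# Example colloquial Persian sentence with one masked token;
# the pipeline returns the top candidate fillers with their scores.
print(fill_mask("این فیلم واقعا [MASK] بود"))
```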