Commit ae6ab3f
Parent(s): fb5c9a8

Update README.md

README.md CHANGED
@@ -1,11 +1,11 @@
----
-language:
-- es
-- en
----
-
 This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings left, following the procedure outlined here: https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90
 
 
 The original model has 582M parameters, with 384M of them being input and output embeddings.
 After shrinking the sentencepiece vocabulary from 250K to 30K (top 10K English and top 20K Spanish tokens), the number of model parameters dropped to 244M, reducing the model size from 2.2GB to 0.9GB (42% of the original).
+
+---
+language:
+- es
+- en
+---
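The vocabulary-shrinking procedure the README refers to boils down to keeping only the embedding rows for the tokens that survive the cut. This is a toy illustration of that row-selection idea under my own reading of the linked article (tiny made-up matrix, not the real 250K x d_model embeddings, and not the exact script used for this model):

```python
# Toy stand-in for the embedding matrix: 6 tokens, embedding dim 4.
old_embeddings = [[float(10 * i + j) for j in range(4)] for i in range(6)]

# Ids of the tokens kept in the reduced vocabulary (e.g. the most frequent
# Spanish and English tokens); every other row is dropped.
kept_ids = [0, 2, 5]

# The new, smaller embedding matrix keeps only those rows, in order;
# token id k in the new vocab maps to old token id kept_ids[k].
new_embeddings = [old_embeddings[i] for i in kept_ids]

print(len(new_embeddings))  # 3 rows instead of 6
```

The same selection would be applied to both the input embedding and the output (lm_head) matrix, and the tokenizer's sentencepiece model would be rebuilt around the kept tokens.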
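The parameter counts quoted in the README can be sanity-checked with a little arithmetic, assuming mt5-base's hidden size of 768, its 250112-token sentencepiece vocabulary, and untied input/output embeddings (two d_model-sized vectors per token); those assumptions are mine, not stated in the README:

```python
D_MODEL = 768        # mt5-base hidden size (assumed)
OLD_VOCAB = 250_112  # mT5 sentencepiece vocabulary ("250K")
NEW_VOCAB = 30_000   # top 10K English + top 20K Spanish tokens

# Input and output embeddings are untied in mT5, so each token
# contributes two D_MODEL-sized vectors.
old_embed = 2 * OLD_VOCAB * D_MODEL   # embedding parameters before shrinking
body = 582_000_000 - old_embed        # non-embedding parameters, unchanged
new_total = body + 2 * NEW_VOCAB * D_MODEL

print(round(old_embed / 1e6))  # 384, matching "384M ... embeddings"
print(round(new_total / 1e6))  # 244, matching the quoted 244M
```

Under these assumptions the numbers line up: 384M of the 582M parameters are embeddings, and a 30K vocabulary leaves about 244M.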