Update README.md
README.md CHANGED
@@ -8,13 +8,13 @@ license: cc-by-4.0
 
 ## Information
 A word2vec model trained by Andrey Kutuzov ([email protected]) on a vocabulary of size 273930 corresponding to 2252637050 tokens from the dataset `English_Wikipedia_Dump_of_February_2017`.
-The model is trained with the following properties: lemmatization and postag with the algorithm Global Vectors with window of 5 and dimension of 300.
+The model is trained with the following properties: no lemmatization and postag with the algorithm Global Vectors with window of 5 and dimension of 300.
 
 ## How to use?
 ```
 from gensim.models import KeyedVectors
 from huggingface_hub import hf_hub_download
-model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/
+model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/eng_no_lem_postag_300_cont_skip_5_English_Wikipedia_Dump_of_February_2017", filename="model.bin"), binary=True, unicode_errors="ignore")
 
 ## Citation
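
For context, here is a minimal usage sketch expanding the snippet added in the diff above. The `repo_id` and `filename` are copied verbatim from the added line; everything else (the probe token, the assumption that vocabulary entries carry POS-tag suffixes such as `_NOUN`, and the printed sanity checks) is illustrative and not taken from the repository.

```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# Fetch the binary word2vec file from the Hub (repo_id/filename as in the diff above)
# and load it with gensim.
path = hf_hub_download(
    repo_id="Word2vec/eng_no_lem_postag_300_cont_skip_5_English_Wikipedia_Dump_of_February_2017",
    filename="model.bin",
)
model = KeyedVectors.load_word2vec_format(path, binary=True, unicode_errors="ignore")

# Sanity checks against the properties stated in the README:
# 300-dimensional vectors over a vocabulary of 273930 entries.
print(model.vector_size)        # expected: 300
print(len(model.key_to_index))  # expected: 273930

# Querying the vectors. The repository name suggests POS-tagged, non-lemmatized
# tokens, so entries may look like "car_NOUN" rather than "car" (an assumption;
# inspect model.index_to_key[:10] to confirm the actual token format).
probe = "car_NOUN" if "car_NOUN" in model.key_to_index else model.index_to_key[0]
print(model.most_similar(probe, topn=5))
```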