---
language:
- hu
tags:
- fill-mask
license: cc-by-nc-4.0
widget:
- text: "Elmesélek egy történetet a nyelvtechnológiáról."
---

# PULI BERT-Large

For further details, see [our demo site](https://juniper.nytud.hu/demo/nlp).

- Hungarian BERT large model (MegatronBERT)
- Trained with [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed)
- Dataset: 36.3 billion words
- Checkpoint: 150 000 steps

## Limitations

- max_seq_length = 1024 (longer inputs have to be truncated before encoding; see the sketch below)
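
A minimal sketch of how longer inputs can be kept within this limit; the truncation arguments are standard `transformers` tokenizer options rather than anything specific to this model, and the long text is only a placeholder:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large')

# Hypothetical long document; anything beyond 1024 tokens is cut off.
long_text = "Elmesélek egy történetet a nyelvtechnológiáról. " * 500
encoded = tokenizer(long_text, truncation=True, max_length=1024, return_tensors='pt')
print(encoded['input_ids'].shape)  # at most (1, 1024)
```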

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{yang-gpt3,
    title = {Jönnek a nagyok! GPT-3, GPT-2 és BERT large nyelvmodellek magyar nyelvre},
    booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
    year = {2023},
    publisher = {Szegedi Tudományegyetem},
    address = {Szeged, Hungary},
    author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Jelencsik-Mátyus, Kinga and Vadász, Noémi and Váradi, Tamás},
    pages = {0}
}
```

## Usage

```python
from transformers import BertTokenizer, MegatronBertModel

# Load the tokenizer and the MegatronBERT encoder from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large')
model = MegatronBertModel.from_pretrained('NYTK/PULI-BERT-Large')

# Encode an example sentence and run it through the model
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
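
Since the model is tagged for fill-mask, it can also be used through the `fill-mask` pipeline. This is a minimal sketch that assumes the published checkpoint ships a masked-LM head and uses the standard `[MASK]` token; the example sentence is only illustrative:

```python
from transformers import pipeline

# Load the fill-mask pipeline with this model (assumes a masked-LM head is available)
fill_mask = pipeline('fill-mask', model='NYTK/PULI-BERT-Large')

# Predict the masked word in a Hungarian sentence
for prediction in fill_mask('Elmesélek egy [MASK] a nyelvtechnológiáról.'):
    print(prediction['token_str'], prediction['score'])
```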