---

language:
  - hu
tags:
  - fill-mask
license: cc-by-nc-4.0
widget:
- text: "Mesélek egy [MASK] az oroszlánról."
---

# PULI BERT-Large

For further details, see [our demo site](https://juniper.nytud.hu/demo/nlp).

  - Hungarian BERT large model (MegatronBERT; see the config-check sketch after this list)
  - Trained with Megatron-DeepSpeed [github](https://github.com/microsoft/Megatron-DeepSpeed)
  - Dataset: 36.3 billion words
  - Checkpoint: 1 500 000 steps
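
The MegatronBERT architecture can be confirmed directly from the configuration published with the checkpoint; a minimal sketch, assuming only that the model is available on the Hugging Face Hub as NYTK/PULI-BERT-Large:

```python
from transformers import AutoConfig

# Load the config shipped with the checkpoint and inspect the architecture.
config = AutoConfig.from_pretrained('NYTK/PULI-BERT-Large')
print(config.model_type)                              # expected: "megatron-bert"
print(config.num_hidden_layers, config.hidden_size)   # BERT-large dimensions
```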

## Limitations

- max_seq_length = 1024 (longer inputs must be truncated; see the sketch below)
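
Inputs longer than 1024 tokens have to be truncated (or split into chunks) before they are fed to the model; a minimal truncation sketch, assuming the NYTK/PULI-BERT-Large tokenizer:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large')

# Placeholder for any document longer than 1024 tokens.
long_text = "Replace me by any long text you'd like. " * 400

# Truncate to the model's 1024-token limit.
encoded = tokenizer(long_text, truncation=True, max_length=1024, return_tensors='pt')
print(encoded['input_ids'].shape)  # at most (1, 1024)
```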


## Citation
If you use this model, please cite the following paper:

```
@inproceedings{yang-puli,
    title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
    booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
    year = {2023},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Hungary},
    author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
    pages = {247--262}
}
```

## Usage

```python
from transformers import BertTokenizer, MegatronBertModel

# do_lower_case is a tokenizer initialization argument, not a call argument
tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large', do_lower_case=False)
model = MegatronBertModel.from_pretrained('NYTK/PULI-BERT-Large')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # last_hidden_state, pooler_output
```
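
The model is tagged for the fill-mask task (see the widget example above), so masked-token prediction can also be done with the high-level pipeline; a minimal sketch, assuming the checkpoint ships a masked-LM head:

```python
from transformers import pipeline

# Predict the masked token in the widget example sentence.
fill_mask = pipeline('fill-mask', model='NYTK/PULI-BERT-Large')
for prediction in fill_mask("Mesélek egy [MASK] az oroszlánról."):
    print(prediction['token_str'], round(prediction['score'], 4))
```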