---
license: cc-by-4.0
language:
- he
inference: false
---
# DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew

A state-of-the-art language model for Hebrew, released [here](link to arxiv).

This is the model fine-tuned for the prefix segmentation task, i.e. splitting prefix particles (such as 讘, 讛, 讜) off of the base word.

Sample usage:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictabert-seg')
# trust_remote_code=True is required to load the custom segmentation head and its `predict` helper
model = AutoModel.from_pretrained('dicta-il/dictabert-seg', trust_remote_code=True)

model.eval()

sentence = '讘砖谞转 1948 讛砖诇讬诐 讗驻专讬诐 拽讬砖讜谉 讗转 诇讬诪讜讚讬讜 讘驻讬住讜诇 诪转讻转 讜讘转讜诇讚讜转 讛讗诪谞讜转 讜讛讞诇 诇驻专住诐 诪讗诪专讬诐 讛讜诪讜专讬住讟讬讬诐'

# `predict` takes a list of sentences plus the tokenizer, and returns,
# for each sentence, a list of per-word segment lists (see the output below)
print(model.predict([sentence], tokenizer))
```

Output:
```json
[
	[
		[ "[CLS]" ],
		[ "讘","砖谞转" ],
		[ "1948" ],
		[ "讛砖诇讬诐" ],
		[ "讗驻专讬诐" ],
		[ "拽讬砖讜谉" ],
		[ "讗转" ],
		[ "诇讬诪讜讚讬讜" ],
		[ "讘","驻讬住讜诇" ],
		[ "诪转讻转" ],
		[ "讜讘","转讜诇讚讜转" ],
		[ "讛","讗诪谞讜转" ],
		[ "讜","讛讞诇" ],
		[ "诇驻专住诐" ],
		[ "诪讗诪专讬诐" ],
		[ "讛讜诪讜专讬住讟讬讬诐" ],
		[ "[SEP]" ]
	]
]
```
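For downstream use you may want to flatten the nested output back into a plain segmented string. Below is a minimal sketch, assuming only the output structure shown above; the helper `join_segments` is illustrative and not part of the released API:

```python
def join_segments(prediction, sep='|'):
    # prediction: the output for one sentence - a list of per-word segment lists,
    # including the [CLS] and [SEP] entries shown above
    words = []
    for segments in prediction:
        if segments in (["[CLS]"], ["[SEP]"]):
            continue  # skip the special tokens
        words.append(sep.join(segments))  # e.g. ["讘", "砖谞转"] -> "讘|砖谞转"
    return ' '.join(words)

# Usage with the sample above:
# output = model.predict([sentence], tokenizer)
# print(join_segments(output[0]))
```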


## Citation

If you use DictaBERT in your research, please cite `DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew`

**BibTeX:**

To add

## License

Shield: [![CC BY 4.0][cc-by-shield]][cc-by]

This work is licensed under a
[Creative Commons Attribution 4.0 International License][cc-by].

[![CC BY 4.0][cc-by-image]][cc-by]

[cc-by]: http://creativecommons.org/licenses/by/4.0/
[cc-by-image]: https://i.creativecommons.org/l/by/4.0/88x31.png
[cc-by-shield]: https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg