---
license: apache-2.0
pipeline_tag: text-generation
language:
  - en
  - he
tags:
- pretrained
inference: false
---

[<img src="dicta-logo.jpg" width="300px"/>](https://dicta.org.il)


# Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities

The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters trained to specialize in Hebrew text. 

For full details of this model please read our [release blog post](https://dicta.org.il/dicta-lm) or the [technical report](https://arxiv.org/abs/2407.07080).

This model contains the GPTQ 4-bit quantized version of the base model [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0).

You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27).
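
To check which quantization scheme the checkpoint ships with before downloading the full weights, you can inspect its configuration. A minimal sketch, assuming the repository exposes a standard `quantization_config` entry in its `config.json` (as GPTQ exports typically do):

```python
from transformers import AutoConfig

# Load only the configuration (no weights) and print the quantization settings
config = AutoConfig.from_pretrained('dicta-il/dictalm2.0-GPTQ')
print(config.quantization_config)  # expected to report 4-bit GPTQ settings such as bits and group size
```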

## Example Code

Running this code requires ~5.1GB of GPU VRAM.

```python
from transformers import pipeline

# This loads the 4-bit GPTQ-quantized model onto the GPU
model = pipeline('text-generation', 'dicta-il/dictalm2.0-GPTQ', device_map='cuda')

# Few-shot prompt: Hebrew past-tense verbs ("עבר" = past) paired with their future-tense forms ("עתיד" = future)
prompt = """
עבר: הלכתי
עתיד: אלך

עבר: שמרתי
עתיד: אשמור

עבר: שמעתי
עתיד: אשמע

עבר: הבנתי
עתיד:
"""

print(model(prompt.strip(), do_sample=False, max_new_tokens=4, stop_sequence='\n'))
# [{'generated_text': 'עבר: הלכתי\nעתיד: אלך\n\nעבר: שמרתי\nעתיד: אשמור\n\nעבר: שמעתי\nעתיד: אשמע\n\nעבר: הבנתי\nעתיד: אבין\n\n'}]
```
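
If you prefer working with the model and tokenizer objects directly rather than through `pipeline`, a roughly equivalent variant is sketched below, reusing the `prompt` from the example above. This is not the official usage snippet; it assumes a GPTQ loading backend for `transformers` (e.g. `optimum` with `auto-gptq`) is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the 4-bit GPTQ checkpoint onto the GPU; requires a GPTQ backend to be installed
tokenizer = AutoTokenizer.from_pretrained('dicta-il/dictalm2.0-GPTQ')
model = AutoModelForCausalLM.from_pretrained('dicta-il/dictalm2.0-GPTQ', device_map='cuda')

# Greedy decoding of the few-shot prompt, mirroring the pipeline call above
inputs = tokenizer(prompt.strip(), return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```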

## Model Architecture

DictaLM-2.0 is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model with the following change:
- An extended tokenizer with 1,000 injected tokens specifically for Hebrew, increasing the compression rate from 5.78 tokens/word to 2.76 tokens/word (illustrated in the sketch below).
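
To get a feel for the effect of the extended vocabulary, one might compare tokens-per-word on a short Hebrew sample between the original Mistral tokenizer and the DictaLM tokenizer. A minimal sketch, assuming you have access to both repositories; the sample sentence is arbitrary, whereas the 5.78 and 2.76 figures above are corpus-level averages:

```python
from transformers import AutoTokenizer

# Arbitrary Hebrew sample sentence ("Yesterday I went to the bookstore and bought three new books")
text = 'הלכתי אתמול לחנות הספרים וקניתי שלושה ספרים חדשים'

for name in ['mistralai/Mistral-7B-v0.1', 'dicta-il/dictalm2.0']:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok.encode(text, add_special_tokens=False))
    n_words = len(text.split())
    print(f'{name}: {n_tokens / n_words:.2f} tokens/word')
```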

## Notice

DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.

## Citation

If you use this model, please cite:

```bibtex
@misc{shmidman2024adaptingllmshebrewunveiling,
      title={Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities}, 
      author={Shaltiel Shmidman and Avi Shmidman and Amir DN Cohen and Moshe Koppel},
      year={2024},
      eprint={2407.07080},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.07080}, 
}
```