---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- he
tags:
- pretrained
inference:
  parameters:
    temperature: 0.7
---

[<img src="dicta-logo.jpg" width="300px"/>](https://dicta.org.il)

# Model Card for DictaLM-2.0-GPTQ

The DictaLM-2.0 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters specializing in Hebrew.

For full details of this model please read our [release blog post](https://example.com).

This model contains the GPTQ 4-bit quantized version of the base model [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0).

You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27).

## Example Code

Running this code requires ~5.1GB of GPU VRAM.

```python
from transformers import pipeline

# This loads the 4-bit GPTQ-quantized model onto the GPU
model = pipeline('text-generation', 'dicta-il/dictalm2.0-GPTQ', device_map='cuda')

# Few-shot prompt: Hebrew past-tense verbs ("I walked", "I kept", "I heard", "I understood")
# paired with their future-tense forms; the model should complete the final pair.
prompt = """
עבר: הלכתי
עתיד: אלך

עבר: שמרתי
עתיד: אשמור

עבר: שמעתי
עתיד: אשמע

עבר: הבנתי
עתיד:
"""

print(model(prompt.strip(), do_sample=False, max_new_tokens=4, stop_sequence='\n'))
# [{'generated_text': 'עבר: הלכתי\nעתיד: אלך\n\nעבר: שמרתי\nעתיד: אשמור\n\nעבר: שמעתי\nעתיד: אשמע\n\nעבר: הבנתי\nעתיד: אבין\n\n'}]
```
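
If you prefer working with the model and tokenizer directly rather than through `pipeline`, a minimal sketch along the following lines should also work (hypothetical usage, assuming the GPTQ runtime dependencies such as `optimum` and `auto-gptq` are installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'dicta-il/dictalm2.0-GPTQ'

# The GPTQ checkpoint already stores 4-bit weights, so no dtype argument is needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='cuda')

# One past-tense/future-tense pair from the prompt above
inputs = tokenizer('עבר: הלכתי\nעתיד:', return_tensors='pt').to(model.device)
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```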

## Model Architecture

DictaLM-2.0 is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model with the following changes:
- An extended tokenizer with tokens for Hebrew, increasing the compression ratio (a rough comparison sketch follows this list)
- Continued pretraining on over 190B tokens of naturally occurring text, 50% Hebrew and 50% English.
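
As a rough illustration of the compression gain, you can compare how many tokens the same Hebrew sentence costs under the original Mistral tokenizer versus the extended DictaLM-2.0 tokenizer. This is a minimal sketch, assuming both tokenizers are downloadable from the Hugging Face Hub; the sentence is just an arbitrary example:

```python
from transformers import AutoTokenizer

# Hypothetical comparison of tokenizer compression on a Hebrew sentence
base_tok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
hebrew_tok = AutoTokenizer.from_pretrained('dicta-il/dictalm2.0')

sentence = 'הלכתי לשמור על הילדים בגן'  # "I went to watch the children at the kindergarten"

# The extended vocabulary should cover the Hebrew words with noticeably fewer pieces
print('Mistral-7B-v0.1:', len(base_tok(sentence)['input_ids']))
print('DictaLM-2.0:   ', len(hebrew_tok(sentence)['input_ids']))
```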

## Notice

DictaLM 2.0 is a pretrained base model and therefore does not have any moderation mechanisms.

## Citation

If you use this model, please cite:

```bibtex
[Will be added soon]
```