---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- he
tags:
- instruction-tuned
base_model: dicta-il/dictalm2.0
inference:
  parameters:
    temperature: 0.7
---

[<img src="https://i.ibb.co/5Lbwyr1/dicta-logo.jpg" width="300px"/>](https://dicta.org.il)


# Model Card for DictaLM-2.0-Instruct

The DictaLM-2.0-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0) generative model using a variety of conversation datasets.

For full details of this model, please read our [release blog post](https://example.com).

This repository contains the AWQ 4-bit quantized version of the instruct-tuned model designed for chat, [DictaLM-2.0-Instruct](https://huggingface.co/dicta-il/dictalm2.0-instruct).

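To confirm what you are downloading, the 4-bit AWQ settings are recorded in the checkpoint's `config.json` and can be inspected before any weights are loaded. A minimal sketch (the exact keys inside `quantization_config` depend on how the checkpoint was exported):

```python
from transformers import AutoConfig

# Read only the configuration of the quantized checkpoint (no weights are downloaded).
config = AutoConfig.from_pretrained("dicta-il/dictalm2.0-instruct-AWQ")

# AWQ checkpoints ship a quantization_config block; print it if present.
print(getattr(config, "quantization_config", "no quantization_config found"))
```
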
You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with the begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.

E.g.
```
text = """<s>[INST] What is your favourite condiment? [/INST]
Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>[INST] Do you have mayonnaise recipes? [/INST]"""
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method.

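For instance, rendering a conversation without tokenizing lets you inspect the exact prompt string the template produces. A minimal sketch using a shortened version of the English conversation shown above (the precise whitespace depends on the bundled template):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct-AWQ")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# tokenize=False returns the rendered prompt string instead of token ids,
# so you can verify the [INST] ... [/INST] structure described above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
# Expected shape (roughly): <s>[INST] ... [/INST] ... </s>[INST] ... [/INST]
```
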
## Example Code

Running this code requires less than 5GB of GPU VRAM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("dicta-il/dictalm2.0-instruct-AWQ", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct-AWQ")

# The condiment conversation from the instruction-format example above, in Hebrew:
messages = [
    {"role": "user", "content": "מה הרוטב אהוב עליך?"},
    {"role": "assistant", "content": "טוב, אני די מחבב כמה טיפות מיץ לימון סחוט טרי. זה מוסיף בדיוק את הכמות הנכונה של טעם חמצמץ לכל מה שאני מבשל במטבח!"},
    {"role": "user", "content": "האם יש לך מתכונים למיונז?"}
]

encoded = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

generated_ids = model.generate(encoded, max_new_tokens=50, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
# <s> [INST] מה הרוטב אהוב עליך? [/INST]
# טוב, אני די מחבב כמה טיפות מיץ לימון סחוט טרי. זה מוסיף בדיוק את הכמות הנכונה של טעם חמצמץ לכל מה שאני מבשל במטבח!</s> [INST] האם יש לך מתכונים למיונז? [/INST]
# הנה מתכון פשוט וקל למיונז ביתי:
#
# מרכיבים:
# - ביצה גדולה אחת
# - 2 כפות חומץ יין לבן
# - 1 כף
# (The reply starts a homemade mayonnaise recipe: one large egg, 2 tablespoons of white wine vinegar, 1 tablespoon ...)
# (it stopped early because we set max_new_tokens=50)
```
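
To watch the reply appear token by token instead of waiting for the full generation, you can attach a streamer to `generate`. A minimal sketch reusing the objects from the example above; the sampling settings are illustrative (the suggested inference temperature for this model is 0.7, as in the metadata block):

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as they are produced, skipping the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

generated_ids = model.generate(
    encoded,
    max_new_tokens=256,   # a higher cap, so the recipe above would not be cut off
    do_sample=True,
    temperature=0.7,      # matches the suggested inference temperature
    streamer=streamer,
)
```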

## Model Architecture

DictaLM-2.0-Instruct follows the [Zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) recipe for fine-tuning an instruct model, with an extended instruct dataset for Hebrew.

## Limitations

The DictaLM 2.0 Instruct model is a demonstration that the base model can be fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## Citation

If you use this model, please cite:

```bibtex
[Will be added soon]
```