|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
language: |
|
- en |
|
- he |
|
tags: |
|
- instruction-tuned |
|
base_model: dicta-il/dictalm2.0 |
|
inference: false |
|
--- |
|
|
|
[<img src="https://i.ibb.co/5Lbwyr1/dicta-logo.jpg" width="300px"/>](https://dicta.org.il/dicta-lm) |
|
|
|
# Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities |
|
|
|
The DictaLM-2.0-Instruct Large Language Model (LLM) is an instruct-tuned version of the [DictaLM-2.0](https://huggingface.co/dicta-il/dictalm2.0) generative model, fine-tuned on a variety of conversation datasets.
|
|
|
For full details of this model, please read our [release blog post](https://dicta.org.il/dicta-lm) or the [technical report](https://arxiv.org/abs/2407.07080).
|
|
|
This repository contains the 4-bit GPTQ-quantized version of [DictaLM-2.0-Instruct](https://huggingface.co/dicta-il/dictalm2.0-instruct), the instruct-tuned model designed for chat.
|
|
|
You can view and access the full collection of base/instruct unquantized/quantized versions of `DictaLM-2.0` [here](https://huggingface.co/collections/dicta-il/dicta-lm-20-collection-661bbda397df671e4a430c27). |
|
|
|
## Instruction format |
|
|
|
In order to leverage instruction fine-tuning, your prompt should be wrapped in `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant's generation will be terminated by the end-of-sentence token id.
|
|
|
E.g. |
|
``` |
|
text = """<s>[INST] 讗讬讝讛 专讜讟讘 讗讛讜讘 注诇讬讱? [/INST] |
|
讟讜讘, 讗谞讬 讚讬 诪讞讘讘 讻诪讛 讟讬驻讜转 诪讬抓 诇讬诪讜谉 住讞讜讟 讟专讬. 讝讛 诪讜住讬祝 讘讚讬讜拽 讗转 讛讻诪讜转 讛谞讻讜谞讛 砖诇 讟注诐 讞诪爪诪抓 诇讻诇 诪讛 砖讗谞讬 诪讘砖诇 讘诪讟讘讞!</s>[INST] 讛讗诐 讬砖 诇讱 诪转讻讜谞讬诐 诇诪讬讜谞讝? [/INST]" |
|
``` |
|
|
|
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method.
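
For illustration, here is a minimal sketch (the message content is just an example) that renders the chat template to a plain string with `tokenize=False`, so you can inspect the `[INST]`/`[/INST]` wrapping it produces:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct-GPTQ")

messages = [
    {"role": "user", "content": "איזה רוטב אהוב עליך?"},  # "What's your favorite sauce?"
]

# tokenize=False returns the rendered prompt as a string rather than token ids,
# which makes it easy to confirm the [INST] ... [/INST] structure described above.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```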
|
|
|
## Example Code |
|
|
|
Running this code requires less than 5GB of GPU VRAM. |
|
|
|
```python |
|
# Note: loading a GPTQ-quantized checkpoint through transformers typically also requires the optimum and auto-gptq packages.

from transformers import AutoModelForCausalLM, AutoTokenizer
|
|
|
device = "cuda" # the device to load the model onto |
|
|
|
model = AutoModelForCausalLM.from_pretrained("dicta-il/dictalm2.0-instruct-GPTQ", device_map=device) |
|
tokenizer = AutoTokenizer.from_pretrained("dicta-il/dictalm2.0-instruct-GPTQ") |
|
|
|
messages = [ |
|
{"role": "user", "content": "讗讬讝讛 专讜讟讘 讗讛讜讘 注诇讬讱?"}, |
|
{"role": "assistant", "content": "讟讜讘, 讗谞讬 讚讬 诪讞讘讘 讻诪讛 讟讬驻讜转 诪讬抓 诇讬诪讜谉 住讞讜讟 讟专讬. 讝讛 诪讜住讬祝 讘讚讬讜拽 讗转 讛讻诪讜转 讛谞讻讜谞讛 砖诇 讟注诐 讞诪爪诪抓 诇讻诇 诪讛 砖讗谞讬 诪讘砖诇 讘诪讟讘讞!"}, |
|
{"role": "user", "content": "讛讗诐 讬砖 诇讱 诪转讻讜谞讬诐 诇诪讬讜谞讝?"} |
|
] |
|
|
|
encoded = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device) |
|
|
|
generated_ids = model.generate(encoded, max_new_tokens=50, do_sample=True) |
|
decoded = tokenizer.batch_decode(generated_ids) |
|
print(decoded[0]) |
|
# <s> [INST] איזה רוטב אהוב עליך? [/INST]

# טוב, אני די מחבב כמה טיפות מיץ לימון סחוט טרי. זה מוסיף בדיוק את הכמות הנכונה של טעם חמצמץ לכל מה שאני מבשל במטבח!</s> [INST] האם יש לך מתכונים למיונז? [/INST]

# בטח, הנה מתכון קל מאוד למיונז ביתי:

#

# מרכיבים:

# - 2 ביצים גדולות

# - 1 כף חרדל דיז'ון

# - 2 כפות

# (English: "Sure, here's a very easy recipe for homemade mayonnaise: Ingredients: - 2 large eggs - 1 tablespoon Dijon mustard - 2 tablespoons")
|
# (it stopped early because we set max_new_tokens=50) |
|
``` |
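
If you prefer to see tokens as they are generated (for example in an interactive demo), a `TextStreamer` can be attached to `generate()`. The following is a minimal sketch reusing the `model`, `tokenizer`, and `encoded` objects from the example above; the generation settings are illustrative, not tuned:

```python
from transformers import TextStreamer

# Print the reply to stdout as it is generated instead of waiting for
# generate() to finish; skip_prompt=True avoids echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Same call as above, with a streamer attached and a larger token budget so
# the recipe is not cut off mid-list.
model.generate(encoded, streamer=streamer, max_new_tokens=200, do_sample=True)
```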
|
|
|
## Model Architecture |
|
|
|
DictaLM-2.0-Instruct follows the [Zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) recipe for fine-tuning an instruct model, with an extended instruct dataset for Hebrew. |
|
|
|
## Limitations |
|
|
|
The DictaLM 2.0 Instruct model is a demonstration that the base model can be fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments that require moderated outputs.
|
|
|
## Citation |
|
|
|
If you use this model, please cite: |
|
|
|
```bibtex |
|
@misc{shmidman2024adaptingllmshebrewunveiling, |
|
title={Adapting LLMs to Hebrew: Unveiling DictaLM 2.0 with Enhanced Vocabulary and Instruction Capabilities}, |
|
author={Shaltiel Shmidman and Avi Shmidman and Amir DN Cohen and Moshe Koppel}, |
|
year={2024}, |
|
eprint={2407.07080}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={https://arxiv.org/abs/2407.07080}, |
|
} |
|
``` |