---
language:
- en
license: apache-2.0
tags:
- chat
- code
pipeline_tag: text-generation
datasets:
- YashJain/GitAI
library_name: transformers
---
# YashJain/GitAI-Qwen2-0.5B-Instruct
## Requirements
The code for Qwen2 is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
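You can install or upgrade with pip:
```
pip install "transformers>=4.37.0"
```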
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the model weights; device_map="auto" places them on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    "YashJain/GitAI-Qwen2-0.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("YashJain/GitAI-Qwen2-0.5B-Instruct")

prompt = "How to undo my last commit"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's prompt format
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
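The `generate` call above uses the model's default decoding settings. To control randomness explicitly, you can pass the standard `transformers` sampling parameters; a minimal sketch (the values here are illustrative, not tuned for this model):
```python
# Sampled generation: do_sample enables sampling; temperature and top_p
# control randomness. These values are illustrative, not tuned.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```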