Update README.md
README.md
CHANGED
@@ -7,6 +7,7 @@ language:
 pipeline_tag: translation
 tags:
 - translate
+- awq
 ---
 # **Seagull-13b-translation-AWQ π**
 ![Seagull-typewriter](./Seagull-typewriter-pixelated.png)

@@ -64,7 +65,7 @@ It follows only **ChatML** format.
 ```

 #### Example
-**I highly recommend to
+**I highly recommend running inference with vLLM. I will write a guide for quick and easy inference if requested.**

 Since the chat_template already contains the instruction format above,
 you can use the code below.

@@ -74,7 +75,8 @@ device = "cuda" # the device to load the model onto
 model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
 tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
 messages = [
-    {"role": "
+    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
+    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
 ]
 encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
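The README states the model follows only the ChatML format. As a rough illustration of what `apply_chat_template` produces for the messages in the diff, here is a minimal sketch; `to_chatml` is a hypothetical helper written for this example, and the tokenizer's built-in chat_template remains the authoritative source of the exact prompt.

```python
# Hypothetical helper: renders a messages list in ChatML, the format
# this model expects. The tokenizer's chat_template is authoritative;
# this only illustrates the layout.
def to_chatml(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn so generation continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# The same messages as in the README snippet.
messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]
prompt = to_chatml(messages)
```

The system prompt translates to "Translate the given sentence into Korean."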
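The snippet in the diff stops at `apply_chat_template`. A hedged continuation through generation and decoding might look like the sketch below; `translate_one` is a hypothetical helper, it takes the `model` and `tokenizer` objects loaded in the README, and the generation arguments are illustrative defaults rather than the author's settings.

```python
def translate_one(text, model, tokenizer, device="cuda"):
    """Sketch: run one translation with the model/tokenizer from the README.

    Generation settings (greedy decoding, 512 new tokens) are illustrative
    assumptions, not documented by the model card.
    """
    messages = [
        {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
        {"role": "user", "content": text},
    ]
    # add_generation_prompt opens the assistant turn in the ChatML template.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)
    out = model.generate(inputs, max_new_tokens=512, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```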
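For the vLLM route the author recommends, a minimal sketch is below. It assumes the AWQ weights live at the repo id `kuotient/Seagull-13b-translation-AWQ` (inferred from the title, not confirmed by the diff), a CUDA GPU, and vLLM's standard AWQ support; `translate_with_vllm` is a hypothetical helper, not the author's code.

```python
def translate_with_vllm(texts):
    """Sketch: batch translation via vLLM's offline LLM API with AWQ weights.

    Assumptions: repo id 'kuotient/Seagull-13b-translation-AWQ' (hypothetical),
    a CUDA GPU, and vllm installed. Import is deferred so the function can be
    defined on machines without vLLM.
    """
    from vllm import LLM, SamplingParams

    llm = LLM(model="kuotient/Seagull-13b-translation-AWQ", quantization="awq")
    # Greedy decoding; stop at the ChatML end-of-turn token.
    params = SamplingParams(temperature=0.0, max_tokens=512, stop=["<|im_end|>"])
    prompts = [
        "<|im_start|>system\n주어진 문장을 한국어로 번역하세요.<|im_end|>\n"
        f"<|im_start|>user\n{t}<|im_end|>\n<|im_start|>assistant\n"
        for t in texts
    ]
    outputs = llm.generate(prompts, params)
    return [o.outputs[0].text.strip() for o in outputs]
```

Batching is where vLLM pays off: all prompts are scheduled together rather than generated one at a time.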