|
--- |
|
license: cc-by-nc-sa-4.0 |
|
datasets: |
|
- squarelike/sharegpt_deepl_ko_translation |
|
language: |
|
- ko |
|
pipeline_tag: translation |
|
tags: |
|
- translate |
|
--- |
|
# **Seagull-13b-translation-AWQ**
|
![Seagull-typewriter](./Seagull-typewriter-pixelated.png) |
|
## This is a quantized version of the original model: Seagull-13b-translation.
|
**Seagull-13b-translation** is yet another translation model, but one built with careful attention to the following issues found in existing translation models.
|
- `newline` or `space` characters not matching the original text

- Training on translated datasets whose first letters were accidentally removed

- Code

- Markdown format

- LaTeX format

- etc.
|
|
|
These issues were checked thoroughly during training, but when using the model it is still recommended to inspect the output closely wherever they may appear (e.g., text containing code).
|
|
|
> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146). |
|
For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - [email protected] |
|
|
|
This model was created as a personal experiment, unrelated to the organization I work for. |
|
|
|
# **License** |
|
## From original model author: |
|
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT |
|
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE |
|
|
|
# **Model Details** |
|
#### **Developed by** |
|
Jisoo Kim (kuotient)
|
#### **Base Model** |
|
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) |
|
#### **Datasets** |
|
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation) |
|
- AIHUB |
|
- 기술과학 분야 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus, technical and scientific domain)
|
- 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus, everyday life and colloquial speech)
|
|
|
## Usage |
|
#### Format |
|
It follows the **ChatML** format only.
|
|
|
```python |
|
<|im_start|>system |
|
주어진 문장을 한국어로 번역하세요.<|im_end|>
|
<|im_start|>user |
|
{instruction}<|im_end|> |
|
<|im_start|>assistant |
|
# Don't miss newline here |
|
``` |
|
```python |
|
<|im_start|>system |
|
주어진 문장을 영어로 번역하세요.<|im_end|>
|
<|im_start|>user |
|
{instruction}<|im_end|> |
|
<|im_start|>assistant |
|
# Don't miss newline here |
|
``` |
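
The two templates differ only in the system prompt: the first asks the model to translate into Korean ("주어진 문장을 한국어로 번역하세요."), the second into English ("주어진 문장을 영어로 번역하세요."). Since the tokenizer's `chat_template` encodes this format, you can sanity-check the rendered prompt, including the trailing newline after `<|im_start|>assistant`, by rendering it as a plain string. A minimal sketch, assuming the tokenizer ships with the chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 영어로 번역하세요."},  # "Translate the given sentence into English."
    {"role": "user", "content": "{instruction}"},
]
# tokenize=False returns the raw prompt string; add_generation_prompt=True
# appends the final "<|im_start|>assistant\n" (note the newline).
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```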
|
|
|
#### Example |
|
**I highly recommend using vLLM. I will write a guide for quick and easy inference if requested.**
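
Until then, here is a minimal vLLM sketch. The `quantization="awq"` argument, the greedy sampling settings, and reusing the model id from the example below are assumptions; adjust them to your checkpoint and setup.

```python
from vllm import LLM, SamplingParams

# Assumes an AWQ-quantized checkpoint; drop quantization="awq" for fp16 weights.
llm = LLM(model="kuotient/Seagull-13B-translation", quantization="awq")
sampling = SamplingParams(temperature=0.0, max_tokens=1000)

# ChatML prompt following the Format section above; note the trailing newline.
prompt = (
    "<|im_start|>system\n주어진 문장을 영어로 번역하세요.<|im_end|>\n"
    "<|im_start|>user\n바나나는 원래 하얀색이야?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```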
|
|
|
Since the `chat_template` already contains the instruction format shown above, you can use the code below.
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    # System prompt from the Format section: "Translate the given sentence into English."
    {"role": "system", "content": "주어진 문장을 영어로 번역하세요."},
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
# (including the newline the template above warns about).
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
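
If you only want the translated text without the echoed prompt, one common pattern is to decode just the newly generated tokens:

```python
# Slice off the prompt tokens and decode only the model's continuation.
new_tokens = generated_ids[0][model_inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```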