Seagull-13b-translation-AWQ
This is an AWQ-quantized version of the original model: Seagull-13b-translation.
Seagull-13b-translation is yet another translation model, but one that carefully addresses the following issues found in existing translation models:
- Newline or space not matching the original text
- Using a translated dataset with the first letter removed for training
- Code
- Markdown format
- LaTeX format
- etc.
These issues were checked thoroughly during training, but when using the model it is still recommended to inspect the output closely for these cases (e.g., text that contains code).
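For example, a quick sanity check along these lines (a hypothetical helper, not part of the model or the original card) can flag translations that dropped newlines or code fences:

def formatting_preserved(source: str, translation: str) -> bool:
    # Hypothetical check: the translation should keep the same number of
    # newlines and Markdown code fences as the source text.
    return (source.count("\n") == translation.count("\n")
            and source.count("```") == translation.count("```"))

src = "Here is code:\n```python\nprint('hi')\n```"
out = "코드는 다음과 같습니다:\n```python\nprint('hi')\n```"
assert formatting_preserved(src, out)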
If you're interested in building large-scale language models to solve a wide variety of problems across many domains, you should consider joining Allganize. For a coffee chat or any questions, please do not hesitate to contact me: [email protected]
This model was created as a personal experiment, unrelated to the organization I work for.
License
From original model author:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE
Model Details
Developed by
Jisoo Kim (kuotient)
Base Model
- beomi/llama-2-koen-13b
Datasets
- sharegpt_deepl_ko_translation
- AIHUB
  - 기술과학 분야 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus, technology and science domain)
  - 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus, daily life and colloquial speech)
Usage
Format
It follows the ChatML format only. The system prompt sets the translation direction: into Korean in the first block below, into English in the second.
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
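As a quick check (a sketch, assuming the tokenizer ships the ChatML chat template described above), you can render the template without tokenizing and confirm it matches the format:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")
messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},  # "Translate the given sentence into Korean."
    {"role": "user", "content": "{instruction}"},
]
# tokenize=False returns the raw prompt string; add_generation_prompt=True
# appends the trailing "<|im_start|>assistant" turn shown above.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))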
Example
I highly recommend running inference on this model with vLLM; a minimal sketch follows. I will write a guide for quick and easy inference if requested.
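This vLLM sketch is my own assumption, not from the original card: it loads the AWQ weights explicitly and builds the ChatML prompt by hand.

from vllm import LLM, SamplingParams

# quantization="awq" tells vLLM to load this repo's quantized weights.
llm = LLM(model="kuotient/Seagull-13b-translation-AWQ", quantization="awq")
sampling_params = SamplingParams(temperature=0.7, max_tokens=1000)

# ChatML prompt built by hand; note the required newline after <|im_start|>assistant.
prompt = (
    "<|im_start|>system\n"
    "주어진 문장을 한국어로 번역하세요.<|im_end|>\n"  # "Translate the given sentence into Korean."
    "<|im_start|>user\n"
    "Here are five examples of nutritious foods to serve your kids.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)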
Since the chat_template already contains the instruction format above, you can also use the transformers code below.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},  # "Translate the given sentence into Korean."
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]

# Render the ChatML prompt shown above and tokenize it; add_generation_prompt=True
# appends the trailing "<|im_start|>assistant" turn so the model starts translating.
encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
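The decoded string includes the prompt. To print only the translation, a small convenience sketch (not part of the original card) can slice off the prompt tokens before decoding:

# generated_ids contains the prompt tokens followed by the newly generated ones.
new_tokens = generated_ids[0][model_inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))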