- Base model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- Dataset: AI Hub, 한국어 성능이 개선된 초거대AI 언어모델 개발 및 데이터 (development and data for a hyperscale AI language model with improved Korean performance)
Python code with Pipeline
```python
import transformers
import torch

model_id = "VIRNECT/llama-3-Korean-8B-r-v1"

# Build a text-generation pipeline, loading the model in bfloat16
# and placing it automatically across the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline.model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. 당신은 유능한 AI 어시스턴트 입니다. 사용자의 질문에 대해 친절하게 답변해주세요.'''
instruction = "화학공학이 다른 공학 분야와 어떻게 다른가요?"

messages = [
    {"role": "system", "content": f"{PROMPT}"},
    {"role": "user", "content": f"{instruction}"}
]

# Render the chat messages into the Llama-3 prompt format.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation at either the EOS token or Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# Print only the newly generated text, stripping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
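The same chat template and stopping tokens also work without the pipeline wrapper. The snippet below is a minimal sketch, not from the original card, that loads the model with the standard `AutoModelForCausalLM`/`AutoTokenizer` API and calls `generate` directly; the sampling parameters mirror the pipeline example above.

```python
# Minimal sketch of the same generation flow without the pipeline wrapper.
# Assumes the standard transformers Auto* API; not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VIRNECT/llama-3-Korean-8B-r-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. Please answer the user's questions kindly."},
    {"role": "user", "content": "화학공학이 다른 공학 분야와 어떻게 다른가요?"},
]

# Tokenize the chat template directly and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Same terminators as above: EOS plus Llama-3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=2048,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```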
- Downloads last month: 2,424