---
license: llama3
language:
- ko
base_model: saltlux/Ko-Llama3-Luxia-8B
---

## Model Details
This model is an instruction fine-tuned version of [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B), a model developed by Saltlux AI Labs.

It was trained for 3 epochs with SFTTrainer on the [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) dataset.

The instruction prompt follows the same ChatML format as the Qwen2 models:
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the Qwen2?<|im_end|>
<|im_start|>assistant
Qwen2 is the new series of Qwen large language models<|im_end|>
<|im_start|>user
Tell me more<|im_end|>
<|im_start|>assistant
```
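The released tokenizer does not ship with a chat template (see the evaluation section below), so prompts must be written by hand. As a convenience, here is a minimal sketch of registering the standard ChatML Jinja template yourself; the template string is an assumption mirroring the format above, not part of the released model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lubocido/Ko-Llama3-Luxia-8B-it")

# Assumed standard ChatML Jinja template; not shipped with the released model.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\\n' }}{% endif %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the Qwen2?"},
]
# Renders the ChatML prompt shown above, ending with an open assistant turn.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```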
## Hyperparameters

- num_train_epochs = 3
- warmup_steps = 0.03
- learning_rate = 1e-5
- optim = "adamw_torch_fused"
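For reference, a minimal sketch of the training setup using trl's `SFTTrainer` (trl >= 0.9 API). The output directory, dataset column names, prompt formatting, and `bf16` flag are assumptions rather than the author's exact script, and the fractional warmup value is expressed as `warmup_ratio`, the usual reading of `warmup_steps = 0.03`:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Column names ("instruction", "output") are assumptions about the dataset schema.
def to_chatml(example):
    return {"text": (f"<|im_start|>user\n{example['instruction']}<|im_end|>\n"
                     f"<|im_start|>assistant\n{example['output']}<|im_end|>")}

dataset = load_dataset("maywell/ko_wikidata_QA", split="train").map(to_chatml)

args = SFTConfig(
    output_dir="./Ko-Llama3-Luxia-8B-it",  # hypothetical path
    num_train_epochs=3,
    learning_rate=1e-5,
    warmup_ratio=0.03,   # the card lists warmup_steps=0.03; a fractional value is a ratio
    optim="adamw_torch_fused",
    bf16=True,           # assumption, matching the bfloat16 inference dtype
    dataset_text_field="text",
)

trainer = SFTTrainer(
    model="saltlux/Ko-Llama3-Luxia-8B",  # base model named in the card
    args=args,
    train_dataset=dataset,
)
trainer.train()
```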
## Evaluation with LangChain

Since `apply_chat_template` is not configured for this tokenizer, you can evaluate the model in LangChain by passing the prompt string directly:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_community.llms import HuggingFacePipeline
from langchain_core.output_parsers import StrOutputParser

model_id = "lubocido/Ko-Llama3-Luxia-8B-it"
device = "cuda:0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map=device, torch_dtype=torch.bfloat16)

tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token

# System prompt (Korean): "You are a friendly chatbot and must answer requests as
# thoroughly and kindly as possible. Analyze the information the user provides,
# quickly grasp their intent, and write your answer accordingly.
# Always respond in very natural Korean."
sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다.
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 작성해야합니다.
항상 매우 자연스러운 한국어로 응답하세요."""

# "What is the command to kill a process in Linux?"
question = "리눅스에서 프로세스를 죽이는 명령어가 뭐지?"

# ChatML prompt written out by hand, since no chat template is set.
template = """<|im_start|>system
{sys_message}<|im_end|>
<|im_start|>user
{question}<|im_end|>
<|im_start|>assistant
"""

input_data = {
    'sys_message': sys_message,
    'question': question,
}

prompt = PromptTemplate(template=template, input_variables=['sys_message', 'question'])

# The model is already placed on `device`, so the pipeline needs no device argument.
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer,
                do_sample=True, max_length=512, temperature=0.1,
                repetition_penalty=1.2, num_beams=1, top_k=20, top_p=0.9)

langchain_pipeline = HuggingFacePipeline(pipeline=pipe)

chains = LLMChain(llm=langchain_pipeline, prompt=prompt, output_parser=StrOutputParser(), verbose=True)

print(chains.invoke(input=input_data)['text'])
```
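Note that `LLMChain` is deprecated in recent LangChain releases; an equivalent LCEL composition of the same objects is:

```python
# LCEL equivalent of the LLMChain above, reusing the same prompt and pipeline.
chain = prompt | langchain_pipeline | StrOutputParser()
print(chain.invoke(input_data))
```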
Sample output:

```
<|im_start|>user
리눅스에서 프로세스를 죽이는 명령어가 뭐지?<|im_end|>
<|im_start|>assistant
프로세스는 운영체제가 실행 중인 프로그램으로, 프로세스 ID(PID)라는 고유한 식별자를 가지고 있습니다. 프로세스가 종료되면 시스템 자원이 해제됩니다. 리눅스의 경우 kill 명령어를 통해 프로세스를 종료할 수 있으며, 이 명령어는 PID 또는 이름과 같은 다양한 방법으로 프로세스를 찾아서 종료시킬 수 있습니다.
또한 SIGKILL 신호를 보내거나 -9 옵션을 사용하면 강제적으로 프로세스를 종료할 수도 있습니다.
그러나 일부 프로세스는 강제 종료될 때 문제를 일으킬 수 있으므로 주의해서 사용해야 합니다.<|im_end|>
```

In English, the model answers that a process is a running program identified by a unique process ID (PID) whose resources are released on termination; on Linux the `kill` command terminates a process, locating it by PID or by name, and sending `SIGKILL` or using the `-9` option forces termination, which should be used with care since some processes can cause problems when force-killed.