Overview
- A model that performs Korean text summarization.
Base Model
Dataset
30,000 examples sampled from the following summarization datasets on AI Hub were used:
- 추상 요약 사실성 검증 데이터 (abstractive-summary factuality verification data)
- 요약문 및 레포트 생성 데이터 (summary and report generation data)
- 문서요약 텍스트 (document summarization text)
- 도서자료 요약 (book material summaries)
Library installation
pip3 install transformers gradio vllm
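vLLM generally requires a CUDA-capable GPU. A quick way to verify the installation (a sanity check, not part of the original instructions) is to import each package and print its version:

python3 -c "import vllm, transformers, gradio; print(vllm.__version__, transformers.__version__, gradio.__version__)"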
Example code
from vllm import LLM, SamplingParams
import gradio as gr

model_path = "gangyeolkim/open-llama-2-ko-7b-summarization"

# Low temperature keeps summaries close to the source text.
sampling_params = SamplingParams(max_tokens=1024, temperature=0.1)

# vLLM loads the tokenizer itself, so no separate AutoTokenizer is needed.
llm = LLM(model=model_path, tokenizer=model_path, tensor_parallel_size=1)
def gen(message, history):
    # Prompt template used at fine-tuning time:
    # "### 원문:" marks the source text, "### 요약:" marks the summary.
    prompt = "\n".join([
        "### 원문:",
        f"{message}\n",
        "### 요약:\n",
    ])
    outputs = llm.generate(prompt, sampling_params)
    # One prompt in, one RequestOutput out; return its first completion.
    return outputs[0].outputs[0].text
# Gradio chat UI; share=True also serves a temporary public link.
demo = gr.ChatInterface(gen)
demo.launch(share=True)
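The Gradio UI is optional; the same prompt template also works for offline batch summarization. Below is a minimal sketch that reuses the llm and sampling_params objects defined above (sample_texts is a hypothetical placeholder, not from the original card):

def build_prompt(source: str) -> str:
    # Same "### 원문:" (source) / "### 요약:" (summary) template as gen() above.
    return "\n".join(["### 원문:", f"{source}\n", "### 요약:\n"])

# Hypothetical inputs; replace with real documents to summarize.
sample_texts = ["첫 번째 문서 ...", "두 번째 문서 ..."]

# vLLM accepts a list of prompts in a single generate() call and
# returns one RequestOutput per prompt, in order.
batch_outputs = llm.generate([build_prompt(t) for t in sample_texts], sampling_params)
for out in batch_outputs:
    print(out.outputs[0].text.strip())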