
Overview

  • A model that performs the Korean text summarization task.

Base Model

Dataset

We sampled 30,000 examples from the summarization datasets on AI Hub listed below.
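The 30,000-example sampling step described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the field names and corpus are hypothetical placeholders.

```python
import random

# Hypothetical stand-in for the AI Hub summarization corpus;
# the real data would be loaded from the downloaded dataset files.
corpus = [{"document": f"doc {i}", "summary": f"sum {i}"} for i in range(100_000)]

# Draw a fixed-size random sample of 30,000 training examples.
random.seed(42)  # seed chosen for reproducibility of this sketch
sampled = random.sample(corpus, k=30_000)

print(len(sampled))
```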

Library Installation

pip3 install transformers gradio vllm

Example Code

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
import gradio as gr

model_path = "gangyeolkim/open-llama-2-ko-7b-summarization"
sampling_params = SamplingParams(max_tokens=1024, temperature=0.1)
tokenizer = AutoTokenizer.from_pretrained(model_path)

llm = LLM(model=model_path, tokenizer=model_path, tensor_parallel_size=1)

def gen(text, history):
    # Build the prompt: the source text under "### 원문:" followed by
    # an empty "### μš”μ•½:" section for the model to complete.
    # `history` is required by gr.ChatInterface but is not used here.
    prompt = "\n".join([
        "### 원문:",
        f"{text}\n",
        "### μš”μ•½:\n",
    ])
    outputs = llm.generate(prompt, sampling_params)
    return outputs[0].outputs[0].text

demo = gr.ChatInterface(gen)
demo.launch(share=True)
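The model expects prompts in the "### 원문: / ### μš”μ•½:" format used in the example above. A minimal helper that builds such a prompt (the function name is a hypothetical choice for illustration) can be sketched as:

```python
def build_summarization_prompt(source_text: str) -> str:
    # Reproduce the prompt template from the example: the source
    # document under "### 원문:" and an empty "### μš”μ•½:" section
    # that the model is expected to complete.
    return "\n".join([
        "### 원문:",
        f"{source_text}\n",
        "### μš”μ•½:\n",
    ])

prompt = build_summarization_prompt("μš”μ•½ν•  원문 ν…μŠ€νŠΈ")
print(prompt)
```

The resulting string can be passed directly as the `prompts` argument to `llm.generate` together with the `SamplingParams` above.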