Uploaded Model
- Developed by: kattyan
- License: apache-2.0
- Finetuned from model: llm-jp/llm-jp-3-13b
This LLaMA-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
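For reference, below is a minimal sketch of what an Unsloth + TRL supervised fine-tuning setup for this base model typically looks like. The LoRA settings, training arguments, and the `train.jsonl` dataset file are illustrative assumptions, not the actual training recipe.

```python
# Minimal Unsloth + TRL SFT sketch. All hyperparameters and the dataset
# file below are illustrative assumptions, not the recipe actually used.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model with 4-bit quantization (QLoRA-style)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=512,
    dtype=None,          # auto-detect
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a pre-formatted "text" column
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=10,
    ),
)
trainer.train()
```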
Required Libraries and Their Versions
- torch>=2.3.0
- transformers>=4.40.1
- tokenizers>=0.19.1
- accelerate>=0.29.3
- flash-attn>=2.5.8
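An installation along these lines should satisfy the pins above. Note that the Usage example below additionally imports unsloth (not listed above), and flash-attn generally needs torch installed first and often `--no-build-isolation`:

```bash
pip install "torch>=2.3.0" "transformers>=4.40.1" "tokenizers>=0.19.1" "accelerate>=0.29.3" unsloth
pip install "flash-attn>=2.5.8" --no-build-isolation
```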
Usage
```python
from unsloth import FastLanguageModel

model_name = "llm-jp/llm-jp-3-13b"  # model name (use "kattyan/llm-jp-3-13b-finetune3" to load this fine-tuned checkpoint)
max_seq_length = 512  # maximum sequence length
dtype = None          # data type (None = auto-detect)
load_in_4bit = True   # use 4-bit quantization

# Load the model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    token="YOUR_HUGGING_FACE_TOKEN",  # your Hugging Face access token
)

# Prepare the model for inference
FastLanguageModel.for_inference(model)

# Build the prompt. The instruction/answer template ("### 指示" / "### 回答",
# i.e. "Instruction" / "Answer") is assumed from the "### 回答" marker used
# to parse the output below. The example instruction asks "What is an LLM?".
prompt = """### 指示
LLMとはなんですか?
### 回答
"""

# Encode the input with the tokenizer
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Generate with the model
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    use_cache=True,
    do_sample=False,
    repetition_penalty=1.2,
)

# Decode the output and keep only the text after the answer marker
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split("\n### 回答")[-1]
print(prediction)
```
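Splitting on the answer marker depends on the prompt template; a template-independent alternative is to decode only the newly generated tokens:

```python
# Decode only the tokens produced after the prompt (independent of the template)
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
prediction = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(prediction)
```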