Uploaded model

  • Developed by: ken2147
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
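
For reference, this is roughly what an Unsloth + TRL fine-tuning run looks like. It is a minimal sketch, not the exact training script: the LoRA rank, learning rate, and other hyperparameters below are assumptions, and the SFTTrainer keyword arguments match older TRL releases (newer ones move dataset_text_field and max_seq_length into SFTConfig).

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit and wrap it with LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=512,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumed LoRA rank
    lora_alpha=16,  # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Supervised fine-tuning with TRL; `dataset` holds formatted instruction
# examples (see the formatting sketch under "Instruction tuning" below).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed batch settings
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()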

Instruction tuning

The model has been fine-tuned on the following datasets (a short data-formatting sketch follows the references below).

Language   Dataset                                  Description
Japanese   ichikara-instruction-003-001-1.json      A manually constructed instruction dataset
Japanese   ichikara-instruction-003-001-2.1.json    A manually constructed instruction dataset
Japanese   ichikara-instruction-003-001-2.2.json    A manually constructed instruction dataset
Japanese   ichikara-instruction-003-001-5.1.json    A manually constructed instruction dataset
Japanese   ichikara-instruction-003-001-5.2.json    A manually constructed instruction dataset
Japanese   ichikara-instruction-003-002-1.json      A manually constructed instruction dataset
Japanese   ELYZA-tasks-100                          An evaluation dataset for Japanese instruction-tuned models

ichikara-instruction dataset: Satoshi Sekine, Maya Ando, Michiko Goto, Kumi Suzuki, Daisuke Kawahara, Naoya Inoue, and Kentaro Inui. ichikara-instruction: Constructing a Japanese Instruction Dataset for LLMs. The 30th Annual Meeting of the Association for Natural Language Processing (2024).

ELYZA-tasks-100 (a Japanese instruction-model evaluation dataset): Akira Sasaki, Masato Hirakawa, Shintaro Horie, and Tomoaki Nakamura (2023).
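
The card does not show the preprocessing step, so here is a minimal sketch of converting one ichikara-instruction file into the "### 指示 / ### 回答" template that the inference code below expects. The field names "text" and "output" and the "</s>" EOS token are assumptions about the data schema and tokenizer, not confirmed details.

import json
from datasets import Dataset

# Hypothetical formatting step: each record is assumed to carry the
# instruction in "text" and the reference answer in "output".
def format_record(record, eos_token):
    return {"text": f"### 指示\n{record['text']}\n### 回答\n{record['output']}{eos_token}"}

with open("ichikara-instruction-003-001-1.json", encoding="utf-8") as f:
    records = json.load(f)

# "</s>" is an assumed EOS token; use tokenizer.eos_token in practice.
dataset = Dataset.from_list([format_record(r, "</s>") for r in records])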

Usage

The following code runs the model on the evaluation tasks and writes out its answers.

# Install the required libraries (the uninstall/reinstall swaps the PyPI
# build of unsloth for the latest source build, per the Unsloth Colab recipe)
!pip install unsloth
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install -U torch
!pip install -U peft

# Import the required libraries
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
import json
from tqdm import tqdm
import re

# Base model and the fine-tuned LoRA adapter.
model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "ken2147/llm-jp-3-13b-it_lora_v4"

# Set your Hugging Face token (required to download the adapter).
HF_TOKEN = "Your_HF_Token"

# Load the base model with Unsloth's FastLanguageModel.
dtype = None         # None lets Unsloth choose the dtype automatically
load_in_4bit = True  # 4-bit quantization keeps the 13B model within GPU memory

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)

# Apply the trained LoRA adapter to the base model.
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)

# Load the evaluation tasks. The file may spread one JSON object over
# several lines, so lines are accumulated until the closing brace.
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r", encoding="utf-8") as f:  # adjust the path to your environment
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Run inference on each task.
FastLanguageModel.for_inference(model)  # switch Unsloth to fast inference mode
results = []
for dt in tqdm(datasets):
    task_input = dt["input"]

    prompt = f"""### 指示\n{task_input}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    # Greedy decoding with a repetition penalty; keep only the text after
    # the final "### 回答" marker as the model's answer.
    outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]

    results.append({"task_id": dt["task_id"], "input": task_input, "output": prediction})

# Save the results as JSONL; any file name works.
json_file_id = re.sub(".*/", "", adapter_id)  # strip the namespace from the adapter id
with open(f"/content/{json_file_id}_output.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
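
If you prefer a standalone checkpoint over loading the adapter at runtime, the adapter can be folded into the base weights with plain transformers + peft. This is a sketch under the assumption that you have enough memory to load the base model in bf16 (merging into the 4-bit load used above is lossy); the output path is arbitrary.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in bf16, apply the adapter, fold it into the
# weights, and save the merged model alongside its tokenizer.
base = AutoModelForCausalLM.from_pretrained(
    "llm-jp/llm-jp-3-13b", torch_dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "ken2147/llm-jp-3-13b-it_lora_v4").merge_and_unload()
merged.save_pretrained("llm-jp-3-13b-it_merged")
AutoTokenizer.from_pretrained("llm-jp/llm-jp-3-13b").save_pretrained("llm-jp-3-13b-it_merged")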