---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
language:
- ja
- en
datasets:
- elyza/ELYZA-tasks-100
---
# Uploaded model
- **Developed by:** 84basi
- **Finetuned from model:** llm-jp/llm-jp-3-13b

This Llama-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
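The fine-tuning script itself is not included in this card. As a rough, non-authoritative sketch of what a LoRA fine-tuning run with Unsloth and TRL's `SFTTrainer` might look like, the snippet below assumes the base model, the ELYZA-tasks-100 dataset listed in the metadata, and the same instruction/answer prompt template used for inference below; the hyperparameters and formatting function are illustrative assumptions, not the actual training settings, and exact `SFTTrainer` arguments vary by TRL version.
```python
# Minimal sketch only -- dataset choice, hyperparameters, and prompt formatting
# are assumptions, not the actual settings used to train this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# ELYZA-tasks-100 ships a single "test" split with "input" / "output" columns.
dataset = load_dataset("elyza/ELYZA-tasks-100", split="test")

def format_example(example):
    # Same instruction/answer template as the inference prompt below.
    return {"text": f"### 指示\n{example['input']}\n### 回答\n{example['output']}"}

dataset = dataset.map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```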
## Usage
### Prerequisites
- Set `token` to your own Hugging Face access token.
- Select an L4 GPU runtime.
- Upload `elyza-tasks-100-TV_0.jsonl` to Google Colab before running.
- When the run completes successfully, `/content/llm-jp-3-13b-it-7.0_output.jsonl` is written.
```python
token = "" # token
model_id = "llm-jp-3-13b-it-7.0" # llm-jp-3-13b-it-4.17, gemma-2-27b-it-4.19
model_name = "84basi/" + model_id
answer_json_file = "./elyza-tasks-100-TV_0.jsonl"
output_json_file = "./" + model_id + "_output.jsonl"
!pip install unsloth -q
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git" -q
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
import json
max_seq_length = 2048
dtype = None          # auto-detect dtype (float16 or bfloat16 depending on the GPU)
load_in_4bit = True   # 4-bit quantization so the 13B model fits on an L4 GPU
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = model_name,
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
token = token,
trust_remote_code=True,
)
# Switch the model to inference mode
FastLanguageModel.for_inference(model)

# Load the dataset.
# In the omnicampus development environment, drag and drop the task jsonl onto the left pane before running.
datasets = []
with open(answer_json_file, "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""
# Inference
from tqdm import tqdm

results = []
for dt in tqdm(datasets):
    input = dt["input"]
    prompt = f"""### 指示\n{input}\n### 回答\n"""
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]
    results.append({"task_id": dt["task_id"], "input": input, "output": prediction})

# Write one JSON object per line (JSONL)
with open(output_json_file, 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
```
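After the run, the output file can be sanity-checked in the same Colab session. This quick verification snippet is not part of the original instructions; the path simply matches the `output_json_file` written above.
```python
import json

# Quick sanity check of the generated file (one JSON object per line).
with open("/content/llm-jp-3-13b-it-7.0_output.jsonl", "r", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f if line.strip()]

print(f"{len(rows)} predictions written")
print(rows[0]["task_id"], rows[0]["output"][:200])
```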