---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** karaage0703
- **License:** apache-2.0
- **Finetuned from model:** llm-jp/llm-jp-3-13b

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Usage

Execute the following code in Google Colab.

```python
# Install the required libraries
!pip install unsloth
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install -U torch
!pip install -U peft

# Import the required libraries
from unsloth import FastLanguageModel
from peft import PeftModel
import torch
import json
from tqdm import tqdm
import re

# Base model and the trained LoRA adapter (specified by their Hugging Face IDs).
model_id = "llm-jp/llm-jp-3-13b"
adapter_id = "karaage0703/llm-jp-3-13b-it-20241205_018"

from google.colab import userdata
HF_TOKEN = userdata.get('HF_TOKEN')

# Load the base model with unsloth's FastLanguageModel.
dtype = None  # None selects the dtype automatically
load_in_4bit = True  # True because we are handling a 13B model

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)

# Apply the LoRA adapter to the base model.
model = PeftModel.from_pretrained(model, adapter_id, token=HF_TOKEN)

# Load the task data.
# Upload the data file beforehand.
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Run inference on the tasks with the model.
# Switch the model into inference mode.
FastLanguageModel.for_inference(model)

results = []
for dt in tqdm(datasets):
    input = dt["input"]

    # Prompt format: "### 指示" = "Instruction", "簡潔に回答してください" = "Answer concisely", "### 回答" = "Answer"
    prompt = f"""### 指示\n{input} 簡潔に回答してください \n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True, do_sample=False, repetition_penalty=1.2)
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split('\n### 回答')[-1]
    prediction = re.sub(r"[*#]", "", prediction)  # strip leftover markdown markers

    results.append({"task_id": dt["task_id"], "input": input, "output": prediction})

# Save the results as JSONL.
json_file_id = re.sub(".*/", "", adapter_id)
with open(f"/content/{json_file_id}_output.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
```

## Datasets

### Instruction tuning

The model has been fine-tuned on the following datasets.

| Language | Dataset | Description |
|:---|:---|:---|
| Japanese | Screened data based on Tengentoppa-sft-v1.0 | A manually constructed instruction dataset based on [Tengentoppa-sft-v1.0](https://huggingface.co/datasets/DeL-TaiseiOzaki/Tengentoppa-sft-v1.0) |
| | Synthesized data from Elyza-tasks-100 | Data synthesized from [Elyza-tasks-100](https://huggingface.co/datasets/elyza/ELYZA-tasks-100) using an LLM (Tanuki-8x8B) |
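## Notes on the task file

The inference script under Usage expects `./elyza-tasks-100-TV_0.jsonl` to contain one JSON object per task, with at least a `task_id` and an `input` field; those are the only keys the loader and the results list rely on. Below is a minimal sketch of a compatible file for a dry run, using a made-up placeholder task rather than an actual benchmark entry:

```python
# Sketch: write a tiny task file in the format the loader above expects.
# The task below is a hypothetical placeholder, not part of the real benchmark;
# replace this file with the actual elyza-tasks-100-TV_0.jsonl for evaluation.
import json

sample_tasks = [
    {"task_id": 0, "input": "日本で一番高い山は何ですか?"},  # "What is the highest mountain in Japan?"
]

with open("./elyza-tasks-100-TV_0.jsonl", "w", encoding="utf-8") as f:
    for task in sample_tasks:
        json.dump(task, f, ensure_ascii=False)
        f.write("\n")
```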
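The results are written one JSON object per line under `/content/`, with the filename derived from `adapter_id`. A small sketch for reading the file back and spot-checking the first few predictions (the filename below simply mirrors the pattern used in the script above):

```python
# Sketch: read back the generated JSONL and print a short preview of each prediction.
import json

with open("/content/llm-jp-3-13b-it-20241205_018_output.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        result = json.loads(line)
        print(result["task_id"], result["output"][:80])
        if i >= 2:  # preview only the first three records
            break
```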