Uploaded model

  • Developed by: qcube
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This model, based on llm-jp/llm-jp-3-13b, was fine-tuned 2x faster with Unsloth and Hugging Face's TRL library.
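For reference, a minimal sketch of the kind of Unsloth + TRL supervised fine-tuning this refers to is shown below. The dataset file, LoRA settings, and training arguments are illustrative assumptions rather than the exact recipe used for this model, and the precise SFTTrainer arguments depend on the TRL version.

# Illustrative sketch (assumed settings): SFT of llm-jp/llm-jp-3-13b with Unsloth + TRL.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="llm-jp/llm-jp-3-13b",
    max_seq_length=2048,
    load_in_4bit=True,  # assumed: QLoRA-style 4-bit fine-tuning
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training file: one JSON object per line with a "text" field
# holding the full "### 指示 ... ### 回答 ..." prompt-plus-answer string.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()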

Sample use

The following is the code used to generate answers for elyza-tasks-100-TV_0.jsonl.

# Load ELYZA-tasks-100-TV. Upload the file in advance.
# Load the dataset.
# In the omnicampus development environment, drag and drop the task jsonl into the left-hand panel before running.
import json

datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""


# Run the tasks with the fine-tuned model
from tqdm import tqdm

# Switch the model to inference mode
FastLanguageModel.for_inference(model)

results = []
for dt in tqdm(datasets):
    input = dt["input"]

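    # Prompt uses the "### 指示" / "### 回答" (instruction / answer) format, presumably the same template used during fine-tuning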
    prompt = f"""### 指示\n{input}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        use_cache=True,
        do_sample=False,          # greedy decoding for reproducible answers
        repetition_penalty=1.2,   # discourage repeated phrases
    )
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split("\n### 回答")[-1]

    results.append({"task_id": dt["task_id"], "input": input, "output": prediction})


# Save the results as JSONL
with open(f"{new_model_id}_output.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write('\n')
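
Each line of the resulting output file holds one record with task_id, input, and output. A quick optional sanity check, reusing the new_model_id defined above, might look like this:

# Optional sanity check: reload the submission file and inspect it.
import json

with open(f"{new_model_id}_output.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

print(len(records))       # should match the number of tasks in elyza-tasks-100-TV
print(records[0].keys())  # dict_keys(['task_id', 'input', 'output'])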