Uploaded model

  • Developed by: hiroshij
  • License: apache-2.0
  • Finetuned from model: llm-jp/llm-jp-3-13b

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.

How to finetune llm-jp/llm-jp-3-13b

```python
import torch
from unsloth import FastLanguageModel

max_seq_length = 2048  # original: 512
dtype = None           # None lets Unsloth auto-detect (bfloat16 on supported GPUs)
load_in_4bit = True    # load the 13B base model in 4-bit to reduce VRAM usage

model_id = "hiroshij/llm-jp-3-13b-finetune-joga-20241202"
new_model_id = "llm-jp-3-13b-finetune-joga-20241202-2"

# Load the base model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
    trust_remote_code=True,
)

# Attach LoRA adapters to the attention and MLP projection layers
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    use_gradient_checkpointing="unsloth",  # Unsloth's memory-efficient checkpointing
    random_state=3407,
    use_rslora=False,
    loftq_config=None,
    max_seq_length=max_seq_length,
)
```
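The card goes straight from the LoRA setup to inference; the training step itself is not shown. A minimal sketch of what it could look like with TRL's `SFTTrainer`, assuming a JSONL instruction dataset with `input`/`output` columns (the file name, hyperparameters, and exact `SFTTrainer` keyword arguments, which vary across TRL versions, are all assumptions rather than details from the card):

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Hypothetical dataset: JSONL records with "input" and "output" fields
dataset = load_dataset("json", data_files="train.jsonl", split="train")

# Format each record into the same 指示/回答 prompt used at inference time
EOS = tokenizer.eos_token
def format_prompt(example):
    return {"text": f"### 指示\n{example['input']}\n### 回答\n{example['output']}{EOS}"}

dataset = dataset.map(format_prompt)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=10,
        output_dir="outputs",
        seed=3407,
    ),
)
trainer.train()
```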


Run inference over the evaluation data (`datasets` is assumed to be a list of records with `task_id` and `input` fields, loaded beforehand):

```python
from tqdm import tqdm

# Switch Unsloth into its faster inference mode
FastLanguageModel.for_inference(model)

results = []
for dt in tqdm(datasets):
    input_text = dt["input"]

    prompt = f"""### 指示\n{input_text}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    # Greedy decoding with a repetition penalty
    outputs = model.generate(
        **inputs,
        max_new_tokens=1024,  # original: 512
        use_cache=True,
        do_sample=False,
        repetition_penalty=1.2,
    )
    # Keep only the text after the 回答 marker
    prediction = tokenizer.decode(outputs[0], skip_special_tokens=True).split("\n### 回答")[-1]

    results.append({"task_id": dt["task_id"], "input": input_text, "output": prediction})
```
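A typical follow-up would be to write the predictions to a JSONL file and upload only the LoRA adapter to the Hub via Unsloth's `push_to_hub_merged` with `save_method="lora"`. The output file name, the `HF_TOKEN` environment variable, and `private=True` below are illustrative, not taken from the card:

```python
import json
import os

# Write predictions as JSON Lines (file name is illustrative)
with open(f"{new_model_id}_output.jsonl", "w", encoding="utf-8") as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)
        f.write("\n")

# Upload only the LoRA adapter to the Hugging Face Hub
HF_TOKEN = os.environ["HF_TOKEN"]  # write-scoped access token, assumed to be set
model.push_to_hub_merged(
    new_model_id,
    tokenizer=tokenizer,
    save_method="lora",
    token=HF_TOKEN,
    private=True,
)
```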
