import torch
from peft import AutoPeftModelForCausalLM

path_to_adapter = "macadeliccc/Samantha-Qwen-2-7B-lora"

# Load the base model together with the LoRA adapter weights
model = AutoPeftModelForCausalLM.from_pretrained(
    # path to the adapter output directory (local path or Hub repo id)
    path_to_adapter,
    device_map="auto",
    trust_remote_code=True
).eval()

# Load the extra (non-LoRA) weights saved alongside the adapter; the .pt file
# must be available at this path. strict=False tolerates missing/unexpected keys.
vpm_resampler_embedtokens_weight = torch.load(f"{path_to_adapter}/vpm_resampler_embedtokens.pt")

msg = model.load_state_dict(vpm_resampler_embedtokens_weight, strict=False)
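
A minimal generation sketch (not part of the original card), assuming the Qwen/Qwen2-7B base tokenizer and that it ships a ChatML chat template; swap in the prompt format used during training if it differs.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B", trust_remote_code=True)

# Build a ChatML-style prompt via the tokenizer's chat template
messages = [{"role": "user", "content": "Hello, who are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize and generate with the adapter-loaded model from above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens
response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)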
Base model: Qwen/Qwen2-7B
