
error loading model

#2 opened by Huegli

Hi Undi95,

Using koboldcpp 1.51, I get the following error message when I try to load the model (q5):

error loading model: create_tensor: tensor 'blk.0.ffn_gate.weight' not found
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "koboldcpp.py", line 2274, in
File "koboldcpp.py", line 2157, in main
File "koboldcpp.py", line 300, in load_model
OSError: exception: access violation reading 0x000000000000005C
[21764] Failed to execute script 'koboldcpp' due to unhandled exception!

I've never seen this error before.

Greetings

You need to update koboldcpp to 1.52. Here is the latest release: https://github.com/LostRuins/koboldcpp/releases/tag/v1.52.1
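
If you want to sanity-check the GGUF file itself outside of koboldcpp, a minimal sketch like the one below can help, assuming you have llama-cpp-python installed and built against a similarly recent llama.cpp; the model path is a placeholder, not the actual filename.

from llama_cpp import Llama

MODEL_PATH = "path/to/your-model.q5.gguf"  # placeholder: point this at the q5 file you downloaded

try:
    # A backend built against a recent llama.cpp (the generation koboldcpp 1.52 ships with)
    # should find all expected tensors and load the file without complaint.
    llm = Llama(model_path=MODEL_PATH, n_ctx=512, verbose=False)
    print("GGUF loaded fine:", MODEL_PATH)
except Exception as err:  # broad catch, purely for illustration
    # An outdated backend fails here with the same kind of "tensor ... not found" error.
    print("GGUF failed to load (backend is probably too old):", err)

If that loads cleanly but koboldcpp still errors out, the problem is almost certainly on the koboldcpp side rather than with the file.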
