How to use this model

#10 opened by sponge14

Hello, I am brand new to this and I do not understand how to load this model into Ollama. For another model I just used the "ollama run llama2-uncensored" command in WSL. How do I know which name to use for this model in the command?

Cognitive Computations org
edited Mar 15, 2024

This is a very old model, but to use Ollama, or anything else built on llama.cpp, you'll need a GGUF version of it.

You might be better off with:

You can search for -GGUF versions of most models to find what you're looking for.
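For example, once you find a -GGUF repo you can download a single quantized file with huggingface-cli. A minimal sketch; the repo and file names below are placeholders, so swap in whichever -GGUF repo and quantization you actually pick:

# Download one quantized GGUF file from a Hugging Face repo.
# Repo and file names here are hypothetical examples - substitute your own.
# Requires the CLI: pip install -U "huggingface_hub[cli]"
huggingface-cli download \
  TheBloke/dolphin-2.2.1-mistral-7B-GGUF \
  dolphin-2.2.1-mistral-7b.Q4_K_M.gguf \
  --local-dir .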

As for this model specifically, this would be the GGUF version:

--- [edit] ---
Ollama already has some of these built into its library:

However, you can build your own:
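To build your own, you point a Modelfile at the GGUF file on disk and run ollama create. A minimal sketch, assuming a ChatML-trained Dolphin GGUF; the file name, model name, and system prompt are placeholders:

# Modelfile - point FROM at your downloaded GGUF (hypothetical file name)
FROM ./dolphin-2.2.1-mistral-7b.Q4_K_M.gguf

# Dolphin models use the ChatML prompt format
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM "You are Dolphin, a helpful AI assistant."

Then register it with Ollama and run it:

# Build a local Ollama model from the Modelfile, then chat with it
ollama create my-dolphin -f Modelfile
ollama run my-dolphin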

Thanks a lot for the suggestions. When I'm looking at the pages for these models you recommend, where would I find this GGUF phrase to install them?

Cognitive Computations org

> Thanks a lot for the suggestions. When I'm looking at the pages for these models you recommend, where would I find this GGUF phrase to install them?

If you want to use Ollama with a model already in its library, just type "ollama run dolphin-mistral". If you want to use a different one, you may want to join their Discord if you have trouble with the documentation.
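For example, from your WSL shell:

# Pull the model from the Ollama library, then start an interactive chat
ollama pull dolphin-mistral
ollama run dolphin-mistral

(ollama run will also pull the model automatically the first time, so the pull step is optional.)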
