How to use this model?

#23
by Kevy - opened

Hello everyone!

Can someone explain how to deploy this model?

Is it possible with llama.cpp? I'd like to use CPU instead of GPU.

Thanks!

It is explained well here:
https://www.youtube.com/watch?v=cCQdzqAHcFk
The llama.cpp CPU install shown there worked for me.

I'm on Linux, not Windows :/
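On Linux you don't need the video's Windows steps; a minimal CPU-only build of llama.cpp usually looks like the sketch below. The model filename is a placeholder (you'd point it at whatever GGUF file you downloaded for this model), and the binary name can differ between llama.cpp versions (older releases use `make` and a `./main` binary instead of CMake and `llama-cli`):

```shell
# Clone and build llama.cpp with CMake (CPU backend is the default)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run inference on CPU; replace model.gguf with your downloaded GGUF file
./build/bin/llama-cli -m model.gguf -p "Hello" -n 128
```

If the model on this page isn't published as a GGUF file, it would need converting first (llama.cpp ships conversion scripts for that); check the repo's Files tab for an existing GGUF quantization before converting yourself.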
