add-quantized-gguf-files (#1), opened by espenhk.
No description provided.
Same as with the Mistral model: this PR adds quantized versions of the model as GGUF files, suitable for running locally on your machine, for instance through Ollama.
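For context, GGUF files like the ones in this PR can typically be loaded into Ollama via a minimal Modelfile. This is just a sketch: the filename `model-q4_k_m.gguf` and the model name `my-model` are placeholders, not the actual artifacts in this PR.

```
# Modelfile (sketch; the GGUF filename below is a placeholder)
FROM ./model-q4_k_m.gguf
```

Then, assuming Ollama is installed:

```shell
# Register the local GGUF file under a name of your choosing
ollama create my-model -f Modelfile
# Start an interactive session with it
ollama run my-model
```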
OK on my end!
espenhk changed pull request status to open.
Bumping this one, still ready to merge if you would like :)
NorLLM-NTNU changed pull request status to merged.