---
license: other
license_name: deepseek
license_link: >-
  https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/blob/main/LICENSE
---

This is a llamafile for [deepseek-coder-33b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct).

The quantized GGUF was downloaded straight from [TheBloke](https://huggingface.co/TheBloke/deepseek-coder-33B-instruct-GGUF) and then zipped into a llamafile using [Mozilla's awesome project](https://github.com/Mozilla-Ocho/llamafile).

It's over 4 GB, so if you want to use it on Windows you'll have to run it from WSL (Windows can't directly execute binaries larger than 4 GB).

WSL note: if you get the error about APE (Actually Portable Executable) and the recommended command

`sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop'`

doesn't work, the file might be named something else. I had success with

`sudo sh -c 'echo -1 > /proc/sys/fs/binfmt_misc/WSLInterop-late'`

If that fails too, navigate to `/proc/sys/fs/binfmt_misc`, see which file there is named something like `WSLInterop`, and echo a `-1` into it by substituting that name into the recommended command.
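
The whole find-and-disable step can be sketched as a short loop (run inside WSL; using a `WSLInterop*` glob is my assumption about how the file name varies between builds):

```shell
# Disable whichever WSLInterop registration(s) binfmt_misc has,
# so APE binaries like llamafiles can run. Requires root in WSL.
for f in /proc/sys/fs/binfmt_misc/WSLInterop*; do
  [ -e "$f" ] || continue          # glob matched nothing; skip
  echo "disabling $f"
  sudo sh -c "echo -1 > '$f'"
done
```

Note that binfmt_misc registrations are recreated when WSL restarts, so you may need to repeat this after a reboot.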
A llamafile is a standalone executable that runs an LLM server locally on a variety of operating systems.

You just run it, open the chat interface in a browser, and interact.

Options can be passed in to expose the API, etc.
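
For example (the filename below is hypothetical, and the server flags come from the embedded llama.cpp server, so check `--help` on your build):

```shell
# Mark the llamafile executable, then run it; it starts a local
# server and opens the chat UI (http://localhost:8080 by default).
chmod +x deepseek-coder-33b-instruct.llamafile
./deepseek-coder-33b-instruct.llamafile

# Pass options to expose the HTTP API to other machines and skip
# opening the browser window:
./deepseek-coder-33b-instruct.llamafile --host 0.0.0.0 --port 8080 --nobrowser
```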