<!DOCTYPE html>
<html>
<head>
<title>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</title>
</head>
<body>
<h1>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</h1>
<p>
Using the
<a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
package, this Hugging Face Docker Space hosts the GGUF model behind an
OpenAI-compatible API. The Space includes full API documentation to make
integration straightforward.
</p>
<ul>
<li>
The API endpoint:
<a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1"
>https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1</a
>
</li>
<li>
The API doc:
<a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs"
>https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs</a
>
</li>
</ul>
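<p>
As a quick sketch of how the OpenAI-compatible endpoint can be called
(the model identifier below is an assumption; the exact id should be
listed by the Space's <code>/v1/models</code> route):
</p>

```python
# Minimal sketch (not an official client): call the OpenAI-compatible
# endpoint using only the Python standard library. The model id below
# is an assumption; check the Space's /v1/models route for the real one.
import json
import urllib.request

BASE_URL = "https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for the /chat/completions route."""
    payload = {
        "model": "mistral-7b-instruct-v0.1",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To send the request (performs a live HTTP call):
#   with urllib.request.urlopen(build_request("Say hello.")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

<p>
Any OpenAI client library pointed at the base URL above should work the
same way.
</p>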
<p>
Go ahead and try out the API endpoint yourself with the
<a
href="https://huggingface.co/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct.ipynb"
target="_blank"
>
mistral-7b-instruct.ipynb</a
>
Jupyter notebook.
</p>
<p>
If you find this resource valuable, please consider starring the Space.
Your support helps the application for a community GPU grant, which would
improve the capabilities and accessibility of this Space.
</p>
</body>
</html>