---
language:
- en
license: apache-2.0
tags:
- llama-cpp
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
widget:
- example_title: Fibonacci (Python)
  messages:
  - role: system
    content: You are a chatbot who can help code!
  - role: user
    content: Write me a function to calculate the first 10 digits of the fibonacci
      sequence in Python and print it out to the CLI.
---
# reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF

This model was converted to GGUF format from [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using [llama.cpp](https://github.com/ggerganov/llama.cpp).
Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Run inference directly with the llama.cpp CLI:

```bash
llama-cli --hf-repo reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q8_0.gguf -p "The meaning to life and the universe is "
```

Or start the llama.cpp HTTP server (here with a 2048-token context):

```bash
llama-server --hf-repo reach-vb/TinyLlama-1.1B-Chat-v1.0-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q8_0.gguf -c 2048
```