Conversion and Quantization of TinyLlama-1.1B-intermediate-step-1431k-3T using llama.cpp
Guide to replicate: Convert and Quantize the model to GGUF
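For context, here is a minimal sketch of what the conversion and quantization steps with llama.cpp typically look like. The build steps, script names, file names, and quantization type (Q4_K_M here) are assumptions that vary between llama.cpp versions, so follow the guide above for the exact steps.

```bash
# Sketch only: script/binary names depend on your llama.cpp version
# (older releases use convert.py and ./quantize instead).

# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the Hugging Face checkpoint to a GGUF file (FP16)
python convert_hf_to_gguf.py /path/to/TinyLlama-1.1B-intermediate-step-1431k-3T \
    --outtype f16 --outfile tinyllama-1.1b-f16.gguf

# Quantize the FP16 GGUF (Q4_K_M chosen here as an example)
./llama-quantize tinyllama-1.1b-f16.gguf tinyllama-1.1b-q4_k_m.gguf Q4_K_M
```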
Description
This model exists for my own experiments with Ollama. The models in the Ollama library usually already have chat fine-tuning applied, so it would be hard to tell whether my LoRA fine-tuning adapter works unless I start from the base/pretrained model version.
Ollama
- Model page: https://ollama.com/pacozaa/tinyllama
- Run it with: `ollama run pacozaa/tinyllama`
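If you want to load your own quantized GGUF into Ollama instead of pulling the published model, a minimal sketch looks like the following; the model name and the `FROM` path are assumptions, so point them at your own output file.

```bash
# Minimal Modelfile pointing at the quantized GGUF produced earlier
# (the file name is an assumption; adjust to your output path)
cat > Modelfile <<'EOF'
FROM ./tinyllama-1.1b-q4_k_m.gguf
EOF

# Create and run the model locally with Ollama
ollama create my-tinyllama -f Modelfile
ollama run my-tinyllama
```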