Mistral-Depth-UP-Scaled-9B-AlpacaInstruct-gguf
- [q8_0, F16] quantized versions of Mistral-Depth-UP-Scaled-9B
GGUF
GGUF is a binary file format for storing models for inference with GGML and GGML-based executors. It is designed for fast loading and saving of models and for ease of inspection. Models are typically developed in PyTorch or another framework and then converted to GGUF for use with GGML.
It is the successor to the GGML, GGMF, and GGJT file formats and is designed to be unambiguous: a GGUF file contains all the information needed to load a model. It is also designed to be extensible, so new metadata can be added to models without breaking compatibility with existing readers.
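To illustrate the "self-describing" layout, here is a minimal sketch in Python that parses the fixed-size GGUF file header, assuming the documented little-endian layout (4-byte magic `GGUF`, uint32 format version, uint64 tensor count, uint64 metadata key-value count). It runs against a small synthetic header rather than a real model file:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header at the start of a file.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 version,
    uint64 tensor count, uint64 metadata key-value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Synthetic header for demonstration (hypothetical values:
# version 3, 2 tensors, 5 metadata key-value pairs).
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(sample))
# → {'version': 3, 'tensor_count': 2, 'metadata_kv_count': 5}
```

In a real file, the metadata key-value pairs follow this header, which is what lets a loader discover everything about the model without out-of-band information.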