Mistral-Depth-UP-Scaled-9B-AlpacaInstruct-gguf

GGUF

GGUF is a file format for storing models for inference with GGML and executors based on GGML. GGUF is a binary format that is designed for fast loading and saving of models, and for ease of reading. Models are traditionally developed using PyTorch or another framework, and then converted to GGUF for use in GGML.

It is a successor file format to GGML, GGMF and GGJT, and is designed to be unambiguous by containing all the information needed to load a model. It is also designed to be extensible, so that new information can be added to models without breaking compatibility.
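The "unambiguous, self-describing" property comes from the GGUF header: every file starts with the magic bytes `GGUF`, then a little-endian version number, a tensor count, and a metadata key-value count, per the public GGUF specification in the ggml repository. A minimal sketch of reading that header (the example bytes here are synthetic, not taken from this model's file):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata key-value count (little-endian)."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version,
            "tensor_count": n_tensors,
            "metadata_kv_count": n_kv}

# Synthetic header for illustration only (version 3, 291 tensors, 24 KV pairs):
header = struct.pack("<4sIQQ", b"GGUF", 3, 291, 24)
print(read_gguf_header(header))
# → {'version': 3, 'tensor_count': 291, 'metadata_kv_count': 24}
```

The metadata key-value pairs that follow the header are what let a loader reconstruct the model (architecture, tokenizer, hyperparameters) without any side files.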

Model size: 8.99B params
Architecture: llama
Quantizations: 8-bit, 16-bit
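The quantization levels above largely determine file size: the weight data alone takes roughly params × bits / 8 bytes. A back-of-the-envelope estimate for this 8.99B-parameter model (ignoring GGUF metadata and any mixed-precision tensors, so actual files will differ slightly):

```python
def approx_weight_bytes(params: float, bits: int) -> float:
    """Approximate size of the raw weight data: params * bits / 8 bytes."""
    return params * bits / 8

params = 8.99e9  # 8.99B parameters
for bits in (8, 16):
    gb = approx_weight_bytes(params, bits) / 1e9
    print(f"{bits}-bit: ~{gb:.1f} GB")
# → 8-bit: ~9.0 GB
# → 16-bit: ~18.0 GB
```

This is why the 8-bit file is attractive for machines with limited RAM or VRAM, at the cost of some precision relative to the 16-bit variant.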

Dataset used to train ayoubkirouane/Mistral-Depth-UP-Scaled-9B-AlpacaInstruct-gguf