The emissions-extraction-lora adapter merged into mistralai/Mistral-7B-Instruct-v0.2, converted to GGUF format and quantized. Can be used with llama.cpp.
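A minimal sketch of loading the quantized model from Python via the llama-cpp-python bindings (one way to use llama.cpp programmatically). The GGUF filename, context size, and the extraction prompt below are illustrative assumptions, not taken from this repository; substitute the actual quantized file and the prompt format the adapter was trained on.

```python
# Minimal sketch using llama-cpp-python; filename and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="emissions-extraction-lora-merged.Q5_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window; adjust to the length of the report text
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

# Mistral-Instruct models use the [INST] ... [/INST] chat template, which
# create_chat_completion applies from the GGUF chat-template metadata.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Extract the scope 1, scope 2 and scope 3 emissions "
                       "from the following sustainability report:\n\n<report text>",
        }
    ],
    max_tokens=512,
    temperature=0.0,   # deterministic output for extraction-style tasks
)

print(response["choices"][0]["message"]["content"])
```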

Model size: 7.24B params
Architecture: llama
Quantization: 5-bit
