Update README.md
README.md
@@ -8,6 +8,8 @@ license: apache-2.0
<p><h1> speechless-zephyr-code-functionary-7b </h1></p>

[4,5,8-bit GGUF models for CPU+GPU inference](https://huggingface.co/uukuguy/speechless-zephyr-code-functionary-7b/tree/main/GGUF)
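As a quick way to try the quantized files, here is a minimal sketch using llama-cpp-python; the GGUF filename (a Q5_K_M quant here) and the generation settings are assumptions, so substitute whichever file you download from the folder linked above.

```python
# Minimal sketch: run a GGUF quant of this model with llama-cpp-python.
# The filename below is an assumption -- use whichever quant you downloaded
# from the GGUF folder linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="speechless-zephyr-code-functionary-7b.Q5_K_M.gguf",  # assumed filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU; set 0 for CPU-only inference
)

out = llm(
    "Write a Python function that checks whether a string is a palindrome.\n",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```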
This model is one of the moloras (Mixture-of-Multi-LoRAs) experiments.

LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. Using multi-LoRAs, these modules can be combined statically or dynamically to form a versatile new model.
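As a rough illustration of the static-combination idea (not the exact recipe behind this model), the sketch below uses the `peft` library to attach two extracted LoRA adapters to the Mistral-7B-v0.1 base and merge them with fixed weights; the adapter paths, names, and weights are hypothetical placeholders.

```python
# Rough sketch of statically combining extracted LoRA modules with peft.
# The adapter paths, names, and weights are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Load the first extracted LoRA, then add a second one under its own name.
model = PeftModel.from_pretrained(base, "path/to/code-lora", adapter_name="code")
model.load_adapter("path/to/chat-lora", adapter_name="chat")

# Statically merge the two adapters into a new adapter with fixed weights.
model.add_weighted_adapter(
    adapters=["code", "chat"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")
```

Dynamic combination would instead keep the adapters separate and switch between them at inference time, e.g. by calling `set_adapter` per request.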