readme: update GGUF quant links
README.md
CHANGED
@@ -99,7 +99,7 @@ Given the size of this model (218B parameters), it requires substantial computat
 
 ## Notes
 
-This was just a fun testing model, merged with the `merge.py` script in the base of the repo. Find GGUFs at [
+This was just a fun testing model, merged with the `merge.py` script in the base of the repo. Find GGUFs at [mradermacher/Mistral-Large-218B-Instruct-GGUF](https://huggingface.co/mradermacher/Mistral-Large-218B-Instruct-GGUF)
 
 Compatible `mergekit` config:
 ```yaml