Update README.md
README.md CHANGED

@@ -7,6 +7,11 @@ library_name: transformers
 license: apache-2.0
 quantized_by: mradermacher
 ---
+
+> [!Important]
+> Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)!
+
+
 ## About
 
 <!-- ### quantize_version: 2 -->
@@ -17,7 +22,8 @@ quantized_by: mradermacher
 weighted/imatrix quants of https://huggingface.co/xwen-team/Xwen-72B-Chat
 
 <!-- provided-files -->
-static quants are available at https://huggingface.co/
+static quants are available at https://huggingface.co/xwen-team/Xwen-72B-Chat-GGUF
+
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -69,8 +75,7 @@ questions you might have and/or if you want some other model quantized.
 
 ## Thanks
 
-
-
-this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
+Big thanks to [@mradermacher](https://huggingface.co/mradermacher) for helping us build this repository of GGUFs for our [Xwen-72B-Chat](https://huggingface.co/xwen-team/Xwen-72B-Chat)!
+
 
 <!-- end -->
|