Commit 35b0e46 by justheuristic (parent: b2d16a0)

Update README.md
README.md
CHANGED
@@ -25,6 +25,8 @@ Results:
| [1x16g16 (1-bit, model link)](https://huggingface.co/ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-1Bit-1x16-hf) | 7.42 | 10.40 | 2.7Gb |


+Phi-3-**medium** is not included in the original [PV-Tuning paper](https://arxiv.org/abs/2405.14852). We have not yet had the bandwidth to evaluate it properly. We hope to eventually run the zero-shot evaluation suite, or you can help us by running it yourself and opening a pull request against the README!
+
In general, we always recommend the 2-bit models for the best accuracy-size trade-off. If tempted to use the 1-bit model, try a smaller model,
e.g. Phi-3-**mini** quantized with AQLM+PV [(quantized model link)](https://huggingface.co/ISTA-DASLab/Phi-3-mini-4k-instruct-AQLM-PV-2Bit-1x16-hf), and compare the results, or check our [AQLM+PV collection](https://huggingface.co/collections/ISTA-DASLab/aqlmpv-66564dff5d84f00a893ba93f) for a more appropriate size.

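For readers who want to try the linked checkpoints directly, below is a minimal sketch of loading the 1-bit Phi-3-medium model with Hugging Face `transformers`. It assumes a recent `transformers` release with AQLM support, the `aqlm` and `accelerate` packages installed, and a CUDA GPU; the prompt is purely illustrative.

```python
# Minimal sketch: load an AQLM+PV quantized checkpoint from the Hugging Face Hub.
# Assumes `pip install aqlm[gpu] transformers accelerate` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-1Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place the ~2.7 GB model on the available GPU
)

# Illustrative prompt only; swap in your own evaluation inputs.
inputs = tokenizer("Explain AQLM quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

To contribute the missing zero-shot numbers, one could point a standard evaluation harness such as `lm-eval` at the same checkpoint; the exact task list is not specified here, so it should be matched against the setup used in the PV-Tuning paper.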