Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ This model was obtained by quantizing the weights of [Meta-Llama-3-70B-Instruct]
This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformer blocks are quantized. Symmetric group-wise quantization is applied, in which a linear scaling per group maps the INT4 and floating-point representations of the quantized weights.

-[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization with
+[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization with a 10% damping factor, a group size of 128, and 512 sequences sampled from [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Deployment
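As background on the scheme described in the hunk above: INT4 weights occupy 4/16 = 25% of the BF16 footprint, hence the roughly 75% reduction. The sketch below is a hypothetical PyTorch illustration, not the repository's actual quantization code; it shows what symmetric group-wise quantization with a linear scale per group looks like.

```python
import torch

def quantize_symmetric_groupwise(w: torch.Tensor, group_size: int = 128):
    """Sketch of symmetric group-wise INT4 quantization.

    Each contiguous group of `group_size` weights shares one linear scale,
    so the INT4 <-> float mapping is a single multiply per group.
    """
    groups = w.reshape(-1, group_size)
    # Symmetric INT4 range is [-8, 7]; scale so the largest |w| maps to 7.
    scales = groups.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q, scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    # The linear scaling per group maps INT4 values back to floating point.
    return q.to(torch.float32) * scales

w = torch.randn(4096)
q, scales = quantize_symmetric_groupwise(w)
err = (w - dequantize(q, scales).reshape(w.shape)).abs().max()
print(f"max round-trip error: {err:.4f}")  # bounded by scale / 2 per group
```

Because the scheme is symmetric, each group stores a single scale and no zero-point, which keeps dequantization to one multiply per group.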
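The quantization recipe itself is not shown in the card. A sketch of how the stated settings (4-bit symmetric weights, group size 128, 10% damping, 512 calibration sequences from Open-Platypus) might be passed to the AutoGPTQ API follows; the dataset field names, prompt formatting, sequence length, and `desc_act=False` are assumptions, not details from the card.

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Settings stated in the card: 4-bit symmetric, group size 128, 10% damping.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    damp_percent=0.1,   # AutoGPTQ expresses the 10% damping factor as 0.1
    sym=True,           # symmetric quantization, as described above
    desc_act=False,     # assumption: the card does not specify activation ordering
)

# 512 calibration sequences sampled from Open-Platypus
# (field names and prompt concatenation are assumptions).
ds = load_dataset("garage-bAInd/Open-Platypus", split="train")
ds = ds.shuffle(seed=0).select(range(512))
examples = [
    tokenizer(sample["instruction"] + "\n" + sample["output"],
              truncation=True, max_length=2048)
    for sample in ds
]

model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)
model.save_quantized("Meta-Llama-3-70B-Instruct-GPTQ")
```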