qwp4w3hyb committed (verified)
Commit 056a9fb · Parent: 1efcf9c

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -19,8 +19,8 @@ base_model: google/gemma-2-9b-it
  - WIP new version should have better metadata as its quantized from scratch with llama.cpp
  - Wide coverage of different gguf quant types from Q_8_0 down to IQ1_S
  - experimental custom quant types
- - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski)
- - `_XL` with `--output-tensor-type bf16 --token-embedding-type bf16` (same size as _L, in theory even higher numerical accuracy)
+ - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
+ - `_XL` with `--output-tensor-type bf16 --token-embedding-type bf16` (same size as _L, in theory even closer to the source model)
  - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3259](https://github.com/ggerganov/llama.cpp/releases/tag/b3259)
  - Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).
  ```
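
For context, a minimal sketch of how an `_L` / `_XL` style variant could be produced with llama.cpp's `llama-quantize` tool using the flags quoted above. The file names, the imatrix path, and the Q4_K_M base type here are assumptions for illustration, not taken from this commit:

```sh
# Hypothetical invocation (paths and base quant type assumed, not from this repo).
# Build llama.cpp at the release noted in the README, then:

# "_L" variant: quantized body, with f16 output and token-embedding tensors
./llama-quantize --imatrix imatrix.dat \
  --output-tensor-type f16 --token-embedding-type f16 \
  gemma-2-9b-it-f32.gguf gemma-2-9b-it-Q4_K_M_L.gguf Q4_K_M

# "_XL" variant: same idea, but bf16 for those two tensors
./llama-quantize --imatrix imatrix.dat \
  --output-tensor-type bf16 --token-embedding-type bf16 \
  gemma-2-9b-it-f32.gguf gemma-2-9b-it-Q4_K_M_XL.gguf Q4_K_M
```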