fedric95 committed 979ff14 (parent: 6215f5a): Update README.md
---
base_model: google/gemma-2-9b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
quantized_by: fedric95
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

## Llamacpp Quantizations of gemma-2-9b

Quantized using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3583">b3583</a>.

Original model: https://huggingface.co/google/gemma-2-9b

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ----------------------------------- |
| [gemma-2-9b.FP32.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b.FP32.gguf) | FP32 | 37.00GB | coming_soon |
| [gemma-2-9b-Q8_0.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q8_0.gguf) | Q8_0 | 9.83GB | coming_soon |
| [gemma-2-9b-Q6_K.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q6_K.gguf) | Q6_K | 7.59GB | coming_soon |
| [gemma-2-9b-Q5_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q5_K_M.gguf) | Q5_K_M | 6.65GB | coming_soon |
| [gemma-2-9b-Q5_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q5_K_S.gguf) | Q5_K_S | 6.48GB | coming_soon |
| [gemma-2-9b-Q4_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q4_K_M.gguf) | Q4_K_M | 5.76GB | coming_soon |
| [gemma-2-9b-Q4_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q4_K_S.gguf) | Q4_K_S | 5.48GB | coming_soon |
| [gemma-2-9b-Q3_K_L.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_L.gguf) | Q3_K_L | 5.13GB | coming_soon |
| [gemma-2-9b-Q3_K_M.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_M.gguf) | Q3_K_M | 4.76GB | coming_soon |
| [gemma-2-9b-Q3_K_S.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q3_K_S.gguf) | Q3_K_S | 4.34GB | coming_soon |
| [gemma-2-9b-Q2_K.gguf](https://huggingface.co/fedric95/gemma-2-9b-GGUF/blob/main/gemma-2-9b-Q2_K.gguf) | Q2_K | 3.81GB | coming_soon |

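Once downloaded, a quant can be tried directly with llama.cpp's interactive CLI. A minimal sketch, assuming the b3583 binaries are built in the current directory and the Q4_K_M file sits next to them (the prompt and token count are just examples):

```shell
# Run the Q4_K_M quant with llama.cpp's CLI (binary name as of release b3583).
# Paths, prompt, and -n (tokens to generate) are example assumptions.
./llama-cli -m ./gemma-2-9b-Q4_K_M.gguf \
  -p "Why is the sky blue?" \
  -n 128
```

Smaller quants in the table trade perplexity for memory; the same command works for any of them by swapping the filename.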
## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```shell
huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

```shell
huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q8_0.gguf/*" --local-dir gemma-2-9b-Q8_0
```

You can either specify a new local-dir (gemma-2-9b-Q8_0) or download them all in place (./).

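A multi-gigabyte download is worth sanity-checking before use: every GGUF file begins with the 4-byte ASCII magic `GGUF`, so a quick header check catches truncated or misnamed files. A small sketch (the filename is an example; substitute whichever quant you fetched):

```shell
# Check the 4-byte GGUF magic at the start of the file.
# FILE is an example assumption; point it at your downloaded quant.
FILE=./gemma-2-9b-Q4_K_M.gguf
if [ "$(head -c 4 "$FILE")" = "GGUF" ]; then
  echo "OK: $FILE has a GGUF header"
else
  echo "WARNING: $FILE does not start with the GGUF magic" >&2
fi
```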

## Reproducibility

The perplexity measurements can be reproduced following this llama.cpp discussion: https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
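As a rough sketch of how a perplexity figure like those in the table above can be computed (assuming the b3583 binaries are built and the wikitext-2-raw-v1 test split has been downloaded separately; paths are example assumptions, and the linked discussion is the authoritative procedure):

```shell
# Compute perplexity over the wikitext-2-raw-v1 test split for one quant.
# Model and dataset paths are example assumptions; the dataset is not
# bundled with llama.cpp and must be fetched separately.
./llama-perplexity -m ./gemma-2-9b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```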