Run them in [LM Studio](https://lmstudio.ai/)
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [gemma-2-2b-jpn-it.f16.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.f16.gguf) | f16 | 5.24GB | false | Full F16 weights. |
| [gemma-2-2b-jpn-it.Q8_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q8_0.gguf) | Q8_0 | 2.78GB | false | Extremely high quality, *recommended*. |
| [gemma-2-2b-jpn-it-imatrix.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0.gguf) | Q4_0 | 1.63GB | false | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | Good quality, *recommended for edge devices with <8GB RAM*. |
| [gemma-2-2b-jpn-it.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0.gguf) | Q4_0 | 1.63GB | false | Poor quality, *not recommended*. |
| [gemma-2-2b-jpn-it.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | false | Poor quality, *not recommended*. |
| [gemma-2-2b-jpn-it.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | false | Poor quality, *not recommended*. |
| [gemma-2-2b-jpn-it.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | false | Poor quality, *not recommended*. |
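If you would rather fetch a single file programmatically than click through the browser, here is a minimal sketch using the `huggingface_hub` Python package (not part of this README's own instructions; the repo id and filename are taken from the table above):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file (not the whole repo) into the local
# Hugging Face cache; the function returns the local file path.
path = hf_hub_download(
    repo_id="ymcki/gemma-2-2b-jpn-it-GGUF",
    filename="gemma-2-2b-jpn-it.Q8_0.gguf",
)
print(path)
```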
## How to check i8mm and sve support for ARM devices
On Linux and Android, you can check whether the CPU reports the `i8mm` and `sve` feature flags in `/proc/cpuinfo`. There are also Android apps that can display `/proc/cpuinfo`.
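As a rough illustration, a minimal Python sketch of that check (assuming a Linux or Android environment; the helper name is just for illustration, and Apple devices do not expose `/proc/cpuinfo`):

```python
def arm_features(path="/proc/cpuinfo"):
    """Collect the CPU feature flags listed in /proc/cpuinfo."""
    flags = set()
    with open(path) as f:
        for line in f:
            # On ARM, flags appear on lines like "Features : fp asimd i8mm sve"
            if line.lower().startswith("features"):
                flags.update(line.split(":", 1)[1].split())
    return flags

features = arm_features()
print("i8mm:", "i8mm" in features)
print("sve:", "sve" in features)
```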
## Which Q4_0 model to use for ARM devices
| Brand | Series | Model | i8mm | sve | Quant Type |
| ----- | ------ | ----- | ---- | --- | ---------- |
| Qualcomm | Snapdragon | >= 7 Gen 1 | Yes | Yes | Q4_0_8_8 |
| Qualcomm | Snapdragon | others | No | No | Q4_0_4_4 |
| Apple | M | M1 | No | No | Q4_0_4_4 |
| Apple | M | M2/M3/M4 | Yes | No | Q4_0_4_8 |
| Apple | A | A4 to A14 | No | No | Q4_0_4_4 |
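The table reduces to the two flags, so the choice can be automated. A hypothetical helper (illustrative only, not from this repo) that maps a feature set, such as the one returned by `arm_features()` above, to a quant type:

```python
def pick_q4_0_variant(features):
    # Encodes the table above: sve implies Q4_0_8_8, i8mm alone
    # implies Q4_0_4_8, and neither falls back to Q4_0_4_4.
    # (Apple Silicon has no /proc/cpuinfo, so use the table directly there.)
    if "sve" in features:
        return "Q4_0_8_8"
    if "i8mm" in features:
        return "Q4_0_4_8"
    return "Q4_0_4_4"

print(pick_q4_0_variant(arm_features()))
```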