apepkuss79 committed
Commit 71da97c (1 parent: 6eb522f)

Update README.md

Files changed (1):

1. README.md (+17 −17)
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
-base_model: meta-llama/Llama-3.2-1B-Instruct
+base_model: meta-llama/Llama-3.2-3B-Instruct
 license: llama3.2
 model_creator: meta
-model_name: Llama-3.2-1B-Instruct
+model_name: Llama-3.2-3B-Instruct
 quantized_by: Second State Inc.
 language:
 - en
@@ -62,7 +62,7 @@ tags:
 - Run as LlamaEdge service
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
     llama-api-server.wasm \
     --prompt-template llama-3-chat \
     --ctx-size 128000 \
@@ -72,7 +72,7 @@ tags:
 - Run as LlamaEdge command app
 
   ```bash
-  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-1B-Instruct-Q5_K_M.gguf \
+  wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-3.2-3B-Instruct-Q5_K_M.gguf \
     llama-chat.wasm \
     --prompt-template llama-3-chat \
     --ctx-size 128000
@@ -82,18 +82,18 @@ tags:
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
-| [Llama-3.2-1B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q2_K.gguf) | Q2_K | 2 | 581 MB| smallest, significant quality loss - not recommended for most purposes |
-| [Llama-3.2-1B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 733 MB| small, substantial quality loss |
-| [Llama-3.2-1B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 691 MB| very small, high quality loss |
-| [Llama-3.2-1B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 642 MB| very small, high quality loss |
-| [Llama-3.2-1B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 771 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Llama-3.2-1B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 808 MB| medium, balanced quality - recommended |
-| [Llama-3.2-1B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 776 MB| small, greater quality loss |
-| [Llama-3.2-1B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 893 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Llama-3.2-1B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 912 MB| large, very low quality loss - recommended |
-| [Llama-3.2-1B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 893 MB| large, low quality loss - recommended |
-| [Llama-3.2-1B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.02 GB| very large, extremely low quality loss |
-| [Llama-3.2-1B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.32 GB| very large, extremely low quality loss - not recommended |
-| [Llama-3.2-1B-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3.2-1B-Instruct-GGUF/blob/main/Llama-3.2-1B-Instruct-f16.gguf) | f16 | 16 | 2.48 GB| |
+| [Llama-3.2-3B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q2_K.gguf) | Q2_K | 2 | 581 MB| smallest, significant quality loss - not recommended for most purposes |
+| [Llama-3.2-3B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 733 MB| small, substantial quality loss |
+| [Llama-3.2-3B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 691 MB| very small, high quality loss |
+| [Llama-3.2-3B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 642 MB| very small, high quality loss |
+| [Llama-3.2-3B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 771 MB| legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Llama-3.2-3B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 808 MB| medium, balanced quality - recommended |
+| [Llama-3.2-3B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 776 MB| small, greater quality loss |
+| [Llama-3.2-3B-Instruct-Q5_0.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q5_0.gguf) | Q5_0 | 5 | 893 MB| legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Llama-3.2-3B-Instruct-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 912 MB| large, very low quality loss - recommended |
+| [Llama-3.2-3B-Instruct-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 893 MB| large, low quality loss - recommended |
+| [Llama-3.2-3B-Instruct-Q6_K.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q6_K.gguf) | Q6_K | 6 | 1.02 GB| very large, extremely low quality loss |
+| [Llama-3.2-3B-Instruct-Q8_0.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q8_0.gguf) | Q8_0 | 8 | 1.32 GB| very large, extremely low quality loss - not recommended |
+| [Llama-3.2-3B-Instruct-f16.gguf](https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-f16.gguf) | f16 | 16 | 2.48 GB| |
 
 *Quantized with llama.cpp b3807*
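
For anyone applying this change locally: the renamed files live in the `second-state/Llama-3.2-3B-Instruct-GGUF` repository, and Hugging Face serves raw files under the `resolve/main` path. A minimal download sketch for the Q5_K_M quant used by both wasmedge commands in the diff (any other quant from the table works by substituting the filename):

```bash
# Fetch the recommended Q5_K_M quant referenced by the commands above.
# Swap the filename for any other quant listed in the table.
curl -LO https://huggingface.co/second-state/Llama-3.2-3B-Instruct-GGUF/resolve/main/Llama-3.2-3B-Instruct-Q5_K_M.gguf
```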
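
Once the `llama-api-server.wasm` instance from the first command is running, it serves an OpenAI-compatible chat API. A minimal request sketch, assuming the LlamaEdge default address of `localhost:8080` (adjust if you start the server with a different socket address):

```bash
# Assumes the server is listening on the LlamaEdge default of localhost:8080.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ],
        "model": "Llama-3.2-3B-Instruct"
      }'
```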