morriszms committed
Commit abf1548
1 Parent(s): 5aac472

Update README.md

Files changed (1):
  1. README.md (+20, -12)
README.md CHANGED
@@ -44,8 +44,16 @@ This repo contains GGUF format model files for [TheBloke/Llama-2-7B-Chat-fp16](h
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+Run them on the TensorBlock client using your local machine ↗
+</a>
+</div>
+
 ## Prompt template
 
+
 ```
 
 ```
@@ -54,18 +62,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Llama-2-7B-Chat-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q2_K.gguf) | Q2_K | 2.359 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Llama-2-7B-Chat-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q3_K_S.gguf) | Q3_K_S | 2.746 GB | very small, high quality loss |
-| [Llama-2-7B-Chat-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q3_K_M.gguf) | Q3_K_M | 3.072 GB | very small, high quality loss |
-| [Llama-2-7B-Chat-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q3_K_L.gguf) | Q3_K_L | 3.350 GB | small, substantial quality loss |
-| [Llama-2-7B-Chat-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q4_0.gguf) | Q4_0 | 3.563 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Llama-2-7B-Chat-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q4_K_S.gguf) | Q4_K_S | 3.592 GB | small, greater quality loss |
-| [Llama-2-7B-Chat-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q4_K_M.gguf) | Q4_K_M | 3.801 GB | medium, balanced quality - recommended |
-| [Llama-2-7B-Chat-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q5_0.gguf) | Q5_0 | 4.332 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Llama-2-7B-Chat-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q5_K_S.gguf) | Q5_K_S | 4.332 GB | large, low quality loss - recommended |
-| [Llama-2-7B-Chat-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q5_K_M.gguf) | Q5_K_M | 4.455 GB | large, very low quality loss - recommended |
-| [Llama-2-7B-Chat-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q6_K.gguf) | Q6_K | 5.149 GB | very large, extremely low quality loss |
-| [Llama-2-7B-Chat-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/tree/main/Llama-2-7B-Chat-fp16-Q8_0.gguf) | Q8_0 | 6.669 GB | very large, extremely low quality loss - not recommended |
+| [Llama-2-7B-Chat-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q2_K.gguf) | Q2_K | 2.359 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Llama-2-7B-Chat-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q3_K_S.gguf) | Q3_K_S | 2.746 GB | very small, high quality loss |
+| [Llama-2-7B-Chat-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q3_K_M.gguf) | Q3_K_M | 3.072 GB | very small, high quality loss |
+| [Llama-2-7B-Chat-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q3_K_L.gguf) | Q3_K_L | 3.350 GB | small, substantial quality loss |
+| [Llama-2-7B-Chat-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q4_0.gguf) | Q4_0 | 3.563 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Llama-2-7B-Chat-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q4_K_S.gguf) | Q4_K_S | 3.592 GB | small, greater quality loss |
+| [Llama-2-7B-Chat-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q4_K_M.gguf) | Q4_K_M | 3.801 GB | medium, balanced quality - recommended |
+| [Llama-2-7B-Chat-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q5_0.gguf) | Q5_0 | 4.332 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Llama-2-7B-Chat-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q5_K_S.gguf) | Q5_K_S | 4.332 GB | large, low quality loss - recommended |
+| [Llama-2-7B-Chat-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q5_K_M.gguf) | Q5_K_M | 4.455 GB | large, very low quality loss - recommended |
+| [Llama-2-7B-Chat-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q6_K.gguf) | Q6_K | 5.149 GB | very large, extremely low quality loss |
+| [Llama-2-7B-Chat-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-2-7B-Chat-fp16-GGUF/blob/main/Llama-2-7B-Chat-fp16-Q8_0.gguf) | Q8_0 | 6.669 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
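
The body of the downloading section is truncated in this hunk. As a minimal sketch of how a single quant from the table above is typically fetched, here is one way to do it with the `huggingface_hub` Python package (the filename chosen here is an assumption for illustration; the README's actual instructions may use a different method):

```python
# Sketch: download one GGUF quant from the repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; any filename from the table works.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/Llama-2-7B-Chat-fp16-GGUF",
    filename="Llama-2-7B-Chat-fp16-Q4_K_M.gguf",  # "medium, balanced quality - recommended"
)
print(local_path)  # path to the cached .gguf file, ready to load with llama.cpp
```

Swapping `filename` for any other entry in the table downloads that quant instead.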