Tanvir1337 committed da235d5 (1 parent: b4cbc6c): improve readme clarity and formatting

README.md CHANGED
@@ -13,10 +13,11 @@ quantized_by: Tanvir1337
---
# Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-GGUF

-## System Prompt

```
{System}
### Prompt:
@@ -24,36 +25,32 @@ Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/).
### Response:
```

-## Usage
-If you
-##

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

-[Artefact2's](https://
-##
-[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
-But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
-These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
-The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

The updated README.md now reads:

---
# Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-GGUF

This model has been quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/), a high-performance inference engine for large language models.

## System Prompt Format

To interact with the model, use the following prompt format:

```
{System}
### Prompt:
### Response:
```
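
As a minimal sketch of how this template can be filled in programmatically, the following uses the `llama-cpp-python` bindings; the GGUF file name, context size, stop sequence, and the placement of the user text after `### Prompt:` are assumptions rather than settings specified by this repository.

```python
# Minimal sketch: fill in the "{System} / ### Prompt: / ### Response:" template
# and run it with llama-cpp-python. The model file name below is hypothetical;
# use whichever quant you actually downloaded from this repository.
from llama_cpp import Llama

def build_prompt(system: str, user_prompt: str) -> str:
    # Mirrors the template shown above; the user's text goes after "### Prompt:".
    return f"{system}\n### Prompt:\n{user_prompt}\n### Response:\n"

llm = Llama(
    model_path="Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B.Q5_K_M.gguf",  # hypothetical file name
    n_ctx=4096,  # context window; adjust to your memory budget
)

out = llm(
    build_prompt("You are a helpful assistant.", "Summarize what GGUF is in one sentence."),
    max_tokens=256,
    stop=["### Prompt:"],  # stop if the model starts a new turn of the template
)
print(out["choices"][0]["text"])
```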

## Usage Instructions

If you're new to using GGUF files, refer to [TheBloke's README](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF) for detailed instructions.
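
One possible workflow, sketched here with `huggingface_hub` and `llama-cpp-python`, is to download a single quantized file and load it directly; the file name below is an assumed example, so check the repository's file list for the names that were actually uploaded.

```python
# Sketch: download one quant from this repository and load it for local inference.
# The filename is an assumed example, not a guaranteed artifact name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-GGUF",
    filename="Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B.Q5_K_M.gguf",  # hypothetical
)

llm = Llama(model_path=path, n_gpu_layers=-1)  # -1 offloads all layers to the GPU if one is available
```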
## Quantization Options

The following graph compares the perplexity of various quantization types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

For more information on quantization, see [Artefact2's notes](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).

## Choosing the Right Model File

To select the optimal model file, consider the following factors:

1. **Memory constraints**: Determine how much RAM and/or VRAM you have available (a rough size check is sketched after this list).
2. **Speed vs. quality**: If you prioritize speed, choose a model that fits within your GPU's VRAM. For maximum quality, consider a model that fits within the combined RAM and VRAM of your system.
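
The size check from point 1 can be approximated by comparing a GGUF file's size against your memory budget; the headroom figure in this sketch is an assumption (runtime overhead from the KV cache and compute buffers grows with context size), not a rule from this README.

```python
# Rough sketch of the "memory constraints" check: does a given GGUF file, plus
# some headroom for the KV cache and compute buffers, fit in the memory you
# plan to run it in? The 15% headroom is an assumed rule of thumb.
import os

def fits_in_memory(gguf_path: str, budget_bytes: int, headroom: float = 0.15) -> bool:
    file_size = os.path.getsize(gguf_path)
    return file_size * (1.0 + headroom) <= budget_bytes

vram_budget = 8 * 1024**3  # e.g. an 8 GiB GPU
print(fits_in_memory("Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B.Q4_K_M.gguf", vram_budget))  # hypothetical file
```
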
**Quantization formats**:

* **K-quants** (e.g., Q5_K_M): A good starting point, offering a balance between speed and quality.
* **I-quants** (e.g., IQ3_M): Newer and more efficient, but may require specific hardware configurations (e.g., cuBLAS or rocBLAS).

**Hardware compatibility**:

* **I-quants**: Not compatible with Vulkan (which also targets AMD). If you have an AMD card, ensure you're using the rocBLAS build or a compatible inference engine.

For more information on the features and trade-offs of each quantization format, refer to the [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix).
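
Putting the guidance in this section together, a rough selection helper might look like the sketch below; the thresholds and the specific quant names it returns are illustrative assumptions drawn from this README's recommendations, not an official rule.

```python
# Sketch of the selection guidance above: I-quants for sub-Q4 targets on
# cuBLAS/rocBLAS, K-quants as the general default, and no I-quants on Vulkan.
# The specific quant names returned here are illustrative examples only.
def suggest_quant(backend: str, target_below_q4: bool) -> str:
    if backend == "Vulkan":
        return "Q4_K_M"   # I-quants are not compatible with Vulkan
    if target_below_q4 and backend in {"cuBLAS", "rocBLAS"}:
        return "IQ3_M"    # newer I-quants: better quality for their size
    return "Q5_K_M"       # K-quants: a good speed/quality starting point

print(suggest_quant("cuBLAS", target_below_q4=True))   # -> IQ3_M
print(suggest_quant("Vulkan", target_below_q4=True))   # -> Q4_K_M
```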