Upload folder using huggingface_hub
- .gitattributes +12 -0
- Llama3-ChatQA-1.5-8B-Q2_K.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q3_K_L.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q3_K_M.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q3_K_S.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q4_0.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q4_K_M.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q4_K_S.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q5_0.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q5_K_M.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q5_K_S.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q6_K.gguf +3 -0
- Llama3-ChatQA-1.5-8B-Q8_0.gguf +3 -0
- README.md +82 -0
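The commit title indicates the upload was made with `huggingface_hub`'s `upload_folder` API. A minimal sketch of such a call is shown below; the local folder path, token handling, and exact arguments are illustrative assumptions, not a record of the actual command:

```python
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` or the HF_TOKEN env var

# Upload every file in a local folder as a single commit. Large binary files
# such as .gguf go through Git LFS, which is why this commit also extends
# .gitattributes with LFS rules for each quantized file.
api.upload_folder(
    repo_id="tensorblock/Llama3-ChatQA-1.5-8B-GGUF",
    repo_type="model",
    folder_path="./Llama3-ChatQA-1.5-8B-GGUF",  # hypothetical local path
    commit_message="Upload folder using huggingface_hub",
)
```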
.gitattributes
CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Llama3-ChatQA-1.5-8B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Llama3-ChatQA-1.5-8B-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee55bcd0ca1b12ecb8e6e3a6422f79df8aee4417417de5d1e4929a8adc1d76d2
size 3179131840
Llama3-ChatQA-1.5-8B-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cbd9867c81342e865c4298e0cd0ddce80b58584b38fd1c7e46e7eada0692811f
size 4321956800
Llama3-ChatQA-1.5-8B-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f03d3780473e970571486eb40c213fae9d5ad027ed7ab2fdf803c3e07e586033
size 4018918336
Llama3-ChatQA-1.5-8B-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e0ce7024ca873913d4f8aaf2a0c8d47eb449af4a39e4f90984e7d99d186c9c1
size 3664499648
Llama3-ChatQA-1.5-8B-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc631f51cf9b9eab4617e6a0ad9cdd8e6ea04814c713d6b746295c3f228e1578
size 4661212096
Llama3-ChatQA-1.5-8B-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9337916f10c5e6a82580d1f03868e6771c4a5ced8a94975e639f19c026011d97
size 4920734656
Llama3-ChatQA-1.5-8B-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e8e473cc102ad3c551926137f70b2c1938af7c9f5c977a958d890bf727ef7c60
size 4692669376
Llama3-ChatQA-1.5-8B-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc9b246ab489a908cd8482c46649e629ae55ac8a1f714427b83cd2115a666cfc
size 5599294400
Llama3-ChatQA-1.5-8B-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d5a47a67aff40281626523f1aabbf940dc755d2039c0a869b0789b4eaf3b6d89
size 5732987840
Llama3-ChatQA-1.5-8B-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:417ca7f5ff4ede12fd6f986b924e1cd0641fd8b538b45ff19e21313d7fbdaaa3
size 5599294400
Llama3-ChatQA-1.5-8B-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c22289e78a2736535f144a7a8260f5583c5718246293f9f6764ba3503eef4db3
size 6596006848
Llama3-ChatQA-1.5-8B-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b264fa4099d63952e615682c70d11ddf12982fa6af22ad2c590fcf4ba390253b
size 8540771264
README.md
ADDED
@@ -0,0 +1,82 @@
---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
- TensorBlock
- GGUF
base_model: nvidia/Llama3-ChatQA-1.5-8B
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>

## nvidia/Llama3-ChatQA-1.5-8B - GGUF

This repo contains GGUF format model files for [nvidia/Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

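Any GGUF-capable runtime can load these files. Below is a minimal sketch using the `llama-cpp-python` bindings; the package, the chosen quant file, and the generation settings are illustrative assumptions rather than part of this repo, and the prompt format the model was tuned on is described in the next section:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load one of the quantized files from a local path (assumed already downloaded).
llm = Llama(model_path="Llama3-ChatQA-1.5-8B-Q4_K_M.gguf", n_ctx=4096)

# Plain completion call; see the prompt template below for the exact format.
out = llm("User: What does GGUF stand for?\n\nAssistant:", max_tokens=128)
print(out["choices"][0]["text"])
```
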
## Prompt template

```
<|begin_of_text|>System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context.

User: {prompt}

Assistant:
```

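In code, the template can be filled in with plain string formatting. The sketch below assembles the prompt from the system message above plus an optional retrieved-context block; the helper name and the context placement are assumptions based on typical ChatQA usage, and the `<|begin_of_text|>` token is omitted because most runtimes add it themselves:

```python
SYSTEM = (
    "System: This is a chat between a user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions "
    "based on the context. The assistant should also indicate when the answer cannot "
    "be found in the context."
)

def build_chatqa_prompt(question: str, context: str = "") -> str:
    """Fill the ChatQA-1.5 template for a single-turn question."""
    parts = [SYSTEM, ""]
    if context:
        parts += [context, ""]  # optional retrieved passage(s)
    parts += [f"User: {question}", "", "Assistant:"]
    return "\n".join(parts)

print(build_chatqa_prompt("When was NVIDIA founded?"))
```
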
## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama3-ChatQA-1.5-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama3-ChatQA-1.5-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Llama3-ChatQA-1.5-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Llama3-ChatQA-1.5-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Llama3-ChatQA-1.5-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama3-ChatQA-1.5-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Llama3-ChatQA-1.5-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Llama3-ChatQA-1.5-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama3-ChatQA-1.5-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Llama3-ChatQA-1.5-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Llama3-ChatQA-1.5-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Llama3-ChatQA-1.5-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Llama3-ChatQA-1.5-8B-GGUF/tree/main/Llama3-ChatQA-1.5-8B-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |

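The same file list can also be retrieved programmatically, which is convenient when picking a quant type from a script. A small, purely optional sketch using `huggingface_hub`:

```python
from huggingface_hub import HfApi

# List every file in the repo and keep only the GGUF quantizations.
files = HfApi().list_repo_files("tensorblock/Llama3-ChatQA-1.5-8B-GGUF")
gguf_files = sorted(f for f in files if f.endswith(".gguf"))
print("\n".join(gguf_files))
```
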
## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/Llama3-ChatQA-1.5-8B-GGUF --include "Llama3-ChatQA-1.5-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

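The same download can be done from Python with `hf_hub_download`, the library equivalent of the CLI call above (`MY_LOCAL_DIR` is a placeholder for your target directory):

```python
from huggingface_hub import hf_hub_download

# Download a single quantized file and return its local path.
local_path = hf_hub_download(
    repo_id="tensorblock/Llama3-ChatQA-1.5-8B-GGUF",
    filename="Llama3-ChatQA-1.5-8B-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)
```
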
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:

```shell
huggingface-cli download tensorblock/Llama3-ChatQA-1.5-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
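
From Python, the pattern-based download maps to `snapshot_download` with `allow_patterns`, a minimal equivalent of the command above:

```python
from huggingface_hub import snapshot_download

# Download only the files matching the pattern into MY_LOCAL_DIR.
snapshot_download(
    repo_id="tensorblock/Llama3-ChatQA-1.5-8B-GGUF",
    local_dir="MY_LOCAL_DIR",
    allow_patterns=["*Q4_K*gguf"],  # same pattern as the --include flag above
)
```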