Update README.md
README.md
````diff
@@ -5,6 +5,13 @@ tags:
 - hpc
 - parallel
 - axonn
+datasets:
+- hpcgroup/hpc-instruct
+- ise-uiuc/Magicoder-OSS-Instruct-75K
+- nickrosh/Evol-Instruct-Code-80k-v1
+language:
+- en
+pipeline_tag: text-generation
 ---
 
 # HPC-Coder-v2
@@ -34,3 +41,9 @@ Below is an instruction that describes a task. Write a response that appropriate
 
 ```
 
+## Quantized Models
+
+4 and 8 bit quantized weights are available in the GGUF format for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
+The 4 bit model requires ~3.8 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b-Q4_K_S-GGUF).
+The 8 bit model requires ~7.1 GB memory and can be found [here](https://huggingface.co/hpcgroup/hpc-coder-v2-6.7b-Q8_0-GGUF).
+Further information on how to use them with llama.cpp can be found in [its documentation](https://github.com/ggerganov/llama.cpp).
````
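As a usage note (not part of the commit): the quantized weights added above are standard GGUF files, so any llama.cpp front end can load them. The sketch below uses [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp, rather than the CLI the README links to; the local file name and the Alpaca-style prompt template are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch: run the 4 bit GGUF quantization locally via llama-cpp-python.
# Assumes the Q4_K_S file has already been downloaded from the repo linked
# above; the local path below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="hpc-coder-v2-6.7b-Q4_K_S.gguf", n_ctx=2048)

# Alpaca-style prompt matching the template the hunk header above hints at
# ("Below is an instruction that describes a task. ..."); assumed, not
# confirmed by this diff.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite an OpenMP loop that sums an array in parallel.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256)
print(out["choices"][0]["text"])
```

At the 4 bit quantization this should fit in the ~3.8 GB the README states; the 8 bit file trades roughly twice the memory for lower quantization error.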
|