dranger003 committed
Commit 468d58c • Parent: 689e66f
Update README.md
README.md
CHANGED
---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
---
<u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (i.e. at least commit `494c870`).

GGUF importance matrix (imatrix) quants for https://huggingface.co/TechxGenus/starcoder2-15b-instruct
* The importance matrix was trained on ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930).
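As a quick sanity check on the figures above, the quoted calibration size follows directly from the batch count and batch length (a minimal sketch; the numbers come from this card, not from re-running the tool):

```python
# Reproduce the imatrix calibration token count quoted above:
# 105 batches of 512 tokens each.
n_batches = 105
batch_size = 512
total_tokens = n_batches * batch_size
print(total_tokens)  # 53760 tokens, i.e. roughly ~50K
```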

> Fine-tuned starcoder2-15b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs. We used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the training process. It achieves 77.4 pass@1 on HumanEval-Python. This model operates using the Alpaca instruction format (excluding the system prompt).

| Layers | Context | [Template](https://huggingface.co/TechxGenus/starcoder2-15b-instruct#usage) |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>### Instruction<br>{instruction}<br>### Response<br>{response}</pre> |
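The template column above can be turned into a prompt string with a few lines of Python. This is a minimal sketch assuming the Alpaca format shown in the table (no system prompt); `build_prompt` is a hypothetical helper name, not part of any library:

```python
# Build an Alpaca-style prompt matching the template in the table above
# (### Instruction / ### Response, no system prompt). The model is
# expected to generate the text that follows "### Response".
def build_prompt(instruction: str) -> str:
    return f"### Instruction\n{instruction}\n### Response\n"

print(build_prompt("Write a Python function that reverses a string."))
```

Pass the resulting string as the prompt to whatever llama.cpp frontend you use, leaving generation to continue after the `### Response` line.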