---
base_model: glaiveai/glaive-coder-7b
inference: false
model_type: llama
prompt_template: |
  <s>[INST]
  {prompt}
  [/INST]
quantized_by: mwitiderrick
tags:
- deepsparse
---
|
# Glaive-coder-7b - DeepSparse |
|
This repo contains model files for [Glaive-coder-7b](https://huggingface.co/glaiveai/glaive-coder-7b) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models. |
|
|
|
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml). |
|
## Inference |
|
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs: |
|
```bash
pip install deepsparse-nightly[llm]
```
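To confirm the runtime installed correctly, a quick import check works (this assumes the package exposes a standard `__version__` attribute):

```bash
python -c "import deepsparse; print(deepsparse.__version__)"
```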
|
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md): |
|
```python
from deepsparse import TextGeneration

template = "<s>[INST] {prompt} [/INST]"
prompt = "Write a quick sort algorithm in Python"

input_str = template.format(prompt=prompt)

model = TextGeneration(model_path="hf:nm-testing/glaive-coder-7b-pruned50-quant-ds")

print(model(input_str, max_new_tokens=200).generations[0].text)
"""
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    arr[:mid] = sorted(arr[:mid] )
    left = arr[:mid]
    right = arr[mid:]
    quick_sort(left)
    quick_sort(right)
    left.extend(right)
    return left + right


print(quick_sort([5, 3, 1, 4, 2]))

This code will give you a sorted array. The quick_sort function sorts the first mid to mid element and the rest of the array. Then it calls the function again on the right part of the array. After that, it
"""
```
|
|
|
## Prompt template |
|
```yaml
<s>[INST]
{prompt}
[/INST]
```
|
## Sparsification |
|
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below. |
|
|
|
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py glaiveai/glaive-coder-7b open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --sequence_length 4096 --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
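For orientation, SparseML one-shot (OBCQ) recipes describe the pruning and quantization modifiers to apply. The sketch below is illustrative only: the modifier keys and values are assumptions, and the `recipe.yaml` shipped in this repo is the authoritative version.

```yaml
# Illustrative sketch only -- consult recipe.yaml in this repo for the real settings
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5     # matches the "pruned50" naming
      block_size: 128
      quantize: true
```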
|
Run the following KV-cache injection script to speed up inference by caching the key and value attention states:
|
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector

input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"

# Load the exported graph without pulling in the external weight files
model = onnx.load(input_file, load_external_data=False)
# Rewrite the graph so key/value states are cached between decoding steps
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
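As a quick smoke test (not a benchmark), the resulting `deployment` directory can be loaded with the same pipeline API used earlier, assuming it contains the model, config, and tokenizer produced by the export step:

```python
from deepsparse import TextGeneration

# Load the locally exported, KV-cache-injected model
model = TextGeneration(model_path="deployment")
print(model("<s>[INST] Write hello world in Python [/INST]", max_new_tokens=64).generations[0].text)
```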
|
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide to one-shot quantization of large language models.
|
## Slack |
|
|
|
For further support, and to discuss these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).