---
base_model: HuggingFaceH4/zephyr-7b-beta
inference: false
model_type: mistral
prompt_template: |
### Instruction:\n
{prompt}
### Response:\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
## Zephyr 7B β - DeepSparse
This repo contains model files for [Zephyr 7B β](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = '### Instruction:\nWrite a Perl script that processes a log file and counts the occurrences of different HTTP status codes. The script should accept the log file path as a command-line argument and print the results to the console in descending order of frequency.\n\n### Response:\n'
model = TextGeneration(model_path="hf:neuralmagic/zephyr-7b-beta-pruned50-quant-ds")
print(model(prompt, max_new_tokens=200).generations[0].text)
"""
Here's a Perl script that meets the requirements:
use strict;
use warnings;

sub get_status_code {
    my ($status) = ();
    my ($match) = qr/\s*\d{3}\s*$/;
    return $1 if ($status =~ $match);
}

sub count_occurrences {
    my ($file) = shift;
    my (%counts) = ();
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $line = <$fh>) {
        my ($status) = get_status_code($line);
        $counts{$status}++;
    }
    close $fh;
    return \%counts;
}

my ($file) = shift;
my (@codes) = qw(200 300 400 500);
my (@sorted) = ();
foreach my ($status, $count) (@codes, \%{ $status }->value()) {
    push @sorted, [$count, $status];
}
foreach my ($code, $freq) (@sorted) {
    print "$code\t$freq\n";
}

my ($results) = count_occurrences($file);
my (@sorted) = sort { $b[1] <=> $a[1] } @{$results};
foreach my ($code, $freq) (@sorted) {
    print "$code\t$freq\n";
}
"""
```
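The pipeline compiles the model when it is constructed, so it is worth building it once and reusing it across prompts. A minimal sketch of that pattern, using only the call signature shown above (the example prompts are illustrative):
```python
from deepsparse import TextGeneration

# Construct the pipeline once; the model is fetched and compiled here.
model = TextGeneration(model_path="hf:neuralmagic/zephyr-7b-beta-pruned50-quant-ds")

# Illustrative prompts following the model's template.
prompts = [
    "### Instruction:\nExplain what HTTP status code 404 means.\n\n### Response:\n",
    "### Instruction:\nName three common HTTP request methods.\n\n### Response:\n",
]

# Reuse the compiled pipeline for each prompt instead of rebuilding it.
for prompt in prompts:
    print(model(prompt, max_new_tokens=200).generations[0].text)
```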
## Prompt template
```
### Instruction:\n
{prompt}
### Response:\n
```
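If you build prompts programmatically, a small helper keeps the template consistent. A minimal sketch (the `format_prompt` helper is ours, not part of the DeepSparse API):
```python
def format_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the template expected by this model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Usage with the pipeline from the Inference section:
# print(model(format_prompt("Summarize what SparseGPT does."),
#             max_new_tokens=200).generations[0].text)
```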
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py HuggingFaceH4/zephyr-7b-beta open_platypus --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run the following KV cache injection script to speed up inference by caching the key and value attention states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
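To sanity-check the injection, you can list the modified graph's inputs and confirm that inputs for the cached key/value states now appear; a minimal sketch using standard `onnx` APIs (the exact input names depend on the exporter version):
```python
import onnx

# load_external_data=False leaves the large weight files on disk;
# the graph definition alone is enough to list the model inputs.
model = onnx.load("deployment/model.onnx", load_external_data=False)

# After injection, the graph should expose extra inputs for the
# cached key/value states alongside the usual token inputs.
for inp in model.graph.input:
    print(inp.name)
```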
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide to one-shot quantization of large language models.
## Slack
For further support and discussion on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).