---
base_model: stabilityai/stablelm-2-zephyr-1_6b
datasets:
- HuggingFaceH4/ultrachat_200k
- allenai/ultrafeedback_binarized_cleaned
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- hkust-nlp/deita-10k-v0
license: other
license_link: https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/blob/main/LICENSE
language:
- en
model_creator: stabilityai
model_name: stablelm-2-zephyr-1_6b
model_type: stablelm_epoch
inference: false
tags:
- causal-lm
- stablelm_epoch
pipeline_tag: text-generation
prompt_template: |
  <|system|>
  {{system_message}}<|endoftext|>
  <|user|>
  {{prompt}}<|endoftext|>
  <|assistant|>
  
quantized_by: brittlewis12
---

# StableLM-2-Zephyr-1.6B GGUF

Original model: [StableLM 2 Zephyr 1.6B](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b)
Model creator: [Stability AI](https://huggingface.co/stabilityai)

This repo contains GGUF format model files for Stability AI’s StableLM 2 Zephyr 1.6B.

> Stable LM 2 Zephyr 1.6B is a 1.6 billion parameter instruction-tuned language model inspired by HuggingFaceH4's Zephyr 7B training pipeline. The model is trained on a mix of publicly available datasets and synthetic datasets, utilizing Direct Preference Optimization (DPO).



### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
These files were converted using a proposed version of llama.cpp ([PR #5052](https://github.com/ggerganov/llama.cpp/pull/5052)).
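
For example, one of the quantized files in this repo can be fetched programmatically with `huggingface_hub`. The sketch below is illustrative only: the repo id and filename are assumptions, so check this repo's file list for the actual quantization names (e.g. Q4_K_M, Q8_0).

```python
# Minimal sketch: download a GGUF file from the Hugging Face Hub.
# The repo_id and filename below are assumed, not verified --
# substitute the actual entries from this repo's "Files and versions" tab.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="brittlewis12/stablelm-2-zephyr-1_6b-GGUF",  # assumed repo id
    filename="stablelm-2-zephyr-1_6b.Q4_K_M.gguf",       # assumed filename
)
print(model_path)  # local cache path of the downloaded model file
```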

### Prompt template: Zephyr

```
<|system|>
{{system_message}}<|endoftext|>
<|user|>
{{prompt}}<|endoftext|>
<|assistant|>
```
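
As an illustration, here is one way to apply this template with `llama-cpp-python`. This is a sketch, not this repo's official usage: the model path is a placeholder for a locally downloaded GGUF file, and the sampling settings are arbitrary.

```python
# Sketch: format the Zephyr template and generate with llama-cpp-python.
# Assumption: model_path points at a GGUF file downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="stablelm-2-zephyr-1_6b.Q4_K_M.gguf", n_ctx=4096)

def zephyr_prompt(system_message: str, prompt: str) -> str:
    # Mirrors the template above: system turn, user turn, open assistant turn.
    return (
        f"<|system|>\n{system_message}<|endoftext|>\n"
        f"<|user|>\n{prompt}<|endoftext|>\n"
        f"<|assistant|>\n"
    )

out = llm(
    zephyr_prompt("You are a helpful assistant.", "Explain what GGUF is."),
    max_tokens=256,
    stop=["<|endoftext|>"],  # the template's turn terminator
)
print(out["choices"][0]["text"])
```

Stopping on `<|endoftext|>` matters here: it is the turn delimiter in the template, so without it the model may continue generating past the end of the assistant turn.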

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---

## Original Model Evaluations

![MT-Bench](https://cdn-uploads.huggingface.co/production/uploads/61b2bf4f5b1f7cad1799cfbb/QH00HVM3lg-5f17U_py4K.png)
| Model                   | Size | MT-Bench |
|-------------------------|------|----------|
| Mistral-7B-Instruct-v0.2| 7B   | 7.61     |
| Llama2-Chat             | 70B  | 6.86     |
| stablelm-zephyr-3b      | 3B   | 6.64     |
| MPT-30B-Chat            | 30B  | 6.39     |
| **stablelm-2-zephyr-1.6b**  | 1.6B | 5.42     |
| Falcon-40B-Instruct     | 40B  | 5.17     |
| Qwen-1.8B-Chat          | 1.8B | 4.95     |
| dolphin-2.6-phi-2       | 2.7B | 4.93     |
| phi-2                   | 2.7B | 4.29     |
| TinyLlama-1.1B-Chat-v1.0| 1.1B | 3.46     |

### OpenLLM Leaderboard

| Model                                  | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | GSM8K (acc) |
|----------------------------------------|------|---------|-------------------------|----------------------|-----------------|------------------|------------------|-------------|
| microsoft/phi-2                        | 2.7B | 61.32%  | 61.09%                  | 75.11%               | 58.11%          | 44.47%           | 74.35%           | 54.81%      |
| **stabilityai/stablelm-2-zephyr-1_6b**     | 1.6B | 49.89%  | 43.69%                  | 69.34%               | 41.85%          | 45.21%           | 64.09%           | 35.18%      |
| microsoft/phi-1_5                      | 1.3B | 47.69%  | 52.90%                  | 63.79%               | 43.89%          | 40.89%           | 72.22%           | 12.43%      |
| stabilityai/stablelm-2-1_6b            | 1.6B | 45.54%  | 43.43%                  | 70.49%               | 38.93%          | 36.65%           | 65.90%           | 17.82%      |
| KnutJaegersberg/Qwen-1_8B-Llamaified*  | 1.8B | 44.75%  | 37.71%                  | 58.87%               | 46.37%          | 39.41%           | 61.72%           | 24.41%      |
| mosaicml/mpt-7b                        | 7B   | 44.28%  | 47.70%                  | 77.57%               | 30.80%          | 33.40%           | 72.14%           | 4.02%       |
| openlm-research/open_llama_3b_v2       | 3B   | 40.28%  | 40.27%                  | 71.60%               | 27.12%          | 34.78%           | 67.01%           | 0.91%       |
| tiiuae/falcon-rw-1b                    | 1B   | 37.07%  | 35.07%                  | 63.56%               | 25.28%          | 35.96%           | 62.04%           | 0.53%       |
| TinyLlama/TinyLlama-1.1B-3T            | 1.1B | 36.40%  | 33.79%                  | 60.31%               | 26.04%          | 37.32%           | 59.51%           | 1.44%       |