---
base_model: Undi95/Toppy-M-7B
inference: false
license: cc-by-nc-4.0
model_creator: Undi
model_name: Toppy M 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: LogicismTV
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/T1kcNir.jpg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://logicism.tv/">Vist my Website</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/nStuNeZsWz">Join my Discord</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
# Toppy M 7B - ExLlama V2
Original model: [Toppy M 7B](https://huggingface.co/Undi95/Toppy-M-7B)
# Description
This is an EXL2 quantization of Undi95's Toppy M 7B model.
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
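For example, filling the template in Python (a minimal sketch; the `ALPACA_TEMPLATE` name is just illustrative):
```python
# Alpaca template as shown above; {prompt} is replaced with the user's instruction.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n"
    "{prompt}\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(prompt="Summarize the plot of Hamlet in two sentences.")
print(prompt)
```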
# Quantizations
| Bits Per Weight | Size |
| --------------- | ---- |
| [main (2.4bpw)](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/main) | 2.29 GB |
| [3bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/3bpw) | 2.78 GB |
| [3.5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/3.5bpw) | 3.19 GB |
| [4bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/4bpw) | 3.59 GB |
| [4.5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/4.5bpw) | 4.00 GB |
| [5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/5bpw) | 4.41 GB |
| [6bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/6bpw) | 5.22 GB |
| [8bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/8bpw) | 6.84 GB |
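Any branch can be downloaded and run with the ExLlamaV2 loader. Below is a sketch assuming the `huggingface_hub` and `exllamav2` Python packages are installed; the branch choice and sampling settings are illustrative:
```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch one branch of this repo; `revision` selects the bpw variant.
model_dir = snapshot_download(repo_id="LogicismTV/Toppy-M-7B-exl2", revision="4bpw")

# Load the quantized weights.
config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Illustrative sampling settings.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Alpaca-formatted prompt, per the template above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nWrite a haiku about autumn.\n"
    "### Response:\n"
)
print(generator.generate_simple(prompt, settings, 200))
```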
# Original model card: Undi's Toppy M 7B
<!-- description start -->
## Description
This repo contains fp16 files of Toppy-M-7B, a merge I made with the new task_arithmetic merge method from mergekit.
This project was a request from [BlueNipples](https://huggingface.co/BlueNipples): [link](https://huggingface.co/Undi95/Utopia-13B/discussions/1).
<!-- description end -->
<!-- description start -->
## Models and loras used
- [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5)
- [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)
- [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- [lemonilia/AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B)
- [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b)
- [Undi95/Mistral-pippa-sharegpt-7b-qlora](https://huggingface.co/Undi95/Mistral-pippa-sharegpt-7b-qlora)
<!-- description end -->
## The sauce
```
openchat/openchat_3.5
lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38

NousResearch/Nous-Capybara-7B-V1.9
Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27

HuggingFaceH4/zephyr-7b-beta
Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38

merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
```
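For reference, a recipe like the one above can be run through mergekit's Python API (a sketch based on mergekit's documented usage; the config filename and output path are illustrative):
```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the task_arithmetic recipe shown above from a YAML file.
with open("toppy-m-7b.yml") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Execute the merge and write the fp16 result to the output directory.
run_merge(
    merge_config,
    "./Toppy-M-7B",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```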
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
If you want to support me, you can do so [here](https://ko-fi.com/undiai).