--- |
|
base_model: Undi95/Toppy-M-7B |
|
inference: false |
|
license: cc-by-nc-4.0 |
|
model_creator: Undi |
|
model_name: Toppy M 7B |
|
model_type: mistral |
|
prompt_template: 'Below is an instruction that describes a task. Write a response |
|
that appropriately completes the request. |
|
|
|
|
|
### Instruction: |
|
|
|
{prompt} |
|
|
|
|
|
### Response: |
|
|
|
' |
|
quantized_by: LogicismTV |
|
--- |
|
<div style="width: auto; margin-left: auto; margin-right: auto"> |
|
<img src="https://i.imgur.com/T1kcNir.jpg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> |
|
</div> |
|
<div style="display: flex; justify-content: space-between; width: 100%;"> |
|
<div style="display: flex; flex-direction: column; align-items: flex-start;"> |
|
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://logicism.tv/">Visit my Website</a></p>
|
</div> |
|
<div style="display: flex; flex-direction: column; align-items: flex-end;"> |
|
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/nStuNeZsWz">Join my Discord</a></p> |
|
</div> |
|
</div> |
|
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> |
|
|
|
# Toppy M 7B - ExLlama V2 |
|
|
|
Original model: [Toppy M 7B](https://huggingface.co/Undi95/Toppy-M-7B) |
|
|
|
# Description |
|
|
|
This is an EXL2 quantization of Undi95's Toppy M 7B model, for use with [ExLlamaV2](https://github.com/turboderp/exllamav2).
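
A minimal loading-and-generation sketch using the exllamav2 Python library (the model path and sampling values are illustrative, not recommendations):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point model_dir at a local copy of one of the quantization branches below.
config = ExLlamaV2Config()
config.model_dir = "./Toppy-M-7B-exl2-4bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, num_tokens=128))
```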
|
|
|
## Prompt template: Alpaca |
|
|
|
``` |
|
Below is an instruction that describes a task. Write a response that appropriately completes the request. |
|
|
|
### Instruction: |
|
{prompt} |
|
|
|
### Response: |
|
|
|
``` |
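
A minimal sketch of filling this template in Python (the example instruction is illustrative):

```python
# Alpaca-style template from the card above, with a {prompt} slot.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. Write a response "
    "that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "{prompt}\n\n"
    "### Response:\n"
)

full_prompt = ALPACA_TEMPLATE.format(
    prompt="Summarize the plot of Hamlet in two sentences."
)
```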
|
|
|
# Quantizations |
|
|
|
| Bits Per Weight | Size | |
|
| --------------- | ---- | |
|
| [main (2.4bpw)](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/main) | 2.29 GB | |
|
| [3bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/3bpw) | 2.78 GB | |
|
| [3.5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/3.5bpw) | 3.19 GB | |
|
| [4bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/4bpw) | 3.59 GB | |
|
| [4.5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/4.5bpw) | 4.00 GB | |
|
| [5bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/5bpw) | 4.41 GB | |
|
| [6bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/6bpw) | 5.22 GB | |
|
| [8bpw](https://huggingface.co/LogicismTV/Toppy-M-7B-exl2/tree/8bpw) | 6.84 GB | |
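
Each quantization lives on its own branch of this repo, so a specific variant can be fetched by passing the branch name as `revision`; a sketch using `huggingface_hub` (the local path is illustrative):

```python
from huggingface_hub import snapshot_download

# Pick a branch from the table above via `revision`.
snapshot_download(
    repo_id="LogicismTV/Toppy-M-7B-exl2",
    revision="4bpw",  # or "main", "3bpw", "3.5bpw", "4.5bpw", "5bpw", "6bpw", "8bpw"
    local_dir="./Toppy-M-7B-exl2-4bpw",
)
```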
|
|
|
|
|
# Original model card: Undi's Toppy M 7B
|
|
|
|
|
<!-- description start --> |
|
## Description |
|
|
|
This repo contains fp16 files of Toppy-M-7B, a merge I made with the new task_arithmetic merge method from mergekit.
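
Conceptually, task_arithmetic treats each fine-tune as a "task vector" (its weights minus the base model's) and adds the weighted vectors back onto the base. A simplified per-parameter sketch, not mergekit's actual implementation:

```python
def task_arithmetic_merge(base, tuned, weights):
    """Simplified task arithmetic: base + sum_i w_i * (tuned_i - base).

    `base` and each dict in `tuned` map parameter names to tensors
    (or plain floats); `weights` holds one scalar per tuned model.
    """
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (m[name] - base_param) for m, w in zip(tuned, weights))
        merged[name] = base_param + delta
    return merged
```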
|
|
|
This project was a request from [BlueNipples](https://huggingface.co/BlueNipples): [link](https://huggingface.co/Undi95/Utopia-13B/discussions/1).
|
|
|
<!-- description end --> |
|
<!-- models start -->
|
## Models and loras used |
|
|
|
- [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) |
|
- [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) |
|
- [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) |
|
- [lemonilia/AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B)
|
- [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b) |
|
- [Undi95/Mistral-pippa-sharegpt-7b-qlora](https://huggingface.co/Undi95/Mistral-pippa-sharegpt-7b-qlora)
|
|
|
<!-- models end -->
|
## The sauce |
|
``` |
|
openchat/openchat_3.5 |
|
lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38 |
|
|
|
NousResearch/Nous-Capybara-7B-V1.9 |
|
Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27 |
|
|
|
HuggingFaceH4/zephyr-7b-beta |
|
Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38 |
|
|
|
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
|
``` |
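
To reproduce the final merge step, the mergekit section of the recipe can be saved to a YAML file and passed to mergekit's `mergekit-yaml` command-line tool. A sketch (assumes `pip install mergekit`, that the intermediate Undi95/* merges above are available, and illustrative file paths):

```python
# Sketch: run the final task_arithmetic merge with mergekit.
# The config mirrors "The sauce" above.
import subprocess

MERGE_CONFIG = """\
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
"""

with open("toppy-m-7b.yml", "w") as f:
    f.write(MERGE_CONFIG)

# Usage: mergekit-yaml <config> <output-dir>
subprocess.run(["mergekit-yaml", "toppy-m-7b.yml", "./Toppy-M-7B-merged"], check=True)
```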
|
<!-- prompt-template start --> |
|
## Prompt template: Alpaca |
|
|
|
``` |
|
Below is an instruction that describes a task. Write a response that appropriately completes the request. |
|
|
|
### Instruction: |
|
{prompt} |
|
|
|
### Response: |
|
|
|
``` |
|
|
|
If you want to support me, you can do so [here](https://ko-fi.com/undiai).