---
license: other
language:
- en
---
|
[ExLlamaV2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) models of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
|
|
|
Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)
|
|
|
## Model details
|
|
|
To be updated
|
|
|
## Prompt Format
|
|
|
This model primarily uses the Alpaca format, so for optimal performance, use:
|
```
<System prompt/Character Card>

### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following: Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
```
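The template above can also be assembled programmatically. The helper below is a minimal sketch; the function name, parameter names, and the example character card are illustrative assumptions, not part of the model or any official API.

```python
def build_alpaca_prompt(system_prompt: str, instruction: str) -> str:
    """Assemble an Alpaca-style prompt as described above.

    Note: this helper and its names are illustrative, not an official API.
    """
    return (
        f"{system_prompt}\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n"
    )


# Example usage with the suggested roleplay instruction
# (character and user names here are hypothetical):
prompt = build_alpaca_prompt(
    "You are Aria, a friendly travel guide.",
    "Write Aria's next reply in a chat between Alex and Aria. "
    "Write a single reply only.",
)
print(prompt)
```

The generated string ends with the `### Response:` header so the model continues directly with its reply.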
|
|
|