---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
license: cc-by-4.0
---
<h1 style="text-align: center">Erosumika-7B-v3</h1>

<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/ZX5NLfB2CctdwuctS9W8A.gif" alt="Header GIF">
</div>
4.0bpw exl2 quant, great for 16k+ context on 6GB GPUs!

Original model: [localfultonextractor/Erosumika-7B-v3](https://huggingface.co/localfultonextractor/Erosumika-7B-v3)
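
One way to run the quant is through the exllamav2 Python bindings. The snippet below is only an illustrative sketch: it assumes a recent `exllamav2` release, a local download of this repo (the directory path is a placeholder), and raises `max_seq_len` to 16384 to match the 16k context claim.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path: wherever this 4.0bpw exl2 repo was downloaded to
config = ExLlamaV2Config()
config.model_dir = "./Erosumika-7B-v3-4.0bpw-exl2"
config.prepare()
config.max_seq_len = 16384  # extended context; costs extra VRAM for the cache

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the cache while loading
model.load_autosplit(cache)               # split across available GPU memory

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9
settings.token_repetition_penalty = 1.05

print(generator.generate_simple("Once upon a time,", settings, 200))
```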

## Model Details

A DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). The Alpaca and ChatML prompt formats work best.
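
The card does not ship explicit prompt templates, so the strings below simply spell out the widely used Alpaca and ChatML layouts; the system and instruction text are placeholders to adapt.

```python
# Widely used Alpaca layout (instruction text is up to you)
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Widely used ChatML layout
CHATML_TEMPLATE = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

prompt = CHATML_TEMPLATE.format(
    system="You are a creative roleplay partner.",  # placeholder system prompt
    user="Describe the tavern as my character walks in.",
)
```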

[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-GGUF)

## Limitations and Biases

The intended use case for this model is fictional writing for entertainment purposes; any other use is out of scope.

It may produce socially unacceptable or undesirable text, even if the prompt itself contains nothing explicitly offensive. Outputs can often be factually wrong or misleading.

```yaml
# mergekit configuration used for the DARE TIES merge
base_model: localfultonextractor/FlatErosAlpha
models:
  - model: localfultonextractor/FlatErosAlpha
  - model: Epiculous/InfinityRP-v1-7B
    parameters:
      density: 0.4
      weight: 0.25
  - model: Nitral-AI/Kunocchini-7b
    parameters:
      density: 0.3
      weight: 0.35
merge_method: dare_ties
dtype: bfloat16
```

Note: the tokenizer was copied from InfinityRP-v1-7B.
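
To reproduce the merge from the configuration above, mergekit exposes a Python entry point. This is only a sketch based on mergekit's documented usage: the file and directory names are placeholders, and the exact `MergeOptions` fields may vary between mergekit versions.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "erosumika-v3-merge.yaml"  # placeholder: the YAML above saved to disk
OUT_PATH = "./Erosumika-7B-v3-merge"     # placeholder output directory

with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    OUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # write a tokenizer into the output folder
        lazy_unpickle=False,             # optional low-memory checkpoint loading
    ),
)
```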