---
base_model:
- princeton-nlp/Llama-3-Instruct-8B-SimPO
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- sillytavern
- llama3
- not-for-all-audiences
license: cc-by-nc-4.0
language:
- en
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/L3-Nymeria-Maid-8B-GGUF
This is a quantized version of [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B), created using llama.cpp.
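If you want to run one of the quants outside a UI, any llama.cpp-compatible runtime works. Below is a minimal loading sketch using the `llama-cpp-python` bindings; the quant filename is a placeholder, so substitute the actual file you download from this repo.

```python
# Minimal loading sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is hypothetical; use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Nymeria-Maid-8B.Q4_K_M.gguf",  # placeholder quant filename
    n_ctx=8192,       # Llama 3 8B context window
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```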
# Original Model Card
![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png)
## This version is solely for scientific purposes, of course.
Nymeria is the balanced version and doesn't force NSFW. Nymeria-Maid carries more of Stheno's weights, leans further toward NSFW, and is more submissive.
## SillyTavern
### Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
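Outside SillyTavern, most of these samplers map one-to-one onto llama.cpp parameter names. A hedged sketch with `llama-cpp-python`, reusing the `llm` instance from the loading example above (`smooth_factor` / `smooth_curve` belong to smooth/quadratic sampling, which this API does not expose, so they are omitted):

```python
# The preset above expressed as llama-cpp-python sampling arguments.
# smooth_factor / smooth_curve have no equivalent here and are left out.
out = llm.create_chat_completion(  # `llm` from the loading sketch above
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```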
### Advanced Formatting
[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9)
Instruct Mode: Enabled
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged using the slerp merge method.
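Slerp (spherical linear interpolation) walks along the arc between two weight tensors instead of the straight line between them, which keeps the interpolated weights closer in norm to the originals than plain averaging would. A rough NumPy sketch of the core formula, as an illustration rather than mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened tensors: t=0 -> v0, t=1 -> v1."""
    u0 = v0 / (np.linalg.norm(v0) + eps)   # unit directions
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if abs(dot) > 0.9995:                  # nearly colinear: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    omega = np.arccos(dot)                 # angle between the two directions
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * v0 + (np.sin(t * omega) / s) * v1
```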
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.4, 0.6]
    - filter: mlp
      value: [0.8, 0.6, 0.6, 0.4]
    - value: 0.4
dtype: bfloat16
```
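In the `t` block, each list of values is interpolated across the layer stack; by the usual mergekit convention where `t=0` keeps the base model, the attention sublayers start closer to Stheno and drift toward SimPO in deeper layers, while the MLP sublayers do roughly the opposite. To reproduce a merge like this, the usual route is the `mergekit-yaml` CLI, but mergekit also has a Python entry point; a minimal sketch, assuming the YAML above is saved as `config.yaml`:

```python
# Sketch: run the merge config above via mergekit's Python API (pip install mergekit).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./L3-Nymeria-Maid-8B",  # hypothetical output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True, lazy_unpickle=True),
)
```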
---
## Original model information:
## Model: Sao10K/L3-8B-Stheno-v3.2
Stheno-v3.2-Zeta
Changes compared to v3.1:
- Included a mix of SFW and NSFW storywriting data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
- Included more Instruct / Assistant-style data
- Further cleaned up roleplaying samples from c2 logs; a few terrible, really bad samples escaped the heavy filtering, and a manual pass fixed that
- Hyperparameter tinkering for training, resulting in lower loss levels

Testing notes, compared to v3.1:
- Handles SFW / NSFW separately better; not as overly excessive with NSFW now, fairly balanced
- Better at storywriting / narration
- Better at assistant-type tasks
- Better multi-turn coherency; reduced issues?
- Slightly less creative? A worthy tradeoff; still creative
- Better prompt / instruction adherence
---
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum