---
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- 128k context
- horror
- llama 3.1
- mergekit
pipeline_tag: text-generation
---

(quants uploading..., examples to be added)

<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>

<h2>L3.1-Dark-Planet-10.7B-ExxxxxxxxTended-GGUF</h2>

It is a Llama 3.1 model with a maximum context of 131,072 tokens (128k).

This model has been designed to be relatively bulletproof: it operates across all parameter settings, including temp settings from 0 to 5.

It is an extraordinarily compressed model.

This model differs from original "<A href="https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF">Dark Planet 8B</a>" as follows:

- 12 layers were added to the 8B L3/L3.1 base models, bringing them to 10.65B parameters.
- Llama 3 Instruct was replaced with Llama 3.1 Instruct (also extended).
- All of the "extended" models (grown from 8B to 10.65B) were then merged with "DARE-TIES" in a framework that re-arranges the duplicated layers and replaces them carefully.

These changes result in longer output, longer context, and a slight uptick in function of the model.
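The card does not publish the actual merge recipe; purely as an illustration of what a DARE-TIES merge looks like in mergekit, with placeholder model names, densities, and weights:

```yaml
# Hypothetical mergekit sketch only -- the model names, densities, and
# weights below are placeholders, NOT the recipe used for this model.
merge_method: dare_ties
base_model: extended-llama-3.1-instruct-10.65b   # placeholder name
dtype: bfloat16
models:
  - model: extended-stheno-10.65b                # placeholder name
    parameters:
      density: 0.5
      weight: 0.4
  - model: extended-lumimaid-10.65b              # placeholder name
    parameters:
      density: 0.5
      weight: 0.3
  - model: extended-jamet-blackroot-10.65b       # placeholder name
    parameters:
      density: 0.5
      weight: 0.3
```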

Content from this model can be especially disturbing, and can appear with little warning.

That is: "horror" means real, vivid, and at times disturbing, if you tell the model that "horror" is what you want.

This is the first version using these extension techniques, with more to follow (already created).

This model is for any writing, fiction or roleplay activity.

It requires the Llama 3 template and/or the "Command-R" template.

Example outputs below.

<B>Model Notes:</B>

- Detail, prose and fiction writing abilities are significantly increased vs L3.1 Instruct AND L3 Instruct.
- For more varied prose (sentence/paragraph/dialog) raise the temp and/or add more instructions in your prompt(s).
- Role-players: be careful raising temp too high, as it may affect instruction following.
- This model works with rep pen of 1 or higher, 1.05+ recommended.
- If you want a specific type of prose (e.g. horror) add in "(vivid horror)" or "(graphic vivid horror)" (no quotes) in your prompt(s).
- A lot of GPTisms have been removed. There are still a few however - errrrr.
- This is not a "happy ever after" model. It has a negative bias.
- Output length will vary however this model prefers LONG to VERY LONG outputs unless you state the size or set the maximum output.
- For creative uses, different quants will produce slightly different output.
- Due to the stability and compressed nature of this model, all quants will operate at above average levels.
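As one illustrative way to apply the settings above (the `.gguf` filename is hypothetical, and the prompt is an example only), a llama.cpp command line might look like:

```shell
# Illustrative llama.cpp invocation -- filename and prompt are placeholders.
./llama-cli -m L3.1-Dark-Planet-10.7B-Q4_K_M.gguf \
    -c 131072 \
    --temp 0.8 \
    --repeat-penalty 1.05 \
    -p "Write a vivid horror scene set in an abandoned lighthouse."
```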

This is a Llama 3.1 model: it requires the Llama 3 template, may work with other templates, and has a maximum context of 131k.

If you use "Command-R" template your output will be very different from using "Llama3" template.

Here is the standard LLAMA3 template:

<PRE>
{
  "name": "Llama 3",
  "inference_params": {
    "input_prefix": "<|start_header_id|>user<|end_header_id|>\n\n",
    "input_suffix": "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
    "pre_prompt": "You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.",
    "pre_prompt_prefix": "<|start_header_id|>system<|end_header_id|>\n\n",
    "pre_prompt_suffix": "<|eot_id|>",
    "antiprompt": [
      "<|start_header_id|>",
      "<|eot_id|>"
    ]
  }
}
</PRE>
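For raw (non-chat-UI) use, the template fields above assemble into a single prompt string. A minimal sketch (the function name and example texts are illustrative; front-ends and tokenizers typically also prepend `<|begin_of_text|>` automatically):

```python
# Minimal sketch: assembling a raw Llama 3 prompt string from the
# template fields shown above. Function name is illustrative.

def build_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap system and user text in the Llama 3 special tokens."""
    pre_prompt_prefix = "<|start_header_id|>system<|end_header_id|>\n\n"
    pre_prompt_suffix = "<|eot_id|>"
    input_prefix = "<|start_header_id|>user<|end_header_id|>\n\n"
    input_suffix = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    return (
        pre_prompt_prefix + system_prompt + pre_prompt_suffix
        + input_prefix + user_message + input_suffix
    )

prompt = build_llama3_prompt(
    "You are a helpful, smart, kind, and efficient AI assistant.",
    "Write the opening scene of a horror story.",
)
print(prompt)
```

The resulting string ends with the assistant header, so the model's generation begins as the assistant's reply.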

<B>Model "DNA":</B>

Special thanks to the incredible work of the model makers "SAO10K", "NEVERSLEEP" and "HASTAGARAS".

Models used:

[ https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2 ]

[ https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS ]

[ https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot ]

Parts of these models were "grafted" / "fused" together to create this model.

<b>Optional Enhancement:</B>

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
Used this way, the enhancements do not have as strong an effect as when placed in the "system prompt" or "system role".

Copy and paste EXACTLY as noted, DO NOT line wrap or break the lines, maintain the carriage returns exactly as presented.

<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>

You do not need to use this; it is presented only as an optional enhancement, which seems to help scene generation
and scene-continuation functions.
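For chat-style front-ends, the two placements described above can be sketched with a generic OpenAI-style messages list (the variable names and user text are illustrative):

```python
# Illustrative sketch of the two placements for the enhancement text.
# The "..." placeholder stands for the full enhancement block quoted above.
ENHANCEMENT = "Below is an instruction that describes a task. ..."

# Option 1: use the enhancement as the system role.
messages = [
    {"role": "system", "content": ENHANCEMENT},
    {"role": "user", "content": "Continue the scene."},
]

# Option 2: no system role -- prepend it to the first user turn, and
# re-include it as the chat grows so it is "kept" in the context window.
messages_alt = [
    {"role": "user", "content": ENHANCEMENT + "\n\nContinue the scene."},
]
```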

This enhancement WAS NOT used to generate the examples below.

<h3>EXAMPLES PROMPTS and OUTPUT:</h3>

Examples are created using quant Q4_K_M, "temp=.8" (unless otherwise stated), minimal parameters and "LLAMA3" template. 

Model has been tested with "temp" from ".1" to "5".

Below are the least creative outputs; the prompt is in <B>BOLD</B>.

---

<B><font color="red">WARNING:</font> NSFW. Vivid prose. Visceral Details. Violence. HORROR. Swearing. UNCENSORED. </B>

---

To be added...