LLaMA 3 8B Dirty Writing Prompts LoRA

Testing with L3 8B Base

When alpha is left at the default (64), it just acts like an r/DWP comment generator for a given prompt.
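A minimal sketch of how the adapter might be applied to the base model for this kind of test, assuming the standard transformers + peft loading path (the base model ID, prompt, and generation settings here are illustrative, not taken from the card):

```python
# Assumption: standard transformers + PEFT loading. At default settings the
# adapter's own lora_alpha (64) from adapter_config.json is used as-is.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Meta-Llama-3-8B"            # L3 8B Base (assumed repo id)
ADAPTER_ID = "nothingiisreal/llama3-8B-DWP-lora"  # this LoRA

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # alpha stays at the trained default

prompt = "[WP] ..."  # an r/DWP-style writing prompt (placeholder)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```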

Testing with Stheno 3.3

When alpha is bumped to 256, it shows effects on the prompts we trained on; lower alphas or out-of-scope prompts are unaffected.

When alpha is bumped to 768, it always steers the conversation toward being horny and makes up excuses to create lewd scenarios.
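One way to try the higher alpha values described above is to override lora_alpha in the adapter config before loading. This is a sketch under the assumption that the peft PeftConfig/PeftModel API is used and that Stheno 3.3 corresponds to the Sao10K/L3-8B-Stheno-v3.3-32K repo; none of it is code from the card:

```python
# Assumption: the effective LoRA scale is lora_alpha / r, so raising lora_alpha
# before the adapter is loaded applies the LoRA far more strongly than the
# trained default of 64 (e.g. 256 or 768 as described above).
import torch
from transformers import AutoModelForCausalLM
from peft import PeftConfig, PeftModel

BASE_ID = "Sao10K/L3-8B-Stheno-v3.3-32K"          # assumed repo id for Stheno 3.3
ADAPTER_ID = "nothingiisreal/llama3-8B-DWP-lora"

config = PeftConfig.from_pretrained(ADAPTER_ID)   # dispatches to LoraConfig for this adapter
config.lora_alpha = 256                           # or 768 for the strongest effect

base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID, config=config)
```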

This is completely emergent behaviour; we haven't trained for it. All we did was... read here in the model card
