MS Llama3-24B-Mullein-v1
hasnonname's trashpanda baby is getting a sequel. More JLLM-ish than ever, too.
Severian's notes: No longer as unhinged as v0, so we're discontinuing the instruct version. Varied rerolls, good character/scenario handling, and almost no user impersonation now. It depends heavily on intro message quality, but that also lets it follow up on messages from larger models quite nicely. Based on tester feedback, we currently consider it an overall improvement over v0. Still seeing some slop and the occasional bad reroll response, though.
Let us know what you think; if it does well, we'll spread it to other base models.
Recommended settings
Context/instruct template: Mistral V7 or V3 Tekken, but for some godforsaken reason, we found that this model is (arguably) better with the Llama 3 context/instruct templates. It's funny, stupid and insane, and we don't know why this is the case. Trying Llama 3 instruct/context on base MS24B showed it stays coherent there too in 4/5 responses, but not better than with the Mistral templates. As for why v1 seems to do better with it than base MS24B does, we don't really know.
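For reference, Meta's published Llama 3 instruct formatting (which Llama 3 context/instruct presets reproduce; frontends may add their own wrappers around it) looks like this, with the `{...}` placeholders standing in for your actual prompt text:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{model response}<|eot_id|>
```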
Samplers: temperature at 0.8 - 1.25, min_p at 0.05, top_a at 0.2, smoothing_factor at 0.2. Some optional settings include repetition_penalty at 1.03 or DRY if you have access to it.
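As a sketch of how those settings look in practice, here's a text-generation-webui/SillyTavern-style sampler payload. The parameter names are the ones those frontends commonly expose; the DRY values shown are illustrative defaults rather than our recommendation, so check your backend's documentation:

```json
{
  "temperature": 1.0,
  "min_p": 0.05,
  "top_a": 0.2,
  "smoothing_factor": 0.2,
  "repetition_penalty": 1.03,
  "dry_multiplier": 0.8,
  "dry_base": 1.75,
  "dry_allowed_length": 2
}
```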
A virt-io derivative prompt worked best during our testing, but feel free to use what you like.
Thank you!
Big thanks to the folks in the trashpanda-org discord for testing and sending over some logs!
(datasets to be attributed later here)
Reviews
It has some slop but when it cooks, the way it writes is so different. The words just go straight to my heart :UwU:
In some rerolls, I notice it gives up using brackets for dialogues. V1 can cook but he cooks like me, sometimes good and sometimes he burns shit.
– OMGWTFBBQ
Really good. That llama preset [with V1] felt like discovering that birbs were government drones all along.
– Myscell
Dayum, liking it on the first gen. It narrates, which is something I missed after using Deepseek for almost a month. Really like this gen, calm and to the point. It's good for multiple people in a scene too, and I don't think it's overly horny, which is good in my book.
– Raihan
v1 was interactive and novel, though it did say "waiting for your response" a lot -- before using the llama context/instruct, that is. It had trouble sticking to characters, but after the llama preset it was manageable. Spatial awareness was good, and little to no impersonation.
– AIELO
Writing style is really good, but I'm not sure if it's following the bot's character really well.
I'm having fun with v1 too btw, working really good rn. I'm so happy that it doesn't use my POV like... I could kiss this model rn
– Carmenta
Character handling/spatial awareness is okay, I just need to reroll. I have a starter with pretty simple and straightforward prose, and it picks up on it. On another note, the card has a faux stat system, and it follows the rules accordingly. It doesn't do phrase/structure repetition like ms3 instruct, at least. It cooks, I'm liking it.
– Sam
I didn't like any of the [blind test] models until I did the llama preset. I genuinely enjoyed v1, it's immersive and makes me want to continuously swipe and continue the story, not just do the same starter over and over.
– moothdragon
Just us having fun, don't mind it
Some logs
(following up on an existing R1 chat with 40 messages)
(following up on 3.5 sonnet)
Model tree for trashpanda-org/Llama3-24B-Mullein-v1
Base model
mistralai/Mistral-Small-24B-Base-2501