
Noteworthy Models 2025 (22B-32B)
Interesting models released in 2025. Tested for overall response quality, ad-hoc function calling, summarization, menu navigation, and creative writing.
Note: A proper CoT / reasoning model from Qwen. It absolutely blows the previous third-party distills out of the water; it's not even comparable. The best model in this size category by far. It can do creative writing, but its formatting style is somewhat distracting, and it will refuse more extreme content.
mistralai/Mistral-Small-24B-Instruct-2501
Note: A decent open-source 24B model from Mistral. It passed most of my tests: obedient, and very good at abiding by the system prompt. However, it's a lot worse at creative writing than the 22B, suffering from the same repetitiveness as other Mistral models. It's more of a task-oriented LLM than a generalist one.
DavidAU/L3-Grand-Story-Darkness-MOE-4X8-24.9B-e32-GGUF
Note: I don't normally look at mixture-of-experts models, but that may have been a mistake: this one is really impressive. It's good in both creative-writing and assistant roles, handles complex prompts very well, and its writing style is surprisingly natural.
Undi95/MistralThinker-v1.1
Note: It needs work, but it's an interesting foray into making a CoT RP model. If you can stomach the way it highlights random words (or can filter that out), it's very fun to play with. On its own, it shows that R1 "creative" datasets need to be manually (or algorithmically) edited.
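If the random emphasis bothers you, stripping markdown bold/italic markers from the model's output is easy to do in post-processing. A minimal sketch (the regex approach is my own suggestion, not anything the model card prescribes):

```python
import re

def strip_emphasis(text: str) -> str:
    """Remove markdown bold/italic markers, keeping the words themselves."""
    # **word** or *word* -> word
    text = re.sub(r"\*{1,2}([^*\n]+?)\*{1,2}", r"\1", text)
    # __word__ or _word_ -> word
    text = re.sub(r"_{1,2}([^_\n]+?)_{1,2}", r"\1", text)
    return text

print(strip_emphasis("She **suddenly** looked at the _old_ clock."))
# -> She suddenly looked at the old clock.
```

In a frontend like SillyTavern the same effect can be had with a regex replacement rule applied to the model's responses.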
TheDrummer/Cydonia-24B-v2.1-GGUF
Note: The Mistral-24B edition of Cydonia. It keeps the excellent task-execution abilities of the base model, but it can also do creative writing without being overly repetitive. Use the Mistral Tekken (v7) instruct format.
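For reference, a single turn in the Mistral v7 (Tekken) format wraps the system prompt and user message in dedicated tags. A minimal sketch (illustrative only; in practice let the tokenizer's chat template insert the special tokens):

```python
def format_v7_tekken(system: str, user: str) -> str:
    """Build a single-turn prompt in the Mistral v7 (Tekken) chat format.

    Illustrative sketch -- with transformers you would normally call
    tokenizer.apply_chat_template() instead of building the string by hand.
    """
    return (
        "<s>"
        f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
        f"[INST]{user}[/INST]"
    )

prompt = format_v7_tekken("You are a helpful assistant.", "Summarize this chapter.")
print(prompt)
```

The model's reply follows directly after the closing `[/INST]` and ends with `</s>`.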
nbeerbower/Dumpling-Qwen2.5-32B-v2
Note: A decent Qwen-32B creative fine-tune. While it suffers from some of the usual Qwen-isms, it's fairly bright, and creative with a bit of guidance. It passed all my usual tasks/tests with no issues. Qwen 2.5's innate function calling hasn't been tested.