
MythoNemo-L3.1-70B-v1.0

This model is a fine-tune of Llama3.1-Nemotron-70B-Instruct, specifically designed to enhance its roleplaying and story writing abilities. It not only improves in these areas but also retains the base model's intelligence, instruction following, and reasoning skills. In my general tests, I mostly preferred this model's outputs over Nemotron-70B-Instruct's, and its story writing in particular stood out.
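
If you want to try the model outside SillyTavern, below is a minimal sketch of loading it with the Hugging Face transformers library. It assumes the standard Llama 3.1 chat template inherited from the base model; the prompt and sampling settings are illustrative, not a recommended preset.

```python
# Minimal sketch: loading MythoNemo-L3.1-70B-v1.0 with transformers.
# Assumes the Llama 3.1 chat template from the base model; sampling values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ppoyaa/MythoNemo-L3.1-70B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",           # a 70B model generally needs multiple GPUs or CPU offloading
)

messages = [
    {"role": "system", "content": "You are a creative storyteller."},
    {"role": "user", "content": "Write the opening scene of a low-fantasy heist story."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```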


SillyTavern

CHARACTER CARD RESPONSE EXAMPLE: [screenshot]

SCENARIO/ADVENTURE TYPE CARD EXAMPLE: [screenshots]

❕The odd bolding and spacing in the examples above are artifacts of cropping the screenshots; I'm not sure why that happens.❕


SILLYTAVERN PRESET:

I recommend using the preset I made for this model: Ppoyaa/MythoNemo-Preset


❗THIS MODEL CAN OUTPUT NSFW RESPONSES❗


Additional Response Examples

REASONING: [screenshots]

STORYTELLING: [screenshot]


Quants

Big thanks to mradermacher for the quants:

Static: mradermacher/MythoNemo-L3.1-70B-v1.0-GGUF

Weighted/Imatrix: mradermacher/MythoNemo-L3.1-70B-v1.0-i1-GGUF
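
For local inference with the GGUF quants above, here is a minimal llama-cpp-python sketch. The quant file name is an assumption based on common naming; check the quant repo for the actual file names and pick a size that fits your hardware.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# The filename below is hypothetical; browse the mradermacher repo for real names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/MythoNemo-L3.1-70B-v1.0-GGUF",
    filename="MythoNemo-L3.1-70B-v1.0.Q4_K_M.gguf",  # hypothetical file name
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=8192,       # context window; adjust to available memory
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative storyteller."},
        {"role": "user", "content": "Continue the scene where the heroine picks the lock."},
    ],
    max_tokens=400,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```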

Model size: 70.6B params · Tensor type: BF16 (Safetensors)