Very good, thanks.
I already liked Llama-3.1-70B-Instruct-Lorablated a lot, so it's nice to see an RP/story-oriented fine-tune on top of it. I wouldn't say it's better in scenarios where plain Lorablated already works, but this fine-tune can go places plain Lorablated refuses to go (e.g. a lot darker and more sinister), and it doesn't seem to have the positive bias that plain Lorablated has. There is little intelligence loss, though it does seem a lot pickier about samplers. Still pretty amazing overall.
I use the imatrix IQ4_XS quant with this one. I suggest just the MinP sampler, and perhaps DRY. When I tried to make it a little more random (e.g. with smoothing factor), the intelligence and the ability to follow formatting dropped dramatically. But with just MinP and DRY it seems to work fine.
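For anyone unfamiliar with why MinP alone is usually enough: it cuts the tail relative to the top token instead of using a fixed cutoff, so it adapts per step. A rough sketch of the idea (my own illustration, not code from this model or any particular backend; the function name and values are made up):

```python
def min_p_filter(probs, min_p=0.05):
    """Zero out tokens whose probability is below min_p times
    the top token's probability, then renormalize the rest."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p=0.1, anything under 10% of the top token's prob is dropped:
print(min_p_filter([0.5, 0.3, 0.15, 0.04, 0.01], min_p=0.1))
```

When the model is confident the threshold is high and few tokens survive; when it's uncertain the threshold drops and more tokens stay in play, which is why it tends to hurt formatting and intelligence less than flattening the whole distribution with something like smoothing factor.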