Mistral-Nemo-Instruct-2407 in ONNX?!?
After much fiddling with knobs like the luddite that I am, I've finally succeeded in converting MistralAI's Mistral Nemo Instruct 2407 to F32 ONNX format. Tests are currently underway; I was able to use Kaggle to get fairly basic benchmarks of the base model, and as soon as the ONNX model finishes uploading to Kaggle, I should be able to benchmark this monstrosity. If it shows well, I'll be releasing F16, Q8, and Q4 quants, and yes, a Q2, simply for dev and research purposes.
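For anyone wanting to try the same thing, here's a minimal sketch of the kind of pipeline involved. I'm not claiming this is my exact setup: it assumes Hugging Face Optimum's ONNX exporter handles the Mistral architecture (it does for standard Mistral-family checkpoints), and the output paths are just placeholders.

```python
# Sketch: export to F32 ONNX via Hugging Face Optimum, then make a
# dynamic Q8 quant with ONNX Runtime. Paths are placeholders.
from optimum.onnxruntime import ORTModelForCausalLM
from onnxruntime.quantization import QuantType, quantize_dynamic

# export=True converts the PyTorch checkpoint to ONNX on the fly
# (F32 by default). A 12B model in F32 blows past the 2 GB protobuf
# limit, so Optimum saves the weights as external data files alongside.
model = ORTModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Instruct-2407",
    export=True,
)
model.save_pretrained("mistral-nemo-onnx-f32")

# Dynamic quantization to int8 weights (the "Q8" flavor). Q4 and Q2
# need different tooling, e.g. ONNX Runtime's 4-bit MatMul quantizer.
quantize_dynamic(
    model_input="mistral-nemo-onnx-f32/model.onnx",
    model_output="mistral-nemo-onnx-q8/model.onnx",
    weight_type=QuantType.QInt8,
)
```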
Assuming success with this, I'll be setting my sights on making an ONNX version of Mamba 2, to see if I can retain the Mamba 2 architecture's strengths, including its resilience under quantization, and put it to work on my overall project goals.
I wanted to thank everyone here and online for their help, encouragement, and assistance while I've worked on this. To my family and friends: I'm not dead (lol). To everyone out there who shares my passion for exploration and has no fear of "failing forward": thank you for putting up with my absolute newb questions. Hopefully this F32 conversion, even if it's somewhat problematic, shows that with grit and desire, you can teach yourself how to do anything you want.
And... I know y'all are WAY better at all of this than I am, but I'm running on a Dell G15 5535, which means I have one CUDA GPU (plus the NPU, so you can probably see where I'm going with ONNX. I'm looking at you, Vitis AI...) and a lot of patience. lol
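If you're wondering about the NPU angle: ONNX Runtime ships a Vitis AI execution provider, so (assuming you have AMD's Ryzen AI build of onnxruntime installed, which I haven't verified on this exact machine) pointing a session at the NPU looks something like this:

```python
import onnxruntime as ort

# Request the Vitis AI EP first, falling back to CPU for any ops it
# can't place on the NPU. The model path is a placeholder.
session = ort.InferenceSession(
    "mistral-nemo-onnx-q8/model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # confirm which EPs actually loaded
```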
Love to everyone out there.
-Ryan