SFT (Non-RL) distillation is this good on a sub-100B model?
I'm amazed to see that just SFT makes Llama 3.3 70B look like a SOTA model!
Imagine what Meta is cooking with 200k H100s for Llama 4!
Smol models using R1 reasoning chains are awesome!
Is the dataset used to distill these models available anywhere for replication? I couldn't find it in the official documentation!
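For anyone who wants to experiment while the official data isn't published, here's a minimal sketch of what SFT-style distillation on teacher reasoning traces looks like. This is not DeepSeek's actual recipe: the dataset file `r1_reasoning_traces.jsonl`, its field names, and the hyperparameters are hypothetical placeholders, and you'd swap in your own traces sampled from R1.

```python
# Minimal sketch: fine-tune a smaller "student" causal LM on reasoning traces
# generated by a larger teacher model (plain next-token SFT, no RL).
# Dataset path and field names below are hypothetical.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

STUDENT = "meta-llama/Llama-3.3-70B-Instruct"  # any smaller base model works too

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT, torch_dtype="auto")

# Hypothetical dataset: one row per (prompt, teacher reasoning chain + answer).
raw = load_dataset("json", data_files="r1_reasoning_traces.jsonl", split="train")

def to_text(example):
    # Concatenate the prompt with the teacher's full chain-of-thought answer,
    # so the student learns to imitate the reasoning trace token by token.
    return {"text": example["prompt"] + "\n" + example["reasoning_and_answer"]}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=4096)

dataset = raw.map(to_text)
dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilled-student",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=2,
        learning_rate=1e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # Standard causal-LM collator: labels are the shifted input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice you'd run this with parameter-efficient methods or heavy sharding for a 70B student, but the core idea is just supervised next-token training on the teacher's traces.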
I wonder why they didn't distill Qwen 2.5 72B, as it is even better in reasoning tasks than Llama 3.3 70B. That would be really nice.
I guess it might be because of "License".
@DataSoul I doubt it. They didn't comply with the Llama 3 license here anyway, right? (The derived model name has to start with "Llama".) And they've slapped an MIT license on this one, which I believe would have to inherit the Meta license.
I wonder why they didn't distill Qwen 2.5 72B, as it is even better in reasoning tasks than Llama 3.3 70B
I can only speculate, but they've effectively taught Llama 3.3 how to reason properly without actually giving it any new knowledge. In my experience, Llama 3.3 has more general knowledge than Qwen 2.5 when prompted correctly.