---
license: apache-2.0
datasets:
- open-thoughts/OpenThoughts-114k
- prithivMLmods/Deepthink-Reasoning-Ins
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---

The model is still in training. The official release is planned for 23/02/2025 (DD/MM/YYYY).

SaplingDream is a small GPT with 0.5B parameters, based on [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) and finetuned on reasoning datasets with great care, to ensure a high-quality sapling model, hence the name "SaplingDream".

The base model is finetuned with SGD for better generalisation, combined with a polynomial learning-rate scheduler starting at 1e-4; a hedged sketch of this configuration is given below. Better safe than sorry: we hope the model picks up not only the tokens but also how to actually reason through a problem. We train on [open-thoughts/OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) and [prithivMLmods/Deepthink-Reasoning-Ins](https://huggingface.co/datasets/prithivMLmods/Deepthink-Reasoning-Ins) for one full epoch.

**Until training finishes, every 200th checkpoint out of the 14,275 optimisation steps will be uploaded. See `Files and versions`.**
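A minimal sketch of the recipe described above, assuming the Hugging Face `Trainer` API: SGD, polynomial learning-rate decay from 1e-4, one epoch, checkpoints every 200 steps. The toy dataset and the batch size are assumptions standing in for details not stated in this card; they are not the actual training pipeline.

```python
# Sketch of the stated setup: SGD + polynomial LR decay from 1e-4, one epoch,
# a checkpoint every 200 steps. The toy dataset below stands in for the
# tokenized OpenThoughts-114k + Deepthink-Reasoning-Ins mix.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Toy stand-in so the sketch runs end to end; replace with the real data.
texts = ["Question: what is 2 + 2? Let's think step by step. 2 + 2 = 4."]
train_dataset = Dataset.from_dict(dict(tokenizer(texts, truncation=True)))

args = TrainingArguments(
    output_dir="SaplingDream",
    optim="sgd",                     # stated: SGD for better generalisation
    learning_rate=1e-4,              # stated: starting lr of 1e-4
    lr_scheduler_type="polynomial",  # stated: "Polynomial" scheduler
    num_train_epochs=1,              # stated: one full epoch
    save_steps=200,                  # stated: a checkpoint every 200 steps
    per_device_train_batch_size=8,   # assumption, not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    # Causal-LM collator: pads batches and derives labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```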
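And a hypothetical usage sketch for trying one of the uploaded checkpoints. The path is a placeholder, since the repo id is not stated here; point it at this repository or at a checkpoint directory from `Files and versions`.

```python
# Hypothetical usage sketch; replace the placeholder path with this repo's
# id or a local checkpoint directory from `Files and versions`.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "<repo-or-checkpoint-path>"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path)

messages = [{"role": "user", "content": "What is 17 * 23? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```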