Built with Axolotl

Omega 2.6B

This model is derived from phi 1.3B using layer stacking to double the number of hidden layers. The stacked model was then trained for one epoch on data from tiny-textbooks and tiny-lessons.
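For illustration, here is a minimal sketch of what layer stacking can look like with the transformers library. It assumes a decoder whose blocks live in model.model.layers (as in the stock Phi implementation in current transformers) and uses interleaved duplication, which is one common variant; the card does not specify the exact stacking order or tooling used for this model.

```python
# Hedged sketch of layer stacking: duplicate each decoder block in place
# to double the model's depth. Assumes the stock transformers Phi layout
# (model.model.layers); not the verified recipe used for Omega 2.6B.
import copy

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", torch_dtype=torch.float32
)

stacked = torch.nn.ModuleList()
for layer in model.model.layers:
    stacked.append(layer)                 # original block
    stacked.append(copy.deepcopy(layer))  # duplicated block

model.model.layers = stacked
model.config.num_hidden_layers = len(stacked)

# Assumption: each block's attention keeps a layer_idx for KV caching,
# which must be renumbered after stacking for generation to work.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i
```

A stacked model like this starts from duplicated weights rather than random initialization, which is why a follow-up training run (here, one epoch on the textbook-style data) is needed to let the new depth specialize.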

Training

Training logs for the run are available on Weights & Biases: https://wandb.ai/wing-lian/phi-2x-pt-tiny
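A hedged loading sketch for the published checkpoint follows. It assumes the checkpoint ships custom modeling code (hence trust_remote_code=True); if it loads with stock transformers classes, that flag can be dropped. The prompt is purely illustrative.

```python
# Minimal inference sketch for the published checkpoint, under the
# assumption that custom modeling code is required to load it.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("winglian/omega-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "winglian/omega-3b", trust_remote_code=True
)

prompt = "The water cycle begins when"  # illustrative prompt
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```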
