---
datasets:
- nampdn-ai/tiny-textbooks
- nampdn-ai/tiny-lessons
language:
- en
---
|

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

# Omega 2.6B

This model was derived from Phi 1.3B using a layer-stacking technique that duplicates the base model's hidden layers, doubling their count; a minimal sketch of the idea is shown below.

The stacked model was then trained for one epoch on the nampdn-ai/tiny-textbooks and nampdn-ai/tiny-lessons datasets.
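
The following sketch only illustrates the layer-stacking idea under stated assumptions: it uses `microsoft/phi-1_5` as the 1.3B base and the `model.layers` attribute layout of the `transformers` Phi implementation. It is not necessarily the exact recipe used to build Omega 2.6B.

```python
# Minimal layer-stacking sketch: duplicate every decoder layer so the stacked
# model has twice as many hidden layers as the base model.
# Assumptions: microsoft/phi-1_5 as the base and the transformers attribute
# path `model.model.layers`; neither is confirmed by this model card.
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")

# Interleave each layer with a deep copy of itself: [0, 0, 1, 1, ..., N-1, N-1].
# Other stacking orders (e.g. appending a full copy of the stack) are possible.
stacked = nn.ModuleList()
for layer in base.model.layers:
    stacked.append(layer)
    stacked.append(copy.deepcopy(layer))

base.model.layers = stacked
base.config.num_hidden_layers = len(stacked)
# Note: per-layer index attributes used for KV caching may also need updating.

# Save the widened model as the starting checkpoint for continued pretraining.
base.save_pretrained("omega-2.6b-init")
```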

# Training

Training logs and metrics for the one-epoch run are available on Weights & Biases: https://wandb.ai/wing-lian/phi-2x-pt-tiny
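
For completeness, here is a hedged sketch of how the two training datasets could be assembled into a single corpus with the `datasets` library. The actual run was built with Axolotl, so the real preprocessing differs, and the `text` column name is an assumption.

```python
# Hedged sketch: combine the two pretraining datasets named in the metadata.
# Assumption: both datasets expose a "text" column and a "train" split; the
# Axolotl preprocessing actually used for this model may differ.
from datasets import concatenate_datasets, load_dataset

textbooks = load_dataset("nampdn-ai/tiny-textbooks", split="train")
lessons = load_dataset("nampdn-ai/tiny-lessons", split="train")

# Keep only the text column so both schemas match before concatenation.
corpus = concatenate_datasets(
    [ds.select_columns(["text"]) for ds in (textbooks, lessons)]
)
print(corpus)
```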