---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
language:
- en
pipeline_tag: text-generation
---

# **Doge-160M checkpoint**

**NOTE: This model is still training; you can find the real-time training logs on Weights & Biases (wandb).**

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/3uyc9a89) 


![wsd_scheduler](./wsd_scheduler.png)

Doge uses the `wsd_scheduler` as its training scheduler, which divides the learning rate schedule into three stages: `warmup`, `stable`, and `decay`. Because the learning rate is held constant during the `stable` stage, training can be resumed on any new dataset from any checkpoint taken in that stage without spikes in the training loss.
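
To make the three stages concrete, here is a minimal sketch of a warmup-stable-decay learning-rate function. The function name, the linear warmup, and the cosine decay shape are illustrative assumptions rather than the exact implementation used to train Doge; the step counts in the example come from the table below.

```python
import math

def wsd_lr(step, max_lr, warmup_steps, stable_steps, decay_steps, min_lr=0.0):
    """Illustrative warmup-stable-decay schedule: returns the LR at a given step."""
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 up to max_lr.
        return max_lr * step / max(1, warmup_steps)
    if step < warmup_steps + stable_steps:
        # Stable: hold max_lr constant; checkpoints saved here can be resumed smoothly.
        return max_lr
    # Decay: anneal from max_lr down to min_lr (cosine shape chosen for illustration).
    progress = min(1.0, (step - warmup_steps - stable_steps) / max(1, decay_steps))
    return min_lr + (max_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * progress))

# Example with the Doge-160M settings listed below (4e-3 peak LR, 2400 warmup steps,
# 19200 stable steps); the 2400 decay steps are an assumption for illustration.
for step in (0, 2400, 12000, 21600):
    print(step, wsd_lr(step, max_lr=4e-3, warmup_steps=2400, stable_steps=19200, decay_steps=2400))
```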

Here are the initial learning rates required to continue training at each checkpoint; a short resumption sketch follows the table below:

- **[Doge-20M](https://huggingface.co/JingzeShi/Doge-20M-checkpoint)**: 8e-3
- **[Doge-60M](https://huggingface.co/JingzeShi/Doge-60M-checkpoint)**: 6e-3
- **[Doge-160M](https://huggingface.co/JingzeShi/Doge-160M-checkpoint)**: 4e-3
- **Doge-320M**: 2e-3

| Model | Learning Rate | Schedule | Warmup Steps | Stable Steps |
|-------|---------------|----------|--------------|--------------|
| Doge-20M | 8e-3 | wsd_scheduler | 800 | 6400 |
| Doge-60M | 6e-3 | wsd_scheduler | 1600 | 12800 |
| Doge-160M | 4e-3 | wsd_scheduler | 2400 | 19200 |
| Doge-320M | 2e-3 | wsd_scheduler | 3200 | 25600 |
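
Assuming the Doge checkpoints load through the standard `transformers` auto classes (the architecture is custom, so `trust_remote_code=True` is assumed), continued pre-training from a stable-stage checkpoint might look like the sketch below. The dataset, batch size, step count, and output path are placeholders, and `constant_with_warmup` is only a stand-in for the stable stage of `wsd_scheduler`.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

repo = "JingzeShi/Doge-160M-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # assumption: reuse EOS for padding if no pad token is set

# Tiny placeholder corpus; replace with your own tokenized pre-training data.
dataset = Dataset.from_dict({"text": ["Doge is a small language model.",
                                      "Training continues from a stable-stage checkpoint."]})
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512), batched=True)

args = TrainingArguments(
    output_dir="doge-160m-continued",          # placeholder
    learning_rate=4e-3,                        # initial LR recommended above for Doge-160M
    lr_scheduler_type="constant_with_warmup",  # stand-in for the stable stage
    warmup_steps=100,                          # short re-warmup, illustrative only
    per_device_train_batch_size=2,             # placeholder
    max_steps=10,                              # placeholder
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```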