---
license: apache-2.0
datasets:
- pico-lm/pretokenized-paloma
language:
- en
metrics:
- pico-lm/perplexity
pipeline_tag: text-generation
---

# Pico Decoder Large

**pico-decoder-large** is the largest model (570M) in the current `pico-decoder` suite. It is a full-scale research model designed for in-depth interpretability studies of transformer learning. Trained with [`pico-train`](https://github.com/pico-lm) and fully compatible with [`pico-analyze`](https://github.com/pico-lm), it offers rich checkpointing and analytical insight into large-scale LM behavior.

## Model Details

| Field                 | Value                                  |
|-----------------------|----------------------------------------|
| **Architecture**      | Decoder-only transformer (LLaMA-style) |
| **Parameters**        | 570M                                   |
| **Layers**            | 12                                     |
| **Hidden Size**       | 1536                                   |
| **Feed Forward Size** | 6144                                   |
| **Attention Heads**   | 12                                     |
| **Key/Value Heads**   | 4                                      |

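For quick experimentation outside of `pico-train`, the checkpoint can be loaded with the Hugging Face `transformers` API. The snippet below is a minimal sketch: it assumes the weights are published on the Hub as `pico-lm/pico-decoder-large` and that the repository ships any custom modeling code (hence `trust_remote_code=True`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repository id for this model card.
repo_id = "pico-lm/pico-decoder-large"

# trust_remote_code=True is only needed if the repo ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "Interpretability research asks"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
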
## Training

- **Dataset**: [`pretokenized-dolma`](https://github.com/pico-lm)
- **Training steps**: 200,000
- **Batch size**: 1024
- **Sequence length**: 2048
- **Optimizer**: AdamW
- **Learning rate schedule**: Linear decay with warmup
- **Compute**: 16 A100-SXM4-80GB GPUs

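Taken together, these settings imply a token budget on the order of 400B tokens. The back-of-the-envelope check below assumes the batch size is counted in sequences and that every step processes a full batch:

```python
# Rough token budget implied by the training settings above
# (assumes the batch size is measured in sequences).
steps = 200_000
batch_size = 1024        # sequences per optimizer step (assumed)
sequence_length = 2_048  # tokens per sequence

total_tokens = steps * batch_size * sequence_length
print(f"{total_tokens:,} tokens (~{total_tokens / 1e9:.0f}B)")
# -> 419,430,400,000 tokens (~419B)
```
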
## Evaluation and Analysis

This model supports fine-grained analysis using [pico-analyze](https://github.com/pico-lm), which enables researchers to understand how learning unfolds over training, even at very small scales.

We also evaluate the model's perplexity on the [pico-paloma-tinsy](https://huggingface.co/datasets/pico-lm/pretokenized-paloma-tinsy) dataset.

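The snippet below sketches how such a perplexity evaluation could be reproduced with `transformers` and `datasets`. It is illustrative rather than the official `pico-lm/perplexity` metric: the repository ids, the `val` split name, and the `input_ids` column are assumptions about how the pretokenized data is stored.

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM

# Repository ids, split name, and column name are assumptions for illustration.
model = AutoModelForCausalLM.from_pretrained(
    "pico-lm/pico-decoder-large", trust_remote_code=True
)
model.eval()

data = load_dataset("pico-lm/pretokenized-paloma-tinsy", split="val")

total_nll, n_examples = 0.0, 0
with torch.no_grad():
    for example in data.select(range(min(100, len(data)))):  # small sample for the sketch
        input_ids = torch.tensor(example["input_ids"]).unsqueeze(0)
        # With labels == input_ids, the model returns the mean next-token loss.
        loss = model(input_ids=input_ids, labels=input_ids).loss
        total_nll += loss.item()
        n_examples += 1

print("perplexity:", math.exp(total_nll / n_examples))
```
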
## Citation

```bibtex
@software{pico2025,
    author = {Diehl Martinez, Richard},
    title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
    year = {2025},
    url = {https://github.com/pico-lm}
}
```