---
license: apache-2.0
datasets:
  - pico-lm/pretokenized-paloma
language:
  - en
metrics:
  - pico-lm/perplexity
pipeline_tag: text-generation
---

# Pico Decoder Medium

**pico-decoder-medium** is a 181M-parameter model in the pico-decoder suite, balancing scale and analyzability. Built with `pico-train` and instrumented with `pico-analyze`, it enables detailed studies of layer-wise learning behavior during language model pretraining.
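
The checkpoint can be exercised directly from 🤗 Transformers. A minimal sketch, assuming the repo id `pico-lm/pico-decoder-medium` and that the custom architecture needs `trust_remote_code=True` (both are assumptions, not stated on this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; trust_remote_code is an assumption for the custom pico architecture.
repo_id = "pico-lm/pico-decoder-medium"
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Quick smoke test: generate a short continuation.
prompt = "Language models learn"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```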

πŸ”§ Model Details

| Field                 | Value                                  |
|-----------------------|----------------------------------------|
| **Architecture**      | Decoder-only transformer (LLaMA-style) |
| **Parameters**        | 181M                                   |
| **Layers**            | 12                                     |
| **Hidden Size**       | 768                                    |
| **Feed-Forward Size** | 3072                                   |
| **Attention Heads**   | 12                                     |
| **Key/Value Heads**   | 4                                      |
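
For orientation, the table above maps roughly onto a LLaMA-style configuration as exposed by 🤗 Transformers. This is an illustrative sketch only, not the exact training config; fields not listed on this card (e.g. vocabulary size) are left at their defaults.

```python
from transformers import LlamaConfig

# Approximate LLaMA-style equivalent of the table above; unlisted fields are defaults/assumptions.
config = LlamaConfig(
    hidden_size=768,                # Hidden Size
    intermediate_size=3072,         # Feed-Forward Size
    num_hidden_layers=12,           # Layers
    num_attention_heads=12,         # Attention Heads
    num_key_value_heads=4,          # Key/Value Heads (grouped-query attention)
    max_position_embeddings=2048,   # matches the training sequence length below
)
print(config)
```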

πŸ“š Training

- **Dataset**: `pretokenized-dolma`
- **Training steps**: 200,000
- **Batch size**: 1024
- **Sequence length**: 2048
- **Optimizer**: AdamW
- **Learning rate schedule**: Linear decay with warmup (sketched below)
- **Compute**: 16 A100-SXM4-80GB GPUs
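
A minimal sketch of the optimizer and schedule listed above, in plain PyTorch. The peak learning rate and warmup length are hypothetical placeholders, since neither is stated on this card, and the toy `model` stands in for the real checkpoint:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

total_steps = 200_000   # from the training summary above
warmup_steps = 2_500    # hypothetical: warmup length is not listed on this card
peak_lr = 3e-4          # hypothetical: peak learning rate is not listed on this card

model = torch.nn.Linear(8, 8)  # stand-in module; use the real checkpoint in practice
optimizer = AdamW(model.parameters(), lr=peak_lr)

def linear_warmup_then_decay(step: int) -> float:
    """Scale factor: linear warmup to 1.0, then linear decay to 0.0 at total_steps."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = LambdaLR(optimizer, lr_lambda=linear_warmup_then_decay)

# In the training loop, call scheduler.step() after each optimizer.step().
```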

πŸ“ˆ Evaluation and Analysis

Compatible with `pico-analyze` for introspecting:

- Per-head loss and gradient stats
- Learning saturation across layers
- Token-level memorization dynamics

Evaluated on `pico-paloma-tinsy` using perplexity.
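
As a rough illustration of the perplexity metric (the exponential of the mean token-level cross-entropy), the sketch below scores a toy snippet; the repo id and the snippet standing in for the actual paloma evaluation split are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pico-lm/pico-decoder-medium"  # assumed repo id
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model.eval()

# Toy text standing in for the evaluation data.
text = "Perplexity is the exponential of the average negative log-likelihood per token."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])  # HF causal LMs shift labels internally

perplexity = torch.exp(out.loss)  # exp of mean cross-entropy over predicted tokens
print(f"perplexity: {perplexity.item():.2f}")
```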

πŸ“„ Citation

```bibtex
@software{pico2025,
    author = {Diehl Martinez, Richard},
    title = {Pico: A Lightweight Framework for Studying Language Model Learning Dynamics},
    year = {2025},
    url = {https://github.com/pico-lm}
}
```