Think before you speak: Training Language Models With Pause Tokens
Abstract
Language models generate responses by producing a series of tokens in immediate succession: the (K+1)^{th} token is an outcome of manipulating K hidden vectors per layer, one vector per preceding token. What if instead we were to let the model manipulate, say, K+10 hidden vectors before it outputs the (K+1)^{th} token? We operationalize this idea by performing training and inference on language models with a (learnable) pause token, a sequence of which is appended to the input prefix. We then delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer. We empirically evaluate pause-training on decoder-only models of 1B and 130M parameters with causal pretraining on C4, and on downstream tasks covering reasoning, question-answering, general understanding and fact recall. Our main finding is that inference-time delays show gains when the model is both pretrained and finetuned with delays. For the 1B model, we witness gains on 8 of 9 tasks, most prominently a gain of 18% EM score on the QA task of SQuAD, 8% on CommonSenseQA, and 1% accuracy on the reasoning task of GSM8k. Our work raises a range of conceptual and practical future research questions on making delayed next-token prediction a widely applicable new paradigm.
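The sketch below illustrates the inference-time mechanism described in the abstract: append a sequence of pause tokens to the input prefix and read the model's output only after the last pause token. It is a minimal illustration, not the authors' implementation; the model name, the number of pause tokens, and the use of a Hugging Face tokenizer/model with a "<pause>" special token are all assumptions for the example (the paper's own 1B/130M models are pause-pretrained and pause-finetuned, which this off-the-shelf model is not).

```python
# Minimal sketch of pause-token inference (assumptions: a HF causal LM,
# a "<pause>" special token with a learnable embedding, and a hypothetical
# choice of 10 appended pause tokens). Not the authors' code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder; the paper uses 1B/130M decoder-only models pretrained on C4
NUM_PAUSES = 10       # number of appended pause tokens (illustrative value)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Register a learnable pause token and grow the embedding matrix accordingly.
tokenizer.add_special_tokens({"additional_special_tokens": ["<pause>"]})
model.resize_token_embeddings(len(tokenizer))
pause_id = tokenizer.convert_tokens_to_ids("<pause>")

prompt = "Q: What is 17 + 25? A:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Append NUM_PAUSES pause tokens to the prefix, so the first answer token is
# conditioned on K + NUM_PAUSES hidden vectors per layer instead of K.
pause_ids = torch.full((1, NUM_PAUSES), pause_id, dtype=torch.long)
delayed_ids = torch.cat([input_ids, pause_ids], dim=1)

with torch.no_grad():
    out = model.generate(delayed_ids, max_new_tokens=20)

# Extract outputs only after the last pause token, i.e. ignore the delayed prefix.
answer = tokenizer.decode(out[0, delayed_ids.shape[1]:], skip_special_tokens=True)
print(answer)
```

As the paper's main finding notes, such delays help only when the model has also been pretrained and finetuned with pause tokens; appending pauses to an ordinary model at inference time alone is not expected to yield gains.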
Community
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Compressed Chain of Thought: Efficient Reasoning Through Dense Representations (2024)
- Scaling Embedding Layers in Language Models (2025)
- Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs (2024)
- SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator (2024)
- Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference (2025)
- Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning (2025)
- Large Concept Models: Language Modeling in a Sentence Representation Space (2024)