Scale-Distribution Decoupling: Enabling Stable and Effective Training of Large Language Models
Abstract
Training stability is a persistent challenge in the pre-training of large language models (LLMs), particularly for architectures such as Post-Norm Transformers, which are prone to gradient explosion and dissipation. In this paper, we propose Scale-Distribution Decoupling (SDD), a novel approach that stabilizes training by explicitly decoupling the scale and distribution of the weight matrix in fully-connected layers. SDD applies a normalization mechanism to regulate activations and a learnable scaling vector to maintain well-conditioned gradients, effectively preventing gradient explosion and dissipation. This separation improves optimization efficiency, particularly in deep networks, by ensuring stable gradient propagation. Experimental results demonstrate that our method stabilizes training across various LLM architectures and outperforms existing techniques in different normalization configurations. Furthermore, the proposed method is lightweight and compatible with existing frameworks, making it a practical solution for stabilizing LLM training. Code is available at https://github.com/kaihemo/SDD.
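The abstract describes SDD only at a high level. As a rough, non-authoritative sketch of the idea, the PyTorch snippet below parameterizes a fully-connected layer so that the distribution of its pre-activation is regulated by a normalization step (an RMS-style normalization is assumed here; the paper's exact mechanism may differ) while the magnitude is carried by a separate learnable scaling vector. The class name `SDDLinear` and all hyperparameters are illustrative and not taken from the released code.

```python
# Hypothetical sketch of a scale-distribution decoupled linear layer.
# Assumption: "distribution" is regulated by normalizing the pre-activation
# to unit RMS, and "scale" is a separate learnable per-channel vector.
import torch
import torch.nn as nn


class SDDLinear(nn.Module):
    """Fully-connected layer whose output scale is decoupled from the
    distribution of its normalized pre-activation."""

    def __init__(self, in_features: int, out_features: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.normal_(self.weight, std=in_features ** -0.5)
        # Learnable per-channel scale: controls output magnitude only.
        self.scale = nn.Parameter(torch.ones(out_features))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x @ self.weight.t()                                 # raw pre-activation
        inv_rms = z.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        z_hat = z * inv_rms                                     # distribution: unit RMS
        return self.scale * z_hat                               # scale: learned separately


# Usage: drop-in replacement for nn.Linear inside a Transformer block.
layer = SDDLinear(1024, 4096)
y = layer(torch.randn(2, 16, 1024))
print(y.shape)  # torch.Size([2, 16, 4096])
```

In this reading, gradients flowing through the normalized branch remain bounded regardless of the raw weight magnitude, while the scaling vector absorbs magnitude changes, which is the decoupling the abstract refers to.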
Community
TL;DR: The paper introduces Scale-Distribution Decoupling (SDD), a novel technique to address training stability challenges in LLMs. By separating the scale and distribution of weight matrices in fully-connected layers, SDD prevents gradient explosion and dissipation and improves optimization efficiency across different model architectures. The method is lightweight, compatible with existing frameworks, and experimentally shown to enhance training stability for LLMs, achieving a 1.5x convergence speedup over traditional LLM architectures. Code is available at https://github.com/kaihemo/SDD.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- AdaGC: Improving Training Stability for Large Language Model Pretraining (2025)
- Optimizing Large Language Model Training Using FP4 Quantization (2025)
- Finedeep: Mitigating Sparse Activation in Dense LLMs via Multi-Layer Fine-Grained Experts (2025)
- LESA: Learnable LLM Layer Scaling-Up (2025)
- Peri-LN: Revisiting Layer Normalization in the Transformer Architecture (2025)
- EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition (2025)
- DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs (2025)