---
license: mit
---

# Implementing Transformer from Scratch: A Step-by-Step Guide

This repository provides a detailed guide and implementation of the Transformer architecture from the ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762) paper. The implementation focuses on understanding each component through clear code, comprehensive testing, and visual aids.

For implementations of more recent architectural innovations from DeepSeek, see the **Related Implementations** section.

## Table of Contents
1. [Summary and Key Insights](#summary-and-key-insights)
2. [Implementation Details](#implementation-details)
   - [Embedding and Positional Encoding](#embedding-and-positional-encoding)
   - [Transformer Attention](#transformer-attention)
   - [Feed-Forward Network (FFN)](#feed-forward-network-ffn)
   - [Transformer Decoder](#transformer-decoder)
   - [Encoder-Decoder Stack](#encoder-decoder-stack)
   - [Full Transformer](#full-transformer)
3. [Testing](#testing)
4. [Visualizations](#visualizations)

## Quick Start
View the complete implementation and tutorial in the [Jupyter notebook](Transformer_Implementation_Tutorial.ipynb).

## Summary and Key Insights

### Paper Reference
- ["Attention Is All You Need"](https://arxiv.org/abs/1706.03762) (Vaswani et al., 2017)
- Key sections: 
  - 3.1: Encoder and Decoder Stacks
  - 3.2: Attention Mechanism
  - 3.3: Position-wise Feed-Forward Networks
  - 3.4: Embeddings and Softmax
  - 3.5: Positional Encoding
  - 5.4: Regularization (dropout strategy)

### Implementation Strategy
Breaking down the architecture into manageable pieces and gradually adding complexity:

1. Start with foundational components:
   - Embedding + Positional Encoding
   - Single-head self-attention
   
2. Build up attention mechanism:
   - Extend to multi-head attention
   - Add cross-attention capability
   - Implement attention masking
   
3. Construct larger components:
   - Encoder (self-attention + FFN)
   - Decoder (masked self-attention + cross-attention + FFN)
   
4. Combine into final architecture:
   - Encoder-Decoder stack
   - Full Transformer with input/output layers

### Development Tips
1. Visualization and Planning:
   - Draw out tensor dimensions on paper
   - Sketch attention patterns and masks
   - Map each component back to paper equations
   - This helps catch dimension mismatches early!

2. Dimension Cheat Sheet (see the shape-check sketch after this list):
   - Input tokens: [batch_size, seq_len]
   - Embeddings: [batch_size, seq_len, d_model]
   - Attention matrices: [batch_size, num_heads, seq_len, seq_len]
   - FFN hidden layer: [batch_size, seq_len, d_ff]
   - Output logits: [batch_size, seq_len, vocab_size]

3. Common Pitfalls:
   - Forgetting to scale dot products by √d_k
   - Incorrect mask dimensions or application
   - Missing residual connections
   - Wrong order of layer norm and dropout
   - Tensor dimension mismatches in attention
   - Not handling padding properly

4. Performance Considerations:
   - Memory usage scales with sequence length squared
   - Attention computation is O(n²) with sequence length
   - Balance between d_model and num_heads
   - Trade-off between model size and batch size
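
As a quick sanity check of the dimension cheat sheet above, the following throwaway snippet pushes dummy tensors through standalone layers; the sizes are illustrative and not tied to the notebook's configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the notebook may use different values.
batch_size, seq_len, d_model, num_heads, d_ff, vocab_size = 2, 10, 512, 8, 2048, 1000

tokens = torch.randint(0, vocab_size, (batch_size, seq_len))        # [batch_size, seq_len]
embeddings = nn.Embedding(vocab_size, d_model)(tokens)              # [batch_size, seq_len, d_model]
attn_scores = torch.randn(batch_size, num_heads, seq_len, seq_len)  # [batch_size, num_heads, seq_len, seq_len]
ffn_hidden = nn.Linear(d_model, d_ff)(embeddings)                   # [batch_size, seq_len, d_ff]
logits = nn.Linear(d_model, vocab_size)(embeddings)                 # [batch_size, seq_len, vocab_size]

assert logits.shape == (batch_size, seq_len, vocab_size)
```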

## Implementation Details

### Embedding and Positional Encoding
This implements the input embeddings (Section 3.4) and positional encoding (Section 3.5) of the paper. Key points:
- Embedding dimension can differ from model dimension (using projection)
- Positional encoding uses sine and cosine functions
- Scale embeddings by √d_model
- Apply dropout to the sum of embeddings and positional encodings

Implementation tips:
- Use `nn.Embedding` for token embeddings
- Store scaling factor as float during initialization
- Remember to expand positional encoding for batch dimension
- Add assertion for input dtype (should be torch.long)
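
A minimal sketch of how this component could be put together, assuming a sinusoidal table precomputed up to a fixed `max_len`; the class name and arguments are illustrative rather than the notebook's exact API.

```python
import math
import torch
import torch.nn as nn

class EmbeddingWithPositionalEncoding(nn.Module):
    """Token embedding + sinusoidal positional encoding (Sections 3.4/3.5). Illustrative sketch."""
    def __init__(self, vocab_size, d_model, max_len=5000, dropout=0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.scale = math.sqrt(d_model)                      # scale embeddings by sqrt(d_model)
        self.dropout = nn.Dropout(dropout)

        position = torch.arange(max_len).unsqueeze(1)                            # [max_len, 1]
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)                             # even dimensions: sine
        pe[:, 1::2] = torch.cos(position * div_term)                             # odd dimensions: cosine
        self.register_buffer("pe", pe.unsqueeze(0))                              # [1, max_len, d_model]

    def forward(self, tokens):
        assert tokens.dtype == torch.long, "token ids must be torch.long"
        x = self.embedding(tokens) * self.scale                                  # [batch, seq_len, d_model]
        x = x + self.pe[:, : tokens.size(1)]                                     # broadcast over the batch
        return self.dropout(x)
```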

### Transformer Attention
Implements the core attention mechanism from Section 3.2.1. Formula: Attention(Q,K,V) = softmax(QK^T/√d_k)V

Key points:
- Supports both self-attention and cross-attention
- Handles different sequence lengths for encoder/decoder
- Scales dot products by 1/√d_k
- Applies attention masking before softmax

Implementation tips:
- Use separate Q,K,V projections
- Handle masking through addition (not masked_fill)
- Remember to reshape for multi-head attention
- Keep track of tensor dimensions at each step
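
A shape-annotated sketch of multi-head attention with additive masking, following the tips above; the projection layout and names are illustrative and may differ from the notebook's version.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Multi-head attention (Section 3.2) supporting self- and cross-attention. Illustrative sketch."""
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.num_heads, self.d_k = num_heads, d_model // num_heads
        self.w_q = nn.Linear(d_model, d_model)   # separate Q, K, V projections
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)   # output projection

    def forward(self, query, key, value, mask=None):
        # query: [batch, q_len, d_model]; key/value: [batch, kv_len, d_model] (kv_len may differ).
        batch = query.size(0)

        def split_heads(x, proj):
            # [batch, len, d_model] -> [batch, num_heads, len, d_k]
            return proj(x).view(batch, -1, self.num_heads, self.d_k).transpose(1, 2)

        q, k, v = split_heads(query, self.w_q), split_heads(key, self.w_k), split_heads(value, self.w_v)

        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)   # [batch, heads, q_len, kv_len]
        if mask is not None:
            scores = scores + mask                               # additive mask: 0 = keep, -inf = block
        attn = torch.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).contiguous().view(batch, -1, self.num_heads * self.d_k)
        return self.w_o(out)                                     # [batch, q_len, d_model]
```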

### Feed-Forward Network (FFN)
Implements the position-wise feed-forward network from Section 3.3: FFN(x) = max(0, xW₁ + b₁)W₂ + b₂

Key points:
- Two linear transformations with ReLU in between
- Inner layer dimension (d_ff) is typically 2048
- Applied identically to each position

Implementation tips:
- Use nn.Linear for transformations
- Remember to include bias terms
- Position-wise means same transformation for each position
- Dimension flow: d_model → d_ff → d_model
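
A minimal sketch of the position-wise FFN (names illustrative):

```python
import torch.nn as nn

class PositionWiseFFN(nn.Module):
    """FFN(x) = max(0, x W1 + b1) W2 + b2 (Section 3.3). Illustrative sketch."""
    def __init__(self, d_model, d_ff=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),   # d_model -> d_ff (bias included by default)
            nn.ReLU(),
            nn.Linear(d_ff, d_model),   # d_ff -> d_model
        )

    def forward(self, x):
        # The same two-layer MLP is applied independently at every position.
        return self.net(x)
```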

### Transformer Decoder
Implements the decoder layer from Section 3.1, with three sub-layers:
- Masked multi-head self-attention
- Multi-head cross-attention with encoder output
- Position-wise feed-forward network

Key points:
- Self-attention uses causal masking
- Cross-attention allows attending to all encoder outputs
- Each sub-layer followed by residual connection and layer normalization

Key implementation detail for causal masking:
- Create causal mask using upper triangular matrix:
 ```python
 mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1)
 mask = mask.masked_fill(mask == 1, float('-inf'))
 ```

This creates a pattern where position i can only attend to positions ≤ i. Using `-inf` ensures zero attention to future positions after the softmax. Visualization of the mask for `seq_len=5`:\
 [[0, -inf, -inf, -inf, -inf],\
 [0,    0, -inf, -inf, -inf],\
 [0,    0,    0, -inf, -inf],\
 [0,    0,    0,    0, -inf],\
 [0,    0,    0,    0,    0]]


Implementation tips:
- Order of operations matters (masking before softmax)
- Each attention layer has its own projections
- Remember to pass encoder outputs for cross-attention
- Careful with mask dimensions in self- and cross-attention
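
Putting the pieces together, a decoder layer might look like the sketch below. It reuses the hypothetical `MultiHeadAttention` and `PositionWiseFFN` sketches above and applies post-norm residual connections (add, then `LayerNorm`) as in the original paper; the notebook's ordering of norm and dropout may differ.

```python
import torch.nn as nn

class DecoderLayer(nn.Module):
    """One decoder layer (Section 3.1): masked self-attention, cross-attention, FFN. Illustrative sketch."""
    def __init__(self, d_model, num_heads, d_ff, dropout=0.1):
        super().__init__()
        self.self_attn = MultiHeadAttention(d_model, num_heads)    # own projections
        self.cross_attn = MultiHeadAttention(d_model, num_heads)   # own projections
        self.ffn = PositionWiseFFN(d_model, d_ff)
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(3)])
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, causal_mask=None, memory_mask=None):
        # 1) Masked self-attention: position i attends only to positions <= i.
        x = self.norms[0](x + self.dropout(self.self_attn(x, x, x, mask=causal_mask)))
        # 2) Cross-attention: queries from the decoder, keys/values from the encoder output.
        x = self.norms[1](x + self.dropout(self.cross_attn(x, enc_out, enc_out, mask=memory_mask)))
        # 3) Position-wise feed-forward network.
        return self.norms[2](x + self.dropout(self.ffn(x)))
```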

### Encoder-Decoder Stack
Implements the full stack of encoder and decoder layers from Section 3.1.
Key points:
- Multiple encoder and decoder layers (typically 6)
- The final encoder output feeds into every decoder layer's cross-attention
- Maintains residual connections throughout the stack

Implementation tips:
- Use nn.ModuleList for layer stacks
- Share encoder outputs across decoder layers
- Maintain consistent masking throughout
- Handle padding masks separately from causal masks
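
A sketch of the stack under these assumptions: `encoder_layer` is a hypothetical module called as `layer(x, mask=...)`, and `decoder_layer` follows the `DecoderLayer` sketch above. The notebook's interfaces may differ.

```python
import copy
import torch.nn as nn

class EncoderDecoderStack(nn.Module):
    """N encoder layers followed by N decoder layers (Section 3.1). Illustrative sketch."""
    def __init__(self, encoder_layer, decoder_layer, num_layers=6):
        super().__init__()
        self.encoder_layers = nn.ModuleList([copy.deepcopy(encoder_layer) for _ in range(num_layers)])
        self.decoder_layers = nn.ModuleList([copy.deepcopy(decoder_layer) for _ in range(num_layers)])

    def forward(self, src, tgt, src_mask=None, causal_mask=None):
        enc_out = src
        for layer in self.encoder_layers:
            enc_out = layer(enc_out, mask=src_mask)   # assumed encoder-layer signature
        dec_out = tgt
        for layer in self.decoder_layers:
            # Every decoder layer attends to the *final* encoder output.
            dec_out = layer(dec_out, enc_out, causal_mask=causal_mask, memory_mask=src_mask)
        return dec_out
```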

### Full Transformer
Combines all components into the complete architecture:
- Input embeddings for source and target
- Positional encoding
- Encoder-decoder stack
- Final linear and softmax layer

Key points:
- Handles different vocabulary sizes for source/target
- Shifts decoder inputs for teacher forcing
- Projects outputs to target vocabulary size
- Applies log softmax for training stability

Implementation tips:
- Handle start tokens for decoder input
- Maintain separate embeddings for source/target
- Remember to scale embeddings
- Consider sharing embedding weights with output layer
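
Finally, a sketch of how everything might be wired together, reusing the hypothetical components sketched above; the constructor and `forward` signatures are assumptions rather than the notebook's API.

```python
import torch
import torch.nn as nn

class Transformer(nn.Module):
    """Embeddings + positional encoding, encoder-decoder stack, output projection. Illustrative sketch."""
    def __init__(self, src_vocab_size, tgt_vocab_size, encoder_layer, decoder_layer,
                 d_model=512, num_layers=6):
        super().__init__()
        # Separate embeddings for source and target vocabularies (weights could also be tied to the output layer).
        self.src_embed = EmbeddingWithPositionalEncoding(src_vocab_size, d_model)
        self.tgt_embed = EmbeddingWithPositionalEncoding(tgt_vocab_size, d_model)
        self.stack = EncoderDecoderStack(encoder_layer, decoder_layer, num_layers)
        self.generator = nn.Linear(d_model, tgt_vocab_size)   # project to target vocabulary size

    def forward(self, src_tokens, tgt_tokens, src_mask=None, causal_mask=None):
        # tgt_tokens are the target sequence shifted right (start token first) for teacher forcing.
        dec_out = self.stack(self.src_embed(src_tokens), self.tgt_embed(tgt_tokens),
                             src_mask=src_mask, causal_mask=causal_mask)
        return torch.log_softmax(self.generator(dec_out), dim=-1)   # log-softmax for numerical stability
```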

## Testing
Our implementation includes comprehensive tests for each component:

- Shape preservation through layers
- Masking effectiveness
- Attention pattern verification
- Forward/backward pass validation
- Parameter and gradient checks

See the notebook for detailed test implementations and results.
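
As one self-contained example in this spirit, a masking-effectiveness check could look like the following (illustrative, not the notebook's test code):

```python
import torch

def test_causal_mask_blocks_future_positions():
    """After additive masking and softmax, attention to future positions must be zero."""
    seq_len = 5
    mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1)
    mask = mask.masked_fill(mask == 1, float('-inf'))

    scores = torch.randn(seq_len, seq_len) + mask        # additive masking before softmax
    weights = torch.softmax(scores, dim=-1)

    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    assert torch.all(weights[future] == 0), "non-zero attention to a future position"
    assert torch.allclose(weights.sum(dim=-1), torch.ones(seq_len))  # each row is still a distribution

test_causal_mask_blocks_future_positions()
```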

## Visualizations
The implementation includes visualizations of:

- Attention patterns
- Positional encodings
- Masking effects
- Layer connectivity

These visualizations help illustrate the inner workings of the Transformer and verify that the implementation is correct.

For detailed code and interactive examples, please refer to the complete implementation notebook.

## Related Implementations

This repository is part of a series implementing the key architectural innovations from the DeepSeek paper:

1. **[Transformer Implementation Tutorial](https://huggingface.co/datasets/bird-of-paradise/transformer-from-scratch-tutorial)** (this tutorial): A detailed tutorial on implementing the Transformer architecture, with explanations of key components.

2. **[DeepSeek Multi-head Latent Attention](https://huggingface.co/bird-of-paradise/deepseek-mla)**: Implementation of DeepSeek's MLA mechanism for efficient KV cache usage during inference.

3. **[DeepSeek MoE](https://huggingface.co/bird-of-paradise/deepseek-moe)**: Implementation of DeepSeek's Mixture of Experts architecture that enables efficient scaling of model parameters.

Together, these implementations cover the core innovations that power DeepSeek's state-of-the-art performance. By combining the MoE architecture with Multi-head Latent Attention, you can build a complete DeepSeek-style model with improved training efficiency and inference performance.