Pclanglais committed on
Commit f55376e · verified · 1 parent: 88bb560

Upload folder using huggingface_hub

Files changed (1)
  1. transformer.py +2090 -0
transformer.py ADDED
@@ -0,0 +1,2090 @@
1
+ # Copyright (C) 2024 Habana Labs, Ltd. an Intel Company.
2
+ # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
3
+
4
+ """Transformer."""
5
+ from contextlib import nullcontext
6
+ import math
7
+ import numpy as np
8
+ import torch
9
+ import torch.nn.functional as F
10
+ from typing import Optional
11
+
12
+ from megatron import get_timers, get_args, get_retro_args, core, get_num_microbatches
13
+ from .module import MegatronModule
14
+ from megatron.core import parallel_state, tensor_parallel, mpu
15
+ from megatron.core.enums import ModelType
16
+ from megatron.model import LayerNorm, RMSNorm
17
+ from megatron.model.enums import AttnMaskType, LayerType, AttnType
18
+ from megatron.model.fused_softmax import FusedScaleMaskSoftmax
19
+ from megatron.model.fused_bias_gelu import bias_gelu_impl
20
+ from megatron.model.rotary_pos_embedding import apply_rotary_pos_emb
21
+ from megatron.model.utils import attention_mask_func, openai_gelu, erf_gelu
22
+ import deepspeed
23
+ from deepspeed.moe.layer import MoE
24
+ from deepspeed.accelerator import get_accelerator
25
+
26
+ try:
27
+ from deepspeed.sequence.layer import DistributedAttention
28
+ dist_attn_supported = True
29
+ except ImportError:
30
+ dist_attn_supported = False
31
+
32
+ try:
33
+ from einops import rearrange
34
+ except ImportError:
35
+ rearrange = None
36
+
37
+ try:
38
+ # FlashAttention (1.x)
39
+ from flash_attn.flash_attn_interface import flash_attn_unpadded_func
40
+ except ImportError:
41
+ flash_attn_unpadded_func = None
42
+
43
+ try:
44
+ from flash_attn.flash_attn_triton import flash_attn_func
45
+ except ImportError:
46
+ flash_attn_func = None
47
+
48
+ try:
49
+ # FlashAttention-2
50
+ from flash_attn.flash_attn_interface import flash_attn_varlen_func
51
+ except ImportError:
52
+ flash_attn_varlen_func = None
53
+
54
+ FlashAttentionBuilder = get_accelerator().get_op_builder("FlashAttentionBuilder")
55
+ flash_attn_builder = None
56
+
57
+
58
+ """ We use the following notation throughout this file:
59
+ h: hidden size
60
+ n: number of attention heads
61
+ p: number of model parallel partitions
62
+ np: n/p
63
+ hp: h/p
64
+ hn: h/n
65
+ b: batch size
66
+ s: sequence length
67
+ l: number of layers
68
+ Transformer takes input of size [s, b, h] and returns a
69
+ tensor of the same size. We use the following arguments:
70
+ hyperparameters: transformer hyperparameters
71
+ """
72
+
73
+ class DropPath(MegatronModule):
74
+ """Drop paths (Stochastic Depth) per sample
75
+ (when applied in main path of residual blocks).
76
+ """
77
+
78
+ def __init__(self, drop_prob=0.):
79
+ super(DropPath, self).__init__()
80
+ self.drop_prob = drop_prob
81
+
82
+ def forward(self, hidden_state):
83
+ if self.drop_prob == 0. or not self.training:
84
+ return hidden_state
85
+ keep_prob = 1 - self.drop_prob
86
+ # work with diff dim tensors, not just 2D ConvNets
87
+ # hidden_state: [s, b, h]
88
+ shape = (1,) + (hidden_state.shape[1],) + (1,) * (hidden_state.ndim - 2)
89
+ random_tensor = keep_prob + \
90
+ torch.rand(shape, dtype=hidden_state.dtype, device=hidden_state.device)
91
+ random_tensor.floor_() # binarize
92
+ output = hidden_state.div(keep_prob) * random_tensor
93
+ return output
94
+
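DropPath above zeroes out whole samples along the batch dimension of an [s, b, h] tensor and rescales the survivors by 1/keep_prob, so the expected activation is unchanged. A minimal standalone sketch of the same idea, assuming the [s, b, h] layout and no Megatron dependencies:

import torch

def drop_path(x: torch.Tensor, drop_prob: float, training: bool) -> torch.Tensor:
    # x: [s, b, h]; drops whole samples along the batch axis.
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast over sequence and hidden dims.
    mask_shape = (1, x.shape[1]) + (1,) * (x.ndim - 2)
    mask = torch.floor(keep_prob + torch.rand(mask_shape, dtype=x.dtype, device=x.device))
    return x.div(keep_prob) * mask

x = torch.ones(4, 3, 8)            # [s=4, b=3, h=8]
y = drop_path(x, drop_prob=0.5, training=True)
print(y[0, :, 0])                  # each sample is either 0.0 or 2.0 (= 1/keep_prob)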
95
+ class ParallelMLP(MegatronModule):
96
+ """MLP.
97
+
98
+ MLP will take the input with h hidden state, project it to 4*h
99
+ hidden dimension, perform nonlinear transformation, and project the
100
+ state back into h hidden dimension.
101
+ """
102
+
103
+ def __init__(self, config, moe=False, enable_expert_tensor_parallelism=False):
104
+ super(ParallelMLP, self).__init__()
105
+ args = get_args()
106
+
107
+ self.add_bias = config.add_bias_linear
108
+
109
+ ffn_hidden_size = config.ffn_hidden_size
110
+ if config.gated_linear_unit:
111
+ ffn_hidden_size *= 2
112
+
113
+ # Project to 4h. If using swiglu double the output width, see https://arxiv.org/pdf/2002.05202.pdf
114
+ self.dense_h_to_4h = tensor_parallel.ColumnParallelLinear(
115
+ config.hidden_size,
116
+ ffn_hidden_size,
117
+ config=config,
118
+ init_method=config.init_method,
119
+ bias=self.add_bias,
120
+ gather_output=False,
121
+ skip_bias_add=True,
122
+ moe=moe,
123
+ enable_expert_tensor_parallelism=enable_expert_tensor_parallelism
124
+ )
125
+
126
+ self.bias_gelu_fusion = False
127
+ self.activation_func = None
128
+ self.swiglu = args.swiglu
129
+
130
+ if args.openai_gelu:
131
+ self.activation_func = openai_gelu
132
+ elif args.onnx_safe:
133
+ self.activation_func = erf_gelu
134
+ elif args.swiglu:
135
+ def swiglu(x):
136
+ x = torch.chunk(x, 2, dim=-1)
137
+ return F.silu(x[0]) * x[1]
138
+ self.activation_func = swiglu
139
+ elif args.squared_relu:
140
+ def squared_relu(x):
141
+ return torch.pow(F.relu(x), 2)
142
+ self.activation_func = squared_relu
143
+ else:
144
+ self.bias_gelu_fusion = args.bias_gelu_fusion
145
+ self.activation_func = F.gelu
146
+
147
+ # Project back to h.
148
+ self.dense_4h_to_h = tensor_parallel.RowParallelLinear(
149
+ config.ffn_hidden_size,
150
+ config.hidden_size,
151
+ config=config,
152
+ init_method=config.output_layer_init_method,
153
+ bias=self.add_bias,
154
+ input_is_parallel=True,
155
+ moe=moe,
156
+ enable_expert_tensor_parallelism=enable_expert_tensor_parallelism
157
+ )
158
+
159
+ def forward(self, hidden_states):
160
+
161
+ # [s, b, 4hp]
162
+ intermediate_parallel, bias_parallel = self.dense_h_to_4h(hidden_states)
163
+
164
+ if self.bias_gelu_fusion:
165
+ assert self.add_bias is True
166
+ # The DeepSpeed FLOPS profiler temporarily substitutes functions like F.gelu to calculate the throughput
167
+ assert hasattr(self, "__flops__") or self.activation_func == F.gelu
168
+ intermediate_parallel = bias_gelu_impl(intermediate_parallel, bias_parallel)
169
+ else:
170
+ if bias_parallel is not None:
171
+ intermediate_parallel = intermediate_parallel + bias_parallel
172
+ intermediate_parallel = self.activation_func(intermediate_parallel)
173
+
174
+ # [s, b, h]
175
+ output, output_bias = self.dense_4h_to_h(intermediate_parallel)
176
+ return output, output_bias
177
+
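When args.swiglu is set, dense_h_to_4h produces a doubled-width projection that the swiglu closure splits in two: one half is gated by SiLU and multiplied with the other, following the GLU-variants paper linked above. A self-contained sketch of just that activation, with hypothetical sizes and no tensor parallelism:

import torch
import torch.nn.functional as F

def swiglu(x: torch.Tensor) -> torch.Tensor:
    # Split the doubled projection in half along the hidden dimension
    # and gate one half with SiLU, as in ParallelMLP above.
    a, b = torch.chunk(x, 2, dim=-1)
    return F.silu(a) * b

hidden = torch.randn(5, 2, 16)                  # [s, b, h]
w_in = torch.nn.Linear(16, 2 * 64, bias=False)  # doubled ffn width for the gate
w_out = torch.nn.Linear(64, 16, bias=False)
out = w_out(swiglu(w_in(hidden)))               # back to [s, b, h]
print(out.shape)                                # torch.Size([5, 2, 16])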
178
+ class SwitchMLP(MegatronModule):
179
+ """
180
+ Routes input to one of N MLP "experts"
181
+ """
182
+ def __init__(self, config):
183
+ super(SwitchMLP, self).__init__()
184
+ args = get_args()
185
+ self.router = torch.nn.Linear(config.hidden_size, args.num_experts_switch)
186
+ self.experts = torch.nn.ModuleList()
187
+ for i in range(args.num_experts_switch):
188
+ self.experts.append(ParallelMLP(config))
189
+
190
+ def forward(self, hidden_states):
191
+ # hidden_states: [s, b, h]
192
+ s = hidden_states.size(0)
193
+ b = hidden_states.size(1)
194
+ h = hidden_states.size(2)
195
+ route = self.router(hidden_states)
196
+ route = torch.nn.functional.softmax(route, dim=2)
197
+ max_prob, max_ind = torch.max(route, dim=2)
198
+ max_prob = torch.unsqueeze(max_prob, 2) # [s b 1]
199
+
200
+ # TODO (rprenger): this could be made easier to read
201
+ # Converting [s, b, h] to [s*b, h].
202
+ # Each vector could be routed differently
203
+ hidden_states = hidden_states.view(-1, hidden_states.size(2)) # [s*b h]
204
+ max_prob = max_prob.view(-1, max_prob.size(2)) # [s*b 1]
205
+ max_ind = max_ind.view(-1) # [s*b]
206
+
207
+ output_total = torch.empty_like(hidden_states)
208
+ output_bias_total = torch.empty_like(hidden_states)
209
+ # TODO (rprenger): this processes each expert serially, but it could be parallelized
210
+
211
+ for expert_num, expert in enumerate(self.experts):
212
+ local_indices = (max_ind == expert_num).nonzero()
213
+ hidden = hidden_states[local_indices,:]
214
+ output, output_bias = expert(hidden)
215
+ output_bias = output_bias.expand_as(output)
216
+ output_total[local_indices,:] = output
217
+ output_bias_total[local_indices,:] = output_bias
218
+
219
+ output_total = output_total*max_prob
220
+ output_bias_total = output_bias_total*max_prob
221
+ output_total = output_total.view(s, b, h)
222
+ output_bias_total = output_bias_total.view(s, b, h)
223
+
224
+ return output_total, output_bias_total
225
+
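SwitchMLP performs top-1 routing: softmax over experts, pick the argmax per token, run each expert on its slice of tokens, and scale the expert output by the winning probability. A minimal sketch of the same routing with plain nn.Linear experts (illustrative only; the real class uses ParallelMLP experts and serial dispatch as noted in the TODO above):

import torch
import torch.nn.functional as F

s, b, h, n_experts = 4, 2, 8, 3
x = torch.randn(s, b, h)
router = torch.nn.Linear(h, n_experts)
experts = torch.nn.ModuleList([torch.nn.Linear(h, h) for _ in range(n_experts)])

probs = F.softmax(router(x), dim=-1)           # [s, b, n_experts]
top_p, top_i = probs.max(dim=-1)               # [s, b]
flat_x, flat_i, flat_p = x.view(-1, h), top_i.view(-1), top_p.view(-1, 1)

out = torch.empty_like(flat_x)
for e, expert in enumerate(experts):
    idx = (flat_i == e).nonzero(as_tuple=True)[0]
    if idx.numel():
        out[idx] = expert(flat_x[idx])
out = (out * flat_p).view(s, b, h)             # scale by the routing probability
print(out.shape)                               # torch.Size([4, 2, 8])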
226
+
227
+ class CoreAttention(MegatronModule):
228
+
229
+ def __init__(self, layer_number, config,
230
+ attn_mask_type=AttnMaskType.padding):
231
+ super(CoreAttention, self).__init__()
232
+ self.fp16 = config.fp16
233
+ self.bf16 = config.bf16
234
+
235
+ self.apply_query_key_layer_scaling = config.apply_query_key_layer_scaling
236
+ self.attention_softmax_in_fp32 = config.attention_softmax_in_fp32
237
+ if self.apply_query_key_layer_scaling:
238
+ self.attention_softmax_in_fp32 = True
239
+ self.layer_number = max(1, layer_number)
240
+ self.attn_mask_type = attn_mask_type
241
+ self.sequence_parallel = config.sequence_parallel
242
+
243
+ projection_size = config.kv_channels * config.num_attention_heads
244
+
245
+ # Per attention head and per partition values.
246
+ seq_parallel_world_size = 1
247
+ if parallel_state.sequence_parallel_is_initialized():
248
+ seq_parallel_world_size = parallel_state.get_sequence_parallel_world_size()
249
+ world_size = seq_parallel_world_size if seq_parallel_world_size > 1 else parallel_state.get_tensor_model_parallel_world_size()
250
+
251
+ self.hidden_size_per_partition = core.utils.divide(projection_size,
252
+ world_size)
253
+ self.hidden_size_per_attention_head = core.utils.divide(
254
+ projection_size, config.num_attention_heads)
255
+ self.num_attention_heads_per_partition = core.utils.divide(
256
+ config.num_attention_heads, world_size)
257
+
258
+ coeff = None
259
+ self.norm_factor = math.sqrt(self.hidden_size_per_attention_head)
260
+ if self.apply_query_key_layer_scaling:
261
+ coeff = self.layer_number
262
+ self.norm_factor *= coeff
263
+
264
+ self.scale_mask_softmax = FusedScaleMaskSoftmax(
265
+ self.fp16, self.bf16,
266
+ self.attn_mask_type,
267
+ config.masked_softmax_fusion,
268
+ attention_mask_func,
269
+ self.attention_softmax_in_fp32,
270
+ coeff)
271
+
272
+ # Dropout. Note that for a single iteration, this layer will generate
273
+ # different outputs on different numbers of parallel partitions, but
274
+ # on average it should not be partition dependent.
275
+ self.attention_dropout = torch.nn.Dropout(config.attention_dropout)
276
+
277
+ def forward(self, query_layer, key_layer,
278
+ value_layer, attention_mask):
279
+
280
+ # ===================================
281
+ # Raw attention scores. [b, np, s, s]
282
+ # ===================================
283
+
284
+ # [b, np, sq, sk]
285
+ output_size = (query_layer.size(1),
286
+ query_layer.size(2),
287
+ query_layer.size(0),
288
+ key_layer.size(0))
289
+
290
+ # [sq, b, np, hn] -> [sq, b * np, hn]
291
+ query_layer = query_layer.view(output_size[2],
292
+ output_size[0] * output_size[1], -1)
293
+ # [sk, b, np, hn] -> [sk, b * np, hn]
294
+ key_layer = key_layer.view(output_size[3],
295
+ output_size[0] * output_size[1], -1)
296
+
297
+ # preallocating input tensor: [b * np, sq, sk]
298
+ matmul_input_buffer = parallel_state.get_global_memory_buffer().get_tensor(
299
+ (output_size[0]*output_size[1], output_size[2], output_size[3]),
300
+ query_layer.dtype, "mpu")
301
+
302
+ # Raw attention scores. [b * np, sq, sk]
303
+ matmul_result = torch.baddbmm(
304
+ matmul_input_buffer,
305
+ query_layer.transpose(0, 1), # [b * np, sq, hn]
306
+ key_layer.transpose(0, 1).transpose(1, 2), # [b * np, hn, sk]
307
+ beta=0.0, alpha=(1.0/self.norm_factor))
308
+
309
+ # change view to [b, np, sq, sk]
310
+ attention_scores = matmul_result.view(*output_size)
311
+
312
+ # ===========================
313
+ # Attention probs and dropout
314
+ # ===========================
315
+
316
+ # attention scores and attention mask [b, np, sq, sk]
317
+ attention_probs = self.scale_mask_softmax(attention_scores,
318
+ attention_mask)
319
+
320
+ # This is actually dropping out entire tokens to attend to, which might
321
+ # seem a bit unusual, but is taken from the original Transformer paper.
322
+ if not self.sequence_parallel:
323
+ with tensor_parallel.get_cuda_rng_tracker().fork():
324
+ attention_probs = self.attention_dropout(attention_probs)
325
+ else:
326
+ attention_probs = self.attention_dropout(attention_probs)
327
+
328
+ # =========================
329
+ # Context layer. [sq, b, hp]
330
+ # =========================
331
+
332
+ # value_layer -> context layer.
333
+ # [sk, b, np, hn] --> [b, np, sq, hn]
334
+
335
+ # context layer shape: [b, np, sq, hn]
336
+ output_size = (value_layer.size(1),
337
+ value_layer.size(2),
338
+ query_layer.size(0),
339
+ value_layer.size(3))
340
+
341
+ # change view [sk, b * np, hn]
342
+ value_layer = value_layer.view(value_layer.size(0),
343
+ output_size[0] * output_size[1], -1)
344
+
345
+ # change view [b * np, sq, sk]
346
+ attention_probs = attention_probs.view(output_size[0] * output_size[1],
347
+ output_size[2], -1)
348
+
349
+ # matmul: [b * np, sq, hn]
350
+ context_layer = torch.bmm(attention_probs, value_layer.transpose(0, 1))
351
+
352
+ # change view [b, np, sq, hn]
353
+ context_layer = context_layer.view(*output_size)
354
+
355
+ # [b, np, sq, hn] --> [sq, b, np, hn]
356
+ context_layer = context_layer.permute(2, 0, 1, 3).contiguous()
357
+
358
+ # [sq, b, np, hn] --> [sq, b, hp]
359
+ new_context_layer_shape = context_layer.size()[:-2] + \
360
+ (self.hidden_size_per_partition,)
361
+ context_layer = context_layer.view(*new_context_layer_shape)
362
+
363
+ return context_layer
364
+
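CoreAttention folds batch and heads together and uses torch.baddbmm with alpha = 1/norm_factor, so the scaling is fused into the matmul; the result is ordinary scaled dot-product attention. A small equivalence check under simplified assumptions (no mask, no dropout, no query-key layer scaling), using torch.nn.functional.scaled_dot_product_attention from PyTorch 2.x as the reference:

import math
import torch

sq, sk, b, np_, hn = 5, 5, 2, 3, 8
q = torch.randn(sq, b, np_, hn)
k = torch.randn(sk, b, np_, hn)
v = torch.randn(sk, b, np_, hn)

# baddbmm path: [b*np, sq, hn] x [b*np, hn, sk] -> [b*np, sq, sk]
q2 = q.view(sq, b * np_, hn).transpose(0, 1)
k2 = k.view(sk, b * np_, hn).transpose(0, 1)
v2 = v.view(sk, b * np_, hn).transpose(0, 1)
scores = torch.baddbmm(torch.zeros(b * np_, sq, sk), q2, k2.transpose(1, 2),
                       beta=0.0, alpha=1.0 / math.sqrt(hn))
probs = torch.softmax(scores, dim=-1)
ctx = torch.bmm(probs, v2)                         # [b*np, sq, hn]

# Reference fused kernel with the same default 1/sqrt(hn) scaling.
ref = torch.nn.functional.scaled_dot_product_attention(q2, k2, v2)
print(torch.allclose(ctx, ref, atol=1e-5))         # expect True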
365
+
366
+ class FlashSelfAttention(torch.nn.Module):
367
+ """Implement the scaled dot product attention with softmax.
368
+ Arguments
369
+ ---------
370
+ softmax_scale: The temperature to use for the softmax attention.
371
+ (default: 1/sqrt(d_keys) where d_keys is computed at
372
+ runtime)
373
+ attention_dropout: The dropout rate to apply to the attention
374
+ (default: 0.0)
375
+ """
376
+ def __init__(self, causal=False, softmax_scale=None, attention_dropout=0.0,
377
+ device=None, dtype=None):
378
+ super().__init__()
379
+ assert flash_attn_unpadded_func is not None or flash_attn_varlen_func is not None or flash_attn_builder is not None, \
380
+ ('Please install FlashAttention first, e.g., with pip install flash-attn or implement your own flash attention')
381
+ assert rearrange is not None, 'Please install einops first, e.g., with pip install einops'
382
+ self.causal = causal
383
+ self.softmax_scale = softmax_scale
384
+ self.dropout_p = attention_dropout
385
+
386
+ # Use FlashAttention-2 when args.use_flash_attn_v2 is True
387
+ args = get_args()
388
+ self.use_flash_attn_builder_v1 = False
389
+ self.use_flash_attn_builder_v2 = False
390
+ self.use_flash_attn = False
391
+ if args.use_flash_attn_builder:
392
+ if hasattr(flash_attn_builder, 'flash_attn_func'):
393
+ self.flash_attn_func = flash_attn_builder.flash_attn_func
394
+ self.use_flash_attn_builder_v1 = True
395
+ else:
396
+ self.flash_attn_func = flash_attn_builder.flash_attn_func_v2
397
+ self.use_flash_attn_builder_v2 = True
398
+ else:
399
+ self.flash_attn_func = flash_attn_varlen_func if args.use_flash_attn_v2 else flash_attn_unpadded_func
400
+ self.use_flash_attn = True
401
+
402
+ def forward(self, q, k, v):
403
+ """Implements the multihead softmax attention.
404
+ Arguments
405
+ ---------
406
+ q, k, v: The tensor containing the query, key, and value. (B, S, H, D)
407
+ """
408
+
409
+ assert all((i.dtype in [torch.float16, torch.bfloat16] for i in (q,k,v)))
410
+ assert all((get_accelerator().on_accelerator(i) for i in (q, k, v)))
411
+
412
+ batch_size, seqlen_q = q.shape[0], q.shape[1]
413
+ seqlen_k = k.shape[1]
414
+
415
+ if self.use_flash_attn:
416
+ q, k, v = [rearrange(x, 'b s ... -> (b s) ...') for x in [q, k, v]]
417
+ cu_seqlens_q = torch.arange(0, (batch_size + 1) * seqlen_q, step=seqlen_q, dtype=torch.int32,
418
+ device=q.device)
419
+ elif self.use_flash_attn_builder_v1:
420
+ q, k, v = [rearrange(x, 'b s h d -> b h s d').contiguous() for x in [q, k, v]]
421
+ else:
422
+ # use_flash_attn_builder_v2
423
+ q, k, v = [rearrange(x, 'b s h d -> b h s d') for x in [q, k, v]]
424
+
425
+ if self.training:
426
+ # during training q,k,v always have same seqlen
427
+ assert seqlen_k == seqlen_q
428
+
429
+ is_causal = self.causal
430
+ cu_seqlens_k = cu_seqlens_q if get_accelerator().device_name() == 'cuda' else None
431
+ dropout_p = self.dropout_p
432
+ else:
433
+ # turn off FA causal mask after first inference autoregressive iteration
434
+ # only on first autoregressive step q,k,v have same seqlen
435
+ is_causal = seqlen_q == seqlen_k
436
+ cu_seqlens_k = torch.arange(0, (batch_size + 1) * seqlen_k, step=seqlen_k, dtype=torch.int32,
437
+ device=q.device) if get_accelerator().device_name() == 'cuda' else None
438
+ dropout_p = 0
439
+
440
+ if self.use_flash_attn:
441
+ output = self.flash_attn_func(
442
+ q, k, v, cu_seqlens_q, cu_seqlens_k, seqlen_q, seqlen_k,
443
+ dropout_p,
444
+ softmax_scale=self.softmax_scale, causal=is_causal
445
+ )
446
+ else:
447
+ # use_flash_attn_builder
448
+ output = self.flash_attn_func(
449
+ q, k, v, self.dropout_p, self.softmax_scale, is_causal
450
+ )
451
+
452
+ if self.use_flash_attn:
453
+ output = rearrange(output, '(b s) ... -> b s ...', b=batch_size)
454
+ elif self.use_flash_attn_builder_v1:
455
+ output = rearrange(output, 'b h s d -> b s h d').contiguous()
456
+ else:
457
+ # use_flash_attn_builder_v2:
458
+ output = rearrange(output, 'b h s d -> b s h d')
459
+
460
+ return output
461
+
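The varlen FlashAttention kernels take flattened (b*s, h, d) tensors plus cumulative sequence-length vectors instead of a padded batch; the arange calls above build those offsets for the fixed-length case. A short illustration of what cu_seqlens looks like, with no flash-attn import needed:

import torch

batch_size, seqlen = 3, 5
# Offsets of each sequence inside the flattened (b*s) token dimension:
cu_seqlens = torch.arange(0, (batch_size + 1) * seqlen, step=seqlen, dtype=torch.int32)
print(cu_seqlens)            # tensor([ 0,  5, 10, 15], dtype=torch.int32)

# Equivalent construction from per-sample lengths (useful once padding is removed):
lengths = torch.tensor([5, 5, 5], dtype=torch.int32)
cu_from_lengths = torch.cat([torch.zeros(1, dtype=torch.int32),
                             torch.cumsum(lengths, dim=0).to(torch.int32)])
print(cu_from_lengths)       # tensor([ 0,  5, 10, 15], dtype=torch.int32)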
462
+ class FlashSelfAttentionTriton(torch.nn.Module):
463
+ """Implement the scaled dot product attention with softmax.
464
+ Arguments
465
+ ---------
466
+ softmax_scale: The temperature to use for the softmax attention.
467
+ (default: 1/sqrt(d_keys) where d_keys is computed at
468
+ runtime)
469
+ attention_dropout: The dropout rate to apply to the attention
470
+ (default: 0.0)
471
+ """
472
+ def __init__(self, causal=False, softmax_scale=None, attention_dropout=0.0,
473
+ device=None, dtype=None):
474
+ super().__init__()
475
+ assert flash_attn_func is not None, ('Triton version of FlashAttention is not installed.')
476
+ assert rearrange is not None, 'Please install einops first, e.g., with pip install einops'
477
+ self.causal = causal
478
+ self.softmax_scale = softmax_scale
479
+ self.dropout_p = attention_dropout
480
+
481
+ def forward(self, q, k, v):
482
+ """Implements the multihead softmax attention.
483
+ Arguments
484
+ ---------
485
+ q, k, v: The tensor containing the query, key, and value. (B, S, H, D)
486
+ """
487
+
488
+ assert q.dtype in [torch.float16, torch.bfloat16]
489
+ assert q.is_cuda
490
+ q, k, v = [rearrange(x, 's b ... -> b s ...').contiguous()
491
+ for x in (q, k, v)]
492
+
493
+ output = flash_attn_func(q, k, v, None, self.causal)
494
+ output = rearrange(output, 'b s h d -> s b (h d)').contiguous()
495
+ return output
496
+
497
+ class ParallelAttention(MegatronModule):
498
+ """Parallel self-attention layer abstract class.
499
+
500
+ Self-attention layer takes input with size [s, b, h]
501
+ and returns output of the same size.
502
+ """
503
+
504
+ def __init__(self, config, layer_number,
505
+ attention_type=AttnType.self_attn,
506
+ attn_mask_type=AttnMaskType.padding):
507
+ super(ParallelAttention, self).__init__()
508
+ args = get_args()
509
+ self.layer_number = max(1, layer_number)
510
+ self.attention_type = attention_type
511
+ self.attn_mask_type = attn_mask_type
512
+ self.params_dtype = config.params_dtype
513
+ self.sequence_parallel = config.sequence_parallel
514
+ self.num_attention_heads = config.num_attention_heads
515
+ self.num_key_value_heads = config.num_key_value_heads
516
+ self.use_gqa = (self.num_attention_heads != self.num_key_value_heads)
517
+
518
+ self.use_flash_attn = (args.use_flash_attn_v1 or args.use_flash_attn_triton or args.use_flash_attn_v2 or \
519
+ args.use_flash_attn_builder) \
520
+ and attention_type == AttnType.self_attn \
521
+ and self.attn_mask_type == AttnMaskType.causal
522
+ self.use_flash_attn_triton = args.use_flash_attn_triton
523
+ if self.use_flash_attn:
524
+ global flash_attn_builder
525
+ try:
526
+ flash_attn_builder = FlashAttentionBuilder().load()
527
+ except TypeError:
528
+ flash_attn_builder = None
529
+
530
+ if args.use_flash_attn_v1:
531
+ assert flash_attn_unpadded_func is not None, "Cannot import FlashAttention v1"
532
+ if args.use_flash_attn_v2:
533
+ assert flash_attn_varlen_func is not None, "Cannot import FlashAttention v2"
534
+ if args.use_flash_attn_triton:
535
+ assert flash_attn_func is not None, "Cannot import FlashAttention Triton"
536
+ if args.use_flash_attn_builder:
537
+ assert flash_attn_builder is not None, "Cannot find FlashAttention op builder"
538
+
539
+ assert attention_type == AttnType.self_attn, ('FlashAttention code path only supports '
540
+ 'self-attention for now')
541
+ assert self.attn_mask_type == AttnMaskType.causal, ('FlashAttention code path only '
542
+ 'supports causal mask for now')
543
+ if rearrange is None:
544
+ raise ImportError('einops is not installed, please install with pip install einops')
545
+
546
+ projection_size = config.kv_channels * config.num_attention_heads
547
+
548
+ # Per attention head and per partition values.
549
+ world_size = parallel_state.get_tensor_model_parallel_world_size()
550
+ self.hidden_size_per_attention_head = core.utils.divide(
551
+ projection_size, config.num_attention_heads)
552
+ self.num_attention_heads_per_partition = core.utils.divide(
553
+ config.num_attention_heads, world_size)
554
+
555
+ # Per GQA head and per partition values
556
+ self.num_key_value_heads_per_partition = core.utils.divide(
557
+ config.num_key_value_heads, world_size)
558
+ self.num_key_value_groups = core.utils.divide(
559
+ config.num_attention_heads, config.num_key_value_heads)
560
+ kv_projection_size = config.kv_channels * config.num_key_value_heads
561
+ assert self.hidden_size_per_attention_head == core.utils.divide(
562
+ kv_projection_size, config.num_key_value_heads)
563
+
564
+ # Strided linear layer.
565
+ if attention_type == AttnType.self_attn:
566
+ self.query_key_value = tensor_parallel.ColumnParallelLinear(
567
+ config.hidden_size,
568
+ projection_size + 2 * kv_projection_size,
569
+ config=config,
570
+ init_method=config.init_method,
571
+ bias=args.add_bias_linear,
572
+ gather_output=False)
573
+ else:
574
+ assert attention_type == AttnType.cross_attn
575
+ self.query = tensor_parallel.ColumnParallelLinear(
576
+ config.hidden_size,
577
+ projection_size,
578
+ config=config,
579
+ init_method=config.init_method,
580
+ bias=config.add_bias_linear,
581
+ gather_output=False)
582
+
583
+
584
+ self.key_value = tensor_parallel.ColumnParallelLinear(
585
+ config.hidden_size,
586
+ 2 * projection_size,
587
+ config=config,
588
+ init_method=config.init_method,
589
+ bias=config.add_bias_linear,
590
+ gather_output=False)
591
+
592
+ # Currently FlashAttention only works with causal mask
593
+ if self.use_flash_attn_triton:
594
+ local_attn = FlashSelfAttentionTriton(causal=True, attention_dropout=args.attention_dropout)
595
+ elif self.use_flash_attn:
596
+ local_attn = FlashSelfAttention(causal=True, attention_dropout=config.attention_dropout)
597
+ else:
598
+ local_attn = CoreAttention(self.layer_number, config, self.attn_mask_type)
599
+
600
+ self.enable_ds_sequence_parallel = parallel_state.get_sequence_parallel_world_size() > 1 \
601
+ or args.force_ds_sequence_parallel
602
+ if self.enable_ds_sequence_parallel:
603
+ assert dist_attn_supported, 'Distributed attention is not supported in this DeepSpeed version'
604
+ assert args.num_attention_heads % parallel_state.get_sequence_parallel_world_size() == 0
605
+ self.dist_attn = DistributedAttention(
606
+ local_attn,
607
+ parallel_state.get_sequence_parallel_group(),
608
+ gather_idx=1 if args.use_flash_attn_v1 or args.use_flash_attn_v2 else 0)
609
+ # flash_attn_cuda assumes [b, s, nh, hd] layout, we need to make sure all2all gathers into the correct sequence dimension.
610
+ else:
611
+ if self.use_flash_attn:
612
+ self.core_attention_flash = local_attn
613
+ else:
614
+ self.core_attention = local_attn
615
+ self.checkpoint_core_attention = config.recompute_granularity == 'selective'
616
+
617
+ # Output.
618
+ self.dense = tensor_parallel.RowParallelLinear(
619
+ projection_size,
620
+ config.hidden_size,
621
+ config=config,
622
+ init_method=config.output_layer_init_method,
623
+ bias=args.add_bias_linear,
624
+ input_is_parallel=True,
625
+ skip_bias_add=True)
626
+
627
+
628
+ def _checkpointed_attention_forward(self, query_layer, key_layer,
629
+ value_layer, attention_mask,
630
+ rotary_pos_emb=None):
631
+ """Forward method with activation checkpointing."""
632
+ def custom_forward(*inputs):
633
+ query_layer = inputs[0]
634
+ key_layer = inputs[1]
635
+ value_layer = inputs[2]
636
+ attention_mask = inputs[3]
637
+ output_ = self.core_attention(query_layer, key_layer,
638
+ value_layer, attention_mask)
639
+ return output_
640
+
641
+ q_pos_emb, k_pos_emb = (None, None) if rotary_pos_emb is None \
642
+ else rotary_pos_emb
643
+
644
+ hidden_states = tensor_parallel.checkpoint(
645
+ custom_forward,
646
+ False, query_layer, key_layer, value_layer, attention_mask,
647
+ q_pos_emb, k_pos_emb)
648
+
649
+ return hidden_states
650
+
651
+ def _allocate_memory(self, inference_max_sequence_len, batch_size):
652
+ return torch.empty(
653
+ inference_max_sequence_len,
654
+ batch_size,
655
+ self.num_attention_heads_per_partition,
656
+ self.hidden_size_per_attention_head,
657
+ dtype=self.params_dtype,
658
+ device=get_accelerator().current_device_name())
659
+
660
+ def repeat_kv(self, hidden_states, n_rep):
661
+ slen, batch, num_key_value_heads_per_partition, head_dim = hidden_states.shape
662
+ if n_rep == 1:
663
+ return hidden_states
664
+ elif num_key_value_heads_per_partition == 1:
665
+ # If the number of KV heads is 1, just perform an expand operation
666
+ # instead of unsqueeze, expand and reshape to match query states.
667
+ return hidden_states.expand(slen, batch, n_rep, head_dim)
668
+ else:
669
+ hidden_states = hidden_states[:, :, :, None, :].expand(
670
+ slen, batch, num_key_value_heads_per_partition, n_rep, head_dim)
671
+ return hidden_states.reshape(slen, batch,
672
+ num_key_value_heads_per_partition * n_rep,
673
+ head_dim)
674
+
675
+ def split_tensor(self, mixed_x_layer):
676
+ query_layer, key_layer, value_layer = torch.split(mixed_x_layer, [self.num_key_value_groups, 1, 1], dim=-2)
677
+ query_layer = query_layer.reshape(mixed_x_layer.shape[:2] + (self.num_attention_heads_per_partition, self.hidden_size_per_attention_head))
678
+ key_layer = torch.squeeze(key_layer, -2)
679
+ value_layer = torch.squeeze(value_layer, -2)
680
+
681
+ return query_layer, key_layer, value_layer
682
+
683
+ def forward(self, hidden_states, attention_mask,
684
+ encoder_output=None, inference_params=None,
685
+ rotary_pos_emb=None):
686
+ # hidden_states: [sq, b, h]
687
+
688
+ # =================================================
689
+ # Pre-allocate memory for key-values for inference.
690
+ # =================================================
691
+ is_first_step = False
692
+ if inference_params:
693
+ if self.layer_number not in inference_params.key_value_memory_dict:
694
+ inf_max_seq_len = inference_params.max_sequence_len
695
+ inf_max_batch_size = inference_params.max_batch_size
696
+ inference_key_memory = self._allocate_memory(
697
+ inf_max_seq_len, inf_max_batch_size)
698
+ inference_value_memory = self._allocate_memory(
699
+ inf_max_seq_len, inf_max_batch_size)
700
+ inference_params.key_value_memory_dict[self.layer_number] = (
701
+ inference_key_memory, inference_value_memory)
702
+ is_first_step = True
703
+ else:
704
+ inference_key_memory, inference_value_memory = \
705
+ inference_params.key_value_memory_dict[self.layer_number]
706
+
707
+ # =====================
708
+ # Query, Key, and Value
709
+ # =====================
710
+
711
+ if self.attention_type == AttnType.self_attn:
712
+ # Attention heads [sq, b, h] --> [sq, b, ((nq + 2 * nkv) * hn)]
713
+ mixed_x_layer, _ = self.query_key_value(hidden_states)
714
+
715
+ # [sq, b, ((nq + 2 * nkv) * hn)] --> [sq, b, nkv, (nq // nkv + 2), hn]
716
+ new_tensor_shape = mixed_x_layer.size()[:-1] + \
717
+ (-1, (self.num_key_value_groups + 2),
718
+ self.hidden_size_per_attention_head)
719
+ mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
720
+
721
+ # [sq, b, nkv, (nq // nkv + 2), hn] --> 3 [sq, b, np, hn]
722
+ (query_layer,
723
+ key_layer,
724
+ value_layer) = self.split_tensor(mixed_x_layer)
725
+
726
+ # Repeat kv
727
+ if self.use_gqa:
728
+ key_layer = self.repeat_kv(key_layer, self.num_key_value_groups)
729
+ value_layer = self.repeat_kv(value_layer,
730
+ self.num_key_value_groups)
731
+ else:
732
+ assert not self.use_gqa, 'GQA + cross-attn not tested yet'
733
+
734
+ # Attention heads [sk, b, h] --> [sk, b, (np * 2 * hn)]
735
+ mixed_kv_layer, _ = self.key_value(encoder_output)
736
+
737
+ # [sk, b, (np * 2 * hn)] --> [sk, b, np, 2 * hn]
738
+ new_tensor_shape = mixed_kv_layer.size()[:-1] + \
739
+ (self.num_attention_heads_per_partition,
740
+ 2 * self.hidden_size_per_attention_head)
741
+ mixed_kv_layer = mixed_kv_layer.view(*new_tensor_shape)
742
+
743
+ # [sk, b, np, 2 * hn] --> 2 [sk, b, np, hn]
744
+ (key_layer,
745
+ value_layer) = tensor_parallel.split_tensor_along_last_dim(mixed_kv_layer, 2)
746
+
747
+ # Attention head [sq, b, h] --> [sq, b, hp]
748
+ query_layer, _ = self.query(hidden_states)
749
+ # [sq, b, hp] --> [sq, b, np, hn]
750
+ new_tensor_shape = query_layer.size()[:-1] + \
751
+ (self.num_attention_heads_per_partition,
752
+ self.hidden_size_per_attention_head)
753
+ query_layer = query_layer.view(*new_tensor_shape)
754
+
755
+ # ==================================
756
+ # Adjust key and value for inference
757
+ # ==================================
758
+
759
+ # duplicate the pos_emb for self attention
760
+ if rotary_pos_emb is not None:
761
+ if isinstance(rotary_pos_emb, tuple):
762
+ rotary_pos_emb = rotary_pos_emb
763
+ else:
764
+ rotary_pos_emb = ((rotary_pos_emb,) * 2)
765
+
766
+ if inference_params:
767
+ batch_start = inference_params.batch_size_offset
768
+ batch_end = batch_start + key_layer.size(1)
769
+ assert batch_end <= inference_key_memory.size(1)
770
+ sequence_start = inference_params.sequence_len_offset
771
+ sequence_end = sequence_start + key_layer.size(0)
772
+ assert sequence_end <= inference_key_memory.size(0)
773
+ # Copy key and values.
774
+ inference_key_memory[sequence_start:sequence_end,
775
+ batch_start:batch_end, ...] = key_layer
776
+ inference_value_memory[sequence_start:sequence_end,
777
+ batch_start:batch_end, ...] = value_layer
778
+ key_layer = inference_key_memory[
779
+ :sequence_end, batch_start:batch_end, ...]
780
+ value_layer = inference_value_memory[
781
+ :sequence_end, batch_start:batch_end, ...]
782
+
783
+
784
+ # adjust the key rotary positional embedding
785
+ if rotary_pos_emb is not None:
786
+ q_pos_emb, k_pos_emb = rotary_pos_emb
787
+ # need to cross check this condition during inference
788
+ # if not set_inference_key_value_memory:
789
+ if not is_first_step:
790
+ # In inference, we compute one token at a time.
791
+ # Select the correct positional embedding
792
+ # (only the last token in the sequence)
793
+ q_pos_emb = q_pos_emb[sequence_end - 1 : sequence_end]
794
+ else:
795
+ # In the first forward pass of inference,
796
+ # we use the entire provided prefix.
797
+ # q_pos_emb here has the rope embeddings of the entire
798
+ # prefix + to-be-generated output so
799
+ # we slice to just the prefix.
800
+ q_pos_emb = q_pos_emb[:sequence_end, :, :, :]
801
+ k_pos_emb = k_pos_emb[:sequence_end, :, :, :]
802
+ rotary_pos_emb = (q_pos_emb, k_pos_emb)
803
+
804
+
805
+ # ==================================
806
+ # core attention computation
807
+ # ==================================
808
+
809
+ # apply relative positional encoding (rotary embedding)
810
+ if rotary_pos_emb is not None:
811
+ q_pos_emb, k_pos_emb = rotary_pos_emb
812
+ query_layer = apply_rotary_pos_emb(query_layer, q_pos_emb)
813
+ key_layer = apply_rotary_pos_emb(key_layer, k_pos_emb)
814
+ # TODO, can apply positional embedding to value_layer so it has
815
+ # absolute positional embedding.
816
+ # otherwise, only relative positional embedding takes effect
817
+ # value_layer = apply_rotary_pos_emb(value_layer, k_pos_emb)
818
+
819
+ if self.enable_ds_sequence_parallel:
820
+ if self.use_flash_attn:
821
+ if not self.use_flash_attn_triton:
822
+ query_layer, key_layer, value_layer = [rearrange(x, 's b ... -> b s ...').contiguous()
823
+ for x in (query_layer, key_layer, value_layer)]
824
+
825
+ context_layer = self.dist_attn(query_layer, key_layer, value_layer)
826
+
827
+ if not self.use_flash_attn_triton:
828
+ context_layer = rearrange(context_layer, 'b s h d -> s b (h d)').contiguous()
829
+ else:
830
+ context_layer = self.dist_attn(query_layer, key_layer, value_layer, attention_mask)
831
+ else:
832
+ if self.use_flash_attn:
833
+ if not self.use_flash_attn_triton:
834
+ query_layer, key_layer, value_layer = [rearrange(x, 's b ... -> b s ...').contiguous()
835
+ for x in (query_layer, key_layer, value_layer)]
836
+
837
+ if self.sequence_parallel:
838
+ context_layer = self.core_attention_flash(query_layer, key_layer, value_layer)
839
+ else:
840
+ with tensor_parallel.get_cuda_rng_tracker().fork():
841
+ context_layer = self.core_attention_flash(query_layer, key_layer, value_layer)
842
+
843
+ if not self.use_flash_attn_triton:
844
+ context_layer = rearrange(context_layer, 'b s h d -> s b (h d)').contiguous()
845
+ else:
846
+ if self.checkpoint_core_attention:
847
+ context_layer = self._checkpointed_attention_forward(
848
+ query_layer, key_layer, value_layer, attention_mask)
849
+ else:
850
+ context_layer = self.core_attention(
851
+ query_layer, key_layer, value_layer, attention_mask)
852
+
853
+ # =================
854
+ # Output. [sq, b, h]
855
+ # =================
856
+
857
+ output, bias = self.dense(context_layer)
858
+
859
+ return output, bias
860
+
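With grouped-query attention (num_key_value_heads < num_attention_heads), repeat_kv expands each KV head so it can be shared by num_key_value_groups query heads before the standard attention math. A shape-only sketch, assuming the [s, b, n_kv, hn] layout used in the class above:

import torch

def repeat_kv(kv: torch.Tensor, n_rep: int) -> torch.Tensor:
    # kv: [s, b, n_kv, hn] -> [s, b, n_kv * n_rep, hn]
    s, b, n_kv, hn = kv.shape
    if n_rep == 1:
        return kv
    return kv[:, :, :, None, :].expand(s, b, n_kv, n_rep, hn).reshape(s, b, n_kv * n_rep, hn)

n_heads, n_kv_heads, hn = 8, 2, 16            # 4 query heads share each KV head
k = torch.randn(10, 2, n_kv_heads, hn)        # [s, b, n_kv, hn]
k_expanded = repeat_kv(k, n_heads // n_kv_heads)
print(k_expanded.shape)                       # torch.Size([10, 2, 8, 16])
# Expanded heads 0..3 are copies of KV head 0:
print(torch.equal(k_expanded[:, :, 0], k[:, :, 0]))   # True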
861
+
862
+ def bias_dropout_add(x, bias, residual, prob, training):
863
+ # type: (Tensor, Optional[Tensor], Tensor, float, bool) -> Tensor
864
+ if bias is not None:
865
+ x = x + bias
866
+ out = torch.nn.functional.dropout(x, p=prob, training=training)
867
+ out = residual + out
868
+ return out
869
+
870
+
871
+ def get_bias_dropout_add(training):
872
+ def _bias_dropout_add(x, bias, residual, prob):
873
+ return bias_dropout_add(x, bias, residual, prob, training)
874
+ return _bias_dropout_add
875
+
876
+
877
+ @torch.jit.script
878
+ def bias_dropout_add_fused_train(x: torch.Tensor,
879
+ bias: Optional[torch.Tensor],
880
+ residual: torch.Tensor,
881
+ prob: float) -> torch.Tensor:
882
+ return bias_dropout_add(x, bias, residual, prob, True)
883
+
884
+
885
+ @torch.jit.script
886
+ def bias_dropout_add_fused_inference(x: torch.Tensor,
887
+ bias: Optional[torch.Tensor],
888
+ residual: torch.Tensor,
889
+ prob: float) -> torch.Tensor:
890
+ return bias_dropout_add(x, bias, residual, prob, False)
891
+
892
+
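The bias_dropout_add helpers implement the residual update out = residual + dropout(x + bias); the jit-scripted train/inference variants exist only to bake the training flag into a fusable graph. A minimal usage sketch with illustrative shapes:

import torch
from typing import Optional

def bias_dropout_add(x: torch.Tensor, bias: Optional[torch.Tensor],
                     residual: torch.Tensor, prob: float, training: bool) -> torch.Tensor:
    # Add the (broadcastable) bias, apply dropout, then the residual connection.
    if bias is not None:
        x = x + bias
    return residual + torch.nn.functional.dropout(x, p=prob, training=training)

attn_out = torch.randn(4, 2, 8)    # [s, b, h] attention output
bias = torch.zeros(8)              # broadcast over [s, b, h]
residual = torch.randn(4, 2, 8)
y = bias_dropout_add(attn_out, bias, residual, prob=0.1, training=True)
print(y.shape)                     # torch.Size([4, 2, 8])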
893
+ class ParallelTransformerLayer(MegatronModule):
894
+ """A single transformer layer.
895
+
896
+ Transformer layer takes input with size [s, b, h] and returns an
897
+ output of the same size.
898
+ """
899
+
900
+ def __init__(self, config,
901
+ layer_number, layer_type=LayerType.encoder,
902
+ self_attn_mask_type=AttnMaskType.padding,
903
+ drop_path_rate=0., num_experts=1):
904
+ # retriever=None):
905
+ args = get_args()
906
+
907
+ super(ParallelTransformerLayer, self).__init__()
908
+ self.layer_number = layer_number
909
+ self.layer_type = layer_type
910
+
911
+ self.apply_residual_connection_post_layernorm \
912
+ = config.apply_residual_connection_post_layernorm
913
+
914
+ self.bf16 = config.bf16
915
+ self.fp32_residual_connection = config.fp32_residual_connection
916
+
917
+ # Layernorm on the input data.
918
+ if args.normalization == 'layernorm':
919
+ if get_accelerator().device_name() == 'cuda':
920
+ self.input_layernorm = LayerNorm(
921
+ config.hidden_size,
922
+ eps=config.layernorm_epsilon,
923
+ no_persist_layer_norm=args.no_persist_layer_norm,
924
+ sequence_parallel=config.sequence_parallel,
925
+ apply_layernorm_1p=args.apply_layernorm_1p,
926
+ mem_efficient_ln=args.mem_efficient_ln)
927
+ else:
928
+ self.input_layernorm = LayerNorm(
929
+ config.hidden_size,
930
+ eps=config.layernorm_epsilon)
931
+ else:
932
+ self.input_layernorm = RMSNorm(config.hidden_size, config.layernorm_epsilon)
933
+ # Self attention.
934
+ self.self_attention = ParallelAttention(
935
+ config,
936
+ layer_number,
937
+ attention_type=AttnType.self_attn,
938
+ attn_mask_type=self_attn_mask_type)
939
+ self.hidden_dropout = config.hidden_dropout
940
+ self.bias_dropout_fusion = config.bias_dropout_fusion
941
+ self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else None
942
+
943
+ # Layernorm on the attention output
944
+ if args.normalization == 'layernorm':
945
+ if get_accelerator().device_name() == 'cuda':
946
+ self.post_attention_layernorm = LayerNorm(
947
+ config.hidden_size,
948
+ eps=config.layernorm_epsilon,
949
+ no_persist_layer_norm=not config.persist_layer_norm,
950
+ sequence_parallel=config.sequence_parallel,
951
+ apply_layernorm_1p=args.apply_layernorm_1p,
952
+ mem_efficient_ln=args.mem_efficient_ln)
953
+ else:
954
+ self.post_attention_layernorm = LayerNorm(
955
+ config.hidden_size,
956
+ eps=config.layernorm_epsilon)
957
+ else:
958
+ self.post_attention_layernorm = RMSNorm(config.hidden_size, config.layernorm_epsilon)
959
+ # Cross attention.
960
+ if self.layer_type in (LayerType.decoder,
961
+ LayerType.retro_decoder,
962
+ LayerType.retro_decoder_with_retriever,
963
+ LayerType.retro_encoder):
964
+ self.inter_attention = ParallelAttention(
965
+ config,
966
+ layer_number,
967
+ attention_type=AttnType.cross_attn)
968
+ # Layernorm on the attention output.
969
+ if args.normalization == 'layernorm':
970
+ self.post_inter_attention_layernorm = LayerNorm(
971
+ config.hidden_size,
972
+ eps=config.layernorm_epsilon,
973
+ no_persist_layer_norm=not config.persist_layer_norm,
974
+ sequence_parallel=config.sequence_parallel,
975
+ apply_layernorm_1p=args.apply_layernorm_1p,
976
+ mem_efficient_ln=args.mem_efficient_ln)
977
+ else:
978
+ self.post_inter_attention_layernorm = RMSNorm(config.hidden_size, config.layernorm_epsilon)
979
+
980
+ # MLP
981
+ self.num_experts = num_experts
982
+ if args.num_experts_switch is not None:
983
+ self.mlp = SwitchMLP(config) # Megatron-LM's MoE
984
+ else:
985
+ if self.num_experts <= 1: # dense, not MoE
986
+ self.mlp = ParallelMLP(config)
987
+ else: # DeepSpeed's MoE
988
+ enable_expert_tensor_parallelism = args.enable_expert_tensor_parallelism
989
+ self.mlp = MoE(args.hidden_size,
990
+ ParallelMLP(config,
991
+ moe=True,
992
+ enable_expert_tensor_parallelism=enable_expert_tensor_parallelism),
993
+ num_experts=self.num_experts,
994
+ ep_size=args.moe_expert_parallel_size,
995
+ k=args.topk,
996
+ use_residual=(args.mlp_type == 'residual'),
997
+ capacity_factor=args.moe_train_capacity_factor,
998
+ eval_capacity_factor=args.moe_eval_capacity_factor,
999
+ min_capacity=args.moe_min_capacity,
1000
+ drop_tokens=args.moe_token_dropping,
1001
+ use_tutel=args.use_tutel,
1002
+ enable_expert_tensor_parallelism=enable_expert_tensor_parallelism,
1003
+ top2_2nd_expert_sampling=args.moe_top2_2nd_expert_sampling)
1004
+
1005
+ # Set bias+dropout+add fusion grad_enable execution handler.
1006
+ TORCH_MAJOR = int(torch.__version__.split('.')[0])
1007
+ TORCH_MINOR = int(torch.__version__.split('.')[1])
1008
+ use_nvfuser = TORCH_MAJOR > 1 or (TORCH_MAJOR == 1 and TORCH_MINOR >= 10)
1009
+ self.bias_dropout_add_exec_handler = \
1010
+ nullcontext if use_nvfuser else torch.enable_grad
1011
+
1012
+ if args.retro_add_retriever:
1013
+ retro_args = get_retro_args()
1014
+ self.retro_num_neighbors = args.retro_num_neighbors
1015
+ self.retro_chunk_length = retro_args.retro_gpt_chunk_length
1016
+ self.retro_retrieved_length = retro_args.retro_gpt_retrieved_length
1017
+
1018
+ # Retriever (bi-directional transformer with cross attention)
1019
+ if layer_type == LayerType.retro_decoder_with_retriever:
1020
+ self.retriever = ParallelTransformer(
1021
+ init_method,
1022
+ output_layer_init_method,
1023
+ model_type=ModelType.retro_encoder,
1024
+ self_attn_mask_type=AttnMaskType.padding,
1025
+ pre_process=True,
1026
+ post_process=False,
1027
+ )
1028
+ self._retriever_key = 'retriever'
1029
+ else:
1030
+ self.retriever = None
1031
+
1032
+ def default_decoder_cross_attention(self,
1033
+ encoder_output,
1034
+ enc_dec_attn_mask,
1035
+ layernorm_input,
1036
+ layernorm_output,
1037
+ bias_dropout_add_func):
1038
+ '''Cross attention for a standard encoder-decoder model.'''
1039
+
1040
+ # Attention.
1041
+ attention_output, attention_bias = \
1042
+ self.inter_attention(layernorm_output,
1043
+ enc_dec_attn_mask,
1044
+ encoder_output=encoder_output)
1045
+
1046
+ # Residual connection.
1047
+ if self.apply_residual_connection_post_layernorm:
1048
+ residual = layernorm_output
1049
+ else:
1050
+ residual = layernorm_input
1051
+
1052
+ if attention_bias is not None:
1053
+ attention_bias = attention_bias.expand_as(residual)
1054
+
1055
+ # Bias-dropout-add.
1056
+ with self.bias_dropout_add_exec_handler():
1057
+ layernorm_input = bias_dropout_add_func(
1058
+ attention_output,
1059
+ attention_bias,
1060
+ residual,
1061
+ self.hidden_dropout)
1062
+
1063
+ # Layer norm.
1064
+ layernorm_output = self.post_inter_attention_layernorm(layernorm_input)
1065
+
1066
+ return layernorm_input, layernorm_output
1067
+
1068
+ def retro_encoder_cross_attention(self,
1069
+ retriever_output,
1070
+ layernorm_input,
1071
+ layernorm_output,
1072
+ bias_dropout_add_func):
1073
+ """Cross attention for Retro encoder.
1074
+
1075
+ Notation:
1076
+ ns : Sequence length.
1077
+ bs : Batch size.
1078
+ d : Hidden size.
1079
+ l : Number of chunks per sample (i.e., seq_length/chunk_length).
1080
+ k : Number of neighbors.
1081
+ r : Number of retrieved tokens (neighbors + continuation).
1082
+ """
1083
+
1084
+ ns, bs, d = layernorm_output.shape # [r, bs * l * k, d]
1085
+
1086
+ # Divide sequence dimension into chunks.
1087
+ chunked_outputs = layernorm_output.reshape(self.retro_retrieved_length,
1088
+ -1,
1089
+ self.retro_num_neighbors,
1090
+ d)
1091
+ chunked_outputs_before_layer_norm = \
1092
+ layernorm_input.reshape(self.retro_retrieved_length, -1,
1093
+ self.retro_num_neighbors, d) # [r, bs*l, k, d]
1094
+
1095
+ # Per-chunk attention.
1096
+ layernorm_inputs = []
1097
+ layernorm_outputs = []
1098
+ for k in range(self.retro_num_neighbors):
1099
+
1100
+ # Attention.
1101
+ chunked_output = chunked_outputs[:,:,k].contiguous()
1102
+ attention_output, attention_bias = \
1103
+ self.inter_attention(
1104
+ chunked_output, # Q (neighbor embedding)
1105
+ None,
1106
+ encoder_output=retriever_output) # K, V (hidden act)
1107
+
1108
+ # Residual connection.
1109
+ if self.apply_residual_connection_post_layernorm:
1110
+ residual = chunked_output
1111
+ else:
1112
+ residual = chunked_outputs_before_layer_norm[:,:,k]
1113
+
1114
+ # Re-enable torch grad to enable fused optimization.
1115
+ with torch.enable_grad():
1116
+ layernorm_input = bias_dropout_add_func(
1117
+ attention_output,
1118
+ None if attention_bias is None else attention_bias.expand_as(residual),
1119
+ residual,
1120
+ self.hidden_dropout)
1121
+ layernorm_inputs.append(layernorm_input)
1122
+
1123
+ # Layer norm.
1124
+ layernorm_output = \
1125
+ self.post_inter_attention_layernorm(layernorm_input)
1126
+ layernorm_outputs.append(layernorm_output)
1127
+
1128
+ # Concatenate layer norms.
1129
+ # layernorm_input : [r, k * bs * l, d]
1130
+ # layernorm_output : [r, k * bs * l, d]
1131
+ layernorm_input = \
1132
+ torch.stack(layernorm_inputs, dim=1).reshape(ns, bs, d)
1133
+ layernorm_output = \
1134
+ torch.stack(layernorm_outputs, dim=1).reshape(ns, bs, d)
1135
+
1136
+ return layernorm_input, layernorm_output
1137
+
1138
+ def retro_decoder_cross_attention(self,
1139
+ retriever_input,
1140
+ retriever_output,
1141
+ retriever_attn_mask,
1142
+ layernorm_input,
1143
+ layernorm_output,
1144
+ inference_params,
1145
+ bias_dropout_add_func):
1146
+ """Cross attention for Retro decoder.
1147
+
1148
+ Notation:
1149
+ ns : Sequence length.
1150
+ bs : Batch size.
1151
+ d : Hidden size.
1152
+ l : Number of chunks per sample (i.e., seq_length/chunk_length).
1153
+ m : Number of tokens per chunk.
1154
+ k : Number of neighbors.
1155
+ r : Number of retrieved tokens (neighbors + continuation).
1156
+ """
1157
+
1158
+ ns, bs, d = layernorm_output.shape
1159
+ l = int(np.ceil(ns / self.retro_chunk_length))
1160
+
1161
+ # Retrieve neighbors.
1162
+ if self.layer_type == LayerType.retro_decoder_with_retriever:
1163
+ first_ns = ns % self.retro_chunk_length
1164
+ if first_ns > 0:
1165
+ raise Exception("test this case.")
1166
+ first_chunk, rest_chunk = \
1167
+ layernorm_output[:first_ns], layernorm_output[first_ns:]
1168
+ first_chunk = torch.nn.functional.pad(
1169
+ first_chunk,
1170
+ (0, 0, 0, 0, 0, self.retro_chunk_length - first_ns),
1171
+ 'constant',
1172
+ 0)
1173
+ chunked_output = \
1174
+ torch.cat((first_chunk, rest_chunk), dim=0) # [l * m, bs, d]
1175
+ else:
1176
+ chunked_output = layernorm_output # [l * m, bs, d]
1177
+ chunked_output = chunked_output \
1178
+ .reshape(l, self.retro_chunk_length, bs, d) \
1179
+ .permute(1, 2, 0, 3) \
1180
+ .reshape(self.retro_chunk_length, bs * l, d) \
1181
+ .contiguous()
1182
+
1183
+ # Get Encoder Output
1184
+ retriever_output = self.retriever(
1185
+ hidden_states=retriever_input,
1186
+ attention_mask=retriever_attn_mask,
1187
+ retriever_output=chunked_output,
1188
+ retriever_attn_mask=retriever_attn_mask,
1189
+ inference_params=inference_params) # [r, k * bs * l , d]
1190
+ retriever_output = retriever_output.reshape(
1191
+ self.retro_retrieved_length * self.retro_num_neighbors, bs * l, d) # [r * k, bs * l, d]
1192
+
1193
+ # Chunks.
1194
+ pad = (ns - 1) % self.retro_chunk_length
1195
+ attending_chunks = layernorm_output[pad:]
1196
+ padded_chunks = torch.nn.functional.pad(
1197
+ attending_chunks,
1198
+ (0, 0, 0, 0, 0, self.retro_chunk_length - 1),
1199
+ 'constant', 0)
1200
+ padded_chunked_output = padded_chunks \
1201
+ .reshape(l, self.retro_chunk_length, bs, d) \
1202
+ .permute(1, 2, 0, 3)
1203
+ padded_chunked_output = padded_chunked_output.reshape(
1204
+ self.retro_chunk_length, bs * l, d).contiguous()
1205
+
1206
+ # Encoder output.
1207
+ attention_output, attention_bias = \
1208
+ self.inter_attention(padded_chunked_output,
1209
+ None,
1210
+ encoder_output=retriever_output)
1211
+
1212
+ # Residual connection.
1213
+ if self.apply_residual_connection_post_layernorm:
1214
+ residual = layernorm_output
1215
+ else:
1216
+ residual = layernorm_input
1217
+
1218
+ # Re-enable torch grad to enable fused optimization.
1219
+ with torch.enable_grad():
1220
+ layernorm_input = bias_dropout_add_func(
1221
+ attention_output,
1222
+ None if attention_bias is None else attention_bias.expand_as(attention_output),
1223
+ torch.zeros_like(attention_output),
1224
+ self.hidden_dropout)
1225
+ layernorm_input = layernorm_input \
1226
+ .reshape(self.retro_chunk_length, bs, l, d) \
1227
+ .permute(2, 0, 1, 3) # [l, m, bs, d]
1228
+ layernorm_input = layernorm_input.reshape(self.retro_chunk_length * l, bs, d)
1229
+ layernorm_input = torch.nn.functional.pad(
1230
+ layernorm_input,
1231
+ (0, 0, 0, 0, pad, 0),
1232
+ 'constant', 0)[:ns] # [ns, b, d]
1233
+ layernorm_input = layernorm_input + residual
1234
+
1235
+ # Layer norm post the decoder attention
1236
+ layernorm_output = self.post_inter_attention_layernorm(layernorm_input)
1237
+
1238
+ return retriever_output, layernorm_input, layernorm_output
1239
+
1240
+ def forward(self, hidden_states, attention_mask=None,
1241
+ encoder_output=None, enc_dec_attn_mask=None,
1242
+ retriever_input=None,
1243
+ retriever_output=None,
1244
+ retriever_attn_mask=None,
1245
+ inference_params=None,
1246
+ rotary_pos_emb=None,
1247
+ aggregated_moe_loss=None):
1248
+ # hidden_states: [s, b, h]
1249
+
1250
+ # Layer norm at the beginning of the transformer layer.
1251
+ layernorm_output = self.input_layernorm(hidden_states)
1252
+
1253
+ # Self attention.
1254
+ attention_output, attention_bias = \
1255
+ self.self_attention(
1256
+ layernorm_output,
1257
+ attention_mask,
1258
+ inference_params=inference_params,
1259
+ rotary_pos_emb=rotary_pos_emb)
1260
+
1261
+ # Residual connection.
1262
+ if self.apply_residual_connection_post_layernorm:
1263
+ residual = layernorm_output
1264
+ else:
1265
+ residual = hidden_states
1266
+
1267
+ if self.drop_path is None:
1268
+ # jit scripting for a nn.module (with dropout) is not
1269
+ # triggering the fusion kernel. For now, we use two
1270
+ # different nn.functional routines to account for varying
1271
+ # dropout semantics during training and inference phases.
1272
+ if self.bias_dropout_fusion:
1273
+ if self.training:
1274
+ bias_dropout_add_func = bias_dropout_add_fused_train
1275
+ else:
1276
+ bias_dropout_add_func = bias_dropout_add_fused_inference
1277
+ else:
1278
+ bias_dropout_add_func = get_bias_dropout_add(self.training)
1279
+
1280
+ if attention_bias is not None:
1281
+ attention_bias = attention_bias.expand_as(residual)
1282
+ with self.bias_dropout_add_exec_handler():
1283
+ layernorm_input = bias_dropout_add_func(
1284
+ attention_output,
1285
+ attention_bias,
1286
+ residual,
1287
+ self.hidden_dropout)
1288
+ else:
1289
+ out = torch.nn.functional.dropout(attention_output + attention_bias,
1290
+ p=self.hidden_dropout,
1291
+ training=self.training)
1292
+ layernorm_input = residual + self.drop_path(out)
1293
+
1294
+ # Layer norm post the self attention.
1295
+ layernorm_output = self.post_attention_layernorm(layernorm_input)
1296
+
1297
+ # Cross attention.
1298
+ if self.layer_type == LayerType.encoder:
1299
+ pass
1300
+ elif self.layer_type == LayerType.decoder:
1301
+ layernorm_input, layernorm_output = \
1302
+ self.default_decoder_cross_attention(
1303
+ encoder_output,
1304
+ enc_dec_attn_mask,
1305
+ layernorm_input,
1306
+ layernorm_output,
1307
+ bias_dropout_add_func)
1308
+ elif self.layer_type == LayerType.retro_encoder:
1309
+ layernorm_input, layernorm_output = \
1310
+ self.retro_encoder_cross_attention(
1311
+ retriever_output,
1312
+ layernorm_input,
1313
+ layernorm_output,
1314
+ bias_dropout_add_func)
1315
+ elif self.layer_type in (LayerType.retro_decoder,
1316
+ LayerType.retro_decoder_with_retriever):
1317
+ retriever_output, layernorm_input, layernorm_output = \
1318
+ self.retro_decoder_cross_attention(
1319
+ retriever_input,
1320
+ retriever_output,
1321
+ retriever_attn_mask,
1322
+ layernorm_input,
1323
+ layernorm_output,
1324
+ inference_params,
1325
+ bias_dropout_add_func)
1326
+ else:
1327
+ raise Exception("Unsupported layer type, '%s'." %
1328
+ self.layer_type.name)
1329
+
1330
+ # MLP.
1331
+ moe_loss = torch.tensor(0.0, device=layernorm_output.device, dtype=layernorm_output.dtype)
1332
+ mlp_bias = torch.tensor(0.0, device=layernorm_output.device, dtype=layernorm_output.dtype)
1333
+
1334
+ if self.num_experts == 1:
1335
+ mlp_output, mlp_bias = self.mlp(layernorm_output)
1336
+ else:
1337
+ mlp_output, moe_loss, _ = self.mlp(layernorm_output)
1338
+
1339
+ # when aggregated_moe_loss is received, the returned moe_loss is the aggregated MoE loss
1340
+ if aggregated_moe_loss is not None:
1341
+ moe_loss += aggregated_moe_loss
1342
+
1343
+ # Second residual connection.
1344
+ if self.apply_residual_connection_post_layernorm:
1345
+ residual = layernorm_output
1346
+ else:
1347
+ residual = layernorm_input
1348
+
1349
+ if self.drop_path is None:
1350
+ if mlp_bias is not None:
1351
+ mlp_bias = mlp_bias.expand_as(residual)
1352
+ with self.bias_dropout_add_exec_handler():
1353
+ output = bias_dropout_add_func(
1354
+ mlp_output,
1355
+ mlp_bias,
1356
+ residual,
1357
+ self.hidden_dropout)
1358
+
1359
+ # Jit compiled function creates 'view' tensor. This tensor
1360
+ # potentially gets saved in the MPU checkpoint function context,
1361
+ # which rejects view tensors. While making a viewless tensor here
1362
+ # won't result in memory savings (like the data loader, or
1363
+ # p2p_communication), it serves to document the origin of this
1364
+ # 'view' tensor.
1365
+ output = core.utils.make_viewless_tensor(inp = output,
1366
+ requires_grad = output.requires_grad,
1367
+ keep_graph = True)
1368
+
1369
+ else:
1370
+ if mlp_bias is not None:
1371
+ mlp_output = mlp_output + mlp_bias
1372
+ out = torch.nn.functional.dropout(mlp_output,
1373
+ p=self.hidden_dropout,
1374
+ training=self.training)
1375
+ output = residual + self.drop_path(out)
1376
+
1377
+ if self.layer_type == LayerType.retro_decoder_with_retriever:
1378
+ return output, retriever_output, moe_loss
1379
+ else:
1380
+ return output, moe_loss
1381
+
1382
+
1383
+ class ParallelTransformerLayerPipe(ParallelTransformerLayer):
1384
+ """Extends ParallelTransformerLayer to forward attention_mask through the pipeline.
1385
+
1386
+ Forward has two usages that affect attention mask communication:
1387
+
1388
+ 1) forward((input, attn_mask) , **kwargs) -> (output, mask)
1389
+ When the attention mask is provided as the second positional
1390
+ argument, typical pipeline behavior is used and both the output
1391
+ *and* mask are returned in a tuple. This tuple is then forwarded
1392
+ to the next stage in the pipeline.
1393
+
1394
+ This version is useful if masks are dynamic.
1395
+
1396
+ 2) forward(input, **kwargs) -> output
1397
+ When the mask is static over all samples, it is advantageous to
1398
+ cache the mask and avoid communicating it.
1399
+
1400
+ If no mask is provided, the module will query `self._args.attn_mask`
1401
+ for the mask and only return `super().forward(...)`
1402
+ """
1403
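+ # Usage sketch (illustrative only; assumes a constructed `layer` and, for the
+ # single-tensor form, a static mask cached on get_args().attn_mask):
+ #   out = layer(hidden_states)                        # return_aggregated_moe_loss=False
+ #   out, mask = layer((hidden_states, attn_mask))     # dynamic mask is forwarded
+ #   out, moe = layer((hidden_states, attn_mask, agg)) # with input/return_aggregated_moe_loss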
+ def __init__(self, config,
1404
+ layer_number, layer_type=LayerType.encoder,
1405
+ self_attn_mask_type=AttnMaskType.padding,
1406
+ drop_path_rate=0., num_experts=1,
1407
+ input_aggregated_moe_loss=False, return_aggregated_moe_loss=False):
1408
+ self.input_aggregated_moe_loss = input_aggregated_moe_loss
1409
+ self.return_aggregated_moe_loss = return_aggregated_moe_loss
1410
+ super().__init__(config, layer_number, layer_type, self_attn_mask_type, drop_path_rate, num_experts)
1411
+
1412
+ def forward(self, inputs, **kwargs):
1413
+ assert torch.is_tensor(inputs) or isinstance(inputs, tuple)
1414
+ if not hasattr(self, '_args'):
1415
+ self._args = get_args()
1416
+ rotary_pos_emb = self._args.rotary_pos_emb if self._args.use_rotary_position_embeddings else None
1417
+ if torch.is_tensor(inputs) or len(inputs) == 1:
1418
+ assert not self.input_aggregated_moe_loss, 'Expecting an input tuple of size >= 2'
1419
+ # No attention mask forwarded, search for args.attn_mask
1420
+ hidden_states, attention_mask = inputs, self._args.attn_mask
1421
+ output, moe_loss = super().forward(hidden_states, attention_mask, **kwargs, rotary_pos_emb=rotary_pos_emb)
1422
+ return (output, moe_loss) if self.return_aggregated_moe_loss else output
1423
+ elif len(inputs) in (2, 3):
1424
+ # The attention mask and the aggregated MoE loss can both arrive as activations.
1425
+ return_attention_mask = False
1426
+ if len(inputs) == 2:
1427
+ if self.input_aggregated_moe_loss:
1428
+ hidden_states, aggregated_moe_loss = inputs[0], inputs[1]
1429
+ attention_mask = self._args.attn_mask
1430
+ else:
1431
+ hidden_states, attention_mask = inputs[0], inputs[1]
1432
+ return_attention_mask = True
1433
+ else:
1434
+ hidden_states, attention_mask, aggregated_moe_loss = inputs[0], inputs[1], inputs[2]
1435
+
1436
+ # Forward aggregated_moe_loss to ParallelTransformerLayer for further accumulation
1437
+ if self.input_aggregated_moe_loss:
1438
+ kwargs.update({'aggregated_moe_loss': aggregated_moe_loss})
1439
+
1440
+ output, moe_loss = super().forward(hidden_states, attention_mask, **kwargs, rotary_pos_emb=rotary_pos_emb)
1441
+
1442
+ ret = (output, )
1443
+ if return_attention_mask:
1444
+ ret += (attention_mask, )
1445
+ if self.return_aggregated_moe_loss:
1446
+ ret += (moe_loss, )
1447
+ return ret
1448
+ else:
1449
+ raise RuntimeError('Received more inputs than understood.')
1450
+
1451
+
1452
+ class NoopTransformerLayer(MegatronModule):
1453
+ """A single 'no-op' transformer layer.
1454
+
1455
+ The sole purpose of this layer is for when a standalone embedding layer
1456
+ is used (i.e., args.standalone_embedding_stage == True). In this case,
1457
+ zero transformer layers are assigned when pipeline rank == 0. Additionally,
1458
+ when virtual pipeline rank >= 1, zero total model parameters are created
1459
+ (virtual rank 0 contains the input embedding). This results in the model's
1460
+ input and output tensors being the same, which causes an error when
1461
+ performing certain memory optimizations on the output tensor (e.g.,
1462
+ deallocating it). Thus, this layer disconnects the input from the output
1463
+ via a clone. Since ranks containing a no-op layer are generally under-
1464
+ utilized (both compute and memory), there's no worry of any performance
1465
+ degradation.
1466
+ """
1467
+
1468
+ def __init__(self, layer_number):
1469
+ super().__init__()
1470
+ self.layer_number = layer_number
1471
+
1472
+ def forward(self, hidden_states, attention_mask,
1473
+ encoder_output=None, enc_dec_attn_mask=None,
1474
+ inference_params=None):
1475
+ return hidden_states.clone()
1476
+
1477
+
1478
+ def _get_num_layers(args, model_type, is_decoder=False):
1479
+ """Compute the number of transformer layers resident on the current rank."""
1480
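+ # Worked example (sketch): for an encoder-and-decoder model with
+ # encoder_num_layers=12, decoder_num_layers=12,
+ # transformer_pipeline_model_parallel_size=4 and
+ # pipeline_model_parallel_split_rank=2 (no standalone embedding stage),
+ # the encoder spans 2 ranks with 6 layers each and the decoder spans the
+ # remaining 2 ranks with 6 layers each.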
+ is_encoder_and_decoder_model = (model_type == ModelType.encoder_and_decoder)
1481
+ if model_type == ModelType.retro_encoder:
1482
+ num_layers = args.retro_encoder_layers
1483
+ elif parallel_state.get_pipeline_model_parallel_world_size() > 1:
1484
+ if is_encoder_and_decoder_model:
1485
+ assert args.pipeline_model_parallel_split_rank is not None
1486
+
1487
+ # When a standalone embedding stage is used, a rank is taken from
1488
+ # the encoder's ranks, to be used for the encoder's embedding
1489
+ # layer. This way, the rank referenced by the 'split rank' remains
1490
+ # the same whether or not a standalone embedding stage is used.
1491
+ num_ranks_in_encoder = (
1492
+ args.pipeline_model_parallel_split_rank - 1
1493
+ if args.standalone_embedding_stage else
1494
+ args.pipeline_model_parallel_split_rank
1495
+ )
1496
+ num_ranks_in_decoder = args.transformer_pipeline_model_parallel_size - num_ranks_in_encoder
1497
+ assert args.encoder_num_layers % num_ranks_in_encoder == 0, \
1498
+ 'encoder_num_layers (%d) must be divisible by number of ranks given to encoder (%d)' % (args.encoder_num_layers, num_ranks_in_encoder)
1499
+ assert args.decoder_num_layers % num_ranks_in_decoder == 0, \
1500
+ 'decoder_num_layers (%d) must be divisible by number of ranks given to decoder (%d)' % (args.decoder_num_layers, num_ranks_in_decoder)
1501
+ if parallel_state.is_pipeline_stage_before_split():
1502
+ num_layers = (
1503
+ 0
1504
+ if args.standalone_embedding_stage
1505
+ and parallel_state.get_pipeline_model_parallel_rank() == 0 else
1506
+ args.encoder_num_layers // num_ranks_in_encoder
1507
+ )
1508
+ else:
1509
+ num_layers = args.decoder_num_layers // num_ranks_in_decoder
1510
+ else:
1511
+ assert args.num_layers == args.encoder_num_layers
1512
+ assert args.num_layers % args.transformer_pipeline_model_parallel_size == 0, \
1513
+ 'num_layers must be divisible by transformer_pipeline_model_parallel_size'
1514
+
1515
+ # When a standalone embedding stage is used, all transformer layers
1516
+ # are divided among pipeline rank >= 1, while on pipeline rank 0,
1517
+ # ranks either contain the input embedding layer (virtual pp rank 0),
1518
+ # or no layers at all (virtual pp rank >= 1).
1519
+ num_layers = (
1520
+ 0
1521
+ if args.standalone_embedding_stage
1522
+ and parallel_state.get_pipeline_model_parallel_rank() == 0 else
1523
+ args.num_layers // args.transformer_pipeline_model_parallel_size
1524
+ )
1525
+ else:
1526
+ if not is_decoder:
1527
+ num_layers = args.encoder_num_layers
1528
+ else:
1529
+ num_layers = args.decoder_num_layers
1530
+ return num_layers
1531
+
1532
+
1533
+ def _get_layer_type(model_type, default_layer_type, retro_layer_numbers,
1534
+ layer_number):
1535
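+ # Sketch: with retro_add_retriever and retro_layer_numbers=[6, 9, 12], a retro
+ # decoder maps layer 6 to LayerType.retro_decoder_with_retriever, layers 9 and
+ # 12 to LayerType.retro_decoder, and every other layer keeps the default type.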
+ args = get_args()
1536
+ if args.retro_add_retriever and layer_number in retro_layer_numbers:
1537
+ if model_type == ModelType.retro_decoder:
1538
+ return LayerType.retro_decoder_with_retriever \
1539
+ if layer_number == retro_layer_numbers[0] \
1540
+ else LayerType.retro_decoder
1541
+ elif model_type == ModelType.retro_encoder:
1542
+ return LayerType.retro_encoder
1543
+ else:
1544
+ raise Exception("Unsupported model type, '%s'." % model_type)
1545
+ else:
1546
+ return default_layer_type
1547
+
1548
+
1549
+ def get_num_experts_per_layer(num_experts: list, num_layers: int, expert_interval: int, offset: int = 0) -> list:
1550
+ assert len(num_experts) == 1 or len(num_experts) == num_layers // expert_interval, \
1551
+ 'num_experts must be either a single value or a list of the same length as the number of MoE layers'
1552
+ if len(num_experts) == 1:
1553
+ num_experts = num_experts * (num_layers // expert_interval)
1554
+ experts_per_layer = []
1555
+ for i in range(num_layers):
1556
+ layer_num = i + 1 + offset
1557
+ n_e = num_experts[(layer_num-1) // expert_interval] if layer_num % expert_interval == 0 else 1
1558
+ experts_per_layer.append(n_e)
1559
+ return experts_per_layer
1560
+
1561
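+ # Illustrative sketch (not part of the upstream module): `_demo_expert_layout`
+ # is a hypothetical helper showing the layout produced by the function above.
+ def _demo_expert_layout():
+     # With a single expert count and expert_interval=2, every second layer is
+     # an MoE layer with 8 experts; the remaining layers stay dense.
+     layout = get_num_experts_per_layer([8], num_layers=4, expert_interval=2)
+     assert layout == [1, 8, 1, 8]
+     return layout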
+
1562
+ class ParallelTransformer(MegatronModule):
1563
+ """Transformer class."""
1564
+
1565
+ def __init__(self, config,
1566
+ model_type, layer_type=LayerType.encoder,
1567
+ self_attn_mask_type=AttnMaskType.padding,
1568
+ post_layer_norm=True,
1569
+ pre_process=True,
1570
+ post_process=True,
1571
+ drop_path_rate=0.0,
1572
+ num_experts=[1]):
1573
+ super(ParallelTransformer, self).__init__()
1574
+ args = get_args()
1575
+
1576
+ self.layer_type = layer_type
1577
+ self.model_type = model_type
1578
+ self.bf16 = config.bf16
1579
+ self.fp32_residual_connection = config.fp32_residual_connection
1580
+ self.post_layer_norm = post_layer_norm
1581
+ self.pre_process = pre_process
1582
+ self.post_process = post_process
1583
+ self.input_tensor = None
1584
+ self.drop_path_rate = drop_path_rate
1585
+ self.transformer_impl = args.transformer_impl
1586
+ self.retro_add_retriever = args.retro_add_retriever
1587
+ self.ds_inference = args.ds_inference
1588
+
1589
+ # Store activation checkpointing flag.
1590
+ self.checkpoint_activations = args.checkpoint_activations
1591
+ self.checkpoint_num_layers = args.checkpoint_num_layers
1592
+ self.recompute_granularity = config.recompute_granularity
1593
+ self.recompute_method = config.recompute_method
1594
+ self.recompute_num_layers = config.recompute_num_layers
1595
+ self.distribute_saved_activations = \
1596
+ config.distribute_saved_activations and not config.sequence_parallel
1597
+
1598
+ self.sequence_parallel = config.sequence_parallel
1599
+
1600
+ # Transformer Engine Init.
1601
+ self.transformer_engine_rope_available = False
1602
+ if self.transformer_impl == 'transformer_engine':
1603
+ global transformer_engine
1604
+ import transformer_engine
1605
+ from importlib.metadata import version
1606
+ from pkg_resources import packaging
1607
+
1608
+ te_version = packaging.version.Version(version("transformer-engine"))
1609
+ if te_version >= packaging.version.Version("0.10.0"):
1610
+ self.transformer_engine_rope_available = True
1611
+
1612
+ del version, packaging
1613
+
1614
+ self.use_fp8 = args.fp8_e4m3 or args.fp8_hybrid
1615
+ self.fp8_recipe = None
1616
+ self.fp8_group = None
1617
+ if self.use_fp8:
1618
+ self.fp8_group = parallel_state.get_data_parallel_group()
1619
+ if args.fp8_e4m3:
1620
+ fp8_format = transformer_engine.common.recipe.Format.E4M3
1621
+ elif args.fp8_hybrid:
1622
+ fp8_format = transformer_engine.common.recipe.Format.HYBRID
1623
+ self.fp8_recipe = transformer_engine.common.recipe.DelayedScaling(
1624
+ margin=args.fp8_margin,
1625
+ interval=args.fp8_interval,
1626
+ fp8_format=fp8_format,
1627
+ amax_history_len=args.fp8_amax_history_len,
1628
+ amax_compute_algo=args.fp8_amax_compute_algo,
1629
+ override_linear_precision=(False, False, not args.fp8_wgrad),
1630
+ )
1631
+
1632
+ self.num_microbatches_in_previous_step = -1
1633
+ self.microbatch_count = 0
1634
+ self.checkpoint_core_attention = config.recompute_granularity == 'selective'
1635
+
1636
+ # Number of layers.
1637
+ self.num_layers = _get_num_layers(args, model_type,
1638
+ layer_type==LayerType.decoder)
1639
+
1640
+ self.drop_path_rates = [
1641
+ rate.item() for rate in
1642
+ torch.linspace(0, self.drop_path_rate, config.num_layers)]
1643
+
1644
+ self.retro_layer_numbers = None
1645
+ if model_type == ModelType.retro_decoder:
1646
+ retro_layer_start = 6 if config.num_layers <= 15 else 9
1647
+ self.retro_layer_numbers = \
1648
+ np.arange(retro_layer_start, args.num_layers + 1, 3).tolist()
1649
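+ # e.g. a 12-layer retro decoder uses retro_layer_numbers = [6, 9, 12].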
+ if model_type == ModelType.retro_encoder:
1650
+ self.retro_layer_numbers = [1]
1651
+
1652
+ # Transformer layers.
1653
+ if args.retro_add_retriever:
1654
+ assert self.recompute_granularity != 'full', \
1655
+ "Full recompute not supported for Retro."
1656
+ assert args.transformer_impl == 'local', \
1657
+ "Transformer engine does not support Retro layers."
1658
+ def build_layer(layer_number, n_e):
1659
+ if args.transformer_impl == 'local':
1660
+ current_layer_type = _get_layer_type(
1661
+ model_type, layer_type, self.retro_layer_numbers,
1662
+ layer_number)
1663
+ return ParallelTransformerLayer(
1664
+ config,
1665
+ layer_number,
1666
+ layer_type=current_layer_type,
1667
+ self_attn_mask_type=self_attn_mask_type,
1668
+ drop_path_rate=self.drop_path_rates[layer_number - 1],
1669
+ num_experts=n_e)
1670
+ else:
1671
+ assert config.num_attention_heads == config.num_key_value_heads, \
1672
+ 'Transformer_engine does not support GQA'
1673
+ return transformer_engine.pytorch.TransformerLayer(
1674
+ config.hidden_size,
1675
+ config.ffn_hidden_size,
1676
+ config.num_attention_heads,
1677
+ layernorm_epsilon=config.layernorm_epsilon,
1678
+ hidden_dropout=config.hidden_dropout,
1679
+ attention_dropout=config.attention_dropout,
1680
+ init_method=config.init_method,
1681
+ output_layer_init_method=config.output_layer_init_method,
1682
+ layer_number=layer_number,
1683
+ kv_channels=config.kv_channels,
1684
+ self_attn_mask_type=self_attn_mask_type.name,
1685
+ tp_group=parallel_state.get_tensor_model_parallel_group(),
1686
+ get_rng_state_tracker=tensor_parallel.get_cuda_rng_tracker,
1687
+ fuse_wgrad_accumulation=config.gradient_accumulation_fusion,
1688
+ apply_query_key_layer_scaling=config.apply_query_key_layer_scaling,
1689
+ attention_softmax_in_fp32=config.attention_softmax_in_fp32,
1690
+ seq_length=args.seq_length,
1691
+ micro_batch_size=args.micro_batch_size,
1692
+ sequence_parallel=config.sequence_parallel,
1693
+ params_dtype=config.params_dtype,
1694
+ apply_residual_connection_post_layernorm=config.apply_residual_connection_post_layernorm,
1695
+ output_layernorm=False,
1696
+ layer_type="encoder",
1697
+ drop_path_rate=self.drop_path_rates[layer_number - 1],
1698
+ set_parallel_mode=True,
1699
+ fuse_qkv_params=True)
1700
+
1701
+ if config.virtual_pipeline_model_parallel_size is not None:
1702
+ assert config.num_layers % config.virtual_pipeline_model_parallel_size == 0, \
1703
+ 'num_layers_per_stage must be divisible by ' \
1704
+ 'virtual_pipeline_model_parallel_size'
1705
+ assert args.model_type != ModelType.encoder_and_decoder
1706
+ # Number of layers in each model chunk is the number of layers in the stage,
1707
+ # divided by the number of model chunks in a stage.
1708
+ self.num_layers = self.num_layers // config.virtual_pipeline_model_parallel_size
1709
+ # With 8 layers, 2 stages, and 4 model chunks, we want an assignment of
1710
+ # layers to stages like (each list is a model chunk):
1711
+ # Stage 0: [0] [2] [4] [6]
1712
+ # Stage 1: [1] [3] [5] [7]
1713
+ # With 8 layers, 2 stages, and 2 virtual stages, we want an assignment of
1714
+ # layers to stages like (each list is a model chunk):
1715
+ # Stage 0: [0, 1] [4, 5]
1716
+ # Stage 1: [2, 3] [6, 7]
1717
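+ # e.g. in the 8-layer / 2-stage / 4-chunk case above, pipeline rank 1 at
+ # virtual rank 2 gets offset = 2 * (8 // 4) + 1 * 1 = 5, i.e. layer [5].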
+ offset = parallel_state.get_virtual_pipeline_model_parallel_rank() * (
1718
+ config.num_layers // config.virtual_pipeline_model_parallel_size) + \
1719
+ (parallel_state.get_pipeline_model_parallel_rank() * self.num_layers)
1720
+ else:
1721
+ # Each stage gets a contiguous set of layers.
1722
+ if args.model_type == ModelType.encoder_and_decoder and \
1723
+ parallel_state.get_pipeline_model_parallel_world_size() > 1:
1724
+ pipeline_rank = parallel_state.get_pipeline_model_parallel_rank()
1725
+ if layer_type == LayerType.encoder:
1726
+ offset = pipeline_rank * self.num_layers
1727
+ else:
1728
+ num_ranks_in_enc = args.pipeline_model_parallel_split_rank
1729
+ offset = (pipeline_rank - num_ranks_in_enc) * self.num_layers
1730
+ else:
1731
+ offset = parallel_state.get_pipeline_model_parallel_rank() * self.num_layers
1732
+
1733
+ if self.num_layers == 0:
1734
+ # When a standalone embedding stage is used (e.g.,
1735
+ # args.standalone_embedding_stage == True), virtual pipeline ranks
1736
+ # on pipeline rank 0 will have zero transformer layers assigned to
1737
+ # them. This results in the model's input and output tensors being
1738
+ # the same, which will cause failure for certain output tensor
1739
+ # optimizations (e.g., pipeline output deallocation). To remedy
1740
+ # this, we assign a 'no-op' layer on these ranks, which will
1741
+ # disconnect the input tensor from the output tensor.
1742
+ self.num_layers = 1
1743
+ self.layers = torch.nn.ModuleList([ NoopTransformerLayer(1) ])
1744
+ else:
1745
+ # Build the layers
1746
+ self.layers = []
1747
+ experts_per_layer = get_num_experts_per_layer(num_experts, self.num_layers, args.expert_interval, offset)
1748
+ for i in range(self.num_layers):
1749
+ layer_num = i + 1 + offset
1750
+ n_e = experts_per_layer[i]
1751
+ self.layers.append(build_layer(layer_num, n_e))
1752
+ self.layers = torch.nn.ModuleList(self.layers)
1753
+
1754
+ # Update dropout rate for Retro encoder.
1755
+ if model_type == ModelType.retro_encoder:
1756
+ for layer in self.layers:
1757
+ if layer.self_attention.use_flash_attn:
1758
+ layer.self_attention.core_attention_flash.dropout_p = \
1759
+ args.retro_encoder_attention_dropout
1760
+ else:
1761
+ layer.self_attention.core_attention.attention_dropout.p =\
1762
+ args.retro_encoder_attention_dropout
1763
+ layer.hidden_dropout = args.retro_encoder_hidden_dropout
1764
+
1765
+ if self.post_process and self.post_layer_norm:
1766
+ # Final layer norm before output.
1767
+ if args.normalization == 'layernorm':
1768
+ if get_accelerator().device_name() == 'cuda':
1769
+ self.final_layernorm = LayerNorm(
1770
+ config.hidden_size,
1771
+ eps=config.layernorm_epsilon,
1772
+ no_persist_layer_norm=args.no_persist_layer_norm,
1773
+ sequence_parallel=config.sequence_parallel,
1774
+ apply_layernorm_1p=args.apply_layernorm_1p,
1775
+ mem_efficient_ln=args.mem_efficient_ln)
1776
+ else:
1777
+ self.final_layernorm = LayerNorm(
1778
+ config.hidden_size,
1779
+ eps=config.layernorm_epsilon)
1780
+ else:
1781
+ self.final_layernorm = RMSNorm(config.hidden_size, config.layernorm_epsilon)
1782
+
1783
+ def _get_layer(self, layer_number):
1784
+ return self.layers[layer_number]
1785
+
1786
+ def _checkpointed_forward(self, hidden_states, attention_mask,
1787
+ encoder_output, enc_dec_attn_mask,
1788
+ rotary_pos_emb, is_first_microbatch):
1789
+ """Forward method with activation checkpointing."""
1790
+ args = get_args()
1791
+
1792
+ def custom(start, end):
1793
+ def custom_forward(*args, **kwargs):
1794
+ x_, *args = args
1795
+ moe_losses = []
1796
+ for index in range(start, end):
1797
+ layer = self._get_layer(index)
1798
+ output = layer(x_, *args, **kwargs)
1799
+ if isinstance(output, tuple):
1800
+ x_, moe_loss = output
1801
+ else:
1802
+ x_ = output
1803
+ moe_loss = torch.tensor(0.0, device=x_.device, dtype=x_.dtype, requires_grad=True)
1804
+ moe_losses.append(moe_loss)
1805
+ return (x_, *moe_losses)
1806
+ return custom_forward
1807
+
1808
+ if args.deepspeed and args.deepspeed_activation_checkpointing:
1809
+ moe_losses = []
1810
+ # Make sure memory is freed.
1811
+ tensor_parallel.reset_checkpointed_activations_memory_buffer()
1812
+ l = 0
1813
+ while l < self.num_layers:
1814
+ hidden_states, *local_moe_losses = tensor_parallel.checkpoint(
1815
+ custom(l, l + self.checkpoint_num_layers), False,
1816
+ hidden_states, attention_mask, encoder_output, enc_dec_attn_mask,
1817
+ None, None, None, None, rotary_pos_emb)
1818
+ moe_losses.extend(local_moe_losses)
1819
+ l += self.checkpoint_num_layers
1820
+
1821
+ return hidden_states, moe_losses
1822
+ else:
1823
+ moe_losses = []
1824
+ te_forward_kwargs = {}
1825
+ if self.transformer_impl == 'transformer_engine':
1826
+ te_forward_kwargs['is_first_microbatch'] = is_first_microbatch
1827
+ if self.transformer_engine_rope_available:
1828
+ te_forward_kwargs['rotary_pos_emb'] = rotary_pos_emb
1829
+
1830
+ if self.recompute_method == 'uniform':
1831
+ # Uniformly divide the total number of Transformer layers and
1832
+ # checkpoint the input activation of each divided chunk.
1833
+ # Checkpointing chunks rather than individual layers further reduces memory
+ # usage by storing fewer activation checkpoints.
1834
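+ # e.g. with num_layers=8 and recompute_num_layers=2, the chunks [0-1],
+ # [2-3], [4-5] and [6-7] are each checkpointed as a unit.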
+ l = 0
1835
+ while l < self.num_layers:
1836
+ if self.transformer_impl == 'transformer_engine':
1837
+ hidden_states, *local_moe_losses = transformer_engine.pytorch.distributed.checkpoint(
1838
+ custom(l, l + self.recompute_num_layers),
1839
+ self.distribute_saved_activations,
1840
+ tensor_parallel.get_cuda_rng_tracker,
1841
+ mpu.get_tensor_model_parallel_group(),
1842
+ hidden_states, attention_mask, encoder_output,
1843
+ enc_dec_attn_mask, **te_forward_kwargs)
1844
+ else:
1845
+ hidden_states, *local_moe_losses = tensor_parallel.checkpoint(
1846
+ custom(l, l + self.recompute_num_layers),
1847
+ self.distribute_saved_activations,
1848
+ hidden_states, attention_mask,
1849
+ encoder_output, enc_dec_attn_mask,
1850
+ None, None, None, None, rotary_pos_emb)
1851
+ moe_losses.extend(local_moe_losses)
1852
+ l += self.recompute_num_layers
1853
+ elif self.recompute_method == 'block':
1854
+ # Checkpoint the input activation of only a set number of individual
1855
+ # Transformer layers and skip the rest.
1856
+ # This makes full use of the available device memory while avoiding
+ # redundant recomputation.
1857
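+ # e.g. with num_layers=8 and recompute_num_layers=2, only layers 0 and 1
+ # are checkpointed; layers 2-7 run without recomputation.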
+ for l in range(self.num_layers):
1858
+ if l < self.recompute_num_layers:
1859
+ if self.transformer_impl == 'transformer_engine':
1860
+ hidden_states, *local_moe_losses = transformer_engine.pytorch.distributed.checkpoint(
1861
+ custom(l, l + 1),
1862
+ self.distribute_saved_activations,
1863
+ tensor_parallel.get_cuda_rng_tracker,
1864
+ mpu.get_tensor_model_parallel_group(),
1865
+ hidden_states, attention_mask, encoder_output,
1866
+ enc_dec_attn_mask, **te_forward_kwargs)
1867
+ else:
1868
+ hidden_states, *local_moe_losses = tensor_parallel.checkpoint(
1869
+ custom(l, l + 1),
1870
+ self.distribute_saved_activations,
1871
+ hidden_states, attention_mask,
1872
+ encoder_output, enc_dec_attn_mask,
1873
+ None, None, None, None, rotary_pos_emb)
1874
+ else:
1875
+ if self.transformer_impl == 'transformer_engine':
1876
+ hidden_states, *local_moe_losses = custom(l, l + 1)(
1877
+ hidden_states, attention_mask, encoder_output,
1878
+ enc_dec_attn_mask, **te_forward_kwargs)
1879
+ else:
1880
+ hidden_states, *local_moe_losses = custom(l, l + 1)(
1881
+ hidden_states, attention_mask,
1882
+ encoder_output, enc_dec_attn_mask,
1883
+ None, None, None, None, rotary_pos_emb)
1884
+
1885
+ moe_losses.extend(local_moe_losses)
1886
+ else:
1887
+ raise ValueError("Invalid activation recompute method.")
1888
+ return hidden_states, moe_losses
1889
+
1890
+ def set_input_tensor(self, input_tensor):
1891
+ """Set input tensor to be used instead of forward()'s input.
1892
+
1893
+ When doing pipeline parallelism the input from the previous
1894
+ stage comes from communication, not from the input, so the
1895
+ model's forward_step_func won't have it. This function is thus
1896
+ used by internal code to bypass the input provided by the
1897
+ forward_step_func"""
1898
+ self.input_tensor = input_tensor
1899
+
1900
+ def forward(self, hidden_states, attention_mask,
1901
+ encoder_output=None, enc_dec_attn_mask=None,
1902
+ retriever_input=None,
1903
+ retriever_output=None,
1904
+ retriever_attn_mask=None,
1905
+ inference_params=None,
1906
+ rotary_pos_emb=None):
1907
+ # hidden_states: [s, b, h]
1908
+
1909
+ # Checks.
1910
+ if inference_params:
1911
+ assert self.recompute_granularity is None, \
1912
+ 'inference does not work with activation checkpointing'
1913
+
1914
+ # TODO: The old DeepSpeed code below is commented out because it is unclear whether
1915
+ # it is still relevant.
1916
+ # # Reza's note: DeepSpeed inference does not support transposes
1917
+ # if not self.ds_inference:
1918
+ # if self.pre_process:
1919
+ # # Data format change to avoid explicit transposes : [b s h] --> [s b h].
1920
+ # # If the input flag for fp32 residual connection is set, convert for float.
1921
+ # if self.fp32_residual_connection:
1922
+ # hidden_states = hidden_states.transpose(0, 1).contiguous().float()
1923
+ # # Otherwise, leave it as is.
1924
+ # else:
1925
+ # hidden_states = hidden_states.transpose(0, 1).contiguous()
1926
+ # else:
1927
+ # # See set_input_tensor()
1928
+ # hidden_states = self.input_tensor
1929
+ # if encoder_output is not None:
1930
+ # encoder_output = encoder_output.transpose(0, 1).contiguous()
1931
+
1932
+ if not self.pre_process:
1933
+ # See set_input_tensor()
1934
+ hidden_states = self.input_tensor
1935
+
1936
+ # Viewless tensor.
1937
+ # - We only need to create a viewless tensor in the case of micro batch
1938
+ # size (mbs) == 1, since in this case, 'hidden_states.transpose()'
1939
+ # above creates a view tensor, and '.contiguous()' is a pass-through.
1940
+ # For mbs >= 2, '.contiguous()' creates a new tensor, eliminating
1941
+ # the need to make it viewless.
1942
+ #
1943
+ # However, we don't explicitly check mbs == 1 here because
1944
+ # make_viewless_tensor() has negligible overhead when its input
1945
+ # is already viewless.
1946
+ #
1947
+ # - For the 'else' case above, calling make_viewless_tensor() here is
1948
+ # likely redundant, since p2p_communication.py (likely originator)
1949
+ # already creates viewless tensors. That said, make_viewless_tensor()
1950
+ # is called here to be future-proof and corner-case-proof.
1951
+ hidden_states = core.utils.make_viewless_tensor(
1952
+ hidden_states,
1953
+ requires_grad=True,
1954
+ keep_graph=True,
1955
+ )
1956
+
1957
+ # RNG context.
1958
+ if self.sequence_parallel:
1959
+ rng_context = tensor_parallel.get_cuda_rng_tracker().fork()
1960
+ else:
1961
+ rng_context = nullcontext()
1962
+
1963
+ # Forward layers.
1964
+ with rng_context:
1965
+ # The fp8_autocast context manager is a no-op when enabled=False.
1966
+ # The if...else serves to short circuit name resolution for fp8_autocast
1967
+ with transformer_engine.pytorch.fp8_autocast(
1968
+ enabled=self.use_fp8,
1969
+ fp8_recipe=self.fp8_recipe,
1970
+ fp8_group=self.fp8_group
1971
+ ) if self.use_fp8 else nullcontext():
1972
+ # Determine if the current iteration is first microbatch
1973
+ if self.num_microbatches_in_previous_step != get_num_microbatches():
1974
+ self.microbatch_count = 0 # Reset count on new batch size rampup interval
1975
+ self.num_microbatches_in_previous_step = get_num_microbatches()
1976
+ is_first_microbatch = self.microbatch_count % get_num_microbatches() == 0
1977
+
1978
+ # Forward pass.
1979
+ moe_losses = []
1980
+ if self.checkpoint_activations:
1981
+ hidden_states, moe_losses = self._checkpointed_forward(hidden_states,
1982
+ attention_mask,
1983
+ encoder_output,
1984
+ enc_dec_attn_mask,
1985
+ rotary_pos_emb,
1986
+ is_first_microbatch)
1987
+ elif self.recompute_granularity == 'full':
1988
+ hidden_states, moe_losses = self._checkpointed_forward(hidden_states,
1989
+ attention_mask,
1990
+ encoder_output,
1991
+ enc_dec_attn_mask,
1992
+ rotary_pos_emb,
1993
+ is_first_microbatch)
1994
+ else:
1995
+ forward_kwargs = {
1996
+ 'encoder_output': encoder_output,
1997
+ 'enc_dec_attn_mask': enc_dec_attn_mask,
1998
+ 'inference_params': inference_params,
1999
+ }
2000
+
2001
+ if self.transformer_impl == 'transformer_engine':
2002
+ forward_kwargs['is_first_microbatch'] = is_first_microbatch
2003
+ forward_kwargs['checkpoint_core_attention'] = self.checkpoint_core_attention
2004
+ if self.transformer_engine_rope_available:
2005
+ forward_kwargs['rotary_pos_emb'] = rotary_pos_emb
2006
+ else:
2007
+ forward_kwargs['rotary_pos_emb'] = rotary_pos_emb
2008
+ forward_kwargs['retriever_input'] = retriever_input
2009
+ forward_kwargs['retriever_output'] = retriever_output
2010
+ forward_kwargs['retriever_attn_mask'] = retriever_attn_mask
2011
+
2012
+ for index in range(self.num_layers):
2013
+ layer = self._get_layer(index)
2014
+
2015
+ hidden_states = layer(
2016
+ hidden_states,
2017
+ attention_mask,
2018
+ **forward_kwargs)
2019
+
2020
+ # First Retro decoder layer returns both hidden_states
2021
+ # and retriever_output. Make retriever_output available
2022
+ # to subsequent Retro layers.
2023
+ if isinstance(hidden_states, tuple):
2024
+ assert (len(hidden_states) == 2 or len(hidden_states) == 3)
2025
+ if len(hidden_states) == 2:
2026
+ if not self.ds_inference:
2027
+ hidden_states, moe_loss = hidden_states
2028
+ moe_losses.append(moe_loss)
2029
+ else:
2030
+ forward_kwargs["retriever_output"] = hidden_states[1]
2031
+ if not self.ds_inference:
2032
+ hidden_states, _, moe_loss = hidden_states
2033
+ moe_losses.append(moe_loss)
2034
+
2035
+ # Skip counter update for eval and activation checkpointing
2036
+ if torch.is_grad_enabled() and self.training:
2037
+ self.microbatch_count += 1
2038
+
2039
+ # Final layer norm.
2040
+ if self.post_process and self.post_layer_norm:
2041
+ # TODO: The old DeepSpeed code below is commented out because it is unclear whether
2042
+ # it is still relevant.
2043
+ # if not self.ds_inference:
2044
+ # # Reverting data format change [s b h] --> [b s h].
2045
+ # hidden_states = hidden_states.transpose(0, 1).contiguous()
2046
+ hidden_states = self.final_layernorm(hidden_states)
2047
+
2048
+ return (hidden_states, *moe_losses)
2049
+
2050
+ class LMHeadPipe(MegatronModule):
2051
+ """
2052
+ Arguments:
2053
+ vocab_size: size of vocabulary.
2054
+ hidden_size: hidden size
2055
+ gather_output: whether the output logits are gathered or not.
2056
+ init_method: init method for weight initialization
2057
+ config: transformer config object passed to the ColumnParallelLinear head.
2058
+ """
2059
+
2060
+ def __init__(self, hidden_size, vocab_size, config):
2061
+ args = get_args()
2062
+ super(LMHeadPipe, self).__init__()
2063
+ self.lm_head = tensor_parallel.ColumnParallelLinear(input_size=hidden_size,
2064
+ output_size=vocab_size,
2065
+ bias=False,
2066
+ config=config,
2067
+ init_method=config.init_method,)
2068
+
2069
+ def forward(self, inputs, **kwargs):
2070
+ assert torch.is_tensor(inputs) or isinstance(inputs, tuple)
2071
+ if isinstance(inputs, tuple):
2072
+ hidden_states = inputs[0]
2073
+ else:
2074
+ hidden_states = inputs
2075
+
2076
+ if not hasattr(self, '_args'):
2077
+ self._args = get_args()
2078
+
2079
+ if hasattr(self._args, 'attn_mask'):
2080
+ attention_mask = None
2081
+ else:
2082
+ attention_mask = inputs[1]
2083
+
2084
+ logits, _ = self.lm_head(hidden_states)
2085
+
2086
+ # If the command-line args carry attn_mask, we don't forward it as an activation.
2087
+ if hasattr(self._args, 'attn_mask'):
2088
+ return logits
2089
+ else:
2090
+ return logits, attention_mask
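+ # Note (sketch): when the parsed args cache a static attention mask
+ # (hasattr(self._args, 'attn_mask')), only the logits are returned; otherwise
+ # the mask received as an activation is forwarded alongside the logits.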