"id- ual blocks. a) the usual order of linear transformation or convolution followed byarelunonlinearitymeansthateach residualblockcanonlyaddnon-negative quantities. b) with the reverse order, bothpositiveandnegativequantitiescan beadded. however,wemustaddalinear transformation at the start of the net- workincasetheinputisallnegative. c) in practice, it’s common for a residual block to contain several network layers. interpretation is that residual connections turn the original network into an ensemble of these smaller networks whose outputs are summed to compute the result. acomplementarywayofthinkingaboutthisresidualnetworkisthatitcreatessixteen paths of different lengths from input to output. for example, the first function f [x] 1 problem11.2 occurs in eight of these sixteen paths, including as a direct additive term (i.e., a path length of one), and the analogous derivative to equation 11.3 is: problem11.3 (cid:18) (cid:19) (cid:18) (cid:19) ∂y ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f =i+ 2 + 3 + 3 2 + 4 + 4 2 + 4 3 + 4 3 2 , (11.6) ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f ∂f 1 1 1 2 1 1 2 1 3 1 3 2 1 where there is one term for each of the eight paths. the identity term on the right- hand side shows that changes in the parameters ϕ in the first layer f [x,ϕ ] contribute 1 1 1 directly to changes in the network output y. they also contribute indirectly through the other chains of derivatives of varying lengths. in general, gradients through shorter notebook11.2 paths will be better behaved. since both the identity term and various short chains of residual derivatives will contribute to the derivative for each layer, networks with residual links networks suffer less from shattered gradients. 11.2.1 order of operations in residual blocks until now, we have implied that the additive functions f[x] could be any valid network layer (e.g., fully connected or convolutional). this is technically true, but the order of operations in these functions is important. they must contain a nonlinear activation functionlikearelu,ortheentirenetworkwillbelinear. however, inatypicalnetwork layer (figure 11.5a), the relu function is at the end, so the output is non-negative. if we adopt this convention, then each residual block can only increase the input values. hence,itistypicaltochangetheorderofoperationssothattheactivationfunctionis applied first, followed by the linear transformation (figure 11.5b). sometimes there may be several layers of processing within the residual block (figure 11.5c), but these usually terminatewithalineartransformation. finally,wenotethatwhenwestarttheseblocks withareluoperation,theywilldonothingiftheinitialnetworkinputisnegativesince therelu will clipthe entiresignalto zero. hence, it’s typicaltostart the networkwith a linear transformation rather than a residual block, as in figure 11.5b. draft: please send errata to [email protected] 11 residual networks 11.2.2 deeper networks with residual connections adding residual connections roughly doubles the depth of a network that can be practi- callytrainedbeforeperformancedegrades. however,wewouldliketoincreasethedepth further. to understand why residual connections do not allow us to increase the depth arbitrarily, we must consider how the variance of the activations changes during the forward pass and how the gradient magnitudes change during the backward pass. 11.3 exploding gradients in residual networks in section 7.5, we saw that initializing the network parameters is critical. 
11.2.2 Deeper networks with residual connections

Adding residual connections roughly doubles the depth of a network that can be practically trained before performance degrades. However, we would like to increase the depth further. To understand why residual connections do not allow us to increase the depth arbitrarily, we must consider how the variance of the activations changes during the forward pass and how the gradient magnitudes change during the backward pass.

11.3 Exploding gradients in residual networks

In section 7.5, we saw that initializing the network parameters is critical. Without careful initialization, the magnitudes of the intermediate values during the forward pass of backpropagation can increase or decrease exponentially. Similarly, the gradients during the backward pass can explode or vanish as we move backward through the network. Hence
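As a rough numerical illustration of how activation magnitudes can grow during the forward pass of a residual network without careful initialization, the Python/NumPy sketch below stacks He-initialized, fully connected pre-activation residual blocks and tracks the variance of the hidden activations. The setup (the width, the number of blocks, and the he_linear and residual_block helpers) is a hypothetical toy, not the book's notebook code.

import numpy as np

rng = np.random.default_rng(0)

def he_linear(width):
    # He initialization: zero-mean weights with variance 2 / fan_in.
    return rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))

def residual_block(h, weight):
    # Pre-activation block: ReLU, then a linear transformation, added to the input.
    return h + weight @ np.maximum(h, 0.0)

width, num_blocks = 100, 20
h = rng.normal(size=(width, 1000))       # 1000 samples of a width-100 hidden vector
print(f"input variance         : {h.var():.3g}")
for k in range(1, num_blocks + 1):
    h = residual_block(h, he_linear(width))
    if k % 5 == 0:
        print(f"variance after block {k:2d}: {h.var():.3g}")
# The printed variance roughly doubles with each residual block, so it grows
# exponentially with depth unless the initialization or architecture corrects for it.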