text "et al. (2021) and ulku & akagündüz (2022). visualizing convolutional networks: the dramatic success of convolutional networks led toaseriesofeffortstovisualizetheinformationtheyextractfromtheimage(seeqinetal.,2018, for a review). erhan et al. (2009) visualized the optimal stimulus that activated a hidden unit by starting with an image containing noise and then optimizing the input to make the hidden unitmostactiveusinggradientascent. zeiler&fergus(2014)trainedanetworktoreconstruct the input and then set all the hidden units to zero except the one they were interested in; the reconstruction then provides information about what drives the hidden unit. mahendran & vedaldi (2015) visualized an entire layer of a network. their network inversion technique aimedtofindanimagethatresultedintheactivationsatthatlayerbutalsoincorporatesprior knowledge that encourages this image to have similar statistics to natural images. finally, bau et al. (2017) introduced network dissection. here, a series of images with known pixel labels capturing color, texture, and object type are passed through the network, and the correlation of a hidden unit with each property is measured. this method has the advantage that it only uses the forward pass of the network and does not require optimization. these methodsdidprovidesomepartialinsightintohowthenetworkprocessesimages. forexample, bau et al. (2017) showed that earlier layers correlate more with texture and color and later layers with the object type. however, it is fair to say that fully understanding the processing of networks containing millions of parameters is currently not possible. problems problem 10.1∗ showthattheoperationinequation10.4isequivariantwithrespecttotransla- tion. problem 10.2 equation 10.3 defines 1d convolution with a kernel size of three, stride of one, and dilation one. write out the equivalent equation for the 1d convolution with a kernel size of three and a stride of two as pictured in figure 10.3a–b. problem 10.3 writeouttheequationforthe1ddilatedconvolutionwithakernelsizeofthree and a dilation rate of two, as pictured in figure 10.3d. problem 10.4 write out the equation for a 1d convolution with kernel size of seven, a dilation rate of three, and a stride of three. problem 10.5 draw weight matrices in the style of figure 10.4d for (i) the strided convolution in figure 10.3a–b, (ii) the convolution with kernel size 5 in figure 10.3c, and (iii) the dilated convolution in figure 10.3d. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 185 problem10.6∗drawa6×12weightmatrixinthestyleoffigure10.4drelatingtheinputsx ,...,x 1 6 to the outputs h ,...,h in the multi-channel convolution as depicted in figures 10.5a–b. 1 12 problem10.7∗drawa12×6weightmatrixinthestyleoffigure10.4drelatingtheinputsh ,...,h 1 12 to the outputs h′,...,h′ in the multi-channel convolution in figure 10.5c. 1 6 problem 10.8 consider a 1d convolutional network where the input has three channels. the first hidden layer is computed using a kernel size of three and has four channels. the second hiddenlayeriscomputedusingakernelsizeoffiveandhastenchannels. howmanybiasesand how many weights are needed for each of these two convolutional layers? problem10.9anetworkconsistsofthree1dconvolutionallayers. ateachlayer,azero-padded convolution with kernel size three, stride one, and dilation one is applied. what size is the receptive field of the hidden units in the third layer? problem 10.10 a network consists of three 1d convolutional layers. 
Problems

Problem 10.1∗ Show that the operation in equation 10.4 is equivariant with respect to translation.

Problem 10.2 Equation 10.3 defines 1D convolution with a kernel size of three, stride of one, and dilation one. Write out the equivalent equation for the 1D convolution with a kernel size of three and a stride of two as pictured in figure 10.3a–b.

Problem 10.3 Write out the equation for the 1D dilated convolution with a kernel size of three and a dilation rate of two, as pictured in figure 10.3d.

Problem 10.4 Write out the equation for a 1D convolution with a kernel size of seven, a dilation rate of three, and a stride of three.

Problem 10.5 Draw weight matrices in the style of figure 10.4d for (i) the strided convolution in figure 10.3a–b, (ii) the convolution with kernel size 5 in figure 10.3c, and (iii) the dilated convolution in figure 10.3d.

Problem 10.6∗ Draw a 6×12 weight matrix in the style of figure 10.4d relating the inputs x_1, …, x_6 to the outputs h_1, …, h_12 in the multi-channel convolution as depicted in figures 10.5a–b.

Problem 10.7∗ Draw a 12×6 weight matrix in the style of figure 10.4d relating the inputs h_1, …, h_12 to the outputs h′_1, …, h′_6 in the multi-channel convolution in figure 10.5c.

Problem 10.8 Consider a 1D convolutional network where the input has three channels. The first hidden layer is computed using a kernel size of three and has four channels. The second hidden layer is computed using a kernel size of five and has ten channels. How many biases and how many weights are needed for each of these two convolutional layers?

Problem 10.9 A network consists of three 1D convolutional layers. At each layer, a zero-padded convolution with kernel size three, stride one, and dilation one is applied. What size is the receptive field of the hidden units in the third layer?

Problem 10.10 A network consists of three 1D convolutional layers. At each layer, a zero-padded convolution with kernel size seven, stride one, and dilation one is applied. What size is the receptive field of hidden units in the third layer?

Problem 10.11 Consider a convolutional network with 1D input x. The first hidden layer h_1 is computed using a