Vishwas1 committed
Commit aa2710b · verified
Parent(s): 3cd0c4a

Upload dataset_chunk_30.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_30.csv +2 -0
dataset_chunk_30.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "pimum(cid:0)n(cid:1)umber of regions created bypartitioningad -dimensionalspacewithdhyperplanesis di d . whatisthemaximum i j=0 j number of regions if we add two more hidden units to this model, so d=5? this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.chapter 4 deep neural networks the last chapter described shallow neural networks, which have a single hidden layer. this chapter introduces deep neural networks, which have more than one hidden layer. with relu activation functions, both shallow and deep networks describe piecewise linear mappings from input to output. as the number of hidden units increases, shallow neural networks improve their descriptive power. indeed, with enough hidden units, shallow networks can describe arbitrarily complex functions in high dimensions. however, it turns out that for some functions,therequirednumberofhiddenunitsisimpracticallylarge. deepnetworkscan produce many more linear regions than shallow networks for a given number of parame- ters. hence, from a practical standpoint, they can be used to describe a broader family of functions. 4.1 composing neural networks to gain insight into the behavior of deep neural networks, we first consider composing twoshallownetworkssotheoutputofthefirstbecomestheinputofthesecond. consider twoshallownetworkswiththreehiddenunitseach(figure4.1a). thefirstnetworktakes an input x and returns output y and is defined by: h = a[θ +θ x] 1 10 11 h = a[θ +θ x] 2 20 21 h = a[θ +θ x], (4.1) 3 30 31 and y =ϕ +ϕ h +ϕ h +ϕ h . (4.2) 0 1 1 2 2 3 3 the second network takes y as input and returns y′ and is defined by: draft: please send errata to [email protected] 4 deep neural networks figure 4.1composingtwosingle-layernetworkswiththreehiddenunitseach. a) theoutputy ofthefirstnetworkconstitutestheinputtothesecondnetwork. b) thefirstnetworkmapsinputsx∈[−1,1]tooutputsy∈[−1,1]usingafunction comprising three linear regions that are chosen so that they alternate the sign of their slope. multiple inputs x (gray circles) now map to the same output y (cyan circle). c) the second network defines a function comprising three linear regions that takes y and returns y′ (i.e., the cyan circle is mapped to the brown circle). d) the combined effect of these two functions when composed is that (i) three different inputs x are mapped to any given value of y by the first network and (ii) are processed in the same way by the second network; the result is that thefunctiondefinedbythesecondnetworkinpanel(c)isduplicatedthreetimes, variouslyflippedandrescaledaccordingtotheslopeoftheregionsofpanel(b). this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.4.2 from composing networks to deep networks 43 ′ ′ ′ h = a[θ +θ y] 1 10 11 ′ ′ ′ h = a[θ +θ y] 2 20 21 ′ ′ ′ h = a[θ +θ y], (4.3) 3 30 31 and ′ ′ ′ ′ ′ ′ ′ ′ y =ϕ +ϕ h +ϕ h +ϕ h . (4.4) 0 1 1 2 2 3 3 with relu activations, this model also describes a family of piecewise linear functions. however, the number of linear regions is potentially greater than for a shallow network with six hidden units. to see this, consider choosing the first network to produce three problem4.1 alternating regions of positive and negative slope (figure 4.1b). this means that three differentrangesofxaremappedtothesameoutputrangey ∈[−1,1],andthesubsequent mapping from this range of y to y′ is applied three times. the overall effect is that the notebook4.1 function defined by the second network is duplicated"