Upload dataset_chunk_51.csv with huggingface_hub
dataset_chunk_51.csv
ADDED
@@ -0,0 +1,2 @@
+text
[ϕ_0, ϕ_1, ..., ϕ_N]^T and iterates two steps:

Step 1. Compute the derivatives of the loss with respect to the parameters:

\frac{\partial L}{\partial \phi} = \begin{bmatrix} \frac{\partial L}{\partial \phi_0} \\ \frac{\partial L}{\partial \phi_1} \\ \vdots \\ \frac{\partial L}{\partial \phi_N} \end{bmatrix}.   (6.2)

Step 2. Update the parameters according to the rule:

\phi \leftarrow \phi - \alpha \cdot \frac{\partial L}{\partial \phi},   (6.3)

where the positive scalar α determines the magnitude of the change.

The first step computes the gradient of the loss function at the current position. This determines the uphill direction of the loss function. The second step moves a small distance α downhill (hence the negative sign). The parameter α may be fixed (in which case, we call it a learning rate), or we may perform a line search where we try several values of α to find the one that most decreases the loss.

At the minimum of the loss function, the surface must be flat (or we could improve further by going downhill). Hence, the gradient will be zero, and the parameters will stop changing. In practice, we monitor the gradient magnitude and terminate the algorithm when it becomes too small.

6.1.1 Linear regression example

Consider applying gradient descent to the 1D linear regression model from chapter 2. The model f[x, ϕ] maps a scalar input x to a scalar output y and has parameters ϕ = [ϕ_0, ϕ_1]^T, which represent the y-intercept and the slope:

y = f[x, \phi] = \phi_0 + \phi_1 x.   (6.4)

Given a dataset {x_i, y_i} containing I input/output pairs, we choose the least squares loss function:

L[\phi] = \sum_{i=1}^{I} \ell_i = \sum_{i=1}^{I} \bigl(f[x_i, \phi] - y_i\bigr)^2 = \sum_{i=1}^{I} \bigl(\phi_0 + \phi_1 x_i - y_i\bigr)^2,   (6.5)

where the term ℓ_i = (ϕ_0 + ϕ_1 x_i − y_i)^2 is the individual contribution to the loss from the i-th training example.

Figure 6.1 Gradient descent for the linear regression model. a) Training set of I = 12 input/output pairs {x_i, y_i}. b) Loss function showing iterations of gradient descent. We start at point 0 and move in the steepest downhill direction until we can improve no further, arriving at point 1. We then repeat this procedure: we measure the gradient at point 1 and move downhill to point 2, and so on. c) This can be visualized better as a heatmap, where the brightness represents the loss. After only four iterations, we are already close to the minimum. d) The model with the parameters at point 0 (lightest line) describes the data very badly, but each successive iteration improves the fit. The model with the parameters at point 4 (darkest line) is already a reasonable description of the training data.

The derivative of the loss function with respect to the parameters can be decomposed into the sum of the derivatives of the individual contributions:

\frac{\partial L}{\partial \phi} = \frac{\partial}{\partial \phi} \sum_{i=1}^{I} \ell_i = \sum_{i=1}^{I} \frac{\partial \ell_i}{\partial \phi},   (6.6)

where these are given by:

\frac{\partial \ell_i}{\partial \phi} = \begin{bmatrix} \frac{\partial \ell_i}{\partial \phi_0} \\ \frac{\partial \ell_i}{\partial \phi_1} \end{bmatrix} = \begin{bmatrix} 2(\phi_0 + \phi_1 x_i - y_i) \\ 2x_i(\phi_0 + \phi_1 x_i - y_i) \end{bmatrix}.   (6.7)

Figure 6.1 shows the progression of this algorithm as we iteratively compute the derivatives according to equations 6.6 and 6.7 and then update the parameters using the gradient descent rule.
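The passage also mentions line search as an alternative to a fixed learning rate. The sketch below, again using assumed data and an assumed grid of candidate step sizes, tries several values of α at each iteration and keeps the one that most decreases the loss.

```python
import numpy as np

# Line-search variant: instead of a fixed learning rate, try several candidate
# step sizes along the downhill direction at each iteration and keep the one
# that most decreases the loss. Data and candidate grid are assumptions.

def loss_and_gradient(phi, x, y):
    # Least-squares loss (equation 6.5) and its gradient (equations 6.6-6.7).
    residual = phi[0] + phi[1] * x - y
    return np.sum(residual ** 2), np.array([np.sum(2 * residual),
                                            np.sum(2 * x * residual)])

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0, size=12)
y = 0.5 + 1.2 * x + rng.normal(0.0, 0.1, size=12)

phi = np.array([0.0, 0.0])
candidates = (0.001, 0.003, 0.01, 0.03, 0.1)   # assumed grid of step sizes

for step in range(20):
    _, grad = loss_and_gradient(phi, x, y)
    # Evaluate the loss for each candidate alpha and keep the best update.
    trials = [phi - alpha * grad for alpha in candidates]
    losses = [loss_and_gradient(trial, x, y)[0] for trial in trials]
    phi = trials[int(np.argmin(losses))]

print("phi after line-search gradient descent:", phi)
```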