"� {0,1,2,...}. it has a single parameter λ ∈ r+, which is knownastherateandisthemeanofthedistribution. a–c)poissondistributions with rates of 1.4, 2.8, and 6.0, respectively. problem 5.6 consider building a model to predict the number of pedestrians y ∈ {0,1,2,...} that will pass a given point in the city in the next minute, based on data x that contains information about the time of day, the longitude and latitude, and the type of neighborhood. a suitable distribution for modeling counts is the poisson distribution (figure 5.15). this has a single parameter λ > 0 called the rate that represents the mean of the distribution. the distribution has probability density function: λke−λ pr(y=k)= . (5.36) k! design a loss function for this model assuming we have access to i training pairs {x ,y }. i i problem 5.7 consider a multivariate regression problem where we predict ten outputs, so y∈ r10, and model each with an independent normal distribution where the means µ are pre- d dicted by the network, and variances σ2 are constant. write an expression for the likeli- hood pr(y|f[x,ϕ]). show that minimizing the negative log-likelihood of this model is still equivalent to minimizing a sum of squared terms if we don’t estimate the variance σ2. problem 5.8∗ construct a loss function for making multivariate predictions y ∈ rdi based on independent normal distributions with different variances σ2 for each dimension. assume d a heteroscedastic model so that both the means µ and variances σ2 vary as a function of the d d data. problem 5.9∗ consider a multivariate regression problem in which we predict the height of a person in meters and their weight in kilos from data x. here, the units take quite different ranges. what problems do you see this causing? propose two solutions to these problems. problem 5.10 extend the model from problem 5.3 to predict both the wind direction and the wind speed and define the associated loss function. 
This work is subject to a Creative Commons CC-BY-NC-ND license. (C) MIT Press.

Chapter 6
Fitting models

Chapters 3 and 4 described shallow and deep neural networks. These represent families of piecewise linear functions, where the parameters determine the particular function. Chapter 5 introduced the loss: a single number representing the mismatch between the network predictions and the ground truth for a training set. The loss depends on the network parameters, and this chapter considers how to find the parameter values that minimize this loss. This is known as learning the network's parameters or simply as training or fitting the model. The process is to choose initial parameter values and then iterate the following two steps: (i) compute the derivatives (gradients) of the loss with respect to the parameters, and (ii) adjust the parameters based on the gradients to decrease the loss. After many iterations, we hope to reach the overall minimum of the loss function.

This chapter tackles the second of these steps; we consider algorithms that adjust the parameters to decrease the loss. Chapter 7 discusses how to initialize the parameters and compute the gradients for neural networks.

6.1 Gradient descent

To fit a model, we need a training set {x_i, y_i} of input/output pairs. We seek parameters ϕ for the model f[x_i, ϕ] that map the inputs x_i to the outputs y_i as well as possible. To this end, we define a loss function L[ϕ] that returns a single number that quantifies the mismatch in this mapping. The goal of an optimization algorithm is to find parameters ϕ̂ that minimize the loss:

    ϕ̂ = argmin_ϕ [ L[ϕ] ].    (6.1)

There are many families of optimization algorithms, but the standard methods for training neural networks are iterative. These algorithms initialize the parameters heuristically and then adjust them repeatedly in such a way that the loss decreases.

Draft: please send errata to [email protected].

The simplest method in this class is gradient descent.
This starts with initial parameters ϕ = [ϕ₀, ϕ₁, ...]