Vishwas1 committed (verified)
Commit 05e9370 · 1 Parent(s): 59b2d74

Upload dataset_chunk_46.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_46.csv +2 -0
dataset_chunk_46.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ ", continuous, y∈(−π,π] von mises predicting circular direction univariate, discrete, y∈{0,1} bernoulli binary binary classification univariate, discrete, y∈{1,2,...,k} categorical multiclass bounded classification univariate, discrete, y∈[0,1,2,3,...] poisson predicting bounded below event counts multivariate, discrete, y∈perm[1,2,...,k] plackett-luce ranking permutation figure 5.11 distributions for loss functions for different prediction types. whenweminimizethenegativelogprobability,thisproductbecomesasumofterms: xi h i xi x h i l[ϕ]=− log pr(y |f[x ,ϕ]) =− log pr(y |f [x ,ϕ]) . (5.26) i i id d i i=1 i=1 d where y is the dth output from the ith training example. id tomaketwoormorepredictiontypessimultaneously,wesimilarlyassumetheerrors in each are independent. for example, to predict wind direction and strength, we might problems5.7–5.10 choose the von mises distribution (defined on circular domains) for the direction and the exponential distribution (defined on positive real numbers) for the strength. the independence assumption implies that the joint likelihood of the two predictions is the product of individual likelihoods. these terms will become additive when we compute the negative log-likelihood. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.5.7 cross-entropy loss 71 figure 5.12cross-entropymethod. a)empiricaldistributionoftrainingsamples (arrows denote dirac delta functions). b) model distribution (a normal distri- bution with parameters θ =µ,σ2). in the cross-entropy approach, we minimize thedistance(kldivergence)betweenthesetwodistributionsasafunctionofthe model parameters θ. 5.7 cross-entropy loss in this chapter, wedeveloped loss functions that minimize negative log-likelihood. how- ever, the term cross-entropy loss is also commonplace. in this section, we describe the cross-entropy loss and show that it is equivalent to using negative log-likelihood. thecross-entropylossisbasedontheideaoffindingparametersθ thatminimizethe distancebetweentheempiricaldistributionq(y)oftheobserveddatay andamodeldis- tribution pr(y|θ) (figure 5.12). the distance between two probability distributions q(z) appendixc.5.1 and p(z) can be evaluated using the kullback-leibler (kl) divergence: kldivergence z z (cid:2) (cid:3) ∞ (cid:2) (cid:3) ∞ (cid:2) (cid:3) d q||p = q(z)log q(z) dz− q(z)log p(z) dz. (5.27) kl −∞ −∞ now consider that we observe an empirical data distribution at points {y }i . we i i=1 can describe this as a weighted sum of point masses: xi 1 q(y)= δ[y−y ], (5.28) i i i=1 where δ[•] is the dirac delta function. we want to minimize the kl divergence between appendixb.1.3 the model distribution pr(y|θ) and this empirical distribution: diracdelta function (cid:20)z z (cid:21) ∞ (cid:2) (cid:3) ∞ (cid:2) (cid:3) θˆ = argmin q(y)log q(y) dy− q(y)log pr(y|θ) dy θ (cid:20) −z∞ −∞(cid:21) ∞ (cid:2) (cid:3) = argmin − q(y)log pr(y|θ) dy , (5.29) θ −�"