Vishwas1 committed on
Commit 5d480f7 · verified · 1 Parent(s): 641e20f

Upload dataset_chunk_48.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_48.csv +2 -0
dataset_chunk_48.csv ADDED
@@ -0,0 +1,2 @@
 
 
 
+ text
+ "that parameterizes the de- gree of robustness. when interpreted in a probabilistic context, it yields a family of univariate probability distributions that includes the normal and cauchy distributions as special cases. estimating quantiles: sometimes, we may not want to estimate the mean or median in a regression task but may instead want to predict a quantile. for example, this is useful for risk models, where we want to know that the true value will be less than the predicted value 90% of the time. this is known as quantile regression (koenker & hallock, 2001). this could be done by fitting a heteroscedastic regression model and then estimating the quantile based on the predicted normal distribution. alternatively, the quantiles can be estimated directly using quantile loss (also known as pinball loss). in practice, this minimizes the absolute deviations of the data from the model but weights the deviations in one direction more than the other. recentworkhasinvestigatedsimultaneouslypredictingmultiplequantilestogetanideaofthe overall distribution shape (rodrigues & pereira, 2020). class imbalance and focal loss: lin et al. (2017c) address data imbalance in classification problems. ifthenumberofexamplesforsomeclassesismuchgreaterthanforothers,thenthe standardmaximumlikelihoodlossdoesnotworkwell;themodelmayconcentrateonbecoming more confident about well-classified examples from the dominant classes and classify less well- represented classes poorly. lin et al. (2017c) introduce focal loss, which adds a single extra parameter that down-weights the effect of well-classified examples to improve performance. learning to rank: cao et al. (2007), xia et al. (2008), and chen et al. (2009) all used the plackett-lucemodelinlossfunctionsforlearningtorankdata. thisisthelistwiseapproachto learningtorankasthemodelingestsanentirelistofobjectstoberankedatonce. alternative approaches are the pointwise approach, in which the model ingests a single object, and the pairwise approach, where the model ingests pairs of objects. chen et al. (2009) summarize different approaches for learning to rank. other data types: fan et al. (2020) use a loss based on the beta distribution for predicting values between zero and one. jacobs et al. (1991) and bishop (1994) investigated mixture density networks for multimodal data. these model the output as a mixture of gaussians draft: please send errata to [email protected] 5 loss functions figure 5.13 the von mises distribu- tion is defined over the circular do- main (−π,π]. it has two parameters. the mean µ determines the position of the peak. the concentration κ > 0 acts like th√e inverse of the vari- ance. hence 1/ κ is roughly equivalent to the standard deviation in a normal distribution. (see figure 5.14) that is conditional on the input. prokudin et al. (2018) used the von mises distributiontopredictdirection(seefigure5.13). fallahetal.(2009)constructedlossfunctions forpredictioncountsusingthepoissondistribution(seefigure5.15). ngetal.(2017)usedloss functions based on the gamma distribution to predict duration. non-probabilistic approaches: it is not strictly necessary to adopt the probabilistic ap- proachdiscussedinthischapter,butthishasbecomethedefaultinrecentyears;anylossfunc- tion that aims to reduce the distance between the model output and the training outputs will suffice,anddistancecanbedefinedinanywaythatseemssensible. 
Non-probabilistic approaches: It is not strictly necessary to adopt the probabilistic approach discussed in this chapter, although it has become the default in recent years; any loss function that aims to reduce the distance between the model output and the training outputs will suffice, and distance can be defined in any way that seems sensible. There are several well-known non-probabilistic machine learning models for classification, including support vector machines (Vapnik, 1995; Cristianini & Shawe-Taylor, 2000), which use hinge loss, and AdaBoost (Freund & Schapire, 1997), which uses exponential loss.

Problems

Problem 5.1 Show that the logistic sigmoid function sig[z] maps z = −∞ to 0, z = 0 to 0.5, and z = ∞ to 1, where:

sig[z] = 1 / (1 + exp[−z]).   (5.32)

Problem 5.2 The loss L for binary classification for a single training pair {x, y} is:"
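A quick numerical check of the limits in Problem 5.1 (an editorial sketch, not part of the original chunk):

```python
import numpy as np

def sig(z):
    # Logistic sigmoid, equation 5.32.
    return 1.0 / (1.0 + np.exp(-z))

print(sig(-100.0))  # ~3.7e-44: tends to 0 as z -> -infinity
print(sig(0.0))     # exactly 0.5
print(sig(100.0))   # rounds to 1.0 in float64: tends to 1 as z -> +infinity
```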