Dataset columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
238,776
What is a Moment Generating Function (MGF)? Can you explain it in layman's terms and along with a simple & easy example? Please, limit using formal math notations as far as possible.
Let's assume that an equation-free intuition is not possible, and still insist on boiling down the math to the very essentials to get an idea of what's going on: we are trying to obtain the statistical raw moments, which, after the obligatory reference to physics, we define as the expected value of a power of a random variable. For a continuous random variable, the raw $k$-th moment is, by LOTUS: \begin{align}\large \color{red}{\mathbb{E}\left[{X^k}\right]} &= \displaystyle\int_{-\infty}^{\infty}\color{blue}{x^k}\,\,\color{green}{\text{pdf}(x)}\,\,\,dx\tag{1}\end{align} The moment generating function, $$M_X(t):=\mathbb E\big[e^{tX}\big],$$ is a way to work around this integral (Eq. 1) by, instead, carrying out: \begin{align} \large \color{blue}{\mathbb{E}\left[e^{\,tX}\right]}&=\displaystyle \int_{-\infty}^{\infty}\color{blue}{e^{tx}}\,\color{green}{\text{pdf}(x)}\, dx\tag{2}\end{align} Why? Because it's easier, and there is a fantastic property of the MGF that can be seen by expanding the Maclaurin series of $\color{blue}{e^{\,tX}}$: $$e^{tX}=1+\frac{ X }{1!}\, t +\frac{ X^{2} }{2!}t^{2} +\frac{ X^{3} }{3!} t^{3} +\cdots$$ Taking the expectation of both sides of this power series: $$\begin{align} M_X(t) &= \color{blue}{\mathbb{E}\left[e^{\,tX}\right]} \\[1.5ex] &=1 + \frac{\color{red}{\mathbb{E} \left[X\right]}}{1!} \, t \, + \frac{\color{red}{\mathbb{E} \left[X^2\right]}}{2!} \, t^2 \, + \frac{\color{red}{\mathbb{E} \left[X^3\right]}}{3!} \, t^3 \, + \cdots\tag{3} \end{align}$$ the raw moments appear "perched" on this polynomial "clothesline", ready to be culled by simply differentiating $k$ times and evaluating at zero, once we go through the (easier) integration in Eq. (2) just once for all moments. The fact that it is an easier integration is most apparent when the pdf is an exponential. To recover the $k$-th moment: $$M_X^{(k)}(0)=\frac{d^k}{dt^k}M_X(t)\Bigr|_{t=0}$$

The fact that eventually there is a need to differentiate means it is not a free lunch: in the end, the MGF is a two-sided Laplace transform of the pdf with a changed sign in the exponent, $$\mathcal L \{\text{pdf}(x)\}(s) =\int_{-\infty}^{\infty}e^{-sx}\,\text{pdf}(x)\, dx,$$ such that $$M_X(t)=\mathcal L\{\text{pdf}(x)\}(-t).\tag{4}$$ This, in effect, gives us a physics avenue to the intuition. The Laplace transform is acting on the $\color{green}{\text{pdf}}$ and decomposing it into moments. The similarity to a Fourier transform is inescapable: a Fourier transform maps a function to a new function on the real line, and a Laplace transform maps a function to a new function on the complex plane. The Fourier transform expresses a function or signal as a series of frequencies, while the Laplace transform resolves a function into its moments. In fact, a different way of obtaining moments is through a Fourier transform (the characteristic function). The exponential term in the Laplace transform is in general of the form $e^{-st}$ with $s=\sigma + i\,\omega$, corresponding to real exponentials and imaginary sinusoidals, and yielding plots such as this:

[From The Scientist and Engineer's Guide to Signal Processing by Steven W. Smith]

Therefore the $M_X(t)$ function decomposes the $\text{pdf}$, in a sense, into its "constituent frequencies" when $\sigma=0.$ From Eq. (4), writing $s=-t$: \begin{align}\require{cancel} M_X(t)&=\mathbb E\big[e^{-sX}\big]\\[2ex] &=\displaystyle \int_{-\infty}^{\infty}{e^{-sx}}\,\text{pdf}(x)\, dx\\[2ex] &=\displaystyle \int_{-\infty}^{\infty}{e^{-(\sigma+i\omega)x}}\,\text{pdf}(x)\, dx\\[2ex] &=\displaystyle \int_{-\infty}^{\infty}\cancel{e^{-\sigma x}}\,\color{red}{e^{-i\omega x}\,\text{pdf}(x)\, dx} \end{align} which, with $\sigma=0$, leaves us with the improper integral of the part of the expression in red, corresponding to the Fourier transform of the pdf. In general, the intuition for the poles of the Laplace transform of a function is that they provide information about the exponential (decay) and frequency components of the function (in this case, the pdf).

In response to the question in the comments about the switch from $X^k$ to $e^{tX}$: this is a completely strategic move; one expression does not follow from the other. Here is an analogy: We have a car of our own and we are free to drive into the city every time we need to take care of some business (read: integrating Eq. $(1)$, no matter how tough, for every separate, single moment). Instead, we can do something completely different: we can drive to the nearest subway station (read: solve Eq. $(2)$ just once), and from there use public transportation to reach every single place we need to visit (read: take the $k$-th derivative of the integral in Eq. $(2)$ to extract whichever $k$-th moment we need), knowing, thanks to Eq. $(3)$, that all the moments are "hiding" in there, isolated by differentiating and evaluating at $0$.
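A concrete illustration (not part of the original answer) of the "integrate once, then differentiate for each moment" idea, using the exponential pdf mentioned above; this is a minimal SymPy sketch, and the use of conds='none' to drop the convergence condition $t<\lambda$ is my own shortcut:

import sympy as sp

x, t = sp.symbols('x t')
lam = sp.symbols('lambda', positive=True)

pdf = lam * sp.exp(-lam * x)                      # exponential pdf on [0, oo)

# Eq. (2): one integral gives the whole MGF
M = sp.simplify(sp.integrate(sp.exp(t * x) * pdf, (x, 0, sp.oo), conds='none'))
# M = lambda / (lambda - t), valid for t < lambda

# Eq. (3): the k-th raw moment is the k-th derivative of M evaluated at t = 0
for k in (1, 2, 3):
    print(k, sp.simplify(sp.diff(M, t, k).subs(t, 0)))
# prints 1/lambda, 2/lambda**2, 6/lambda**3, i.e. E[X^k] = k!/lambda^k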
{ "source": [ "https://stats.stackexchange.com/questions/238776", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109372/" ] }
239,295
Let $a_t$ and $b_t$ be white noise processes. Can we say $c_t=a_t+b_t$ is necessarily a white noise process?
No, you need more (at least under Hayashi's definition of white noise). For example, the sum of two independent white noise processes is white noise. Why is $a_t$ and $b_t$ white noise insufficient for $a_t+b_t$ to be white noise? Following Hayashi's Econometrics , a covariance stationary process $\{z_t\}$ is defined to be white noise if $\mathrm{E}[z_t] = 0$ and $\mathrm{Cov}\left(z_t, z_{t-j} \right) = 0$ for $j \neq 0$. Let $\{a_t\}$ and $\{b_t\}$ be white noise processes. Define $c_t = a_t + b_t$. Trivially we have $\mathrm{E}[c_t] = 0$. Checking the covariance condition: \begin{align*} \mathrm{Cov} \left( c_t, c_{t-j} \right) &= \mathrm{Cov} \left( a_t, a_{t-j}\right) + \mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) + \mathrm{Cov} \left( b_t, b_{t-j}\right) \end{align*} Applying that $\{a_t\}$ and $\{b_t\}$ are white noise: \begin{align*} \mathrm{Cov} \left( c_t, c_{t-j} \right) &= \mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) \end{align*} So whether $\{c_t\}$ is white noise depends on whether $\mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) = 0$ for all $j\neq 0$. Example where sum of two white noise processes is not white noise: Let $\{a_t\}$ be white noise. Let $b_t = a_{t-1}$. Observe that process $\{b_t\}$ is also white noise. Let $c_t = a_t + b_t$, hence $c_t = a_t + a_{t-1}$, and observe that process $\{c_t\}$ is not white noise.
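A quick numerical check of the counterexample above (my own sketch, not part of the original answer): with $b_t = a_{t-1}$, the lag-1 autocorrelation of $c_t$ comes out near $0.5$, so $\{c_t\}$ is not white noise.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
a = rng.normal(size=n + 1)        # white noise a_t
b = a[:-1]                        # b_t = a_{t-1}, also white noise
c = a[1:] + b                     # c_t = a_t + a_{t-1}

lag1 = np.corrcoef(c[1:], c[:-1])[0, 1]
print(round(lag1, 3))             # about 0.5, so Cov(c_t, c_{t-1}) != 0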
{ "source": [ "https://stats.stackexchange.com/questions/239295", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/134089/" ] }
239,481
I understand the relation between Principal Component Analysis and Singular Value Decomposition at an algebraic/exact level. My question is about the scikit-learn implementation . The documentation says: " [TruncatedSVD] is very similar to PCA, but operates on sample vectors directly, instead of on a covariance matrix. ", which would reflect the algebraic difference between both approaches. However, it later says: " This estimator [TruncatedSVD] supports two algorithm: a fast randomized SVD solver, and a “naive” algorithm that uses ARPACK as an eigensolver on (X * X.T) or (X.T * X), whichever is more efficient. ". Regarding PCA , it says: "Linear dimensionality reduction using Singular Value Decomposition of the data to project it ...". And PCA implementation supports the same two algorithms (randomized and ARPACK) solvers plus another one, LAPACK. Looking into the code I can see that both ARPACK and LAPACK in both PCA and TruncatedSVD do svd on sample data X, ARPACK being able to deal with sparse matrices (using svds). So, aside from different attributes and methods and that PCA can additionally do exact full singular value decomposition using LAPACK, PCA and TruncatedSVD scikit-learn implementations seem to be exactly the same algorithm. First question: Is this correct? Second question: even though LAPACK and ARPACK use scipy.linalg.svd(X) and scipy.linalg.svds(X), being X the sample matrix, they compute the singular value decomposition or eigen-decomposition of $X^T*X$ or $X*X^T$ internally. While the "randomized" solver doesn't need to compute the product. (This is relevant in connection with numerical stability, see Why PCA of data by means of SVD of the data? ). Is this correct? Relevant code: PCA line 415. TruncatedSVD line 137.
PCA and TruncatedSVD scikit-learn implementations seem to be exactly the same algorithm. No: PCA is (truncated) SVD on centered data (by per-feature mean subtraction). If the data is already centered, those two classes will do the same thing. In practice TruncatedSVD is useful on large sparse datasets that cannot be centered without making the memory usage explode. numpy.linalg.svd and scipy.linalg.svd both rely on LAPACK _GESDD described here: http://www.netlib.org/lapack/lug/node32.html (divide-and-conquer driver). scipy.sparse.linalg.svds relies on ARPACK to do an eigenvalue decomposition of X^T X or X X^T (depending on the shape of the data) via the Arnoldi iteration method. The HTML user guide of ARPACK has broken formatting which hides the computational details, but the Arnoldi iteration is well described on Wikipedia: https://en.wikipedia.org/wiki/Arnoldi_iteration Here is the code for the ARPACK-based SVD in scipy: https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/eigen/arpack/arpack.py#L1642 (search for the string "def svds" in case the line numbers change in the source code).
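A small sketch of the centering point (my own toy data, not from the original answer): on per-feature-centered data the two estimators recover the same components, up to sign flips.

import numpy as np
from sklearn.decomposition import PCA, TruncatedSVD

rng = np.random.RandomState(0)
X = rng.rand(100, 20)
Xc = X - X.mean(axis=0)                                   # per-feature mean subtraction

pca = PCA(n_components=3, svd_solver="full").fit(X)       # PCA centers internally
tsvd = TruncatedSVD(n_components=3, algorithm="arpack", random_state=0).fit(Xc)

# Same components; individual component signs may flip
print(np.allclose(np.abs(pca.components_), np.abs(tsvd.components_), atol=1e-6))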
{ "source": [ "https://stats.stackexchange.com/questions/239481", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53653/" ] }
239,937
I'm learning about the Empirical Cumulative Distribution Function, but I still don't understand why it is called 'Empirical'. Is there any difference between the empirical CDF and the CDF?
Let $X$ be a random variable. The cumulative distribution function $F(x)$ gives $P(X \leq x)$. An empirical cumulative distribution function $G(x)$ gives $P(X \leq x)$ based on the observations in your sample. The distinction is which probability measure is used. For the empirical CDF, you use the probability measure defined by the frequency counts in an empirical sample.

Simple example (coin flip): Let $X$ be a random variable denoting the result of a single coin flip where $X=1$ denotes heads and $X=0$ denotes tails. The CDF for a fair coin is given by: $$ F(x) = \left\{ \begin{array}{ll} 0 & \text{for } x < 0\\ \frac{1}{2} & \text{for } 0 \leq x < 1 \\1 & \text{for } 1 \leq x \end{array} \right. $$ If you flipped 2 heads and 1 tail, the empirical CDF would be: $$ G(x) = \left\{ \begin{array}{ll} 0 & \text{for } x < 0\\ \frac{1}{3} & \text{for } 0 \leq x < 1 \\1 & \text{for } 1 \leq x \end{array} \right. $$ The empirical CDF reflects that in your sample, $1/3$ of your flips were tails (equivalently, $1 - G(x) = 2/3$ of your flips were heads).

Another example ($F$ is the CDF of the normal distribution): Let $X$ be a normally distributed random variable with mean $0$ and standard deviation $1$. The CDF is given by: $$F(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt$$ Let's say you had 3 IID draws and obtained the values $x_1 < x_2 < x_3$. The empirical CDF would be: $$ G(y) = \left\{ \begin{array}{ll} 0 & \text{for } y < x_1\\ \frac{1}{3} & \text{for } x_1 \leq y < x_2 \\\frac{2}{3} & \text{for } x_2 \leq y < x_3 \\1 & \text{for } x_3 \leq y \end{array} \right. $$ With enough IID draws (and certain regularity conditions satisfied), the empirical CDF converges on the underlying CDF of the population.
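A minimal NumPy sketch of the coin-flip example (not part of the original answer; ecdf is a hypothetical helper, not a library function):

import numpy as np

def ecdf(sample, x):
    """Empirical CDF G: fraction of sample values <= each x."""
    sample = np.asarray(sample, dtype=float)
    x = np.atleast_1d(x).astype(float)
    return (sample[:, None] <= x[None, :]).mean(axis=0)

flips = [1, 1, 0]                              # two heads (X=1), one tail (X=0)
print(ecdf(flips, [-0.5, 0.0, 0.5, 1.0]))      # [0.     0.3333 0.3333 1.    ]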
{ "source": [ "https://stats.stackexchange.com/questions/239937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/134556/" ] }
239,973
I am working on thousands of datasets. Many of them are "unbalanced"; either a multi-class list with highly skewed distribution (For example, three categories with the ratio of 3500:300:4 samples) or a continuous number with skewed distribution. I am looking for some metric that can say " How badly unbalanced " the dataset is. Is there such a metric? Eventually, I want to score these datasets according to their balanced metric and provide a different balancing/ machine learning solution for each of them. I prefer a python solution if it exists.
You could use the Shannon entropy to measure balance. On a data set of $n$ instances, if you have $k$ classes of size $c_i$, you can compute the entropy as follows: $$ H = -\sum_{ i = 1}^k \frac{c_i}{n} \log{ \frac{c_i}{n}}. $$ This is equal to:
- $0$ when there is one single class; in other words, it tends to $0$ when your data set is very unbalanced,
- $\log{k}$ when all your classes are balanced, each of size $\frac{n}{k}$.
Therefore, you could use the following measure of balance for a data set: $$ \mbox{Balance} = \frac{H}{\log{k}} = \frac{-\sum_{ i = 1}^k \frac{c_i}{n} \log{ \frac{c_i}{n}} } {\log{k}}, $$ which is equal to:
- $0$ for an unbalanced data set,
- $1$ for a balanced data set.
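Since the question asks for a Python solution, here is a minimal sketch of this balance measure; the function name and the convention of returning 0 for a single-class data set are my own choices:

import numpy as np
from collections import Counter

def balance(labels):
    """Entropy-based balance: 1.0 = perfectly balanced, values near 0 = very unbalanced."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    n, k = counts.sum(), len(counts)
    if k <= 1:
        return 0.0                      # convention: a single class is maximally unbalanced
    p = counts / n
    return float(-(p * np.log(p)).sum() / np.log(k))

print(balance([0] * 3500 + [1] * 300 + [2] * 4))    # about 0.26, heavily unbalanced
print(balance([0] * 100 + [1] * 100 + [2] * 100))   # 1.0, perfectly balanced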
{ "source": [ "https://stats.stackexchange.com/questions/239973", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6932/" ] }
240,305
Are there any general guidelines on where to place dropout layers in a neural network?
In the original paper that proposed dropout layers, by Hinton (2012) , dropout (with p=0.5) was used on each of the fully connected (dense) layers before the output; it was not used on the convolutional layers. This became the most commonly used configuration. More recent research has shown some value in applying dropout also to convolutional layers, although at much lower levels: p=0.1 or 0.2. Dropout was used after the activation function of each convolutional layer: CONV->RELU->DROP.
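A hedged PyTorch sketch of these placements (a toy architecture of my own, assuming 3-channel 32x32 inputs; the layer sizes are arbitrary): light dropout after each CONV->RELU, heavier dropout (p=0.5) on the dense layers.

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.Dropout(p=0.1),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)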
{ "source": [ "https://stats.stackexchange.com/questions/240305", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12359/" ] }
241,381
Are there any "non-parametric" clustering methods for which we don't need to specify the number of clusters? And other parameters like the number of points per cluster, etc.
Clustering algorithms that require you to pre-specify the number of clusters are a small minority. There are a huge number of algorithms that don't. They are hard to summarize; it's a bit like asking for a description of any organisms that aren't cats.

Clustering algorithms are often categorized into broad kingdoms:

1. Partitioning algorithms (like k-means and its progeny)
2. Hierarchical clustering (as @Tim describes)
3. Density-based clustering (such as DBSCAN)
4. Model-based clustering (e.g., finite Gaussian mixture models, or Latent Class Analysis)

There can be additional categories, and people can disagree with these categories and which algorithms go in which category, because this is heuristic. Nevertheless, something like this scheme is common. Working from this, it is primarily only the partitioning methods (1) that require pre-specification of the number of clusters to find. What other information needs to be pre-specified (e.g., the number of points per cluster), and whether it seems reasonable to call various algorithms 'nonparametric', is likewise highly variable and hard to summarize.

Hierarchical clustering does not require you to pre-specify the number of clusters, the way that k-means does, but you do select a number of clusters from your output. On the other hand, DBSCAN doesn't require either (but it does require specification of a minimum number of points for a 'neighborhood', although there are defaults, so in some sense you could skip specifying that), which does put a floor on the number of patterns in a cluster. GMM doesn't even require any of those three, but does require parametric assumptions about the data-generating process. As far as I know, there is no clustering algorithm that never requires you to specify a number of clusters, a minimum number of data per cluster, or any pattern / arrangement of data within clusters. I don't see how there could be.

It might help you to read an overview of different types of clustering algorithms. The following might be a place to start: Berkhin, P. "Survey of Clustering Data Mining Techniques" (pdf)
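As a small illustration of a method that does not take a number of clusters, here is a hedged scikit-learn DBSCAN sketch on synthetic data; the eps and min_samples values are arbitrary choices, not recommendations.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.5, random_state=0)

# No number of clusters is specified; instead we give a neighborhood radius (eps)
# and a minimum neighborhood size (min_samples).
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
n_clusters = len(set(labels) - {-1})     # -1 marks noise points
print(n_clusters)                        # typically recovers the 4 generated blobs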
{ "source": [ "https://stats.stackexchange.com/questions/241381", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/132532/" ] }
241,985
I have been studying LSTMs for a while. I understand at a high level how everything works. However, going to implement them using Tensorflow I've noticed that BasicLSTMCell requires a number of units (i.e. num_units ) parameter. From this very thorough explanation of LSTMs, I've gathered that a single LSTM unit is one of the following which is actually a GRU unit. I assume that parameter num_units of the BasicLSTMCell is referring to how many of these we want to hook up to each other in a layer. That leaves the question - what is a "cell" in this context? Is a "cell" equivalent to a layer in a normal feed-forward neural network?
The terminology is unfortunately inconsistent. num_units in TensorFlow is the number of hidden states, i.e. the dimension of $h_t$ in the equations you gave. Also, from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard9/tf.nn.rnn_cell.RNNCell.md : The definition of cell in this package differs from the definition used in the literature. In the literature, cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units. "LSTM layer" is probably more explicit, example : def lstm_layer(tparams, state_below, options, prefix='lstm', mask=None): nsteps = state_below.shape[0] if state_below.ndim == 3: n_samples = state_below.shape[1] else: n_samples = 1 assert mask is not None […]
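The BasicLSTMCell API in the question is from an older TensorFlow; as a rough present-day sketch (my substitution, not the original answer's code), the analogous argument in Keras is units, and it sets the dimension of the hidden state $h_t$:

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.LSTM(units=128)      # 128 = dimension of h_t, not a number of layers
x = np.zeros((4, 10, 7), dtype="float32")    # (batch, time steps, input features)
print(layer(x).shape)                        # (4, 128): one 128-dim h_T per sequence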
{ "source": [ "https://stats.stackexchange.com/questions/241985", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
242,004
An epoch in stochastic gradient descent is defined as a single pass through the data. For each SGD minibatch, $k$ samples are drawn, the gradient computed and parameters are updated. In the epoch setting, the samples are drawn without replacement. But this seems unnecessary. Why not draw each SGD minibatch as $k$ random draws from the whole data set at each iteration? Over a large number of epochs, the small deviations of which samples are seen more or less often would seem to be unimportant.
In addition to Franck's answer about practicalities, and David's answer about looking at small subgroups – both of which are important points – there are in fact some theoretical reasons to prefer sampling without replacement. The reason is perhaps related to David's point (which is essentially the coupon collector's problem ). In 2009, Léon Bottou compared the convergence performance on a particular text classification problem ($n = 781,265$). Bottou (2009). Curiously Fast Convergence of some Stochastic Gradient Descent Algorithms . Proceedings of the symposium on learning and data science. ( author's pdf ) He trained a support vector machine via SGD with three approaches: Random : draw random samples from the full dataset at each iteration. Cycle : shuffle the dataset before beginning the learning process, then walk over it sequentially, so that in each epoch you see the examples in the same order. Shuffle : reshuffle the dataset before each epoch, so that each epoch goes in a different order. He empirically examined the convergence $\mathbb E[ C(\theta_t) - \min_\theta C(\theta) ]$, where $C$ is the cost function, $\theta_t$ the parameters at step $t$ of optimization, and the expectation is over the shuffling of assigned batches. For Random, convergence was approximately on the order of $t^{-1}$ (as expected by existing theory at that point). Cycle obtained convergence on the order of $t^{-\alpha}$ (with $\alpha > 1$ but varying depending on the permutation, for example $\alpha \approx 1.8$ for his Figure 1). Shuffle was more chaotic, but the best-fit line gave $t^{-2}$, much faster than Random. This is his Figure 1 illustrating that: This was later theoretically confirmed by the paper: Gürbüzbalaban, Ozdaglar, and Parrilo (2015). Why Random Reshuffling Beats Stochastic Gradient Descent . arXiv:1510.08560 . ( video of invited talk at NIPS 2015 ) Their proof only applies to the case where the loss function is strongly convex, i.e. not to neural networks. It's reasonable to expect, though, that similar reasoning might apply to the neural network case (which is much harder to analyze).
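A minimal sketch (mine, not Bottou's code) of the three sampling schemes, written as mini-batch index generators:

import numpy as np

def minibatch_indices(n, batch_size, n_epochs, scheme, seed=0):
    """Yield index arrays under the 'random', 'cycle' and 'shuffle' schemes."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)                     # 'cycle': shuffled once, reused every epoch
    for _ in range(n_epochs):
        if scheme == "random":                     # draw with replacement at each step
            for _ in range(n // batch_size):
                yield rng.integers(0, n, size=batch_size)
        else:
            if scheme == "shuffle":                # reshuffle before every epoch
                order = rng.permutation(n)
            for start in range(0, n - batch_size + 1, batch_size):
                yield order[start:start + batch_size]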
{ "source": [ "https://stats.stackexchange.com/questions/242004", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22311/" ] }
242,109
With the following dataset, I wanted to see if the response (effect) changes with regard to sites, season, duration, and their interactions. Some online forums on statistics suggested that I go with Linear Mixed-Effects Models, but the problem is that since replicates are randomised within each station, I have little chance of collecting the sample from exactly the same spot in successive seasons (for example, repl-1 of s1 of post-monsoon may not be the same as that of monsoon). It is unlike clinical trials (with a within-subject design) where you measure the same subject repeatedly over seasons. However, considering sites and season as random factors, I ran the following commands and received warning messages:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  Model failed to converge: degenerate Hessian with 1 negative eigenvalues

Can anyone help me solve the issue? The code is given below:

library(lme4)
read.table(textConnection("duration season sites effect
4d mon s1 7305.91
4d mon s2 856.297
4d mon s3 649.93
4d mon s1 10121.62
4d mon s2 5137.85
4d mon s3 3059.89
4d mon s1 5384.3
4d mon s2 5014.66
4d mon s3 3378.15
4d post s1 6475.53
4d post s2 2923.15
4d post s3 554.05
4d post s1 7590.8
4d post s2 3888.01
4d post s3 600.07
4d post s1 6717.63
4d post s2 1542.93
4d post s3 1001.4
4d pre s1 9290.84
4d pre s2 2199.05
4d pre s3 1149.99
4d pre s1 5864.29
4d pre s2 4847.92
4d pre s3 4172.71
4d pre s1 8419.88
4d pre s2 685.18
4d pre s3 4133.15
7d mon s1 11129.86
7d mon s2 1492.36
7d mon s3 1375
7d mon s1 10927.16
7d mon s2 8131.14
7d mon s3 9610.08
7d mon s1 13732.55
7d mon s2 13314.01
7d mon s3 4075.65
7d post s1 11770.79
7d post s2 4254.88
7d post s3 753.2
7d post s1 11324.95
7d post s2 5133.76
7d post s3 2156.2
7d post s1 12103.76
7d post s2 3143.72
7d post s3 2603.23
7d pre s1 13928.88
7d pre s2 3208.28
7d pre s3 8015.04
7d pre s1 11851.47
7d pre s2 6815.31
7d pre s3 8478.77
7d pre s1 13600.48
7d pre s2 1219.46
7d pre s3 6987.5
"),header=T)->dat1

m1 = lmer(effect ~ duration + (1+duration|sites) + (1+duration|season), data=dat1, REML=FALSE)
"Solving" the issue you experience in the sense of not receiving warnings about failed convergence is rather straightforward: you do not use the default BOBYQA optimiser but instead you opt to use the Nelder-Mead optimisation routine used by default in earlier 1.0.x previous versions. Or you install the package optimx so you can directly an L-BFGS-B routine or nlminb (same as lme4 versions prior to ver. 1 ). For example: m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl(optimizer ="Nelder_Mead") library(optimx) m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='L-BFGS-B'))) m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='nlminb'))) all work fine (no warnings). The interesting questions are: why you got these warnings to begin with and why when you used REML = TRUE you got no warnings. Succinctly, 1. you received those warnings because you defined duration both as a fixed effect as well as random slope for the factor sites as well as season . The model effectively ran-out of the degrees of freedom to estimate the correlations between the slopes and the intercepts you defined. If you used a marginally simpler model like: m1 = lmer(effect~duration+ (1+duration|sites) + (0+duration|season) + (1|season), data=dat1, REML = FALSE) you would experience no convergence issues. This model would effectively estimate uncorrelated random intercepts and random slopes for each season . In addition, 2. when you defined REML = FALSE you used the Maximum Likelihood estimated instead of the Restricted Maximum Likelihood one. The REML estimates try to "factor out" the influence of the fixed effects $X$ before moving into finding the optimal random-effect variance structure (see the thread " What is "restricted maximum likelihood" and when should it be used? " for more detailed information on the matter). Computationally this procedure is essentially done by multiplying both parts of the original LME model equation $y = X\beta + Z\gamma + \epsilon$ by a matrix $K$ such that $KX = 0$, i.e. you change both the original $y$ to $Ky$ as well as the $Z$ to $KZ$. I strongly suspect that this effected the condition number of the design matrix $Z$ and as such help you out of the numerical hard-place you found yourself in the first place. A final note is that I am not sure whether it makes sense to use season as a random effect to begin with. After all there are only so many seasons so you might as well treat them as fixed effects.
{ "source": [ "https://stats.stackexchange.com/questions/242109", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/123019/" ] }
242,113
Assuming there is enough data, and all predictors and independent variable are positively correlated, in other words, every possible pairwise correlation is positive. Is it possible to end up with some negative coefficients in a multi-linear model fit? All variables are defined on a continuous scale. $$ y = \alpha x_1 + \beta x_2 $$ where: $cor(y, x1) > 0$, $cor(y, x2) > 0$, $cor(x1,x2) > 0$ Given the above can either a or b end up being negative?
"Solving" the issue you experience in the sense of not receiving warnings about failed convergence is rather straightforward: you do not use the default BOBYQA optimiser but instead you opt to use the Nelder-Mead optimisation routine used by default in earlier 1.0.x previous versions. Or you install the package optimx so you can directly an L-BFGS-B routine or nlminb (same as lme4 versions prior to ver. 1 ). For example: m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl(optimizer ="Nelder_Mead") library(optimx) m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='L-BFGS-B'))) m1 = lmer(effect~duration+(1+duration|sites)+(1+duration|season), data = dat1, REML = FALSE, control = lmerControl( optimizer ='optimx', optCtrl=list(method='nlminb'))) all work fine (no warnings). The interesting questions are: why you got these warnings to begin with and why when you used REML = TRUE you got no warnings. Succinctly, 1. you received those warnings because you defined duration both as a fixed effect as well as random slope for the factor sites as well as season . The model effectively ran-out of the degrees of freedom to estimate the correlations between the slopes and the intercepts you defined. If you used a marginally simpler model like: m1 = lmer(effect~duration+ (1+duration|sites) + (0+duration|season) + (1|season), data=dat1, REML = FALSE) you would experience no convergence issues. This model would effectively estimate uncorrelated random intercepts and random slopes for each season . In addition, 2. when you defined REML = FALSE you used the Maximum Likelihood estimated instead of the Restricted Maximum Likelihood one. The REML estimates try to "factor out" the influence of the fixed effects $X$ before moving into finding the optimal random-effect variance structure (see the thread " What is "restricted maximum likelihood" and when should it be used? " for more detailed information on the matter). Computationally this procedure is essentially done by multiplying both parts of the original LME model equation $y = X\beta + Z\gamma + \epsilon$ by a matrix $K$ such that $KX = 0$, i.e. you change both the original $y$ to $Ky$ as well as the $Z$ to $KZ$. I strongly suspect that this effected the condition number of the design matrix $Z$ and as such help you out of the numerical hard-place you found yourself in the first place. A final note is that I am not sure whether it makes sense to use season as a random effect to begin with. After all there are only so many seasons so you might as well treat them as fixed effects.
{ "source": [ "https://stats.stackexchange.com/questions/242113", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95411/" ] }
242,999
I was wondering if the reciprocal of P(X = 1) represents anything in particular?
Yes, it provides a 1-in-$n$ scale for probabilities. For example, the reciprocal of .01 is 100, so an event with probability .01 has a 1 in 100 chance of happening. This is a useful way to represent small probabilities, such as .0023, which is about 1 in 435.
{ "source": [ "https://stats.stackexchange.com/questions/242999", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/136625/" ] }
243,003
A meta-analysis includes a bunch of studies, all of which reported a P value greater than 0.05. Is it possible for the overall meta-analysis to report a P value less than 0.05? Under what circumstances? (I am pretty sure the answer is yes, but I'd like a reference or explanation.)
In theory, yes... The results of individual studies may be insignificant but viewed together, the results may be significant. In theory you can proceed by treating the results $y_i$ of study $i$ like any other random variable. Let $y_i$ be some random variable (eg. the estimate from study $i$ ). Then if $y_i$ are independent and $E[y_i]=\mu$ , you can consistently estimate the mean with: $$ \hat{\mu} = \frac{1}{n} \sum_i y_i $$ Adding more assumptions, let $\sigma^2_i$ be the variance of estimate $y_i$ . Then you can efficiently estimate $\mu$ with inverse variance weighting: $$\hat{\mu} = \sum_i w_i y_i \quad \quad w_i = \frac{1 / \sigma^2_i}{\sum_j 1 / \sigma^2_j}$$ In either of these cases, $\hat{\mu}$ may be statistically significant at some confidence level even if the individual estimates are not. BUT there may be big problems, issues to be cognizant of... If $E[y_i] \neq \mu$ then the meta-analysis may not converge to $\mu$ (i.e. the mean of the meta-analysis is an inconsistent estimator). For example, if there's a bias against publishing negative results, this simple meta-analysis may be horribly inconsistent and biased! It would be like estimating the probability that a coin flip lands heads by only observing the flips where it didn't land tails! $y_i$ and $y_j$ may not be independent. For example, if two studies $i$ and $j$ were based upon the same data, then treating $y_i$ and $y_j$ as independent in the meta-analysis may vastly underestimate the standard errors and overstate statistical significance. Your estimates would still be consistent, but the standard-errors need to reasonably account for cross-correlation in the studies. Combining (1) and (2) can be especially bad. For example, the meta-analysis of averaging polls together tends to be more accurate than any individual poll. But averaging polls together is still vulnerable to correlated error. Something that has come up in past elections is that young exit poll workers may tend to interview other young people rather than old people. If all the exit polls make the same error, then you have a bad estimate which you may think is a good estimate (the exit polls are correlated because they use the same approach to conduct exit polls and this approach generates the same error). Undoubtedly people more familiar with meta-analysis may come up with better examples, more nuanced issues, more sophisticated estimation techniques, etc..., but this gets at some of the most basic theory and some of the bigger problems. If the different studies make independent, random error, then meta-analysis may be incredibly powerful. If the error is systematic across studies (eg. everyone undercounts older voters etc...), then the average of the studies will also be off. If you underestimate how correlated studies are or how correlated errors are, you effectively over estimate your aggregate sample size and underestimate your standard errors. There are also all kinds of practical issues of consistent definitions etc...
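A small numeric sketch of the fixed-effect (inverse-variance weighted) calculation; the five studies are entirely invented and assumed independent, which is exactly the assumption the caveats above warn about:

import numpy as np
from scipy import stats

est = np.array([0.20, 0.15, 0.25, 0.18, 0.22])   # each study alone: |estimate / se| < 1.96
se = np.array([0.12, 0.11, 0.15, 0.12, 0.13])

w = 1 / se**2
pooled = np.sum(w * est) / np.sum(w)             # inverse-variance weighted mean
pooled_se = np.sqrt(1 / np.sum(w))               # valid only if the studies are independent
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))
print(round(pooled, 3), round(z, 2), round(p, 4))  # ~0.193, z ~ 3.5, p < 0.001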
{ "source": [ "https://stats.stackexchange.com/questions/243003", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25/" ] }
243,207
I have a very imbalanced dataset. I'm trying to follow the tuning advice and use scale_pos_weight but not sure how should I tune it. I can see that RegLossObj.GetGradient does: if (info.labels[i] == 1.0f) w *= param_.scale_pos_weight so a gradient of a positive sample would be more influential. However, according to the xgboost paper , the gradient statistic is always used locally = within the instances of a specific node in a specific tree: within the context of a node, to evaluate the loss reduction of a candidate split within the context of a leaf node, to optimize the weight given to that node So there's no way of knowing in advance what would be a good scale_pos_weight - it is a very different number for a node that ends up with 1:100 ratio between positive and negative instances, and for a node with a 1:2 ratio. Any hints?
Generally, scale_pos_weight is the ratio of number of negative class to the positive class. Suppose, the dataset has 90 observations of negative class and 10 observations of positive class, then ideal value of scale_pos_weight should be 9. See the doc: http://xgboost.readthedocs.io/en/latest/parameter.html
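A minimal sketch (toy data, untuned model) of computing this ratio and passing it to the classifier:

import numpy as np
from xgboost import XGBClassifier

y = np.array([0] * 90 + [1] * 10)             # 90 negatives, 10 positives
X = np.random.RandomState(0).rand(100, 5)     # toy features

spw = (y == 0).sum() / (y == 1).sum()         # 9.0 = negatives / positives
clf = XGBClassifier(scale_pos_weight=spw).fit(X, y)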
{ "source": [ "https://stats.stackexchange.com/questions/243207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/37793/" ] }
243,208
Scenario: Say we are given a " black box " which simply outputs a symbol once per second. There are an indefinite number of types of symbols with no way to know any possible upper limit to the number of types. You can imagine them like alphabet characters but where we don't know how many different types there are (could be many more than 26 or many fewer, we don't know). We know absolutely nothing about the way these symbols are being produced. We have no information at all - we must infer everything by observing the stream of symbols. Problem: Before the box is switched on (and starts producing symbols), we must come up with an algorithm which best predicts the next symbol at every step. We can assume that we have infinite computing power (i.e. algorithm efficiency is irrelevant). Thoughts: I think this problem has to do with inductive bias and potentially the " no free lunch " theorem. In my limited reading on these topics on the internet, people seem to suggest that you can't make any useful predictions without first holding some assumptions about the data stream. I may very well be mistaken, but it doesn't seem like that's correct. Imagine two algorithms: Guess that the next symbol will be the symbol that has occurred most frequently so far. Guess that the next symbol will be the symbol that has occurred least frequently so far. Given absolutely no assumptions about the data stream, it's hard to imagine the second algorithm out-performing the first one in general (i.e. across many trials with different black boxes). If this is true, the fact that some algorithms work better than others suggests that there is an "optimal" algorithm for this problem. As you can tell I've only got vague intuitions about this. Are there some assumptions hiding in my reasoning? Thanks for your help!
{ "source": [ "https://stats.stackexchange.com/questions/243208", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
244,012
Parzen window density estimation is described as $$ p(x)=\frac{1}{n}\sum_{i=1}^{n} \frac{1}{h^2} \phi \left(\frac{x_i - x}{h} \right) $$ where $n$ is number of elements in the vector, $x$ is a vector, $p(x)$ is a probability density of $x$, $h$ is dimension of the Parzen Window, and $\phi$ is a window function. My questions are: What is the basic difference between a Parzen Window Function and other density functions like Gaussian Function and so on? What is the role of the Window Function ($\phi$) in finding the density of $x$? Why can we plug other density functions in place of the Window Function? What is the role of $h$ in in finding the density of $x$?
Parzen window density estimation is another name for kernel density estimation. It is a nonparametric method for estimating a continuous density function from the data.

Imagine that you have some datapoints $x_1,\dots,x_n$ that come from a common unknown, presumably continuous, distribution $f$. You are interested in estimating the distribution given your data. One thing that you could do is simply to look at the empirical distribution and treat it as a sample equivalent of the true distribution. However, if your data is continuous, then most probably you would see each $x_i$ point appear only once in the dataset, so based on this, you would conclude that your data comes from a uniform distribution since each of the values has equal probability. Hopefully, you can do better than this: you can pack your data into some number of equally spaced intervals and count the values that fall into each interval. This method would be based on estimating a histogram. Unfortunately, with a histogram you end up with some number of bins rather than with a continuous distribution, so it is only a rough approximation.

Kernel density estimation is the third alternative. The main idea is that you approximate $f$ by a mixture of continuous distributions $K$ (using your notation, $\phi$), called kernels, that are centered at the $x_i$ datapoints and have scale (bandwidth) equal to $h$: $$ \hat{f_h}(x) = \frac{1}{nh} \sum_{i=1}^n K\Big(\frac{x-x_i}{h}\Big) $$ This is illustrated in the picture below, where a normal distribution is used as the kernel $K$ and different values of the bandwidth $h$ are used to estimate the distribution given seven datapoints (marked by the colorful lines on the top of the plots). The colorful densities on the plots are kernels centered at the $x_i$ points. Notice that $h$ is a relative parameter: its value is always chosen depending on your data, and the same value of $h$ may not give similar results for different datasets.

Kernel $K$ can be thought of as a probability density function, and it needs to integrate to unity. It also needs to be symmetric, so that $K(x) = K(-x)$, and consequently centered at zero. The Wikipedia article on kernels lists many popular kernels, like the Gaussian (normal distribution), Epanechnikov, rectangular (uniform distribution), etc. Basically any distribution meeting those requirements can be used as a kernel. Obviously, the final estimate will depend on your choice of kernel (but not that much) and on the bandwidth parameter $h$. The following thread How to interpret the bandwidth value in a kernel density estimation? describes the usage of bandwidth parameters in greater detail.

Saying this in plain English, what you assume here is that the observed points $x_i$ are just a sample and follow some distribution $f$ to be estimated. Since the distribution is continuous, we assume that there is some unknown but nonzero density around the near neighborhood of the $x_i$ points (the neighborhood is defined by the parameter $h$), and we use kernels $K$ to account for it. The more points there are in some neighborhood, the more density is accumulated around this region and so the higher the overall density of $\hat{f_h}$. The resulting function $\hat{f_h}$ can now be evaluated at any point $x$ (without a subscript) to obtain a density estimate for it; this is how we obtained the function $\hat{f_h}(x)$, which is an approximation of the unknown density function $f(x)$.

The nice thing about kernel densities is that, unlike histograms, they are continuous functions and are themselves valid probability densities, since they are a mixture of valid probability densities. In many cases this is as close as you can get to approximating $f$. The difference between a kernel density and other densities, such as the normal distribution, is that "usual" densities are mathematical functions, while a kernel density is an approximation of the true density estimated from your data, so it is not a "standalone" distribution.

I would recommend the two nice introductory books on this subject by Silverman (1986) and Wand and Jones (1995).

Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. London: Chapman & Hall/CRC.
Wand, M.P. and Jones, M.C. (1995). Kernel Smoothing. London: Chapman & Hall/CRC.
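A minimal NumPy sketch of the estimator $\hat{f_h}$ defined above (my own code, using a Gaussian kernel and an arbitrary bandwidth):

import numpy as np

def kde(grid, data, h):
    """Parzen / kernel density estimate with a Gaussian kernel and bandwidth h."""
    u = (grid[:, None] - data[None, :]) / h              # (x - x_i) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)         # Gaussian kernel
    return K.sum(axis=1) / (len(data) * h)               # (1 / nh) * sum_i K(...)

rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-4, 4, 401)
f_hat = kde(grid, data, h=0.4)
print(f_hat.sum() * (grid[1] - grid[0]))   # close to 1: the estimate is itself a density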
{ "source": [ "https://stats.stackexchange.com/questions/244012", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109372/" ] }
244,017
I'm reading through " An Introduction to Statistical Learning " . In chapter 2, they discuss the reason for estimating a function $f$ . 2.1.1 Why Estimate $f$ ? There are two main reasons we may wish to estimate f : prediction and inference . We discuss each in turn. I've read it over a few times, but I'm still partly unclear on the difference between prediction and inference. Could someone provide a (practical) example of the differences?
Inference: Given a set of data you want to infer how the output is generated as a function of the data. Prediction: Given a new measurement, you want to use an existing data set to build a model that reliably chooses the correct identifier from a set of outcomes. Inference: You want to find out what the effect of Age, Passenger Class and, Gender has on surviving the Titanic Disaster. You can put up a logistic regression and infer the effect each passenger characteristic has on survival rates. Prediction: Given some information on a Titanic passenger, you want to choose from the set $\{\text{lives}, \text{dies}\}$ and be correct as often as possible. (See bias-variance tradeoff for prediction in case you wonder how to be correct as often as possible.) Prediction doesn't revolve around establishing the most accurate relation between the input and the output, accurate prediction cares about putting new observations into the right class as often as possible. So the 'practical example' crudely boils down to the following difference: Given a set of passenger data for a single passenger the inference approach gives you a probability of surviving, the classifier gives you a choice between lives or dies. Tuning classifiers is a very interesting and crucial topic in the same way that correctly interpreting p-values and confidence intervals is.
{ "source": [ "https://stats.stackexchange.com/questions/244017", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137298/" ] }
244,059
I studied mathematics a decade ago, so I have a math and stats background, but this question is killing me. This question is still a bit philosophical to me. Why did statisticians develop all sort of techniques in order to work with random matrices? I mean, didn't a random vector solve the problem? If not, what is the mean of the diferent columns of a random matrix? Anderson (2003, Wiley) considers a random vector a special case of a random matrix with only one column. I don't see the point of having random matrices (and I'm sure that's because I'm ignorant). But, bear with me. Imagine I have a model with 20 random variables. If I want to compute the joint probability function, why should I picture them as a matrix instead of a vector? What am I missing? ps: I'm sorry for the poorly tagged question, but there were no tags for random-matrix and I can't create one yet! edit: changed matrix to matrices in the title
It depends which field you're in but, one of the big initial pushes for the study of random matrices came out of atomic physics, and was pioneered by Wigner. You can find a brief overview here . Specifically, it was the eigenvalues (which are energy levels in atomic physics) of random matrices that generated tons of interest because the correlations between eigenvalues gave insight into the emission spectrum of nuclear decay processes. More recently, there has been a large resurgence in this field, with the advent of the Tracy-Widom distribution/s for the largest eigenvalues of random matrices, along with stunning connections to seemingly unrelated fields, such as tiling theory , statistical physics, integrable systems , KPZ phenomena , random combinatorics and even the Riemann Hypothesis . You can find some more examples here . For more down-to-earth examples, a natural question to ask about a matrix of row vectors is what its PCA components might look like. You can get heuristic estimates for this by assuming the data comes from some distribution, and then looking at covariance matrix eigenvalues, which will be predicted from random matrix universality : regardless (within reason) of the distribution of your vectors, the limiting distribution of the eigenvalues will always approach a set of known classes. You can think of this as a kind of CLT for random matrices. See this paper for examples.
{ "source": [ "https://stats.stackexchange.com/questions/244059", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7347/" ] }
244,199
Can somebody explain how the properties of logs make it so you can do log linear regressions where the coefficients are interpreted as percentage changes?
For $x_2$ and $x_1$ close to each other, the percent change $\frac{x_2-x_1}{x_1}$ approximates the log difference $\log x_2 - \log x_1$ . Why does the percent change approximate the log difference? An idea from calculus is that you can approximate a smooth function with a line. The linear approximation is simply the first two terms of a Taylor Series . The first order Taylor Expansion of $\log(x)$ around $x=1$ is given by: $$ \log(x) \approx \log(1) + \frac{d}{dx} \left. \log (x) \right|_{x=1} \left( x - 1 \right)$$ The right hand side simplifies to $0 + \frac{1}{1}\left( x - 1\right)$ hence: $$ \log(x) \approx x-1$$ So for $x$ in the neighborhood of 1, we can approximate $\log(x)$ with the line $y = x - 1$ Below is a graph of $y = \log(x)$ and $y = x - 1$ . Example: $\log(1.02) = .0198 \approx 1.02 - 1$ . Now consider two variables $x_2$ and $x_1$ such that $\frac{x_2}{x_1} \approx 1$ . Then the log difference is approximately the percent change $\frac{x_2}{x_1} - 1 = \frac{x_2 - x_1}{x_1}$ : $$ \log x_2 - \log x_1 = \log\left( \frac{x_2}{x_1} \right) \approx \frac{x_2}{x_1} - 1 $$ The percent change is a linear approximation of the log difference! Why log differences? Often times when you're thinking in terms of compounding percent changes, the mathematically cleaner concept is to think in terms of log differences. When you're repeatedly multiplying terms together, it's often more convenient to work in logs and instead add terms together. Let's say our wealth at time $T$ is given by: $$ W_T = \prod_{t=1}^T (1 + R_t)$$ Then it might be more convenient to write: $$ \log W_T = \sum_{t=1}^T r_t $$ where $r_t = \log (1 + R_t) = \log W_t - \log W_{t-1}$ . Where are percent changes and the log difference NOT the same? For big percent changes, the log difference is not the same thing as the percent change because approximating the curve $y = \log(x)$ with the line $y = x - 1$ gets worse and worse the further you get from $x=1$ . For example: $$ \log\left(1.6 \right) - \log(1) = .47 \neq 1.6 - 1$$ What's the log difference in this case? One way to think about it is that a difference in logs of .47 is equivalent to an accumulation of 47 different .01 log differences, which is approximately 47 1% changes all compounded together. \begin{align*} \log(1.6) - \log(1) &= 47 \left( .01 \right) \\ & \approx 47 \left( \log(1.01) \right) \end{align*} Then exponentiate both sides to get: $$ 1.6 \approx 1.01 ^{47}$$ A log difference of .47 is approximately equivalent to 47 different 1% increases compounded, or even better, 470 different .1% increases all compounded etc... Several of the answers here make this idea more explicit.
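A two-line numeric check (mine, not the original answer's) of the approximation and of where it breaks down:

import numpy as np

print((102 - 100) / 100, np.log(102 / 100))   # 0.0200 vs 0.0198: close for small changes
print((160 - 100) / 100, np.log(160 / 100))   # 0.60 vs 0.47: the approximation breaks down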
{ "source": [ "https://stats.stackexchange.com/questions/244199", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113447/" ] }
244,507
I am working with many algorithms: RandomForest, DecisionTrees, NaiveBayes, SVM (kernel=linear and rbf), KNN, LDA and XGBoost. All of them were pretty fast except for SVM. That is when I got to know that it needs feature scaling to work faster. Then I started wondering if I should do the same for the other algorithms.
In general, algorithms that exploit distances or similarities (e.g. in the form of scalar product) between data samples, such as k-NN and SVM, are sensitive to feature transformations. Graphical-model based classifiers, such as Fisher LDA or Naive Bayes, as well as Decision trees and Tree-based ensemble methods (RF, XGB) are invariant to feature scaling, but still, it might be a good idea to rescale/standardize your data.
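A hedged scikit-learn sketch of the practical upshot (my own, arbitrary model choices): put the scaler and the distance/similarity-based model in one pipeline, while a tree ensemble can be left unscaled.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scaler fit inside CV folds
rf_clf = RandomForestClassifier()   # invariant to monotone per-feature rescaling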
{ "source": [ "https://stats.stackexchange.com/questions/244507", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/129645/" ] }
244,677
I've been studying some advanced statistics, but some concepts and differences between methods are hard to grasp. Let's say I have a large group of individuals, each with a set of variables like age, height, IQ and so on. They can belong either to a Criminal group or a NotCriminal group. If I wanted to evaluate which characteristics are more likely to influence whether someone is a criminal or not, should I use PCA or logistic regression?
The key difference between the two approaches:
- PCA will NOT consider the response variable, only the variance of the independent variables.
- Logistic regression will consider how each independent variable impacts the response variable.

We can construct an example in which PCA and logistic regression give completely different results, i.e., one method says a feature is important while the other says the opposite. Here is how we construct the example: independent variable $X_1$ has very small variance (see the left plot: $x_1$ and $x_2$ are on different scales), BUT it is closely related to the response (from the code, you can see $y$ is assigned based on $X_1$ plus uniform noise). Logistic regression will say it is very important (see the summary of the model in the code section), but PCA will say the opposite (see the biplot / right subfigure: the length of the $X_1$ arrow is very short).

Code (in case you want to run the same simulation):

set.seed(0)
n_data=200
x1=rnorm(n_data,sd=0.3)
x2=rnorm(n_data,sd=1)
y=ifelse(x1+0.1*runif(n_data)>0,1,2)

par(mfrow=c(1,2),cex=1.2)
plot(x1,x2,col=y,pch=20)

summary(glm(factor(y)~x1+x2-1,family = binomial()))

pr.out=princomp(cbind(x1,x2))
biplot(pr.out,xlabs=rep("*",200))

> summary(glm(factor(y)~x1+x2-1,family = binomial()))

Call:
glm(formula = factor(y) ~ x1 + x2 - 1, family = binomial())

Deviance Residuals:
     Min        1Q    Median        3Q       Max
-2.27753  -0.19392  -0.00118   0.05413   1.24053

Coefficients:
   Estimate Std. Error z value Pr(>|z|)
x1 -26.4414     4.8434  -5.459 4.78e-08 ***
x2  -0.4267     0.2975  -1.434    0.152
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 277.259  on 200  degrees of freedom
Residual deviance:  66.817  on 198  degrees of freedom
AIC: 70.817

Number of Fisher Scoring iterations: 8
{ "source": [ "https://stats.stackexchange.com/questions/244677", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137723/" ] }
244,917
I'm fairly new to Bayesian statistics and I came across a corrected correlation measure, SparCC, that uses the Dirichlet process in the backend of its algorithm. I have been trying to go through the algorithm step by step to really understand what is happening, but I am not sure exactly what the alpha vector parameter does in a Dirichlet distribution and how it normalizes the alpha vector parameter. The implementation is in Python using NumPy: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.dirichlet.html The docs say: "alpha : array. Parameter of the distribution (k dimension for sample of dimension k)."

My questions:
1. How do the alphas affect the distribution?
2. How are the alphas being normalized?
3. What happens when the alphas are not integers?

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Reproducibility
np.random.seed(0)

# Integer values for alphas
alphas = np.arange(10)
# array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

# Dirichlet Distribution
dd = np.random.dirichlet(alphas)
# array([ 0.        ,  0.0175113 ,  0.00224837,  0.1041491 ,  0.1264133 ,
#         0.06936311,  0.13086698,  0.15698674,  0.13608845,  0.25637266])

# Plot
ax = pd.Series(dd).plot()
ax.set_xlabel("alpha")
ax.set_ylabel("Dirichlet Draw")
The Dirichlet distribution is a multivariate probability distribution that describes $k\ge2$ variables $X_1,\dots,X_k$ , such that each $x_i \in (0,1)$ and $\sum_{i=1}^N x_i = 1$ , that is parametrized by a vector of positive-valued parameters $\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_k)$ . The parameters do not have to be integers, they only need to be positive real numbers. They are not "normalized" in any way, they are parameters of this distribution. The Dirichlet distribution is a generalization of the beta distribution into multiple dimensions, so you can start by learning about the beta distribution. Beta is a univariate distribution of a random variable $X \in (0,1)$ parameterized by parameters $\alpha$ and $\beta$ . The nice intuition about it comes if you recall that it is a conjugate prior for the binomial distribution and if we assume a beta prior parameterized by $\alpha$ and $\beta$ for the binomial distribution's probability parameter $p$ , then the posterior distribution of $p$ is also a beta distribution parameterized by $\alpha' = \alpha + \text{number of successes}$ and $\beta' = \beta + \text{number of failures}$ . So you can think of $\alpha$ and $\beta$ as of pseudocounts (they do not need to be integers) of successes and failures (check also this thread ). In the case of the Dirichlet distribution, it is a conjugate prior for the multinomial distribution . If in the case of the binomial distribution we can think of it in terms of drawing white and black balls with replacement from the urn, then in case of the multinomial distribution we are drawing with replacement $N$ balls appearing in $k$ colors, where each of colors of the balls can be drawn with probabilities $p_1,\dots,p_k$ . The Dirichlet distribution is a conjugate prior for $p_1,\dots,p_k$ probabilities and $\alpha_1,\dots,\alpha_k$ parameters can be thought of as pseudocounts of balls of each color assumed a priori (but you should read also about the pitfalls of such reasoning ). In Dirichlet-multinomial model $\alpha_1,\dots,\alpha_k$ get updated by summing them with observed counts in each category: $\alpha_1+n_1,\dots,\alpha_k+n_k$ in similar fashion as in case of beta-binomial model. The higher value of $\alpha_i$ , the greater "weight" of $X_i$ and the greater amount of the total "mass" is assigned to it (recall that in total it must be $x_1+\dots+x_k=1$ ). If all $\alpha_i$ are equal, the distribution is symmetric. If $\alpha_i < 1$ , it can be thought of as anti-weight that pushes away $x_i$ toward extremes, while when it is high, it attracts $x_i$ toward some central value (central in the sense that all points are concentrated around it, not in the sense that it is symmetrically central). If $\alpha_1 = \dots = \alpha_k = 1$ , then the points are uniformly distributed. This can be seen on the plots below, where you can see trivariate Dirichlet distributions (unfortunately we can produce reasonable plots only up to three dimensions) parameterized by (a) $\alpha_1 = \alpha_2 = \alpha_3 = 1$ , (b) $\alpha_1 = \alpha_2 = \alpha_3 = 10$ , (c) $\alpha_1 = 1, \alpha_2 = 10, \alpha_3 = 5$ , (d) $\alpha_1 = \alpha_2 = \alpha_3 = 0.2$ . The Dirichlet distribution is sometimes called a "distribution over distributions" since it can be thought of as a distribution of probabilities themselves. Notice that since each $x_i \in (0,1)$ and $\sum_{i=1}^k x_i = 1$ , then $x_i$ 's are consistent with the first and second axioms of probability . 
So you can use the Dirichlet distribution as a distribution of probabilities for discrete events described by distributions such as categorical or multinomial . It is not true that it is a distribution over any distributions, for example it is not related to probabilities of continuous random variables, or even some discrete ones (e.g. a Poisson distributed random variable describes probabilities of observing values that are any natural numbers, so to use a Dirichlet distribution over their probabilities, you'd need an infinite number of random variables $k$ ).
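To make the effect of the alphas concrete, here is a minimal NumPy sketch (the seed, sample size and particular alpha vectors are my own illustrative choices, not part of the question's code). It shows that the component means of Dirichlet draws come out close to $\alpha_i / \sum_j \alpha_j$, i.e. the alphas act as unnormalized pseudocounts, while every individual draw sums to one:

import numpy as np

rng = np.random.default_rng(0)

alpha_vectors = [
    np.array([1.0, 1.0, 1.0]),     # flat over the simplex
    np.array([10.0, 10.0, 10.0]),  # concentrated around (1/3, 1/3, 1/3)
    np.array([0.2, 0.2, 0.2]),     # pushed toward the corners
    np.array([1.0, 10.0, 5.0]),    # asymmetric: most mass on the second component
]

for alpha in alpha_vectors:
    draws = rng.dirichlet(alpha, size=5000)   # each row is a point on the simplex, summing to 1
    print("alpha =", alpha,
          "| mean of draws:", draws.mean(axis=0).round(3),
          "| alpha/sum(alpha):", (alpha / alpha.sum()).round(3))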
{ "source": [ "https://stats.stackexchange.com/questions/244917", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/92493/" ] }
245,063
First it was Brexit, now the US election. Many model predictions were off by a wide margin; are there lessons to be learned here? As late as 4 pm PST yesterday, the betting markets were still favoring Hillary 4 to 1. I take it that the betting markets, with real money on the line, should act as an ensemble of all the available prediction models out there. So it's not far-fetched to say these models didn't do a very good job. One explanation I saw was that voters were unwilling to identify themselves as Trump supporters. How could a model incorporate effects like that? One macro explanation I read is the rise of populism. The question then is how a statistical model could capture a macro trend like that. Are these prediction models putting too much weight on data from polls and sentiment, and not enough on where the country stands in a 100-year view? I am quoting a friend's comments.
In short, polling is not always easy. This election may have been the hardest. Any time we are trying to do statistical inference, a fundamental question is whether our sample is a good representation of the population of interest. A typical assumption that is required for many types of statistical inference is that of having our sample being a completely random sample from the population of interest (and often, we also need samples to be independent). If these assumptions hold true, we typically have good measures of our uncertainty based on statistical theory. But we definitively do not have these assumptions holding true with polls! We have exactly 0 samples from our population of interest: actual votes cast at election day. In this case, we cannot make any sort of valid inference without further, untestable assumptions about the data. Or at least, untestable until after election day. Do we completely give up and say "50%-50%!"? Typically, no. We can try to make what we believe are reasonable assumptions about how the votes will be cast. For example, maybe we want to believe that polls are unbiased estimates for the election day votes, plus some certain unbiased temporal noise (i.e., evolving public opinion as time passes). I'm not an expert on polling methods, but I believe this is the type of model 538 uses. And in 2012, it worked pretty well. So those assumptions were probably pretty reasonable. Unfortunately, there's no real way of evaluating those assumptions, outside strictly qualitative reasoning. For more discussion on a similar topic, see the topic of Non-Ignorable missingness. My theory for why polls did so poorly in 2016: the polls were not unbiased estimates of voter day behavior. That is, I would guess that Trump supporters (and likely Brexit supporters as well) were much more distrustful of pollsters. Remember that Mr. Trump actively denounced polls. As such, I think Trump supporters were less likely to report their voting intentions to pollsters than supporters of his opponents. I would speculate that this caused an unforeseen heavy bias in the polls. How could analysts have accounted for this when using the poll data? Based on the poll data alone, there is no real way to do this in a quantitative way. The poll data does not tell you anything about those who did not participate. However, one may be able to improve the polls in a qualitative way, by choosing more reasonable (but untestable) assumptions about the relation between polling data and election day behavior. This is non-trivial and the truly difficult part of being a good pollster (note: I am not a pollster). Also note that the results were very surprising to the pundits as well, so it's not like there were obvious signs that the assumptions were wildly off this time. Polling can be hard.
{ "source": [ "https://stats.stackexchange.com/questions/245063", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98646/" ] }
245,448
I am experimenting a bit with autoencoders, and with TensorFlow I created a model that tries to reconstruct the MNIST dataset. My network is very simple: X, e1, e2, d1, Y, where e1 and e2 are encoding layers, d1 and Y are decoding layers (and Y is the reconstructed output). X has 784 units, e1 has 100, e2 has 50, d1 has 100 again and Y 784 again. I am using sigmoids as activation functions for layers e1, e2, d1 and Y. Inputs are in [0,1] and so should be the outputs. Well, I tried using cross entropy as the loss function, but the output was always a blob, and I noticed that the weights from X to e1 would always converge to a zero-valued matrix. On the other hand, using mean squared error as the loss function would produce a decent result, and I am now able to reconstruct the inputs. Why is that so? I thought I could interpret the values as probabilities and therefore use cross entropy, but obviously I am doing something wrong.
I think the best answer to this is that the cross-entropy loss function is just not well-suited to this particular task. In taking this approach, you are essentially saying the true MNIST data is binary, and your pixel intensities represent the probability that each pixel is 'on.' But we know this is not actually the case. The incorrectness of this implicit assumption is then causing us issues. We can also look at the cost function and see why it might be inappropriate. Let's say our target pixel value is 0.8. If we plot the MSE loss and the cross-entropy loss $- [ (\text{target}) \log (\text{prediction}) + (1 - \text{target}) \log (1 - \text{prediction}) ]$ (normalising this so that its minimum is at zero), we can see that the cross-entropy loss is asymmetric. Why would we want this? Is it really worse to predict 0.9 for this 0.8 pixel than it is to predict 0.7? I would say it's maybe better, if anything. We could probably go into more detail and figure out why this leads to the specific blobs that you are seeing. I'd hazard a guess that it is because pixel intensities are above 0.5 on average in the region where you are seeing the blob. But in general this is a case of the implicit modelling assumptions you have made being inappropriate for the data. Hope that helps!
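For concreteness, here is a small Python sketch of the comparison described above (assuming NumPy and matplotlib; the target value 0.8 comes from the answer, everything else is illustrative). Both losses are minimized at a prediction of 0.8, but the shifted cross-entropy penalizes 0.9 more heavily than 0.7, while the MSE is symmetric:

import numpy as np
import matplotlib.pyplot as plt

target = 0.8
p = np.linspace(0.001, 0.999, 999)                       # candidate pixel predictions

mse = (p - target) ** 2
ce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
ce -= ce.min()                                           # shift so the minimum sits at zero

plt.plot(p, mse, label="MSE")
plt.plot(p, ce, label="cross-entropy (shifted)")
plt.axvline(target, linestyle="--", color="grey")
plt.xlabel("prediction")
plt.ylabel("loss")
plt.legend()
plt.show()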
{ "source": [ "https://stats.stackexchange.com/questions/245448", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45389/" ] }
245,462
Looking at the frames of a video, we can see that many frames are almost identical. Is there any algorithm to identify these frames, so I can delete them all but one?
{ "source": [ "https://stats.stackexchange.com/questions/245462", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/132840/" ] }
245,502
In the mini-batch training of a neural network, I heard that an important practice is to shuffle the training data before every epoch. Can somebody explain why the shuffling at each epoch helps? From a Google search, I found the following answers: it helps the training converge fast; it prevents any bias during the training; it prevents the model from learning the order of the training data. But I have difficulty understanding why any of those effects is caused by the random shuffling. Can anybody provide an intuitive explanation?
Note: throughout this answer I refer to minimization of training loss and I do not discuss stopping criteria such as validation loss. The choice of stopping criteria does not affect the process/concepts described below. The process of training a neural network is to find the minimum value of a loss function $ℒ_X(W)$, where $W$ represents a matrix (or several matrices) of weights between neurons and $X$ represents the training dataset. I use a subscript for $X$ to indicate that our minimization of $ℒ$ occurs only over the weights $W$ (that is, we are looking for $W$ such that $ℒ$ is minimized) while $X$ is fixed. Now, if we assume that we have $P$ elements in $W$ (that is, there are $P$ weights in the network), $ℒ$ is a surface in a $P+1$-dimensional space. To give a visual analogue, imagine that we have only two neuron weights ($P=2$). Then $ℒ$ has an easy geometric interpretation: it is a surface in a 3-dimensional space. This arises from the fact that for any given matrices of weights $W$, the loss function can be evaluated on $X$ and that value becomes the elevation of the surface. But there is the problem of non-convexity; the surface I described will have numerous local minima, and therefore gradient descent algorithms are susceptible to becoming "stuck" in those minima while a deeper/lower/better solution may lie nearby. This is likely to occur if $X$ is unchanged over all training iterations, because the surface is fixed for a given $X$; all its features are static, including its various minima. A solution to this is mini-batch training combined with shuffling. By shuffling the rows and training on only a subset of them during a given iteration, $X$ changes with every iteration, and it is actually quite possible that no two iterations over the entire sequence of training iterations and epochs will be performed on the exact same $X$. The effect is that the solver can easily "bounce" out of a local minimum. Imagine that the solver is stuck in a local minimum at iteration $i$ with training mini-batch $X_i$. This local minimum corresponds to $ℒ$ evaluated at a particular value of weights; we'll call it $ℒ_{X_i}(W_i)$. On the next iteration the shape of our loss surface actually changes because we are using $X_{i+1}$, that is, $ℒ_{X_{i+1}}(W_i)$ may take on a very different value from $ℒ_{X_i}(W_i)$ and it is quite possible that it does not correspond to a local minimum! We can now compute a gradient update and continue with training. To be clear: the shape of $ℒ_{X_{i+1}}$ will -- in general -- be different from that of $ℒ_{X_{i}}$. Note that here I am referring to the loss function $ℒ$ evaluated on a training set $X$; it is a complete surface defined over all possible values of $W$, rather than the evaluation of that loss (which is just a scalar) for a specific value of $W$. Note also that if mini-batches are used without shuffling there is still a degree of "diversification" of loss surfaces, but there will be a finite (and relatively small) number of unique error surfaces seen by the solver (specifically, it will see the same exact set of mini-batches -- and therefore loss surfaces -- during each epoch). One thing I deliberately avoided was a discussion of mini-batch sizes, because there are a million opinions on this and it has significant practical implications (greater parallelization can be achieved with larger batches). However, I believe the following is worth mentioning. 
Because $ℒ$ is evaluated by computing a value for each row of $X$ (and summing or taking the average; i.e., a commutative operator) for a given set of weight matrices $W$, the arrangement of the rows of $X$ has no effect when using full-batch gradient descent (that is, when each batch is the full $X$, and iterations and epochs are the same thing).
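As a concrete (if toy) illustration of the idea, here is a minimal NumPy sketch of per-epoch shuffling; the data, batch size and the placeholder gradient step are all my own assumptions. Reshuffling the index before each epoch means the solver almost never sees the same sequence of mini-batch loss surfaces twice:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # toy training inputs
y = rng.normal(size=1000)                # toy targets
batch_size = 32

def minibatches(X, y, batch_size, shuffle=True):
    idx = np.arange(len(X))
    if shuffle:
        rng.shuffle(idx)                 # a fresh permutation for every epoch
    for start in range(0, len(X), batch_size):
        sl = idx[start:start + batch_size]
        yield X[sl], y[sl]

for epoch in range(3):
    for X_batch, y_batch in minibatches(X, y, batch_size):
        pass                             # a gradient update on (X_batch, y_batch) would go here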
{ "source": [ "https://stats.stackexchange.com/questions/245502", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27783/" ] }
245,866
Is "hour of the day" where the value can be 0, 1, 2, ..., 23 a categorical variable? I would be tempted to say no, since 5, for example, is 'closer' to 4 or 6 than it is to 3 or 7. On the other hand, there is the discontinuity between 23 and 0. So is it generally considered categorical or not? Note that 'hour' is one of the independent variables, not the variable I'm trying to predict.
Depending on what you want to model, hours (and many other attributes like seasons) are actually ordinal cyclic variables. In the case of seasons you can consider them to be more or less categorical, and in the case of hours you can model them as continuous as well. However, using hours in your model in a form that does not take care of the cyclicity will not be fruitful. Instead, try to come up with some kind of transformation. With hours you could use a trigonometric approach:

xhr = sin(2*pi*hr/24)
yhr = cos(2*pi*hr/24)

You would then use xhr and yhr for modelling instead of the raw hour. See this post for example: Use of circular predictors in linear regression .
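A quick NumPy sketch (my own illustration, not from the answer) shows why this encoding respects the cycle: in the (xhr, yhr) plane, 23:00 sits as close to 00:00 as 11:00 does to 12:00, whereas the raw hour values 23 and 0 are 23 apart:

import numpy as np

hr = np.arange(24)                        # hours 0..23
xhr = np.sin(2 * np.pi * hr / 24)
yhr = np.cos(2 * np.pi * hr / 24)

def dist(h1, h2):
    """Euclidean distance between two hours in the transformed plane."""
    return np.hypot(xhr[h1] - xhr[h2], yhr[h1] - yhr[h2])

print(dist(23, 0))    # ~0.26: adjacent hours across midnight
print(dist(11, 12))   # ~0.26: the same distance as any other adjacent pair
print(dist(0, 12))    # 2.0: opposite sides of the clock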
{ "source": [ "https://stats.stackexchange.com/questions/245866", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1618/" ] }
246,047
I'm slightly confused about whether an independent variable (also called a predictor or feature) in a statistical model, for example the $X$ in the linear regression $Y=\beta_0+\beta_1 X$, is a random variable.
There are two common formulations of linear regression. To focus on the concepts, I will abstract them somewhat. The mathematical description is a little more involved than the English description, so let's begin with the latter: Linear regression is a model in which a response $Y$ is assumed to be random with a distribution determined by regressors $X$ via a linear map $\beta(X)$ and, possibly, by other parameters $\theta$ . In most cases the set of possible distributions is a location family with parameters $\alpha$ and $\theta$ and $\beta(X)$ gives the parameter $\alpha$ . The archetypical example is ordinary regression in which the set of distributions is the Normal family $\mathcal{N}(\mu, \sigma)$ and $\mu=\beta(X)$ is a linear function of the regressors. Because I have not yet described this mathematically, it's still an open question what kinds of mathematical objects $X$ , $Y$ , $\beta$ , and $\theta$ refer to--and I believe that is the main issue in this thread. Although one can make various (equivalent) choices, most will be equivalent to, or special cases, of the following description. Fixed regressors. The regressors are represented as real vectors $X\in\mathbb{R}^p$ . The response is a random variable $Y:\Omega\to\mathbb{R}$ (where $\Omega$ is endowed with a sigma field and probability). The model is a function $f:\mathbb{R}\times\Theta\to M^d$ (or, if you like, a set of functions $\mathbb{R}\to M^d$ parameterized by $\Theta$ ). $M^d$ is a finite dimensional topological (usually second differentiable) submanifold (or submanifold-with-boundary) of dimension $d$ of the space of probability distributions. $f$ is usually taken to be continuous (or sufficiently differentiable). $\Theta\subset\mathbb{R}^{d-1}$ are the "nuisance parameters." It is supposed that the distribution of $Y$ is $f(\beta(X), \theta)$ for some unknown dual vector $\beta\in\mathbb{R}^{p*}$ (the "regression coefficients") and unknown $\theta\in\Theta$ . We may write this $$Y \sim f(\beta(X), \theta).$$ Random regressors. The regressors and response are a $p+1$ dimensional vector-valued random variable $Z = (X,Y): \Omega^\prime \to \mathbb{R}^p \times \mathbb{R}$ . The model $f$ is the same kind of object as before, but now it gives the conditional probability $$ Y|X \sim f(\beta(X), \theta).$$ The mathematical description is useless without some prescription telling how it is intended to be applied to data. In the fixed regressor case we conceive of $X$ as being specified by the experimenter. Thus it might help to view $\Omega$ as a product $\mathbb{R}^p\times \Omega^\prime$ endowed with a product sigma algebra. The experimenter determines $X$ and nature determines (some unknown, abstract) $\omega\in\Omega^\prime$ . In the random regressor case, nature determines $\omega\in\Omega^\prime$ , the $X$ -component of the random variable $\pi_X(Z(\omega))$ determines $X$ (which is "observed"), and we now have an ordered pair $(X(\omega), \omega)) \in \Omega$ exactly as in the fixed regressor case. The archetypical example of multiple linear regression (which I will express using standard notation for the objects rather than this more general one) is that $$f(\beta(X), \sigma)=\mathcal{N}(\beta(x), \sigma)$$ for some constant $\sigma \in \Theta = \mathbb{R}^{+}$ . As $x$ varies throughout $\mathbb{R}^p$ , its image differentiably traces out a one-dimensional subset--a curve --in the two-dimensional manifold of Normal distributions. 
When--in any fashion whatsoever-- $\beta$ is estimated as $\hat\beta$ and $\sigma$ as $\hat\sigma$ , the value of $\hat\beta(x)$ is the predicted value of $Y$ associated with $x$ --whether $x$ is controlled by the experimenter (case 1) or is only observed (case 2). If we either set a value (case 1) or observe a realization (case 2) $x$ of $X$ , then the response $Y$ associated with that $X$ is a random variable whose distribution is $\mathcal{N}(\beta(x), \sigma)$ , which is unknown but estimated to be $\mathcal{N}(\hat\beta(x), \hat\sigma)$ .
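To make the archetypical Normal case tangible, here is a small simulation sketch (all numbers are arbitrary illustrations, not part of the answer): whether $X$ is chosen by the experimenter (case 1) or observed as a draw from nature (case 2), the fitting and prediction steps look the same, and the predicted response at a new $x$ is the estimated $\mathcal{N}(\hat\beta(x), \hat\sigma)$:

import numpy as np

rng = np.random.default_rng(0)
beta_true, sigma_true = np.array([2.0, -1.0]), 0.5   # unknown in practice

X_fixed = np.column_stack([np.ones(100), np.linspace(0, 1, 100)])    # case 1: experimenter sets x
X_random = np.column_stack([np.ones(100), rng.uniform(0, 1, 100)])   # case 2: x is merely observed

for name, X in [("fixed", X_fixed), ("random", X_random)]:
    Y = X @ beta_true + rng.normal(0, sigma_true, size=len(X))       # Y | X ~ N(beta(X), sigma)
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta_hat
    sigma_hat = np.sqrt(resid @ resid / (len(X) - X.shape[1]))
    x_new = np.array([1.0, 0.3])
    print(name, "regressors, predicted distribution at x = 0.3:",
          f"N({x_new @ beta_hat:.3f}, {sigma_hat:.3f})")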
{ "source": [ "https://stats.stackexchange.com/questions/246047", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/138575/" ] }
246,061
In many clustering techniques, the values of initial centroids (center) play an important role to draw the results of the clustering process. Could someone please tell me what are exactly the meaning of initial centroids and what are the advantages to pre-define the initial centroids at the first step in clustering process?
{ "source": [ "https://stats.stackexchange.com/questions/246061", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/131686/" ] }
246,512
The AlexNet architecture uses zero-padding as shown in the pic. However, there is no explanation in the paper of why this padding is introduced. The Stanford CS 231n course teaches that we use padding to preserve the spatial size. I am curious whether that is the only reason for zero padding. Can anyone explain the rationale behind zero padding? Thanks! The reason I am asking: let's say I don't need to preserve the spatial size. Can I just remove padding then without loss of performance? I know it results in a very fast decrease in spatial size as we go to deeper layers, but I can trade that off by removing pooling layers as well.
There are a couple of reasons padding is important: It's easier to design networks if we preserve the height and width and don't have to worry too much about tensor dimensions when going from one layer to another, because dimensions will just "work". It allows us to design deeper networks. Without padding, the volume size would shrink too quickly. Padding actually improves performance by keeping information at the borders. Quote from the Stanford lectures: "In addition to the aforementioned benefit of keeping the spatial sizes constant after CONV, doing this actually improves performance. If the CONV layers were to not zero-pad the inputs and only perform valid convolutions, then the size of the volumes would reduce by a small amount after each CONV, and the information at the borders would be “washed away” too quickly." - source As @dontloo already said, new network architectures need to concatenate convolutional layers with 1x1, 3x3 and 5x5 filters, and that wouldn't be possible if they didn't use padding because the dimensions wouldn't match. Check this image of the inception module to understand better why padding is useful here.
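A back-of-the-envelope calculation (my own illustration; the 32x32 input and 3x3 filters are arbitrary choices) makes the second point concrete. With "valid" convolutions the spatial size shrinks by 2 at every 3x3 layer, so a 32x32 input is down to 12x12 after ten layers and could not support a sixteenth layer at all, while with padding of 1 it stays at 32x32 indefinitely:

def conv_out(n, f=3, p=0, s=1):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

n_valid, n_same = 32, 32
for layer in range(1, 11):
    n_valid = conv_out(n_valid, f=3, p=0)   # no padding: shrinks by 2 each time
    n_same = conv_out(n_same, f=3, p=1)     # padding of 1: size preserved
    print(f"layer {layer:2d}: valid -> {n_valid}x{n_valid}, same -> {n_same}x{n_same}")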
{ "source": [ "https://stats.stackexchange.com/questions/246512", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/96608/" ] }
246,726
I'm learning about bootstrapping as a means of estimating the variance of a sample statistic. I have one basic doubt. Quoting from http://web.stanford.edu/class/psych252/tutorials/doBootstrapPrimer.pdf : • How many observations should we resample? A good suggestion is the original sample size. How can we resample as many observations as in the original sample? If I have a sample size of 100, and I'm trying to estimate the variance of the mean. How can I obtain multiple bootstrap samples of size 100 from a total sample size of 100? Only 1 bootstrap sample would be possible in this case which would be equivalent to the original sample right? I'm obviously misunderstanding something very basic. I understand that the number of ideal bootstrap samples is always infinite, and to determine the number of bootstrap samples necessary for my data I'd have to test for convergence keeping my required precision in mind. But I'm really confused about what should be the size of each individual bootstrap sample.
The bootstrap is conducted by sampling with replacement. It seems that the term "with replacement" is unclear to you. As noted by whuber, an illustration of sampling with replacement is given on p. 3 of the paper you refer to (reproduced below). (source: http://web.stanford.edu/class/psych252/tutorials/doBootstrapPrimer.pdf ) The general idea of sampling with replacement is that any case can be sampled multiple times (the green marble on the first image above; the blue and violet marbles on the last picture). If you want to picture this process, think of a bowl filled with colorful marbles. Say that you want to draw some number of marbles from this bowl. If you sampled without replacement, then you would simply be taking the marbles out of the bowl and putting the sampled ones aside. If you sampled with replacement, then you would be sampling the marbles one by one, by taking a single marble out of the bowl, writing down its color in your notebook and then returning it to the bowl. So when sampling with replacement the same marble can be sampled multiple times. When sampling without replacement, you can sample only $n$ marbles out of a bowl containing $n$ marbles, while in the case of sampling with replacement you can sample any number of marbles (even greater than $n$) from the finite population. If you sampled $n$ out of $n$ marbles without replacement you would end up with exactly the same sample, just in shuffled order. If you sampled $n$ out of $n$ marbles with replacement, each time you could obtain a different combination of marbles. There are $n \choose k$ ways of sampling $k$ cases without replacement out of a population of size $n$ and $n+k-1 \choose k$ ways of sampling with replacement. If you want to read more about the math behind it, you can check the 2.1. Combinatorics chapter of the Introduction to Probability online handbook by Hossein Pishro-Nik. There is also a handy cheatsheet on the WolframMathWorld page.
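Translated into code, "resample as many observations as the original sample" just means drawing $n$ indices with replacement from your $n$ data points, many times over. Here is a minimal NumPy sketch for estimating the variance of the sample mean (the toy exponential data and the number of replicates are my own assumptions):

import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=100)     # stand-in for the observed sample of size 100

n_boot = 2000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Same size as the original sample, drawn WITH replacement.
    resample = rng.choice(sample, size=len(sample), replace=True)
    boot_means[b] = resample.mean()

print("bootstrap SE of the mean:", boot_means.std(ddof=1))
print("textbook SE, s/sqrt(n)  :", sample.std(ddof=1) / np.sqrt(len(sample)))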
{ "source": [ "https://stats.stackexchange.com/questions/246726", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54462/" ] }
247,094
How can I manually generate a random number from a given distribution, as for instance, 10 realisations from the standard normal distribution?
If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a geometric distribution we can count how many coin tosses are needed before we obtain heads. To simulate a binomial distribution, we can toss our coin $n$ times (or simply toss $n$ coins) and count the heads. The "quincunx" or "bean machine" or "Galton box" is a more kinetic alternative — why not set one into action and see for yourself ? It seems there is no such thing as a "weighted coin" but if we wish to vary the probability parameter of our Bernoulli or binomial variable to values other than $p = 0.5$ , the needle of Georges-Louis Leclerc, Comte de Buffon will allow us to do so. To simulate the discrete uniform distribution on $\{1, 2, 3, 4, 5, 6\}$ we roll a six-sided die. Fans of role-playing games will have encountered more exotic dice , for example tetrahedral dice to sample uniformly from $\{1,2,3,4\}$ , while with a spinner or roulette wheel one can go further still. ( Image credit ) Would we have to be mad to generate random numbers in this manner today, when it is just one command away on a computer console — or, if we have a suitable table of random numbers available, one foray to the dustier corners of the bookshelf? Well perhaps, though there is something pleasingly tactile about a physical experiment. But for people working before the Computer Age, indeed before widely available large-scale random number tables (of which more later), simulating random variables manually had more practical importance. When Buffon investigated the St. Petersburg paradox — the famous coin-tossing game where the amount the player wins doubles every time a heads is tossed, the player loses upon the first tails, and whose expected pay-off is counter-intuitively infinite — he needed to simulate the geometric distribution with $p=0.5$ . To do so, it seems he hired a child to toss a coin to simulate 2048 plays of the St. Petersburg game, recording how many tosses before the game ended. This simulated geometric distribution is reproduced in Stigler (1991) : Tosses Frequency 1 1061 2 494 3 232 4 137 5 56 6 29 7 25 8 8 9 6 In the same essay where he published this empirical investigation into the St. Petersburg paradox, Buffon also introduced the famous " Buffon's needle ". If a plane is divided into strips by parallel lines a distance $d$ apart, and a needle of length $l \leq d$ is dropped onto it, the probability the needle crosses one of the lines is $\frac{2l}{\pi d}$ . Buffon's needle can, therefore, be used to simulate a random variable $X \sim \text{Bernoulli}(\frac{2l}{\pi d})$ or $X \sim \text{Binomial}(n,\frac{2l}{\pi d})$ , and we can adjust the probability of success by altering the lengths of our needles or (perhaps more conveniently) the distance at which we rule the lines. An alternative use of Buffon's needles is as a terrifically inefficient way to find a probabilistic approximation for $\pi$ . The image ( credit ) shows 17 matchsticks, of which 11 cross a line. When the distance between the ruled lines is set equal to the length of the matchstick, as here, the expected proportion of crossing matchsticks is $\frac{2}{\pi}$ and hence we can estimate $\hat \pi$ as twice the reciprocal of the observed fraction: here we obtain $\hat \pi = 2 \cdot \frac{17}{11} \approx 3.1$ . 
In 1901 Mario Lazzarini claimed to have performed the experiment using 2.5 cm needles with lines 3 cm apart, and after 3408 tosses obtained $\hat \pi = \frac{355}{113}$ . This is a well-known rational to $\pi$ , accurate to six decimal places. Badger (1994) provides convincing evidence that this was fraudulent , not least that to be 95% confident of six decimal places of accuracy using Lazzarini's apparatus, a patience-sapping 134 trillion needles must be thrown! Certainly Buffon's needle is more useful as a random number generator than it is as a method for estimating $\pi$ . Our generators so far have been disappointingly discrete. What if we want to simulate a normal distribution? One option is to obtain random digits and use them to form good discrete approximations to a uniform distribution on $[0,1]$ , then perform some calculations to transform these into random normal deviates. A spinner or roulette wheel could give decimal digits from zero to nine; a tossed coin can generate binary digits; if our arithmetic skills can cope with a funkier base, even a standard set of dice would do — or we could use a die to generate binary digits via odd/even scores. Other answers have covered this kind of transformation-based approach in more detail; I defer any further discussion of it until the end. By the late nineteenth century the utility of the normal distribution was well-known, and so there were statisticians keen to simulate random normal deviates. Needless to say, lengthy hand calculations would not have been suitable except to set up the simulating process in the first place. Once that was established, the generation of the random numbers had to be relatively quick and easy. Stigler (1991) lists the methods employed by three statisticians of this era. All were researching smoothing techniques: random normal deviates were of obvious interest, e.g. to simulate measurement error that needed to be smoothed over. The remarkable American statistician Erastus Lyman De Forest was interested in smoothing life tables, and encountered a problem that required the simulation of the absolute values of normal deviates. In what will prove a running theme, De Forest was really sampling from a half-normal distribution . Moreover, rather than using a standard deviation of one (the $Z \sim N(0, 1^2)$ we are used to calling "standard"), De Forest wanted a "probable error" (median deviation) of one. This was the form given in the table of "Probability of Errors" in the appendices of "A Manual Of Spherical And Practical Astronomy, Volume II" by William Chauvenet . From this table, De Forest interpolated the quantiles of a half-normal distribution, from $p=0.005$ to $p=0.995$ , which he deemed to be "errors of equal frequency". Should you wish to simulate the normal distribution, following De Forest, you can print this table out and cut it up. De Forest (1876) wrote that the errors "have been inscribed upon 100 bits of card-board of equal size, which were shaken up in a box and all drawn out one by one". The astronomer and meteorologist Sir George Howard Darwin (son of the naturalist Charles) put a different spin on things, by developing what he called a "roulette" for generating random normal deviates. Darwin (1877) describes how: A circular piece of card was graduated radially, so that a graduation marked $x$ was $\frac{720}{\sqrt \pi} \int_0^x e^{-x^2} dx$ degrees distant from a fixed radius. The card was made to spin round its centre close to a fixed index. 
It was then spun a number of times, and on stopping it the number opposite the index was read off. [Darwin adds in a footnote: It is better to stop the disk when it is spinning so fast that the graduations are invisible, rather than to let it run its course.] From the nature of the graduation the numbers thus obtained will occur in exactly the same way as errors of observation occur in practice; but they have no signs of addition or subtraction prefixed. Then by tossing up a coin over and over again and calling heads $+$ and tails $-$ , the signs $+$ or $-$ are assigned by chance to this series of errors. "Index" should be read here as "pointer" or "indicator" (c.f. "index finger"). Stigler points out that Darwin, like De Forest, was using a half-normal cumulative distribution around the disk. Subsequently using a coin to attach a sign at random renders this a full normal distribution. Stigler notes that it is unclear how finely the scale was graduated, but presumes the instruction to manually arrest the disk mid-spin was "to diminish potential bias toward one section of the disk and to speed up the procedure". Sir Francis Galton , incidentally a half-cousin to Charles Darwin, has already been mentioned in connection with his quincunx. While this mechanically simulates a binomial distribution that, by the De Moivre–Laplace theorem bears a striking resemblance to the normal distribution (and is occasionally used as a teaching aid for that topic), Galton actually produced a far more elaborate scheme when he desired to sample from a normal distribution. Even more extraordinary than the unconventional examples at the top of this answer, Galton developed normally distributed dice — or more accurately, a set of dice that produce an excellent discrete approximation to a normal distribution with median deviation one. These dice, dating from 1890, are preserved in the Galton Collection at University College London. In an 1890 article in Nature Galton wrote that: As an instrument for selecting at random, I have found nothing superior to dice. It is most tedious to shuffle cards thoroughly between each successive draw, and the method of mixing and stirring up marked balls in a bag is more tedious still. A teetotum or some form of roulette is preferable to these, but dice are better than all. When they are shaken and tossed in a basket, they hurtle so variously against one another and against the ribs of the basket-work that they tumble wildly about, and their positions at the outset afford no perceptible clue to what they will be after even a single good shake and toss. The chances afforded by a die are more various than are commonly supposed; there are 24 equal possibilities, and not only 6, because each face has four edges that may be utilized, as I shall show. It was important for Galton to be able to rapidly generate a sequence of normal deviates. After each roll Galton would line the dice up by touch alone, then record the scores along their front edges. He would initially roll several dice of type I, on whose edges were half-normal deviates, much like De Forest's cards but using 24 not 100 quantiles. For the largest deviates (actually marked as blanks on the type I dice) he would roll as many of the more sensitive type II dice (which showed large deviates only, at a finer graduation) as he needed to fill in the spaces in his sequence. 
To convert from half-normal to normal deviates, he would roll die III, which would allocate $+$ or $-$ signs to his sequence in blocks of three or four deviates at a time. The dice themselves were mahogany, of side $1 \frac 1 4$ inches, and pasted with thin white paper for the marking to be written on. Galton recommended to prepare three dice of type I, two of II and one of III. Raazesh Sainudiin's Laboratory for Mathematical Statistical Experiments includes a student project from the University of Canterbury, NZ, reproducing Galton's dice . The project includes empirical investigation from rolling the dice many times (including an empirical CDF that looks reassuringly "normal") and an adaptation of the dice scores so they follow the standard normal distribution. Using Galton's original scores, there is also a graph of the discretized normal distribution that the dice scores actually follow. On a grand scale, if you are prepared to stretch the "mechanical" to the electrical, note that RAND's epic A Million Random Digits with 100,000 Normal Deviates was based on a kind of electronic simulation of a roulette wheel. From the technical report (by George W. Brown, originally June 1949) we find: Thus motivated, the RAND people, with the assistance of Douglas Aircraft Company engineering personnel, designed an electro roulette wheel based on a variation of a proposal made by Cecil Hastings. For purposes of this talk a brief description will suffice. A random frequency pulse source was gated by a constant frequency pulse, about once a second, providing on the average about 100,000 pulses in one second. Pulse standardization circuits passed the pulses to a five place binary counter, so that in principle the machine is like a roulette wheel with 32 positions, making on the average about 3000 revolutions on each turn. A binary to decimal conversion was used, throwing away 12 of the 32 positions, and the resulting random digit was fed into an I.B.M. punch, yielding punched card tables of random digits. A detailed analysis of the randomness to be expected from such a machine was made by the designers and indicated that the machine should yield very high quality output. However, before you too are tempted to assemble an electro roulette wheel, it would be a good idea to read the rest of the report! It transpired that the scheme "leaned heavily on the assumption of ideal pulse standardization to overcome natural preferences among the counter positions; later experience showed that this assumption was the weak point, and much of the later fussing with the machine was concerned with troubles originating at this point". Detailed statistical analysis revealed some problems with the output: for instance $\chi^2$ tests of the frequencies of odd and even digits revealed that some batches had a slight imbalance. This was worse in some batches than others, suggesting that "the machine had been running down in the month since its tune up ... The indications are at this machine required excessive maintenance to keep it in tip-top shape". However, a statistical way of resolving these issues was found: At this point we had our original million digits, 20,000 I.B.M. cards with 50 digits to a card, with the small but perceptible odd-even bias disclosed by the statistical analysis. It was now decided to rerandomize the table, or at least alter it, by a little roulette playing with it, to remove the odd-even bias. We added (mod 10) the digits in each card, digit by digit, to the corresponding digits of the previous card. 
The derived table of one million digits was then subjected to the various standard tests, frequency tests, serial tests, poker tests, etc. These million digits have a clean bill of health and have been adopted as RAND's modern table of random digits. There was, of course, good reason to believe that the addition process would do some good. In a general way, the underlying mechanism is the limiting approach of sums of random variables modulo the unit interval in the rectangular distribution, in the same way that unrestricted sums of random variables approach normality. This method has been used by Horton and Smith, of the Interstate Commerce Commission, to obtain some good batches of apparently random numbers from larger batches of badly non-random numbers. Of course, this concerns generation of random decimal digits , but it easy to use these to produce random deviates sampled uniformly from $[0,1]$ , rounded to however many decimal places you saw fit to take digits. There are various lovely methods to generate deviates of other distributions from your uniform deviates, perhaps the most aesthetically pleasing of which is the ziggurat algorithm for probability distributions which are either monotone decreasing or unimodal symmetric, but conceptually the simplest and most widely applicable is the inverse CDF transform : given a deviate $u$ from the uniform distribution on $[0,1]$ , and if your desired distribution has CDF $F$ , then $F^{-1}(u)$ will be a random deviate from your distribution. If you are interested specifically in random normal deviates then computationally, the Box-Muller transform is more efficient than inverse transform sampling, the Marsaglia polar method is more efficient again, and the ziggurat ( image credit for the animation below ) even better. Some practical issues are discussed on this StackOverflow thread if you intend to implement one or more of these methods in code. References Badger, L. (1994). " Lazzarini's Lucky Approximation of π ". Mathematics Magazine . Mathematical Association of America. 67 (2): 83–91. Brown, G.W. " History of RAND's random digits—Summary ". in A.S. Householder, G.E. Forsythe, and H.H. Germond, eds., "Monte Carlo Method", National Bureau of Standards Applied Mathematics Series , 12 (Washington, D.C.: U.S. Government Printing Office, 1951): 31-32 $(*)$ Darwin, G. H. (1877). " On fallible measures of variable quantities, and on the treatment of meteorological observations. " Philosophical Magazine , 4 (22), 1–14 De Forest, E. L. (1876). Interpolation and adjustment of series . Tuttle, Morehouse and Taylor, New Haven, Conn. Galton, F. (1890). "Dice for statistical experiments". Nature , 42 , 13-14 Stigler, S. M. (1991). "Stochastic simulation in the nineteenth century". Statistical Science , 6 (1), 89-97. $(*)$ In the very same journal is von Neumann's highly-cited paper Various Techniques Used in Connection with Random Digits in which he considers the difficulties of generating random numbers for use in a computer. He rejects the idea of a physical device attached to a computer that generates random input on the fly, and considers whether some physical mechanism might be employed to generate random numbers which are then recorded for future use — essentially what RAND had done with their Million Digits . It also includes his famous quote about what we would describe as the difference between random and pseudo-random number generation: "Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. 
For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method."
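If the manually produced uniform deviates are eventually handed to a computer, the last step above is easy to sketch in code. Here is a minimal NumPy illustration (my own, with pseudo-random uniforms standing in for the mechanically generated ones) of the inverse-CDF transform and of the Box-Muller transform mentioned at the end:

import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF transform: if U ~ Uniform(0,1), then F^{-1}(U) has CDF F.
# Example: Exponential(1) has F(x) = 1 - exp(-x), so F^{-1}(u) = -log(1 - u).
u = rng.uniform(size=100_000)
exp_draws = -np.log(1.0 - u)
print(exp_draws.mean(), exp_draws.std())          # both should be close to 1

# Box-Muller: two independent uniforms give two independent standard normals.
u1 = rng.uniform(size=50_000)
u2 = rng.uniform(size=50_000)
r = np.sqrt(-2.0 * np.log(u1))
z1 = r * np.cos(2.0 * np.pi * u2)
z2 = r * np.sin(2.0 * np.pi * u2)
print(z1.mean(), z1.std(), z2.mean(), z2.std())   # close to 0 and 1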
{ "source": [ "https://stats.stackexchange.com/questions/247094", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/139449/" ] }
247,871
I've been thinking a lot about the "class imbalance problem" in machine/statistical learning lately, and am being drawn ever deeper into the feeling that I just don't understand what is going on. First let me (attempt to) define my terms: The class imbalance problem in machine/statistical learning is the observation that some binary classification(*) algorithms do not perform well when the proportion of 0 classes to 1 classes is very skewed. So, in the above, for example, if there were one-hundred $0$ classes for every single $1$ class, I would say the class imbalance is $1$ to $100$, or $1\%$. Most statements of the problem I have seen lack what I would think of as sufficient qualification (which models struggle, how much imbalance is a problem), and this is one source of my confusion. A survey of the standard texts in machine/statistical learning turns up little: The Elements of Statistical Learning and Introduction to Statistical Learning do not contain "class imbalance" in the index. Machine Learning for Predictive Data Analytics also does not contain "class imbalance" in the index. Murphy's Machine Learning: A Probabilistic Perspective does contain "class imbalance" in the index. The reference is to a section on SVM's, where I found the following tantalizing comment: It is worth remembering that all these difficulties, and the plethora of heuristics that have been proposed to fix them, fundamentally arise because SVM's do not model uncertainty using probabilities, so their output scores are not comparable across classes. This comment does jibe with my intuition and experience: at my previous job we would routinely fit logistic regressions and gradient boosted tree models (to minimize binomial log-likelihood) to unbalanced data (on the order of a $1\%$ class imbalance), with no obvious issues in performance. I have read (somewhere) that classification-tree-based models (trees themselves and random forests) also suffer from the class imbalance problem. This muddies the waters a little bit: trees do, in some sense, return probabilities, namely the voting record for the target class in each terminal node of the tree. So, to wrap up, what I'm really after is a conceptual understanding of the forces that lead to the class imbalance problem (if it exists). Is it something we do to ourselves with badly chosen algorithms and lazy default classification thresholds? Does it vanish if we always fit probability models that optimize proper scoring criteria? Said differently, is the cause simply a poor choice of loss function, i.e. evaluating the predictive power of a model based on hard classification rules and overall accuracy? If so, are models that do not optimize proper scoring rules then useless (or at least less useful)? (*) By classification I mean any statistical model fit to binary response data. I am not assuming that my goal is a hard assignment to one class or the other, though it may be.
An entry from the Encyclopedia of Machine Learning ( https://cling.csd.uwo.ca/papers/cost_sensitive.pdf ) helpfully explains that what gets called "the class imbalance problem" is better understood as three separate problems: assuming that an accuracy metric is appropriate when it is not assuming that the test distribution matches the training distribution when it does not assuming that you have enough minority class data when you do not The authors explain: The class imbalanced datasets occurs in many real-world applications where the class distributions of data are highly imbalanced. Again, without loss of generality, we assume that the minority or rare class is the positive class, and the majority class is the negative class. Often the minority class is very small, such as 1%of the dataset. If we apply most traditional (cost-insensitive) classifiers on the dataset, they will likely to predict everything as negative (the majority class). This was often regarded as a problem in learning from highly imbalanced datasets. However, as pointed out by (Provost, 2000), two fundamental assumptions are often made in the traditional cost-insensitive classifiers. The first is that the goal of the classifiers is to maximize the accuracy (or minimize the error rate); the second is that the class distribution of the training and test datasets is the same. Under these two assumptions, predicting everything as negative for a highly imbalanced dataset is often the right thing to do. (Drummond and Holte, 2005) show that it is usually very difficult to outperform this simple classifier in this situation. Thus, the imbalanced class problem becomes meaningful only if one or both of the two assumptions above are not true; that is, if the cost of different types of error (false positive and false negative in the binary classification) is not the same, or if the class distribution in the test data is different from that of the training data. The first case can be dealt with effectively using methods in cost-sensitive meta-learning. In the case when the misclassification cost is not equal, it is usually more expensive to misclassify a minority (positive) example into the majority (negative) class, than a majority example into the minority class (otherwise it is more plausible to predict everything as negative). That is, FN > FP. Thus, given the values of FN and FP, a variety of cost-sensitive meta-learning methods can be, and have been, used to solve the class imbalance problem (Ling and Li, 1998; Japkowicz and Stephen, 2002). If the values of FN and FP are not unknown explicitly, FN and FP can be assigned to be proportional to p(-):p(+) (Japkowicz and Stephen, 2002). In case the class distributions of training and test datasets are different (for example, if the training data is highly imbalanced but the test data is more balanced), an obvious approach is to sample the training data such that its class distribution is the same as the test data (by oversampling the minority class and/or undersampling the majority class)(Provost, 2000). Note that sometimes the number of examples of the minority class is too small for classifiers to learn adequately. This is the problem of insufficient (small) training data, different from that of the imbalanced datasets. Thus, as Murphy implies, there is nothing inherently problematic about using imbalanced classes, provided you avoid these three mistakes. 
Models that yield posterior probabilities make it easier to avoid error (1) than do discriminant models like SVM because they enable you to separate inference from decision-making. (See Bishop's section 1.5.4 Inference and Decision for further discussion of that last point.) Hope that helps.
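As a small illustration of that last point (separating inference from decision-making), here is a sketch that assumes we already have calibrated posterior probabilities and made-up misclassification costs; none of the numbers come from the quoted encyclopedia entry. Standard decision theory says to predict the positive class whenever the posterior probability exceeds FP cost / (FP cost + FN cost), which for a rare but expensive-to-miss class puts the threshold far below the default 0.5:

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for calibrated posterior probabilities of the rare positive class.
p_hat = rng.beta(0.5, 20.0, size=10_000)

c_fn, c_fp = 50.0, 1.0    # assumed costs: missing a positive is 50x worse than a false alarm

# Predict positive when the expected cost of saying "negative" (p * c_fn)
# exceeds the expected cost of saying "positive" ((1 - p) * c_fp).
threshold = c_fp / (c_fp + c_fn)
y_pred = (p_hat > threshold).astype(int)

print("decision threshold:", round(threshold, 4))        # ~0.0196, not 0.5
print("fraction predicted positive:", y_pred.mean())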
{ "source": [ "https://stats.stackexchange.com/questions/247871", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74500/" ] }
247,889
Situation: I have a data-set (15-20k) with two classes. I can train a classifier on both classes, but am only allowed to test/predict on one class. The data-set is not balanced (~1:4). Goal: I want to find out how much the classifier was able to learn from the data-set, and I am therefore interested in the predicted probabilities of that one class I can test on, or rather their "distribution". Problem: The TPR, for example, exists, but uses only the predicted labels (not the "probabilities"). With sets that are not well balanced and classifiers that are not calibrated, this does not seem optimal. Question: Is there a good metric available that takes the predicted "probabilities" (without calibration, we maybe shouldn't even speak of probabilities...) of only one class (+ true label) and returns a meaningful score? Or is it possible to calibrate the output of a classifier using only one class to test on (so that the predictions are more meaningful)?
An entry from the Encyclopedia of Machine Learning ( https://cling.csd.uwo.ca/papers/cost_sensitive.pdf ) helpfully explains that what gets called "the class imbalance problem" is better understood as three separate problems: assuming that an accuracy metric is appropriate when it is not assuming that the test distribution matches the training distribution when it does not assuming that you have enough minority class data when you do not The authors explain: The class imbalanced datasets occurs in many real-world applications where the class distributions of data are highly imbalanced. Again, without loss of generality, we assume that the minority or rare class is the positive class, and the majority class is the negative class. Often the minority class is very small, such as 1%of the dataset. If we apply most traditional (cost-insensitive) classifiers on the dataset, they will likely to predict everything as negative (the majority class). This was often regarded as a problem in learning from highly imbalanced datasets. However, as pointed out by (Provost, 2000), two fundamental assumptions are often made in the traditional cost-insensitive classifiers. The first is that the goal of the classifiers is to maximize the accuracy (or minimize the error rate); the second is that the class distribution of the training and test datasets is the same. Under these two assumptions, predicting everything as negative for a highly imbalanced dataset is often the right thing to do. (Drummond and Holte, 2005) show that it is usually very difficult to outperform this simple classifier in this situation. Thus, the imbalanced class problem becomes meaningful only if one or both of the two assumptions above are not true; that is, if the cost of different types of error (false positive and false negative in the binary classification) is not the same, or if the class distribution in the test data is different from that of the training data. The first case can be dealt with effectively using methods in cost-sensitive meta-learning. In the case when the misclassification cost is not equal, it is usually more expensive to misclassify a minority (positive) example into the majority (negative) class, than a majority example into the minority class (otherwise it is more plausible to predict everything as negative). That is, FN > FP. Thus, given the values of FN and FP, a variety of cost-sensitive meta-learning methods can be, and have been, used to solve the class imbalance problem (Ling and Li, 1998; Japkowicz and Stephen, 2002). If the values of FN and FP are not unknown explicitly, FN and FP can be assigned to be proportional to p(-):p(+) (Japkowicz and Stephen, 2002). In case the class distributions of training and test datasets are different (for example, if the training data is highly imbalanced but the test data is more balanced), an obvious approach is to sample the training data such that its class distribution is the same as the test data (by oversampling the minority class and/or undersampling the majority class)(Provost, 2000). Note that sometimes the number of examples of the minority class is too small for classifiers to learn adequately. This is the problem of insufficient (small) training data, different from that of the imbalanced datasets. Thus, as Murphy implies, there is nothing inherently problematic about using imbalanced classes, provided you avoid these three mistakes. 
Models that yield posterior probabilities make it easier to avoid error (1) than do discriminant models like SVM because they enable you to separate inference from decision-making. (See Bishop's section 1.5.4 Inference and Decision for further discussion of that last point.) Hope that helps.
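One practical way to act on that last point — separating inference from decision-making when misclassification costs are unequal, as discussed above — is to derive the decision threshold from the costs rather than defaulting to 0.5. The sketch below is only an illustration: the scores, labels and costs are invented placeholders, and the cost-minimising cutoff formula assumes the scores behave roughly like calibrated probabilities.

```r
# Illustrative only: cost-sensitive thresholding of predicted probabilities.
# 'p' are predicted probabilities of the positive class, 'y' the true labels
# (both simulated stand-ins here); the costs below are made-up placeholders.
set.seed(1)
p <- runif(200)                      # stand-in for classifier scores
y <- rbinom(200, 1, p)               # stand-in for true 0/1 labels

c_fp <- 1                            # cost of a false positive
c_fn <- 4                            # cost of a false negative (FN > FP)
threshold <- c_fp / (c_fp + c_fn)    # cost-minimising cutoff, here 0.2

pred <- as.integer(p >= threshold)
cost <- sum((pred == 1 & y == 0) * c_fp + (pred == 0 & y == 1) * c_fn)
cost / length(y)                     # average misclassification cost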
{ "source": [ "https://stats.stackexchange.com/questions/247889", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/112493/" ] }
247,893
A great newbie in statistics, I am. Begging for your help, I am. So, I have two samples. The first one contains 19 mean preference scores (based on a series of twelve 0s and 1s) and the second one contains 20 mean preference scores. Clearly, they do not follow a normal distribution and that is why I read that I should do a Wilcoxon test instead of Student test. OK. But R tells me that I have many ties for this test. Even if it is not an error message, I do not like warning messages. Not at all. So I was wondering if I could trust that results. And also, what do you think about changing the ties by some randomly chosen very very closed values? Like changing 0.7 by a random value chosen between 0.6999 and 0.7001? Can it do the trick? Here are my samples: Treatment MeanPrefScore Treatment MeanPrefScore Quick 0.5 Long 0.571428571 Quick 0.9 Long 0.777777778 Quick 0.916666667 Long 0.333333333 Quick 1 Long 0.666666667 Quick 0.714285714 Long 1 Quick 0.4 Long 1 Quick 0.888888889 Long 0.777777778 Quick 0.857142857 Long 0.857142857 Quick 1 Long 0.916666667 Quick 1 Long 1 Quick 1 Long 0.75 Quick 0.916666667 Long 0.916666667 Quick 0.5 Long 1 Quick 0.909090909 Long 0.909090909 Quick 0.571428571 Long 0.8 Quick 0.909090909 Long 0.75 Quick 0.8 Long 1 Quick 0.5 Long 0.5 Quick 0.545454545 Long 0.916666667 Quick 0.777777778 Here is what R tells me: Result1 <- wilcox.test(MeanPrefScore ~ Treatment, data = MeanPrefScore) Warning message: In wilcox.test.default(x = c(0.571428571, 0.777777778, 0.333333333, : cannot compute exact p-value with ties Result1 Wilcoxon rank sum test with continuity correction data: MeanPrefScore by Treatment W = 209, p-value = 0.6002 alternative hypothesis: true location shift is not equal to 0 So any help, any explanation for a super simple quick to do test, would be infinitely appreciated!
{ "source": [ "https://stats.stackexchange.com/questions/247893", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
248,113
My question flows out of this comment on an Andrew Gelman's blog post in which he advocates the use of 50% confidence intervals instead of 95% confidence intervals, although not on the grounds that they are more robustly estimated: I prefer 50% to 95% intervals for 3 reasons: Computational stability, More intuitive evaluation (half the 50% intervals should contain the true value), A sense that in aplications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. The commenter's idea seems to be that problems with the assumptions underlying the construction of the confidence interval will have more an impact if it's a 95% CI than if it's a 50% CI. However, he doesn't really explain why. [...] as you go to larger intervals, you become more sensitive in general to details or assumptions of your model. For example, you would never believe that you had correctly identified the 99.9995% interval. Or at least that’s my intuition. If it’s right, it argues that 50-percent should be better estimated than 95-percent. Or maybe “more robustly” estimated, since it is less sensitive to assumptions about the noise, perhaps? Is it true? Why/why not?
This answer analyzes the meaning of the quotation and offers the results of a simulation study to illustrate it and help understand what it might be trying to say. The study can easily be extended by anybody (with rudimentary R skills) to explore other confidence interval procedures and other models. Two interesting issues emerged in this work. One concerns how to evaluate the accuracy of a confidence interval procedure. The impression one gets of robustness depends on that. I display two different accuracy measures so you can compare them. The other issue is that although a confidence interval procedure with low confidence may be robust, the corresponding confidence limits might not be robust at all. Intervals tend to work well because the errors they make at one end often counterbalance the errors they make at the other. As a practical matter, you can be pretty sure that around half of your $50\%$ confidence intervals are covering their parameters, but the actual parameter might consistently lie near one particular end of each interval, depending on how reality departs from your model assumptions. Robust has a standard meaning in statistics: Robustness generally implies insensitivity to departures from assumptions surrounding an underlying probabilistic model. (Hoaglin, Mosteller, and Tukey, Understanding Robust and Exploratory Data Analysis . J. Wiley (1983), p. 2.) This is consistent with the quotation in the question. To understand the quotation we still need to know the intended purpose of a confidence interval. To this end, let's review what Gelman wrote. I prefer 50% to 95% intervals for 3 reasons: Computational stability, More intuitive evaluation (half the 50% intervals should contain the true value), A sense that in applications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. Since getting a sense of predicted values is not what confidence intervals (CIs) are intended for, I will focus on getting a sense of parameter values, which is what CIs do. Let's call these the "target" values. Whence, by definition, a CI is intended to cover its target with a specified probability (its confidence level). Achieving intended coverage rates is the minimum criterion for evaluating the quality of any CI procedure. (Additionally, we might be interested in typical CI widths. To keep the post to a reasonable length, I will ignore this issue.) These considerations invite us to study how much a confidence interval calculation could mislead us concerning the target parameter value. The quotation could be read as suggesting that lower-confidence CIs might retain their coverage even when the data are generated by a process different than the model. That's something we can test. The procedure is: Adopt a probability model that includes at least one parameter. The classic one is sampling from a Normal distribution of unknown mean and variance. Select a CI procedure for one or more of the model's parameters. An excellent one constructs the CI from the sample mean and sample standard deviation, multiplying the latter by a factor given by a Student t distribution. Apply that procedure to various different models--departing not too much from the adopted one--to assess its coverage over a range of confidence levels. As an example, I have done just that. I have allowed the underlying distribution to vary across a wide range, from almost Bernoulli, to Uniform, to Normal, to Exponential, and all the way to Lognormal. 
These include symmetric distributions (the first three) and strongly skewed ones (the last two). For each distribution I generated 50,000 samples of size 12. For each sample I constructed two-sided CIs of confidence levels between $50\%$ and $99.8\%$, which covers most applications. An interesting issue now arises: How should we measure how well (or how badly) a CI procedure is performing? A common method simply evaluates the difference between the actual coverage and the confidence level. This can look suspiciously good for high confidence levels, though. For instance, if you are trying to achieve 99.9% confidence but you get only 99% coverage, the raw difference is a mere 0.9%. However, that means your procedure fails to cover the target ten times more often than it should! For this reason, a more informative way of comparing coverages ought to use something like odds ratios. I use differences of logits, which are the logarithms of odds ratios. Specifically, when the desired confidence level is $\alpha$ and the actual coverage is $p$, then $$\log\left(\frac{p}{1-p}\right) - \log\left(\frac{\alpha}{1-\alpha}\right)$$ nicely captures the difference. When it is zero, the coverage is exactly the value intended. When it is negative, the coverage is too low--which means the CI is too optimistic and underestimates the uncertainty. The question, then, is how do these error rates vary with confidence level as the underlying model is perturbed? We can answer it by plotting the simulation results. These plots quantify how "unrealistic" the "near-certainty" of a CI might be in this archetypal application. The graphics show the same results, but the one at the left displays the values on logit scales while the one at the right uses raw scales. The Beta distribution is a Beta$(1/30,1/30)$ (which is practically a Bernoulli distribution). The lognormal distribution is the exponential of the standard Normal distribution. The normal distribution is included to verify that this CI procedure really does attain its intended coverage and to reveal how much variation to expect from the finite simulation size. (Indeed, the graphs for the normal distribution are comfortably close to zero, showing no significant deviations.) It is clear that on the logit scale, the coverages grow more divergent as the confidence level increases. There are some interesting exceptions, though. If we are unconcerned with perturbations of the model that introduce skewness or long tails, then we can ignore the exponential and lognormal and focus on the rest. Their behavior is erratic until $\alpha$ exceeds $95\%$ or so (a logit of $3$), at which point the divergence has set in. This little study brings some concreteness to Gelman's claim and illustrates some of the phenomena he might have had in mind. In particular, when we are using a CI procedure with a low confidence level, such as $\alpha=50\%$, then even when the underlying model is strongly perturbed, it looks like the coverage will still be close to $50\%$: our feeling that such a CI will be correct about half the time and incorrect the other half is borne out. That is robust . If instead we are hoping to be right, say, $95\%$ of the time, which means we really want to be wrong only $5\%$ of the time, then we should be prepared for our error rate to be much greater in case the world doesn't work quite the way our model supposes. Incidentally, this property of $50\%$ CIs holds in large part because we are studying symmetric confidence intervals . 
For the skewed distributions, the individual confidence limits can be terrible (and not robust at all), but their errors often cancel out. Typically one tail is short and the other long, leading to over-coverage at one end and under-coverage at the other. I believe that $50\%$ confidence limits will not be anywhere near as robust as the corresponding intervals. This is the R code that produced the plots. It is readily modified to study other distributions, other ranges of confidence, and other CI procedures.

```r
#
# Zero-mean distributions.
#
distributions <- list(Beta=function(n) rbeta(n, 1/30, 1/30) - 1/2,
                      Uniform=function(n) runif(n, -1, 1),
                      Normal=rnorm,
                      #Mixture=function(n) rnorm(n, -2) + rnorm(n, 2),
                      Exponential=function(n) rexp(n) - 1,
                      Lognormal=function(n) exp(rnorm(n, -1/2)) - 1
)
n.sample <- 12
n.sim <- 5e4
alpha.logit <- seq(0, 6, length.out=21); alpha <- signif(1 / (1 + exp(-alpha.logit)), 3)
#
# Normal CI.
#
CI <- function(x, Z=outer(c(1,-1), qt((1-alpha)/2, n.sample-1)))
  mean(x) + Z * sd(x) / sqrt(length(x))
#
# The simulation.
#
#set.seed(17)
alpha.s <- paste0("alpha=", alpha)
sim <- lapply(distributions, function(dist) {
  x <- matrix(dist(n.sim*n.sample), n.sample)
  x.ci <- array(apply(x, 2, CI), c(2, length(alpha), n.sim),
                dimnames=list(Endpoint=c("Lower", "Upper"), Alpha=alpha.s, NULL))
  covers <- x.ci["Lower",,] * x.ci["Upper",,] <= 0
  rowMeans(covers)
})
(sim)
#
# The plots.
#
logit <- function(p) log(p/(1-p))
colors <- hsv((1:length(sim)-1)/length(sim), 0.8, 0.7)
par(mfrow=c(1,2))
plot(range(alpha.logit), c(-2,1), type="n",
     main="Confidence Interval Accuracies (Logit Scales)", cex.main=0.8,
     xlab="Logit(alpha)", ylab="Logit(coverage) - Logit(alpha)")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
  coverage <- sim[[i]]
  lines(alpha.logit, logit(coverage) - alpha.logit, col=colors[i], lwd=2)
}
plot(range(alpha), c(-0.2, 0.05), type="n",
     main="Raw Confidence Interval Accuracies", cex.main=0.8,
     xlab="alpha", ylab="coverage-alpha")
abline(h=0, col="Gray", lwd=2)
legend("bottomleft", names(sim), col=colors, lwd=2, bty="n", cex=0.8)
for(i in 1:length(sim)) {
  coverage <- sim[[i]]
  lines(alpha, coverage - alpha, col=colors[i], lwd=2)
}
```
{ "source": [ "https://stats.stackexchange.com/questions/248113", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
249,015
I have just carried out an analysis of my data using logistic regression; however, I am also required to include a descriptive statistics part in my report. I honestly don't see the point of this and was hoping that someone might be able to explain why it is necessary. For example, if I plot a histogram of one of my independent continuous variables and it shows normality or skewness, how will this add any value to the report? My data consist of a dependent variable (true or false for getting a job), and the independent variables are grades in the mid-term, grades in the final exams, and male or female.
In my field, the descriptive part of the report is extremely important because it sets the context for the generalisability of the results. For example, a researcher wishes to identify the predictors of traumatic brain injury following motorcycle accidents in a sample from a hospital. Her dependent variable is binary and she had a series of independent variables. Multivariable logistic regression allowed her to produce the following findings:

no helmet use: adjusted OR = 4.5 (95% CI 3.6, 5.5) compared to helmet use
all other variables were not included in the final model

To be clear, there were no issues with the modelling. We focus on the value that the descriptive statistics can add. Without the descriptive statistics, a reader cannot put these findings in perspective. Why? Let me show you the descriptive statistics:

age, years, mean (SD): 54 (2)
males, freq (%): 490 (98)
blood alcohol level, %, mean (SD): 0.10 (0.01)
...

You can see from the above that her sample consisted of older, intoxicated males. With this information the reader is able to say what, if anything, these results can say about injuries in young males, in non-intoxicated riders, or in female riders. Please don't ignore descriptive statistics.
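For completeness, here is one way such a summary could be produced in base R. This is only a sketch: the data frame `riders` and its columns are hypothetical and simulated merely to mimic the example above.

```r
# A minimal base-R sketch of the kind of descriptive table shown above.
# 'riders' is a hypothetical, simulated data frame.
set.seed(1)
riders <- data.frame(age  = rnorm(500, 54, 2),
                     male = rbinom(500, 1, 0.98),
                     bac  = rnorm(500, 0.10, 0.01))

with(riders, {
  cat(sprintf("age, years, mean (SD): %.0f (%.0f)\n", mean(age), sd(age)))
  cat(sprintf("males, freq (%%): %d (%.0f)\n", sum(male), 100 * mean(male)))
  cat(sprintf("blood alcohol level, %%, mean (SD): %.2f (%.2f)\n",
              mean(bac), sd(bac)))
})
```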
{ "source": [ "https://stats.stackexchange.com/questions/249015", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45001/" ] }
249,023
Suppose I want to build a regression model where sales in billions are the dependent variable and my independent variables take very low values, for example rainy days (the highest number is 15). Is there any problem if I run the regression on the original data, or should I apply some transformation to make my variables comparable? If so, which transformation would you suggest — is a logarithmic transformation of the data sensible here? I tried to find a similar discussion but could not.
{ "source": [ "https://stats.stackexchange.com/questions/249023", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/140704/" ] }
249,471
Genetic algorithms are one form of optimization method. Often stochastic gradient descent and its derivatives are the best choice for function optimization, but genetic algorithms are still sometimes used. For example, the antenna of NASA's ST5 spacecraft was created with a genetic algorithm: When are genetic optimization methods a better choice than more common gradient descent methods?
Genetic algorithms (GAs) are a family of heuristics which are empirically good at providing a decent answer in many cases, although they are rarely the best option for a given domain. You mention derivative-based algorithms, but even in the absence of derivatives there are plenty of derivative-free optimization algorithms that perform far better than GAs. See this and this answer for some ideas. What many standard optimization algorithms have in common (even derivative-free methods) is the assumption that the underlying space is a smooth manifold (perhaps with a few discrete dimensions) and that the function to optimize is somewhat well-behaved. However, not all functions are defined on a smooth manifold. Sometimes you want to optimize over a graph or other discrete structures (combinatorial optimization) -- here there are dedicated algorithms, but GAs would also work. The more you move towards functions defined over complex, discrete structures, the more useful GAs can be, especially if you can find a representation in which the genetic operators work at their best (which requires a lot of hand-tuning and domain knowledge). Of course, the future might lead us to forget GAs altogether, as we develop methods to map discrete spaces to continuous spaces and use the optimization machinery we have on the continuous representation.
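To make the contrast with gradient-based methods concrete, here is a deliberately tiny genetic algorithm for a purely discrete problem (maximising the number of 1s in a bit string), where no gradient exists. It is a toy sketch with arbitrary settings, not a recommendation of any particular GA implementation or parameter choice.

```r
# Toy genetic algorithm for the "one-max" problem: maximise the sum of bits.
# Population size, mutation rate, etc. are arbitrary illustrative choices.
set.seed(42)
n_bits <- 30; pop_size <- 40; generations <- 60; p_mut <- 0.02
fitness <- function(x) sum(x)

pop <- matrix(rbinom(pop_size * n_bits, 1, 0.5), nrow = pop_size)
for (g in 1:generations) {
  fit <- apply(pop, 1, fitness)
  # tournament selection: pick the better of two random parents
  pick <- function() {
    i <- sample(pop_size, 2)
    pop[i[which.max(fit[i])], ]
  }
  children <- t(replicate(pop_size, {
    p1 <- pick(); p2 <- pick()
    cut <- sample(n_bits - 1, 1)                 # one-point crossover
    child <- c(p1[1:cut], p2[(cut + 1):n_bits])
    flip <- runif(n_bits) < p_mut                # bit-flip mutation
    child[flip] <- 1 - child[flip]
    child
  }))
  pop <- children
}
max(apply(pop, 1, fitness))   # typically close to n_bits after a few dozen generations
```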
{ "source": [ "https://stats.stackexchange.com/questions/249471", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/141025/" ] }
249,688
It came as a bit of a shock to me the first time I did a normal distribution Monte Carlo simulation and discovered that the mean of $100$ standard deviations from $100$ samples, all having a sample size of only $n=2$, proved to be much less than, i.e., averaging $ \sqrt{\frac{2}{\pi }}$ times, the $\sigma$ used for generating the population. However, this is well known, if seldom remembered, and I sort of did know, or I would not have done a simulation. Here is a simulation. Here is an example for predicting 95% confidence intervals of $N(0,1)$ using 100, $n=2$, estimates of $\text{SD}$, and $\text{E}(s_{n=2})=\sqrt\frac{\pi}{2}\text{SD}$. RAND() RAND() Calc Calc N(0,1) N(0,1) SD E(s) -1.1171 -0.0627 0.7455 0.9344 1.7278 -0.8016 1.7886 2.2417 1.3705 -1.3710 1.9385 2.4295 1.5648 -0.7156 1.6125 2.0209 1.2379 0.4896 0.5291 0.6632 -1.8354 1.0531 2.0425 2.5599 1.0320 -0.3531 0.9794 1.2275 1.2021 -0.3631 1.1067 1.3871 1.3201 -1.1058 1.7154 2.1499 -0.4946 -1.1428 0.4583 0.5744 0.9504 -1.0300 1.4003 1.7551 -1.6001 0.5811 1.5423 1.9330 -0.5153 0.8008 0.9306 1.1663 -0.7106 -0.5577 0.1081 0.1354 0.1864 0.2581 0.0507 0.0635 -0.8702 -0.1520 0.5078 0.6365 -0.3862 0.4528 0.5933 0.7436 -0.8531 0.1371 0.7002 0.8775 -0.8786 0.2086 0.7687 0.9635 0.6431 0.7323 0.0631 0.0791 1.0368 0.3354 0.4959 0.6216 -1.0619 -1.2663 0.1445 0.1811 0.0600 -0.2569 0.2241 0.2808 -0.6840 -0.4787 0.1452 0.1820 0.2507 0.6593 0.2889 0.3620 0.1328 -0.1339 0.1886 0.2364 -0.2118 -0.0100 0.1427 0.1788 -0.7496 -1.1437 0.2786 0.3492 0.9017 0.0022 0.6361 0.7972 0.5560 0.8943 0.2393 0.2999 -0.1483 -1.1324 0.6959 0.8721 -1.3194 -0.3915 0.6562 0.8224 -0.8098 -2.0478 0.8754 1.0971 -0.3052 -1.1937 0.6282 0.7873 0.5170 -0.6323 0.8127 1.0186 0.6333 -1.3720 1.4180 1.7772 -1.5503 0.7194 1.6049 2.0115 1.8986 -0.7427 1.8677 2.3408 2.3656 -0.3820 1.9428 2.4350 -1.4987 0.4368 1.3686 1.7153 -0.5064 1.3950 1.3444 1.6850 1.2508 0.6081 0.4545 0.5696 -0.1696 -0.5459 0.2661 0.3335 -0.3834 -0.8872 0.3562 0.4465 0.0300 -0.8531 0.6244 0.7826 0.4210 0.3356 0.0604 0.0757 0.0165 2.0690 1.4514 1.8190 -0.2689 1.5595 1.2929 1.6204 1.3385 0.5087 0.5868 0.7354 1.1067 0.3987 0.5006 0.6275 2.0015 -0.6360 1.8650 2.3374 -0.4504 0.6166 0.7545 0.9456 0.3197 -0.6227 0.6664 0.8352 -1.2794 -0.9927 0.2027 0.2541 1.6603 -0.0543 1.2124 1.5195 0.9649 -1.2625 1.5750 1.9739 -0.3380 -0.2459 0.0652 0.0817 -0.8612 2.1456 2.1261 2.6647 0.4976 -1.0538 1.0970 1.3749 -0.2007 -1.3870 0.8388 1.0513 -0.9597 0.6327 1.1260 1.4112 -2.6118 -0.1505 1.7404 2.1813 0.7155 -0.1909 0.6409 0.8033 0.0548 -0.2159 0.1914 0.2399 -0.2775 0.4864 0.5402 0.6770 -1.2364 -0.0736 0.8222 1.0305 -0.8868 -0.6960 0.1349 0.1691 1.2804 -0.2276 1.0664 1.3365 0.5560 -0.9552 1.0686 1.3393 0.4643 -0.6173 0.7648 0.9585 0.4884 -0.6474 0.8031 1.0066 1.3860 0.5479 0.5926 0.7427 -0.9313 0.5375 1.0386 1.3018 -0.3466 -0.3809 0.0243 0.0304 0.7211 -0.1546 0.6192 0.7760 -1.4551 -0.1350 0.9334 1.1699 0.0673 0.4291 0.2559 0.3207 0.3190 -0.1510 0.3323 0.4165 -1.6514 -0.3824 0.8973 1.1246 -1.0128 -1.5745 0.3972 0.4978 -1.2337 -0.7164 0.3658 0.4585 -1.7677 -1.9776 0.1484 0.1860 -0.9519 -0.1155 0.5914 0.7412 1.1165 -0.6071 1.2188 1.5275 -1.7772 0.7592 1.7935 2.2478 0.1343 -0.0458 0.1273 0.1596 0.2270 0.9698 0.5253 0.6583 -0.1697 -0.5589 0.2752 0.3450 2.1011 0.2483 1.3101 1.6420 -0.0374 0.2988 0.2377 0.2980 -0.4209 0.5742 0.7037 0.8819 1.6728 -0.2046 1.3275 1.6638 1.4985 -1.6225 2.2069 2.7659 0.5342 -0.5074 0.7365 0.9231 0.7119 0.8128 0.0713 0.0894 1.0165 -1.2300 1.5885 1.9909 -0.2646 -0.5301 0.1878 0.2353 -1.1488 -0.2888 0.6081 0.7621 
-0.4225 0.8703 0.9141 1.1457 0.7990 -1.1515 1.3792 1.7286 0.0344 -0.1892 0.8188 1.0263 mean E(.) SD pred E(s) pred -1.9600 -1.9600 -1.6049 -2.0114 2.5% theor, est 1.9600 1.9600 1.6049 2.0114 97.5% theor, est 0.3551 -0.0515 2.5% err -0.3551 0.0515 97.5% err Drag the slider down to see the grand totals. Now, I used the ordinary SD estimator to calculate 95% confidence intervals around a mean of zero, and they are off by 0.3551 standard deviation units. The E(s) estimator is off by only 0.0515 standard deviation units. If one estimates standard deviation, standard error of the mean, or t-statistics, there may be a problem. My reasoning was as follows, the population mean, $\mu$, of two values can be anywhere with respect to a $x_1$ and is definitely not located at $\frac{x_1+x_2}{2}$, which latter makes for an absolute minimum possible sum squared so that we are underestimating $\sigma$ substantially, as follows w.l.o.g. let $x_2-x_1=d$, then $\Sigma_{i=1}^{n}(x_i-\bar{x})^2$ is $2 (\frac{d}{2})^2=\frac{d^2}{2}$, the least possible result. That means that standard deviation calculated as $\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}$ , is a biased estimator of the population standard deviation ($\sigma$). Note, in that formula we decrement the degrees of freedom of $n$ by 1 and dividing by $n-1$, i.e., we do some correction, but it is only asymptotically correct, and $n-3/2$ would be a better rule of thumb . For our $x_2-x_1=d$ example the $\text{SD}$ formula would give us $SD=\frac{d}{\sqrt 2}\approx 0.707d$, a statistically implausible minimum value as $\mu\neq \bar{x}$, where a better expected value ($s$) would be $E(s)=\sqrt{\frac{\pi }{2}}\frac{d}{\sqrt 2}=\frac{\sqrt\pi }{2}d\approx0.886d$. For the usual calculation, for $n<10$, $\text{SD}$s suffer from very significant underestimation called small number bias , which only approaches 1% underestimation of $\sigma$ when $n$ is approximately $25$. Since many biological experiments have $n<25$, this is indeed an issue. For $n=1000$, the error is approximately 25 parts in 100,000. In general, small number bias correction implies that the unbiased estimator of population standard deviation of a normal distribution is $\text{E}(s)\,=\,\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}>\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}\; .$ From Wikipedia under creative commons licensing one has a plot of SD underestimation of $\sigma$ Since SD is a biased estimator of population standard deviation, it cannot be the minimum variance unbiased estimator MVUE of population standard deviation unless we are happy with saying that it is MVUE as $n\rightarrow \infty$, which I, for one, am not. Concerning non-normal distributions and approximately unbiased $SD$ read this . Now comes the question Q1 Can it be proven that the $\text{E}(s)$ above is MVUE for $\sigma$ of a normal distribution of sample-size $n$, where $n$ is a positive integer greater than one? Hint: (But not the answer) see How can I find the standard deviation of the sample standard deviation from a normal distribution? . Next question, Q2 Would someone please explain to me why we are using $\text{SD}$ anyway as it is clearly biased and misleading? That is, why not use $\text{E}(s)$ for most everything? Supplementary, it has become clear in the answers below that variance is unbiased, but its square root is biased. 
I would request that answers address the question of when unbiased standard deviation should be used. As it turns out, a partial answer is that to avoid bias in the simulation above, the variances could have been averaged rather than the SD-values. To see the effect of this, if we square the SD column above, and average those values we get 0.9994, the square root of which is an estimate of the standard deviation 0.9996915 and the error for which is only 0.0006 for the 2.5% tail and -0.0006 for the 95% tail. Note that this is because variances are additive, so averaging them is a low error procedure. However, standard deviations are biased, and in those cases where we do not have the luxury of using variances as an intermediary, we still need small number correction. Even if we can use variance as an intermediary, in this case for $n=100$, the small sample correction suggests multiplying the square root of unbiased variance 0.9996915 by 1.002528401 to give 1.002219148 as an unbiased estimate of standard deviation. So, yes, we can delay using small number correction but should we therefore ignore it entirely? The question here is when should we be using small number correction, as opposed to ignoring its use, and predominantly, we have avoided its use. Here is another example, the minimum number of points in space to establish a linear trend that has an error is three. If we fit these points with ordinary least squares the result for many such fits is a folded normal residual pattern if there is non-linearity and half normal if there is linearity. In the half-normal case our distribution mean requires small number correction. If we try the same trick with 4 or more points, the distribution will not generally be normal related or easy to characterize. Can we use variance to somehow combine those 3-point results? Perhaps, perhaps not. However, it is easier to conceive of problems in terms of distances and vectors.
For the more restricted question Why is a biased standard deviation formula typically used? the simple answer Because the associated variance estimator is unbiased. There is no real mathematical/statistical justification. may be accurate in many cases. However, this is not necessarily always the case. There are at least two important aspects of these issues that should be understood. First, the sample variance $s^2$ is not just unbiased for Gaussian random variables. It is unbiased for any distribution with finite variance $\sigma^2$ (as discussed below, in my original answer). The question notes that $s$ is not unbiased for $\sigma$, and suggests an alternative which is unbiased for a Gaussian random variable. However it is important to note that unlike the variance, for the standard deviation it is not possible to have a "distribution free" unbiased estimator (*see note below). Second, as mentioned in the comment by whuber the fact that $s$ is biased does not impact the standard "t test". First note that, for a Gaussian variable $x$, if we estimate z-scores from a sample $\{x_i\}$ as $$z_i=\frac{x_i-\mu}{\sigma}\approx\frac{x_i-\bar{x}}{s}$$ then these will be biased. However the t statistic is usually used in the context of the sampling distribution of $\bar{x}$. In this case the z-score would be $$z_{\bar{x}}=\frac{\bar{x}-\mu}{\sigma_{\bar{x}}}\approx\frac{\bar{x}-\mu}{s/\sqrt{n}}=t$$ though we can compute neither $z$ nor $t$, as we do not know $\mu$. Nonetheless, if the $z_{\bar{x}}$ statistic would be normal, then the $t$ statistic will follow a Student-t distribution . This is not a large-$n$ approximation. The only assumption is that the $x$ samples are i.i.d. Gaussian. (Commonly the t-test is applied more broadly for possibly non-Gaussian $x$. This does rely on large-$n$, which by the central limit theorem ensures that $\bar{x}$ will still be Gaussian.) *Clarification on "distribution-free unbiased estimator" By "distribution free", I mean that the estimator cannot depend on any information about the population $x$ aside from the sample $\{x_1,\ldots,x_n\}$. By "unbiased" I mean that the expected error $\mathbb{E}[\hat{\theta}_n]-\theta$ is uniformly zero, independent of the sample size $n$. (As opposed to an estimator that is merely asymptotically unbiased, a.k.a. " consistent ", for which the bias vanishes as $n\to\infty$.) In the comments this was given as a possible example of a "distribution-free unbiased estimator". Abstracting a bit, this estimator is of the form $\hat{\sigma}=f[s,n,\kappa_x]$, where $\kappa_x$ is the excess kurtosis of $x$. This estimator is not "distribution free", as $\kappa_x$ depends on the distribution of $x$. The estimator is said to satisfy $\mathbb{E}[\hat{\sigma}]-\sigma_x=\mathrm{O}[\frac{1}{n}]$, where $\sigma_x^2$ is the variance of $x$. Hence the estimator is consistent, but not (absolutely) "unbiased", as $\mathrm{O}[\frac{1}{n}]$ can be arbitrarily large for small $n$. Note: Below is my original "answer". From here on, the comments are about the standard "sample" mean and variance, which are "distribution-free" unbiased estimators (i.e. the population is not assumed to be Gaussian). This is not a complete answer, but rather a clarification on why the sample variance formula is commonly used. Given a random sample $\{x_1,\ldots,x_n\}$, so long as the variables have a common mean, the estimator $\bar{x}=\frac{1}{n}\sum_ix_i$ will be unbiased , i.e. 
$$\mathbb{E}[x_i]=\mu \implies \mathbb{E}[\bar{x}]=\mu$$ If the variables also have a common finite variance, and they are uncorrelated, then the estimator $s^2=\frac{1}{n-1}\sum_i(x_i-\bar{x})^2$ will also be unbiased, i.e. $$\mathbb{E}[x_ix_j]-\mu^2=\begin{cases}\sigma^2&i=j\\0&i\neq{j}\end{cases} \implies \mathbb{E}[s^2]=\sigma^2$$ Note that the unbiasedness of these estimators depends only on the above assumptions (and the linearity of expectation; the proof is just algebra). The result does not depend on any particular distribution, such as Gaussian. The variables $x_i$ do not have to have a common distribution, and they do not even have to be independent (i.e. the sample does not have to be i.i.d.).

The "sample standard deviation" $s$ is not an unbiased estimator, $\mathbb{E}[s]\neq\sigma$, but nonetheless it is commonly used. My guess is that this is simply because it is the square root of the unbiased sample variance. (With no more sophisticated justification.)

In the case of an i.i.d. Gaussian sample, the maximum likelihood estimates (MLE) of the parameters are $\hat{\mu}_\mathrm{MLE}=\bar{x}$ and $(\hat{\sigma}^2)_\mathrm{MLE}=\frac{n-1}{n}s^2$, i.e. the variance divides by $n$ rather than $n-1$. Moreover, in the i.i.d. Gaussian case the standard deviation MLE is just the square root of the MLE variance. However these formulas, as well as the one hinted at in your question, depend on the Gaussian i.i.d. assumption.

Update: Additional clarification on "biased" vs. "unbiased". Consider an $n$-element sample as above, $X=\{x_1,\ldots,x_n\}$, with sum-square-deviation $$\delta^2_n=\sum_i(x_i-\bar{x})^2$$ Given the assumptions outlined in the first part above, we necessarily have $$\mathbb{E}[\delta^2_n]=(n-1)\sigma^2$$ so the (Gaussian-)MLE estimator is biased $$\widehat{\sigma^2_n}=\tfrac{1}{n}\delta^2_n \implies \mathbb{E}[\widehat{\sigma^2_n}]=\tfrac{n-1}{n}\sigma^2 $$ while the "sample variance" estimator is unbiased $$s^2_n=\tfrac{1}{n-1}\delta^2_n \implies \mathbb{E}[s^2_n]=\sigma^2$$ Now it is true that $\widehat{\sigma^2_n}$ becomes less biased as the sample size $n$ increases. However $s^2_n$ has zero bias no matter the sample size (so long as $n>1$). For both estimators, the variance of their sampling distribution will be non-zero, and depend on $n$.

As an example, the Matlab code below considers an experiment with $n=2$ samples from a standard-normal population $z$. To estimate the sampling distributions for $\bar{x},\widehat{\sigma^2},s^2$, the experiment is repeated $N=10^6$ times. (You can cut & paste the code here to try it out yourself.)

```matlab
% n=sample size, N=number of samples
n=2; N=1e6;
% generate standard-normal random #'s
z=randn(n,N); % i.e. mu=0, sigma=1
% compute sample stats (Gaussian MLE)
zbar=sum(z)/n; zvar_mle=sum((z-zbar).^2)/n;
% compute ensemble stats (sampling-pdf means)
zbar_avg=sum(zbar)/N, zvar_mle_avg=sum(zvar_mle)/N
% compute unbiased variance
zvar_avg=zvar_mle_avg*n/(n-1)
```

Typical output is like

```
zbar_avg = 1.4442e-04
zvar_mle_avg = 0.49988
zvar_avg = 0.99977
```

confirming that \begin{align} \mathbb{E}[\bar{z}]&\approx\overline{(\bar{z})}\approx\mu=0 \\ \mathbb{E}[s^2]&\approx\overline{(s^2)}\approx\sigma^2=1 \\ \mathbb{E}[\widehat{\sigma^2}]&\approx\overline{(\widehat{\sigma^2})}\approx\frac{n-1}{n}\sigma^2=\frac{1}{2} \end{align}

Update 2: Note on fundamentally "algebraic" nature of unbiased-ness.
In the above numerical demonstration, the code approximates the true expectation $\mathbb{E}[\,]$ using an ensemble average with $N=10^6$ replications of the experiment (i.e. each is a sample of size $n=2$). Even with this large number, the typical results quoted above are far from exact. To numerically demonstrate that the estimators are really unbiased, we can use a simple trick to approximate the $N\to\infty$ case: simply add the following line to the code

```matlab
% optional: "whiten" data (ensure exact ensemble stats)
[U,S,V]=svd(z-mean(z,2),'econ'); z=sqrt(N)*U*V';
```

(placing after "generate standard-normal random #'s" and before "compute sample stats"). With this simple change, even running the code with $N=10$ gives results like

```
zbar_avg = 1.1102e-17
zvar_mle_avg = 0.50000
zvar_avg = 1.00000
```
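As a complementary check of the small-sample correction raised in the question, the short R sketch below (my own illustration, assuming a standard normal population with $\sigma=1$) compares the simulated mean of the sample standard deviation with the Gamma-function factor, often written $c_4(n)$, so that $\mathbb{E}[s]=c_4(n)\,\sigma$.

```r
# Compare the simulated mean of the sample SD with the theoretical c4(n) factor,
# for a standard normal population (sigma = 1). Purely illustrative.
set.seed(17)
c4 <- function(n) sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)

for (n in c(2, 5, 10, 25)) {
  s_bar <- mean(replicate(1e5, sd(rnorm(n))))
  cat(sprintf("n = %2d: simulated E(s) = %.4f, c4(n) = %.4f\n",
              n, s_bar, c4(n)))
}
# For n = 2 both numbers are close to sqrt(2/pi) = 0.7979, as in the question.
```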
{ "source": [ "https://stats.stackexchange.com/questions/249688", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/99274/" ] }
250,277
I am a bit confused about advantages of mixed models in regard to predictive modelling. Since predictive models are usually meant to predict values of previously unknown observations then it seems obvious to me that the only way a mixed model may be useful is through its ability to provide population-level predictions (that is without adding any random effects). However, the problem is that so far in my experience population-level predictions based on mixed models are significantly worse than predictions based on standard regression models with fixed effects only. So what is the point of mixed models in regard to prediction problems? EDIT. The problem is the following: I fitted a mixed model (with both fixed and random effects) and standard linear model with fixed effects only. When I do cross-validation I get a following hierarchy of predictive accuracy: 1) mixed models when predicting using fixed and random effects (but this works of course only for observations with known levels of random effects variables, so this predictive approach seems not to be suitable for real predictive applications!); 2) standard linear model; 3) mixed model when using population-level predictions (so with random effects thrown out). Thus, the only difference between standard linear model and mixed model are somewhat different value of coefficients due to different estimation methods (i.e. there are the same effects/predictors in both models, but they have different associated coefficients). So my confusion boils down to a question, why would I ever use a mixed model as a predictive model, since using mixed model to generate population-level predictions seems to be an inferior strategy in comparison to a standard linear model.
It depends on the nature of the data, but in general I would expect the mixed model to outperform the fixed-effects-only models. Let's take an example: modelling the relationship between sunshine and the height of wheat stalks. We have a number of measurements of individual stalks, but many of the stalks are measured at the same sites (which are similar in soil, water and other things that may affect height). Here are some possible models:

1) height ~ sunshine
2) height ~ sunshine + site
3) height ~ sunshine + (1|site)

We want to use these models to predict the height of new wheat stalks given some estimate of the sunshine they will experience. I'm going to ignore the parameter penalty you would pay for having many sites in a fixed-effects-only model, and just consider the relative predictive power of the models. The most relevant question here is whether the new data points you are trying to predict are from one of the sites you have measured; you say this is rare in the real world, but it does happen.

A) New data are from a site you have measured. If so, models #2 and #3 will outperform #1. They both use more relevant information (the mean site effect) to make predictions.

B) New data are from an unmeasured site. I would still expect model #3 to outperform #1 and #2, for the following reasons.

(i) Model #3 vs #1: Model #1 will produce estimates that are biased in favour of overrepresented sites. If you have similar numbers of points from each site and a reasonably representative sample of sites, you should get similar results from both.

(ii) Model #3 vs #2: Why would model #3 be better than model #2 in this case? Because random effects take advantage of shrinkage - the site effects will be 'shrunk' towards zero. In other words, you will tend to find less extreme values for the site effects when they are specified as random effects than when they are specified as fixed effects. This is useful and improves your predictive ability when the population means can reasonably be thought of as being drawn from a normal distribution (see Stein's Paradox in Statistics). If the population means are not expected to follow a normal distribution, this might be a problem, but it's usually a very reasonable assumption and the method is robust to small deviations.

[Side note: by default, when fitting model #2, most software would use one of the sites as a reference and estimate coefficients for the other sites that represent their deviation from the reference. So it may appear as though there is no way to calculate an overall 'population effect'. But you can calculate this by averaging across predictions for all of the individual sites, or more simply by changing the coding of the model so that coefficients are calculated for every site.]
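To illustrate the difference in practice, here is a minimal sketch of models #2 and #3, assuming the lme4 package and a hypothetical data frame `wheat` with columns `height`, `sunshine` and `site`; with `re.form = NA` the mixed model returns population-level predictions (random effects set to zero), which is what you would use for an unmeasured site.

```r
# Sketch only: 'wheat' is a hypothetical data frame; lme4 must be installed.
library(lme4)

m2 <- lm(height ~ sunshine + site, data = wheat)          # model #2 (fixed site)
m3 <- lmer(height ~ sunshine + (1 | site), data = wheat)  # model #3 (random site)

# New stalk from a measured site ("A" must be one of the sites in 'wheat'):
# the prediction uses that site's shrunken effect.
predict(m3, newdata = data.frame(sunshine = 7, site = "A"))

# New stalk from an unmeasured site: population-level prediction,
# i.e. with the random effects set to zero.
predict(m3, newdata = data.frame(sunshine = 7), re.form = NA)

ranef(m3)$site   # the shrunken site effects themselves
```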
{ "source": [ "https://stats.stackexchange.com/questions/250277", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/111884/" ] }
250,522
I know that Bayes' rule is derived from conditional probability. But intuitively, what is the difference? The equations look the same to me: the numerator is the joint probability and the denominator is the probability of the given outcome. This is the conditional probability: $P(A\mid B)=\frac{P(A \cap B)}{P(B)}$. This is Bayes' rule: $P(A\mid B)=\frac{P(B\mid A)\, P(A)}{P(B)}$. Aren't $P(B\mid A)\,P(A)$ and $P(A \cap B)$ the same? And when $A$ and $B$ are independent, there is no need to use Bayes' rule, right?
OK, now that you have updated your question to include the two formulas: $$P(A\mid B) = \frac{P(A\cap B)}{P(B)} ~~ \text{provided that } P(B) > 0, \tag{1}$$ is the definition of the conditional probability of $A$ given that $B$ occurred. Similarly, $$P(B\mid A) = \frac{P(B\cap A)}{P(A)} = \frac{P(A\cap B)}{P(A)} ~~ \text{provided that } P(A) > 0, \tag{2}$$ is the definition of the conditional probability of $B$ given that $A$ occurred. Now, it is true that it is a trivial matter to substitute the value of $P(A\cap B)$ from $(2)$ into $(1)$ to arrive at $$P(A\mid B) = \frac{P(B\mid A)P(A)}{P(B)} ~~ \text{provided that } P(A), P(B) > 0, \tag{3}$$ which is Bayes' formula but notice that Bayes's formula actually connects two different conditional probabilities $P(A\mid B)$ and $P(B\mid A)$ , and is essentially a formula for "turning the conditioning around". The Reverend Thomas Bayes referred to this in terms of "inverse probability" and even today, there is vigorous debate as to whether statistical inference should be based on $P(B\mid A)$ or the inverse probability (called the a posteriori or posterior probability). It is undoubtedly as galling to you as it was to me when I first discovered that Bayes' formula was just a trivial substitution of $(2)$ into $(1)$ . Perhaps if you have been born 250 years ago, you (Note: the OP masqueraded under username AlphaBetaGamma when I wrote this answer but has since changed his username) could have made the substitution and then people today would be talking about the AlphaBetaGamma formula and the AlphaBetaGammian heresy and the Naive AlphaBetaGamma method $^*$ instead of invoking Bayes' name everywhere. So let me console you on your loss of fame by pointing out a different version of Bayes' formula. The Law of Total Probability says that $$P(B) = P(B\mid A)P(A) + P(B\mid A^c)P(A^c) \tag{4}$$ and using this, we can write $(3)$ as $$P(A\mid B) = \frac{P(B\mid A)P(A)}{P(B\mid A)P(A) + P(B\mid A^c)P(A^c)}, \tag{5}$$ or more generally as $$P(A_i\mid B) = \frac{P(B\mid A_i)P(A_i)}{P(B\mid A_1)P(A_1) + P(B\mid A_2)P(A_2) + \cdots + P(B\mid A_n)P(A_n)}, \tag{6}$$ where the posterior probability of a possible "cause" $A_i$ of a "datum" $B$ is related to $P(B\mid A_i)$ , the likelihood of the observation $B$ when $A_i$ is the true hypothesis and $P(A_i)$ , the prior probability (horrors!) of the hypothesis $A_i$ . $^*$ There is a famous paper R. Alpher, H. Bethe, and G. Gamow, "The Origin of Chemical Elements", Physical Review, April 1, 1948, that is commonly referred to as the $\alpha\beta\gamma$ paper .
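As a small numerical illustration of equations $(4)$ and $(5)$ (the numbers here are invented for the example): suppose a condition has prior probability $P(A)=0.01$, a test detects it with $P(B\mid A)=0.99$, and it gives false positives with $P(B\mid A^c)=0.05$.

```r
# A made-up numerical illustration of "turning the conditioning around".
p_A      <- 0.01   # P(A): prior probability of the condition
p_B_A    <- 0.99   # P(B | A): probability of a positive test given the condition
p_B_notA <- 0.05   # P(B | A^c): false positive rate

p_B   <- p_B_A * p_A + p_B_notA * (1 - p_A)   # law of total probability, eq. (4)
p_A_B <- p_B_A * p_A / p_B                    # Bayes' formula, eq. (5)
p_A_B                                         # about 0.17, despite the "99% accurate" test
```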
{ "source": [ "https://stats.stackexchange.com/questions/250522", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/114477/" ] }
250,819
How can I calculate the p-value given Chi Squared and the Degrees of Freedom? For example, what would be the exact p-value of a Chi Squared = 15 with df = 2?
In applied statistics, chisquared test statistics arise as sums of squared residuals, or from sums of squared effects or from log-likelihood differences. In all of these applications, the aim is to test whether some vector parameter is zero vs the alternative that it is non-zero and the chisquare statistic is related to the squared size of the observed effect. The required p-value is the right tail probability for the chisquare value, which in R for your example is:

```r
> pchisq(15, df=2, lower.tail=FALSE)
[1] 0.0005530844
```

For other df or statistic values, you obviously just substitute them into the above code. All cumulative probability functions in R compute left tail probabilities by default. However they also have a lower.tail argument, and you can always set this FALSE to get the right tail probability. It is good practice to do this rather than to compute $1-p$ as you might see in some elementary textbooks. The function qchisq does the reverse calculation, finding the value ("q" is for quantile) of the chisquare statistic corresponding to any given tail probability. For example, the chisquare statistic corresponding to a p-value of 0.05 is given by

```r
> qchisq(0.05, df=2, lower.tail=FALSE)
[1] 5.991465
```
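If you want to see where that number comes from, the same tail probability can be obtained by integrating the chi-squared density directly; the snippet below is just a cross-check of pchisq, not something you would normally need in practice.

```r
# Cross-check by numerical integration of the chi-squared density.
integrate(dchisq, lower = 15, upper = Inf, df = 2)
# essentially the same value as pchisq(15, df=2, lower.tail=FALSE)

# For df = 2 the chi-squared distribution is exponential with rate 1/2,
# so the tail probability even has a closed form:
exp(-15 / 2)   # 0.000553...
```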
{ "source": [ "https://stats.stackexchange.com/questions/250819", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26456/" ] }
250,937
I read about two versions of the loss function for logistic regression, which of them is correct and why? From Machine Learning , Zhou Z.H (in Chinese), with $\beta = (w, b)\text{ and }\beta^Tx=w^Tx +b$ : $$l(\beta) = \sum\limits_{i=1}^{m}\Big(-y_i\beta^Tx_i+\ln(1+e^{\beta^Tx_i})\Big) \tag 1$$ From my college course, with $z_i = y_if(x_i)=y_i(w^Tx_i + b)$ : $$L(z_i)=\log(1+e^{-z_i}) \tag 2$$ I know that the first one is an accumulation of all samples and the second one is for a single sample, but I am more curious about the difference in the form of two loss functions. Somehow I have a feeling that they are equivalent.
The relationship is as follows: $l(\beta) = \sum_i L(z_i)$. Define a logistic function as $f(z) = \frac{e^{z}}{1 + e^{z}} = \frac{1}{1+e^{-z}}$. They possess the property that $f(-z) = 1-f(z)$. Or in other words: $$ \frac{1}{1+e^{z}} = \frac{e^{-z}}{1+e^{-z}}. $$ If you take the reciprocal of both sides, then take the log you get: $$ \ln(1+e^{z}) = \ln(1+e^{-z}) + z. $$ Subtract $z$ from both sides and you should see this: $$ -y_i\beta^Tx_i+ln(1+e^{y_i\beta^Tx_i}) = L(z_i). $$ Edit: At the moment I am re-reading this answer and am confused about how I got $-y_i\beta^Tx_i+ln(1+e^{\beta^Tx_i})$ to be equal to $-y_i\beta^Tx_i+ln(1+e^{y_i\beta^Tx_i})$. Perhaps there's a typo in the original question. Edit 2: In the case that there wasn't a typo in the original question, @ManelMorales appears to be correct to draw attention to the fact that, when $y \in \{-1,1\}$, the probability mass function can be written as $P(Y_i=y_i) = f(y_i\beta^Tx_i)$, due to the property that $f(-z) = 1 - f(z)$. I am re-writing it differently here, because he introduces a new equivocation on the notation $z_i$. The rest follows by taking the negative log-likelihood for each $y$ coding. See his answer below for more details.
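As a quick numerical sanity check of the equivalence (my own simulated example, not taken from either source): with $y_i\in\{0,1\}$ in form (1) and $y_i\in\{-1,1\}$ in form (2), the summed losses agree term by term.

```r
# Numerical check that the two logistic-loss formulations coincide.
set.seed(1)
n    <- 10
X    <- cbind(1, rnorm(n))          # design matrix including the intercept (b)
beta <- c(0.5, -1.2)                # arbitrary coefficients
eta  <- drop(X %*% beta)            # beta^T x_i

y01 <- rbinom(n, 1, plogis(eta))    # labels coded 0/1
ypm <- 2 * y01 - 1                  # the same labels coded -1/+1

loss1 <- sum(-y01 * eta + log(1 + exp(eta)))   # equation (1)
loss2 <- sum(log(1 + exp(-ypm * eta)))         # sum of equation (2) terms
all.equal(loss1, loss2)                        # TRUE
```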
{ "source": [ "https://stats.stackexchange.com/questions/250937", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/139862/" ] }
251,460
I read on https://en.wikipedia.org/wiki/Generative_adversarial_networks : [Generative adversarial networks] were introduced by Ian Goodfellow et al in 2014. but Jurgen Schmidhuber claims to have performed similar work earlier in that direction (e.g., there was some debate at NIPS 2016 during the generative adversarial networks tutorial: https://channel9.msdn.com/Events/Neural-Information-Processing-Systems-Conference/Neural-Information-Processing-Systems-Conference-NIPS-2016/Generative-Adversarial-Networks see 1h03min). Was the idea behind generative adversarial networks first publicly introduced by Jürgen Schmidhuber? If not, how similar were Jürgen Schmidhuber's ideas?
I self-published the basic idea of a deterministic variety of generative adversarial networks (GANs) in a 2010 blog post (archive.org) . I had searched for but could not find anything similar anywhere, and had no time to try implementing it. I was not and still am not a neural network researcher and have no connections in the field. I'll copy-paste the blog post here: 2010-02-24 A method for training artificial neural networks to generate missing data within a variable context. As the idea is hard to put in a single sentence, I will use an example: An image may have missing pixels (let's say, under a smudge). How can one restore the missing pixels, knowing only the surrounding pixels? One approach would be a "generator" neural network that, given the surrounding pixels as input, generates the missing pixels. But how to train such a network? One can't expect the network to exactly produce the missing pixels. Imagine, for example, that the missing data is a patch of grass. One could teach the network with a bunch of images of lawns, with portions removed. The teacher knows the data that is missing, and could score the network according to the root mean square difference (RMSD) between the generated patch of grass and the original data. The problem is that if the generator encounters an image that is not part of the training set, it would be impossible for the neural network to put all the leaves, especially in the middle of the patch, in exactly the right places. The lowest RMSD error would probably be achieved by the network filling the middle area of the patch with a solid color that is the average of the color of pixels in typical images of grass. If the network tried to generate grass that looks convincing to a human and as such fulfills its purpose, there would be an unfortunate penalty by the RMSD metric. My idea is this (see figure below): Train simultaneously with the generator a classifier network that is given, in random or alternating sequence, generated and original data. The classifier then has to guess, in the context of the surrounding image context, whether the input is original (1) or generated (0). The generator network is simultaneously trying to get a high score (1) from the classifier. The outcome, hopefully, is that both networks start out really simple, and progress towards generating and recognizing more and more advanced features, approaching and possibly defeating human's ability to discern between the generated data and the original. If multiple training samples are considered for each score, then RMSD is the correct error metric to use, as this will encourage the classifier network to output probabilities. Artificial neural network training setup When I mention RMSD at the end I mean the error metric for the "probability estimate", not the pixel values. I originally started considering the use of neural networks in 2000 (comp.dsp post) to generate missing high frequencies for up-sampled (resampled to a higher sampling frequency) digital audio, in a way that would be convincing rather than accurate. In 2001 I collected an audio library for the training. Here are parts of an EFNet #musicdsp Internet Relay Chat (IRC) log from 20 January 2006 in which I (yehar) talk about the idea with another user (_Beta): [22:18] <yehar> the problem with samples is that if you don't have something "up there" already then what can you do if you upsample... 
[22:22] <yehar> i once collected a big library of sounds so that i could develop a "smart" algo to solve this exact problem [22:22] <yehar> i would have used neural networks [22:22] <yehar> but i didn't finish the job :-D [22:23] <_Beta> problem with neural networks is that you have to have some way of measuring the goodness of results [22:24] <yehar> beta: i have this idea that you can develop a "listener" at the same time as you develop the "smart up-there sound creator" [22:26] <yehar> beta: and this listener will learn to detect when it's listening a created or a natural up-there spectrum. and the creator develops at the same time to try to circumvent this detection Sometime between 2006 and 2010, a friend invited an expert to take a look at my idea and discuss it with me. They thought that it was interesting, but said that it wasn't cost-effective to train two networks when a single network can do the job. I was never sure if they did not get the core idea or if they immediately saw a way to formulate it as a single network, perhaps with a bottleneck somewhere in the topology to separate it into two parts. This was at a time when I didn't even know that backpropagation is still the de-facto training method (learned that making videos in the Deep Dream craze of 2015). Over the years I had talked about my idea with a couple of data scientists and others that I thought might be interested, but the response was mild. In May 2017 I saw Ian Goodfellow's tutorial presentation on YouTube [Mirror] , which totally made my day. It appeared to me as the same basic idea, with differences as I currently understand outlined below, and the hard work had been done to make it give good results. Also he gave a theory, or based everything on a theory, of why it should work, while I never did any sort of a formal analysis of my idea. Goodfellow's presentation answered questions that I had had and much more. Goodfellow's GAN and his suggested extensions include a noise source in the generator. I never thought of including a noise source but have instead the training data context, better matching the idea to a conditional GAN (cGAN) without a noise vector input and with the model conditioned on a part of the data. My current understanding based on Mathieu et al. 2016 is that a noise source is not needed for useful results if there is enough input variability. The other difference is that Goodfellow's GAN minimizes log-likelihood. Later, a least squares GAN (LSGAN) has been introduced ( Mao et al. 2017 ) which matches my RMSD suggestion. So, my idea would match that of a conditional least squares generative adversarial network (cLSGAN) without a noise vector input to the generator and with a part of the data as the conditioning input. A generative generator samples from an approximation of the data distribution. I do now know if and doubt that real-world noisy input would enable that with my idea, but that is not to say that the results would not be useful if it didn't. The differences mentioned in the above are the primary reason why I believe Goodfellow did not know or hear about my idea. Another is that my blog has had no other machine learning content, so it would have enjoyed very limited exposure in machine learning circles. It is a conflict of interests when a reviewer puts pressure on an author to cite the reviewer's own work.
{ "source": [ "https://stats.stackexchange.com/questions/251460", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12359/" ] }
251,600
I am trying to understand quantile regression, but one thing that makes me struggle is the choice of the loss function: $$\rho_\tau(u) = u\,(\tau-1_{\{u<0\}}).$$ I know that the minimizer of the expectation of $\rho_\tau(y-u)$ over $u$ is the $\tau$-quantile, but what is the intuitive reason to start off with this function? I don't see the relation between minimizing this function and the quantile. Can somebody explain it to me?
I understand this question as asking for insight into how one could come up with any loss function that produces a given quantile as a loss minimizer no matter what the underlying distribution might be. It would be unsatisfactory, then, just to repeat the analysis in Wikipedia or elsewhere that shows this particular loss function works. Let's begin with something familiar and simple. What you're talking about is finding a "location" $x^{*}$ relative to a distribution or set of data $F$. It is well known, for instance, that the mean $\bar x$ minimizes the expected squared residual; that is, it is a value for which $$\mathcal{L}_F(\bar x)=\int_{\mathbb{R}} (x - \bar x)^2 dF(x)$$ is as small as possible. I have used this notation to remind us that $\mathcal{L}$ is derived from a loss , that it is determined by $F$, but most importantly it depends on the number $\bar x$. The standard way to show that $x^{*}$ minimizes any function begins by demonstrating the function's value does not decrease when $x^{*}$ is changed by a little bit. Such a value is called a critical point of the function. What kind of loss function $\Lambda$ would result in a percentile $F^{-1}(\alpha)$ being a critical point? The loss for that value would be $$\mathcal{L}_F(F^{-1}(\alpha)) = \int_{\mathbb{R}} \Lambda(x-F^{-1}(\alpha))dF(x)=\int_0^1\Lambda\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.$$ For this to be a critical point, its derivative must be zero. Since we're just trying to find some solution, we won't pause to see whether the manipulations are legitimate: we'll plan to check technical details (such as whether we really can differentiate $\Lambda$, etc. ) at the end. Thus $$\eqalign{0 &=\mathcal{L}_F^\prime(x^{*})= \mathcal{L}_F^\prime(F^{-1}(\alpha))= -\int_0^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du \\ &= -\int_0^{\alpha} \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du -\int_{\alpha}^1 \Lambda^\prime\left(F^{-1}(u)-F^{-1}(\alpha)\right)du.\tag{1} }$$ On the left hand side, the argument of $\Lambda$ is negative, whereas on the right hand side it is positive. Other than that, we have little control over the values of these integrals because $F$ could be any distribution function. Consequently our only hope is to make $\Lambda^\prime$ depend only on the sign of its argument, and otherwise it must be constant. This implies $\Lambda$ will be piecewise linear, potentially with different slopes to the left and right of zero. Clearly it should be decreasing as zero is approached--it is, after all, a loss and not a gain . Moreover, rescaling $\Lambda$ by a constant will not change its properties, so we may feel free to set the left hand slope to $-1$. Let $\tau \gt 0$ be the right hand slope. Then $(1)$ simplifies to $$0 = \alpha - \tau (1 - \alpha),$$ whence the unique solution is, up to a positive multiple, $$\Lambda(x) = \cases{-x, \ x \le 0 \\ \frac{\alpha}{1-\alpha}x, \ x \ge 0.}$$ Multiplying this (natural) solution by $1-\alpha$, to clear the denominator, produces the loss function presented in the question. Clearly all our manipulations are mathematically legitimate when $\Lambda$ has this form.
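A quick numerical check of this result is possible: for data drawn from any distribution, minimizing the average of the loss in the question over candidate values should land on the empirical $\tau$-quantile. The sketch below is my own minimal illustration (not part of the original answer); the lognormal sample, the grid-search minimizer, and $\tau = 0.9$ are arbitrary choices.

```python
import numpy as np

def pinball_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
y = rng.lognormal(size=10_000)     # any skewed distribution works here
tau = 0.9

# brute-force minimization of the average loss over a grid of candidate locations
candidates = np.linspace(y.min(), y.max(), 2_001)
avg_loss = [pinball_loss(y - c, tau).mean() for c in candidates]
minimizer = candidates[np.argmin(avg_loss)]

print(minimizer)                   # close to the empirical 0.9-quantile
print(np.quantile(y, tau))         # empirical 0.9-quantile for comparison
```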
{ "source": [ "https://stats.stackexchange.com/questions/251600", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74691/" ] }
252,129
Can you provide an example of an MLE estimator of the mean that is biased? I am not looking for an example that breaks MLE estimators in general by violating regularity conditions. All examples I can see on the internet refer to the variance, and I can't seem to find anything related to the mean. EDIT @MichaelHardy provided an example where we get a biased estimate of the mean of uniform distribution using MLE under a certain proposed model. However https://en.wikipedia.org/wiki/Uniform_distribution_(continuous)#Estimation_of_midpoint suggests that MLE is a uniformly minimum unbiased estimator of the mean, clearly under another proposed model. At this point it is still not very clear to me what's meant by MLE estimation if it is very hypothesized model dependent as opposed to say a sample mean estimator which is model neutral. At the end I am interested in estimating something about the population and don't really care about the estimation of a parameter of a hypothesized model. EDIT 2 As @ChristophHanck showed the model with additional information introduced bias but did not manage to reduce the MSE. We also have additional results: http://www.maths.manchester.ac.uk/~peterf/CSI_ch4_part1.pdf (p61) http://www.cs.tut.fi/~hehu/SSP/lecture6.pdf (slide 2) http://www.stats.ox.ac.uk/~marchini/bs2a/lecture4_4up.pdf (slide 5) "If a most efficient unbiased estimator ˆθ of θ exists (i.e. ˆθ is unbiased and its variance is equal to the CRLB) then the maximum likelihood method of estimation will produce it." "Moreover, if an efficient estimator exists, it is the ML estimator." Since the MLE with free model parameters is unbiased and efficient, by definition is this "the" Maximum Likelihood Estimator? EDIT 3 @AlecosPapadopoulos has an example with Half Normal distribution on math forum. https://math.stackexchange.com/questions/799954/can-the-maximum-likelihood-estimator-be-unbiased-and-fail-to-achieve-cramer-rao It is not anchoring any of its parameters like in the uniform case. I would say that settles it, though he hasn't demonstrated the bias of the mean estimator.
Christoph Hanck has not posted the details of his proposed example. I take it he means the uniform distribution on the interval $[0,\theta],$ based on an i.i.d. sample $X_1,\ldots,X_n$ of size more than $n=1.$ The mean is $\theta/2$. The MLE of the mean is $\max\{X_1,\ldots,X_n\}/2.$ That is biased since $\Pr(\max < \theta) = 1,$ so $\operatorname{E}({\max}/2)<\theta/2.$ PS: Perhaps we should note that the best unbiased estimator of the mean $\theta/2$ is not the sample mean, but rather is $$\frac{n+1} {2n} \cdot \max\{X_1,\ldots,X_n\}.$$ The sample mean is a lousy estimator of $\theta/2$ because for some samples, the sample mean is less than $\dfrac 1 2 \max\{X_1,\ldots,X_n\},$ and it is clearly impossible for $\theta/2$ to be less than ${\max}/2.$ end of PS I suspect the Pareto distribution is another such case. Here's the probability measure: $$ \alpha\left( \frac \kappa x \right)^\alpha\ \frac{dx} x \text{ for } x >\kappa. $$ The expected value is $\dfrac \alpha {\alpha -1 } \kappa.$ The MLE of the expected value is $$ \frac n {n - \sum_{i=1}^n \big((\log X_i) - \log(\min)\big)} \cdot \min $$ where $\min = \min\{X_1,\ldots,X_n\}.$ I haven't worked out the expected value of the MLE for the mean, so I don't know what its bias is.
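For the uniform case, a small simulation (my own addition, with $\theta = 1$ and $n = 10$ as arbitrary choices) makes the bias of the MLE visible and shows that the rescaled maximum is unbiased and less variable than the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.0, 10, 200_000
x = rng.uniform(0, theta, size=(reps, n))
m = x.max(axis=1)                              # the sample maximum in each replication

print(np.mean(m / 2))                          # MLE of the mean: clearly below theta/2 = 0.5
print(np.mean((n + 1) / (2 * n) * m))          # rescaled maximum: about 0.5, i.e. unbiased
print(np.mean(x.mean(axis=1)))                 # sample mean: also about 0.5
print(np.var((n + 1) / (2 * n) * m),           # ...but the rescaled maximum has
      np.var(x.mean(axis=1)))                  # much smaller variance than the sample mean
```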
{ "source": [ "https://stats.stackexchange.com/questions/252129", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20980/" ] }
252,133
Imagine a Bayesian Network with binary random variables, with the structure of a binary tree of arbitrary height: I want to find the minimum number of probabilities that I must store at depth k to describe the entire tree up to that point (i.e. the joint distribution) Starting at depth 0 (the root node A), I only need to store 1 probability, i.e. $P(A = True) = P(a)$ because I get $P(\bar{a}) = 1 - P(a)$ for free. At depth 1, I need to store $P(ABC) = \sum_{ABC} P(A)P(B|A)P(C|A)$. I need to only store $P(a), P(b|a), P(b|\bar{a}), P(c|a), P(c|\bar{a})$ = 5 probabilities. This is because from $P(b|a)$ I get $P(\bar{b}|a) = 1 - P(b|a)$, and $P(\bar{c}|a)$ similarly. At depth 2, using similar logic, I store 1 + 4 + 8 = 13 probabilities And so on. So at depth k, I store $\sum_{i=0}^{k} 2^{i+1}-1$ probabilities. Is this correct?
{ "source": [ "https://stats.stackexchange.com/questions/252133", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/73527/" ] }
252,577
I got some questions about the Bayesian regression: Given a standard regression as $y = \beta_0 + \beta_1 x + \varepsilon$. If I want to change this into a Bayesian regression, do I need prior distributions both for $\beta_0$ and $\beta_1$ (or doesn't it work this way)? In standard regression one would try to minimize the residuals to get single values for $\beta_0$ and $\beta_1$. How is this done in Bayes regression? I really struggle a lot here: $$ \text{posterior} = \text{prior} \times \text{likelihood} $$ Likelihood comes from the current dataset (so it's my regression parameter but not as a single value but as a likelihood distribution, right?). Prior comes from a previous research (let's say). So I got this equation: $$ y = \beta_1 x + \varepsilon $$ with $\beta_1$ being my likelihood or posterior (or is this just totally wrong)? I simply can't understand how the standard regression transforms into a Bayes one.
The simple linear regression model $$ y_i = \alpha + \beta x_i + \varepsilon $$ can be written in terms of the probabilistic model behind it $$ \mu_i = \alpha + \beta x_i \\ y_i \sim \mathcal{N}(\mu_i, \sigma) $$ i.e. dependent variable $Y$ follows normal distribution parametrized by mean $\mu_i$ , that is a linear function of $X$ parametrized by $\alpha,\beta$ , and by standard deviation $\sigma$ . If you estimate such a model using ordinary least squares , you do not have to bother about the probabilistic formulation, because you are searching for optimal values of $\alpha,\beta$ parameters by minimizing the squared errors of fitted values to predicted values. On another hand, you could estimate such model using maximum likelihood estimation , where you would be looking for optimal values of parameters by maximizing the likelihood function $$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{\alpha,\,\beta,\,\sigma} \prod_{i=1}^n \mathcal{N}(y_i; \alpha + \beta x_i, \sigma) $$ where $\mathcal{N}$ is a density function of normal distribution evaluated at $y_i$ points, parametrized by means $\alpha + \beta x_i$ and standard deviation $\sigma$ . In the Bayesian approach instead of maximizing the likelihood function alone, we would assume prior distributions for the parameters and use the Bayes theorem $$ \text{posterior} \propto \text{likelihood} \times \text{prior} $$ The likelihood function is the same as above, but what changes is that you assume some prior distributions for the estimated parameters $\alpha,\beta,\sigma$ and include them into the equation $$ \underbrace{f(\alpha,\beta,\sigma\mid Y,X)}_{\text{posterior}} \propto \underbrace{\prod_{i=1}^n \mathcal{N}(y_i\mid \alpha + \beta x_i, \sigma)}_{\text{likelihood}} \; \underbrace{f_{\alpha}(\alpha) \, f_{\beta}(\beta) \, f_{\sigma}(\sigma)}_{\text{priors}} $$ "What distributions?" is a different question, since there is an unlimited number of choices. For $\alpha,\beta$ parameters you could, for example, assume normal distributions parametrized by some hyperparameters , or $t$ -distribution if you want to assume heavier tails, or uniform distribution if you do not want to make many assumptions, but you want to assume that the parameters can be a priori "anything in the given range", etc. For $\sigma$ you need to assume some prior distribution that is bounded to be greater than zero since standard deviation needs to be positive. This may lead to the model formulation as illustrated below by John K. Kruschke. (source: http://www.indiana.edu/~kruschke/BMLR/ ) While in the maximum likelihood you were looking for a single optimal value for each of the parameters, in the Bayesian approach by applying the Bayes theorem you obtain the posterior distribution of the parameters. The final estimate will depend on the information that comes from your data and from your priors , but the more information is contained in your data, the less influential are priors . Notice that when using uniform priors, they take form $f(\theta) \propto 1$ after dropping the normalizing constants. This makes Bayes theorem proportional to the likelihood function alone, so the posterior distribution will reach its maximum at exactly the same point as the maximum likelihood estimate. What follows, the estimate under uniform priors will be the same as by using ordinary least squares since minimizing the squared errors corresponds to maximizing the normal likelihood . 
To estimate a model in the Bayesian approach in some cases you can use conjugate priors , so the posterior distribution is directly available (see example here ). However, in the vast majority of cases, posterior distribution will not be directly available and you will have to use Markov Chain Monte Carlo methods for estimating the model (check this example of using Metropolis-Hastings algorithm to estimate parameters of linear regression). Finally, if you are only interested in point estimates of parameters, you could use maximum a posteriori estimation , i.e. $$ \argmax_{\alpha,\,\beta,\,\sigma} f(\alpha,\beta,\sigma\mid Y,X) $$ For a more detailed description of logistic regression, you can check the Bayesian logit model - intuitive explanation? thread. For learning more you could check the following books: Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC.
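As a concrete illustration of the Metropolis-Hastings route mentioned above, here is a minimal random-walk sampler for the simple model $y_i = \alpha + \beta x_i + \varepsilon_i$. It is my own sketch, not a tuned implementation: the wide normal priors on $\alpha, \beta$, the half-normal-style prior on $\sigma$, the proposal scale, and the burn-in length are all arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated data from y = 1 + 2x + noise
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

def log_posterior(alpha, beta, sigma):
    if sigma <= 0:
        return -np.inf                          # prior support: sigma must be positive
    mu = alpha + beta * x
    log_lik = np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2))
    # N(0, 10) priors on alpha and beta, half-normal-style prior on sigma (up to constants)
    log_prior = -(alpha**2 + beta**2) / (2 * 10**2) - sigma**2 / (2 * 5**2)
    return log_lik + log_prior

# random-walk Metropolis-Hastings
draws = np.empty((20_000, 3))
current = np.array([0.0, 0.0, 1.0])
current_lp = log_posterior(*current)
for i in range(draws.shape[0]):
    proposal = current + rng.normal(scale=0.05, size=3)
    proposal_lp = log_posterior(*proposal)
    if np.log(rng.uniform()) < proposal_lp - current_lp:
        current, current_lp = proposal, proposal_lp
    draws[i] = current

posterior = draws[5_000:]                       # discard burn-in
print(posterior.mean(axis=0))                   # posterior means near (1, 2, 0.5)
```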
{ "source": [ "https://stats.stackexchange.com/questions/252577", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/128710/" ] }
252,622
I have two groups of curves that are clearly different. How I can calculate p-value? Green and blue curves is the average of about 10 curves from each group. The filled area is standard deviation. And the black curve at the bottom is the difference between average curves.
{ "source": [ "https://stats.stackexchange.com/questions/252622", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/119488/" ] }
252,630
Define $X$ and $Y$ to be two exponential random variables. The probability distribution functions are $f_{X}(x)=\lambda e^{-\lambda x}$ and $f_{Y}(y)=\delta e^{-\delta y}$. Note that $X$ and $Y$ are independent. Define $Z=X +\frac{a Y}{Y+b}$, where $a$ and $b$ are some positive constants. My goal is to derive the CDF of $Z$, i.e. I want to calculate $P(Z <z)$. My approach consists in deriving the CDF of $R=\frac{a Y}{Y+b}$, and then trying to find the CDF of the sum, which will be a function of the CDF of $R$. So I have started as follows: $P(\frac{a Y}{Y+b} < r)=P(Y(a-r) <rb)$. It is clear that here $a$ should be $>r$, because otherwise we get something negative.. Thus, following this approach forces us to assume that $a>r$, something that I don't know if I can assume or not. Question: is this assumption necessary to be able to derive the CDF? is there any other approach that I can adopt to derive the CDF without making this assumption?
{ "source": [ "https://stats.stackexchange.com/questions/252630", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98670/" ] }
253,632
This is something that has been bugging me for a while, and I couldn't find any satisfactory answers online, so here goes: After reviewing a set of lectures on convex optimization, Newton's method seems to be a far superior algorithm to gradient descent for finding globally optimal solutions, because Newton's method can provide a guarantee for its solution, it is affine invariant, and most of all it converges in far fewer steps. Why are second-order optimization algorithms, such as Newton's method, not as widely used as stochastic gradient descent in machine learning problems?
Gradient descent optimizes a function using knowledge of its first derivative (the gradient). Newton's method, originally a root-finding algorithm, optimizes a function using knowledge of its second derivative as well. That can be faster when the second derivative is known and easy to compute (the Newton-Raphson algorithm is used in logistic regression). However, the analytic expression for the second derivative is often complicated or intractable, requiring a lot of computation. Numerical methods for computing the second derivative also require a lot of computation -- if $N$ values are required to compute the first derivative, $N^2$ are required for the second derivative.
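A toy comparison (my own sketch, not from the answer above) on a smooth one-dimensional problem shows the trade-off: Newton's method needs far fewer iterations, but every iteration requires the second derivative. The function, learning rate, and tolerances are arbitrary choices.

```python
import numpy as np

f   = lambda x: np.exp(x) - 2 * x         # strictly convex, minimum at x = ln 2
df  = lambda x: np.exp(x) - 2             # first derivative
d2f = lambda x: np.exp(x)                 # second derivative

def gradient_descent(x, lr=0.1, tol=1e-10, max_iter=10_000):
    for i in range(max_iter):
        step = lr * df(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1
    return x, max_iter

def newton(x, tol=1e-10, max_iter=100):
    for i in range(max_iter):
        step = df(x) / d2f(x)             # requires the second derivative
        x -= step
        if abs(step) < tol:
            return x, i + 1
    return x, max_iter

print(gradient_descent(0.0))              # reaches ln(2) ~ 0.693 after on the order of a hundred iterations
print(newton(0.0))                        # reaches the same point in a handful of iterations
```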
{ "source": [ "https://stats.stackexchange.com/questions/253632", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/143640/" ] }
254,107
I am doing some research on optimization techniques for machine learning, but I am surprised to find large numbers of optimization algorithms are defined in terms of other optimization problems. I illustrate some examples in the following. For example https://arxiv.org/pdf/1511.05133v1.pdf Everything looks nice and good but then there is this $\text{argmin}_x$ in the $z^{k+1}$ update....so what is the algorithm that solves for the $\text{argmin}$? We don't know, and it doesn't say. So magically we are to solve another optimization problem which is find the minimizing vector so that the inner product is at minimum - how can this be done? Take another example: https://arxiv.org/pdf/1609.05713v1.pdf Everything looks nice and good until you hit that proximal operator in the middle of the algorithm, and what is the definition of that operator? Boom: Now pray tell, how do we solve this $\text{argmin}_x$ in the proximal operator? It doesn't say. In any case, that optimization problem looks hard (NP HARD) depending on what $f$ is. Can someone please enlighten me as to: Why are so many optimization algorithms defined in terms of other optimization problems? (Wouldn't this be some sort of chicken and egg problem: to solve problem 1, you need to solve problem 2, using method of solving problem 3, which relies on solving problem ....) How do you solve these optimization problems that are embedded in these algorithms? For example, $x^{k+1} = \text{argmin}_x \text{really complicated loss function}$, how to find the minimizer on the right hand side? Ultimately, I am puzzled as to how these algorithms can be numerically implemented. I recognize that adding and multiplying vectors are easy operations in python, but what about $\text{argmin}_x$, is there some function (script) that magically gives you the minimizer to a function? (Bounty: can anyone reference a paper for which the authors make clear the algorithm for the sub-problem embedded in the high level optimization algorithm?)
You are looking at top level algorithm flow charts. Some of the individual steps in the flow chart may merit their own detailed flow charts. However, in published papers having an emphasis on brevity, many details are often omitted. Details for standard inner optimization problems, which are considered to be "old hat" may not be provided at all. The general idea is that optimization algorithms may require the solution of a series of generally easier optimization problems. It's not uncommon to have 3 or even 4 levels of optimization algorithms within a top level algorithm, although some of them are internal to standard optimizers. Even deciding when to terminate an algorithm (at one of the hierarchial levels) may require solving a side optimization problem. For instance, a non-negatively constrained linear least squares problem might be solved to determine the Lagrange multipliers used to evaluate the KKT optimality score used to decide when to declare optimality. If the optimization problem is stochastic or dynamic, there may be yet additional hierarchial levels of optimization. Here's an example. Sequential Quadratic Programming (SQP). An initial optimization problem is treated by iteratively solving the Karush-Kuhn-Tucker optimality conditions, starting from an initial point with an objective which is a quadratic approximation of the Lagrangian of the problem, and a linearization of the constraints. The resulting Quadratic Program (QP) is solved. The QP which was solved either has trust region constraints, or a line search is conducted from the current iterate to the solution of the QP, which is itself an optimization problem, in order to find the next iterate. If a Quasi-Newton method is being used, an optimization problem has to be solved to determine the Quasi-Newton update to the Hessian of the Lagrangian - usually this is a closed form optimization using closed form formulas such as BFGS or SR1, but it could be a numerical optimization. Then the new QP is solved, etc. If the QP is ever infeasible, including to start, an optimization problem is solved to find a feasible point. Meanwhile, there may be one or two levels of internal optimization problems being called inside the QP solver. At the end of each iteration, a non-negative linear least squares problem might be solved to determine the optimality score. Etc. If this is a mixed integer problem, then this whole shebang might be performed at each branching node, as part of a higher level algorithm. Similarly for a global optimizer - a local optimization problem is used to produce an upper bound on the globally optimal solution, then a relaxation of some constraints is done to produce a lower bound optimization problem. Thousands or even millions of "easy" optimization problems from branch and bound might be solved in order to solve one mixed integer or global optimization problem. This should start to give you an idea. Edit : In response to the chicken and egg question which was added to the question after my answer: If there's a chicken and egg problem, then it's not a well-defined practical algorithm. In the examples I gave, there is no chicken and egg. Higher level algorithm steps invoke optimization solvers, which are either defined or already exist. SQP iteratively invokes a QP solver to solve sub-problems, but the QP solver solves an easier problem, QP, than the original problem. 
If there is an even higher level global optimization algorithm, it may invoke an SQP solver to solve local nonlinear optimization subproblems, and the SQP solver in turn calls a QP solver to solve QP subproblems. No chicken and egg. Note: Optimization opportunities are "everywhere". Optimization experts, such as those developing optimization algorithms, are more likely to see these optimization opportunities, and view them as such, than the average Joe or Jane. And being algorithmically inclined, quite naturally they see opportunities for building up optimization algorithms out of lower-level optimization algorithms. Formulation and solution of optimization problems serve as building blocks for other (higher level) optimization algorithms. Edit 2 : In response to the bounty request which was just added by the OP. The paper describing the SQP nonlinear optimizer SNOPT https://web.stanford.edu/group/SOL/reports/snopt.pdf specifically mentions the QP solver SQOPT, which is separately documented, as being used to solve QP subproblems in SNOPT.
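To make the "inner argmin" less mysterious: these subproblems are often either standard, already-solved problems or have closed-form solutions. For instance, the proximal operator of $\lambda\|\cdot\|_1$, which looks like yet another optimization problem, reduces to elementwise soft-thresholding. The sketch below is my own illustration (not taken from the papers discussed in the question) and simply checks the closed form against a brute-force grid search of the inner argmin.

```python
import numpy as np

def prox_l1(v, lam):
    # argmin_x  lam * ||x||_1 + 0.5 * ||x - v||^2  has the soft-thresholding solution
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# brute-force check in one dimension: grid-search the inner argmin directly
v, lam = 0.7, 0.25
grid = np.linspace(-3, 3, 600_001)
objective = lam * np.abs(grid) + 0.5 * (grid - v) ** 2
print(grid[np.argmin(objective)])    # ~0.45
print(prox_l1(np.array([v]), lam))   # [0.45], the closed form agrees
```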
{ "source": [ "https://stats.stackexchange.com/questions/254107", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/143973/" ] }
254,254
I just browsed through this wonderful book: Applied multivariate statistical analysis by Johnson and Wichern . The irony is, I am still not able to understand the motivation for using multivariate (regression) models instead of separate univariate (regression) models. I went through stats.statexchange posts 1 and 2 that explain (a) difference between multiple and multivariate regression and (b) interpretation of multivariate regression results, but I am not able to tweak out the use of multivariate statistical models from all the information I get online about them. My questions are: Why do we need multivariate regression? What is the advantage of considering outcomes simultaneously rather than individually, in order to draw inferences. When to use multivariate models and when to use multiple univariate models (for multiple outcomes). Take an example given in the UCLA site with three outcomes: locus of control, self-concept, and motivation. With respect to 1. and 2., can we compare the analysis when we do three univariate multiple regression versus one multivariate multiple regression? How to justify one over another? I haven't come across many scholarly papers that utilize multivariate statistical models. Is this because of the multivariate normality assumption, the complexity of model fitting/interpretation or any other specific reason?
Be sure to read the full example on the UCLA site that you linked. Regarding 1: Using a multivariate model helps you (formally, inferentially) compare coefficients across outcomes . In that linked example, they use the multivariate model to test whether the write coefficient is significantly different for the locus_of_control outcome vs for the self_concept outcome. I'm no psychologist, but presumably it's interesting to ask whether your writing ability affects/predicts two different psych variables in the same way. (Or, if we don't believe the null, it's still interesting to ask whether you have collected enough data to demonstrate convincingly that the effects really do differ.) If you ran separate univariate analyses, it would be harder to compare the write coefficient across the two models. Both estimates would come from the same dataset, so they would be correlated. The multivariate model accounts for this correlation. Also, regarding 4: There are some very commonly-used multivariate models, such as Repeated Measures ANOVA . With an appropriate study design, imagine that you give each of several drugs to every patient, and measure each patient's health after every drug. Or imagine you measure the same outcome over time, as with longitudinal data, say children's heights over time. Then you have multiple outcomes for each unit (even when they're just repeats of "the same" type of measurement). You'll probably want to do at least some simple contrasts: comparing the effects of drug A vs drug B, or the average effects of drugs A and B vs placebo. For this, Repeated Measures ANOVA is an appropriate multivariate statistical model/analysis.
{ "source": [ "https://stats.stackexchange.com/questions/254254", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41289/" ] }
254,282
I was asked this question in an interview. Lets say we have a correlation matrix of the form \begin{bmatrix}1&0.6&0.8\\0.6&1&\gamma\\0.8&\gamma&1\end{bmatrix} I was asked to find the value of gamma, given this correlation matrix. I thought I could do something with the eigenvalues, since they should be all greater than or equal to 0.(Matrix should be positive semidefinite) - but I don't think this approach will yield the answer. I am missing a trick. Could you please provide a hint to solve for the same?
We already know $\gamma$ is bounded between $[-1,1]$ The correlation matrix should be positive semidefinite and hence its principal minors should be nonnegative Thus, \begin{align*} 1(1-\gamma^2)-0.6(0.6-0.8\gamma)+0.8(0.6\gamma-0.8) &\geq 0\\ -\gamma^2+0.96\gamma \geq 0\\ \implies \gamma(\gamma-0.96) \leq 0 \text{ and } -1 \leq \gamma \leq 1 \\ \implies 0 \leq \gamma \leq 0.96 \end{align*}
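The same answer can be checked numerically by scanning $\gamma$ and testing positive semidefiniteness via the smallest eigenvalue; a rough sketch (my own addition, with an arbitrary grid and tolerance) follows.

```python
import numpy as np

def min_eigenvalue(gamma):
    R = np.array([[1.0, 0.6, 0.8],
                  [0.6, 1.0, gamma],
                  [0.8, gamma, 1.0]])
    return np.linalg.eigvalsh(R).min()   # smallest eigenvalue of the symmetric matrix

gammas = np.linspace(-1, 1, 2001)
feasible = [g for g in gammas if min_eigenvalue(g) >= -1e-12]
print(min(feasible), max(feasible))      # approximately 0.0 and 0.96
```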
{ "source": [ "https://stats.stackexchange.com/questions/254282", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/130350/" ] }
254,357
Consider the elementary identity of variance: $$ \begin{eqnarray} Var(X) &=& E[(X - E[X])^2]\\ &=& ...\\ &=& E[X^2] - (E[X])^2 \end{eqnarray} $$ It is a simple algebraic manipulation of the definition of a central moment into non-central moments. It allows convenient manipulation of $Var(X)$ in other contexts. It also allows calculation of variance via a single pass over data rather than two passes, first to calculate the mean, and then to calculate the variance. But what does it mean ? To me there's no immediate geometric intuition that relates spread about the mean to spread about 0. As $X$ is a set on a single dimension, how do you view the spread around a mean as the difference between spread around the origin and the square of the mean? Are there any good linear algebra interpretations or physical interpretations or other that would give insight into this identity?
Expanding on @whuber's point in the comments, if $Y$ and $Z$ are orthogonal, you have the Pythagorean Theorem : $$ \|Y\|^2 + \|Z\|^2 = \|Y + Z\|^2 $$ Observe that $\langle Y, Z \rangle \equiv \mathrm{E}[YZ]$ is a valid inner product and that $\|Y\| = \sqrt{\mathrm{E}[Y^2]}$ is the norm induced by that inner product . Let $X$ be some random variable. Let $Y = \mathrm{E}[X]$, Let $Z = X - \mathrm{E}[X]$. If $Y$ and $Z$ are orthogonal: \begin{align*} & \|Y\|^2 + \|Z\|^2 = \|Y + Z\|^2 \\ \Leftrightarrow \quad&\mathrm{E}[\mathrm{E}[X]^2] + \mathrm{E}[(X - \mathrm{E}[X])^2] = \mathrm{E}[X^2] \\ \Leftrightarrow \quad & \mathrm{E[X]}^2 + \mathrm{Var}[X]= \mathrm{E}[X^2] \end{align*} And it's easy to show that $Y = \mathrm{E}[X]$ and $Z = X - \mathrm{E}[X]$ are orthogonal under this inner product: $$\langle Y, Z \rangle = \mathrm{E}[\mathrm{E}[X]\left(X - \mathrm{E}[X] \right)] = \mathrm{E}[X]^2 - \mathrm{E}[X]^2 = 0$$ One of the legs of the triangle is $X - \mathrm{E}[X]$, the other leg is $\mathrm{E}[X]$, and the hypotenuse is $X$. And the Pythagorean theorem can be applied because a demeaned random variable is orthogonal to its mean. Technical remark: $Y$ in this example really should be the vector $Y = \mathrm{E}[X] \mathbf{1}$, that is, the scalar $\mathrm{E}[X]$ times the constant vector $\mathbf{1}$ (e.g. $\mathbf{1} = [1, 1, 1, \ldots, 1]'$ in the discrete, finite outcome case). $Y$ is the vector projection of $X$ onto the constant vector $\mathbf{1}$. Simple Example Consider the case where $X$ is a Bernoulli random variable where $p = .2$. We have: $$ X = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad P = \begin{bmatrix} .2 \\ .8 \end{bmatrix} \quad \mathrm{E}[X] = \sum_i P_iX_i = .2 $$ $$ Y = \mathrm{E}[X]\mathbf{1} = \begin{bmatrix} .2 \\ .2 \end{bmatrix} \quad Z = X - \mathrm{E}[X] = \begin{bmatrix} .8 \\ -.2 \end{bmatrix} $$ And the picture is: The squared magnitude of the red vector is the variance of $X$, the squared magnitude of the blue vector is $\mathrm{E}[X]^2$, and the squared magnitude of the yellow vector is $\mathrm{E}[X^2]$. REMEMBER though that these magnitudes, the orthogonality etc... aren't with respect to the usual dot product $\sum_i Y_iZ_i$ but the inner product $\sum_i P_iY_iZ_i$. The magnitude of the yellow vector isn't 1, it is .2. The red vector $Y = \mathrm{E}[X]$ and the blue vector $Z = X - \mathrm{E}[X]$ are perpendicular under the inner product $\sum_i P_i Y_i Z_i$ but they aren't perpendicular in the intro, high school geometry sense. Remember we're not using the usual dot product $\sum_i Y_i Z_i$ as the inner product!
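The Bernoulli example can be verified numerically with the probability-weighted inner product; this short check (my own addition) confirms the orthogonality of the two legs and the Pythagorean decomposition of $\mathrm{E}[X^2]$.

```python
import numpy as np

P = np.array([0.2, 0.8])           # probabilities of the two outcomes
X = np.array([1.0, 0.0])           # the random variable
Y = np.full(2, np.sum(P * X))      # E[X] times the constant vector
Z = X - Y                          # demeaned version of X

inner = lambda a, b: np.sum(P * a * b)   # <a, b> = E[ab] under the P-weighted inner product

print(inner(Y, Z))                 # ~0 (up to floating point): the two legs are orthogonal
print(inner(Y, Y))                 # E[X]^2 = 0.04
print(inner(Z, Z))                 # Var(X) = 0.16
print(inner(X, X))                 # E[X^2] = 0.2 = 0.04 + 0.16
```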
{ "source": [ "https://stats.stackexchange.com/questions/254357", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3186/" ] }
255,105
I have a four layer CNN to predict response to cancer using MRI data. I use ReLU activations to introduce nonlinearities. The train accuracy and loss monotonically increase and decrease respectively. But, my test accuracy starts to fluctuate wildly. I have tried changing the learning rate, reduce the number of layers. But, it doesn't stop the fluctuations. I even read this answer and tried following the directions in that answer, but not luck again. Could anyone help me figure out where I am going wrong?
If I understand the definition of accuracy correctly, accuracy (% of data points classified correctly) is less cumulative than, let's say, MSE (mean squared error). That's why you see that your loss is rapidly increasing, while accuracy is fluctuating. Intuitively, this basically means that some portion of examples is classified randomly, which produces fluctuations, as the number of correct random guesses always fluctuates (imagine accuracy when a coin should always return "heads"). Basically, sensitivity to noise (when classification produces a random result) is a common definition of overfitting (see wikipedia): "In statistics and machine learning, one of the most common tasks is to fit a "model" to a set of training data, so as to be able to make reliable predictions on general untrained data. In overfitting, a statistical model describes random error or noise instead of the underlying relationship." Another piece of evidence of overfitting is that your loss is increasing. Loss is measured more precisely; it's more sensitive to a noisy prediction if it's not squashed by sigmoids/thresholds (which seems to be your case for the loss itself). Intuitively, you can imagine a situation when the network is too sure about an output (when it's wrong), so it gives a value far away from the threshold in case of a random misclassification. Regarding your case, your model is not properly regularized. Possible reasons:
- not enough data points, too much capacity
- ordering
- no/wrong feature scaling/normalization
- learning rate: $\alpha$ is too large, so SGD jumps too far and misses the area near local minima. This would be an extreme case of "under-fitting" (insensitivity to the data itself), but it might generate a (kind of) "low-frequency" noise on the output by scrambling data from the input - contrary to the overfitting intuition, it would be like always guessing heads when predicting a coin. As @JanKukacka pointed out, arriving at an area "too close to" a minimum might cause overfitting, so if $\alpha$ is too small it would get sensitive to "high-frequency" noise in your data. $\alpha$ should be somewhere in between.
Possible solutions:
- obtain more data points (or artificially expand the set of existing ones)
- play with hyper-parameters (increase/decrease capacity or the regularization term, for instance)
- regularization: try dropout, early-stopping, and so on
{ "source": [ "https://stats.stackexchange.com/questions/255105", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/103055/" ] }
255,230
I am a bit confused about ensemble learning. In a nutshell, it runs k models and gets the average of these k models. How can it be guaranteed that the average of the k models would be better than any of the models by themselves? I do understand that the bias is "spread out" or "averaged". However, what if there are two models in the ensemble (i.e. k = 2) and one of them is worse than the other - wouldn't the ensemble be worse than the better model?
It's not guaranteed. As you say, the ensemble could be worse than the individual models. For example, taking the average of the true model and a bad model would give a fairly bad model. The average of $k$ models is only going to be an improvement if the models are (somewhat) independent of one another. For example, in bagging, each model is built from a random subset of the data, so some independence is built in. Or models could be built using different combinations of features, and then combined by averaging. Also, model averaging only works well when the individual models have high variance. That's why a random forest is built using very large trees. On the other hand, averaging a bunch of linear regression models still gives you a linear model, which isn't likely to be better than the models you started with (try it!) Other ensemble methods, such as boosting and blending, work by taking the outputs from individual models, together with the training data, as inputs to a bigger model. In this case, it's not surprising that they often work better than the individual models, since they are in fact more complicated, and they still use the training data.
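A tiny simulation (my own, with arbitrary numbers) illustrates both points: averaging helps a lot when the models' errors are independent, much less when they are strongly correlated, and averaging a good model with a bad one can be worse than the good model alone.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 0.0
reps, k = 100_000, 10

# k models whose errors are independent vs. almost perfectly correlated
indep_preds = truth + rng.normal(size=(reps, k))
shared = rng.normal(size=(reps, 1))
corr_preds = truth + 0.95 * shared + 0.05 * rng.normal(size=(reps, k))

print(np.mean(indep_preds[:, 0] ** 2))          # MSE of a single model: ~1
print(np.mean(indep_preds.mean(axis=1) ** 2))   # ensemble of independent models: ~1/k
print(np.mean(corr_preds.mean(axis=1) ** 2))    # correlated ensemble: barely better than one model

# averaging a good model with a bad (biased) one
good = truth + rng.normal(scale=0.1, size=reps)
bad = truth + 2.0 + rng.normal(scale=0.1, size=reps)
print(np.mean(good ** 2), np.mean(((good + bad) / 2) ** 2))  # the average is much worse than the good model
```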
{ "source": [ "https://stats.stackexchange.com/questions/255230", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44715/" ] }
255,456
I'm reviewing a paper that has performed >15 separate 2x2 Chi Square tests. I've suggested that they need to correct for multiple comparisons, but they have replied saying that all the comparisons were planned, and therefore this is not necessary. I feel like this must not be correct but can't find any resources that explicitly state whether this is the case. Is anyone able to help with this? Update: Thanks for all of your very helpful responses. In response to @gung's request for some more information on the study and the analyses, they are comparing count data for two types of participants (students, non-students) in two conditions, across three time periods. The multiple 2x2 Chi Square tests are comparing each time period, in each condition, for each type of participant (if that makes sense; e.g. students, condition 1, time period 1 vs time period 2), so all analyses are testing the same hypothesis.
This is IMHO a complex issue and I would like to make three comments about this situation. First and generally, I would focus more on whether you face a confirmatory study with a set of well-shaped hypotheses defined in an argumentative context, or an exploratory study in which many likely indicators are observed, than on whether they are planned or not (because you can simply plan to make all possible comparisons). Second, I would also focus on how the resulting p-values are then discussed. Are they individually used to serve a set of definitive conclusions, or are they jointly discussed as evidence and lack of evidence? Finally, I would discuss the possibility that the >15 hypotheses resulting from the >15 separate chi-squared tests are in fact the expression of just a few hypotheses (maybe a single one) that may be summarized. More generally, regardless of whether hypotheses are prespecified or not, correcting for multiple comparisons or not is a matter of what you include in the type I error. By not correcting for MC, you only keep a per-comparison type I error rate control. So in case of numerous comparisons, you have a high family-wise type I error rate and thus are more false discovery prone.
{ "source": [ "https://stats.stackexchange.com/questions/255456", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/116613/" ] }
255,765
My input variables have different dimensions. Some variables are decimal while some are hundreds. Is it essential to center (subtract mean) or scale (divide by standard deviation) these input variables in order to make the data dimensionless when using random forest?
No. Random Forests are based on tree partitioning algorithms. As such, there's no analogue to a coefficient one obtains in general regression strategies, which would depend on the units of the independent variables. Instead, one obtains a collection of partition rules, basically a decision given a threshold, and this shouldn't change with scaling. In other words, the trees only see ranks in the features. Basically, any monotonic transformation of your data shouldn't change the forest at all (in the most common implementations). Also, decision trees are usually robust to numerical instabilities that sometimes impair convergence and precision in other algorithms.
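This invariance is easy to check empirically in a common implementation. The following sketch is my own and assumes scikit-learn is available; it fits the same forest on raw and log-transformed features and compares predictions. With the same random seed the two forests typically induce exactly the same partitions, so the predictions coincide.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = np.abs(X)                           # keep features positive so the log is defined
X_log = np.log(X)                       # a strictly monotonic transformation of every feature

rf_raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_log = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_log, y)

pred_raw = rf_raw.predict_proba(X)
pred_log = rf_log.predict_proba(X_log)
print(np.max(np.abs(pred_raw - pred_log)))   # 0 (or numerically negligible): same forest, different units
```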
{ "source": [ "https://stats.stackexchange.com/questions/255765", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/128406/" ] }
256,563
I am generating 8 random bits (either a 0 or a 1) and concatenating them together to form an 8-bit number. A simple Python simulation yields a uniform distribution on the discrete set [0, 255]. I am trying to justify why this makes sense in my head. If I compare this to flipping 8 coins, wouldn't the expected value be somewhere around 4 heads/4 tails? So to me, it makes sense that my results should reflect a spike in the middle of the range. In other words, why does a sequence of 8 zeroes or 8 ones seem to be equally as likely as a sequence of 4 and 4, or 5 and 3, etc.? What am I missing here?
TL;DR: The sharp contrast between the bits and coins is that in the case of the coins, you're ignoring the order of the outcomes. HHHHTTTT is treated as the same as TTTTHHHH (both have 4 heads and 4 tails). But in bits, you care about the order (because you have to give "weights" to the bit positions in order to get 256 outcomes), so 11110000 is different from 00001111. Longer explanation: These concepts can be more precisely unified if we are a bit more formal in framing the problem. Consider an experiment to be a sequence of eight trials with dichotomous outcomes and probability of a "success" 0.5, and a "failure" 0.5, and the trials are independent. In general, I'll call this $k$ successes, $n$ total trials and $n-k$ failures and the probability of success is $p$. In the coin example, the outcome "$k$ heads, $n-k$ tails" ignores the ordering of the trials (4 heads is 4 heads no matter the order of occurrence), and this gives rise to your observation that 4 heads are more likely than 0 or 8 heads. Four heads are more common because there are many ways to make four heads (TTHHTTHH, or HHTTHHTT, etc.) than there are some other number (8 heads only has one sequence). The binomial theorem gives the number of ways to make these different configurations. By contrast, the order is important to bits because each place has an associated "weight" or "place value." One property of the binomial coefficient is that $2^n=\sum_{k=0}^n\binom{n}{k}$, that is if we count up all the different ordered sequences, we get $2^8=256$. This directly connects the idea of how many different ways there are to make $k$ heads in $n$ binomial trials to the number of different byte sequences. Additionally, we can show that the 256 outcomes are equally likely by the property of independence. Previous trials have no influence on the next trial, so the probability of a particular ordering is, in general, $p^k(1-p)^{n-k}$ (because joint probability of independent events is the product of their probabilities). Because the trials are fair, $P(\text{success})=P(\text{fail})=p=0.5$, this expression reduces to $P(\text{any ordering})=0.5^8=\frac{1}{256}$. Because all orderings have the same probability, we have a uniform distribution over these outcomes (which by binary encoding can be represented as integers in $[0,255]$). Finally, we can take this full circle back to the coin toss and binomial distribution. We know the occurrence of 0 heads doesn't have the same probability as 4 heads, and that this is because there are different ways to order the occurrences of 4 heads, and that the number of such orderings are given by the binomial theorem. So $P(\text{4 heads})$ must be weighted somehow, specifically it must be weighted by the binomial coefficient. So this gives us the PMF of the binomial distribution, $P(k \text{ successes})=\binom{n}{k}p^k(1-p)^{n-k}$. It might be surprising that this expression is a PMF, specifically because it's not immediately obvious that it sums to 1. To verify, we have to check that $\sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k}=1$, however this is just a problem of binomial coefficients: $1=1^n=(p+1-p)^n=\sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k}$.
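Both views can be simulated side by side (my own sketch, with an arbitrary number of draws): the 256 byte values come out uniform, while the number of ones per byte shows the Binomial(8, 0.5) spike in the middle.

```python
import numpy as np

rng = np.random.default_rng(4)
bits = rng.integers(0, 2, size=(1_000_000, 8))       # a million draws of 8 fair bits

weights = 2 ** np.arange(7, -1, -1)                   # place values 128, 64, ..., 1
byte_values = bits @ weights                          # ordered bits -> integer in [0, 255]
num_heads = bits.sum(axis=1)                          # order ignored -> count of ones

print(np.bincount(byte_values, minlength=256)[:5])    # roughly 1e6/256 ~ 3906 in every bin (uniform)
print(np.bincount(num_heads, minlength=9) / 1e6)      # peaked at 4, matching Binomial(8, 0.5)
```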
{ "source": [ "https://stats.stackexchange.com/questions/256563", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/145494/" ] }
257,321
Can somebody explain what is a global max pooling layer and why and when do we use it for training a neural network. Do they have any advantage over ordinary max pooling layer?
Global max pooling = ordinary max pooling layer with pool size equal to the size of the input (minus filter size + 1, to be precise). You can see that MaxPooling1D takes a pool_length argument, whereas GlobalMaxPooling1D does not. For example, if the input of the max pooling layer is $0,1,2,2,5,1,2$, global max pooling outputs $5$, whereas ordinary max pooling layer with pool size equal to 3 outputs $2,2,5,5,5$ (assuming stride=1). This can be seen in the code:

```python
class GlobalMaxPooling1D(_GlobalPooling1D):
    """Global max pooling operation for temporal data.

    # Input shape
        3D tensor with shape: `(samples, steps, features)`.

    # Output shape
        2D tensor with shape: `(samples, features)`.
    """

    def call(self, x, mask=None):
        return K.max(x, axis=1)
```

In some domains, such as natural language processing, it is common to use global max pooling. In some other domains, such as computer vision, it is common to use a max pooling that isn't global.
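For the example sequence above, the difference is easy to reproduce with plain NumPy; this is my own sketch of the pooling arithmetic only, not of any particular framework's API.

```python
import numpy as np

x = np.array([0, 1, 2, 2, 5, 1, 2])

# ordinary max pooling, pool size 3, stride 1: one max per sliding window
pool_size = 3
windowed = np.array([x[i:i + pool_size].max() for i in range(len(x) - pool_size + 1)])
print(windowed)        # [2 2 5 5 5]

# global max pooling: a single number for the whole sequence (per feature channel)
print(x.max())         # 5
```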
{ "source": [ "https://stats.stackexchange.com/questions/257321", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/15973/" ] }
258,379
I understand $P(A\cap B)/P(B) = P(A|B)$. The conditional is the intersection of A and B divided by the whole area of B. But why is $P(A\cap B|C)/P(B|C) = P(A|B \cap C)$? Can you give some intuition? Shouldn't it be: $P(A\cap B \cap C)/P(B,C) = P(A|B \cap C)$?
Any probability result that is true for unconditional probability remains true if everything is conditioned on some event. You know that by definition, $$P(A\mid B) = \frac{P(A\cap B)}{P(B)}\tag{1}$$ and so if we condition everything on $C$ having occurred, we get that $$P(A\mid (B \cap C)) = \frac{P((A\cap B)\mid C)}{P(B\mid C)}\tag{2}$$ which is the result that puzzles and surprises you; you think it should be $$P(A\mid (B \cap C)) = \frac{P(A\cap B \cap C)}{P(B\cap C)}.$$ So, let's start by setting $D = B\cap C$ write $P(A\mid (B \cap C)) = P(A\mid D)$ as in $(1)$ to get \begin{align} P(A\mid (B \cap C)) &= P(A\mid D)\\ &= \frac{P(A\cap D)}{P(D)}\\ &= \frac{P(A\cap (B \cap C))}{P(B\cap C)}\\ &= \frac{P(A\cap B \cap C)}{P(B\cap C)}\tag{3}\end{align} which is what you think the result should be. But observe that if you multiply and divide the right side of $(3)$ by $P(C))$ , you can get \begin{align} P(A\mid (B \cap C)) &= \frac{P(A\cap B \cap C)}{P(B\cap C)}\times \frac{P(C)}{P(C)}\\ &= \dfrac{\dfrac{P(A\cap B \cap C)}{P(C)}}{\dfrac{P(B\cap C)}{P(C)}}\\ &= \dfrac{P(A\cap B \mid C)}{P(B\mid C)} \end{align} which is just $(2)$ . In short, the intuition about $(2)$ is that it is just $(3)$ (which you agree with) re-written in terms of conditional probabilities conditioned on the same event $C$ .
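The identity is also easy to sanity-check numerically on an arbitrary joint distribution of three binary events; the sketch below is my own addition.

```python
import numpy as np

rng = np.random.default_rng(5)
# random joint pmf over the 8 outcomes of three binary events A, B, C
p = rng.random((2, 2, 2))
p /= p.sum()

# index 1 means the event occurs, index 0 means it does not
P_ABC = p[1, 1, 1]            # P(A ∩ B ∩ C)
P_BC = p[:, 1, 1].sum()       # P(B ∩ C)
P_C = p[:, :, 1].sum()        # P(C)

lhs = P_ABC / P_BC                    # P(A | B ∩ C) from the definition
rhs = (P_ABC / P_C) / (P_BC / P_C)    # P(A ∩ B | C) / P(B | C)
print(lhs, rhs)                       # identical
```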
{ "source": [ "https://stats.stackexchange.com/questions/258379", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/37793/" ] }
258,461
In light of this question : Proof that the coefficients in an OLS model follow a t-distribution with (n-k) degrees of freedom I would love to understand why $$ F = \frac{(\text{TSS}-\text{RSS})/(p-1)}{\text{RSS}/(n-p)},$$ where $p$ is the number of model parameters and $n$ the number of observations and $TSS$ the total variance, $RSS$ the residual variance, follows an $F_{p-1,n-p}$ distribution. I must admit I have not even attempted to prove it as I wouldn't know where to start.
Let us show the result for the general case of which your formula for the test statistic is a special case. In general, we need to verify that the statistic can be, according to the characterization of the $F$ distribution , be written as the ratio of independent $\chi^2$ r.v.s divided by their degrees of freedom. Let $H_{0}:R^\prime\beta=r$ with $R$ and $r$ known, nonrandom and $R:k\times q$ has full column rank $q$ . This represents $q$ linear restrictions for (unlike in OPs notation) $k$ regressors including the constant term. So, in @user1627466's example, $p-1$ corresponds to the $q=k-1$ restrictions of setting all slope coefficients to zero. In view of $Var\bigl(\hat{\beta}_{\text{ols}}\bigr)=\sigma^2(X'X)^{-1}$ , we have \begin{eqnarray*} R^\prime(\hat{\beta}_{\text{ols}}-\beta)\sim N\left(0,\sigma^{2}R^\prime(X^\prime X)^{-1} R\right), \end{eqnarray*} so that (with $B^{-1/2}=\{R^\prime(X^\prime X)^{-1} R\}^{-1/2}$ being a "matrix square root" of $B^{-1}=\{R^\prime(X^\prime X)^{-1} R\}^{-1}$ , via, e.g., a Cholesky decomposition) \begin{eqnarray*} n:=\frac{B^{-1/2}}{\sigma}R^\prime(\hat{\beta}_{\text{ols}}-\beta)\sim N(0,I_{q}), \end{eqnarray*} as \begin{eqnarray*} Var(n)&=&\frac{B^{-1/2}}{\sigma}R^\prime Var\bigl(\hat{\beta}_{\text{ols}}\bigr)R\frac{B^{-1/2}}{\sigma}\\ &=&\frac{B^{-1/2}}{\sigma}\sigma^2B\frac{B^{-1/2}}{\sigma}=I \end{eqnarray*} where the second line uses the variance of the OLSE. This, as shown in the answer that you link to (see also here ), is independent of $$d:=(n-k)\frac{\hat{\sigma}^{2}}{\sigma^{2}}\sim\chi^{2}_{n-k},$$ where $\hat{\sigma}^{2}=y'M_Xy/(n-k)$ is the usual unbiased error variance estimate, with $M_{X}=I-X(X'X)^{-1}X'$ is the "residual maker matrix" from regressing on $X$ . So, as $n'n$ is a quadratic form in normals, \begin{eqnarray*} \frac{\overbrace{n^\prime n}^{\sim\chi^{2}_{q}}/q}{d/(n-k)}=\frac{(\hat{\beta}_{\text{ols}}-\beta)^\prime R\left\{R^\prime(X^\prime X)^{-1}R\right\}^{-1}R^\prime(\hat{\beta}_{\text{ols}}-\beta)/q}{\hat{\sigma}^{2}}\sim F_{q,n-k}. \end{eqnarray*} In particular, under $H_{0}:R^\prime\beta=r$ , this reduces to the statistic \begin{eqnarray} F=\frac{(R^\prime\hat{\beta}_{\text{ols}}-r)^\prime\left\{R^\prime(X^\prime X)^{-1}R\right\}^{-1}(R^\prime\hat{\beta}_{\text{ols}}-r)/q}{\hat{\sigma}^{2}}\sim F_{q,n-k}. \end{eqnarray} For illustration, consider the special case $R^\prime=I$ , $r=0$ , $q=2$ , $\hat{\sigma}^{2}=1$ and $X^\prime X=I$ . Then, \begin{eqnarray} F=\hat{\beta}_{\text{ols}}^\prime\hat{\beta}_{\text{ols}}/2=\frac{\hat{\beta}_{\text{ols},1}^2+\hat{\beta}_{\text{ols},2}^2}{2}, \end{eqnarray} the squared Euclidean distance of the OLS estimate from the origin standardized by the number of elements - highlighting that, since $\hat{\beta}_{\text{ols},2}^2$ are squared standard normals and hence $\chi^2_1$ , the $F$ distribution may be seen as an "average $\chi^2$ distribution. In case you prefer a little simulation (which is of course not a proof!), in which the null is tested that none of the $k$ regressors matter - which they indeed do not, so that we simulate the null distribution. We see very good agreement between the theoretical density and the histogram of the Monte Carlo test statistics. 
library(lmtest) n <- 100 reps <- 20000 sloperegs <- 5 # number of slope regressors, q or k-1 (minus the constant) in the above notation critical.value <- qf(p = .95, df1 = sloperegs, df2 = n-sloperegs-1) # for the null that none of the slope regrssors matter Fstat <- rep(NA,reps) for (i in 1:reps){ y <- rnorm(n) X <- matrix(rnorm(n*sloperegs), ncol=sloperegs) reg <- lm(y~X) Fstat[i] <- waldtest(reg, test="F")$F[2] } mean(Fstat>critical.value) # very close to 0.05 hist(Fstat, breaks = 60, col="lightblue", freq = F, xlim=c(0,4)) x <- seq(0,6,by=.1) lines(x, df(x, df1 = sloperegs, df2 = n-sloperegs-1), lwd=2, col="purple") To see that the versions of the test statistics in the question and the answer are indeed equivalent, note that the null corresponds to the restrictions $R'=[0\;\;I]$ and $r=0$ . Let $X=[X_1\;\;X_2]$ be partitioned according to which coefficients are restricted to be zero under the null (in your case, all but the constant, but the derivation to follow is general). Also, let $\hat{\beta}_{\text{ols}}=(\hat{\beta}_{\text{ols},1}^\prime,\hat{\beta}_{\text{ols},2}')'$ be the suitably partitioned OLS estimate. Then, $$ R'\hat{\beta}_{\text{ols}}=\hat{\beta}_{\text{ols},2} $$ and $$ R^\prime(X^\prime X)^{-1}R\equiv\tilde D, $$ the lower right block of \begin{align*} (X^TX)^{-1}&=\left( \begin{array} {c,c} X_1'X_1&X_1'X_2 \\ X_2'X_1&X_2'X_2\end{array} \right)^{-1}\\&\equiv\left( \begin{array} {c,c} \tilde A&\tilde B \\ \tilde C&\tilde D\end{array} \right) \end{align*} Now, use results for partitioned inverses to obtain $$ \tilde D=(X_2'X_2-X_2'X_1(X_1'X_1)^{-1}X_1'X_2)^{-1}=(X_2'M_{X_1}X_2)^{-1} $$ where $M_{X_1}=I-X_1(X_1'X_1)^{-1}X_1'$ . Thus, the numerator of the $F$ statistic becomes (without the division by $q$ ) $$ F_{num}=\hat{\beta}_{\text{ols},2}'(X_2'M_{X_1}X_2)\hat{\beta}_{\text{ols},2} $$ Next, recall that by the Frisch-Waugh-Lovell theorem we may write $$ \hat{\beta}_{\text{ols},2}=(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y $$ so that \begin{align*} F_{num}&=y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}(X_2'M_{X_1}X_2)(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y\\ &=y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y \end{align*} It remains to show that this numerator is identical to $\text{RSSR}-\text{USSR}$ , the difference in restricted and unrestricted sum of squared residuals. Here, $$\text{RSSR}=y'M_{X_1}y$$ is the residual sum of squares from regressing $y$ on $X_1$ , i.e., with $H_0$ imposed. In your special case, this is just $TSS=\sum_i(y_i-\bar y)^2$ , the residuals of a regression on a constant. Again using FWL (which also shows that the residuals of the two approaches are identical), we can write $\text{USSR}$ (SSR in your notation) as the SSR of the regression $$ M_{X_1}y\quad\text{on}\quad M_{X_1}X_2 $$ That is, \begin{eqnarray*} \text{USSR}&=&y'M_{X_1}'M_{M_{X_1}X_2}M_{X_1}y\\ &=&y'M_{X_1}'(I-P_{M_{X_1}X_2})M_{X_1}y\\ &=&y'M_{X_1}y-y'M_{X_1}M_{X_1}X_2((M_{X_1}X_2)'M_{X_1}X_2)^{-1}(M_{X_1}X_2)'M_{X_1}y\\ &=&y'M_{X_1}y-y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y \end{eqnarray*} Thus, \begin{eqnarray*} \text{RSSR}-\text{USSR}&=&y'M_{X_1}y-(y'M_{X_1}y-y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y)\\ &=&y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y \end{eqnarray*}
{ "source": [ "https://stats.stackexchange.com/questions/258461", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22498/" ] }
258,704
If $X$ and $Y$ are two random variables that can only take two possible states, how can I show that $Cov(X,Y) = 0$ implies independence? This kind of goes against what I learned back in the day that $Cov(X,Y) = 0$ does not imply independence... The hint says to start with $1$ and $0$ as the possible states and generalize from there. And I can do that and show $E(XY) = E(X)E(Y)$, but this doesn't imply independence??? Kind of confused how to do this mathematically I guess.
For binary variables their expected value equals the probability that they are equal to one. Therefore, $$ E(XY) = P(XY = 1) = P(X=1 \cap Y=1) \\ E(X) = P(X=1) \\ E(Y) = P(Y=1) \\ $$ If the two have zero covariance this means $E(XY) = E(X)E(Y)$, which means $$ P(X=1 \cap Y=1) = P(X=1) \cdot P(Y=1) $$ It is trivial to see all other joint probabilities multiply as well, using the basic rules about independent events (i.e. if $A$ and $B$ are independent then their complements are independent, etc.), which means the joint mass function factorizes, which is the definition of two random variables being independent.
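As a quick numerical sanity check (my own sketch, not part of the original answer): pick any marginal probabilities for two binary variables, impose zero covariance, and verify that every cell of the joint distribution factors into the product of its marginals.

import numpy as np
from itertools import product

# Marginals P(X=1) and P(Y=1); zero covariance forces P(X=1, Y=1) = P(X=1)*P(Y=1),
# and the remaining cells are then determined by the marginals.
px, py = 0.3, 0.7
p11 = px * py
joint = np.array([[1 - px - py + p11, py - p11],   # rows: X=0, X=1
                  [px - p11,          p11]])       # cols: Y=0, Y=1

cov = p11 - px * py
print("covariance:", cov)          # 0 by construction

# Every cell equals the product of its marginals, i.e. the pmf factorizes
marg_x = joint.sum(axis=1)
marg_y = joint.sum(axis=0)
for i, j in product([0, 1], repeat=2):
    print(i, j, np.isclose(joint[i, j], marg_x[i] * marg_y[j]))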
{ "source": [ "https://stats.stackexchange.com/questions/258704", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/52007/" ] }
259,208
I have data on movement behaviours (time spent sleeping, sedentary, and doing physical activity) that sums to approximately 24 (as in hours per day). I want to create a variable that captures the relative time spent in each of these behaviours - I've been told that an isometric log-ratio transformation would accomplish this. It looks like I should use the ilr function in R, but I can't find any actual examples with code. Where do I start? The variables I have are time spent sleeping, average sedentary time, average light physical activity, average moderate physical activity, and average vigorous physical activity. Sleep was self-reported, while the others are averages from valid days of accelerometer data. So for these variables, cases do not sum to exactly 24. My guess: I'm working in SAS, but it looks like R will be much easier to use for this part. So first import data with only the variables of interest. Then use the acomp() function. Then I can't figure out the syntax for the ilr() function. Any help would be much appreciated.
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in a mixture or proportions of total time spent in various activities. The sum-to-unity invariant implies that although there may be $k\ge 2$ components to each observation, there are only $k-1$ functionally independent values. (Geometrically, the observations lie on a $k-1$ -dimensional simplex in $k$ -dimensional Euclidean space $\mathbb{R}^k$ . This simplicial nature is manifest in the triangular shapes of the scatterplots of simulated data shown below.) Typically, the distributions of the components become "nicer" when log transformed. This transformation can be scaled by dividing all values in an observation by their geometric mean before taking the logs. (Equivalently, the logs of the data in any observation are centered by subtracting their mean.) This is known as the "Centered Log-Ratio" transformation, or CLR. The resulting values still lie within a hyperplane in $\mathbb{R}^k$ , because the scaling causes the sum of the logs to be zero. The ILR consists of choosing any orthonormal basis for this hyperplane: the $k-1$ coordinates of each transformed observation become its new data. Equivalently, the hyperplane is rotated (or reflected) to coincide with the plane with vanishing $k^\text{th}$ coordinate and one uses the first $k-1$ coordinates. (Because rotations and reflections preserve distance they are isometries , whence the name of this procedure.) Tsagris, Preston, and Wood state that "a standard choice of [the rotation matrix] $H$ is the Helmert sub-matrix obtained by removing the first row from the Helmert matrix." The Helmert matrix of order $k$ is constructed in a simple manner (see Harville p. 86 for instance). Its first row is all $1$ s. The next row is one of the the simplest that can be made orthogonal to the first row, namely $(1, -1, 0, \ldots, 0)$ . Row $j$ is among the simplest that is orthogonal to all preceding rows: its first $j-1$ entries are $1$ s, which guarantees it is orthogonal to rows $2, 3, \ldots, j-1$ , and its $j^\text{th}$ entry is set to $1-j$ to make it orthogonal to the first row (that is, its entries must sum to zero). All rows are then rescaled to unit length. Here, to illustrate the pattern, is the $4\times 4$ Helmert matrix before its rows have been rescaled: $$\pmatrix{1&1&1&1 \\ 1&-1&0&0 \\ 1&1&-2&0 \\ 1&1&1&-3}.$$ (Edit added August 2017) One particularly nice aspect of these "contrasts" (which are read row by row) is their interpretability. The first row is dropped, leaving $k-1$ remaining rows to represent the data. The second row is proportional to the difference between the second variable and the first. The third row is proportional to the difference between the third variable and the first two. Generally, row $j$ ( $2\le j \le k$ ) reflects the difference between variable $j$ and all those that precede it, variables $1, 2, \ldots, j-1$ . This leaves the first variable $j=1$ as a "base" for all contrasts. I have found these interpretations helpful when following the ILR by Principal Components Analysis (PCA): it enables the loadings to be interpreted, at least roughly, in terms of comparisons among the original variables. I have inserted a line into the R implementation of ilr below that gives the output variables suitable names to help with this interpretation. (End of edit.) 
Since R provides a function contr.helmert to create such matrices (albeit without the scaling, and with rows and columns negated and transposed), you don't even have to write the (simple) code to do it. Using this, I implemented the ILR (see below). To exercise and test it, I generated $1000$ independent draws from a Dirichlet distribution (with parameters $1,2,3,4$) and plotted their scatterplot matrix. Here, $k=4$. The points all clump near the lower left corners and fill triangular patches of their plotting areas, as is characteristic of compositional data. Their ILR has just three variables, again plotted as a scatterplot matrix: This does indeed look nicer: the scatterplots have acquired more characteristic "elliptical cloud" shapes, better amenable to second-order analyses such as linear regression and PCA. Tsagris et al. generalize the CLR by using a Box-Cox transformation, which generalizes the logarithm. (The log is a Box-Cox transformation with parameter $0$.) It is useful because, as the authors (correctly IMHO) argue, in many applications the data ought to determine their transformation. For these Dirichlet data a parameter of $1/2$ (which is halfway between no transformation and a log transformation) works beautifully: "Beautiful" refers to the simple description this picture permits: instead of having to specify the location, shape, size, and orientation of each point cloud, we need only observe that (to an excellent approximation) all the clouds are circular with similar radii. In effect, the CLR has simplified an initial description requiring at least 16 numbers into one that requires only 12 numbers and the ILR has reduced that to just four numbers (three univariate locations and one radius), at a price of specifying the ILR parameter of $1/2$ --a fifth number. When such dramatic simplifications happen with real data, we usually figure we're on to something: we have made a discovery or achieved an insight. This generalization is implemented in the ilr function below. The command to produce these "Z" variables was simply z <- ilr(x, 1/2). One advantage of the Box-Cox transformation is its applicability to observations that include true zeros: it is still defined provided the parameter is positive. References Michail T. Tsagris, Simon Preston and Andrew T.A. Wood, A data-based power transformation for compositional data. arXiv:1106.1451v2 [stat.ME] 16 Jun 2011. David A. Harville, Matrix Algebra From a Statistician's Perspective. Springer Science & Business Media, Jun 27, 2008. Here is the R code.

#
# ILR (Isometric log-ratio) transformation.
# `x` is an `n` by `k` matrix of positive observations with k >= 2.
#
ilr <- function(x, p=0) {
  y <- log(x)
  if (p != 0) y <- (exp(p * y) - 1) / p        # Box-Cox transformation
  y <- y - rowMeans(y, na.rm=TRUE)             # Recentered values
  k <- dim(y)[2]
  H <- contr.helmert(k)                        # Dimensions k by k-1
  H <- t(H) / sqrt((2:k)*(2:k-1))              # Dimensions k-1 by k
  z <- y %*% t(H)                              # Rotated/reflected values
  if(!is.null(colnames(x)))                    # (Helps with interpreting output)
    colnames(z) <- paste0(colnames(x)[-1], ".ILR")
  return(z)
}
#
# Specify a Dirichlet(alpha) distribution for testing.
#
alpha <- c(1,2,3,4)
#
# Simulate and plot compositional data.
#
n <- 1000
k <- length(alpha)
x <- matrix(rgamma(n*k, alpha), nrow=n, byrow=TRUE)
x <- x / rowSums(x)
colnames(x) <- paste0("X.", 1:k)
pairs(x, pch=19, col="#00000040", cex=0.6)
#
# Obtain the ILR.
#
y <- ilr(x)
colnames(y) <- paste0("Y.", 1:(k-1))
#
# Plot the ILR.
#
pairs(y, pch=19, col="#00000040", cex=0.6)
{ "source": [ "https://stats.stackexchange.com/questions/259208", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/147410/" ] }
259,398
I'm seeing this image passed around a lot. I have a gut feeling that the information presented this way is somehow incomplete or even erroneous, but I'm not well versed enough in statistics to respond. It makes me think of this xkcd comic: even with solid historical data, circumstances can change in ways that upend how things can be predicted. Is this chart, as presented, useful for accurately showing what the threat level from refugees is? Is there necessary statistical context that makes this chart more or less useful? Note: try to keep it in layman's terms :)
Imagine your job is to forecast the number of Americans that will die from various causes next year. A reasonable place to start your analysis might be the National Vital Statistics Data final death data for 2014. The assumption is that 2017 might look roughly like 2014. You'll find that approximately 2,626,000 Americans died in 2014: 614,000 died of heart disease. 592,000 died of cancer. 147,000 from respiratory disease. 136,000 from accidents. ... 42,773 from suicide. 42,032 from accidental poisoning (subset of accidents category). 15,809 from homicide. 0 from terrorism under the CDC, NCHS classification . 18 from terrorism using a broader definition (University of Maryland Global Terrorism Datbase) See link for definitions. By my quick count, 0 of the perpetrators of these 2014 attacks were born outside the United States. Note that anecdote is not the same as data, but I've assembled links to the underlying news stories here: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , and 9 . Terrorist incidents in the U.S. are quite rare, so estimating off a single year is going to be problematic. Looking at the time-series, what you see is that the vast majority of U.S. terrorism fatalities came during the 9/11 attacks (See this report from the National Consortium for the Study of Terrorism and Responses to Terrorism.) I've copied their Figure 1 below: Immediately you see that you have an outlier, rare events problem. A single outlier is driving the overall number. If you're trying to forecast deaths from terrorism, there are numerous issues: What counts as terrorism? Terrorism can be defined broadly or narrowly. Is the process stationary ? If we take a time-series average, what are we estimating? Are conditions changing? What does a forecast conditional on current conditions look like? If the vast majority of deaths come from a single outlier, how do you reasonably model that? We can get more data in a sense by looking more broadly at other countries and going back further in time but then there are questions as to whether any of those patterns apply in today's world. IMHO, the FT graphic picked an overly narrow definition (the 9/11 attacks don't show up in the graphic because the attackers weren't refugees). There are legitimate issues with the chart, but the FT's broader point is correct that terrorism in the U.S. is quite rare. Your chance of being killed by a foreign born terrorist in the United States is close to zero. Life expectancy in the U.S. is about 78.7 years. What has moved life expectancy numbers down in the past has been events like the 1918 Spanish flu pandemic or WWII. Additional risks to life expectancy now might include obesity and opioid abuse. If you're trying to create a detailed estimate of terrorism risk, there are huge statistical issues, but to understand the big picture requires not so much statistics as understanding orders of magnitude and basic quantitative literacy. A more reasonable concern... (perhaps veering off topic) Looking back at history, the way huge numbers of people get killed is through disease, genocide, and war. A more reasonable concern might be that some rare, terrorist event triggers something catastrophic (eg. how the assassination of Archduke Ferdinand help set off WWI.) Or one could worry about nuclear weapons in the hands of someone crazy. Thinking about extremely rare but catastrophic events is incredibly difficult. It's a multidisciplinary pursuit and goes far outside of statistics. 
Perhaps the only statistical point here is that it's hard to estimate the probability and effects of some event which hasn't happened? (Except to say that it can't be that common or it would have happened already.)
{ "source": [ "https://stats.stackexchange.com/questions/259398", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/147538/" ] }
259,502
Suppose I have a $2 \times 2$ table that looks like:

              Disease   No Disease
  Treatment      55         67
  Control        42         34

I would like to do a logistic regression in R on this table. I understand that the standard way is to use the glm function with a cbind function in the response. In other words, the code looks like:

glm(formula = cbind(c(55,67),c(42,34)) ~ as.factor(c(1, 0)), family = binomial())

I am wondering why R requires us to use the cbind function and why simply using proportions is not sufficient. Is there a way to write this out explicitly as a formula? What would it look like in the form of: $$ \log\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X $$ where $X = 1$ if we have treatment and $X=0$ for control? Right now it seems like I am regressing on a matrix for the dependent variable.
First I show how you can specify a formula using aggregated data with proportions and weights. Then I show how you could specify a formula after dis-aggregating your data to individual observations. Documentation in glm indicates that: "For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes" I create new columns total and proportion_disease in df for the 'number of trials' and 'proportion of successes' respectively. library(dplyr) df <- tibble(treatment_status = c("treatment", "no_treatment"), disease = c(55, 42), no_disease = c(67,34)) %>% mutate(total = no_disease + disease, proportion_disease = disease / total) model_weighted <- glm(proportion_disease ~ treatment_status, data = df, family = binomial("logit"), weights = total) The above weighted approach takes in aggregated data and will give the same solution as the cbind method but allows you to specify a formula. (Below is equivalent to Original Poster's method but cbind(c(55,42), c(67,34)) rather than cbind(c(55,67),c(42,34)) so that 'Disease' rather than 'Treatment' is the response variable.) model_cbinded <- glm(cbind(disease, no_disease) ~ treatment_status, data = df, family = binomial("logit")) You could also just dis-aggregate your data into individual observations and pass these into glm (allowing you to specify a formula as well). df_expanded <- tibble(disease_status = c(1, 1, 0, 0), treatment_status = rep(c("treatment", "control"), 2)) %>% .[c(rep(1, 55), rep(2, 42), rep(3, 67), rep(4, 34)), ] model_expanded <- glm(disease_status ~ treatment_status, data = df_expanded, family = binomial("logit")) Let's compare these now by passing each model into summary . model_weighted and model_cbinded both produce the exact same results. model_expanded produces the same coefficients and standard errors, though outputs different degrees of freedom, deviance, AIC, etc. (corresponding with the number of rows/observations). > lapply(list(model_weighted, model_cbinded, model_expanded), summary) [[1]] Call: glm(formula = proportion_disease ~ treatment_status, family = binomial("logit"), data = df, weights = total) Deviance Residuals: [1] 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.2113 0.2307 0.916 0.360 treatment_statustreatment -0.4087 0.2938 -1.391 0.164 (Dispersion parameter for binomial family taken to be 1) Null deviance: 1.9451e+00 on 1 degrees of freedom Residual deviance: 1.0658e-14 on 0 degrees of freedom AIC: 14.028 Number of Fisher Scoring iterations: 2 [[2]] Call: glm(formula = cbind(disease, no_disease) ~ treatment_status, family = binomial("logit"), data = df) Deviance Residuals: [1] 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.2113 0.2307 0.916 0.360 treatment_statustreatment -0.4087 0.2938 -1.391 0.164 (Dispersion parameter for binomial family taken to be 1) Null deviance: 1.9451e+00 on 1 degrees of freedom Residual deviance: 1.0658e-14 on 0 degrees of freedom AIC: 14.028 Number of Fisher Scoring iterations: 2 [[3]] Call: glm(formula = disease_status ~ treatment_status, family = binomial("logit"), data = df_expanded) Deviance Residuals: Min 1Q Median 3Q Max -1.268 -1.095 -1.095 1.262 1.262 Coefficients: Estimate Std. 
Error z value Pr(>|z|) (Intercept) 0.2113 0.2307 0.916 0.360 treatment_statustreatment -0.4087 0.2938 -1.391 0.164 (Dispersion parameter for binomial family taken to be 1) Null deviance: 274.41 on 197 degrees of freedom Residual deviance: 272.46 on 196 degrees of freedom AIC: 276.46 Number of Fisher Scoring iterations: 3 (See R bloggers for conversation on weights parameter in glm in the regression context.)
{ "source": [ "https://stats.stackexchange.com/questions/259502", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108150/" ] }
259,654
The standard definition of an outlier for a Box and Whisker plot is points outside of the range $\left\{Q1-1.5IQR,Q3+1.5IQR\right\}$, where $IQR= Q3-Q1$ and $Q1$ is the first quartile and $Q3$ is the third quartile of the data. What is the basis for this definition? With a large number of points, even a perfectly normal distribution returns outliers. For example, suppose you start with the sequence: xseq<-seq(1-.5^1/4000,.5^1/4000, by = -.00025) This sequence creates a percentile ranking of 4000 points of data. Testing normality for the qnorm of this series results in: shapiro.test(qnorm(xseq)) Shapiro-Wilk normality test data: qnorm(xseq) W = 0.99999, p-value = 1 ad.test(qnorm(xseq)) Anderson-Darling normality test data: qnorm(xseq) A = 0.00044273, p-value = 1 The results are exactly as expected: the normality of a normal distribution is normal. Creating a qqnorm(qnorm(xseq)) creates (as expected) a straight line of data: If a boxplot of the same data is created, boxplot(qnorm(xseq)) produces the result: The boxplot, unlike shapiro.test , ad.test , or qqnorm identifies several points as outliers when the sample size is sufficiently large (as in this example).
Boxplots Here is a relevant section from Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley . Chapter 3, "Boxplots and Batch Comparison", written by John D. Emerson and Judith Strenio (from page 62): [...] Our definition of outliers as data values that are smaller than $F_{L}-\frac{3}{2}d_{F}$ or larger than $F_{U}+\frac{3}{2}d_{F}$ is somewhat arbitrary, but experience with many data sets indicates that this definition serves well in identifying values that may require special attention.[...] $F_{L}$ and $F_{U}$ denote the first and third quartile, whereas $d_{F}$ is the interquartile range (i.e. $F_{U}-F_{L}$ ). They go on and show the application to a Gaussian population (page 63): Consider the standard Gaussian distribution, with mean $0$ and variance $1$ . We look for population values of this distribution that are analogous to the sample values used in the boxplot. For a symmetric distribution, the median equals the mean, so the population median of the standard Gaussian distribution is $0$ . The population fourths are $-0.6745$ and $0.6745$ , so the population fourth-spread is $1.349$ , or about $\frac{4}{3}$ . Thus $\frac{3}{2}$ times the fourth-spread is $2.0235$ (about $2$ ). The population outlier cutoffs are $\pm 2.698$ (about $2\frac{2}{3}$ ), and they contain $99.3\%$ of the distribution. [...] So [they] show that if the cutoffs are applied to a Gaussian distribution, then $0.7\%$ of the population is outside the outlier cutoffs; this figure provides a standard of comparison for judging the placement of the outlier cutoffs [...]. Further, they write [...] Thus we can judge whether our data seem heavier-tailed than Gaussian by how many points fall beyond the outlier cutoffs. [...] They provide a table with the expected proportion of values that fall outside the outlier cutoffs (labelled "Total % Out"): So these cutoffs where never intended to be a strict rule about what data points are outliers or not. As you noted, even a perfect Normal distribution is expected to exhibit "outliers" in a boxplot. Outliers As far as I know, there is no universally accepted definition of outlier. I like the definition by Hawkins (1980): An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism. Ideally, you should only treat data points as outliers once you understand why they don't belong to the rest of the data. A simple rule is not sufficient. A good treatment of outliers can be found in Aggarwal (2013). References Aggarwal CC (2013): Outlier Analysis. Springer. Hawkins D (1980): Identification of Outliers. Chapman and Hall. Hoaglin, Mosteller and Tukey (2000): Understanding Robust and Exploratory Data Analysis. Wiley.
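To connect this to the $n = 4000$ example in the question: for a Gaussian population roughly $0.7\%$ of observations lie beyond the $1.5\,\text{IQR}$ fences, so about $4000 \times 0.007 \approx 28$ flagged points are just what we should expect from perfectly normal data. A short SciPy/NumPy sketch (my own illustration):

import numpy as np
from scipy import stats

# Population cutoffs for a standard Gaussian: Q1/Q3 = -/+0.6745, IQR = 1.349
q1, q3 = stats.norm.ppf([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr     # about -/+2.698

# Expected proportion outside the fences for a perfectly normal population
p_out = stats.norm.cdf(lower) + stats.norm.sf(upper)
print(p_out, 4000 * p_out)        # ~0.007 and ~28 expected "outliers"

# Simulated check: count boxplot-flagged points in normal samples of size 4000
rng = np.random.default_rng(3)
counts = []
for _ in range(200):
    x = rng.normal(size=4000)
    s_q1, s_q3 = np.percentile(x, [25, 75])
    s_iqr = s_q3 - s_q1
    counts.append(np.sum((x < s_q1 - 1.5 * s_iqr) | (x > s_q3 + 1.5 * s_iqr)))
print(np.mean(counts))            # close to the theoretical ~28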
{ "source": [ "https://stats.stackexchange.com/questions/259654", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137032/" ] }
259,664
Pearson correlation measures linear association between variables, while Spearman correlation captures monotonic relationships that may be non-linear. I computed Pearson and Spearman correlations between different features, and both gave similar values. What does this indicate? How can a linear method give similar values to a non-linear method?
{ "source": [ "https://stats.stackexchange.com/questions/259664", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/122285/" ] }
260,294
I setup a grid search for a bunch of params. I am trying to find the best parameters for a Keras neural net that does binary classification. The output is either a 1 or a 0. There are about 200 features. When I did a grid search, I got a bunch of models and their parameters. The best model had these parameters: Epochs : 20 Batch Size : 10 First Activation : sigmoid Learning Rate : 1 First Init : uniform and the results for that model were : loss acc val_loss val_acc 1 0.477424 0.768542 0.719960 0.722550 2 0.444588 0.788861 0.708650 0.732130 3 0.435809 0.794336 0.695768 0.732682 4 0.427056 0.798784 0.684516 0.721137 5 0.420828 0.803048 0.703748 0.720707 6 0.418129 0.806206 0.730803 0.723717 7 0.417522 0.805206 0.778434 0.721936 8 0.415197 0.807549 0.802040 0.733849 9 0.412922 0.808865 0.823036 0.731761 10 0.410463 0.810654 0.839087 0.730410 11 0.407369 0.813892 0.831844 0.725252 12 0.404436 0.815760 0.835217 0.723102 13 0.401728 0.816287 0.845178 0.722488 14 0.399623 0.816471 0.842231 0.717514 15 0.395746 0.819498 0.847118 0.719541 16 0.393361 0.820366 0.858291 0.714873 17 0.390947 0.822025 0.850880 0.723348 18 0.388478 0.823341 0.858591 0.721014 19 0.387062 0.822735 0.862971 0.721936 20 0.383744 0.825762 0.880477 0.721322 So I reran that model with more epochs (150 of them) and these are the results I got. I am not sure why this is happening, is this normal or what am I doing wrong? loss acc val_loss val_acc 1 0.476387 0.769279 0.728492 0.722550 2 0.442604 0.789941 0.701136 0.730472 3 0.431936 0.796915 0.676995 0.723655 4 0.426349 0.800258 0.728562 0.721997 5 0.421143 0.803653 0.739789 0.716900 6 0.416389 0.807575 0.720850 0.711373 7 0.413163 0.809154 0.751340 0.718128 8 0.409013 0.811418 0.780856 0.723409 9 0.405871 0.813576 0.789046 0.719295 10 0.402579 0.815524 0.804526 0.720278 11 0.400152 0.816813 0.811905 0.719541 12 0.400304 0.817261 0.787449 0.713154 13 0.397917 0.817945 0.804222 0.721567 14 0.395266 0.819524 0.801722 0.723348 15 0.393957 0.820156 0.793889 0.719049 16 0.391780 0.821103 0.794179 0.721199 17 0.390206 0.822393 0.806803 0.722611 18 0.388075 0.823604 0.817850 0.723901 19 0.385985 0.824762 0.841883 0.722058 20 0.383762 0.826867 0.857071 0.720830 21 0.381493 0.827947 0.864432 0.718005 22 0.379520 0.829210 0.872835 0.720400 23 0.377488 0.830526 0.879962 0.721383 24 0.375619 0.830736 0.887850 0.723839 25 0.373684 0.832000 0.891267 0.724822 26 0.372023 0.832368 0.891562 0.724638 27 0.370155 0.833184 0.892528 0.724883 28 0.368511 0.834684 0.887061 0.724699 29 0.366522 0.835606 0.883541 0.724883 30 0.364500 0.836422 0.882823 0.724515 31 0.362612 0.836737 0.882611 0.722427 32 0.360742 0.837448 0.884282 0.720769 33 0.359093 0.838738 0.884339 0.719418 34 0.357436 0.839080 0.888006 0.716470 35 0.355723 0.840633 0.892658 0.713830 36 0.354305 0.840764 0.897303 0.710575 37 0.352758 0.841343 0.901147 0.709408 38 0.351414 0.842054 0.899546 0.707934 39 0.349619 0.843370 0.905133 0.704864 40 0.347993 0.844475 0.910400 0.701363 41 0.346402 0.845581 0.915086 0.699337 42 0.345014 0.845818 0.918697 0.697617 43 0.343708 0.846607 0.923413 0.695652 44 0.342335 0.847292 0.930816 0.693441 45 0.340745 0.848081 0.940737 0.689020 46 0.339623 0.848713 0.948633 0.685274 47 0.338846 0.849845 0.952492 0.683923 48 0.337724 0.850134 0.961147 0.683984 49 0.336247 0.850976 0.967792 0.683309 50 0.334444 0.851529 0.984107 0.680238 51 0.333086 0.852029 1.001179 0.678273 52 0.331756 0.853240 1.016130 0.674589 53 0.330738 0.854003 1.024875 0.673606 54 0.329548 0.854030 1.040597 0.670044 55 0.328813 0.855372 
1.041871 0.668509 56 0.327120 0.855898 1.050617 0.668755 57 0.325962 0.855819 1.064525 0.666667 58 0.324602 0.856898 1.078078 0.662859 59 0.323560 0.857241 1.085016 0.661938 60 0.322243 0.858662 1.093114 0.661140 61 0.320680 0.858872 1.117269 0.656841 62 0.319267 0.860004 1.138825 0.654815 63 0.318132 0.860636 1.154959 0.653648 64 0.316956 0.861531 1.180216 0.649718 65 0.315543 0.862320 1.198216 0.648428 66 0.314405 0.862610 1.218663 0.647384 67 0.313501 0.863873 1.245123 0.644252 68 0.312513 0.864558 1.262998 0.643147 69 0.311567 0.865347 1.283213 0.641918 70 0.310069 0.866505 1.302089 0.640752 71 0.309087 0.866611 1.318972 0.641857 72 0.307767 0.867321 1.361531 0.638787 73 0.306750 0.866742 1.382162 0.638357 74 0.305760 0.867242 1.378694 0.641611 75 0.305289 0.867769 1.393187 0.642594 76 0.304089 0.868479 1.435852 0.635532 77 0.302472 0.869006 1.435019 0.639892 78 0.301118 0.869400 1.447060 0.639216 79 0.300629 0.870058 1.488730 0.634918 80 0.299364 0.870295 1.488376 0.636576 81 0.298380 0.870822 1.504260 0.634611 82 0.297253 0.871664 1.525655 0.634058 83 0.296760 0.871875 1.538717 0.632891 84 0.295502 0.872585 1.551178 0.633751 85 0.294569 0.872927 1.562323 0.633137 86 0.294780 0.872585 1.555390 0.629944 87 0.293796 0.872743 1.587800 0.627057 88 0.293029 0.873427 1.608010 0.627549 89 0.291822 0.874006 1.626047 0.627303 90 0.290643 0.874533 1.651658 0.626689 91 0.289920 0.875270 1.681202 0.623925 92 0.289661 0.875375 1.683188 0.626505 93 0.288103 0.876323 1.706517 0.625031 94 0.287917 0.876770 1.722031 0.624417 95 0.287020 0.877270 1.743283 0.624478 96 0.286750 0.877639 1.762506 0.624048 97 0.285712 0.877481 1.780433 0.622267 98 0.284635 0.878639 1.789917 0.622206 99 0.283627 0.879191 1.862468 0.616925 100 0.282214 0.879455 1.915643 0.612810 101 0.281749 0.879244 1.881444 0.615205 102 0.281710 0.879639 1.916390 0.614223 103 0.280293 0.880350 1.938470 0.612810 104 0.279233 0.881008 1.979127 0.609187 105 0.279204 0.880297 1.997384 0.606546 106 0.278264 0.881876 2.009851 0.607652 107 0.277511 0.882876 2.038530 0.606116 108 0.277521 0.881771 2.034664 0.604888 109 0.276264 0.882534 2.058179 0.604827 110 0.275230 0.883587 2.078912 0.604274 111 0.275147 0.883034 2.073272 0.603537 112 0.273717 0.883797 2.100150 0.600958 113 0.273372 0.883692 2.114416 0.601634 114 0.272626 0.883692 2.129778 0.601941 115 0.272001 0.883929 2.138462 0.601326 116 0.271344 0.884508 2.148771 0.602923 117 0.270134 0.884692 2.115114 0.604581 118 0.269494 0.885140 2.135719 0.603107 119 0.268803 0.885587 2.162380 0.601695 120 0.268593 0.886219 2.183793 0.599239 121 0.267141 0.886035 2.195810 0.600221 122 0.266565 0.886772 2.192426 0.600528 123 0.265715 0.886561 2.260088 0.596598 124 0.264788 0.887693 2.253029 0.597335 125 0.263643 0.887693 2.289285 0.597028 126 0.263612 0.887956 2.311600 0.596536 127 0.261996 0.888588 2.339754 0.595063 128 0.263069 0.887588 2.364881 0.594449 129 0.261684 0.889272 2.321568 0.596598 130 0.261304 0.889509 2.389324 0.591562 131 0.260336 0.889640 2.403542 0.593098 132 0.259131 0.890272 2.413964 0.592115 133 0.258756 0.890193 2.422454 0.591992 134 0.257794 0.891009 2.454598 0.591255 135 0.257187 0.891009 2.459366 0.590088 136 0.257249 0.891088 2.448625 0.591624 137 0.256344 0.891404 2.495104 0.589167 138 0.255590 0.891720 2.495032 0.589781 139 0.254596 0.892299 2.496050 0.589229 140 0.254308 0.892588 2.510471 0.589536 141 0.253694 0.892509 2.519580 0.589720 142 0.252973 0.893088 2.527464 0.590273 143 0.252714 0.893194 2.553902 0.589106 144 0.252190 0.893720 2.536494 0.590457 145 0.251870 
0.893352 2.553102 0.588799 146 0.250437 0.893694 2.565141 0.589597 147 0.250066 0.894141 2.575599 0.588553 148 0.249596 0.894273 2.590722 0.588123 149 0.248569 0.894983 2.596031 0.588676 150 0.248096 0.895273 2.602810 0.588860
(This may be a duplicate.) It looks like your model is overfitting, that is, it is simply memorizing the training data: in your logs the training loss keeps falling while the validation loss starts rising after the first few epochs. In general, a model that overfits can be improved by adding more dropout or other regularization, stopping training earlier, or training and validating on a larger data set. Explain more about the data/features and the model for further ideas.
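Since the question uses Keras, here is a minimal, hedged sketch of the usual remedies: dropout between dense layers and early stopping on the validation loss. The layer sizes, optimizer, and the placeholder data X, y are my own assumptions for illustration, not details of the original model.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 200   # assumed, roughly matching "about 200 features"

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dropout(0.5),                      # dropout fights memorization
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop when val_loss has not improved for 5 epochs and keep the best weights
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)

# Placeholder data standing in for your real X, y
X = np.random.rand(1000, n_features)
y = np.random.randint(0, 2, size=1000)
history = model.fit(X, y, validation_split=0.2, epochs=150, batch_size=10,
                    callbacks=[early_stop], verbose=0)

With early stopping, letting the epoch budget grow from 20 to 150 no longer hurts, because training halts once the validation loss stops improving.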
{ "source": [ "https://stats.stackexchange.com/questions/260294", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/147413/" ] }
260,487
I am trying to evaluate clustering performance. I was reading the scikit-learn documentation on metrics . I do not understand the difference between ARI and AMI. It seems to me that they do the same thing in two different ways. Citing from the documentation: Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, the adjusted Rand index is a function that measures the similarity of the two assignments, ignoring permutations and with chance normalization. vs Given the knowledge of the ground truth class assignments labels_true and our clustering algorithm assignments of the same samples labels_pred, the Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations ... AMI was proposed more recently and is normalized against chance. Should I use both of them in my clustering evaluation or would this be redundant?
Short answer: use ARI when the ground-truth clustering has large, equal-sized clusters; use AMI when the ground-truth clustering is unbalanced and small clusters exist. Longer answer: I worked on this topic. Reference: Adjusting for Chance Clustering Comparison Measures. A one-line summary of the paper is: AMI is high when there are pure clusters in the clustering solution. Let's have a look at an example. We have a reference clustering V consisting of 4 equal-sized clusters, each of size 25. Then we have two clustering solutions: U1, which has pure clusters (many zeros in the contingency table), and U2, which has impure clusters. AMI will choose U1 and ARI will choose U2. In the end: U1 is unbalanced, and unbalanced clusterings have more chances to present pure clusters, so AMI is biased towards unbalanced clustering solutions; U2 is balanced, and ARI is biased towards balanced clustering solutions. If we are using external validity indices such as AMI and ARI, we are aiming at matching the reference clustering with our clustering solution. This is why the recommendation at the top: AMI when the reference clustering is unbalanced, and ARI when the reference clustering is balanced. We do this mainly due to the biases in both measures. Also, when we have an unbalanced reference clustering with small clusters, we are even more interested in generating pure small clusters in the solution: we want to identify the small clusters from the reference precisely, and even a single mismatched data point can have a relatively large impact. Other than the recommendations above, we could use AMI when we are interested in having pure clusters in the solution. Experiment: here I sketched an experiment where P generates solutions U which are balanced when P=1 and unbalanced when P=0. You can play with the notebook here.
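For completeness, here is a small scikit-learn sketch (my own example with made-up label vectors) showing how both indices are computed; you can compare them on your own solutions in the same way:

import numpy as np
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

# Hypothetical ground truth: one big cluster (20 points), one small cluster (4 points)
labels_true = np.array([0] * 20 + [1] * 4)

# Solution A keeps the small cluster pure but splits the big one in half
labels_a = np.array([0] * 10 + [1] * 10 + [2] * 4)

# Solution B keeps the big cluster intact but absorbs half of the small one
labels_b = np.array([0] * 20 + [0, 0, 1, 1])

for name, labels_pred in [("A", labels_a), ("B", labels_b)]:
    ari = adjusted_rand_score(labels_true, labels_pred)
    ami = adjusted_mutual_info_score(labels_true, labels_pred)
    print(name, "ARI =", round(ari, 3), "AMI =", round(ami, 3))

Because ARI and AMI penalize these two kinds of mistakes differently, reporting both values is informative when the reference clustering is unbalanced.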
{ "source": [ "https://stats.stackexchange.com/questions/260487", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148273/" ] }
260,505
First of all, I realized that if I need to perform binary predictions, I have to create at least two classes by performing one-hot encoding. Is this correct? Also, is binary cross-entropy only for predictions with a single class? If I were to use the categorical cross-entropy loss that is typically found in most libraries (like TensorFlow), would there be a significant difference? In fact, what are the exact differences between categorical and binary cross-entropy? I have never seen an implementation of binary cross-entropy in TensorFlow, so I thought perhaps the categorical one works just as well.
Bernoulli $^*$ cross-entropy loss is a special case of categorical cross-entropy loss for $m=2$. $$ \begin{align} \mathcal{L}(\theta) &= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij}\log(p_{ij}) \\ &= -\frac{1}{n}\sum_{i=1}^n \left[y_i \log(p_i) + (1-y_i) \log(1-p_i)\right] \end{align} $$ where $i$ indexes samples/observations and $j$ indexes classes; $y$ is the sample label (a one-hot vector $y_{ij}$ in the first, categorical form and a binary scalar $y_i$ in the second, Bernoulli form); and $p_{ij}\in(0,1)$ with $\sum_{j} p_{ij} = 1$ for every $i$ is the prediction for a sample. I write "Bernoulli cross-entropy" because this loss arises from a Bernoulli probability model. There is not a "binary distribution." A "binary cross-entropy" doesn't tell us if the thing that is binary is the one-hot vector of $k \ge 2$ labels, or if the author is using binary encoding for each trial (success or failure). This isn't a general convention, but it makes clear that these formulae arise from particular probability models. Conventional jargon is not clear in that way.
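To see the equivalence numerically, here is a short NumPy sketch (my own illustration) computing the two-class categorical cross-entropy with one-hot labels and the Bernoulli/binary cross-entropy with scalar labels on the same predictions:

import numpy as np

rng = np.random.default_rng(1)
n = 5
y = rng.integers(0, 2, size=n)            # binary labels, shape (n,)
p1 = rng.uniform(0.05, 0.95, size=n)      # predicted P(class 1)

# Bernoulli / "binary" cross-entropy with scalar labels
bce = -np.mean(y * np.log(p1) + (1 - y) * np.log(1 - p1))

# Categorical cross-entropy with m = 2: one-hot labels and a 2-column
# probability matrix whose rows sum to one
Y = np.stack([1 - y, y], axis=1)          # one-hot labels, shape (n, 2)
P = np.stack([1 - p1, p1], axis=1)        # class probabilities, shape (n, 2)
cce = -np.mean(np.sum(Y * np.log(P), axis=1))

print(bce, cce)                           # identical up to floating-point error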
{ "source": [ "https://stats.stackexchange.com/questions/260505", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/129145/" ] }
260,899
I don't understand what exactly is the difference between "in-sample" and "out of sample" prediction? An in-sample forecast utilizes a subset of the available data to forecast values outside of the estimation period. An out of sample forecast instead uses all available data . Are these correct? Very specifically is the following definition correct? A within sample forecast utilizes a subset of the available data to forecast values outside of the estimation period and compare them to the corresponding known or actual outcomes. This is done to assess the ability of the model to forecast known values. For example, a within sample forecast from 1980 to 2015 might use data from 1980 to 2012 to estimate the model. Using this model, the forecaster would then predict values for 2013-2015 and compare the forecasted values to the actual known values. An out of sample forecast instead uses all available data in the sample to estimate a models. For the previous example, estimation would be performed over 1980-2015, and the forecast(s) would commence in 2016.
By the "sample" it is meant the data sample that you are using to fit the model. First - you have a sample Second - you fit a model on the sample Third - you can use the model for forecasting If you are forecasting for an observation that was part of the data sample - it is in-sample forecast. If you are forecasting for an observation that was not part of the data sample - it is out-of-sample forecast. So the question you have to ask yourself is: Was the particular observation used for the model fitting or not ? If it was used for the model fitting, then the forecast of the observation is in-sample. Otherwise it is out-of-sample. if you use data 1990-2013 to fit the model and then you forecast for 2011-2013, it's in-sample forecast. but if you only use 1990-2010 for fitting the model and then you forecast 2011-2013, then its out-of-sample forecast.
{ "source": [ "https://stats.stackexchange.com/questions/260899", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/148579/" ] }
260,949
Is it true that for two random variables $A$ and $B$, $$E(A\mid B)=E(B\mid A)\frac{E(A)}{E(B)}?$$
$$E[A\mid B] \stackrel{?}= E[B\mid A]\frac{E[A]}{E[B]} \tag 1$$ The conjectured result $(1)$ is trivially true for independent random variables $A$ and $B$ with nonzero means. If $E[B]=0$, then the right side of $(1)$ involves a division by $0$ and so $(1)$ is meaningless. Note that whether or not $A$ and $B$ are independent is not relevant. In general , $(1)$ does not hold for dependent random variables but specific examples of dependent $A$ and $B$ satisfying $(1)$ can be found. Note that we must continue to insist that $E[B]\neq 0$, else the right side of $(1)$ is meaningless. Bear in mind that $E[A\mid B]$ is a random variable that happens to be a function of the random variable $B$, say $g(B)$ while $E[B\mid A]$ is a random variable that is a function of the random variable $A$, say $h(A)$. So, $(1)$ is similar to asking whether $$g(B)\stackrel{?}= h(A)\frac{E[A]}{E[B]} \tag 2$$ can be a true statement, and obviously the answer is that $g(B)$ cannot be a multiple of $h(A)$ in general. To my knowledge, there are only two special cases where $(1)$ can hold. As noted above, for independent random variables $A$ and $B$, $g(B)$ and $h(A)$ are degenerate random variables (called constants by statistically-illiterate folks) that equal $E[A]$ and $E[B]$ respectively, and so if $E[B]\neq 0$, we have equality in $(1)$. At the other end of the spectrum from independence, suppose that $A=g(B)$ where $g(\cdot)$ is an invertible function and thus $A=g(B)$ and $B=g^{-1}(A)$ are wholly dependent random variables. In this case, $$E[A\mid B] = g(B), \quad E[B\mid A] = g^{-1}(A) = g^{-1}(g(B)) = B$$ and so $(1)$ becomes $$g(B)\stackrel{?}= B\frac{E[A]}{E[B]}$$ which holds exactly when $g(x) = \alpha x$ where $\alpha$ can be any nonzero real number. Thus, $(1)$ holds whenever $A$ is a scalar multiple of $B$, and of course $E[B]$ must be nonzero (cf. Michael Hardy's answer ). The above development shows that $g(x)$ must be a linear function and that $(1)$ cannot hold for affine functions $g(x) = \alpha x + \beta$ with $\beta \neq 0$. However, note that Alecos Papadopolous in his answer and his comments thereafter claims that if $B$ is a normal random variable with nonzero mean, then for specific values of $\alpha$ and $\beta\neq 0$ that he provides, $A=\alpha B+\beta$ and $B$ satisfy $(1)$. In my opinion, his example is incorrect. In a comment on this answer, Huber has suggested considering the symmetric conjectured equality $$E[A\mid B]E[B] \stackrel{?}=E[B\mid A]E[A]\tag{3}$$ which of course always holds for independent random variables regardless of the values of $E[A]$ and $E[B]$ and for scalar multiples $A = \alpha B$ also. Of course, more trivially, $(3)$ holds for any zero-mean random variables $A$ and $B$ (independent or dependent, scalar multiple or not; it does not matter!): $E[A]=E[B]=0$ is sufficient for equality in $(3)$. Thus, $(3)$ might not be as interesting as $(1)$ as a topic for discussion.
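A small NumPy check of the discussion above (my own sketch): for a discrete joint distribution, $E[A\mid B]$ is a function of $B$ and $E[B\mid A]$ is a function of $A$, and comparing them cell by cell shows that $(1)$ generally fails for dependent variables that are not scalar multiples of each other.

import numpy as np

# A generic dependent joint pmf on A in {1, 2} (rows) and B in {1, 2} (columns)
p = np.array([[0.40, 0.10],
              [0.05, 0.45]])
a_vals = np.array([1.0, 2.0])
b_vals = np.array([1.0, 2.0])

pA = p.sum(axis=1)                    # marginal of A
pB = p.sum(axis=0)                    # marginal of B
EA = a_vals @ pA
EB = b_vals @ pB

# E[A | B = b] for each b, and E[B | A = a] for each a
E_A_given_B = (a_vals @ p) / pB       # length-2 vector indexed by b
E_B_given_A = (p @ b_vals) / pA       # length-2 vector indexed by a

# Conjecture (1) would require, for every cell (a, b) with positive probability,
#   E[A | B = b] == E[B | A = a] * E[A] / E[B]
for i in range(2):
    for j in range(2):
        lhs = E_A_given_B[j]
        rhs = E_B_given_A[i] * EA / EB
        print(f"A={a_vals[i]:.0f}, B={b_vals[j]:.0f}: lhs={lhs:.3f}, rhs={rhs:.3f}")

For this pmf the two sides disagree, illustrating that $g(B)$ cannot in general equal a fixed multiple of $h(A)$.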
{ "source": [ "https://stats.stackexchange.com/questions/260949", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24515/" ] }
262,044
I was reading the FaceNet paper and in the 3rd paragraph of the introduction it says: Previous face recognition approaches based on deep networks use a classification layer trained over a set of known face identities and then take an intermediate bottleneck layer as a representation used to generalize recognition beyond the set of identities used in training. I was wondering what they mean by an intermediate bottleneck layer?
A bottleneck layer is a layer that contains few nodes compared to the previous layers. It can be used to obtain a representation of the input with reduced dimensionality. An example of this is the use of autoencoders with bottleneck layers for nonlinear dimensionality reduction. My understanding of the quote is that previous approaches use a deep network to classify faces. They then take the first several layers of this network, from the input up to some intermediate layer (say, the $k$th layer, containing $n_k$ nodes). This subnetwork implements a mapping from the input space to an $n_k$-dimensional vector space. The $k$th layer is a bottleneck layer, so the vector of activations of nodes in the $k$th layer gives a lower dimensional representation of the input. The original network can't be used to classify new identities, on which it wasn't trained. But, the $k$th layer may provide a good representation of faces in general. So, to learn new identities, new classifier layers can be stacked on top of the $k$th layer and trained. Or, the new training data can be fed through the subnetwork to obtain representations from the $k$th layer, and these representations can be fed to some other classifier.
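As a concrete, hypothetical sketch of the idea in Keras: train a classifier on the known identities, then cut the network at a narrow intermediate layer and reuse that sub-network as an embedding function for faces it was never trained on. The layer sizes and names below are my own assumptions for illustration, not FaceNet's architecture.

from tensorflow import keras
from tensorflow.keras import layers

n_inputs, n_known_identities = 1024, 500   # assumed sizes for illustration

# A classifier over the known identities, with a narrow "bottleneck" layer
model = keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(n_inputs,)),
    layers.Dense(128, activation="relu", name="bottleneck"),  # few nodes
    layers.Dense(n_known_identities, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train on faces of the known identities ...

# Reuse everything up to the bottleneck as a fixed-size face representation
embedder = keras.Model(inputs=model.input,
                       outputs=model.get_layer("bottleneck").output)
# embeddings = embedder.predict(new_faces)   # 128-d vectors for unseen people
# A new, small classifier (or a distance rule) can then be trained on these.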
{ "source": [ "https://stats.stackexchange.com/questions/262044", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90815/" ] }
262,885
I have seen people put a lot of effort into SVMs and kernels, and they look pretty interesting as a starting point in machine learning. But if we expect that we can almost always find a better-performing solution with a (deep) neural network, what is the point of trying other methods in this era? Here are my constraints on this topic. We consider only supervised learning: regression and classification. Readability of the result does not count; only accuracy on the supervised-learning problem counts. Computational cost is not a consideration. I am not saying that other methods are useless.
Here are one theoretical and two practical reasons why someone might rationally prefer a non-DNN approach. The No Free Lunch Theorem from Wolpert and Macready says: "We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems." In other words, no single algorithm rules them all; you've got to benchmark. The obvious rebuttal here is that you usually don't care about all possible problems, and deep learning seems to work well on several classes of problems that people do care about (e.g., object recognition), and so it's a reasonable first/only choice for other applications in those domains. Many of these very deep networks require tons of data, as well as tons of computation, to fit. If you have (say) 500 examples, a twenty-layer network is never going to learn well, while it might be possible to fit a much simpler model. There are a surprising number of problems where it's not feasible to collect a ton of data. On the other hand, one might try learning to solve a related problem (where more data is available), and use something like transfer learning to adapt it to the specific low-data-availability task. Deep neural networks can also have unusual failure modes. There are some papers showing that barely-human-perceptible changes can cause a network to flip from correctly classifying an image to confidently misclassifying it. (See here and the accompanying paper by Szegedy et al.) Other approaches may be more robust against this: there are poisoning attacks against SVMs (e.g., this by Biggio, Nelson, and Laskov), but those happen at training time, rather than test time. At the opposite extreme, there are known (but not great) performance bounds for the nearest-neighbor algorithm. In some situations, you might be happier with lower overall performance with less chance of catastrophe.
{ "source": [ "https://stats.stackexchange.com/questions/262885", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/143975/" ] }
263,238
I have a prediction model tested with four methods as you can see in the boxplot figure below. The attribute that the model predicts is in range of 0-8. You may notice that there is one upper-bound outlier and three lower-bound outliers indicated by all methods. I wonder if it is appropriate to remove these instances from the data? Or is this a sort of cheating to improve the prediction model?
It is almost always cheating to remove observations to improve a regression model. You should drop observations only when you truly think that they are in fact outliers. For instance, suppose you have a time series from a heart rate monitor connected to your smart watch. If you take a look at the series, it's easy to see that there would be erroneous observations with readings like 300 bpm. These should be removed, but not because you want to improve the model (whatever that means). They're errors in reading which have nothing to do with your heart rate. One thing to be careful about, though, is the correlation of errors with the data. In my example it could be argued that you have errors when the heart rate monitor is displaced during exercises such as running or jumping, which will make these errors correlated with the heart rate. In this case, care must be taken in the removal of these outliers and errors, because they are not occurring at random. I'll give you a made-up example of when not to remove outliers. Let's say you're measuring the movement of a weight on a spring. If the weight is small relative to the strength of the spring, then you'll notice that Hooke's law works very well: $$F=-k\Delta x,$$ where $F$ is force, $k$ is the spring constant and $\Delta x$ is the position of the weight. Now if you put a very heavy weight on or displace the weight too much, you'll start seeing deviations: at large enough displacements $\Delta x$ the motion will seem to deviate from the linear model. So, you might be tempted to remove the outliers to improve the linear model. This would not be a good idea, because the model is not working very well since Hooke's law is only approximately right. UPDATE In your case I would suggest pulling those data points and looking at them closer. Could it be lab instrument failure? External interference? Sample defect? etc. Next, try to identify whether the presence of these outliers could be correlated with what you measure, like in the example I gave. If there's correlation then there's no simple way to go about it. If there's no correlation then you can remove the outliers.
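To make the Hooke's-law example concrete, here is a small NumPy sketch (entirely my own, with made-up numbers) where the apparent outliers at large displacements are real physics rather than measurement errors; deleting them would only hide the linear model's limitation:

import numpy as np

rng = np.random.default_rng(4)

# Made-up spring data: a linear term plus a small cubic correction that only
# matters at large displacements (standing in for "Hooke's law is approximate")
k, c = 2.0, 0.15
x = np.linspace(-3, 3, 61)
F = -k * x - c * x**3 + rng.normal(scale=0.1, size=x.size)

# Fit the purely linear model F = a + b * x
coef = np.polyfit(x, F, deg=1)
residuals = F - np.polyval(coef, x)

# The largest residuals sit systematically at the largest |x|: they are not bad
# data points but evidence that the linear model itself breaks down out there.
order = np.argsort(np.abs(residuals))[::-1]
print(x[order[:5]])        # the most "outlying" points are at extreme displacements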
{ "source": [ "https://stats.stackexchange.com/questions/263238", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91142/" ] }
263,324
The first sentence of this wiki page claims that "In econometrics, an endogeneity problem occurs when an explanatory variable is correlated with the error term." My question is: how can this ever happen? Isn't the regression beta chosen such that the error term is orthogonal to the column space of the design matrix?
You are conflating two types of "error" term. Wikipedia actually has an article devoted to this distinction between errors and residuals . In an OLS regression, the residuals (your estimates of the error or disturbance term) $\hat \varepsilon$ are indeed guaranteed to be uncorrelated with the predictor variables, assuming the regression contains an intercept term. But the "true" errors $\varepsilon$ may well be correlated with them, and this is what counts as endogeneity. To keep things simple, consider the regression model (you might see this described as the underlying " data generating process " or "DGP", the theoretical model that we assume to generate the value of $y$): $$y_i = \beta_1 + \beta_2 x_i + \varepsilon_i$$ There is no reason, in principle, why $x$ can't be correlated with $\varepsilon$ in our model, however much we would prefer it not to breach the standard OLS assumptions in this way. For example, it might be that $y$ depends on another variable that has been omitted from our model, and this has been incorporated into the disturbance term (the $\varepsilon$ is where we lump in all the things other than $x$ that affect $y$). If this omitted variable is also correlated with $x$, then $\varepsilon$ will in turn be correlated with $x$ and we have endogeneity (in particular, omitted-variable bias ). When you estimate your regression model on the available data, we get $$y_i = \hat \beta_1 + \hat \beta_2 x_i + \hat \varepsilon_i$$ Because of the way OLS works*, the residuals $\hat \varepsilon$ will be uncorrelated with $x$. But that doesn't mean we have avoided endogeneity — it just means that we can't detect it by analysing the correlation between $\hat \varepsilon$ and $x$, which will be (up to numerical error) zero. And because the OLS assumptions have been breached, we are no longer guaranteed the nice properties, such as unbiasedness, we enjoy so much about OLS. Our estimate $\hat \beta_2$ will be biased. $(*)$ The fact that $\hat \varepsilon$ is uncorrelated with $x$ follows immediately from the "normal equations" we use to choose our best estimates for the coefficients. If you are not used to the matrix setting, and I stick to the bivariate model used in my example above, then the sum of squared residuals is $S(b_1, b_2) = \sum_{i=1}^n \varepsilon_i^2 = \sum_{i=1}^n (y_i-b_1 - b_2 x_i)^2$ and to find the optimal $b_1 = \hat \beta_1$ and $b_2 = \hat \beta_2$ that minimise this we find the normal equations, firstly the first-order condition for the estimated intercept: $$\frac{\partial S}{\partial b_1} = \sum_{i=1}^n -2(y_i-b_1 - b_2 x_i) = -2 \sum_{i=1}^n \hat \varepsilon_i = 0$$ which shows that the sum (and hence mean) of the residuals is zero, so the formula for the covariance between $\hat \varepsilon$ and any variable $x$ then reduces to $\frac{1}{n-1} \sum_{i=1}^n x_i \hat \varepsilon_i$. 
We see this is zero by considering the first-order condition for the estimated slope, which is that $$\frac{\partial S}{\partial b_2} = \sum_{i=1}^n -2 x_i (y_i-b_1 - b_2 x_i) = -2 \sum_{i=1}^n x_i \hat \varepsilon_i = 0$$ If you are used to working with matrices, we can generalise this to multiple regression by defining $S(b) = \varepsilon' \varepsilon = (y-Xb)'(y-Xb)$; the first-order condition to minimise $S(b)$ at optimal $b = \hat \beta$ is: $$\frac{dS}{db}(\hat\beta) = \frac{d}{db}\bigg(y'y - b'X'y - y'Xb + b'X'Xb\bigg)\bigg|_{b=\hat\beta} = -2X'y + 2X'X\hat\beta = -2X'(y - X\hat\beta) = -2X'\hat \varepsilon = 0$$ This implies each row of $X'$, and hence each column of $X$, is orthogonal to $\hat \varepsilon$. Then if the design matrix $X$ has a column of ones (which happens if your model has an intercept term), we must have $\sum_{i=1}^n \hat \varepsilon_i = 0$ so the residuals have zero sum and zero mean. The covariance between $\hat \varepsilon$ and any variable $x$ is again $\frac{1}{n-1} \sum_{i=1}^n x_i \hat \varepsilon_i$ and for any variable $x$ included in our model we know this sum is zero, because $\hat \varepsilon$ is orthogonal to every column of the design matrix. Hence there is zero covariance, and zero correlation, between $\hat \varepsilon$ and any predictor variable $x$. If you prefer a more geometric view of things , our desire that $\hat y$ lies as close as possible to $y$ in a Pythagorean kind of way , and the fact that $\hat y$ is constrained to the column space of the design matrix $X$, dictate that $\hat y$ should be the orthogonal projection of the observed $y$ onto that column space. Hence the vector of residuals $\hat \varepsilon = y - \hat y$ is orthogonal to every column of $X$, including the vector of ones $\mathbf{1_n}$ if an intercept term is included in the model. As before, this implies the sum of residuals is zero, whence the residual vector's orthogonality with the other columns of $X$ ensures it is uncorrelated with each of those predictors. But nothing we have done here says anything about the true errors $\varepsilon$. Assuming there is an intercept term in our model, the residuals $\hat \varepsilon$ are only uncorrelated with $x$ as a mathematical consequence of the manner in which we chose to estimate regression coefficients $\hat \beta$. The way we selected our $\hat \beta$ affects our predicted values $\hat y$ and hence our residuals $\hat \varepsilon = y - \hat y$. If we choose $\hat \beta$ by OLS, we must solve the normal equations and these enforce that our estimated residuals $\hat \varepsilon$ are uncorrelated with $x$. Our choice of $\hat \beta$ affects $\hat y$ but not $\mathbb{E}(y)$ and hence imposes no conditions on the true errors $\varepsilon = y - \mathbb{E}(y)$. It would be a mistake to think that $\hat \varepsilon$ has somehow "inherited" its uncorrelatedness with $x$ from the OLS assumption that $\varepsilon$ should be uncorrelated with $x$. The uncorrelatedness arises from the normal equations.
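A short simulation makes the distinction concrete; the coefficients and the omitted variable below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                   # omitted variable
x = 0.8 * z + rng.normal(size=n)         # x is correlated with z
eps = 2.0 * z + rng.normal(size=n)       # z gets lumped into the "true" error term
y = 1.0 + 3.0 * x + eps                  # DGP: true slope on x is 3

# OLS of y on x (with an intercept), done by hand
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

print("estimated slope:", beta_hat[1])                     # biased, well above 3
print("corr(x, residuals):", np.corrcoef(x, resid)[0, 1])  # ~0 by construction
print("corr(x, true errors):", np.corrcoef(x, eps)[0, 1])  # clearly non-zero
```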
{ "source": [ "https://stats.stackexchange.com/questions/263324", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81308/" ] }
263,393
Scikit has CalibratedClassifierCV , which allows us to calibrate our models on a particular X, y pair. It also states clearly that data for fitting the classifier and for calibrating it must be disjoint. If they must be disjoint, is it legitimate to train the classifier with the following? model = CalibratedClassifierCV(my_classifier) model.fit(X_train, y_train) I fear that by using the same training set I'm breaking the disjoint data rule. An alternative might be to have a validation set: my_classifier.fit(X_train, y_train) model = CalibratedClassifierCV(my_classifier, cv='prefit') model.fit(X_valid, y_valid) This has the disadvantage of leaving less data for training. Also, if CalibratedClassifierCV should only be fit on models fit on a different training set, why would its default option be cv=3 , which will also fit the base estimator? Does the cross validation handle the disjoint rule on its own? Question: what is the correct way to use CalibratedClassifierCV?
There are two things mentioned in the CalibratedClassifierCV docs that hint towards the ways it can be used: base_estimator: If cv=prefit, the classifier must have been fit already on data. cv: If “prefit” is passed, it is assumed that base_estimator has been fitted already and all data is used for calibration. I may obviously be interpreting this wrong, but it appears you can use the CCCV (short for CalibratedClassifierCV) in two ways: Number one: You train your model as usual, your_model.fit(X_train, y_train) . Then, you create your CCCV instance, your_cccv = CalibratedClassifierCV(your_model, cv='prefit') . Notice you set cv to flag that your model has already been fit. Finally, you call your_cccv.fit(X_validation, y_validation) . This validation data is used solely for calibration purposes. Number two: You have a new, untrained model. Then you create your_cccv=CalibratedClassifierCV(your_untrained_model, cv=3) . Notice cv is now the number of folds. Finally, you call your_cccv.fit(X, y) . Because your model is untrained, X and y have to be used for both training and calibration. The way to ensure the data is 'disjoint' is cross validation: for any given fold, CCCV will split X and y into your training and calibration data, so they do not overlap. TLDR: Method one allows you to control what is used for training and for calibration. Method two uses cross validation to try and make the most out of your data for both purposes.
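Putting the two usages side by side, a minimal sketch might look as follows; the dataset and base classifier are arbitrary choices, and the exact argument names have shifted a bit across scikit-learn versions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# Method one: prefit model, calibrate on a held-out validation set.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(clf, cv='prefit').fit(X_valid, y_valid)

# Method two: internal cross-validation keeps train/calibration folds disjoint.
calibrated_cv = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                                       cv=3).fit(X, y)

print(calibrated.predict_proba(X_valid[:3]))
print(calibrated_cv.predict_proba(X_valid[:3]))
```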
{ "source": [ "https://stats.stackexchange.com/questions/263393", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/87568/" ] }
263,539
I've got an application where it'd be handy to cluster a noisy dataset before looking for subgroup effects within the clusters. I first looked at PCA, but it takes ~30 components to get to 90% of the variability, so clustering on just a couple of PC's will throw away a lot of information. I then tried t-SNE (for the first time), which gives me an odd shape in two dimensions that is very amenable to clustering via k-means. What's more, running a random forest on the data with the cluster assignment as the outcome shows that the clusters have a fairly sensible interpretation given the context of the problem, in terms of the variables that make up the raw data. But if I'm going to report on these clusters, how do I describe them? K-means clusters on principal components reveal individuals who are nearby to one another in terms of the derived variables that comprise X% of the variance in the dataset. What equivalent statement can be made about t-SNE clusters? Perhaps something to the effect of: t-SNE reveals approximate contiguity in an underlying high-dimensional manifold, so clusters on the low-dimensional representation of the high-dimensional space maximize the "likelihood" that contiguous individuals will not be in the same cluster Can anyone propose a better blurb than that?
The problem with t-SNE is that it preserves neither distances nor density. It only preserves nearest neighbors to some extent. The difference is subtle, but affects any density- or distance-based algorithm. While clustering after t-SNE will sometimes (often?) work, you will never know whether the "clusters" you find are real, or just artifacts of t-SNE. You will not be able to explain the clusters. You may just be seeing 'shapes in clouds'. To see this effect, simply generate a multivariate Gaussian distribution. If you visualize this, you will have a ball that is dense and gets much less dense outwards, with some outliers that can be really far away. Now run t-SNE on this data. You will usually get a circle of rather uniform density. If you use a low perplexity, it may even have some odd patterns in there. But you cannot really tell apart outliers anymore. Now let's make things more complicated. Let's use 250 points in a normal distribution at (-2,0), and 750 points in a normal distribution at (+2,0). This is supposed to be an easy data set, for example with EM: If we run t-SNE with the default perplexity of 40, we get an oddly shaped pattern: Not bad, but also not that easy to cluster, is it? You will have a hard time finding a clustering algorithm that works here exactly as desired. And even if you were to ask humans to cluster this data, most likely they would find much more than 2 clusters here. If we run t-SNE with too small a perplexity, such as 20, we get more of these patterns that do not exist: This will cluster e.g. with DBSCAN, but it will yield four clusters. So beware, t-SNE can produce "fake" patterns! The optimum perplexity appears to be somewhere around 80 for this data set; but I don't think this parameter value will work for every other data set. Now this is visually pleasing, but not better for analysis. A human annotator could likely select a cut and get a decent result; k-means however will fail even in this very easy scenario! You can already see that density information is lost; all data seems to live in an area of almost the same density. If we instead increased the perplexity further, the uniformity would increase, and the separation would reduce again. In conclusion, use t-SNE for visualization (and try different parameters to get something visually pleasing!), but rather do not run clustering afterwards; in particular, do not use distance- or density-based algorithms, as this information was intentionally (!) lost. Neighborhood-graph-based approaches may be fine, but then you don't need to run t-SNE first, just use the neighbors immediately (because t-SNE tries to keep this nn-graph largely intact). More examples These examples were prepared for the presentation of the paper (but cannot be found in the paper yet, as I did this experiment later) Erich Schubert, and Michael Gertz. Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection – A Remedy Against the Curse of Dimensionality? In: Proceedings of the 10th International Conference on Similarity Search and Applications (SISAP), Munich, Germany. 2017 First, we have this input data: As you may guess, this is derived from a "color me" image for kids. If we run this through SNE (NOT t-SNE, but the predecessor): Wow, our fish has become quite a sea monster! Because the kernel size is chosen locally, we lose much of the density information.
But you will be really surprised by the output of t-SNE: I have actually tried two implementations (the ELKI and the sklearn implementations), and both produced such a result. Some disconnected fragments, each of which looks somewhat consistent with the original data. Two important points to explain this: SGD relies on an iterative refinement procedure, and may get stuck in local optima. In particular, this makes it hard for the algorithm to "flip" a part of the data that it has mirrored, as this would require moving points through others that are supposed to be separate. So if some parts of the fish are mirrored, and other parts are not mirrored, it may be unable to fix this. t-SNE uses the t-distribution in the projected space. In contrast to the Gaussian distribution used by regular SNE, this means most points will repel each other, because they have 0 affinity in the input domain (the Gaussian gets to zero quickly), but >0 affinity in the output domain. Sometimes (as in MNIST) this makes a nicer visualization. In particular, it can help "split" a data set a bit more than in the input domain. This additional repulsion also often causes points to use the area more evenly, which can also be desirable. But here in this example, the repelling effects actually cause fragments of the fish to separate. We can mitigate the first issue (on this toy data set) by using the original coordinates as the initial placement, rather than random coordinates (as usually used with t-SNE). This time, the image is sklearn instead of ELKI, because the sklearn version already had a parameter to pass initial coordinates: As you can see, even with "perfect" initial placement, t-SNE will "break" the fish in a number of places that were originally connected because the Student-t repulsion in the output domain is stronger than the Gaussian affinity in the input space. As you can see, t-SNE (and SNE, too!) are interesting visualization techniques, but they need to be handled carefully. I would rather not apply k-means to the result, because the result will be heavily distorted, and neither distances nor density are preserved well. Instead, use it for visualization.
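For readers who want to reproduce the two-Gaussian experiment, a rough sketch along these lines should work; the seeds, perplexities and clustering choices are illustrative, and the exact pictures and scores will depend on the implementation and version used.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 250 points around (-2, 0) and 750 points around (+2, 0), as in the text.
X = np.vstack([rng.normal(loc=(-2, 0), scale=1, size=(250, 2)),
               rng.normal(loc=(+2, 0), scale=1, size=(750, 2))])
labels = np.repeat([0, 1], [250, 750])

for perplexity in (20, 40, 80):
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emb)
    # Agreement with the true groups, up to label permutation.
    agreement = max(np.mean(km.labels_ == labels), np.mean(km.labels_ != labels))
    print(f"perplexity={perplexity}: k-means agreement with truth = {agreement:.2f}")
```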
{ "source": [ "https://stats.stackexchange.com/questions/263539", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17359/" ] }
263,544
Both methods use Gaussian processes, and kriging uses the Best Linear Unbiased Predictor (BLUP) to predict the mean (this is not seen in Bayesian optimization?). In both cases there is also a covariance matrix, whose inverse has to be computed before moving on to the next sample point. As far as I understand, Bayesian optimization yields a posterior pdf with mean and variance kriging yields a predicted mean and MSE $\sigma^2$ Obviously they differ for some reason. Why are they different?
{ "source": [ "https://stats.stackexchange.com/questions/263544", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95842/" ] }
263,712
Can anybody please clarify what a surrogate loss function is? I'm familiar with what a loss function is, and I understand that we want a convex function that is differentiable, but I don't understand the theory behind how you can satisfactorily use a surrogate loss function and actually trust its results.
In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$ , where the $X_i$ are your features and the $Y_i$ are your true labels. Given a hypothesis function $h(x)$ , the loss function $l$ maps the hypothesis function's prediction $h(X_i)$ and the true label $Y_i$ for that particular input to a real-valued penalty. Now, a general goal is to find a hypothesis such that it minimizes the empirical risk (that is, it minimizes the chances of being wrong): $$R_l(h) = E_{\text{empirical}}[l(h(X), Y)] = \dfrac{1}{n}\sum_{i=1}^n{l(h(X_i), Y_i)}$$ In the case of binary classification, a common loss function that is used is the $0$ - $1$ loss function: $$ l(h(X), Y) = \begin{cases} 0 & Y = h(X) \\ 1 & \text{otherwise} \end{cases} $$ In general, the loss function that we care about cannot be optimized efficiently. For example, the $0$ - $1$ loss function is discontinuous. So, we consider another loss function that will make our life easier, which we call the surrogate loss function . An example of a surrogate loss function could be $\psi(h(x), y) = \max(1 - y\,h(x), 0)$ for labels $y \in \{-1, +1\}$ (the so-called hinge loss in SVM), which is convex and easy to optimize using conventional methods. This function acts as a proxy for the actual loss we wanted to minimize in the first place. Obviously, it has its disadvantages, but in some cases a surrogate loss function actually results in being able to learn more. By this, I mean that once your classifier achieves optimal risk (i.e. highest accuracy), you can still see the loss decreasing, which means that it is trying to push the different classes even further apart to improve its robustness.
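Here is a tiny numerical illustration (the labels and scores are made up) of why the hinge loss is a more workable training signal than the $0$-$1$ loss.

```python
import numpy as np

y = np.array([+1, +1, -1, -1])
h = np.array([2.0, 0.4, 0.3, 1.5])        # raw scores h(x) from some hypothesis

zero_one = (np.sign(h) != y).astype(float)
hinge = np.maximum(0.0, 1.0 - y * h)

print("0-1 loss  :", zero_one, " mean =", zero_one.mean())
print("hinge loss:", hinge,    " mean =", hinge.mean())
# The second example is correctly classified but only by a small margin, so the
# hinge loss still penalises it; the third is barely wrong and the fourth badly
# wrong. The surrogate distinguishes all of these (and gives a usable gradient),
# whereas the 0-1 loss only reports right/wrong.
```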
{ "source": [ "https://stats.stackexchange.com/questions/263712", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/150543/" ] }
264,016
I have a set of 150 features, and many of them are highly correlated with each other. My goal is to predict the value of a discrete variable, whose range is 1-8. My sample size is 550, and I am using 10-fold cross-validation. AFAIK, among the regularization methods (Lasso, ElasticNet, and Ridge), Ridge is more robust to correlation among the features. That is why I expected that with Ridge, I should obtain a more accurate prediction. However, my results show that the mean absolute error of Lasso or Elastic is around 0.61 whereas this score is 0.97 for the ridge regression. I wonder what would be an explanation for this. Is this because I have many features, and Lasso performs better because it makes a sort of feature selection, getting rid of the redundant features?
Suppose you have two highly correlated predictor variables $x, z$, and suppose both are centered and scaled (to mean zero, variance one). Then the ridge penalty on the parameter vector is $\beta_1^2 + \beta_2^2$ while the lasso penalty term is $\mid \beta_1 \mid + \mid \beta_2 \mid$. Now, since the model is assumed to be highly collinear, so that $x$ and $z$ can more or less substitute for each other in predicting $Y$, many linear combinations of $x$ and $z$ in which we simply substitute part of $x$ for $z$ will work very similarly as predictors; for example $0.2 x + 0.8 z$, $0.3 x + 0.7 z$ or $0.5 x + 0.5 z$ will be about equally good as predictors. Now look at these three examples: the lasso penalty is equal in all three cases (it is 1), while the ridge penalties differ (they are 0.68, 0.58 and 0.5, respectively), so the ridge penalty will prefer equal weighting of collinear variables while the lasso penalty will not be able to choose. This is one reason ridge (or more generally, elastic net, which is a linear combination of lasso and ridge penalties) will work better with collinear predictors: when the data give little reason to choose between different linear combinations of collinear predictors, lasso will just "wander" while ridge tends to choose equal weighting. That last might be a better guess for use with future data! And, if that is so with the present data, it could show up in cross validation as better results with ridge. We can view this in a Bayesian way: ridge and lasso imply different prior information, and the prior information implied by ridge tends to be more reasonable in such situations. (I learned this explanation, more or less, from the book "Statistical Learning with Sparsity: The Lasso and Generalizations" by Trevor Hastie, Robert Tibshirani and Martin Wainwright, but at the moment I was not able to find a direct quote.) But the OP seems to have a different problem: However, my results show that the mean absolute error of Lasso or Elastic is around 0.61 whereas this score is 0.97 for the ridge regression Now, lasso is also effectively doing variable selection; it can set some coefficients exactly to zero. Ridge cannot do that (except with probability zero). So it might be that with the OP's data, among the collinear variables, some are effective and others don't act at all (and the degree of collinearity is sufficiently low that this can be detected). See When should I use lasso vs ridge? where this is discussed. A detailed analysis would need more information than is given in the question.
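A quick simulation (made-up data, arbitrary penalty strengths) illustrates the typical behaviour: ridge tends to spread the weight across the two nearly interchangeable predictors, while lasso is free to concentrate most or all of it on one of them.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = x + 0.05 * rng.normal(size=n)          # z is almost a copy of x
y = 1.0 * x + 1.0 * z + rng.normal(scale=0.5, size=n)
X = np.column_stack([x, z])

print("ridge coefficients:", Ridge(alpha=1.0).fit(X, y).coef_)
print("lasso coefficients:", Lasso(alpha=0.1).fit(X, y).coef_)
# How the total weight is split between the two collinear columns differs
# between the two penalties, even though both fit the data about equally well.
```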
{ "source": [ "https://stats.stackexchange.com/questions/264016", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91142/" ] }
264,017
I'm trying to find the correct terminology for a dataset I'm working with: the data consists of events that have a time of occurrence (irregular, i.e. not from a fixed sample rate) and a scalar value. The aggregated values (their sum) represent the system's state. The events are largely independent w.r.t. both timing and value. An example would be transactions on a bank account. So far I'm referring to the stream of events as a time series, which is ( according to Wikipedia ) "a series of data points indexed [...] in time order". However, most of the materials on time series that I've found seem to assume that each data point is a sample from the same underlying and time-dependent "value" (a stock price, temperature, ...). In my case that's true for the system state (the "account balance") but not for the individual events. What is the appropriate terminology for such a dataset?
{ "source": [ "https://stats.stackexchange.com/questions/264017", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9038/" ] }
264,533
My objective is to classify sensor signals. The concept of my solution so far is: i) Engineering features from the raw signal ii) Selecting relevant features with ReliefF and a clustering approach iii) Applying NN, Random Forest and SVM However, I am trapped in a dilemma. In ii) and iii), there are hyperparameters like k-Nearest Neighbours for ReliefF or the window length over which the sensor signal is evaluated, or the number of hidden units in each layer of the NN. There are 3 problems I see here: 1) Tuning feature selection parameters will influence the classifier performance 2) Optimizing hyperparameters of the classifier will influence the choice of features. 3) Evaluating each possible combination of configuration is intractable. So my questions are: a) Can I make a simplifying assumption, s.t. tuning feature selection parameters can be decoupled from tuning classifier parameters? b) Are there any other possible solutions?
As you already observed yourself, your choice of features (feature selection) may have an impact on which hyperparameters for your algorithm are optimal, and which hyperparameters you select for your algorithm may have an impact on which choice of features would be optimal. So, yes, if you really care about squeezing every single percent of performance out of your model, and you can afford the required amount of computation, the best solution is probably to do feature selection and hyperparameter tuning "at the same time". That's probably not easy (depending on how you do feature selection) though. The way I imagine it working would be like having different sets of features as candidates, and treating the selection of one set of features out of all those candidate sets as an additional hyperparameter. In practice that may not really be feasible though. In general, if you cannot afford to evaluate all the possible combinations, I'd recommend: Very loosely optimize hyperparameters, just to make sure you don't assign extremely bad values to some hyperparameters. This can often just be done by hand if you have a good intuitive understanding of your hyperparameters, or done with a very brief hyperparameter optimization procedure using just a bunch of features that you know to be decently good otherwise. Feature selection, with hyperparameters that are maybe not 100% optimized but at least not extremely terrible either. If you have at least a somewhat decently configured machine learning algorithm already, having good features will be significantly more important for your performance than micro-optimizing hyperparameters. Extreme examples: If you have no features, you can't predict anything. If you have a cheating feature that contains the class label, you can perfectly classify everything. Optimize hyperparameters with the features selected in the step above. This should be a good feature set now, where it actually may be worth optimizing hyperparams a bit. To address the additional question that Nikolas posted in the comments, concerning how all these things (feature selection, hyperparameter optimization) interact with k-fold cross validation: I'd say it depends. Whenever you use data in one of the folds for anything at all, and then evaluate performance on that same fold, you get a biased estimate of your performance (you'll overestimate performance). So, if you use data in all the folds for the feature selection step, and then evaluate performance on each of those folds, you'll get biased estimates of performance for each of them (which is not good). Similarly, if you have data-driven hyperparameter optimization and use data from certain folds (or all folds), and then evaluate on those same folds, you'll again get biased estimates of performance. Possible solutions are: Repeat the complete pipeline within every fold separately (e.g. within each fold, do feature selection + hyperparameter optimization and model training). Doing this means that k-fold cross validation gives you unbiased estimates of the performance of this complete pipeline . Split your initial dataset into a ''preprocessing dataset'' and a ''train/test dataset''. You can do your feature selection + hyperparameter optimization on the ''preprocessing dataset''. Then, you fix your selected features and hyperparameters, and do k-fold cross validation on the ''train/test dataset''. 
Doing this means that k-fold cross validation gives you unbiased estimates of the performance of your ML algorithm given the fixed feature-set and hyperparameter values. Note how the two solutions result in slightly different estimates of performance. Which one is more interesting depends on your use case, i.e. on how you plan to deploy your machine learning solutions in practice. If you're, for example, a company that intends to have the complete pipeline of feature selection + hyperparameter optimization + training running automatically every day/week/month/year/whatever, you'll also be interested in the performance of that complete pipeline, and you'll want the first solution. If, on the other hand, you can only afford to do the feature selection + hyperparameter optimization a single time in your life, and afterwards only somewhat regularly re-train your algorithm (with feature-set and hyperparam values fixed), then the performance of only that step will be what you're interested in, and you should go for the second solution.
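In scikit-learn, the first solution can be sketched roughly as a pipeline whose feature selection and hyperparameter search are re-run inside every outer fold, so that the outer cross-validation score estimates the performance of the whole pipeline; the dataset, selector and parameter grid below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=0)

pipe = Pipeline([("select", SelectKBest(f_classif)),   # feature selection step
                 ("clf", SVC())])                      # classifier step
param_grid = {"select__k": [5, 10, 20],                # feature-selection "hyperparameter"
              "clf__C": [0.1, 1, 10]}                  # classifier hyperparameter

inner = GridSearchCV(pipe, param_grid, cv=3)           # tunes features + hyperparams per fold
outer_scores = cross_val_score(inner, X, y, cv=5)      # estimate of the complete pipeline
print(outer_scores.mean())
```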
{ "source": [ "https://stats.stackexchange.com/questions/264533", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/119642/" ] }
264,546
I am going through the following blog on LSTM neural network: http://machinelearningmastery.com/understanding-stateful-lstm-recurrent-neural-networks-python-keras/ The author reshapes the input vector X as [samples, time steps, features] for different configurations of LSTMs. The author writes Indeed, the sequences of letters are time steps of one feature rather than one time step of separate features. We have given more context to the network, but not more sequence as it expected What does this mean?
I found this just below the [samples, time_steps, features] you are concerned with. X = numpy.reshape(dataX, (len(dataX), seq_length, 1)) Samples - This is len(dataX), or the number of data points you have. Time steps - This is equivalent to the number of time steps you run your recurrent neural network. If you want your network to have memory of 60 characters, this number should be 60. Features - this is the number of features at every time step. If you are processing pictures, this is the number of pixels. In this case you seem to have 1 feature per time step.
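Here is a tiny concrete illustration of the shape convention (the numbers are made up, not taken from the tutorial).

```python
import numpy as np

# 100 sequences, each 60 characters long, 1 feature per time step.
dataX = np.arange(100 * 60).reshape(100, 60)     # 100 samples of 60 time steps
X = np.reshape(dataX, (len(dataX), 60, 1))       # -> (samples, time_steps, features)
print(X.shape)                                   # (100, 60, 1)

# If each time step instead carried, say, 3 features, the last axis would be 3:
X3 = np.zeros((100, 60, 3))
print(X3.shape)                                  # (100, 60, 3)
```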
{ "source": [ "https://stats.stackexchange.com/questions/264546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/78390/" ] }
265,024
I was reading an article and I saw the following sentence: For a given martingale, if it has an upper or a lower bound, then the martingale must converge (a.s.). Since the likelihood is always nonnegative, 0 is a lower bound. What does "a.s." stand for? Is it a common usage? My guess is "asymptotically" but I'd like to verify.
It stands for "almost surely," i.e. the probability of this occurring is 1. See: https://en.wikipedia.org/wiki/Almost_surely
{ "source": [ "https://stats.stackexchange.com/questions/265024", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41166/" ] }
265,094
Is it true that Bayesian methods don't overfit? (I saw some papers and tutorials making this claim) For example, if we apply a Gaussian Process to MNIST (handwritten digit classification), but only show it a single sample, will it revert to the prior distribution for any inputs different from that single sample, however small the difference?
No, it is not true. Bayesian methods will certainly overfit the data. There are a couple of things that make Bayesian methods more robust against overfitting and you can make them more fragile as well. The combinatoric nature of Bayesian hypotheses, rather than binary hypotheses allows for multiple comparisons when someone lacks the "true" model for null hypothesis methods. A Bayesian posterior effectively penalizes an increase in model structure such as adding variables while rewarding improvements in fit. The penalties and gains are not optimizations as would be the case in non-Bayesian methods, but shifts in probabilities from new information. While this generally gives a more robust methodology, there is an important constraint and that is using proper prior distributions. While there is a tendency to want to mimic Frequentist methods by using flat priors, this does not assure a proper solution. There are articles on overfitting in Bayesian methods and it appears to me that the sin seems to be in trying to be "fair" to non-Bayesian methods by starting with strictly flat priors. The difficulty is that the prior is important in normalizing the likelihood. Bayesian models are intrinsically optimal models in Wald's admissibility sense of the word, but there is a hidden bogeyman in there. Wald is assuming the prior is your true prior and not some prior you are using so that editors won't ding you for putting too much information in it. They are not optimal in the same sense that Frequentist models are. Frequentist methods begin with the optimization of minimizing the variance while remaining unbiased. This is a costly optimization in that it discards information and is not intrinsically admissible in the Wald sense, though it frequently is admissible. So Frequentist models provide an optimal fit to the data, given unbiasedness. Bayesian models are neither unbiased nor optimal fits to the data. This is the trade you are making to minimize overfitting. Bayesian estimators are intrinsically biased estimators, unless special steps are taken to make them unbiased, that are usually a worse fit to the data. Their virtue is that they never use less information than an alternative method to find the "true model" and this additional information makes Bayesian estimators never more risky than alternative methods, particularly when working out of sample. That said, there will always exist a sample that could have been randomly drawn that would systematically "deceive" the Bayesian method. As to the second part of your question, if you were to analyze a single sample, the posterior would be forever altered in all its parts and would not revert to the prior unless there was a second sample that exactly cancelled out all the information in the first sample. At least theoretically this is true. In practice, if the prior is sufficiently informative and the observation sufficiently uninformative, then the impact could be so small that a computer could not measure the differences because of the limitation on the number of significant digits. It is possible for an effect to be too small for a computer to process a change in the posterior. So the answer is "yes" you can overfit a sample using a Bayesian method, particularly if you have a small sample size and improper priors. The second answer is "no" Bayes theorem never forgets the impact of prior data, though the effect could be so small you miss it computationally.
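The point about the posterior being forever altered can be illustrated with a simple conjugate toy model; the Beta-Bernoulli example and its numbers are my own illustration, not part of the argument above.

```python
from scipy import stats

prior = stats.beta(2, 2)                 # prior on a success probability
posterior = stats.beta(2 + 1, 2)         # after observing a single success

print("prior mean    :", prior.mean())       # 0.5
print("posterior mean:", posterior.mean())   # 0.6 - the single sample has moved it
# Only another observation carrying exactly the opposite information (here, one
# failure) would bring the posterior mean back to the prior's value - and even
# then the posterior is Beta(3, 3), not the original Beta(2, 2).
```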
{ "source": [ "https://stats.stackexchange.com/questions/265094", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/35791/" ] }
265,123
I understand how we get 3.5 as the expected value for rolling a fair 6-sided die. But intuitively, I expect each face with an equal chance of 1/6. So shouldn't the expected value of rolling a die be any of the numbers between 1 and 6 with equal probability? In other words, when asked the question 'what's the expected value of throwing a fair 6-sided die?', one should answer 'oh, it can be anything between 1-6 with equal chance'. Instead it's 3.5. Intuitively, in the real world, can someone explain how 3.5 is the value I should expect on throwing a die? Again I don't want the formula or the derivation for the expectation.
Imagine that you are in Paris in 1654 and you and your friend are observing a gambling game based on the sequential rolling of a six-sided die. Now, gambling is highly illegal and busts by the gendarmes are quite frequent, and to be caught at a table with stacks of livres is to almost surely guarantee a lengthy stint in the Chateau d'If. To get around this you and your friend have a gentleman's agreement on a bet made between the two of you prior to the last die roll. He agrees to pay you five livres if you observe two sixes in the next five rolls of the die, and you agree to pay him the same amount if two ones are rolled, with no other action if these combinations do not come up. Now, the last die roll is a six so you are on the edge of your seat, figuratively. At this moment, heavily armed guardsmen burst into the den and arrest everyone at the table, and the crowd disperses. Your friend believes that the bet made between the two of you is now invalidated. However, you believe that he should pay you some amount as one six has already been rolled. What is a fair way of settling this dispute between the two of you? (This is my interpretation of the origins of the expected value as presented here and discussed in greater detail here ) Let's answer this question of fair value in a non-rigorous way. The amount your friend should pay you can be calculated in the following manner. Consider all possible rolls of the remaining four dice. Some sets of rolls (namely those containing at least one six) will result in your friend paying out the agreed amount. However, other sets (namely, those not containing a single six) will result in you receiving no money. How do you balance the possibility of these two types of rolls happening? Simple: average out the amount you would have been paid over ALL possible rolls. However, your friend, however unlikely it may be, can still win his bet! You have to consider the number of times two ones will be rolled in the remaining four dice, and average out the amount you would pay him over the number of all possible rolls of four dice. This is the fair amount you should pay your friend for his bet. Thus the amount you end up getting is the amount your friend should pay you, minus what you should pay your friend. This is why we call it the "expected value". It is the average amount you expect to receive if you are able to simulate an event happening in multiple simultaneous universes.
{ "source": [ "https://stats.stackexchange.com/questions/265123", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146856/" ] }
265,133
Suppose $X,Y$ are independent $N(1,1)$. How can I calculate $P(2-X<Y<X)$?
{ "source": [ "https://stats.stackexchange.com/questions/265133", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/151520/" ] }
266,387
Can AUC-ROC values be between 0-0.5? Does the model ever output values between 0 and 0.5?
A perfect predictor gives an AUC-ROC score of 1, while a predictor that makes random guesses has an AUC-ROC score of 0.5. If you get a score of 0, that means the classifier is perfectly incorrect: it is predicting the incorrect class 100% of the time. If you just changed the prediction of this classifier to the opposite choice, then it could predict perfectly and have an AUC-ROC score of 1. So in practice, if you get an AUC-ROC score between 0 and 0.5, you might have a mistake in the way you labeled your classifier targets or you might have a bad training algorithm. If you get a score of 0.2, this shows that the data contain enough information to get a score of 0.8, but something went wrong.
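A quick sanity check of the flipping argument, with a made-up toy example:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0]
p_bad  = [0.9, 0.8, 0.1, 0.2, 0.3, 0.7]   # a "perfectly incorrect" scorer

auc = roc_auc_score(y_true, p_bad)
auc_flipped = roc_auc_score(y_true, [1 - p for p in p_bad])
print(auc, auc_flipped)                   # 0.0 and 1.0
```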
{ "source": [ "https://stats.stackexchange.com/questions/266387", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/149881/" ] }
266,968
If we process say 10 examples in a batch, I understand we can sum the loss for each example, but how does backpropagation work in regard to updating the weights for each example? For example: Example 1 --> loss = 2 Example 2 --> loss = -2 This results in an average loss of 0 (E = 0), so how would this update each weight and converge? Is it simply by the randomization of the batches that we "hopefully" converge sooner or later? Also doesn't this only compute the gradient for the first set of weights for the last example processed?
Gradient descent doesn't quite work the way you suggested, but a similar problem can occur. We don't calculate the average loss from the batch; we calculate the average gradients of the loss function. The gradients are the derivatives of the loss with respect to the weights, and in a neural network the gradient for one weight depends on the inputs of that specific example and also on many other weights in the model. If your model has 5 weights and you have a mini-batch size of 2 then you might get this: Example 1. Loss=2, $\text{gradients}=(1.5,-2.0,1.1,0.4,-0.9)$ Example 2. Loss=3, $\text{gradients}=(1.2,2.3,-1.1,-0.8,-0.7)$ The average of the gradients in this mini-batch is calculated; it is $(1.35,0.15,0,-0.2,-0.8)$. The benefit of averaging over several examples is that the variation in the gradient is lower, so the learning is more consistent and less dependent on the specifics of one example. Notice how the average gradient for the third weight is $0$, so this weight won't change in this update, but it will likely be non-zero for the next batch of examples, whose gradients are computed with the updated weights. edit in response to comments: In my example above the average of the gradients is computed. For a mini-batch of size $k$ we calculate the loss $L_i$ for each example, and we aim to get the average gradient of the loss with respect to a weight $w_j$. The way I wrote it in my example, I averaged the gradients like this: $\frac{\partial L}{\partial w_j} = \frac{1}{k} \sum_{i=1}^{k} \frac{\partial L_i}{\partial w_j}$ The tutorial code you linked to in the comments uses Tensorflow to minimize the average loss. Tensorflow aims to minimize $\frac{1}{k} \sum_{i=1}^{k} L_i$ To minimize this, it computes the gradients of the average loss with respect to each weight and uses gradient descent to update the weights: $\frac{\partial L}{\partial w_j} = \frac{\partial }{\partial w_j} \frac{1}{k} \sum_{i=1}^{k} L_i$ The differentiation can be brought inside the sum, so it's the same as the expression from the approach in my example. $\frac{\partial }{\partial w_j} \frac{1}{k} \sum_{i=1}^{k} L_i = \frac{1}{k} \sum_{i=1}^{k} \frac{\partial L_i}{\partial w_j}$
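A small numeric check (toy weights and inputs, chosen arbitrarily) that averaging per-example gradients is the same as taking the gradient of the average loss, for a linear model with squared error:

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])                 # 3 weights
X = np.array([[1.0, 2.0, 0.5],                 # mini-batch of 2 examples
              [0.3, -1.0, 1.5]])
y = np.array([1.0, 0.0])

def grad_single(x_i, y_i, w):                  # d/dw of (w.x_i - y_i)^2
    return 2 * (w @ x_i - y_i) * x_i

per_example = np.array([grad_single(X[i], y[i], w) for i in range(2)])
avg_of_grads = per_example.mean(axis=0)

# Gradient of the average loss (1/k) * sum_i (w.x_i - y_i)^2, computed directly:
grad_of_avg = (2 * (X @ w - y) @ X) / 2        # divide by k = 2 examples
print(avg_of_grads)
print(grad_of_avg)                             # identical (up to rounding)
```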
{ "source": [ "https://stats.stackexchange.com/questions/266968", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/69633/" ] }
266,996
What do the terms "dense" and "sparse" mean in the context of neural networks (NNs)? What is the difference between them? Why are they so called?
In mathematics, "sparse" and "dense" often refer to the number of zero vs. non-zero elements in an array (e.g. vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entries. A dense array contains mostly non-zeros. There's no hard threshold for what counts as sparse; it's a loose term, but can be made more specific. For example, a vector is $k$ -sparse if it contains at most $k$ non-zero entries. Another way of saying this is that the vector's $\ell_0$ norm is $k$ . The usage of these terms in the context of neural networks is similar to their usage in other fields. In the context of NNs, things that may be described as sparse or dense include the activations of units within a particular layer , the weights , and the data . One could also talk about "sparse connectivity", which refers to the situation where only a small subset of units are connected to each other . This is a similar concept to sparse weights, because a connection with zero weight is effectively unconnected. "Sparse array" can also refer to a class of data types that are efficient for representing arrays that are sparse. This is a concept within the domain of programming languages. It's related to, but distinct from the mathematical concept.
{ "source": [ "https://stats.stackexchange.com/questions/266996", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/152382/" ] }
267,393
I am wondering if maximum likelihood estimation is ever used in statistics. We learn the concept of it but I wonder when it is actually used. If we assume the distribution of the data, we find two parameters, one for the mean and one for the variance, but do you actually use it in real situations? Can somebody tell me a simple case in which it is used?
I am wondering if maximum likelihood estimation is ever used in statistics. Certainly! Actually quite a lot -- but not always. We learn the concept of it but I wonder when it is actually used. When people have a parametric distributional model, they quite often choose to use maximum likelihood estimation. When the model is correct, there are a number of handy properties of maximum likelihood estimators. For one example -- the use of generalized linear models is quite widespread and in that case the parameters describing the mean are estimated by maximum likelihood. It can happen that some parameters are estimated by maximum likelihood and others are not. For example, consider an overdispersed Poisson GLM -- the dispersion parameter won't be estimated by maximum likelihood, because the MLE is not useful in that case. If we assume the distribution of the data, we find two parameters Well, sometimes you might have two, but sometimes you have one parameter, sometimes three or four or more. one for the mean and one for the variance, Are you thinking of a particular model perhaps? This is not always the case. Consider estimating the parameter of an exponential distribution or a Poisson distribution, or a binomial distribution. In each of those cases, there's one parameter and the variance is a function of the parameter that describes the mean. Or consider a generalized gamma distribution, which has three parameters. Or a four-parameter beta distribution, which has (perhaps unsurprisingly) four parameters. Note also that (depending on the particular parameterization) the mean or the variance or both might not be represented by a single parameter but by functions of several of them. For example, the gamma distribution, for which there are three parameterizations that see fairly common use -- the two most common of which have both the mean and the variance being functions of two parameters. Typically in a regression model or a GLM, or a survival model (among many other model types), the model may depend on multiple predictors, in which case the distribution associated with each observation under the model may have a parameter of its own (or even several parameters) related to many predictor variables ("independent variables").
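As a small worked example (with a made-up rate and sample), the maximum likelihood estimate of a Poisson rate is simply the sample mean, which can be confirmed by maximising the log-likelihood numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
counts = rng.poisson(lam=3.7, size=1000)

def neg_log_lik(lam):
    # Poisson log-likelihood, up to a constant that does not depend on lambda
    return -(np.sum(counts) * np.log(lam) - counts.size * lam)

mle = minimize_scalar(neg_log_lik, bounds=(0.01, 20), method="bounded").x
print("sample mean:", counts.mean(), "  numerical MLE:", mle)   # essentially equal
```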
{ "source": [ "https://stats.stackexchange.com/questions/267393", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/116165/" ] }
268,126
I am learning survival analysis from this post on UCLA IDRE and got tripped up at section 1.2.1. The tutorial says: ... if the survival times were known to be exponentially distributed, then the probability of observing a survival time ... Why are survival times assumed to be exponentially distributed? It seems very unnatural to me. Why not normally distributed? Suppose we are investigating some creature's life span under a certain condition (measured in, say, number of days); shouldn't it be centered around some number with some variance (say, 100 days with a variance of 3 days)? If we want time to be strictly positive, why not use a normal distribution with a higher mean and a very small variance (which would have almost no chance of producing a negative number)?
Exponential distributions are often used to model survival times because they are the simplest distributions that can be used to characterize survival / reliability data. This is because they are memoryless, and thus the hazard function is constant w/r/t time, which makes analysis very simple. This kind of assumption may be valid, for example, for some kinds of electronic components like high-quality integrated circuits. I'm sure you can think of more examples where the effect of time on hazard can safely be assumed to be negligible. However, you are correct to observe that this would not be an appropriate assumption to make in many cases. Normal distributions can be alright in some situations, though obviously negative survival times are meaningless. For this reason, lognormal distributions are often considered. Other common choices include Weibull, Smallest Extreme Value, Largest Extreme Value, Log-logistic, etc. A sensible choice of model would be informed by subject-area experience and probability plotting. You can also, of course, consider non-parametric modeling. A good reference for classical parametric modeling in survival analysis is: William Q. Meeker and Luis A. Escobar (1998). Statistical Methods for Reliability Data, Wiley.
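To see the constant-hazard property this answer leans on, here is a small Python sketch (my own illustration; the shape and scale values are arbitrary) comparing the hazard function $h(t) = f(t)/S(t)$ of an exponential distribution with that of a Weibull distribution:

```python
import numpy as np
from scipy import stats

t = np.linspace(0.1, 5.0, 50)

# Hazard function: pdf divided by the survival function S(t) = 1 - F(t).
def hazard(dist, t):
    return dist.pdf(t) / dist.sf(t)

exp_hazard = hazard(stats.expon(scale=1.0), t)        # constant in t
weib_hazard = hazard(stats.weibull_min(c=2.0), t)     # grows with t ("wear-out")

print(np.round(exp_hazard[:5], 3))    # all 1.0: memoryless, no ageing
print(np.round(weib_hazard[:5], 3))   # increasing: risk rises with age
```

The flat exponential hazard is exactly the "no effect of time" assumption discussed above; the rising Weibull hazard is one of the standard ways to relax it.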
{ "source": [ "https://stats.stackexchange.com/questions/268126", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113777/" ] }
268,133
I have an adjudication automation problem. I have many lists of different instruments, each with a set of specifications. These specifications were sent to vendors that tender on the instruments. I receive back information from the vendors specifying what they can provide. I now need to compare the requirement with what they can supply and label the comparison of each specification with a "C" for compliant, "NC" for non-compliant, "PC" for partially compliant, "N/A" for not applicable and "INA" for information not available. The judgement cannot be made with a simple comparison, as it may depend on some of the other specification fields of the instrument. The specifications are mostly text, but sometimes numbers as well. Here is an example of a table with some data: Limited Data Example. I want to transform the text into features, but I have difficulty in determining the best route. One way I could proceed is to vectorize the strings by tokenizing and building a bag of words, but I do not know if this will generalise well because of the comparative nature of the task; I would then create three bags of words (one for the spec value, one for the vendor value, and one for the line number, which is alphanumerical) and stack the vectors together as a feature vector. The second way I am considering is doing several similarity tests between the specified value and the vendor-provided value and using the outcomes of the similarity tests as features for training. How should I typically start with such a classification? I want to complete this problem in C#.
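For what it's worth, the bag-of-words stacking described in the first option might look like the following sketch (Python/scikit-learn for illustration only, since the asker is targeting C#; the spec/vendor strings are invented, and a single shared vocabulary is used so the two blocks have aligned columns):

```python
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import hstack

# Invented spec/vendor text pairs standing in for the real table rows.
spec_values   = ["range 0-100 bar", "stainless steel body", "4-20 mA output"]
vendor_values = ["0 to 100 bar",    "316 SS housing",       "output 4-20mA"]

# Fit one vocabulary over both columns, then stack the two bag-of-words blocks.
vectorizer = CountVectorizer().fit(spec_values + vendor_values)
features = hstack([vectorizer.transform(spec_values),
                   vectorizer.transform(vendor_values)])

print(features.shape)   # one row per line item, two stacked bag-of-words blocks
```

The second option (similarity scores between the two strings as features) would replace the stacked blocks with a handful of numeric columns instead.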
{ "source": [ "https://stats.stackexchange.com/questions/268133", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/153521/" ] }
268,638
I am confused about the definition of a non-parametric model after reading this link, Parametric vs Nonparametric Models, and the answer comments on another question of mine. Originally I thought "parametric vs non-parametric" refers to whether we make distributional assumptions in the model (similar to parametric vs non-parametric hypothesis testing). But both of the resources claim that "parametric vs non-parametric" can be determined by whether the number of parameters in the model depends on the number of rows in the data matrix. For kernel density estimation (non-parametric) such a definition can be applied. But under this definition, how can a neural network be a non-parametric model, given that the number of parameters in the model depends on the neural network structure and not on the number of rows in the data matrix? What exactly is the difference between a parametric and a non-parametric model?
In a parametric model, the number of parameters is fixed with respect to the sample size. In a nonparametric model, the (effective) number of parameters can grow with the sample size. In an OLS regression, the number of parameters will always be the length of $\beta$, plus one for the variance. A neural net with fixed architecture and no weight decay would be a parametric model. But if you have weight decay, then the value of the decay parameter selected by cross-validation will generally get smaller with more data. This can be interpreted as an increase in the effective number of parameters with increasing sample size.
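A small Python sketch of my own (not from the answer) makes the contrast visible: an ordinary least-squares fit stores a fixed number of coefficients no matter how much data it sees, while a kernel density estimate carries every observation with it, so its effective size grows with the sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

for n in (50, 500, 5000):
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)

    # Parametric: the fitted object is summarized by a fixed set of numbers.
    ols = stats.linregress(x, y)            # slope, intercept, etc.

    # Nonparametric: the KDE must keep all n observations to evaluate itself.
    kde = stats.gaussian_kde(x)

    print(n, "| OLS coefficients: 2 (slope, intercept)",
          "| KDE stored points:", kde.dataset.shape[1])
```

The weight-decay example in the answer is subtler, because the parameter count of the network itself never changes; it is the effective flexibility, controlled by the cross-validated decay, that grows with the data.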
{ "source": [ "https://stats.stackexchange.com/questions/268638", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113777/" ] }
268,671
I have a large dataset describing numerous customers' behaviour and I am trying to solve a binary classification problem with a null accuracy of 90% (a 90/10 distribution between the two classes). Given that I have computational limitations and am thus forced to take a subset of the sample, would it make sense for me to manipulate the balance to, let's say, 60/40 or 50/50 in my sample, now that I am limited to a fixed number of total observations due to my hardware, just to "expose the machine learning algorithm to more of both classes" (from a marginal utility point of view)? I have found multiple discussions about this online, but not about this exact situation. I am very much aware of the fact that it would be optimal to just use ALL observations, and that subsampling will distort the true distribution, but my rationale is that the problem is nothing like a poll sample; rather, the idea is to feed the algorithm more examples of observations that it hasn't seen that many times. The following guide states: "Consider testing under-sampling when you have a lot of data (tens or hundreds of thousands of instances or more)." Would this negatively impact the performance of the machine learning algorithm, and thus my prediction model, so that I will get worse classifications on a 90/10 test set? And would someone be able to explain to me why?
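(Not an answer, just to pin down the procedure being asked about: the random undersampling described above might look like the following Python sketch, with invented arrays standing in for the customer data and a 60/40 target split.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 90/10 labels and features standing in for the real customer data.
y = np.array([0] * 90_000 + [1] * 10_000)
X = rng.normal(size=(y.size, 5))

# Keep every minority row and a random subset of majority rows (~60/40 split).
minority = np.flatnonzero(y == 1)
majority = rng.choice(np.flatnonzero(y == 0),
                      size=int(1.5 * minority.size), replace=False)
keep = np.concatenate([minority, majority])

X_train, y_train = X[keep], y[keep]
print(np.bincount(y_train))   # 15,000 majority vs 10,000 minority
```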
{ "source": [ "https://stats.stackexchange.com/questions/268671", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/153848/" ] }
268,755
In a recent colloquium, the speaker's abstract claimed they were using machine learning. During the talk, the only thing related to machine learning was that they performed linear regression on their data. After calculating the best-fit coefficients in 5D parameter space, they compared these coefficients in one system to the best-fit coefficients of other systems. When is linear regression machine learning, as opposed to simply finding a best-fit line? (Was the researcher's abstract misleading?) With all the attention machine learning has been garnering recently, it seems important to make such distinctions. My question is like this one, except that that question asks for the definition of "linear regression", whereas mine asks when linear regression (which has a broad range of applications) may appropriately be called "machine learning". Clarifications: I'm not asking when linear regression is the same as machine learning. As some have pointed out, a single algorithm does not constitute a field of study. I'm asking when it's correct to say that one is doing machine learning when the algorithm one is using is simply a linear regression. All jokes aside (see comments), one of the reasons I ask this is that it is unethical to claim to be doing machine learning, just to add a few gold stars to one's name, if one isn't really doing machine learning. (Many scientists calculate some type of best-fit line for their work, but this does not mean that they are doing machine learning.) On the other hand, there are clearly situations when linear regression is being used as part of machine learning. I'm looking for experts to help me classify these situations. ;-)
Answering your question with a question: what exactly is machine learning? Trevor Hastie, Robert Tibshirani and Jerome Friedman in The Elements of Statistical Learning, Kevin P. Murphy in Machine Learning: A Probabilistic Perspective, Christopher Bishop in Pattern Recognition and Machine Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville in Deep Learning, and a number of other machine learning "bibles" mention linear regression as one of the machine learning "algorithms". Machine learning is partly a buzzword for applied statistics, and the distinction between statistics and machine learning is often blurry.
{ "source": [ "https://stats.stackexchange.com/questions/268755", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85943/" ] }