Columns: source_id (int64), question (string), response (string), metadata (dict)
268,820
I'm curious about how gradients are back-propagated through a neural network using ResNet modules/skip connections. I've seen a couple of questions about ResNet (e.g. Neural network with skip-layer connections) but this one is asking specifically about back-propagation of gradients during training. The basic architecture is here: I read this paper, Study of Residual Networks for Image Recognition, and in Section 2 they talk about how one of the goals of ResNet is to allow a shorter/clearer path for the gradient to back-propagate to the base layer. Can anyone explain how the gradient is flowing through this type of network? I don't quite understand how the addition operation, and lack of a parameterized layer after addition, allows for better gradient propagation. Does it have something to do with how the gradient doesn't change when flowing through an add operator and is somehow redistributed without multiplication? Furthermore, I can understand how the vanishing gradient problem is alleviated if the gradient doesn't need to flow through the weight layers, but if there's no gradient flow through the weights then how do they get updated after the backward pass?
Add sends the gradient back equally to both inputs. You can convince yourself of this by running the following in tensorflow: import tensorflow as tf graph = tf.Graph() with graph.as_default(): x1_tf = tf.Variable(1.5, name='x1') x2_tf = tf.Variable(3.5, name='x2') out_tf = x1_tf + x2_tf grads_tf = tf.gradients(ys=[out_tf], xs=[x1_tf, x2_tf]) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) fd = { out_tf: 10.0 } print(sess.run(grads_tf, feed_dict=fd)) Output: [1.0, 1.0] So, the gradient will be: passed back to previous layers, unchanged, via the skip-layer connection, and also passed to the block with weights, and used to update those weights Edit: there is a question: "what is the operation at the point where the highway connection and the neural net block join back together again, at the bottom of Figure 2?" The answer is: they are summed. You can see this from Figure 2's formula: $$ \mathbf{\text{output}} \leftarrow \mathcal{F}(\mathbf{x}) + \mathbf{x} $$ What this says is that: the values in the bus ($\mathbf{x}$) are added to the results of passing the bus values, $\mathbf{x}$, through the network, i.e. $\mathcal{F}(\mathbf{x})$, to give the output from the residual block, which I've labelled here as $\mathbf{\text{output}}$ Edit 2: Rewriting in slightly different words: in the forwards direction, the input data flows down the bus at points along the bus, residual blocks can learn to add/remove values to the bus vector in the backwards direction, the gradients flow back down the bus along the way, the gradients update the residual blocks they move past the residual blocks will themselves modify the gradients slightly too The residual blocks do modify the gradients flowing backwards, but there are no 'squashing' or 'activation' functions that the gradients flow through. 'squashing'/'activation' functions are what cause the exploding/vanishing gradient problem, so by removing those from the bus itself, we mitigate this problem considerably. Edit 3: Personally I imagine a resnet in my head as the following diagram. It's topologically identical to figure 2, but it shows more clearly perhaps how the bus just flows straight through the network, whilst the residual blocks just tap the values from it, and add/remove some small vector against the bus:
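For a more current sketch of the same point, here is a minimal example (assuming TensorFlow 2.x with eager execution, unlike the TF1 session code above; the toy weight values are made up for illustration). It shows that the skip path passes the upstream gradient through unchanged, while the weighted branch F(x) still receives a gradient of its own, which is how its weights get updated:

```python
# Minimal sketch: gradient at a residual addition, out = F(x) + x with F(x) = w * x.
import tensorflow as tf

x = tf.Variable([1.5, 3.5])      # the "bus" input
w = tf.Variable([0.2, -0.4])     # toy stand-in for the residual block's weights

with tf.GradientTape() as tape:
    f_x = w * x                    # F(x): the parameterised branch
    out = tf.reduce_sum(x + f_x)   # skip connection: output = F(x) + x

dx, dw = tape.gradient(out, [x, w])
print(dx.numpy())  # 1 + w for each element: the "+ x" path contributes exactly 1, unchanged
print(dw.numpy())  # equals x: the block's weights still see a gradient and get updated
```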
{ "source": [ "https://stats.stackexchange.com/questions/268820", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68268/" ] }
269,405
Sorry if this has been answered elsewhere, I haven't been able to find it. I am wondering why we take the square root, in particular, of variance to create the standard deviation? What is it about taking the square root that produces a useful value?
In some sense this is a trivial question, but in another, it is actually quite deep! As others have mentioned, taking the square root implies $\operatorname{Stdev}(X)$ has the same units as $X$. Taking the square root gives you absolute homogeneity aka absolute scalability. For any scalar $\alpha$ and random variable $X$, we have: $$ \operatorname{Stdev}[\alpha X] = |\alpha| \operatorname{Stdev}[X]$$ Absolute homogeneity is a required property of a norm. The standard deviation can be interpreted as a norm (on the vector space of mean zero random variables) in a similar way that $\sqrt{x^2 + y^2+z^2}$ is the standard Euclidean norm in a three-dimensional space. The standard deviation is a measure of distance between a random variable and its mean. Standard deviation and the $L_2$ norm Finite dimension case: In an $n$ dimensional vector space, the standard Euclidean norm aka the $L_2$ norm is defined as: $$\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$$ More broadly, the $p$-norm $\|\mathbf{x}\|_p = \left(\sum_i |x_i|^p \right)^{\frac{1}{p}}$ takes the $p$th root to get absolute homogeneity: $\|\alpha \mathbf{x}\|_p = \left( \sum_i |\alpha x_i|^p \right)^\frac{1}{p} = | \alpha | \left( \sum_i |x_i|^p \right)^\frac{1}{p} = |\alpha | \|\mathbf{x}\|_p $. If you have weights $q_i$ then the weighted sum $\sqrt{\sum_i x_i^2 q_i}$ is also a valid norm. Furthermore, it's the standard deviation if $q_i$ represent probabilities and $\operatorname{E}[\mathbf{x}] \equiv \sum_i x_i q_i = 0$. Infinite dimension case: In an infinite dimensional Hilbert Space we similarly may define the $L_2$ norm: $$ \|X\|_2 = \sqrt{\int_\omega X(\omega)^2 dP(\omega) }$$ If $X$ is a mean zero random variable and $P$ is the probability measure, what's the standard deviation? It's the same: $\sqrt{\int_\omega X(\omega)^2 dP(\omega) }$. Summary: Taking the square root means the standard deviation satisfies absolute homogeneity, a required property of a norm. On a space of random variables, $\langle X, Y \rangle = \operatorname{E}[XY]$ is an inner product and $\|X\|_2 = \sqrt{\operatorname{E}[X^2]}$ the norm induced by that inner product. Thus the standard deviation is the norm of a demeaned random variable: $$\operatorname{Stdev}[X] = \|X - \operatorname{E}[X]\|_2$$ It's a measure of distance from mean $\operatorname{E}[X]$ to $X$. (Technical point: while $\sqrt{\operatorname{E}[X^2]}$ is a norm, the standard deviation $\sqrt{\operatorname{E}[(X - \operatorname{E}[X])^2]}$ isn't a norm over random variables in general because a requirement for a normed vector space is $\|x\| = 0$ if and only if $x = \mathbf{0}$. A standard deviation of 0 doesn't imply the random variable is the zero element.)
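As a quick numerical illustration of the absolute-homogeneity point above (a NumPy sketch with arbitrary simulated data, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)
a = -2.5

# Stdev(aX) = |a| * Stdev(X): it scales like a norm...
print(np.std(a * x), abs(a) * np.std(x))
# ...whereas the variance picks up a factor a**2 instead of |a|.
print(np.var(a * x), a**2 * np.var(x))
```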
{ "source": [ "https://stats.stackexchange.com/questions/269405", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67137/" ] }
270,546
Need to understand the working of the 'Embedding' layer in the Keras library. I execute the following code in Python import numpy as np from keras.models import Sequential from keras.layers import Embedding model = Sequential() model.add(Embedding(5, 2, input_length=5)) input_array = np.random.randint(5, size=(1, 5)) model.compile('rmsprop', 'mse') output_array = model.predict(input_array) which gives the following output input_array = [[4 1 3 3 3]] output_array = [[[ 0.03126476 0.00527241] [-0.02369716 -0.02856163] [ 0.0055749 0.01492429] [ 0.0055749 0.01492429] [ 0.0055749 0.01492429]]] I understand that each value in the input_array is mapped to a 2-element vector in the output_array, so a 1 X 5 vector gives 1 X 5 X 2 vectors. But how are the mapped values computed?
In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. That is the reason why you need to specify the size of the vocabulary as the first argument (so the table can be initialized). The most common application of this layer is for text processing. Let's see a simple example. Our training set consists only of two phrases: Hope to see you soon Nice to see you again So we can encode these phrases by assigning each word a unique integer number (by order of appearance in our training dataset for example). Then our phrases could be rewritten as: [0, 1, 2, 3, 4] [5, 1, 2, 3, 6] Now imagine we want to train a network whose first layer is an embedding layer. In this case, we should initialize it as follows: Embedding(7, 2, input_length=5) The first argument (7) is the number of distinct words in the training set. The second argument (2) indicates the size of the embedding vectors. The input_length argument, of course, determines the size of each input sequence. Once the network has been trained, we can get the weights of the embedding layer, which in this case will be of size (7, 2) and can be thought as the table used to map integers to embedding vectors: +------------+------------+ | index | Embedding | +------------+------------+ | 0 | [1.2, 3.1] | | 1 | [0.1, 4.2] | | 2 | [1.0, 3.1] | | 3 | [0.3, 2.1] | | 4 | [2.2, 1.4] | | 5 | [0.7, 1.7] | | 6 | [4.1, 2.0] | +------------+------------+ So according to these embeddings, our second training phrase will be represented as: [[0.7, 1.7], [0.1, 4.2], [1.0, 3.1], [0.3, 2.1], [4.1, 2.0]] It might seem counterintuitive at first, but the underlying automatic differentiation engines (e.g., Tensorflow or Theano) manage to optimize these vectors associated with each input integer just like any other parameter of your model. For an intuition of how this table lookup is implemented as a mathematical operation which can be handled by the automatic differentiation engines, consider the embeddings table from the example as a (7, 2) matrix. Then, for a given word, you create a one-hot vector based on its index and multiply it by the embeddings matrix, effectively replicating a lookup. For instance, for the word " soon " the index is 4, and the one-hot vector is [0, 0, 0, 0, 1, 0, 0] . If you multiply this (1, 7) matrix by the (7, 2) embeddings matrix you get the desired two-dimensional embedding, which in this case is [2.2, 1.4] . It is also interesting to use the embeddings learned by other methods/people in different domains (see https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html ) as done in [1]. [1] López-Sánchez, D., Herrero, J. R., Arrieta, A. G., & Corchado, J. M. Hybridizing metric learning and case-based reasoning for adaptable clickbait detection. Applied Intelligence, 1-16.
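To make the lookup-as-matrix-multiplication point concrete, here is a small NumPy sketch using the illustrative (7, 2) table from the answer (the numbers are the made-up example values above, not trained weights):

```python
import numpy as np

# The example embedding table: row i holds the vector for word index i.
embeddings = np.array([[1.2, 3.1], [0.1, 4.2], [1.0, 3.1], [0.3, 2.1],
                       [2.2, 1.4], [0.7, 1.7], [4.1, 2.0]])

index = 4                          # the word "soon"
one_hot = np.eye(7)[index]         # [0, 0, 0, 0, 1, 0, 0]

# One-hot vector times the table reproduces a plain row lookup.
print(one_hot @ embeddings)                                      # [2.2 1.4]
print(np.array_equal(one_hot @ embeddings, embeddings[index]))   # True
```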
{ "source": [ "https://stats.stackexchange.com/questions/270546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86202/" ] }
270,547
I recently came across a case; the context is below: I have a 1000 (rows) x 6 (columns) data set. The variables are date, hour, average temperature, average humidity, the sum of water consumed in one hour, and the average pH value of the water. How should I build a model to predict the daily water consumption in the next few days? The given test data set only has the average temperature, average humidity and the average pH value of the water, so I suppose I should mainly focus on these three variables to build models, right? At the same time, since this data set contains date and hour as variables, there are some missing values, or some time points where no data was recorded. Should I try to fill them back in with KNN or other methods? Best.
{ "source": [ "https://stats.stackexchange.com/questions/270547", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/142967/" ] }
270,548
I have 25 correlated independent variables and one dependent variable that is an aggregated score of a Likert scale. I also have 90 samples. I want to do variable selection for linear regression, so I am using LASSO. In Python I used LASSO CV (coordinate descent). Q1: I can use LASSO AIC/BIC, LASSO CV, or LASSO LARS CV; is there any reason to pick one over the other? Q2: I tried LASSO CV with nested cross-validation (3-fold inner / 3-fold outer), as I was told it would estimate my hyperparameters better since I only have 90 samples, but the 3 models I get are very different from each other. Do I need to do bootstrapping and use non-nested CV instead?
{ "source": [ "https://stats.stackexchange.com/questions/270548", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/152608/" ] }
271,234
Is there any continuous distribution expressible in closed form, whose mean is such that the geometric mean of the samples is an unbiased estimator for that mean? Update: I just realized that my samples have to be positive (or else the geometric mean may not exist) so maybe continuous isn't the right word. How about a distribution which is zero for negative values of the random variable and is continuous for positive values. Something like a truncated distribution.
I believe you are asking what is, if any, the distribution of an r.v. $X$, such that, if we have an i.i.d. sample of size $n>1$ from that distribution, it will hold that $$E[GM] = E\left[\left(\prod_{i=1}^n X_{i}\right)^{1/n}\right] = E(X)$$ Due to the i.i.d. assumption, we have $$E\left[\left(\prod_{i=1}^n X_{i}\right)^{1/n}\right] = E\left(X_1^{1/n}\cdot ...\cdot X_n^{1/n}\right) = E\left(X_1^{1/n}\right)\cdot ...\cdot E\left(X_n^{1/n}\right) = \left[E\left(X^{1/n}\right)\right]^n$$ and so we are asking whether we can have $$\left[E\left(X^{1/n}\right)\right]^n = E(X)$$ But by Jensen's inequality, and the fact that the power function is strictly convex for powers higher than unity, we have that, for a non-degenerate (non-constant) random variable, $$\left[E\left(X^{1/n}\right)\right]^n < E\left[\left(X^{1/n}\right)^n\right] = E(X)$$ So no such distribution exists. Regarding the mention of the log-normal distribution in a comment, what holds is that the geometric mean ($GM$) of the sample from a log-normal distribution is a biased but asymptotically consistent estimator of the median. This is because, for the lognormal distribution it holds that $$E(X^s) = \exp\left\{s\mu + \frac {s^2\sigma^2}{2}\right \}$$ (where $\mu$ and $\sigma$ are the parameters of the underlying normal, not the mean and variance of the log-normal). In our case, $s = 1/n$ so we get $$E(GM) = \left[E\left(X^{1/n}\right)\right]^n = \left[\exp\left\{(\mu/n) + \frac {\sigma^2}{2n^2}\right \}\right]^n = \exp\left\{\mu + \frac {\sigma^2}{2n}\right \}$$ (which tells us that it is a biased estimator of the median). But $$\lim \left[E\left(X^{1/n}\right)\right]^n = \lim \exp\left\{\mu + \frac {\sigma^2}{2n}\right \} = e^{\mu}$$ which is the median of the distribution. One can also show that the variance of the geometric mean of the sample converges to zero, and these two conditions are sufficient for this estimator to be asymptotically consistent - for the median, $$GM \to_p e^{\mu}$$
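A quick Monte Carlo check of the log-normal result above (a NumPy sketch with arbitrary parameter values, not part of the original answer): the average geometric mean tracks $\exp\{\mu + \sigma^2/(2n)\}$, which is biased for the median $e^{\mu}$ but approaches it as $n$ grows.

```python
import numpy as np

mu, sigma = 1.0, 0.8
rng = np.random.default_rng(1)

for n in (2, 10, 100):
    samples = rng.lognormal(mu, sigma, size=(100_000, n))
    gm = np.exp(np.log(samples).mean(axis=1))   # geometric mean of each sample of size n
    # simulated E[GM], the formula exp(mu + sigma^2 / (2n)), and the median exp(mu)
    print(n, gm.mean().round(4), np.exp(mu + sigma**2 / (2 * n)).round(4), np.exp(mu).round(4))
```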
{ "source": [ "https://stats.stackexchange.com/questions/271234", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53608/" ] }
271,434
I am an economics student with some experience with econometrics and R. I would like to know if there is ever a situation where we should include a variable in a regression in spite of it not being statistically significant?
Yes! That a coefficient is statistically indistinguishable from zero does not imply that the coefficient actually is zero, that the coefficient is irrelevant. That an effect does not pass some arbitrary cutoff for statistical significance does not imply one should not attempt to control for it. Generally speaking, the problem at hand and your research design should guide what to include as regressors. Some Quick Examples: And do not take this as an exhaustive list. It's not hard to come up with tons more... 1. Fixed effects A situation where this often occurs is a regression with fixed effects . Let's say you have panel data and want to estimate $b$ in the model: $$ y_{it} = b x_{it} + u_i + \epsilon_{it}$$ Estimating this model with ordinary least squares where $u_i$ are treated as fixed effects is equivalent to running ordinary least squares with an indicator variable for each individual $i$. Anyway, the point is that the $u_i$ variables (i.e. the coefficients on the indicator variables) are often poorly estimated. Any individual fixed effect $u_i$ is often statistically insignificant. But you still include all the indicator variables in the regression if you are taking account of fixed effects. (Further note that most stats packages won't even give you the standard errors for individual fixed effects when you use the built-in methods. You don't really care about significance of individual fixed effects. You probably do care about their collective significance.) 2. Functions that go together... (a) Polynomial curve fitting (hat tip @NickCox in the comments) If you're fitting a $k$th degree polynomial to some curve, you almost always include lower order polynomial terms. E.g. if you were fitting a 2nd order polynomial you would run: $$ y_i = b_0 + b_1 x_i + b_2 x_i^2 + \epsilon_i$$ Usually it would be quite bizarre to force $b_1 = 0$ and instead run $$ y_i = b_0 + b_2 x_i^2 + \epsilon_i$$ but students of Newtonian mechanics will be able to imagine exceptions. (b) AR(p) models: Let's say you were estimating an AR(p) model you would also include the lower order terms. For example for an AR(2) you would run: $$ y_t = b_0 + b_1 y_{t-1} + b_2 y_{t-2} + \epsilon_t$$ And it would be bizarre to run: $$ y_t = b_0 + b_2 y_{t-2} + \epsilon_t$$ (c) Trigonometric functions As @NickCox mentions, $\cos$ and $\sin$ terms similarly tend to go together. For more on that, see e.g. this paper . More broadly... You want to include right-hand side variables when there are good theoretical reasons to do so. And as other answers here and across StackExchange discuss, step-wise variable selection can create numerous statistical problems. It's also important to distinguish between: a coefficient statistically indistinguishable from zero with a small standard error. a coefficient statistically indistinguishable from zero with a large standard error. In the latter case, it's problematic to argue the coefficient doesn't matter. It may simply be poorly measured.
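As a hedged numerical illustration of the polynomial case (a sketch assuming NumPy and statsmodels are available, with made-up coefficients; it is not from the original answer): even when the linear term is individually insignificant, dropping it changes what the remaining coefficients estimate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(-1, 3, size=200)
y = 2.0 + 0.1 * x + 1.5 * x**2 + rng.normal(scale=2.0, size=200)  # small true linear effect

full = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()
restricted = sm.OLS(y, sm.add_constant(x**2)).fit()

print(full.pvalues[1])    # the linear term will often look "insignificant" here
print(full.params)        # yet the full fit recovers roughly (2.0, 0.1, 1.5)
print(restricted.params)  # dropping x shifts the other coefficients to absorb it
```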
{ "source": [ "https://stats.stackexchange.com/questions/271434", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/155650/" ] }
273,537
I was confused about the differences between the F1 score, Dice score and IoU (intersection over union). By now I found out that F1 and Dice mean the same thing (right?) and IoU has a very similar formula to the other two. F1 / Dice: $$\frac{2TP}{2TP+FP+FN}$$ IoU / Jaccard: $$\frac{TP}{TP+FP+FN}$$ Are there any practical differences or other things worth noting except that F1 weights the true-positives higher? Is there a situation where I'd use one but not the other?
You're on the right track. So a few things right off the bat. From the definition of the two metrics, we have that IoU and F score are always within a factor of 2 of each other: $$ F/2 \leq IoU \leq F $$ and also that they meet at the extremes of one and zero under the conditions that you would expect (perfect match and completely disjoint). Note also that the ratio between them can be related explicitly to the IoU: $$ IoU/F = 1/2 + IoU/2 $$ so that the ratio approaches 1/2 as both metrics approach zero. But there's a stronger statement that can be made for the typical application of classification a la machine learning. For any fixed "ground truth", the two metrics are always positively correlated. That is to say that if classifier A is better than B under one metric, it is also better than classifier B under the other metric. It is tempting then to conclude that the two metrics are functionally equivalent so the choice between them is arbitrary, but not so fast! The problem comes when taking the average score over a set of inferences. Then the difference emerges when quantifying how much worse classifier B is than A for any given case. In general, the IoU metric tends to penalize single instances of bad classification more than the F score quantitatively even when they can both agree that this one instance is bad. Similarly to how L2 can penalize the largest mistakes more than L1, the IoU metric tends to have a "squaring" effect on the errors relative to the F score. So the F score tends to measure something closer to average performance, while the IoU score measures something closer to the worst case performance. Suppose for example that the vast majority of the inferences are moderately better with classifier A than B, but some of them are significantly worse using classifier A. It may be the case then that the F metric favors classifier A while the IoU metric favors classifier B. To be sure, both of these metrics are much more alike than they are different. But both of them suffer from another disadvantage from the standpoint of taking averages of these scores over many inferences: they both overstate the importance of sets with little to no actual ground-truth positives. In the common example of image segmentation, if an image only has a single pixel of some detectable class, and the classifier detects that pixel and one other pixel, its F score is a lowly 2/3 and the IoU is even worse at 1/2. Trivial mistakes like these can seriously dominate the average score taken over a set of images. In short, it weights each pixel error inversely proportionally to the size of the selected/relevant set rather than treating them equally. There is a far simpler metric that avoids this problem. Simply use the total error: FN + FP (e.g. 5% of the image's pixels were miscategorized). In the case where one is more important than the other, a weighted average may be used: $c_0$FP + $c_1$FN.
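A small numeric check of the bounds quoted above (plain Python/NumPy, with arbitrary confusion counts): for any TP/FP/FN, $F/2 \leq IoU \leq F$ and $IoU/F = 1/2 + IoU/2$, and the single-pixel example gives F = 2/3 versus IoU = 1/2.

```python
import numpy as np

def f_score(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def iou(tp, fp, fn):
    return tp / (tp + fp + fn)

for tp, fp, fn in [(90, 5, 5), (50, 25, 25), (1, 1, 0)]:   # last triple: the single-pixel example
    f, j = f_score(tp, fp, fn), iou(tp, fp, fn)
    print(round(f, 3), round(j, 3), f / 2 <= j <= f, np.isclose(j / f, 0.5 + j / 2))
```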
{ "source": [ "https://stats.stackexchange.com/questions/273537", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/157173/" ] }
274,135
I am currently studying the Statistical Inference class on Coursera. In one of the assignments, the following question comes up. | Suppose you rolled the fair die twice. What is the probability of rolling the same number two times in a row? 1: 2/6 2: 1/36 3: 0 4: 1/6 Selection: 2 | You're close...I can feel it! Try it again. | Since we don't care what the outcome of the first roll is, its probability is 1. The second roll of the dice has to match the outcome of the first, so that has a probability of 1/6. The probability of both events occurring is 1 * 1/6. I do not understand this bit. I understand that the two die rolls are independent events and their probabilities can be multiplied, so the outcome should be 1/36. Can you please explain why I am wrong?
The probability of rolling a specific number twice in a row is indeed 1/36, because you have a 1/6 chance of getting that number on each of two rolls (1/6 x 1/6). The probability of rolling any number twice in a row is 1/6, because there are six ways to roll a specific number twice in a row (6 x 1/36). Another way to think about it is that you don't care what the first number is, you just need the second number to match it (with probability 1/6 ).
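If it helps, a quick simulation (a NumPy sketch with one million trial pairs) backs up both numbers: matching the first roll, whatever it was, happens about 1/6 of the time, while two rolls of one specific value happen about 1/36 of the time.

```python
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 7, size=(1_000_000, 2))   # two fair die rolls per trial

print((rolls[:, 0] == rolls[:, 1]).mean())               # ~0.1667 = 1/6: same number, any number
print(((rolls[:, 0] == 3) & (rolls[:, 1] == 3)).mean())  # ~0.0278 = 1/36: a specific number (3) twice
```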
{ "source": [ "https://stats.stackexchange.com/questions/274135", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/157594/" ] }
274,151
I'm looking at this site for a workshop on GAM in R: http://qcbs.ca/wiki/r_workshop8 In the end of the section 2. Multiple smooth terms they show an example, where they use anova to compare three different models to determine the best fit model. The output is Analysis of Deviance Table Model 1: y ~ x0 + s(x1) Model 2: y ~ x0 + s(x1) + x2 Model 3: y ~ x0 + s(x1) + s(x2) Resid. Df Resid. Dev Df Deviance Pr(>Chi) 1 394.08 5231.6 2 393.10 4051.3 0.97695 1180.2 < 2.2e-16 *** 3 385.73 1839.5 7.37288 2211.8 < 2.2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Based on this they conclude that model 3 is best. My question is how they see that? My current understanding is: The Pr(>Chi) -value is small for both model 2 and 3, so these are better than model 1. However, what other variable are they using to determine that 3 is better than 2?
The output from anova() is a series of likelihood ratio tests. The lines in the output are: The first line in the output corresponds to the simplest model with only a smooth of x1 (I'm ignoring the factor x0 as it isn't up for consideration in your example) — this is not tested against anything simpler hence the last few column entries are empty. The second line is a likelihood ratio test between the model in line 1 and the model in line 2. At the cost of 0.97695 extra degrees of freedom, the residual deviance is decreased by 1180.2 . This reduction in deviance (or conversely, increase in deviance explained), at the cost of <1 degree of freedom, is highly unlikely if the true effect of x2 were 0. Why 0.97695 degrees of freedom increase? Well, the linear function of x2 would add 1 df to the model but the smoother for x1 will be penalised back a little bit more than before and hence use slightly fewer effective degrees of freedom, hence the <1 change in overall degrees of freedom. The third line is exactly the same as I described above but for a comparison between the model in the second line and the model in the third line: i.e. the third line is evaluating the improvement in moving from modelling x2 as a linear term to modelling x2 as a smooth function. Again, this improvement in model fit (change in deviance is now 2211.8 at the cost of 7.37288 more degrees of freedom) is unlikely if the extra parameters associated with s(x2) were all equal to 0. In summary, line 2 says Model 2 fits better than Model 1, so a linear function of x2 is better than no effect of x1 . But line 3 says that Model 3 fits the data better than Model 2, so a smooth function of x2 is preferred over a linear function of x2 . This is a sequential analysis of models, not a series of comparisons against the simplest model. However… What they're showing is not the best way to do this — recent theory would suggest that the output from summary(m3) would have the most "correct" coverage properties. Furthermore, to select between models, one should probably use select = TRUE when fitting the full model (the one with two smooths), which would allow for shrinkage of terms that would include the model with linear x2 or even no effect of this variable. They're also not fitting using REML or ML smoothness selection, which many of us mgcv users would consider the default option (even though it isn't the actual default in gam() ) What I would do is: library("mgcv") gam_data <- gamSim(eg=5) m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, select = TRUE, method = "REML") summary(m3) The final line produces the following: > summary(m3) Family: gaussian Link function: identity Formula: y ~ x0 + s(x1) + s(x2) Parametric coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.4097 0.2153 39.053 < 2e-16 *** x02 1.9311 0.3073 6.284 8.93e-10 *** x03 4.4241 0.3052 14.493 < 2e-16 *** x04 5.7639 0.3042 18.948 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Approximate significance of smooth terms: edf Ref.df F p-value s(x1) 2.487 9 25.85 <2e-16 *** s(x2) 7.627 9 76.03 <2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 R-sq.(adj) = 0.769 Deviance explained = 77.7% -REML = 892.61 Scale est. = 4.5057 n = 400 We can see that both smooth terms are significantly different from null functions. What select = TRUE is doing is putting an extra penalty on the null space of the penalty (this is the part of the spline that is perfectly smooth). 
If you don't have this, smoothness selection can only penalise a smooth back to a linear function (because the penalty that's doing smoothness selection only works on the non-smooth (the wiggly) parts of the basis). To perform selection we need to be able to penalise the null space (the smooth parts of the basis) as well. select = TRUE achieves this through the use of a second penalty added to all smooth terms in the model (Marra and Wood, 2011). This acts as a kinds of shrinkage, pulling all smooth terms somewhat towards 0, but it will pull superfluous terms towards 0 much more quickly, hence selecting them out of the model if they don't have any explanatory power. We pay a price for this when evaluating the significance of the smooths; note the Ref.df column above (the 9 comes from the default value of k = 10 , which for thin plate splines with centring constraints means 9 basis functions), instead of paying something like 2.5 and 7.7 degrees of freedom for the splines, we're paying 9 degrees of freedom each. This reflects that fact that we've done the selection, that we weren't sure which terms should be in the model. Note: it is important that you don't use anova(m1, m2, m3) type calls on models using select = TRUE . As noted in ?mgcv:::anova.gam , the approximation used can be very bad for smooths with penalties on their null spaces. In the comments, @BillyJean mentioned using AIC for selection. Recent work by Simon Wood and colleagues (Wood et al, 2016) derived an AIC that accounts for the extra uncertainty due to us having estimated the smoothness parameters in the model. This AIC works reasonably well, but there is some discussion as to the behaviour of their derivation of AIC when IIRC smooths are close to linear functions. Anyway, AIC would give us: m1 <- gam(y ~ x0 + s(x1), data = gam_data, method = "ML") m2 <- gam(y ~ x0 + s(x1) + x2, data = gam_data, method = "ML") m3 <- gam(y ~ x0 + s(x1) + s(x2), data = gam_data, method = "ML") AIC(m1, m2, m3) > AIC(m1, m2, m3) df AIC m1 7.307712 2149.046 m2 8.608444 2055.651 m3 16.589330 1756.890 Note I refitted all of these with ML smoothness selection as I'm not certain what the AIC does when select = TRUE and you have to be careful comparing models with different fixed effects, that aren't fully penalised, using REML. Again the inference is clear; the model with smooths of x1 and x2 has substantially better fit than either of the other two models. Marra, G. & Wood, S. N. Practical variable selection for generalized additive models. Comput. Stat. Data Anal. 55, 2372–2387 (2011). Wood, S. N., Pya, N. & Säfken, B. Smoothing Parameter and Model Selection for General Smooth Models. J. Am. Stat. Assoc. 111, 1548–1563 (2016).
{ "source": [ "https://stats.stackexchange.com/questions/274151", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86790/" ] }
274,286
The topology of the Google Inception model can be found here: Google Inception Network I noticed that there are 3 softmax layers in this model (#154, #152, #145), and 2 of them are some sort of early escape from this model. From what I know, a softmax layer is for the final output, so why are there so many? What's the purpose of the other 2 layers?
Short answer: Deep architectures, and specifically GoogLeNet (22 layers), are in danger of the vanishing gradients problem during training (back-propagation algorithm). The engineers of GoogLeNet addressed this issue by adding classifiers in the intermediate layers as well, such that the final loss is a combination of the intermediate loss and the final loss. This is why you see a total of three loss layers, unlike the usual single layer as the last layer of the network. Longer answer: In classic Machine Learning, there is usually a distinction between feature engineering and classification. Neural networks are most famous for their ability to solve problems "end to end", i.e., they combine the stages of learning a representation for the data, and training a classifier. Therefore, you can think of a neural network with a standard architecture (for example, AlexNet) as being composed of a "representation learning" phase (the layers up until the one before last) and a "classification" phase, which as expected, includes a loss function. When creating deeper networks, there arises a problem coined as the "vanishing gradients" problem. It's actually not specific to neural networks; rather, it applies to any gradient-based learning method. It's not that trivial and therefore deserves a proper explanation of its own; see here for a good reference. Intuitively, you can think about the gradients carrying less and less information the deeper we go inside the network, which is of course a major concern, since we tune the network's parameters (weights) based solely on the gradients, using the "back-prop" algorithm. How did the developers of GoogLeNet handle this problem? They recognized the fact that it's not only the features of the final layers that carry all the discriminatory information: intermediate features are also capable of discriminating different labels; and, most importantly, their values are more "reliable" since they are extracted from earlier layers in which the gradient carries more information. Building on this intuition, they added "auxiliary classifiers" in two intermediate layers. This is the reason for the "early escape" loss layers in the middle of the network which you referred to in your question. The total loss is then a combination of these three loss layers. I quote from the original article: These classifiers take the form of smaller convolutional networks put on top of the output of the Inception (4a) and (4d) modules. During training, their loss gets added to the total loss of the network with a discount weight (the losses of the auxiliary classifiers were weighted by 0.3). At inference time, these auxiliary networks are discarded. Visually:
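To make the idea concrete, here is a hedged Keras sketch (not the actual GoogLeNet code; the layer sizes and names are placeholders): an auxiliary classifier is attached to an intermediate layer and its loss is down-weighted by 0.3, so gradients from that head reach the early layers more directly.

```python
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(128,))
mid = Dense(64, activation='relu')(inputs)                    # stand-in for an intermediate block
aux_out = Dense(10, activation='softmax', name='aux')(mid)    # auxiliary classifier ("early escape")
deep = Dense(64, activation='relu')(mid)                      # stand-in for the deeper blocks
main_out = Dense(10, activation='softmax', name='main')(deep)

model = Model(inputs, [main_out, aux_out])
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              loss_weights=[1.0, 0.3])   # total loss = main + 0.3 * auxiliary
```

At inference time you would simply ignore the auxiliary output.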
{ "source": [ "https://stats.stackexchange.com/questions/274286", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/116358/" ] }
274,336
I have been studying linear regression and tried it on below set {(x,y)}, where x specified the area of house in square-feet, and y specified the price in dollars. This is the first example in Andrew Ng Notes . 2104,400 1600,330 2400,369 1416,232 3000,540 I developed a sample code but when I run it, the cost is increasing with each step whereas it should be decreasing with each step. Code and output given below. bias is W 0 X 0 , where X 0 =1. featureWeights is an array of [X 1 ,X 2 ,...,X N ] I also tried an online python solution available here , and explained here . But this example is also giving the same output. Where is the gap in understanding the concept? Code: package com.practice.cnn; import java.util.Arrays; public class LinearRegressionExample { private float ALPHA = 0.0001f; private int featureCount = 0; private int rowCount = 0; private float bias = 1.0f; private float[] featureWeights = null; private float optimumCost = Float.MAX_VALUE; private boolean status = true; private float trainingInput[][] = null; private float trainingOutput[] = null; public void train(float[][] input, float[] output) { if (input == null || output == null) { return; } if (input.length != output.length) { return; } if (input.length == 0) { return; } rowCount = input.length; featureCount = input[0].length; for (int i = 1; i < rowCount; i++) { if (input[i] == null) { return; } if (featureCount != input[i].length) { return; } } featureWeights = new float[featureCount]; Arrays.fill(featureWeights, 1.0f); bias = 0; //temp-update-1 featureWeights[0] = 0; //temp-update-1 this.trainingInput = input; this.trainingOutput = output; int count = 0; while (true) { float cost = getCost(); System.out.print("Iteration[" + (count++) + "] ==> "); System.out.print("bias -> " + bias); for (int i = 0; i < featureCount; i++) { System.out.print(", featureWeights[" + i + "] -> " + featureWeights[i]); } System.out.print(", cost -> " + cost); System.out.println(); // if (cost > optimumCost) { // status = false; // break; // } else { // optimumCost = cost; // } optimumCost = cost; float newBias = bias + (ALPHA * getGradientDescent(-1)); float[] newFeaturesWeights = new float[featureCount]; for (int i = 0; i < featureCount; i++) { newFeaturesWeights[i] = featureWeights[i] + (ALPHA * getGradientDescent(i)); } bias = newBias; for (int i = 0; i < featureCount; i++) { featureWeights[i] = newFeaturesWeights[i]; } } } private float getCost() { float sum = 0; for (int i = 0; i < rowCount; i++) { float temp = bias; for (int j = 0; j < featureCount; j++) { temp += featureWeights[j] * trainingInput[i][j]; } float x = (temp - trainingOutput[i]) * (temp - trainingOutput[i]); sum += x; } return (sum / rowCount); } private float getGradientDescent(final int index) { float sum = 0; for (int i = 0; i < rowCount; i++) { float temp = bias; for (int j = 0; j < featureCount; j++) { temp += featureWeights[j] * trainingInput[i][j]; } float x = trainingOutput[i] - (temp); sum += (index == -1) ? 
x : (x * trainingInput[i][index]); } return ((sum * 2) / rowCount); } public static void main(String[] args) { float[][] input = new float[][] { { 2104 }, { 1600 }, { 2400 }, { 1416 }, { 3000 } }; float[] output = new float[] { 400, 330, 369, 232, 540 }; LinearRegressionExample example = new LinearRegressionExample(); example.train(input, output); } } Output: Iteration[0] ==> bias -> 0.0, featureWeights[0] -> 0.0, cost -> 150097.0 Iteration[1] ==> bias -> 0.07484, featureWeights[0] -> 168.14847, cost -> 1.34029099E11 Iteration[2] ==> bias -> -70.60721, featureWeights[0] -> -159417.34, cost -> 1.20725801E17 Iteration[3] ==> bias -> 67012.305, featureWeights[0] -> 1.51299168E8, cost -> 1.0874295E23 Iteration[4] ==> bias -> -6.3599688E7, featureWeights[0] -> -1.43594258E11, cost -> 9.794949E28 Iteration[5] ==> bias -> 6.036088E10, featureWeights[0] -> 1.36281745E14, cost -> 8.822738E34 Iteration[6] ==> bias -> -5.7287012E13, featureWeights[0] -> -1.29341617E17, cost -> Infinity Iteration[7] ==> bias -> 5.4369677E16, featureWeights[0] -> 1.2275491E20, cost -> Infinity Iteration[8] ==> bias -> -5.1600908E19, featureWeights[0] -> -1.1650362E23, cost -> Infinity Iteration[9] ==> bias -> 4.897313E22, featureWeights[0] -> 1.1057068E26, cost -> Infinity Iteration[10] ==> bias -> -4.6479177E25, featureWeights[0] -> -1.0493987E29, cost -> Infinity Iteration[11] ==> bias -> 4.411223E28, featureWeights[0] -> 9.959581E31, cost -> Infinity Iteration[12] ==> bias -> -4.186581E31, featureWeights[0] -> -Infinity, cost -> Infinity Iteration[13] ==> bias -> Infinity, featureWeights[0] -> NaN, cost -> NaN Iteration[14] ==> bias -> NaN, featureWeights[0] -> NaN, cost -> NaN
The short answer is that your step size is too big. Instead of descending the canyon wall, your step is so big that you're jumping across from one side to higher up on the other! Cost function below: The long answer is that it's difficult for a naive gradient descent to solve this problem because the level sets of your cost function are highly elongated ellipses rather than circles. To robustly solve this problem, note that there are more sophisticated ways to choose: a step size (than hardcoding a constant). a step direction (than gradient descent). Underlying problem The underlying problem is that level sets of your cost function are highly elongated ellipses, and this causes problems for gradient descent. The below figure shows level sets for the cost function. With highly elliptical level sets, the direction of steepest descent may barely align with the direction of the solution. For example in this problem, the intercept term (what you call "bias") needs to travel a great distance (from $0$ to $\approx 26.789$ along the canyon floor) but it is for the other feature where the partial derivative has a much larger slope. If the step size is too big, you will literally jump over the lower blue region and ascend instead of descend. BUT if you reduce your step size, your progress in getting $\theta_0$ to the proper value becomes painfully slow. I suggest reading this answer on Quora. Quick fix 1: Change your code to private float ALPHA = 0.0000002f; and you'll stop overshooting. Quick fix 2: If you rescale your X data to 2.104, 1.600, etc... your level sets become spherical and gradient descent quickly converges with a higher learning rate. This lowers the condition number of your design matrix $X'X$. More advanced fixes If the goal were to efficiently solve ordinary least squares rather than simply learn gradient descent for a class, observe that: There are more sophisticated ways to calculate step size, such as line search and the Armijo rule. Near an answer where local conditions prevail, Newton's method obtains quadratic convergence and is a great way to choose a step direction and size. Solving least squares is equivalent to solving a linear system. Modern algorithms don't use naive gradient descent. Instead: For small systems ( $k$ on the order of several thousand or less), they use something like QR decomposition with partial pivoting. For large systems, they do formulate it as an optimization problem and use iterative methods such as the Krylov subspace methods. Note that there are many packages which will solve the linear system $(X'X) b = X'y$ for $b$ and you can check the results of your gradient descent algorithm against that. The actual solution is 26.789880528523071 0.165118878075797 You will find that those achieve the minimum value for the cost function.
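To illustrate Quick fix 2, here is a hedged NumPy sketch (not a correction of the Java program itself): with the square footage rescaled to thousands, plain gradient descent with a sensible step size converges to the exact solution quoted above.

```python
import numpy as np

x = np.array([2104., 1600., 2400., 1416., 3000.]) / 1000.0   # rescaled feature (thousands of sq ft)
y = np.array([400., 330., 369., 232., 540.])
b0, b1, alpha = 0.0, 0.0, 0.1

for _ in range(5000):
    err = b0 + b1 * x - y            # residuals of the current fit
    b0 -= alpha * err.mean()         # gradient step for the intercept
    b1 -= alpha * (err * x).mean()   # gradient step for the slope

print(b0, b1 / 1000.0)   # ~26.79 and ~0.16512: matches the exact OLS solution above
```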
{ "source": [ "https://stats.stackexchange.com/questions/274336", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/157747/" ] }
274,478
I'm trying to use the example described in the Keras documentation named "Stacked LSTM for sequence classification" (see code below) and can't figure out the input_shape parameter in the context of my data. I have as input a matrix of sequences of 25 possible characters encoded in integers to a padded sequence of maximum length 31. As a result, my x_train has the shape (1085420, 31) meaning (n_observations, sequence_length) . from keras.models import Sequential from keras.layers import LSTM, Dense import numpy as np data_dim = 16 timesteps = 8 num_classes = 10 # expected input data shape: (batch_size, timesteps, data_dim) model = Sequential() model.add(LSTM(32, return_sequences=True, input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32 model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32 model.add(LSTM(32)) # return a single vector of dimension 32 model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # Generate dummy training data x_train = np.random.random((1000, timesteps, data_dim)) y_train = np.random.random((1000, num_classes)) # Generate dummy validation data x_val = np.random.random((100, timesteps, data_dim)) y_val = np.random.random((100, num_classes)) model.fit(x_train, y_train, batch_size=64, epochs=5, validation_data=(x_val, y_val)) In this code x_train has the shape (1000, 8, 16) , as for an array of 1000 arrays of 8 arrays of 16 elements. There I get completely lost on what is what and how my data can reach this shape. Looking at Keras doc and various tutorials and Q&A, it seems I'm missing something obvious. Can someone give me a hint of what to look for ? Thanks for your help !
LSTM shapes are tough so don't feel bad, I had to spend a couple of days battling them myself: If you will be feeding data 1 character at a time, your input shape should be (31, 1) since your input has 31 timesteps, 1 character each. You will need to reshape your x_train from (1085420, 31) to (1085420, 31, 1), which is easily done with this command: x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
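Putting it together, here is a minimal sketch (illustrative layer sizes and random stand-in data; adjust to your real arrays) of how the reshaped input lines up with input_shape=(31, 1):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

x_train = np.random.randint(25, size=(1000, 31))                  # stand-in for the integer sequences
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)  # (samples, 31 timesteps, 1 feature)

model = Sequential()
model.add(LSTM(32, input_shape=(31, 1)))      # (timesteps, features); batch size is left implicit
model.add(Dense(25, activation='softmax'))    # e.g. one class per possible character
model.summary()
```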
{ "source": [ "https://stats.stackexchange.com/questions/274478", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54464/" ] }
275,677
A basic limitation of null hypothesis significance testing is that it does not allow a researcher to gather evidence in favor of the null ( Source ) I see this claim repeated in multiple places, but I can't find justification for it. If we perform a large study and we don't find statistically significant evidence against the null hypothesis , isn't that evidence for the null hypothesis?
Failing to reject a null hypothesis is evidence that the null hypothesis is true, but it might not be particularly good evidence, and it certainly doesn't prove the null hypothesis. Let's take a short detour. Consider for a moment the old cliché: Absence of evidence is not evidence of absence. Notwithstanding its popularity, this statement is nonsense. If you look for something and fail to find it, that is absolutely evidence that it isn't there. How good that evidence is depends on how thorough your search was. A cursory search provides weak evidence; an exhaustive search provides strong evidence. Now, back to hypothesis testing. When you run a hypothesis test, you are looking for evidence that the null hypothesis is not true. If you don't find it, then that is certainly evidence that the null hypothesis is true, but how strong is that evidence? To know that, you have to know how likely it is that evidence that would have made you reject the null hypothesis could have eluded your search. That is, what is the probability of a false negative on your test? This is related to the power, $\beta$, of the test (specifically, it is the complement, 1-$\beta$.) Now, the power of the test, and therefore the false negative rate, usually depends on the size of the effect you are looking for. Large effects are easier to detect than small ones. Therefore, there is no single $\beta$ for an experiment, and therefore no definitive answer to the question of how strong the evidence for the null hypothesis is. Put another way, there is always some effect size small enough that it's not ruled out by the experiment. From here, there are two ways to proceed. Sometimes you know you don't care about an effect size smaller than some threshold. In that case, you probably should reframe your experiment such that the null hypothesis is that the effect is above that threshold, and then test the alternative hypothesis that the effect is below the threshold. Alternatively, you could use your results to set bounds on the believable size of the effect. Your conclusion would be that the size of the effect lies in some interval, with some probability. That approach is just a small step away from a Bayesian treatment, which you might want to learn more about, if you frequently find yourself in this sort of situation. There's a nice answer to a related question that touches on evidence of absence testing , which you might find useful.
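As a rough numerical illustration of the power argument (a SciPy sketch for a two-sided one-sample z-test with an assumed effect size of d = 0.2; the numbers are only illustrative, not from the answer): the same null result is weak evidence from a small study but much stronger evidence from a large one, because the large study would almost certainly have detected that effect.

```python
import numpy as np
from scipy import stats

def power_two_sided_z(effect_size, n, alpha=0.05):
    # Approximate power of a two-sided one-sample z-test for standardized effect size d.
    z_crit = stats.norm.ppf(1 - alpha / 2)
    shift = effect_size * np.sqrt(n)
    return stats.norm.sf(z_crit - shift) + stats.norm.cdf(-z_crit - shift)

for n in (20, 500):
    power = power_two_sided_z(0.2, n)
    print(n, round(power, 3), round(1 - power, 3))   # power, and the false-negative rate for d = 0.2
```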
{ "source": [ "https://stats.stackexchange.com/questions/275677", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/101702/" ] }
276,067
I'm trying to better understand log loss and how it works but one thing I can't seem to find is putting the log loss number into some sort of context. If my model has a log loss of 0.5, is that good? What's considered a good and bad score? How do these thresholds change?
The logloss is simply $L(p_i)=-\log(p_i)$ where $p_i$ is simply the probability attributed to the real class. So $L(p)=0$ is good: we attributed the probability $1$ to the right class, while $L(p)=+\infty$ is bad, because we attributed the probability $0$ to the actual class. So, answering your question, $L(p)=0.5$ means, on average, you attributed to the right class the probability $p\approx0.61$ across samples. Now, deciding if this is good enough is actually application-dependent, so it ultimately comes down to a judgment call for your problem.
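A tiny numeric check of that statement (a NumPy sketch with made-up per-sample probabilities for the true class):

```python
import numpy as np

p_true = np.array([0.9, 0.55, 0.65, 0.5, 0.48])   # probability assigned to the actual class, per sample
logloss = -np.log(p_true).mean()

print(logloss)            # ~0.51: the average log loss
print(np.exp(-logloss))   # ~0.60: the "typical" probability given to the right class
```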
{ "source": [ "https://stats.stackexchange.com/questions/276067", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/102726/" ] }
276,497
I have two data sets (source and target data) which follow different distributions. I am using MMD - a non-parametric distribution distance - to compute the marginal distribution distance between the source and target data. source data, Xs target data, Xt adaptation matrix A Projected data, Zs = A' Xs and Zt = A' Xt MMD => Distance(P(Xs), P(Xt)) = | mean(A' Xs) - mean(A' Xt) | That means: the distance between the distributions of the source and target data in the original space is equivalent to the distance between the means of the projected source and target data in the embedded space. I have a question about the concept of MMD: why, by computing a distance in the latent space, can we measure the distance between the distributions in the original space? Thanks
It might help to give slightly more of an overview of MMD. $\DeclareMathOperator{\E}{\mathbb E}\newcommand{\R}{\mathbb R}\newcommand{\X}{\mathcal X}\newcommand{\h}{\mathcal H}\DeclareMathOperator{\MMD}{MMD}$ In general, MMD is defined by the idea of representing distances between distributions as distances between mean embeddings of features. That is, say we have distributions $P$ and $Q$ over a set $\X$ . The MMD is defined by a feature map $\varphi : \X \to \h$ , where $\mathcal H$ is what's called a reproducing kernel Hilbert space. In general, the MMD is $$ \MMD(P, Q) = \lVert \E_{X \sim P}[ \varphi(X) ] - \E_{Y \sim Q}[ \varphi(Y) ] \rVert_\h .$$ As one example, we might have $\X = \h = \R^d$ and $\varphi(x) = x$ . In that case: \begin{align} \MMD(P, Q) &= \lVert \E_{X \sim P}[ \varphi(X) ] - \E_{Y \sim Q}[ \varphi(Y) ] \rVert_\h \\&= \lVert \E_{X \sim P}[ X ] - \E_{Y \sim Q}[ Y ] \rVert_{\R^d} \\&= \lVert \mu_P - \mu_Q \rVert_{\R^d} ,\end{align} so this MMD is just the distance between the means of the two distributions. Matching distributions like this will match their means, though they might differ in their variance or in other ways. Your case is slightly different: we have $\mathcal X = \mathbb R^d$ and $\mathcal H = \mathbb R^p$ , with $\varphi(x) = A' x$ , where $A$ is a $d \times p$ matrix. So we have \begin{align} \MMD(P, Q) &= \lVert \E_{X \sim P}[ \varphi(X) ] - \E_{Y \sim Q}[ \varphi(Y) ] \rVert_\h \\&= \lVert \E_{X \sim P}[ A' X ] - \E_{Y \sim Q}[ A' Y ] \rVert_{\R^p} \\&= \lVert A' \E_{X \sim P}[ X ] - A' \E_{Y \sim Q}[ Y ] \rVert_{\R^p} \\&= \lVert A'( \mu_P - \mu_Q ) \rVert_{\R^p} .\end{align} This MMD is the difference between two different projections of the mean. If $p < d$ or the mapping $A'$ otherwise isn't invertible, then this MMD is weaker than the previous one: it doesn't distinguish between some distributions that the previous one does. You can also construct stronger distances. For example, if $\X = \R$ and you use $\varphi(x) = (x, x^2)$ , then the MMD becomes $\sqrt{(\E X - \E Y)^2 + (\E X^2 - \E Y^2)^2}$ , and can distinguish not only distributions with different means but with different variances as well. And you can get much stronger than that: if $\varphi$ maps to a general reproducing kernel Hilbert space, then you can apply the kernel trick to compute the MMD, and it turns out that many kernels, including the Gaussian kernel, lead to the MMD being zero if and only the distributions are identical. Specifically, letting $k(x, y) = \langle \varphi(x), \varphi(y) \rangle_\h$ , you get \begin{align} \MMD^2(P, Q) &= \lVert \E_{X \sim P} \varphi(X) - \E_{Y \sim Q} \varphi(Y) \rVert_\h^2 \\&= \langle \E_{X \sim P} \varphi(X), \E_{X' \sim P} \varphi(X') \rangle_\h + \langle \E_{Y \sim Q} \varphi(Y), \E_{Y' \sim Q} \varphi(Y') \rangle_\h - 2 \langle \E_{X \sim P} \varphi(X), \E_{Y \sim Q} \varphi(Y) \rangle_\h \\&= \E_{X, X' \sim P} k(X, X') + \E_{Y, Y' \sim Q} k(Y, Y') - 2 \E_{X \sim P, Y \sim Q} k(X, Y) \end{align} which you can straightforwardly estimate with samples. Update: here's where the "maximum" in the name comes from. The feature map $\varphi: \X \to \h$ maps into a reproducing kernel Hilbert space. These are spaces of functions , and satisfy a key property (called the reproducing property ): $\langle f, \varphi(x) \rangle_\h = f(x)$ for any $f \in \h$ . In the simplest example, $\X = \h = \R^d$ with $\varphi(x) = x$ , we view each $f \in \h$ as the function corresponding to some $w \in \R^d$ , by $f(x) = w' x$ . 
Then the reproducing property $\langle f, \varphi(x) \rangle_\h = \langle w, x \rangle_{\R^d}$ should make sense. In more complex settings, like a Gaussian kernel, $f$ is a much more complicated function, but the reproducing property still holds. Now, we can give an alternative characterization of the MMD: \begin{align} \MMD(P, Q) &= \lVert \E_{X \sim P}[\varphi(X)] - \E_{Y \sim Q}[\varphi(Y)] \rVert_\h \\&= \sup_{f \in \h : \lVert f \rVert_\h \le 1} \langle f, \E_{X \sim P}[\varphi(X)] - \E_{Y \sim Q}[\varphi(Y)] \rangle_\h \\&= \sup_{f \in \h : \lVert f \rVert_\h \le 1} \langle f, \E_{X \sim P}[\varphi(X)] \rangle_\h - \langle f, \E_{Y \sim Q}[\varphi(Y)] \rangle_\h \\&= \sup_{f \in \h : \lVert f \rVert_\h \le 1} \E_{X \sim P}[\langle f, \varphi(X)\rangle_\h] - \E_{Y \sim Q}[\langle f, \varphi(Y) \rangle_\h] \\&= \sup_{f \in \h : \lVert f \rVert_\h \le 1} \E_{X \sim P}[f(X)] - \E_{Y \sim Q}[f(Y)] .\end{align} The second line is a general fact about norms in Hilbert spaces: $\sup_{f : \lVert f \rVert \le 1} \langle f, g \rangle_\h = \lVert g \rVert$ is achieved by $f = g / \lVert g \rVert$ . The fourth depends on a technical condition known as Bochner integrability but is true e.g. for bounded kernels or distributions with bounded support. Then at the end we use the reproducing property. This last line is why it's called the "maximum mean discrepancy" – it's the maximum, over test functions $f$ in the unit ball of $\h$ , of the mean difference between the two distributions.
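To make the final sample-based formula concrete, here is a small sketch (my own illustration, not part of the original answer) that computes a biased estimate of the squared MMD from two samples with a Gaussian kernel in Python/NumPy; the kernel bandwidth and the toy data are arbitrary assumptions.

import numpy as np

def squared_mmd(X, Y, sigma=1.0):
    # Biased estimate of MMD^2 between samples X (n x d) and Y (m x d)
    # using the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    def gaussian_gram(A, B):
        # Pairwise squared Euclidean distances, then the kernel values.
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    k_xx = gaussian_gram(X, X).mean()   # estimates E k(X, X')
    k_yy = gaussian_gram(Y, Y).mean()   # estimates E k(Y, Y')
    k_xy = gaussian_gram(X, Y).mean()   # estimates E k(X, Y)
    return k_xx + k_yy - 2 * k_xy

rng = np.random.default_rng(0)
X  = rng.normal(0.0, 1.0, size=(500, 2))   # "source" sample
X2 = rng.normal(0.0, 1.0, size=(500, 2))   # another sample from the same distribution
Y  = rng.normal(0.5, 1.0, size=(500, 2))   # "target" sample with a shifted mean
print(squared_mmd(X, X2), squared_mmd(X, Y))   # near zero vs. clearly larger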
{ "source": [ "https://stats.stackexchange.com/questions/276497", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/158081/" ] }
276,831
I read that these are the conditions for using the multiple regression model: (1) the residuals of the model are nearly normal, (2) the variability of the residuals is nearly constant, (3) the residuals are independent, and (4) each variable is linearly related to the outcome. How are 1 and 2 different? You can see an example in a residual plot: suppose the plot shows that a residual that is 2 standard deviations away is 10 away from Y-hat. That means that the residuals follow a normal distribution. Can't you infer 2 from this - that the variability of the residuals is nearly constant?
1. Normal distribution of residuals : The normality condition comes into play when you're trying to get confidence intervals and/or p-values. $\varepsilon\vert X\sim N (0,\sigma^2 I_n)$ is not a Gauss Markov condition . This plot tries to illustrate the distribution of points in the population in blue (with the population regression line as a solid cyan line), superimposed on a sample dataset in big yellow dots (with its estimated regression line plotted as a dashed yellow line). Evidently this is only for conceptual consumption, since there would be infinitely many points for each value of $X = x$ - so it is a graphical, iconographic discretization of the concept of regression as the continuous distribution of values around a mean (corresponding to the predicted value of the dependent variable) at each given value of the regressor, or explanatory variable. If we ran diagnostic R plots on the simulated "population" data, we'd see that the variance of the residuals is constant along all values of $X.$ Conceptually, introducing multiple regressors or explanatory variables doesn't alter the idea. I find the hands-on tutorial of the package swirl() extremely helpful in understanding how multiple regression is really a process of regressing dependent variables against each other, carrying forward the residual, unexplained variation in the model; or more simply, a vectorial form of simple linear regression : The general technique is to pick one regressor and to replace all other variables by the residuals of their regressions against that one. 2. The variability of the residuals is nearly constant (Homoskedasticity) : $E[ \varepsilon_i^2 \vert X ] = \sigma^2$ The problem with violating this condition is: Heteroskedasticity has serious consequences for the OLS estimator. Although the OLS estimator remains unbiased, the estimated SE is wrong. Because of this, confidence intervals and hypothesis tests cannot be relied on. In addition, the OLS estimator is no longer BLUE. Under heteroskedasticity the variance of the residuals increases with the values of the regressor (explanatory variable), as opposed to staying constant. In this case the residuals are normally distributed, but the variance of this normal distribution changes (increases) with the explanatory variable. Notice that the "true" (population) regression line does not change with respect to the population regression line under homoskedasticity in the first plot (solid dark blue), but it is intuitively clear that estimates are going to be more uncertain. The diagnostic plots on such a dataset correspond to a "heavy-tailed" distribution , which makes sense if we were to telescope all the "side-by-side" vertical Gaussian plots into a single one, which would retain its bell shape but have very long tails. @Glen_b: "... a complete coverage of the distinction between the two would also consider homoskedastic-but-not-normal." Next, consider a case where the residuals are highly skewed and the variance increases with the values of the explanatory variable; the corresponding diagnostic plots show marked right skewness. To close the loop, we'd also see skewness in a homoskedastic model with a non-Gaussian distribution of errors.
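As a rough numerical counterpart to the plots described above (the figures themselves are not reproduced here), the sketch below - my own illustration in Python with arbitrary data-generating choices - simulates one homoskedastic and one heteroskedastic regression and compares the spread of the OLS residuals across the range of the regressor.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 10, n)

y_hom = 2 + 3 * x + rng.normal(0, 2.0, n)      # homoskedastic: error spread does not depend on x
y_het = 2 + 3 * x + rng.normal(0, 0.5 * x, n)  # heteroskedastic: error spread grows with x

def residual_sd_by_bin(x, y, bins=5):
    # Fit OLS, then compute the residual standard deviation within quantile bins of x.
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    return np.array([resid[idx == k].std() for k in range(bins)])

print("homoskedastic:  ", residual_sd_by_bin(x, y_hom).round(2))  # roughly flat
print("heteroskedastic:", residual_sd_by_bin(x, y_het).round(2))  # increasing with x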
{ "source": [ "https://stats.stackexchange.com/questions/276831", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/158648/" ] }
278,755
I am taking the Machine Learning course online and learnt about Gradient Descent for calculating the optimal values in the hypothesis h(x) = B0 + B1X. Why do we need to use Gradient Descent if we can easily find the values with the formula below? It looks straightforward and easy, whereas GD needs multiple iterations to get the values. B1 = Correlation * (Std. Dev. of y / Std. Dev. of x), B0 = Mean(Y) – B1 * Mean(X). NOTE: Taken from https://www.dezyre.com/data-science-in-r-programming-tutorial/linear-regression-tutorial I did check the questions below, but it was still not clear to me: Why is gradient descent required? Why is optimisation solved with gradient descent rather than with an analytical solution? The above answers compare GD vs. using derivatives.
The main reason why gradient descent is used for linear regression is the computational complexity: it's computationally cheaper (faster) to find the solution using gradient descent in some cases. The formula which you wrote looks very simple, even computationally, because it only works for the univariate case, i.e. when you have only one variable. In the multivariate case, when you have many variables, the formula is slightly more complicated on paper and requires many more calculations when you implement it in software: $$\beta=(X'X)^{-1}X'Y$$ Here, you need to calculate the matrix $X'X$ then invert it (see note below). It's an expensive calculation. For your reference, the (design) matrix X has K+1 columns where K is the number of predictors and N rows of observations. In a machine learning algorithm you can end up with K>1000 and N>1,000,000. The $X'X$ matrix itself takes a little while to calculate, then you have to invert a $K\times K$ matrix - this is expensive. So, gradient descent allows us to save a lot of time on calculations. Moreover, the way it's done allows for trivial parallelization, i.e. distributing the calculations across multiple processors or machines. The linear algebra solution can also be parallelized but it's more complicated and still expensive. Additionally, there are versions of gradient descent where you keep only a piece of your data in memory, lowering the requirements for computer memory. Overall, for extra large problems it's more efficient than the linear algebra solution. This becomes even more important as the dimensionality increases, when you have thousands of variables as in machine learning. Remark . I was surprised by how much attention is given to gradient descent in Ng's lectures. He spends a nontrivial amount of time talking about it, maybe 20% of the entire course. To me it's just an implementation detail, it's how exactly you find the optimum. The key is in formulating the optimization problem, and how exactly you find the solution is nonessential. I wouldn't worry about it too much. Leave it to computer science people, and focus on what's important to you as a statistician. Having said this, I must qualify by saying that it is indeed important to understand the computational complexity and numerical stability of the solution algorithms. I still don't think you must know the details of implementation and code of the algorithms. It's not the best use of your time as a statistician usually. Note 1 . I wrote that you have to invert the matrix for didactic purposes; it's not how you usually solve the equation. In practice, the linear algebra problems are solved by using some kind of factorization such as QR, where you don't directly invert the matrix but do some other mathematically equivalent manipulations to get an answer. You do this because matrix inversion is an expensive and numerically unstable operation in many cases. This brings up another little advantage of the gradient descent algorithm as a side effect: it works even when the design matrix has collinearity issues. The usual linear algebra path would blow up and gradient descent will keep going even for collinear predictors.
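Here is a small sketch (my own, not from the answer) showing that the closed-form solution and plain batch gradient descent reach essentially the same coefficients on a small multivariate problem; the step size and iteration count are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # design matrix with intercept
beta_true = rng.normal(size=k + 1)
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Closed form (explicit inverse shown for clarity; QR/lstsq is preferred in practice).
beta_cf = np.linalg.inv(X.T @ X) @ X.T @ y

# Batch gradient descent on the mean squared error.
beta_gd = np.zeros(k + 1)
lr = 0.1
for _ in range(2000):
    grad = 2 / n * X.T @ (X @ beta_gd - y)
    beta_gd -= lr * grad

print(np.max(np.abs(beta_cf - beta_gd)))  # should be tiny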
{ "source": [ "https://stats.stackexchange.com/questions/278755", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/160881/" ] }
278,771
I am doing the Machine Learning Stanford course on Coursera. In the chapter on Logistic Regression, the cost function is this: Then, it is differentiated here: I tried getting the derivative of the cost function, but I got something completely different. How is the derivative obtained? Which are the intermediary steps?
Adapted from the notes in the course, which I don't see available (including this derivation) outside the notes contributed by students within the page of Andrew Ng's Coursera Machine Learning course . In what follows, the superscript $(i)$ denotes individual measurements or training "examples." $\small \frac{\partial J(\theta)}{\partial \theta_j} = \frac{\partial}{\partial \theta_j} \,\frac{-1}{m}\sum_{i=1}^m \left[ y^{(i)}\log\left(h_\theta \left(x^{(i)}\right)\right) + (1 -y^{(i)})\log\left(1-h_\theta \left(x^{(i)}\right)\right)\right] \\[2ex]\small\underset{\text{linearity}}= \,\frac{-1}{m}\,\sum_{i=1}^m \left[ y^{(i)}\frac{\partial}{\partial \theta_j}\log\left(h_\theta \left(x^{(i)}\right)\right) + (1 -y^{(i)})\frac{\partial}{\partial \theta_j}\log\left(1-h_\theta \left(x^{(i)}\right)\right) \right] \\[2ex]\Tiny\underset{\text{chain rule}}= \,\frac{-1}{m}\,\sum_{i=1}^m \left[ y^{(i)}\frac{\frac{\partial}{\partial \theta_j}h_\theta \left(x^{(i)}\right)}{h_\theta\left(x^{(i)}\right)} + (1 -y^{(i)})\frac{\frac{\partial}{\partial \theta_j}\left(1-h_\theta \left(x^{(i)}\right)\right)}{1-h_\theta\left(x^{(i)}\right)} \right] \\[2ex]\small\underset{h_\theta(x)=\sigma\left(\theta^\top x\right)}=\,\frac{-1}{m}\,\sum_{i=1}^m \left[ y^{(i)}\frac{\frac{\partial}{\partial \theta_j}\sigma\left(\theta^\top x^{(i)}\right)}{h_\theta\left(x^{(i)}\right)} + (1 -y^{(i)})\frac{\frac{\partial}{\partial \theta_j}\left(1-\sigma\left(\theta^\top x^{(i)}\right)\right)}{1-h_\theta\left(x^{(i)}\right)} \right] \\[2ex]\Tiny\underset{\sigma'}=\frac{-1}{m}\,\sum_{i=1}^m \left[ y^{(i)}\, \frac{\sigma\left(\theta^\top x^{(i)}\right)\left(1-\sigma\left(\theta^\top x^{(i)}\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x^{(i)}\right)}{h_\theta\left(x^{(i)}\right)} - (1 -y^{(i)})\,\frac{\sigma\left(\theta^\top x^{(i)}\right)\left(1-\sigma\left(\theta^\top x^{(i)}\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x^{(i)}\right)}{1-h_\theta\left(x^{(i)}\right)} \right] \\[2ex]\small\underset{\sigma\left(\theta^\top x\right)=h_\theta(x)}= \,\frac{-1}{m}\,\sum_{i=1}^m \left[ y^{(i)}\frac{h_\theta\left( x^{(i)}\right)\left(1-h_\theta\left( x^{(i)}\right)\right)\frac{\partial}{\partial \theta_j}\left(\theta^\top x^{(i)}\right)}{h_\theta\left(x^{(i)}\right)} - (1 -y^{(i)})\frac{h_\theta\left( x^{(i)}\right)\left(1-h_\theta\left(x^{(i)}\right)\right)\frac{\partial}{\partial \theta_j}\left( \theta^\top x^{(i)}\right)}{1-h_\theta\left(x^{(i)}\right)} \right] \\[2ex]\small\underset{\frac{\partial}{\partial \theta_j}\left(\theta^\top x^{(i)}\right)=x_j^{(i)}}=\,\frac{-1}{m}\,\sum_{i=1}^m \left[y^{(i)}\left(1-h_\theta\left(x^{(i)}\right)\right)x_j^{(i)}- \left(1-y^{i}\right)\,h_\theta\left(x^{(i)}\right)x_j^{(i)} \right] \\[2ex]\small\underset{\text{distribute}}=\,\frac{-1}{m}\,\sum_{i=1}^m \left[y^{i}-y^{i}h_\theta\left(x^{(i)}\right)- h_\theta\left(x^{(i)}\right)+y^{(i)}h_\theta\left(x^{(i)}\right) \right]\,x_j^{(i)} \\[2ex]\small\underset{\text{cancel}}=\,\frac{-1}{m}\,\sum_{i=1}^m \left[y^{(i)}-h_\theta\left(x^{(i)}\right)\right]\,x_j^{(i)} \\[2ex]\small=\frac{1}{m}\sum_{i=1}^m\left[h_\theta\left(x^{(i)}\right)-y^{(i)}\right]\,x_j^{(i)} $ The derivative of the sigmoid function is $\Tiny\begin{align}\frac{d}{dx}\sigma(x)&=\frac{d}{dx}\left(\frac{1}{1+e^{-x}}\right)\\[2ex] &=\frac{-(1+e^{-x})'}{(1+e^{-x})^2}\\[2ex] &=\frac{e^{-x}}{(1+e^{-x})^2}\\[2ex] &=\left(\frac{1}{1+e^{-x}}\right)\left(\frac{e^{-x}}{1+e^{-x}}\right)\\[2ex] 
&=\left(\frac{1}{1+e^{-x}}\right)\,\left(\frac{1+e^{-x}}{1+e^{-x}}-\frac{1}{1+e^{-x}}\right)\\[2ex] &=\sigma(x)\,\left(\frac{1+e^{-x}}{1+e^{-x}}-\sigma(x)\right)\\[2ex] &=\sigma(x)\,(1-\sigma(x)) \end{align}$
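As a sanity check on the final expression $\frac{1}{m}\sum_{i=1}^m\left[h_\theta\left(x^{(i)}\right)-y^{(i)}\right]x_j^{(i)}$, here is a short sketch (my own, with made-up data) comparing the vectorized analytic gradient against a numerical finite-difference gradient of the cost.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def grad(theta, X, y):
    # Vectorized form of (1/m) * sum_i (h_theta(x_i) - y_i) * x_ij
    return X.T @ (sigmoid(X @ theta) - y) / len(y)

rng = np.random.default_rng(0)
m, n = 200, 3
X = np.column_stack([np.ones(m), rng.normal(size=(m, n))])
y = (rng.uniform(size=m) < 0.5).astype(float)
theta = rng.normal(size=n + 1)

eps = 1e-6  # central finite differences, coordinate by coordinate
numeric = np.array([(cost(theta + eps * e, X, y) - cost(theta - eps * e, X, y)) / (2 * eps)
                    for e in np.eye(n + 1)])
print(np.max(np.abs(numeric - grad(theta, X, y))))  # agreement to ~1e-9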
{ "source": [ "https://stats.stackexchange.com/questions/278771", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/76692/" ] }
278,882
Assume that a model has 100% accuracy on the training data, but 70% accuracy on the test data. Is the following argument true about this model? It is obvious that this is an overfitted model. The test accuracy can be enhanced by reducing the overfitting. But, this model can still be a useful model, since it has an acceptable accuracy for the test data.
I think the argument is correct. If 70% is acceptable in the particular application, then the model is useful even though it is overfitted (more generally, regardless of whether it is overfitted or not). While balancing overfitting against underfitting concerns optimality (looking for an optimal solution), having satisfactory performance is about sufficiency (is the model performing well enough for the task?). A model can be sufficiently good without being optimal. Edit: after the comments by Firebug and Matthew Drury under the OP, I will add that to judge whether the model is overfitted without knowing the validation performance can be problematic. Firebug suggests comparing the validation vs. the test performance to measure the amount of overfitting. Nevertheless, when the model delivers 100% accuracy on the training set without delivering 100% accuracy on the test set, it is an indicator of possible overfitting (especially so in the case of regression but not necessarily in classification).
{ "source": [ "https://stats.stackexchange.com/questions/278882", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146327/" ] }
278,892
I have a multi-class classification problem where the algorithm should detect (and later on classify) new classes. An example for such a task could be classifying if an image shows a dog or a cat. Furthermore, the model should be able to recognize that a goose doesn't fit into one of these two categories, thus create a new class. Specific Questions: How can the model detect new classes? Some unsupervised clustering algorithm When all classes are predicted by a value beyond a certain threshold? Is there a (proven) model, which can handle a growing number of classes to classify - without expensive retraining of all the other classes? one vs all? one-class? something completely different? I greatly appreciate every form of help and experiences you had with such a problem. References to papers or tutorials would be great too. Thank you in advance. Here are two links where similar questions were asked, but (at least for me) not fully answered. Stackexchange: Streaming multi-class classification Stackexchange multi-class classification word2vec Stackoverflow multiclass classification growing number of classes
{ "source": [ "https://stats.stackexchange.com/questions/278892", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/160947/" ] }
279,706
I'm simplifying a research question that I have at work. Imagine that I have 5 coins and let's call heads a success. These are VERY biased coins, with probability of success p=0.1. Now, if the coins were independent, then the probability of at least 1 head would be very simple to obtain: $1-(1-1/10)^5$. In my scenario, however, my Bernoulli trials (coin tosses) are not independent. The only information I have access to is the probability of success (p=.1 for each coin) and the theoretical Pearson correlations among the binary variables. Is there any way to calculate the probability of one success or more with only this information? I'm trying to avoid a simulation-based approach because these theoretical results will be used to guide the accuracy of a simulation study. I have been looking into the multivariate Bernoulli distribution, but I don't think I can fully specify it with only the correlations and marginal probabilities of success. A friend of mine recommended constructing a Gaussian copula with Bernoulli marginals (using the R package copula ) and then using the pMvdc() function on a large sample to get the probability I want, but I'm not exactly sure how to go about it.
No, this is impossible whenever you have three or more coins. The case of two coins Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of more coins. Let $X$ and $Y$ denote the Bernoulli distributed variables corresponding to the two cases, $X \sim \mathrm{Ber}(p)$, $Y \sim \mathrm{Ber}(q)$. First, recall that the correlation of $X$ and $Y$ is $$\mathrm{corr}(X, Y) = \frac{E[XY] - E[X]E[Y]}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}},$$ and since you know the marginals, you know $E[X]$, $E[Y]$, $\mathrm{Var}(X)$, and $\mathrm{Var}(Y)$, so by knowing the correlation, you also know $E[XY]$. Now, $XY = 1$ if and only if both $X = 1$ and $Y = 1$, so $$E[XY] = P(X = 1, Y = 1).$$ By knowing the marginals, you know $p = P(X = 1, Y = 0) + P(X = 1, Y = 1)$, and $q = P(X = 0, Y = 1) + P(X = 1, Y = 1)$. Since we just found that you know $P(X = 1, Y = 1)$, this means that you also know $P(X = 1, Y = 0)$ and $P(X = 0, Y = 0)$, but now you're done, as the probability you are looking for is $$P(X = 1, Y = 0) + P(X = 0, Y = 1) + P(X = 1, Y = 1).$$ Now, I personally find all of this easier to see with a picture. Let $P_{ij} = P(X = i, Y = j)$. Then we may picture the various probabilities as forming a square: Here, we saw that knowing the correlations meant that you could deduce $P_{11}$, marked red, and that knowing the marginals, you knew the sum for each edge (one of which are indicated with a blue rectangle). The case of three coins This will not go as easily for three coins; intuitively it is not hard to see why: By knowing the marginals and the correlation, you know a total of $6 = 3 + 3$ parameters, but the joint distribution has $2^3 = 8$ outcomes, but by knowing the probabilities for $7$ of those, you can figure out the last one; now, $7 > 6$, so it seems reasonable that one could cook up two different joint distributions whose marginals and correlations are the same, and that one could permute the probabilities until the ones you are looking for will differ. Let $X$, $Y$, and $Z$ be the three variables, and let $$P_{ijk} = P(X = i, Y = j, Z = k).$$ In this case, the picture from above becomes the following: The dimensions have been bumped by one: The red vertex has become several coloured edges, and the edge covered by a blue rectangle have become an entire face. Here, the blue plane indicates that by knowing the marginal, you know the sum of the probabilities within; for the one in the picture, $$P(X = 0) = P_{000} + P_{010} + P_{001} + P_{011},$$ and similarly for all other faces in the cube. The coloured edges indicate that by knowing the correlations, you know the sum of the two probabilities connected by the edge. For example, by knowing $\mathrm{corr}(X, Y)$, you know $E[XY]$ (exactly as above), and $$E[XY] = P(X = 1, Y = 1) = P_{110} + P_{111}.$$ So, this puts some limitations on possible joint distributions, but now we've reduced the exercise to the combinatorial exercise of putting numbers on the vertices of a cube. Without further ado, let us provide two joint distributions whose marginals and correlations are the same: Here, divide all numbers by $100$ to obtain a probability distribution. 
To see that these work and have the same marginals/correlations, simply note that the sum of probabilities on each face is $1/2$ (meaning that the variables are $\mathrm{Ber}(1/2)$), and that the sums for the vertices on the coloured edges agree in both cases (in this particular case, all correlations are in fact the same, but that doesn't have to be the case in general). Finally, the probabilities of getting at least one head, $1 - P_{000}$ and $1 - P_{000}'$, are different in the two cases, which is what we wanted to prove. For me, coming up with these examples came down to putting numbers on the cube to produce one example, and then simply modifying $P_{111}$ and letting the changes propagate. Edit: This is the point where I realized that you were actually working with fixed marginals, and that you know that each variable was $\mathrm{Ber}(1/10)$, but if the picture above makes sense, it is possible to tweak it until you have the desired marginals. Four or more coins Finally, when we have more than three coins it should not be surprising that we can cook up examples that fail, as we now have an even bigger discrepancy between the number of parameters required to describe the joint distribution and those provided to us by marginals and correlations. Concretely, for any number of coins greater than three, you could simply consider examples whose first three coins behave as in the two examples above and for which the outcomes of the remaining coins are independent of all other coins.
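The concrete probability tables in this answer live in its figures, which are not reproduced here, but the same phenomenon can be verified with a standard construction of my own choosing: three fair coins whose pairwise correlations are all zero under two different joint distributions, yet whose probabilities of at least one head differ. A short Python check:

import itertools
import numpy as np

outcomes = list(itertools.product([0, 1], repeat=3))

# Distribution A: three independent fair coins.
pA = {o: 1 / 8 for o in outcomes}
# Distribution B: uniform on the outcomes with an even number of heads.
even = [o for o in outcomes if sum(o) % 2 == 0]
pB = {o: (1 / 4 if o in even else 0.0) for o in outcomes}

def marginal(p, i):
    return sum(prob for o, prob in p.items() if o[i] == 1)

def pairwise_corr(p, i, j):
    pi, pj = marginal(p, i), marginal(p, j)
    e_ij = sum(prob for o, prob in p.items() if o[i] == 1 and o[j] == 1)
    return (e_ij - pi * pj) / np.sqrt(pi * (1 - pi) * pj * (1 - pj))

for name, p in [("A", pA), ("B", pB)]:
    marginals = [marginal(p, i) for i in range(3)]
    corrs = [pairwise_corr(p, i, j) for i, j in [(0, 1), (0, 2), (1, 2)]]
    print(name, marginals, corrs, 1 - p[(0, 0, 0)])
# Both have Ber(1/2) marginals and zero pairwise correlations,
# yet P(at least one head) is 7/8 under A and only 3/4 under B.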
{ "source": [ "https://stats.stackexchange.com/questions/279706", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/27089/" ] }
279,718
I feel really dumb even asking such a basic question but here goes: If I have a random variable $X$ that can take values $0$ and $1$, with $P(X=1) = p$ and $P(X=0) = 1-p$, then if I draw $n$ samples out of it, I'll get a binomial distribution. The mean of the distribution is $\mu = np = E(X)$ The variance of the distribution is $\sigma^2 = np(1-p)$ Here is where my trouble begins: Variance is defined by $\sigma^2 = E(X^2) - E(X)^2$. Because the square of the two possible $X$ outcomes don't change anything ($0^2 = 0$ and $1^2 = 1$), that means $E(X^2) = E(X)$, so that means $\sigma^2 = E(X^2) - E(X)^2 = E(X) - E(X)^2 = np - n^2p^2 = np(1-np) \neq np(1-p)$ Where does the extra $n$ go? As you can probably tell I am not very good at stats so please don't use complicated terminology :s
A random variable $X$ taking values $0$ and $1$ with probabilities $P(X=1)=p$ and $P(X=0)=1-p$ is called a Bernoulli random variable with parameter $p$. This random variable has \begin{eqnarray*} E(X)&=&0\cdot (1-p) + 1\cdot p = p\\ E(X^2)&=&0^2\cdot(1-p) + 1^2\cdot p = p\\ Var(X)&=& E(X^2)-(E(X))^2=p-p^2=p(1-p) \end{eqnarray*} Suppose you have a random sample $X_{1},X_{2},\cdots,X_{n}$ of size $n$ from $Bernoulli(p)$, and define a new random variable $Y=X_{1}+X_{2}+\cdots +X_{n}$, then the distribution of $Y$ is called Binomial, whose parameters are $n$ and $p$. The mean and variance of the Binomial random variable Y is given by \begin{eqnarray*} E(Y)&=&E(X_{1}+X_{2}+\cdots + X_{n})=\underbrace{ p+p+\cdots +p}_{n}=np\\ Var(Y)&=& Var(X_{1}+X_{2}+\cdots + X_{n})=Var(X_{1})+Var(X_{2})+\cdots + Var(X_{n})\\ & &\text{ (as $X_{i}$'s are independent)} \\ &=&\underbrace{p(1-p)+p(1-p)+\cdots+ p(1-p)}_{n}\quad \text{ (as $X_{i}$'s are identically distributed)} \\ &=&np(1-p) \end{eqnarray*}
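A quick simulation (my own check, not part of the answer) confirming that the sum of n Bernoulli(p) draws has mean np and variance np(1-p), not np(1-np):

import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 200_000

# Each row is one realization of Y = X_1 + ... + X_n with X_i ~ Bernoulli(p).
y = (rng.uniform(size=(reps, n)) < p).sum(axis=1)

print(y.mean(), n * p)            # both close to 3.0
print(y.var(), n * p * (1 - p))   # both close to 2.1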
{ "source": [ "https://stats.stackexchange.com/questions/279718", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/161448/" ] }
281,162
I have been trying to build a system which can scale a number down so it lands between two bounds, and I am stuck on the mathematical part of it. What I'm thinking is: take, say, the number 200 and normalize it so it falls within a range, say 0 to 0.66, or 0.66 to 1, or 1 to 1.66. The target range is variable as well. Any help would be appreciated. Thanks
Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000]. So let $r_{\text{min}}$ denote the minimum of the range of your measurement $r_{\text{max}}$ denote the maximum of the range of your measurement $t_{\text{min}}$ denote the minimum of the range of your desired target scaling $t_{\text{max}}$ denote the maximum of the range of your desired target scaling $m\in[r_{\text{min}},r_{\text{max}}]$ denote your measurement to be scaled Then $$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}}\times (t_{\text{max}}-t_{\text{min}}) + t_{\text{min}}$$ will scale $m$ linearly into $[t_{\text{min}},t_{\text{max}}]$ as desired. To go step by step, $ m\mapsto m-r_{\text{min}}$ maps $m$ to $[0,r_{\text{max}}-r_{\text{min}}]$. Next, $$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}} $$ maps $m$ to the interval $[0,1]$, with $m=r_{\text{min}}$ mapped to $0$ and $m=r_{\text{max}}$ mapped to $1$. Multiplying this by $(t_{\text{max}}-t_{\text{min}})$ maps $m$ to $[0,t_{\text{max}}-t_{\text{min}}]$. Finally, adding $t_{\text{min}}$ shifts everything and maps $m$ to $[t_{\text{min}},t_{\text{max}}]$ as desired.
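A direct translation of the formula into a small helper function (written in Python here, though the answer is language-agnostic):

def rescale(m, r_min, r_max, t_min, t_max):
    # Linearly map m from [r_min, r_max] onto [t_min, t_max].
    return (m - r_min) / (r_max - r_min) * (t_max - t_min) + t_min

# Example: 200 measured on a 0..1000 scale, mapped into [0, 0.66].
print(rescale(200, 0, 1000, 0.0, 0.66))   # 0.132
print(rescale(0, 0, 1000, 0.0, 0.66))     # 0.0  (range endpoints map to target endpoints)
print(rescale(1000, 0, 1000, 0.0, 0.66))  # 0.66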
{ "source": [ "https://stats.stackexchange.com/questions/281162", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/162398/" ] }
282,160
I am training a simple neural network on the CIFAR10 dataset. After some time, validation loss started to increase, whereas validation accuracy is also increasing. The test loss and test accuracy continue to improve. How is this possible? It seems that if validation loss increase, accuracy should decrease. P.S. There are several similar questions, but nobody explained what was happening there.
Other answers explain well how accuracy and loss are not necessarily exactly (inversely) correlated, as loss measures a difference between raw prediction (float) and class (0 or 1), while accuracy measures the difference between thresholded prediction (0 or 1) and class. So if raw predictions change, loss changes but accuracy is more "resilient" as predictions need to go over/under a threshold to actually change accuracy. However, accuracy and loss intuitively seem to be somewhat (inversely) correlated, as better predictions should lead to lower loss and higher accuracy, and the case of higher loss and higher accuracy shown by OP is surprising. I have myself encountered this case several times, and I present here my conclusions based on the analysis I had conducted at the time. There may be other reasons for OP's case. Let's consider the case of binary classification, where the task is to predict whether an image is a cat or a horse, and the output of the network is a sigmoid (outputting a float between 0 and 1), where we train the network to output 1 if the image is one of a cat and 0 otherwise. I believe that in this case, two phenomenons are happening at the same time. Some images with borderline predictions get predicted better and so their output class changes (eg a cat image whose prediction was 0.4 becomes 0.6). This is the classic " loss decreases while accuracy increases " behavior that we expect. Some images with very bad predictions keep getting worse (eg a cat image whose prediction was 0.2 becomes 0.1). This leads to a less classic " loss increases while accuracy stays the same ". Note that when one uses cross-entropy loss for classification as it is usually done, bad predictions are penalized much more strongly than good predictions are rewarded. For a cat image, the loss is $log(1-prediction)$ , so even if many cat images are correctly predicted (low loss), a single misclassified cat image will have a high loss, hence "blowing up" your mean loss. See this answer for further illustration of this phenomenon. (Getting increasing loss and stable accuracy could also be caused by good predictions being classified a little worse, but I find it less likely because of this loss "asymmetry"). So I think that when both accuracy and loss are increasing, the network is starting to overfit, and both phenomena are happening at the same time. The network is starting to learn patterns only relevant for the training set and not great for generalization, leading to phenomenon 2, some images from the validation set get predicted really wrong, with an effect amplified by the "loss asymmetry". However, it is at the same time still learning some patterns which are useful for generalization (phenomenon one, "good learning") as more and more images are being correctly classified. I sadly have no answer for whether or not this "overfitting" is a bad thing in this case: should we stop the learning once the network is starting to learn spurious patterns, even though it's continuing to learn useful ones along the way? Finally, I think this effect can be further obscured in the case of multi-class classification, where the network at a given epoch might be severely overfit on some classes but still learning on others.
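A tiny numeric illustration (my own, with made-up predictions) of the two phenomena: one borderline prediction improving flips a label (accuracy goes up), while one already-bad prediction getting worse blows up the mean cross-entropy without changing accuracy.

import numpy as np

def binary_cross_entropy(y, p):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def accuracy(y, p):
    return np.mean((p > 0.5) == y.astype(bool))

y = np.array([1, 1, 1, 1, 0])                      # four cats, one horse
p_before = np.array([0.4, 0.9, 0.9, 0.2, 0.1])
p_after  = np.array([0.6, 0.9, 0.9, 0.05, 0.1])    # borderline case fixed, bad case worse

for name, p in [("before", p_before), ("after", p_after)]:
    print(name, round(binary_cross_entropy(y, p), 3), accuracy(y, p))
# Accuracy rises from 0.6 to 0.8, yet the mean loss also rises (0.57 -> 0.76),
# driven almost entirely by the single very wrong prediction (the log(0.05) term).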
{ "source": [ "https://stats.stackexchange.com/questions/282160", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81013/" ] }
282,419
Is the average of multiple positive-definite matrices necessarily positive-definite or positive semi-definite? The average is element-wise average.
Yes, it is. jth's answer is correct (+1), but I think you can get a much simpler explanation with just basic linear algebra. Assume $A$ and $B$ are positive definite matrices of size $n$. By definition this means that for all nonzero $u \in R^n$, $0 < u^TAu$ and $0 < u^TBu$. This means that $0 < u^TAu + u^TBu$, or equivalently that $0 < u^T(A+B)u$, i.e. $(A+B)$ has to be positive definite. The same argument extends to any finite sum, and dividing by the (positive) number of matrices preserves the inequality, so the element-wise average of positive-definite matrices is positive definite as well.
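A quick numerical check (my own, not from the answer) that the element-wise average of random positive-definite matrices has strictly positive eigenvalues:

import numpy as np

rng = np.random.default_rng(0)
n, n_matrices = 5, 10

def random_pd(n):
    # A'A + I is symmetric positive definite for any real square A.
    A = rng.normal(size=(n, n))
    return A.T @ A + np.eye(n)

mats = [random_pd(n) for _ in range(n_matrices)]
avg = sum(mats) / n_matrices

print(np.linalg.eigvalsh(avg).min() > 0)  # True: the average is positive definite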
{ "source": [ "https://stats.stackexchange.com/questions/282419", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/78448/" ] }
282,459
I am trying to understand how XGBoost works. I already understand how gradient boosted trees work on Python sklearn. What is not clear to me is if XGBoost works the same way, but faster, or if there are fundamental differences between it and the python implementation. When I read this paper http://learningsys.org/papers/LearningSys_2015_paper_32.pdf It looks to me like the end result coming out of XGboost is the same as in the Python implementation, however the main difference is how XGboost finds the best split to make in each regression tree. Basically, XGBoost gives the same result, but it is faster. Is this correct, or is there something else I am missing ?
You are correct, XGBoost ('eXtreme Gradient Boosting') and sklearn's GradientBoost are fundamentally the same as they are both gradient boosting implementations. However, there are very significant differences under the hood in a practical sense. XGBoost is a lot faster (see http://machinelearningmastery.com/gentle-introduction-xgboost-applied-machine-learning/ ) than sklearn's. XGBoost is quite memory-efficient and can be parallelized (I think sklearn's cannot do so by default, I don't know exactly about sklearn's memory-efficiency but I am pretty confident it is below XGBoost's). Having used both, XGBoost's speed is quite impressive and its performance is superior to sklearn's GradientBoosting.
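A rough timing sketch of the comparison, assuming the xgboost package is installed; the dataset size and hyperparameters are arbitrary choices and exact numbers vary by machine and version, but XGBoost is typically much faster at comparable settings.

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

models = {
    "sklearn GradientBoosting": GradientBoostingClassifier(
        n_estimators=100, max_depth=3, learning_rate=0.1),
    "XGBoost": XGBClassifier(
        n_estimators=100, max_depth=3, learning_rate=0.1, n_jobs=4),
}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    print(name, round(elapsed, 2), "seconds, train accuracy:", round(model.score(X, y), 3))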
{ "source": [ "https://stats.stackexchange.com/questions/282459", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/101955/" ] }
282,987
Which sequential input problems are best suited for each? Does input dimensionality determine which is a better match? Are problems which require "longer memory" better suited for an LSTM RNN, while problems with cyclical input patterns (stock market, weather) are more easily solved by an HMM? It seems like there is a lot of overlap; I'm curious what subtle differences exist between the two.
Summary Hidden Markov Models (HMMs) are much simpler than Recurrent Neural Networks (RNNs), and rely on strong assumptions which may not always be true. If the assumptions are true then you may see better performance from an HMM since it is less finicky to get working. An RNN may perform better if you have a very large dataset, since the extra complexity can take better advantage of the information in your data. This can be true even if the HMMs assumptions are true in your case. Finally, don't be restricted to only these two models for your sequence task, sometimes simpler regressions (e.g. ARIMA) can win out, and sometimes other complicated approaches such as Convolutional Neural Networks might be the best. (Yes, CNNs can be applied to some kinds of sequence data just like RNNs.) As always, the best way to know which model is best is to make the models and measure performance on a held out test set. Strong Assumptions of HMMs State transitions only depend on the current state, not on anything in the past. This assumption does not hold in a lot of the areas I am familiar with. For example, pretend you are trying to predict for every minute of the day whether a person was awake or asleep from movement data. The chance of someone transitioning from asleep to awake increases the longer the person has been in the asleep state. An RNN could theoretically learn this relationship and exploit it for higher predictive accuracy. You can try to get around this, for example by including the previous state as a feature, or defining composite states, but the added complexity does not always increase an HMM's predictive accuracy, and it definitely doesn't help computation times. You must pre-define the total number of states. Returning to the sleep example, it may appear as if there are only two states we care about. However, even if we only care about predicting awake vs. asleep , our model may benefit from figuring out extra states such as driving, showering, etc. (e.g. showering usually comes right before sleeping). Again, an RNN could theoretically learn such a relationship if showed enough examples of it. Difficulties with RNNs It may seem from the above that RNNs are always superior. I should note, though, that RNNs can be difficult to get working, especially when your dataset is small or your sequences very long. I've personally had troubles getting RNNs to train on some of my data, and I have a suspicion that most published RNN methods/guidelines are tuned to text data. When trying to use RNNs on non-text data I have had to perform a wider hyperparameter search than I care to in order to get good results on my particular datasets. In some cases, I've found the best model for sequential data is actually a UNet style ( https://arxiv.org/pdf/1505.04597.pdf ) Convolutional Neural Network model since it is easier and faster to train, and is able to take the full context of the signal into account.
{ "source": [ "https://stats.stackexchange.com/questions/282987", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/162507/" ] }
282,988
Suppose LN(Y) is regressed on a matrix of binary variables and a continuous variable. How can the interactive effect of the continuous variable and each one of the binary variables be determined? For example, suppose I attempt to estimate the effect of rainfall in a geographic area on the amount of vegetation in that area, having divided that geographic area into sections. How can I isolate the effect of rainfall on amount of vegetation in each one of those geographic sections?
{ "source": [ "https://stats.stackexchange.com/questions/282988", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/149193/" ] }
283,170
We already had multiple questions about unbalanced data when using logistic regression , SVM , decision trees , bagging and a number of other similar questions, what makes it a very popular topic! Unfortunately, each of the questions seems to be algorithm-specific and I didn't find any general guidelines for dealing with unbalanced data. Quoting one of the answers by Marc Claesen , dealing with unbalanced data (...) heavily depends on the learning method. Most general purpose approaches have one (or several) ways to deal with this. But when exactly should we worry about unbalanced data? Which algorithms are mostly affected by it and which are able to deal with it? Which algorithms would need us to balance the data? I am aware that discussing each of the algorithms would be impossible on a Q&A site like this. I am rather looking for general guidelines on when it could be a problem.
Not a direct answer, but it's worth noting that in the statistical literature, some of the prejudice against unbalanced data has historical roots. Many classical models simplify neatly under the assumption of balanced data, especially for methods like ANOVA that are closely related to experimental design—a traditional / original motivation for developing statistical methods. But the statistical / probabilistic arithmetic gets quite ugly, quite quickly, with unbalanced data. Prior to the widespread adoption of computers, the by-hand calculations were so extensive that estimating models on unbalanced data was practically impossible. Of course, computers have basically rendered this a non-issue. Likewise, we can estimate models on massive datasets, solve high-dimensional optimization problems, and draw samples from analytically intractable joint probability distributions, all of which were functionally impossible like, fifty years ago. It's an old problem, and academics sank a lot of time into working on the problem...meanwhile, many applied problems outpaced / obviated that research, but old habits die hard... Edit to add: I realize I didn't come out and just say it: there isn't a low level problem with using unbalanced data. In my experience, the advice to "avoid unbalanced data" is either algorithm-specific, or inherited wisdom. I agree with AdamO that in general, unbalanced data poses no conceptual problem to a well-specified model.
{ "source": [ "https://stats.stackexchange.com/questions/283170", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/35989/" ] }
283,172
R-squared (coefficient of determination) is usually used to assess the goodness of fit of a regression model to the data. Here, I provide two simple datasets whose best-fit lines I think are equally good, yet they get two different r-squared values. x1 = [1,2,3], y1 = [1,2,3.5]; x2 = [1,2,3], y2 = [2,4,6.5]. The best-fit line to x1,y1 gets r2=0.9868, and the best-fit line to x2,y2 gets r2=0.9959. While the r-squared values are different for these two best-fit lines, the residuals at the three points are exactly the same for both: [-0.083, 0.167, -0.083]. I think these two lines are equally good at fitting their respective data, yet they get different r-squared values. What is wrong with my intuition about the coefficient of determination?
{ "source": [ "https://stats.stackexchange.com/questions/283172", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146327/" ] }
283,207
I am running a generalized linear model with Gamma distribution in R (glm, family=gamma) for my data (gene expression as response variable and few predictors). I want to calculate r-squared for this model. I have been reading about it online and found there are multiple formulas for calculating $R^2$ (psuedo) for glm (in R) with Gaussian (r2 from linear model), logistic regression (1-deviance/null deviance), Poisson distribution (using pR2 in the pscl package, D-squared value from the modEvA R package). But I could not find anything specific to Gamma distributions. Can pscl and modEVA packages be used for the Gamma distribution as well, or is there any other formula for doing the same?
{ "source": [ "https://stats.stackexchange.com/questions/283207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/163374/" ] }
284,996
I was wondering if it might be possible to generate correlated random binomial variables following a linear transformation approach? Below, I tried something simple in R and it produces some correlation. But I was wondering if there is a principled way to do this?
X1 = rbinom(1e4, 6, .5) ; X2 = rbinom(1e4, 6, .5) ; X3 = rbinom(1e4, 6, .5) ; a = .5
Y1 = X1 + (a*X2) ; Y2 = X2 + (a*X3) ## Y1 and Y2 are supposed to be correlated
cor(Y1, Y2)
Binomial variables are usually created by summing independent Bernoulli variables. Let's see whether we can start with a pair of correlated Bernoulli variables $(X,Y)$ and do the same thing. Suppose $X$ is a Bernoulli$(p)$ variable (that is, $\Pr(X=1)=p$ and $\Pr(X=0)=1-p$) and $Y$ is a Bernoulli$(q)$ variable. To pin down their joint distribution we need to specify all four combinations of outcomes. Writing $$\Pr((X,Y)=(0,0))=a,$$ we can readily figure out the rest from the axioms of probability: $$\Pr((X,Y)=(1,0))=1-q-a, \\\Pr((X,Y)=(0,1))=1-p-a, \\\Pr((X,Y)=(1,1))=a+p+q-1.$$ Plugging this into the formula for the correlation coefficient $\rho$ and solving gives $$a = (1-p)(1-q) + \rho\sqrt{{pq}{(1-p)(1-q)}}.\tag{1}$$ Provided all four probabilities are non-negative, this will give a valid joint distribution--and this solution parameterizes all bivariate Bernoulli distributions. (When $p=q$, there is a solution for all mathematically meaningful correlations between $-1$ and $1$.) When we sum $n$ of these variables, the correlation remains the same--but now the marginal distributions are Binomial$(n,p)$ and Binomial$(n,q)$, as desired. Example Let $n=10$, $p=1/3$, $q=3/4$, and we would like the correlation to be $\rho=-4/5$. The solution to $(1)$ is $a=0.00336735$ (and the other probabilities are around $0.247$, $0.663$, and $0.087$). Here is a plot of $1000$ realizations from the joint distribution: The red lines indicate the means of the sample and the dotted line is the regression line. They are all close to their intended values. The points have been randomly jittered in this image to resolve the overlaps: after all, Binomial distributions only produce integral values, so there will be a great amount of overplotting. One way to generate these variables is to sample $n$ times from $\{1,2,3,4\}$ with the chosen probabilities and then convert each $1$ into $(0,0)$, each $2$ into $(1,0)$, each $3$ into $(0,1)$, and each $4$ into $(1,1)$. Sum the results (as vectors) to obtain one realization of $(X,Y)$. Code Here is an R implementation. # # Compute Pr(0,0) from rho, p=Pr(X=1), and q=Pr(Y=1). # a <- function(rho, p, q) { rho * sqrt(p*q*(1-p)*(1-q)) + (1-p)*(1-q) } # # Specify the parameters. # n <- 10 p <- 1/3 q <- 3/4 rho <- -4/5 # # Compute the four probabilities for the joint distribution. # a.0 <- a(rho, p, q) prob <- c(`(0,0)`=a.0, `(1,0)`=1-q-a.0, `(0,1)`=1-p-a.0, `(1,1)`=a.0+p+q-1) if (min(prob) < 0) { print(prob) stop("Error: a probability is negative.") } # # Illustrate generation of correlated Binomial variables. # set.seed(17) n.sim <- 1000 u <- sample.int(4, n.sim * n, replace=TRUE, prob=prob) y <- floor((u-1)/2) x <- 1 - u %% 2 x <- colSums(matrix(x, nrow=n)) # Sum in groups of `n` y <- colSums(matrix(y, nrow=n)) # Sum in groups of `n` # # Plot the empirical bivariate distribution. # plot(x+rnorm(length(x), sd=1/8), y+rnorm(length(y), sd=1/8), pch=19, cex=1/2, col="#00000010", xlab="X", ylab="Y", main=paste("Correlation is", signif(cor(x,y), 3))) abline(v=mean(x), h=mean(y), col="Red") abline(lm(y ~ x), lwd=2, lty=3)
{ "source": [ "https://stats.stackexchange.com/questions/284996", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/140365/" ] }
285,000
I have data on security incidents of various companies. I am trying to predict the 'time to discovery' using covariates such as 'motive of security incident', 'pattern of security incident', 'company location' and so on. Each company experienced at least one incident so there are multiple lines per company (where each line represents an incident). I ran a GLMM model (normal distribution with identity link function) but I keep getting an error saying that "estimated covariance matrix of the random effects (G matrix) is not positive definite" and "final Hessian matrix is not positive definite although all convergence criteria are satisfied" How can I address these errors?
{ "source": [ "https://stats.stackexchange.com/questions/285000", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/145331/" ] }
285,004
I'm trying to cluster a dataset using 4 variables, all of which are categorical variables. I'd also like to include another numerical variable that's actually the number of observations of another column. My data is laid out like below: ColA, ColB, ColC, ColD, ColE where ColE would be the frequency of ColD; and all Columns A-D are categorical variable. I don't want to use a supervising learning technique because of various reasons (the top one being I don't know what my result should be; only that I want to have k number of groups that are similar enough). What's the best clustering algorithm to use for this? I've been thinking k-modes but that doesn't solve the problem of ColE being a feature of ColD.
{ "source": [ "https://stats.stackexchange.com/questions/285004", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165027/" ] }
285,834
I was led to use some techniques from statistics and machine learning, especially the random forest method. I need to understand the difference between random forests and decision trees, and what the advantages of random forests are compared to decision trees.
You are right that the two concepts are similar. As is implied by the names "Tree" and "Forest," a Random Forest is essentially a collection of Decision Trees. A decision tree is built on an entire dataset, using all the features/variables of interest, whereas a random forest randomly selects observations/rows and specific features/variables to build multiple decision trees from and then averages the results. After a large number of trees are built using this method, each tree "votes" or chooses the class, and the class receiving the most votes by a simple majority is the "winner" or predicted class. There are of course some more detailed differences, but this is the main conceptual difference.
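For concreteness, here is a minimal sketch of that difference (my own illustration, not part of the original answer, assuming scikit-learn is available). The single tree uses every row and considers every feature at each split, while the forest grows many trees on bootstrap samples with a random subset of features per split and then combines their votes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

tree = DecisionTreeClassifier(random_state=0)          # one tree: all rows, all features at each split
forest = RandomForestClassifier(n_estimators=200,      # many trees on bootstrap samples
                                max_features="sqrt",   # random feature subset per split
                                random_state=0)

print("tree  :", cross_val_score(tree, X, y, cv=5).mean())
print("forest:", cross_val_score(forest, X, y, cv=5).mean())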
{ "source": [ "https://stats.stackexchange.com/questions/285834", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165637/" ] }
285,840
I am currently working on missing value imputation. The dataset I am using is the Mammographic Mass data set found here . The dataset contains missing values in multiple columns. I need some ideas on how I can build a model, or use any other technique, to impute the missing values.
{ "source": [ "https://stats.stackexchange.com/questions/285840", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/151913/" ] }
285,931
I'm aware that there's been lots of advances with regards to image recognition, image classification, etc with deep, convolutional neural nets. But if I train a net on, say, PNG images, will it only work for images so encoded? What other image properties affect this? (alpha channel, interlacing, resolution, etc?)
The short answer is NO . The format in which the image is encoded affects how the pixel data is stored and compressed (and hence its quality), not what the network sees. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multiplications, element-wise additions and mapping functions). A neural network sees a tensor as its input (i.e. a multi-dimensional array). Its shape is usually 4-D (number of images per batch, image height, image width, number of channels). Different image formats (especially lossy ones) may produce slightly different input arrays, but strictly speaking neural nets see arrays as their input, NOT images.
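To make the point concrete, here is a minimal sketch (my own, assuming Pillow and NumPy are available): the same picture saved as PNG and as JPEG decodes to arrays of the same shape, and that array, not the file format, is what the network receives.
import numpy as np
from PIL import Image

img = Image.fromarray(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8))
img.save("demo.png")
img.save("demo.jpg")

a_png = np.asarray(Image.open("demo.png"), dtype=np.float32)
a_jpg = np.asarray(Image.open("demo.jpg"), dtype=np.float32)

print(a_png.shape, a_jpg.shape)      # identical shapes, here (32, 32, 3)
print(np.abs(a_png - a_jpg).max())   # small differences, only because JPEG is lossy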
{ "source": [ "https://stats.stackexchange.com/questions/285931", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95456/" ] }
286,016
I wanted to know how much of machine learning requires optimization. From what I've heard statistics is an important mathematical topic for people working with machine learning. Similarly how important is it for someone working with machine learning to learn about convex or non-convex optimization?
The way I look at it is that statistics / machine learning tells you what you should be optimizing, and optimization is how you actually do so. For example, consider linear regression with $Y = X\beta + \varepsilon$ where $E(\varepsilon) = 0$ and $Var(\varepsilon) = \sigma^2I$. Statistics tells us that this is (often) a good model, but we find our actual estimate $\hat \beta$ by solving an optimization problem $$ \hat \beta = \textrm{argmin}_{b \in \mathbb R^p} ||Y - Xb||^2. $$ The properties of $\hat \beta$ are known to us through statistics so we know that this is a good optimization problem to solve. In this case it is an easy optimization but this still shows the general principle. More generally, much of machine learning can be viewed as solving $$ \hat f = \textrm{argmin}_{f \in \mathscr F} \frac 1n \sum_{i=1}^n L(y_i, f(x_i)) $$ where I'm writing this without regularization but that could easily be added. A huge amount of research in statistical learning theory (SLT) has studied the properties of these argminima, whether or not they are asymptotically optimal, how they relate to the complexity of $\mathscr F$, and many other such things. But when you actually want to get $\hat f$, often you end up with a difficult optimization and it's a whole separate set of people who study that problem. I think the history of SVM is a good example here. We have the SLT people like Vapnik and Cortes (and many others) who showed how SVM is a good optimization problem to solve. But then it was others like John Platt and the LIBSVM authors who made this feasible in practice. To answer your exact question, knowing some optimization is certainly helpful but generally no one is an expert in all these areas so you learn as much as you can but some aspects will always be something of a black box to you. Maybe you haven't properly studied the SLT results behind your favorite ML algorithm, or maybe you don't know the inner workings of the optimizer you're using. It's a lifelong journey.
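As a small illustration of that division of labour (my own sketch, assuming NumPy and SciPy), the statistical target below is the least-squares argmin, and a generic numerical optimizer and the closed-form solution land on the same estimate:
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
Y = X @ beta_true + rng.normal(scale=0.3, size=n)

loss = lambda b: np.sum((Y - X @ b) ** 2)          # what statistics says to minimize
beta_opt = minimize(loss, x0=np.zeros(p)).x        # how optimization actually finds it
beta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]    # the closed-form least-squares solution

print(np.round(beta_opt, 4))
print(np.round(beta_ols, 4))                       # essentially identical estimates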
{ "source": [ "https://stats.stackexchange.com/questions/286016", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/164741/" ] }
286,179
In practice, using a standard T-test to check the significance of a linear regression coefficient is common practice. The mechanics of the calculation make sense to me. Why is it that the T-distribution can be used to model the standard test statistic used in linear regression hypothesis testing? Standard test statistic I am referring to here: $$ T_{0} = \frac{\widehat{\beta} - \beta_{0}}{SE(\widehat{\beta})} $$
To understand why we use the t-distribution, you need to know the underlying distribution of $\widehat{\beta}$ and of the residual sum of squares ($RSS$), as these two put together will give you the t-distribution. The easier part is the distribution of $\widehat{\beta}$, which is a normal distribution - to see this, note that $\widehat{\beta}=(X^{T}X)^{-1}X^{T}Y$, so it is a linear function of $Y$, where $Y\sim N(X\beta, \sigma^{2}I_{n})$. As a result it is also normally distributed, $\widehat{\beta} \sim N(\beta, \sigma^{2}(X^{T}X)^{-1})$ - let me know if you need help deriving the distribution of $\widehat{\beta}$. Additionally, $RSS \sim \sigma^{2}\chi^{2}_{n-p}$, where $n$ is the number of observations and $p$ is the number of parameters used in your regression. The proof of this is a bit more involved, but also straightforward to derive (see proof here Why is RSS distributed chi square times n-p? ). Up until this point I have considered everything in matrix/vector notation, but let's for simplicity use $\widehat{\beta}_{i}$ and use its normal distribution, which will give us: \begin{equation} \frac{\widehat{\beta}_{i}-\beta_{i}}{\sigma\sqrt{(X^{T}X)^{-1}_{ii}}} \sim N(0,1) \end{equation} Additionally, defining $s^{2}=\frac{RSS}{n-p}$, which is an unbiased estimator for $\sigma^{2}$, the chi-squared distribution of $RSS$ can be rearranged as: \begin{equation} \frac{(n-p)s^{2}}{\sigma^{2}} \sim \chi^{2}_{n-p}, \end{equation} which is independent of the $N(0,1)$ above. By the definition of the $t_{n-p}$ distribution, dividing a standard normal by the square root of an independent chi-squared divided by its degrees of freedom gives you a t-distribution (for the proof see: A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution -- proof ), so you get that: \begin{equation} \frac{\widehat{\beta}_{i}-\beta_{i}}{s\sqrt{(X^{T}X)^{-1}_{ii}}} \sim t_{n-p}, \end{equation} where $s\sqrt{(X^{T}X)^{-1}_{ii}}=SE(\widehat{\beta}_{i})$. Let me know if it makes sense.
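A small numerical companion to the derivation (my own sketch, assuming NumPy and SciPy): compute $T_0$ from the formulas above on simulated data and read its p-value from the $t_{n-p}$ distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept plus one predictor
beta = np.array([1.0, 0.0])                              # the slope is truly zero
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
rss = np.sum((y - X @ beta_hat) ** 2)
s2 = rss / (n - p)                                       # unbiased estimate of sigma^2
se = np.sqrt(s2 * np.diag(XtX_inv))

t0 = beta_hat[1] / se[1]                                 # T0 for H0: beta_1 = 0
p_value = 2 * stats.t.sf(abs(t0), df=n - p)
print(t0, p_value)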
{ "source": [ "https://stats.stackexchange.com/questions/286179", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165894/" ] }
286,203
The question I'm trying to answer is (fundamentally) this: I have a bag of coins that I suspect are weighted, some towards heads, some towards tails . I toss each coin 4 times and record the outcomes (e.g., 3H1T). As a group, do the coins tend to be unfair? I can't figure out what an appropriate test would be, though it seems like there ought to be one. Here are some relevant thoughts and options I've considered. (1) Binomial test - Appropriate way to test EACH COIN's fairness, but (a) 4 tosses isn't enough for statistical significance ($\alpha$ = .05) at the level of the individual coin and (b) since I suspect different coins may be weighted in opposite directions, lumping all the data together would make these coins cancel each other out. (See similar comments on this question ) (2) Chi-square goodness-of-fit or multinomial test over counts - This will tell me if my observed counts for each outcome (4H0T, 3H1T, 2H2T...) differ from the expected counts (they do), but not how. It will return a high test statistic whether my coins are all magically fair (all 2H2T results) or if they are all weighted (all either 4H0T, or 0H4T). It also ignores the underlying binomial nature of the data. (3) Regression seems like overkill for this data, and linear regression/linear mixed models wouldn't answer the right question anyway: coins with opposite weighting would cancel each other out. For reference, my actual counts are as follows, of a total of 56 "coins". Unfortunately, since they're not real coins, and the experiment is over, I can't just go flip each one a few more times! 16 4H, 10 3H, 9 2H, 8 1H, 13 0H
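For concreteness, a minimal sketch of option (2) above (my own, assuming SciPy), testing the listed counts against a Binomial(4, 0.5) null for every coin; with expected counts of only 3.5 in the extreme cells the chi-square approximation is questionable, so this shows the mechanics rather than a recommended analysis.
import numpy as np
from scipy import stats

observed = np.array([13, 8, 9, 10, 16])                # counts of 0H, 1H, 2H, 3H, 4H
probs = stats.binom.pmf(np.arange(5), 4, 0.5)          # (1, 4, 6, 4, 1) / 16
expected = observed.sum() * probs                      # (3.5, 14, 21, 14, 3.5)

chi2, p = stats.chisquare(observed, f_exp=expected)
print(chi2, p)   # a large statistic: the counts pile up in the 0H/4H tails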
{ "source": [ "https://stats.stackexchange.com/questions/286203", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165883/" ] }
288,261
Most common convolutional neural networks contain pooling layers to reduce the dimensions of the output features. Why couldn't I achieve the same thing by simply increasing the stride of the convolutional layer? What makes the pooling layer necessary?
You can indeed do that, see Striving for Simplicity: The All Convolutional Net . Pooling gives you some amount of translation invariance, which may or may not be helpful. Also, pooling is faster to compute than convolutions. Still, you can always try replacing pooling by convolution with stride and see what works better. Some current works use average pooling ( Wide Residual Networks , DenseNets ), others use convolution with stride ( DelugeNets )
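A minimal shape check (my own sketch, plain NumPy): on the same input, 2x2 max pooling with stride 2 and a 2x2 filter applied with stride 2 both halve the spatial dimensions; the difference is taking a maximum versus a learned weighted sum.
import numpy as np

x = np.arange(36, dtype=float).reshape(6, 6)
w = np.random.default_rng(0).normal(size=(2, 2))   # a "learned" 2x2 filter

def max_pool(x, k=2, s=2):
    return np.array([[x[i:i+k, j:j+k].max()
                      for j in range(0, x.shape[1] - k + 1, s)]
                     for i in range(0, x.shape[0] - k + 1, s)])

def strided_conv(x, w, s=2):
    k = w.shape[0]
    return np.array([[np.sum(x[i:i+k, j:j+k] * w)
                      for j in range(0, x.shape[1] - k + 1, s)]
                     for i in range(0, x.shape[0] - k + 1, s)])

print(max_pool(x).shape, strided_conv(x, w).shape)   # both (3, 3): each halves the spatial size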
{ "source": [ "https://stats.stackexchange.com/questions/288261", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91850/" ] }
288,451
In 5.5, Deep Learning (by Ian Goodfellow, Yoshua Bengio and Aaron Courville), it states that Any loss consisting of a negative log-likelihood is a cross-entropy between the empirical distribution defined by the training set and the probability distribution defined by model. For example, mean squared error is the cross-entropy between the empirical distribution and a Gaussian model. I can't understand why they are equivalent and the authors do not expand on the point.
Let the data be $\mathbf{x}=(x_1, \ldots, x_n)$. Write $F(\mathbf{x})$ for the empirical distribution. By definition, for any function $f$, $$\mathbb{E}_{F(\mathbf{x})}[f(X)] = \frac{1}{n}\sum_{i=1}^n f(x_i).$$ Let the model $M$ have density $e^{f(x)}$ where $f$ is defined on the support of the model. The cross-entropy of $F(\mathbf{x})$ and $M$ is defined to be $$H(F(\mathbf{x}), M) = -\mathbb{E}_{F(\mathbf{x})}[\log(e^{f(X)})] = -\mathbb{E}_{F(\mathbf{x})}[f(X)] =-\frac{1}{n}\sum_{i=1}^n f(x_i).\tag{1}$$ Assuming $\mathbf{x}$ is a simple random sample, its negative log likelihood is $$-\log(L(\mathbf{x}))=-\log \prod_{i=1}^n e^{f(x_i)} = -\sum_{i=1}^n f(x_i)\tag{2}$$ by virtue of the properties of logarithms (they convert products to sums). Expression $(2)$ is a constant $n$ times expression $(1)$. Because loss functions are used in statistics only by comparing them, it makes no difference that one is a (positive) constant times the other. It is in this sense that the negative log likelihood "is a" cross-entropy in the quotation. It takes a bit more imagination to justify the second assertion of the quotation. The connection with squared error is clear, because for a "Gaussian model" that predicts values $p(x)$ at points $x$, the value of $f$ at any such point is $$f(x; p, \sigma) = -\frac{1}{2}\left(\log(2\pi \sigma^2) + \frac{(x-p(x))^2}{\sigma^2}\right),$$ which is the squared error $(x-p(x))^2$ but rescaled by $1/(2\sigma^2)$ and shifted by a function of $\sigma$. One way to make the quotation correct is to assume it does not consider $\sigma$ part of the "model"--$\sigma$ must be determined somehow independently of the data. In that case differences between mean squared errors are proportional to differences between cross-entropies or log-likelihoods, thereby making all three equivalent for model fitting purposes. (Ordinarily, though, $\sigma = \sigma(x)$ is fit as part of the modeling process, in which case the quotation would not be quite correct.)
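A quick numerical check of the argument (my own sketch, assuming NumPy, and keeping $\sigma$ fixed as the answer requires): the Gaussian negative log-likelihood equals the mean squared error rescaled by $1/(2\sigma^2)$ plus a constant.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
p = 0.3 * np.ones_like(x)          # some model's predictions p(x)
sigma = 1.7                        # held fixed, not fit from the data

nll = np.mean(0.5 * (np.log(2 * np.pi * sigma**2) + (x - p)**2 / sigma**2))
mse = np.mean((x - p)**2)

print(nll, mse / (2 * sigma**2) + 0.5 * np.log(2 * np.pi * sigma**2))   # the two numbers agree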
{ "source": [ "https://stats.stackexchange.com/questions/288451", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/147592/" ] }
289,467
In dimensionality reduction techniques such as Principal Component Analysis, LDA, etc., the term manifold is often used. What is a manifold in non-technical terms? If a point $x$ belongs to a sphere whose dimension I want to reduce, and if there is noise $y$ where $x$ and $y$ are uncorrelated, then the actual points $x$ would be far separated from each other due to the noise. Therefore, noise filtering would be required, and dimension reduction would be performed on $z = x+y$. So, do $x$ and $y$ belong to different manifolds here? I am working on point cloud data of the kind often used in robot vision; the point clouds are noisy due to noise in acquisition, and I need to reduce the noise before dimension reduction, otherwise I will get an incorrect dimension reduction. So, what is the manifold here, and is the noise part of the same manifold to which $x$ belongs?
In non-technical terms, a manifold is a continuous geometrical structure having finite dimension: a line, a curve, a plane, a surface, a sphere, a ball, a cylinder, a torus, a "blob"... something like this: It is a generic term used by mathematicians to say "a curve" (dimension 1) or "a surface" (dimension 2), or a 3D object (dimension 3)... for any possible finite dimension $n$. A one-dimensional manifold is simply a curve (line, circle...). A two-dimensional manifold is simply a surface (plane, sphere, torus, cylinder...). A three-dimensional manifold is a "full object" (ball, full cube, the 3D space around us...). A manifold is often described by an equation: the set of points $(x,y)$ such that $x^2+y^2=1$ is a one-dimensional manifold (a circle). A manifold has the same dimension everywhere. For example, if you append a line (dimension 1) to a sphere (dimension 2) then the resulting geometrical structure is not a manifold. Unlike the more general notions of metric space or topological space, also intended to describe our natural intuition of a continuous set of points, a manifold is intended to be something locally simple: like a finite-dimensional vector space, $\mathbb{R}^n$. This rules out abstract spaces (like infinite-dimensional spaces) that often fail to have a concrete geometric meaning. Unlike a vector space, manifolds can have various shapes. Some manifolds can be easily visualized (sphere, ball...), some are difficult to visualize, like the Klein bottle or the real projective plane . In statistics, machine learning, or applied maths generally, the word "manifold" is often used to mean "like a linear subspace, but possibly curved". Anytime you write a linear equation like $3x+2y-4z=1$ you get a linear (affine) subspace (here a plane). Usually, when the equation is nonlinear, like $x^2+2y^2+3z^2=7$, this is a manifold (here a stretched sphere). For example, the " manifold hypothesis " of ML says "high dimensional data are points in a low dimensional manifold with high dimensional noise added". You can imagine points of a 1D circle with some 2D noise added. While the points are not exactly on the circle, they statistically satisfy the equation $x^2+y^2=1$. The circle is the underlying manifold:
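A tiny numerical illustration of that last paragraph (my own sketch, assuming NumPy): points on the one-dimensional manifold $x^2+y^2=1$ with two-dimensional noise added no longer satisfy the equation exactly, but they do statistically.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=2000)
x = np.cos(theta) + 0.05 * rng.normal(size=2000)
y = np.sin(theta) + 0.05 * rng.normal(size=2000)

print(np.mean(x**2 + y**2))   # close to 1: the circle is the underlying manifold of the noisy data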
{ "source": [ "https://stats.stackexchange.com/questions/289467", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21961/" ] }
290,750
PCA is considered a linear procedure, however: $$\mathrm{PCA}(X)\neq \mathrm{PCA}(X_1)+\mathrm{PCA}(X_2)+\ldots+\mathrm{PCA}(X_n),$$ where $X=X_1+X_2+\ldots+X_n$. This is to say that the eigenvectors obtained by the PCAs on the data matrices $X_i$ do not sum up to equal the eigenvectors obtained by PCA on the sum of the data matrices $X_i$. But isn't the definition of a linear function $f$ that: $$f(x+y)=f(x)+f(y)?$$ So why is PCA considered "linear" if it does not satisfy this very basic condition of linearity?
When we say that PCA is a linear method, we refer to the dimensionality reducing mapping $f:\mathbf x\mapsto \mathbf z$ from high-dimensional space $\mathbb R^p$ to a lower-dimensional space $\mathbb R^k$. In PCA, this mapping is given by multiplication of $\mathbf x$ by the matrix of PCA eigenvectors and so is manifestly linear (matrix multiplication is linear): $$\mathbf z = f(\mathbf x) = \mathbf V^\top \mathbf x.$$ This is in contrast with nonlinear methods of dimensionality reduction , where the dimensionality reducing mapping can be nonlinear. On the other hand, the $k$ top eigenvectors $\mathbf V\in \mathbb R^{p\times k}$ are computed from the data matrix $\mathbf X\in \mathbb R^{n\times p}$ using what you called $\mathrm{PCA}()$ in your question: $$\mathbf V = \mathrm{PCA}(\mathbf X),$$ and this mapping is certainly non-linear: it involves computing eigenvectors of the covariance matrix, which is a non-linear procedure. (As a trivial example, multiplying $\mathbf X$ by $2$ increases the covariance matrix by $4$, but its eigenvectors stay the same as they are normalized to have unit length.)
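A small numerical check of both claims (my own sketch, assuming NumPy): the mapping $\mathbf z = \mathbf V^\top \mathbf x$ is additive, while the data-to-eigenvectors mapping is not linear, since scaling $\mathbf X$ by 2 multiplies the covariance by 4 yet leaves the unit-length eigenvectors unchanged.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # some correlated data
Xc = X - X.mean(axis=0)

def top_eigvecs(Xc, k=2):
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # right singular vectors = PCA eigenvectors
    return Vt[:k].T                                        # p x k matrix V

V = top_eigvecs(Xc)
x1, x2 = Xc[0], Xc[1]
print(np.allclose(V.T @ (x1 + x2), V.T @ x1 + V.T @ x2))   # True: z = V'x is linear in x

V2 = top_eigvecs(2 * Xc)
print(np.allclose(np.abs(V), np.abs(V2)))                  # True: PCA(2X) has the same eigenvectors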
{ "source": [ "https://stats.stackexchange.com/questions/290750", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/120194/" ] }
291,820
Intro Background Within a convolutional neural network, we usually have a general structure / flow that looks like this: input image (i.e. a 2D vector x ) (1st Convolutional layer (Conv1) starts here...) convolve a set of filters ( w1 ) along the 2D image (i.e. do the z1 = w1*x + b1 dot product multiplications), where z1 is 3D, and b1 is biases. apply an activation function (e.g. ReLu) to make z1 non-linear (e.g. a1 = ReLu(z1) ), where a1 is 3D. (2nd Convolutional layer (Conv2) starts here...) convolve a set of filters along the newly computed activations (i.e. do the z2 = w2*a1 + b2 dot product multiplications), where z2 is 3D, and and b2 is biases. apply an activation function (e.g. ReLu) to make z2 non-linear (e.g. a2 = ReLu(z2) ), where a2 is 3D. The Question The definition of the term "feature map" seems to vary from literature to literature. Concretely: For the 1st convolutional layer, does "feature map" corresponds to the input vector x , or the output dot product z1 , or the output activations a1 , or the "process" converting x to a1 , or something else? Similarly, for the 2nd convolutional layer, does "feature map" corresponds to the input activations a1 , or the output dot product z2 , or the output activation a2 , or the "process" converting a1 to a2 , or something else? In addition, is it true that the term "feature map" is exactly the same as "activation map"? (or do they actually mean two different thing?) Additional references: Snippets from Neural Networks and Deep Learning - Chapter 6 : *The nomenclature is being used loosely here. In particular, I'm using "feature map" to mean not the function computed by the convolutional layer, but rather the activation of the hidden neurons output from the layer. This kind of mild abuse of nomenclature is pretty common in the research literature. Snippets from Visualizing and Understanding Convolutional Networks by Matt Zeiler : In this paper we introduce a visualization technique that reveals the input stimuli that excite individual feature maps at any layer in the model. [...] Our approach, by contrast, provides a non-parametric view of invariance, showing which patterns from the training set activate the feature map. [...] a local contrast operation that normalizes the responses across feature maps. [...] To examine a given convnet activation, we set all other activations in the layer to zero and pass the feature maps as input to the attached deconvnet layer. [...] The convnet uses relu non-linearities, which rectify the feature maps thus ensuring the feature maps are always positive. [...] The convnet uses learned filters to convolve the feature maps from the previous layer. [...] Fig. 6, these visualizations are accurate representations of the input pattern that stimulates the given feature map in the model [...] when the parts of the original input image corresponding to the pattern are occluded, we see a distinct drop in activity within the feature map. [...] Remarks: also introduces the term "feature map" and "rectified feature map" in Fig 1. Snippets from Stanford CS231n Chapter on CNN : [...] One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters, and can be a symptom of high learning rates [...] Typical-looking activations on the first CONV layer (left), and the 5th CONV layer (right) of a trained AlexNet looking at a picture of a cat. Every box shows an activation map corresponding to some filter. 
Notice that the activations are sparse (most values are zero, in this visualization shown in black) and mostly local. Snippets from A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks [...] Every unique location on the input volume produces a number. After sliding the filter over all the locations, you will find out that what you’re left with is a 28 x 28 x 1 array of numbers, which we call an activation map or feature map.
A feature map, or activation map, is the output activations for a given filter (a1 in your case) and the definition is the same regardless of what layer you are on. Feature map and activation map mean exactly the same thing. It is called an activation map because it is a mapping that corresponds to the activation of different parts of the image, and also a feature map because it is also a mapping of where a certain kind of feature is found in the image. A high activation means a certain feature was found. A "rectified feature map" is just a feature map that was created using Relu. You could possibly see the term "feature map" used for the result of the dot products (z1) because this is also really a map of where certain features are in the image, but that is not common to see.
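A minimal sketch of that computation (my own, assuming NumPy and SciPy): the feature map, or activation map, for one filter is the cross-correlation of the input with that filter, plus a bias, passed through ReLU.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
x = rng.normal(size=(28, 28))          # a single-channel input "image"
w1 = rng.normal(size=(3, 3))           # one 3x3 filter
b1 = 0.1

z1 = correlate2d(x, w1, mode="valid") + b1   # pre-activation, shape (26, 26)
a1 = np.maximum(z1, 0.0)                     # the feature map / activation map for this filter

print(a1.shape)   # one such map per filter; stacking them gives the 3-D layer output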
{ "source": [ "https://stats.stackexchange.com/questions/291820", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113350/" ] }
294,995
Look at this excerpt from "The study skills handbook", Palgrave, 2012, by Stella Cottrell, page 155: Percentages Notice when percentages are given. Suppose, instead, the statement above read: 60% of people preferred oranges; 40% said they preferred apples. This looks convincing: Numerical quantities are given. But is the difference between 60% and 40% significant ? Here we would need to know how many people were asked. If 1000 people were asked of whom 600 preferred oranges, the number would be persuasive. However, if only 10 people were asked, 60% simply means 6 people preferred oranges. "60%" sounds convincing in a way that "6 out of 10" does not. As a critical reader, you need to be on the lookout for percentages being used to make insufficient data look impressive. What is this characteristic called in statistics? I would like to read more about it.
I would like to list another intuitive example. Suppose I tell you I can predict the outcome of any coin flip. You do not believe me and want to test my ability. You test me 5 times, and I get all of them right. Do you believe I have the special ability? Maybe not, because I can get all of them right by chance. (Specifically, suppose the coin is fair and each experiment is independent; then I can get all of them right with probability $0.5^5\approx0.03$ with no super power. See Shufflepants's link for a joke about it.) On the other hand, if you tested me a large number of times, then it is very unlikely that I could get them all right by chance. For example, if you tested me $100$ times, the probability of me getting all of them right is $0.5^{100}\approx 0$. The statistical concept is called statistical power; from Wikipedia: The power of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when the alternative hypothesis (H1) is true. Back to the coin-flip super-power example: essentially you want to run a hypothesis test. Null hypothesis (H0): I do not have the super power. Alternative hypothesis (H1): I have the super power. Now, as you can see in the numerical example (test me 5 times vs. test me 100 times), the statistical power is affected by the sample size. More to read here (more technical and based on the t-test). An interactive tool to understand statistical power can be found here . Note that the statistical power changes with the sample size!
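A quick numerical companion to the example (my own sketch, assuming SciPy): the chance of guessing every flip, and the power of a one-sided binomial test at level 0.05 when the true success probability is 0.6 rather than 0.5.
from scipy import stats

# Probability of getting every flip right by pure guessing:
for n in (5, 100):
    print(n, 0.5 ** n)                     # 0.03125 vs. about 8e-31

# Power of a one-sided binomial test at level 0.05 when the true
# success probability is 0.6 instead of 0.5:
for n in (5, 20, 100, 500):
    k_crit = stats.binom.ppf(0.95, n, 0.5) + 1      # smallest rejection threshold with size <= 0.05
    power = stats.binom.sf(k_crit - 1, n, 0.6)      # P(reject H0 | p = 0.6)
    print(n, round(float(power), 3))                # power climbs toward 1 as n grows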
{ "source": [ "https://stats.stackexchange.com/questions/294995", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46708/" ] }
295,005
In Box and Tiao (1973), page 156, the authors write that if the distributions of two random variables are identical except for location, then the distribution of their difference would certainly be symmetric. In other words, if two random variables have identical distributions except for their means, then the difference of the two random variables would be symmetric. But the authors have not provided any proof for this claim, maybe because it is supposed to be obvious. However, I am not able to understand why this statement is true. It would be helpful if someone could show a proof of this claim or give an intuitive explanation.
{ "source": [ "https://stats.stackexchange.com/questions/295005", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91688/" ] }
295,363
I am reading the book: Bishop, Pattern Recognition and Machine Learning (2006) which defines the exponential family as distributions of the form (Eq. 2.194): $$ p(\mathbf x|\boldsymbol \eta) = h(\mathbf x) g(\boldsymbol \eta) \exp \{\boldsymbol \eta^\mathrm T \mathbf u(\mathbf x)\} $$ But I see no restrictions placed on $h(\mathbf x)$ or $\mathbf u(\mathbf x)$ . Doesn't this mean that any distribution can be put in this form, by appropriate choice of $h(\mathbf x)$ and $\mathbf u(\mathbf x)$ (in fact only one of them has to be chosen properly!)? So how come the exponential family does not include all probability distributions? What am I missing? Finally, a more particular question that I am interested in is this: Is the Bernoulli distribution in the exponential family ? Wikipedia claims it is, but since I am obviously confused about something here, I would like to see why.
First, note there is a terminology problem in your title: "the exponential family" seems to imply one exponential family. You should say an exponential family ; there are many exponential families. Well, one consequence of your definition: $$p(\mathbf x|\boldsymbol \eta) = h(\mathbf x) g(\boldsymbol \eta) \exp \{\boldsymbol \eta^\mathrm T \mathbf u(\mathbf x)\}$$ is that the support of the distribution family indexed by the parameter $\eta$ does not depend on $\eta$ . (The support of a probability distribution is the (closure of the) smallest set with probability one, or in other words, where the distribution lives .) So it is enough to give a counterexample of a distribution family whose support depends on the parameter; the easiest example is the following family of uniform distributions: $ \text{U}(0, \eta), \quad \eta > 0$ . (The other answer by @Chaconne gives a more sophisticated counterexample.) Another, unrelated reason that not all distributions belong to an exponential family is that an exponential family distribution always has a moment generating function. Not all distributions have an mgf.
{ "source": [ "https://stats.stackexchange.com/questions/295363", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5536/" ] }
295,364
Let's take fraud detection, which has two labels for each transaction: fraud and non-fraud. In a real-world scenario we usually get many more examples of non-fraud data points and very few fraud data points. Let's assume the ratio of non-fraud to fraud is 80:20. So my question is: whatever classifier I build, my model will tend to predict the majority label, but I know the data itself is not well balanced. What should the approach be for such scenarios?
{ "source": [ "https://stats.stackexchange.com/questions/295364", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/168851/" ] }
295,617
What is the practical difference between Wasserstein metric and Kullback-Leibler divergence ? Wasserstein metric is also referred to as Earth mover's distance . From Wikipedia: Wasserstein (or Vaserstein) metric is a distance function defined between probability distributions on a given metric space M. and Kullback–Leibler divergence is a measure of how one probability distribution diverges from a second expected probability distribution. I've seen KL been used in machine learning implementations, but I recently came across the Wasserstein metric. Is there a good guideline on when to use one or the other? (I have insufficient reputation to create a new tag with Wasserstein or Earth mover's distance .)
When considering the advantages of the Wasserstein metric compared to KL divergence, the most obvious one is that W is a metric whereas KL divergence is not, since KL is not symmetric (i.e. $D_{KL}(P||Q) \neq D_{KL}(Q||P)$ in general) and does not satisfy the triangle inequality (i.e. $D_{KL}(R||P) \leq D_{KL}(Q||P) + D_{KL}(R||Q)$ does not hold in general). As for the practical difference, one of the most important is that unlike KL (and many other measures) Wasserstein takes into account the metric space, and what this means in less abstract terms is perhaps best explained by an example (feel free to skip to the figure, the code is just for producing it):
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt

# define samples this way as scipy.stats.wasserstein_distance can't take probability distributions directly
sampP = [1,1,1,1,1,1,2,3,4,5]
sampQ = [1,2,3,4,5,5,5,5,5,5]
# and for scipy.stats.entropy (gives KL divergence here) we want distributions
P = np.unique(sampP, return_counts=True)[1] / len(sampP)
Q = np.unique(sampQ, return_counts=True)[1] / len(sampQ)
# compare to this sample / distribution:
sampQ2 = [1,2,2,2,2,2,2,3,4,5]
Q2 = np.unique(sampQ2, return_counts=True)[1] / len(sampQ2)

fig = plt.figure(figsize=(10,7))
fig.subplots_adjust(wspace=0.5)
plt.subplot(2,2,1)
plt.bar(np.arange(len(P)), P, color='r')
plt.xticks(np.arange(len(P)), np.arange(1,6), fontsize=0)
plt.subplot(2,2,3)
plt.bar(np.arange(len(Q)), Q, color='b')
plt.xticks(np.arange(len(Q)), np.arange(1,6))
plt.title("Wasserstein distance {:.4}\nKL divergence {:.4}".format(
    scipy.stats.wasserstein_distance(sampP, sampQ), scipy.stats.entropy(P, Q)), fontsize=10)
plt.subplot(2,2,2)
plt.bar(np.arange(len(P)), P, color='r')
plt.xticks(np.arange(len(P)), np.arange(1,6), fontsize=0)
plt.subplot(2,2,4)
plt.bar(np.arange(len(Q2)), Q2, color='b')
plt.xticks(np.arange(len(Q2)), np.arange(1,6))
plt.title("Wasserstein distance {:.4}\nKL divergence {:.4}".format(
    scipy.stats.wasserstein_distance(sampP, sampQ2), scipy.stats.entropy(P, Q2)), fontsize=10)
plt.show()
Here the measures between the red and blue distributions are the same for KL divergence, whereas the Wasserstein distance measures the work required to transport the probability mass from the red state to the blue state using the x-axis as a "road". This measure is obviously larger the further away the probability mass is (hence the alias earth mover's distance). So which one you want to use depends on your application area and what you want to measure. As a note, instead of KL divergence there are also other options, like the Jensen-Shannon distance, which are proper metrics.
{ "source": [ "https://stats.stackexchange.com/questions/295617", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/165105/" ] }
296,679
When people talk about neural networks, what do they mean when they say "kernel size"? Kernels are similarity functions, but what does that say about kernel size?
Deep neural networks, more concretely convolutional neural networks (CNNs), are basically a stack of layers which are defined by the action of a number of filters on the input. Those filters are usually called kernels. For example, the kernels in a convolutional layer are the convolutional filters (strictly speaking no convolution is performed, but a cross-correlation). The kernel size here refers to the width x height of the filter mask. The max pooling layer, for example, returns the pixel with maximum value from a set of pixels within a mask (kernel). That kernel is swept across the input, subsampling it. So this has nothing to do with the concept of kernels in support vector machines or regularization networks. You can think of these kernels as feature extractors.
{ "source": [ "https://stats.stackexchange.com/questions/296679", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/171379/" ] }
297,380
There is a fundamental problem with deep learning and neural networks in general: there are infinitely many solutions that fit the training data. We don't have a precise mathematical equation that is satisfied by only a single one of them and that we can say generalizes best. Simply speaking, we don't know which one generalizes best. Optimizing the weights is not a convex problem, so we never know whether we end up at a global or a local minimum. So why not just dump neural networks and instead search for a better ML model? Something that we understand, and something that is consistent with a set of mathematical equations? Linear models and SVMs do not have these mathematical drawbacks and are fully consistent with a set of mathematical equations. Why not just think along the same lines (it need not be linear, though) and come up with a new ML model better than linear models, SVMs, neural networks and deep learning?
Not being able to know which solution generalizes best is an issue, but it shouldn't deter us from otherwise using a good solution. Humans themselves often do not know what generalizes best (consider, for example, competing unifying theories of physics), but that doesn't cause us too many problems. It has been shown that it is extremely rare for training to fail because of local minima: most of the local minima in a deep neural network are close in value to the global minimum, so this is not an issue. source But the broader answer is that you can talk all day about nonconvexity and model selection, and people will still use neural networks simply because they work better than anything else (at least on things like image classification). Of course there are also people arguing that we shouldn't get too focused on CNNs the way the community was focused on SVMs a few decades ago, and should instead keep looking for the next big thing. In particular, I think I remember Hinton regretting the effectiveness of CNNs as something which might hinder research. related post
{ "source": [ "https://stats.stackexchange.com/questions/297380", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/80699/" ] }
297,504
I'm interested in learning how to develop a geographic approximation of some kind of epicenter based on the data from the John Snow cholera outbreak. What statistical modeling could be used to solve such a problem without prior knowledge of where the wells are located? As a general problem, you would have available the time, the locations of known points, and the walking path of the observer. The method I'm looking for would use these three things to estimate the epicenter of the "outbreak".
Not to give a complete or authoritative answer, but just to stimulate ideas, I will report on a quick analysis I made for a lab exercise in a spatial stats course I was teaching ten years ago. The purpose was to see what effect an accurate accounting of likely travel pathways (on foot), compared to using Euclidean distances, would have on a relatively simple exploratory method: a kernel density estimate. Where would the peak (or peaks) of the density be relative to the pump whose handle Snow removed? Using a fairly high-resolution raster representation (2946 rows by 3160 columns) of Snow's map (properly georeferenced), I digitized each of the hundreds of little black coffins shown on the map (finding 558 of them at 309 addresses), assigning each to the edge of the street corresponding to its address, and summarizing by address into a count at each location. After some image processing to identify the streets and alleyways, I conducted a simple Gaussian diffusion limited to those areas (using repeated focal means in a GIS). This is the KDE. The result speaks for itself--it scarcely even needs a legend to explain it. (The map shows many other pumps, but they all lie outside this view, which focuses on the areas of highest density.)
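For readers who want to try the plain Euclidean version of this idea (my own sketch, assuming SciPy, on hypothetical case coordinates; it ignores the street network emphasised above): estimate the epicenter as the mode of a kernel density estimate over the case locations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
cases = rng.normal(loc=[2.0, 3.0], scale=0.5, size=(300, 2))   # hypothetical case coordinates

kde = gaussian_kde(cases.T)
gx, gy = np.mgrid[0:5:200j, 0:6:200j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

peak = np.unravel_index(density.argmax(), density.shape)
print(gx[peak], gy[peak])   # estimated epicenter, close to (2, 3) for these synthetic points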
{ "source": [ "https://stats.stackexchange.com/questions/297504", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/60091/" ] }
298,485
Lots of emphasis is placed on relying on and reporting effect sizes rather than p-values in applied research (e.g. quotes further below). But is it not the case that an effect size just like a p-value is a random variable and as such can vary from sample to sample when the same experiment is repeated? In other words, I'm asking what statistical features (e.g., effect size is less variable from sample to sample than p-value) make effect sizes better evidence-measuring indices than p-values? I should, however, mention an important fact that separates a p-value from an effect size. That is, an effect size is something to be estimated because it has a population parameter but a p-value is nothing to be estimated because it doesn't have any population parameter. To me, effect size is simply a metric that in certain areas of research (e.g., human research) helps transforming empirical findings that come from various researcher-developed measurement tools into a common metric (fair to say using this metric human research can better fit the quant research club). Maybe if we take a simple proportion as an effect size, the following (in R) is what shows the supremacy of effect sizes over p-values? (p-value changes but effect size doesn't) binom.test(55, 100, .5) ## p-value = 0.3682 ## proportion of success 55% binom.test(550, 1000, .5) ## p-value = 0.001731 ## proportion of success 55% Note that most effect sizes are linearly related to a test statistic. Thus, it is an easy step to do null-hypothesis testing using effect sizes. For example, t statistic resulted from a pre-post design can easily be converted to a corresponding Cohen's d effect size. As such, distribution of Cohen's d is simply the scale-location version of a t distribution. The quotes: Because p-values are confounded indices, in theory 100 studies with varying sample sizes and 100 different effect sizes could each have the same single p-value, and 100 studies with the same single effect size could each have 100 different values for p-value. or p-value is a random variable that varies from sample to sample. . . . Consequently, it is not appropriate to compare the p-values from two distinct experiments, or from tests on two variables measured in the same experiment, and declare that one is more significant than the other? Citations: Thompson, B. (2006). Foundations of behavioral statistics: An insight-based approach. New York, NY: Guilford Press. Good, P. I., & Hardin, J. W. (2003). Common errors in statistics (and how to avoid them). New York: Wiley.
The advice to provide effect sizes rather than P-values is based on a false dichotomy and is silly. Why not present both? Scientific conclusions should be based on a rational assessment of available evidence and theory. P-values and observed effect sizes alone or together are not enough. Neither of the quoted passages that you supply is helpful. Of course P-values vary from experiment to experiment, the strength of evidence in the data varies from experiment to experiment. The P-value is just a numerical extraction of that evidence by way of the statistical model. Given the nature of the P-value, it is very rarely relevant to analytical purposes to compare one P-value with another, so perhaps that is what the quote author is trying to convey. If you find yourself wanting to compare P-values then you probably should have performed a significance test on a different arrangement of the data in order to sensibly answer the question of interest. See these questions: p-values for p-values? and If one group's mean differs from zero but the other does not, can we conclude that the groups are different? So, the answer to your question is complex. I do not find dichotomous responses to data based on either P-values or effect sizes to be useful, so are effect sizes superior to P-values? Yes, no, sometimes, maybe, and it depends on your purpose.
{ "source": [ "https://stats.stackexchange.com/questions/298485", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/140365/" ] }
298,717
Sorry for the confusing title, I think this is a general statistics question, but I'm working in R. I have a combined dataset of two samples from different countries (n=240 and n=1,010), and when I run a linear regression between the same three variables in each dataset, both datasets produce a significant result, with almost identical coefficients. However, when I merge the datasets and run the same regression on the combined dataset, it is no longer significant. Can anyone explain this? In case it matters, the regression has the form lm(a~b*c) .
Without seeing your data, this is difficult to answer definitively. One possibility is that your datasets span different ranges of the independent variable. It is well-known that combining data across different groups can sometimes reverse correlations seen in each group individually. This effect is known as Simpson's Paradox .
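A minimal simulation of the effect (my own sketch, assuming NumPy and SciPy): within each group the slope is clearly positive, but pooling groups that differ in their x and y levels can flatten or even reverse the estimated relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def make_group(x_center, y_center, n):
    x = x_center + rng.normal(size=n)
    y = y_center + 1.0 * (x - x_center) + rng.normal(size=n)   # slope +1 within the group
    return x, y

x1, y1 = make_group(0.0, 5.0, 240)     # group sizes echoing the question
x2, y2 = make_group(5.0, 0.0, 1010)    # higher x level but lower y level

for name, x, y in [("group 1", x1, y1), ("group 2", x2, y2),
                   ("pooled ", np.r_[x1, x2], np.r_[y1, y2])]:
    res = stats.linregress(x, y)
    print(name, round(res.slope, 2), f"p = {res.pvalue:.2g}")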
{ "source": [ "https://stats.stackexchange.com/questions/298717", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174128/" ] }
298,917
This initially arose in connection with some work we are doing on a model to classify natural text, but I've simplified it... perhaps too much. You have a blue car (by some objective scientific measure, it is blue). You show it to 1000 people. 900 say it is blue; 100 do not. You give this information to someone who cannot see the car. All they know is that 900 people said it was blue, and 100 did not. You know nothing more about these people (the 1000). Based on this, you ask the person, "What is the probability that the car is blue?" This has caused a huge divergence of opinion amongst those I have asked! What is the right answer, if there is one?
TL;DR: Unless you assume people are unreasonably bad at judging car color, or that blue cars are unreasonably rare, the large number of people in your example means the probability that the car is blue is basically 100%. Matthew Drury already gave the right answer but I'd just like to add to that with some numerical examples, because you chose your numbers such that you actually get pretty similar answers for a wide range of different parameter settings. For example, let's assume, as you said in one of your comments, that the probability that people judge the color of a car correctly is 0.9. That is: $$p(\text{say it's blue}|\text{car is blue})=0.9=1-p(\text{say it isn't blue}|\text{car is blue})$$ and also $$p(\text{say it isn't blue}|\text{car isn't blue})=0.9=1-p(\text{say it is blue}|\text{car isn't blue})$$ Having defined that, the remaining thing we have to decide is: what is the prior probability that the car is blue? Let's pick a very low probability just to see what happens, and say that $p(\text{car is blue})=0.001$, i.e. only 0.1% of all cars are blue. Then the posterior probability that the car is blue can be calculated as: \begin{align*} &p(\text{car is blue}|\text{answers})\\ &=\frac{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})}{p(\text{answers}|\text{car is blue})\,p(\text{car is blue})+p(\text{answers}|\text{car isn't blue})\,p(\text{car isn't blue})}\\ &=\frac{0.9^{900}\times 0.1^{100}\times0.001}{0.9^{900}\times 0.1^{100}\times0.001+0.1^{900}\times0.9^{100}\times0.999} \end{align*} If you look at the denominator, it's pretty clear that the second term in that sum will be negligible, since the relative size of the terms in the sum is dominated by the ratio of $0.9^{900}$ to $0.1^{900}$, which is on the order of $10^{58}$. And indeed, if you do this calculation on a computer (taking care to avoid numerical underflow issues) you get an answer that is equal to 1 (within machine precision). The reason the prior probabilities don't really matter much here is because you have so much evidence for one possibility (the car is blue) versus another. This can be quantified by the likelihood ratio , which we can calculate as: $$ \frac{p(\text{answers}|\text{car is blue})}{p(\text{answers}|\text{car isn't blue})}=\frac{0.9^{900}\times 0.1^{100}}{0.1^{900}\times 0.9^{100}}\approx 10^{763} $$ So before even considering the prior probabilities, the evidence suggests that one option is already astronomically more likely than the other, and for the prior to make any difference, blue cars would have to be unreasonably, stupidly rare (so rare that we would expect to find 0 blue cars on earth). So what if we change how accurate people are in their descriptions of car color? Of course, we could push this to the extreme and say they get it right only 50% of the time, which is no better than flipping a coin. In this case, the posterior probability that the car is blue is simply equal to the prior probability, because the people's answers told us nothing. But surely people do at least a little better than that, and even if we say that people are accurate only 51% of the time, the likelihood ratio still works out such that it is roughly $10^{13}$ times more likely for the car to be blue. This is all a result of the rather large numbers you chose in your example. If it had been 9/10 people saying the car was blue, it would have been a very different story, even though the same ratio of people were in one camp vs. the other. 
Because statistical evidence doesn't depend on this ratio, but rather on the numerical difference between the opposing factions. In fact, in the likelihood ratio (which quantifies the evidence), the 100 people who say the car isn't blue exactly cancel 100 of the 900 people who say it is blue, so it's the same as if you had 800 people all agreeing it was blue. And that's obviously pretty clear evidence. (Edit: As Silverfish pointed out , the assumptions I made here actually implied that whenever a person describes a non-blue car incorrectly, they will default to saying it's blue. This isn't realistic of course, because they could really say any color, and will say blue only some of the time. This makes no difference to the conclusions though, since the less likely people are to mistake a non-blue car for a blue one, the stronger the evidence that it is blue when they say it is. So if anything, the numbers given above are actually only a lower bound on the pro-blue evidence.)
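A small numerical companion to the calculation above (my own sketch, assuming NumPy), done on the log scale to sidestep the underflow issue mentioned:
import numpy as np

log_lik_blue     = 900 * np.log(0.9) + 100 * np.log(0.1)
log_lik_not_blue = 900 * np.log(0.1) + 100 * np.log(0.9)

log10_LR = (log_lik_blue - log_lik_not_blue) / np.log(10)
print(log10_LR)                                # about 763: a likelihood ratio near 10^763

log_post_blue     = log_lik_blue + np.log(0.001)      # prior P(blue) = 0.001
log_post_not_blue = log_lik_not_blue + np.log(0.999)
posterior_blue = 1.0 / (1.0 + np.exp(log_post_not_blue - log_post_blue))
print(posterior_blue)                          # 1.0 to machine precision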
{ "source": [ "https://stats.stackexchange.com/questions/298917", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174276/" ] }
299,722
The title is the question. I am told that ratios and inverses of random variables are often problematic, in the sense that their expectations often do not exist. Is there a simple, general explanation of why?
I would like to offer a very simple, intuitive explanation. It amounts to looking at a picture: the rest of this post explains the picture and draws conclusions from it. Here is what it comes down to: when there is a "probability mass" concentrated near $X=0$ , there will be too much probability near $1/X\approx \pm \infty$ , causing its expectation to be undefined. Instead of being fully general, let's focus on random variables $X$ that have continuous densities $f_X$ in a neighborhood of $0$ . Suppose $f_X(0)\ne 0$ . Visually, these conditions mean the graph of $f$ lies above the axis around $0$ : The continuity of $f_X$ around $0$ implies that for any positive height $p$ less than $f_X(0)$ and sufficiently small $\epsilon$ , we may carve out a rectangle beneath this graph which is centered around $x=0$ , has width $2\epsilon$ , and height $p$ , as shown. This corresponds to expressing the original distribution as a mixture of a uniform distribution (with weight $p\times 2\epsilon=2p\epsilon$ ) and whatever remains. In other words, we may think of $X$ as arising in the following way: With probability $2p\epsilon$ , draw a value from a Uniform $(-\epsilon,\epsilon)$ distribution. Otherwise, draw a value from the distribution whose density is proportional to $f_X - p I_{(-\epsilon,\epsilon)}$ . (This is the function drawn in yellow at the right.) ( $I$ is the indicator function.) Step $(1)$ shows that for any $0 \lt u \lt \epsilon$ , the chance that $X$ is between $0$ and $u$ exceeds $p u / 2$ . Equivalently, this is the chance that $1/X$ exceeds $1/u$ . To put it another way: writing $S$ for the survivor function of $1/X$ $$S(x) = \Pr(1/X \gt x),$$ the picture shows $S(x) \gt p / (2x)$ for all $x \gt 1/\epsilon$ . We're done now, because this fact about $S$ implies the expectation is undefined. Compare the integrals involved in computing the expectation of the positive part of $1/X$ , $(1/X)_{+} = \max(0, 1/X)$ : $$E[(1/X)_{+}] = \int_0^\infty S(x)dx \gt \int_{1/\epsilon}^x S(x)dx \gt \int_{1/\epsilon}^x \frac{p}{2x}dx = \frac{p}{2} \log(x\epsilon).$$ (This is a purely geometric argument: every integral represents an identifiable two-dimensional region and all the inequalities arise from strict inclusions within those regions. Indeed, we don't even need to know the final integral is a logarithm: there are simple geometric arguments showing this integral diverges.) Since the right side diverges as $x\to\infty$ , $E[(1/X)_{+}]$ diverges, too. The situation with the negative part of $1/X$ is the same (because the rectangle is centered around $0$ ), and the same argument shows the expectation of the negative part of $1/X$ diverges. Consequently the expectation of $1/X$ itself is undefined. Incidentally, the same argument shows that when $X$ has probability concentrated on one side of $0$ , such as any Exponential or Gamma distribution (with shape parameter less than $1$ ), then still the positive expectation diverges, but the negative expectation is zero. In this case the expectation is defined, but is infinite.
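A small simulation that illustrates, rather than proves, the argument (my own sketch, assuming NumPy): with $X$ uniform on $(-1,1)$, whose density is positive at 0, the running mean of $1/X$ never settles down, because $E[1/X]$ is undefined.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1_000_000)
running_mean = np.cumsum(1.0 / x) / np.arange(1, x.size + 1)

# Occasional draws extremely close to 0 produce huge jumps, so the running
# mean keeps lurching around instead of converging:
print(running_mean[[999, 9_999, 99_999, 999_999]])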
{ "source": [ "https://stats.stackexchange.com/questions/299722", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11887/" ] }
299,915
When used as an activation function in deep neural networks, the ReLU function outperforms other non-linear functions like tanh or sigmoid. In my understanding the whole purpose of an activation function is to let the weighted inputs to a neuron interact non-linearly. For example, when using $sin(z)$ as the activation, the output of a two-input neuron would be: $$ sin(w_0+w_1*x_1+w_2*x_2) $$ which would approximate the function $$ (w_0+w_1*x_1+w_2*x_2) - {(w_0+w_1*x_1+w_2*x_2)^3 \over 6} + {(w_0+w_1*x_1+w_2*x_2)^5 \over 120} $$ and contain all kinds of combinations of different powers of the features $x_1$ and $x_2$. Although the ReLU is also technically a non-linear function, I don't see how it can produce non-linear terms like the $sin(), tanh()$ and other activations do. Edit: Although my question is similar to this question , I'd like to know how even a cascade of ReLUs is able to approximate such non-linear terms.
Suppose you want to approximate $f(x)=x^2$ using ReLUs $g(ax+b)$. One approximation might look like $h_1(x)=g(x)+g(-x)=|x|$. But this isn't a very good approximation. You can add more terms with different choices of $a$ and $b$ to improve the approximation. One such improvement, in the sense that the error is "small" across a larger interval, is $h_2(x)=g(x)+g(-x)+g(2(x-1))+g(-2(x+1))$, and it gets better. You can continue this procedure of adding terms, reaching as much complexity as you like. Notice that, in the first case, the approximation is best for $x\in[-1,1]$, while in the second case, the approximation is best for $x\in[-2,2]$.

x <- seq(-3, 3, length.out = 1000)
y_true <- x^2

relu <- function(x, a = 1, b = 0) sapply(x, function(t) max(a * t + b, 0))

h1 <- function(x) relu(x) + relu(-x)
png("fig1.png")
plot(x, h1(x), type = "l")
lines(x, y_true, col = "red")
dev.off()

h2 <- function(x) h1(x) + relu(2 * (x - 1)) + relu(-2 * (x + 1))
png("fig2.png")
plot(x, h2(x), type = "l")
lines(x, y_true, col = "red")
dev.off()

l2 <- function(y_true, y_hat) 0.5 * (y_true - y_hat)^2
png("fig3.png")
plot(x, l2(y_true, h1(x)), type = "l")
lines(x, l2(y_true, h2(x)), col = "red")
dev.off()
{ "source": [ "https://stats.stackexchange.com/questions/299915", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/69501/" ] }
301,532
The objective function of Principal Component Analysis (PCA) is minimizing the reconstruction error in L2 norm (see section 2.12 here . Another view is trying to maximize the variance of the projection. We also have an excellent post here: What is the objective function of PCA? ). My question is: is PCA optimization convex? (I found some discussions here , but I wish someone could provide a nice proof here on CV.)
No, the usual formulations of PCA are not convex problems. But they can be transformed into a convex optimization problem. The insight and the fun of this is following and visualizing the sequence of transformations rather than just getting the answer: it lies in the journey, not the destination. The chief steps in this journey are Obtain a simple expression for the objective function. Enlarge its domain, which is not convex, into one which is. Modify the objective, which is not convex, into one which is, in a way that obviously does not change the points at which it attains its optimal values. If you keep close watch, you can see the SVD and Lagrange multipliers lurking--but they're just a sideshow, there for scenic interest, and I won't comment on them further. The standard variance-maximizing formulation of PCA (or at least its key step) is $$\text{Maximize }f(x)=\ x^\prime \mathbb{A} x\ \text{ subject to }\ x^\prime x=1\tag{*}$$ where the $n\times n$ matrix $\mathbb A$ is a symmetric, positive-semidefinite matrix constructed from the data (usually its sum of squares and products matrix, its covariance matrix, or its correlation matrix). (Equivalently, we may try to maximize the unconstrained objective $x^\prime \mathbb{A} x / x^\prime x$ . Not only is this a nastier expression--it's no longer a quadratic function--but graphing special cases will quickly show it is not a convex function, either. Usually one observes this function is invariant under rescalings $x\to \lambda x$ and then reduces it to the constrained formulation $(*)$ .) Any optimization problem can be abstractly formulated as Find at least one $x\in\mathcal{X}$ that makes the function $f:\mathcal{X}\to\mathbb{R}$ as large as possible. Recall that an optimization problem is convex when it enjoys two separate properties: The domain $\mathcal{X}\subset\mathbb{R}^n$ is convex. This can be formulated in many ways. One is that whenever $x\in\mathcal{X}$ and $y\in\mathcal{X}$ and $0 \le \lambda \le 1$ , $\lambda x + (1-\lambda)y\in\mathcal{X}$ also. Geometrically: whenever two endpoints of a line segment lie in $\mathcal X$ , the entire segment lies in $\mathcal X$ . The function $f$ is convex. This also can be formulated in many ways. One is that whenever $x\in\mathcal{X}$ and $y\in\mathcal{X}$ and $0 \le \lambda \le 1$ , $$f(\lambda x + (1-\lambda)y) \ge \lambda f(x) + (1-\lambda) f(y).$$ (We needed $\mathcal X$ to be convex in order for this condition to make any sense.) Geometrically: whenever $\bar{xy}$ is any line segment in $\mathcal X$ , the graph of $f$ (as restricted to this segment) lies above or on the segment connecting $(x,f(x))$ and $(y,f(y))$ in $\mathbb{R}^{n+1}$ . The archetype of a convex function is locally everywhere parabolic with non-positive leading coefficient: on any line segment it can be expressed in the form $y\to a y^2 + b y + c$ with $a \le 0.$ A difficulty with $(*)$ is that $\mathcal X$ is the unit sphere $S^{n-1}\subset\mathbb{R}^n$ , which is decidedly not convex. However, we can modify this problem by including smaller vectors. That is because when we scale $x$ by a factor $\lambda$ , $f$ is multiplied by $\lambda^2$ . When $0 \lt x^\prime x \lt 1$ , we can scale $x$ up to unit length by multiplying it by $\lambda=1/\sqrt{x^\prime x} \gt 1$ , thereby increasing $f$ but staying within the unit ball $D^n = \{x\in\mathbb{R}^n\mid x^\prime x \le 1\}$ . 
Let us therefore reformulate $(*)$ as $$\text{Maximize }f(x)=\ x^\prime \mathbb{A} x\ \text{ subject to }\ x^\prime x\le1\tag{**}$$ Its domain is $\mathcal{X}=D^n$ which clearly is convex, so we're halfway there. It remains to consider the convexity of the graph of $f$ . A good way to think about the problem $(**)$ --even if you don't intend to carry out the corresponding calculations--is in terms of the Spectral Theorem. It says that by means of an orthogonal transformation $\mathbb P$ , you can find at least one basis of $\mathbb{R}^n$ in which $\mathbb A$ is diagonal: that is, $$\mathbb {A = P^\prime \Sigma P}$$ where all the off-diagonal entries of $\Sigma$ are zero. Such a choice of $\mathbb{P}$ can be conceived of as changing nothing at all about $\mathbb A$ , but merely changing how you describe it : when you rotate your point of view, the axes of the level hypersurfaces of the function $x\to x^\prime \mathbb{A} x$ (which were always ellipsoids) align with the coordinate axes. Since $\mathbb A$ is positive-semidefinite, all the diagonal entries of $\Sigma$ must be non-negative. We may further permute the axes (which is just another orthogonal transformation, and therefore can be absorbed into $\mathbb P$ ) to assure that $$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0.$$ If we let $x=\mathbb{P}^\prime y$ be the new coordinates $x$ (entailing $y=\mathbb{P}x$ ), the function $f$ is $$f(y) = y^\prime \mathbb{A} y = x^\prime \mathbb{P^\prime A P} x = x^\prime \Sigma x = \sigma_1 x_1^2 + \sigma_2 x_2^2 + \cdots + \sigma_n x_n^2.$$ This function is decidedly not convex! Its graph looks like part of a hyperparaboloid: at every point in the interior of $\mathcal X$ , the fact that all the $\sigma_i$ are nonnegative makes it curl upward rather than downward . However, we can turn $(**)$ into a convex problem with one very useful technique. Knowing that the maximum will occur where $x^\prime x = 1$ , let's subtract the constant $\sigma_1$ from $f$ , at least for points on the boundary of $\mathcal{X}$ . That will not change the locations of any points on the boundary at which $f$ is optimized, because it lowers all the values of $f$ on the boundary by the same value $\sigma_1$ . This suggests examining the function $$g(y) = f(y) - \sigma_1 y^\prime y.$$ This indeed subtracts the constant $\sigma_1$ from $f$ at boundary points, and subtracts smaller values at interior points. This will assure that $g$ , compared to $f$ , has no new global maxima on the interior of $\mathcal X$ . Let's examine what has happened with this sleight-of-hand of replacing $-\sigma_1$ by $-\sigma_1 y^\prime y$ . Because $\mathbb P$ is orthogonal, $y^\prime y = x^\prime x$ . (That's practically the definition of an orthogonal transformation.) Therefore, in terms of the $x$ coordinates, $g$ can be written $$g(y) = \sigma_1 x_1 ^2 + \cdots + \sigma_n x_n^2 - \sigma_1(x_1^2 + \cdots + x_n^2) = (\sigma_2-\sigma_1)x_2^2 + \cdots + (\sigma_n - \sigma_1)x_n^2.$$ Because $\sigma_1 \ge \sigma_i$ for all $i$ , each of the coefficients is zero or negative. Consequently, (a) $g$ is convex and (b) $g$ is optimized when $x_2=x_3=\cdots=x_n=0$ . ( $x^\prime x=1$ then implies $x_1=\pm 1$ and the optimum is attained when $y = \mathbb{P} (\pm 1,0,\ldots, 0)^\prime$ , which is--up to sign--the first column of $\mathbb P$ .) Let's recapitulate the logic. 
Because $g$ is optimized on the boundary $\partial D^n=S^{n-1}$ where $y^\prime y = 1$ , because $f$ differs from $g$ merely by the constant $\sigma_1$ on that boundary, and because the values of $g$ are even closer to the values of $f$ on the interior of $D^n$ , the maxima of $f$ must coincide with the maxima of $g$ .
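For readers who would like to check the construction numerically, here is a small sketch (it uses a randomly generated positive-semidefinite $\mathbb A$; note that numpy's eigh returns the decomposition as $A = P\,\mathrm{diag}(\sigma)\,P^\prime$, the transpose of the convention written above, which changes nothing in the argument):

import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T                          # symmetric positive-semidefinite, like a covariance matrix

sigma, P = np.linalg.eigh(A)         # eigenvalues in ascending order
sigma1, v1 = sigma[-1], P[:, -1]     # largest eigenvalue and its eigenvector

# f(x) = x'Ax on the unit sphere is maximized by the top eigenvector:
print(v1 @ A @ v1, sigma1)           # these agree

# g(x) = x'Ax - sigma1 * x'x has Hessian 2(A - sigma1 I), with no positive eigenvalues,
# so maximizing g over the (convex) unit ball is a convex optimization problem:
print(np.all(np.linalg.eigvalsh(2 * (A - sigma1 * np.eye(n))) <= 1e-10))   # True

# Random unit vectors never beat the top eigenvector:
X = rng.standard_normal((10000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(np.max(np.einsum('ij,jk,ik->i', X, A, X)) <= sigma1 + 1e-9)          # True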
{ "source": [ "https://stats.stackexchange.com/questions/301532", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/113777/" ] }
302,247
If two random variables $X$ and $Y$ are uncorrelated, can we also know that $X^2$ and $Y$ are uncorrelated? My hypothesis is yes. $X, Y$ uncorrelated means $E[XY]=E[X]E[Y]$, or $$ E[XY]=\int xy f_X(x)f_Y(y)dxdy=\int xf_X(x)dx\int yf_Y(y)dy=E[X]E[Y] $$ Does that also mean the following? $$ E[X^2Y]=\int x^2y f_X(x)f_Y(y)dxdy=\int x^2f_X(x)dx\int yf_Y(y)dy=E[X^2]E[Y] $$
No. A counterexample: Let $X$ be uniformly distributed on $[-1, 1]$ and $Y = X^2$. Then $E[X]=0$ and also $E[XY]=E[X^3]=0$ ($X^3$ is an odd function), so $X,Y$ are uncorrelated. But $E[X^2Y] = E[X^4] = E[(X^2)^2] > E[X^2]^2 = E[X^2]E[Y]$. The last inequality follows from Jensen's inequality. It also follows from the fact that $E[(X^2)^2] - E[X^2]^2 = \operatorname{Var}(X^2) > 0$ since $X^2$ is not constant. The problem with your reasoning is that the joint density factorizes as $f_X(x)f_Y(y)$ only when $X$ and $Y$ are independent, which uncorrelatedness does not guarantee, so the integral manipulations in your question are invalid.
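A quick simulation check of this counterexample (a sketch, using sample correlations instead of the exact integrals):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1_000_000)
y = x**2

print(np.corrcoef(x, y)[0, 1])       # essentially 0: X and Y are uncorrelated
print(np.corrcoef(x**2, y)[0, 1])    # exactly 1: X^2 and Y are perfectly correlated here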
{ "source": [ "https://stats.stackexchange.com/questions/302247", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/94859/" ] }
302,900
http://www.deeplearningbook.org/contents/ml.html Page 116 explains Bayes error as below: The ideal model is an oracle that simply knows the true probability distribution that generates the data. Even such a model will still incur some error on many problems, because there may still be some noise in the distribution. In the case of supervised learning, the mapping from x to y may be inherently stochastic, or y may be a deterministic function that involves other variables besides those included in x. The error incurred by an oracle making predictions from the true distribution p(x, y) is called the Bayes error. Questions: Please explain Bayes error intuitively. How is it different from irreducible error? Can I say total error = Bias + Variance + Bayes error? What is the meaning of "y may be inherently stochastic"?
Bayes error is the lowest possible prediction error that can be achieved and is the same as irreducible error. Even if one knew exactly what process generates the data, errors would still be made if the process is random. This is also what is meant by "$y$ is inherently stochastic". For example, when flipping a fair coin, we know exactly what process generates the outcome (a binomial distribution). However, if we were to predict the outcome of a series of coin flips, we would still make errors, because the process is inherently random (i.e. stochastic). To answer your other question, you are correct in stating that the total error is the sum of (squared) bias, variance and irreducible error. See also this article for an easy to understand explanation of these three concepts.
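A tiny simulation can make the "oracle still errs" point concrete (a sketch with an assumed true function $\sin(x)$ and noise standard deviation 0.5):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(0, 0.5, n)     # truth: E[y|x] = sin(x), noise sd = 0.5

# The oracle that knows the true conditional mean still makes errors:
print(np.mean((y - np.sin(x))**2))        # about 0.25 = 0.5^2, the irreducible (Bayes) error

# A misspecified model (a straight-line fit) adds bias on top of that:
coef = np.polyfit(x, y, 1)
print(np.mean((y - np.polyval(coef, x))**2))   # noticeably larger than 0.25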
{ "source": [ "https://stats.stackexchange.com/questions/302900", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54214/" ] }
303,244
A perfect estimator would be accurate (unbiased) and precise (good estimation even with small samples). I never really thought about the question of precision, only about accuracy (as I did in Estimator of $\frac{\sigma^2}{\mu (1 - \mu)}$ when sampling without replacement for example). Are there cases where the unbiased estimator is less precise (and therefore possibly "less good") than a biased estimator? If yes, I would love a simple example showing mathematically that the less accurate estimator is so much more precise that it could be considered better.
One example is estimates from ordinary least squares regression when there is collinearity. They are unbiased but have huge variance. Ridge regression on the same problem yields estimates that are biased but have much lower variance. E.g.

install.packages("ridge")
library(ridge)
set.seed(831)
data(GenCont)                      # example data shipped with the ridge package
ridgemod <- linearRidge(Phenotypes ~ ., data = as.data.frame(GenCont))   # ridge fit
summary(ridgemod)
linmod <- lm(Phenotypes ~ ., data = as.data.frame(GenCont))              # OLS fit
summary(linmod)

The t values are much larger for ridge regression than for linear regression. The bias is fairly small.
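The same bias/variance trade-off is easy to reproduce with a small simulation (a sketch in Python with made-up, nearly collinear predictors rather than the GenCont data; the penalty value is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, p, lam, reps = 50, 2, 10.0, 2000
beta = np.array([1.0, 1.0])

ols_est, ridge_est = [], []
for _ in range(reps):
    x1 = rng.standard_normal(n)
    x2 = x1 + 0.05 * rng.standard_normal(n)          # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.standard_normal(n)
    ols_est.append(np.linalg.solve(X.T @ X, X.T @ y))                      # OLS
    ridge_est.append(np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y))  # ridge

ols_est, ridge_est = np.array(ols_est), np.array(ridge_est)
print("OLS   mean:", ols_est.mean(0), "sd:", ols_est.std(0))      # unbiased, huge sd
print("ridge mean:", ridge_est.mean(0), "sd:", ridge_est.std(0))  # somewhat biased, much smaller sd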
{ "source": [ "https://stats.stackexchange.com/questions/303244", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24097/" ] }
303,857
I am training a neural network using i) SGD and ii) Adam Optimizer. When using normal SGD, I get a smooth training loss vs. iteration curve as seen below (the red one). However, when I used the Adam Optimizer, the training loss curve has some spikes. What's the explanation of these spikes? Model Details: 14 input nodes -> 2 hidden layers (100 -> 40 units) -> 4 output units I am using default parameters for Adam beta_1 = 0.9 , beta_2 = 0.999 , epsilon = 1e-8 and a batch_size = 32 . i) With SGD ii) With Adam
The spikes are an unavoidable consequence of mini-batch gradient descent in Adam ( batch_size=32 ). Some mini-batches contain, by chance, unlucky data for the optimization, inducing the spikes you see in your cost function when using Adam. If you try stochastic gradient descent (the same as using batch_size=1 ) you will see that there are even more spikes in the cost function. The same doesn't happen with (full) batch gradient descent, because it uses all the training data (i.e. the batch size equals the cardinality of your training set) in each optimization epoch. Since the cost in your first graph decreases smoothly and monotonically, it seems the title ( i) With SGD ) is wrong and you are actually using (full) batch gradient descent rather than SGD. In his Deep Learning course on Coursera, Andrew Ng explains this in great detail using the image below:
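To see that the spikes come from the mini-batches themselves rather than from anything specific to Adam, here is a small sketch (plain numpy (S)GD on a made-up linear regression problem, not your RNN; the learning rate and sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n, d = 1024, 14
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def train(batch_size, lr=0.3, epochs=100):
    # Plain (S)GD on squared loss; records the full-training-set loss after every update.
    w = np.zeros(d)
    trace = []
    for _ in range(epochs):
        for _ in range(0, n, batch_size):
            b = rng.choice(n, size=batch_size, replace=False)
            w -= lr * X[b].T @ (X[b] @ w - y[b]) / batch_size
            trace.append(np.mean((X @ w - y) ** 2))
    return np.array(trace)

full = train(batch_size=n)     # full-batch: smooth, essentially monotone decrease
mini = train(batch_size=32)    # mini-batch: same overall trend, but with spikes
# After convergence the full-batch curve is flat, while the mini-batch curve keeps jittering:
print(np.std(np.diff(full[-40:])), np.std(np.diff(mini[-40:])))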
{ "source": [ "https://stats.stackexchange.com/questions/303857", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/107565/" ] }
304,538
The premise is this quote from the vignette of the R package betareg [1]: Furthermore, the model shares some properties (such as linear predictor, link function, dispersion parameter) with generalized linear models (GLMs; McCullagh and Nelder 1989), but it is not a special case of this framework (not even for fixed dispersion) This answer also makes allusion to the fact: [...] This is a type of regression model that is appropriate when the response variable is distributed as Beta. You can think of it as analogous to a generalized linear model. It's exactly what you are looking for [...] (emphasis mine) The question title says it all: why are Beta/Dirichlet regressions not considered Generalized Linear Models (are they not)? As far as I know, the Generalized Linear Model defines models built on the expectation of their dependent variables conditional on the independent ones. $f$ is the link function that maps the expectation, $g$ is a probability distribution, $Y$ the outcomes and $X$ the predictors, $\beta$ are the linear parameters and $\sigma^2$ the variance. $$f\left(\mathbb E\left(Y\mid X\right)\right) \sim g(\beta X, I\sigma^2)$$ Different GLMs impose (or relax) the relationship between the mean and the variance, but $g$ must be a probability distribution in the exponential family, a desirable property which should improve the robustness of the estimation if I recall correctly. The Beta and Dirichlet distributions are part of the exponential family, though, so I'm out of ideas. [1] Cribari-Neto, F., & Zeileis, A. (2009). Beta regression in R.
Check the original reference: Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815. As the authors note, the parameters of the re-parametrized beta distribution are correlated, so: Note that the parameters $\beta$ and $\phi$ are not orthogonal, in contrast to what is verified in the class of generalized linear regression models (McCullagh and Nelder, 1989). So while the model looks like a GLM and quacks like a GLM, it does not perfectly fit the framework.
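To make the re-parametrization concrete, here is a small sketch (in Python with scipy rather than the betareg R code; the particular mu and phi values are arbitrary). Beta regression works with a mean $\mu$ and precision $\phi$, which map back to the usual shape parameters as $a=\mu\phi$ and $b=(1-\mu)\phi$:

import numpy as np
from scipy import stats

def beta_mu_phi(mu, phi):
    # Beta distribution parametrized by mean mu in (0,1) and precision phi > 0
    return stats.beta(mu * phi, (1 - mu) * phi)

d = beta_mu_phi(mu=0.3, phi=20)
print(d.mean())                            # 0.3
print(d.var())                             # mu*(1-mu)/(1+phi) = 0.3*0.7/21 = 0.01
print(d.pdf(np.linspace(0.01, 0.99, 5)))   # density values on (0,1)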
{ "source": [ "https://stats.stackexchange.com/questions/304538", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/60613/" ] }
304,977
I understand the reasoning behind splitting the data into a Test set and a Validation set. I also understand that the size of the split will depend on the situation but will generally vary from 50/50 to 90/10. I built an RNN to correct spelling and start with a data set of ~5m sentences. I shave off 500k sentences and then train with the remaining ~4.5m sentences. When the training is done I take my validation set and compute the accuracy. The interesting thing is that after evaluating only 4% of my validation set I have an accuracy of 69.4%, and this percentage doesn't change by more than 0.1% in either direction. Eventually I just cut the validation short because the number is stuck at 69.5%. So why slice off 10% for validation when I could probably get away with 1%? Does it matter?
Larger validation sets give more accurate estimates of out-of-sample performance. But as you've noticed, at some point that estimate might be as accurate as you need it to be, and you can make some rough predictions as to the validation sample size you need to reach that point. For simple correct/incorrect classification accuracy, you can calculate the standard error of the estimate as $\sqrt{p(1-p)/n}$ (the standard deviation of a Bernoulli variable), where $p$ is the probability of a correct classification, and $n$ is the size of the validation set. Of course you don't know $p$, but you might have some idea of its range. E.g. let's say you expect an accuracy between 60-80%, and you want your estimates to have a standard error smaller than 0.1%: $$ \sqrt{p(1-p)/n}<0.001 $$ How large should $n$ (the size of the validation set) be? For $p=0.6$ we get: $$ n > \frac{0.6-0.6^2}{0.001^2}=240,000 $$ For $p=0.8$ we get: $$ n > \frac{0.8-0.8^2}{0.001^2}=160,000 $$ So this tells us you could get away with using less than 5% of your 5 million data samples for validation. This percentage goes down if you expect higher performance, or especially if you are satisfied with a larger standard error of your out-of-sample performance estimate (e.g. with $p=0.7$ and for a s.e. < 1%, you need only 2100 validation samples, or less than a twentieth of a percent of your data). These calculations also showcase the point made by Tim in his answer, that the accuracy of your estimates depends on the absolute size of your validation set (i.e. on $n$), rather than its size relative to the training set. (Also I might add that I'm assuming representative sampling here. If your data are very heterogeneous you might need to use larger validation sets just to make sure that the validation data includes all the same conditions etc. as your train & test data.)
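These back-of-the-envelope numbers are easy to script (a quick sketch of the same formula):

import numpy as np

def required_n(p, target_se):
    # validation-set size needed so that the s.e. of the estimated accuracy p is below target_se
    return int(np.ceil(p * (1 - p) / target_se**2))

print(required_n(0.6, 0.001))   # 240000
print(required_n(0.8, 0.001))   # 160000
print(required_n(0.7, 0.01))    # 2100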
{ "source": [ "https://stats.stackexchange.com/questions/304977", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68563/" ] }
305,116
I am currently in a linear regression class, but I can't shake the feeling that what I am learning is no longer relevant in either modern statistics or machine learning. Why is so much time spent on doing inference on simple or multiple linear regression when so many interesting datasets these days frequently violate many of the unrealistic assumptions of linear regression? Why not instead teach inference on more flexible, modern tools like regression using support vector machines or Gaussian processes? Though more complicated than finding a hyperplane in a space, wouldn't this give students a much better background with which to tackle modern-day problems?
It is true that the assumptions of linear regression aren't realistic. However, this is true of all statistical models. "All models are wrong, but some are useful." I guess you're under the impression that there's no reason to use linear regression when you could use a more complex model. This isn't true, because in general, more complex models are more vulnerable to overfitting, and they use more computational resources, which are important if, e.g., you're trying to do statistics on an embedded processor or a web server. Simpler models are also easier to understand and interpret; by contrast, complex machine-learning models such as neural networks tend to end up as black boxes, more or less. Even if linear regression someday becomes no longer practically useful (which seems extremely unlikely in the foreseeable future), it will still be theoretically important, because more complex models tend to build on linear regression as a foundation. For example, in order to understand a regularized mixed-effects logistic regression, you need to understand plain old linear regression first. This isn't to say that more complex, newer, and shinier models aren't useful or important. Many of them are. But the simpler models are more widely applicable and hence more important, and clearly make sense to present first if you're going to present a variety of models. There are a lot of bad data analyses conducted these days by people who call themselves "data scientists" or something but don't even know the foundational stuff, like what a confidence interval really is. Don't be a statistic!
{ "source": [ "https://stats.stackexchange.com/questions/305116", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/178613/" ] }
305,713
How to construct an example of a probability distribution for which $\mathbb{E}\left(\frac{1}{X}\right)=\frac{1}{\mathbb{E}(X)}$ holds, assuming $\mathbb{P}(X\ne0)=1$ ? The inequality which follows from Jensen's inequality for a positive-valued RV $X$ is like $\mathbb{E}\left(\frac{1}{X}\right)\ge\frac{1}{\mathbb{E}(X)}$ (the reverse inequality if $X<0$ ). This is because the mapping $x\mapsto\frac{1}{x}$ is convex for $x>0$ and concave for $x<0$ . Following the equality condition in Jensen's inequality, I guess the distribution has to be degenerate for the required equality to hold. A trivial case where the equality holds is of course if $X=1$ a.e. Here is an example I found in a problem book: Consider a discrete random variable $X$ such that $\mathbb{P}(X=-1)=\frac{1}{9}, \mathbb{P}(X=\frac{1}{2})=\mathbb{P}(X=2)=\frac{4}{9}$ . It is then easily verified that $\mathbb{E}\left(\frac{1}{X}\right)=\frac{1}{\mathbb{E}(X)}=1$ . This example shows that $X$ need not be positive (or negative) a.e. for the equality in the title to hold. The distribution here is not degenerate either. How do I construct an example, possibly like the one I found in the book? Is there any motivation?
Let's construct all possible examples of random variables $X$ for which $E[X]E[1/X]=1$. Then, among them, we may follow some heuristics to obtain the simplest possible example. These heuristics consist of giving the simplest possible values to all expressions that drop out of a preliminary analysis. This turns out to be the textbook example. Preliminary analysis This requires only a little bit of analysis based on definitions. The solution is of only secondary interest: the main objective is to develop insights to help us understand the results intuitively. First observe that Jensen's Inequality (or the Cauchy-Schwarz Inequality) implies that for a positive random variable $X$, $E[X]E[1/X] \ge 1$, with equality holding if and only if $X$ is "degenerate": that is, $X$ is almost surely constant. When $X$ is a negative random variable, $-X$ is positive and the preceding result holds with the inequality sign reversed. Consequently, any example where $E[1/X]=1/E[X]$ must have positive probability of being negative and positive probability of being positive. The insight here is that any $X$ with $E[X]E[1/X]=1$ must somehow be "balancing" the inequality from its positive part against the inequality in the other direction from its negative part. This will become clearer as we go along. Consider any nonzero random variable $X$. An initial step in formulating a definition of expectation (at least when this is done in full generality using measure theory) is to decompose $X$ into its positive and negative parts, both of which are positive random variables: $$\eqalign{ Y &= \operatorname{Positive part}(X) = \max(0, X);\\ Z &= \operatorname{Negative part}(X) = -\min(0, X). }$$ Let's think of $X$ as a mixture of $Y$ with weight $p$ and $-Z$ with weight $1-p$ where $$p=\Pr(X\gt 0),\ 1-p = \Pr(X \lt 0).$$ Obviously $$0 \lt p \lt 1.$$ This will enable us to write expectations of $X$ and $1/X$ in terms of the expectations of the positive variables $Y$ and $Z$. To simplify the forthcoming algebra a little, note that uniformly rescaling $X$ by a number $\sigma$ does not change $E[X]E[1/X]$--but it does multiply $E[Y]$ and $E[Z]$ each by $\sigma$. For positive $\sigma$, this simply amounts to selecting the units of measurement of $X$. A negative $\sigma$ switches the roles of $Y$ and $Z$. Choosing the sign of $\sigma$ appropriately we may therefore suppose $$E[Z]=1\text{ and }E[Y] \ge E[Z].\tag{1}$$ Notation That's it for preliminary simplifications. To create a nice notation, let us therefore write $$\mu = E[Y];\ \nu = E[1/Y];\ \lambda=E[1/Z]$$ for the three expectations we cannot control. All three quantities are positive. Jensen's Inequality asserts $$\mu\nu \ge 1\text{ and }\lambda \ge 1.\tag{2}$$ The Law of Total Probability expresses the expectations of $X$ and $1/X$ in terms of the quantities we have named: $$E[X] = E[X\mid X\gt 0]\Pr(X \gt 0) + E[X\mid X \lt 0]\Pr(X \lt 0) = \mu p - (1-p) = (\mu + 1)p - 1$$ and, since $1/X$ has the same sign as $X$, $$E\left[\frac{1}{X}\right] = E\left[\frac{1}{X}\mid X\gt 0\right]\Pr(X \gt 0) + E\left[\frac{1}{X}\mid X \lt 0\right]\Pr(X \lt 0) = \nu p - \lambda(1-p) = (\nu + \lambda)p - \lambda.$$ Equating the product of these two expressions with $1$ provides an essential relationship among the variables: $$1 = E[X]E\left[\frac{1}{X}\right] = ((\mu +1)p - 1)((\nu + \lambda)p - \lambda).\tag{*}$$ Reformulation of the Problem Suppose the parts of $X$--$Y$ and $Z$--are any positive random variables (degenerate or not). That determines $\mu, \nu,$ and $\lambda$. 
When can we find $p$, with $0 \lt p \lt 1$, for which $(*)$ holds? This clearly articulates the "balancing" insight previously stated only vaguely: we are going to hold $Y$ and $Z$ fixed and hope to find a value of $p$ that appropriately balances their relative contributions to $X$. Although it's not immediately evident that such a $p$ need exist, what is clear is that it depends only on the moments $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$. The problem thereby is reduced to relatively simple algebra--all the analysis of random variables has been completed. Solution This algebraic problem isn't too hard to solve, because $(*)$ is at worst a quadratic equation for $p$ and the governing inequalities $(1)$ and $(2)$ are relatively simple. Indeed, $(*)$ tells us the product of its roots $p_1$ and $p_2$ is $$p_1p_2 = (\lambda - 1)\frac{1}{(\mu+1)(\nu+\lambda)} \ge 0$$ and the sum is $$p_1 + p_2 = (2\lambda + \lambda \mu + \nu)\frac{1}{(\mu+1)(\nu+\lambda)} \gt 0.$$ Therefore both roots must be positive. Furthermore, their average is less than $1$, because $$ 1 - \frac{(p_1+p_2)}{2} = \frac{\lambda \mu + \nu + 2 \mu \nu}{2(\mu+1)(\nu+\lambda)} \gt 0.$$ (By doing a bit of algebra, it's not hard to show the larger of the two roots does not exceed $1$, either.) A Theorem Here is what we have found: Given any two positive random variables $Y$ and $Z$ (at least one of which is nondegenerate) for which $E[Y]$, $E[1/Y]$, $E[Z]$, and $E[1/Z]$ exist and are finite. Then there exist either one or two values $p$, with $0 \lt p \lt 1$, that determine a mixture variable $X$ with weight $p$ for $Y$ and weight $1-p$ for $-Z$ and for which $E[X]E[1/X]=1$. Every such instance of a random variable $X$ with $E[X]E[1/X]=1$ is of this form. That gives us a rich set of examples indeed! Constructing the Simplest Possible Example Having characterized all examples, let's proceed to construct one that is as simple as possible. For the negative part $Z$, let's choose a degenerate variable --the very simplest kind of random variable. It will be scaled to make its value $1$, whence $\lambda=1$. The solution of $(*)$ includes $p_1=0$, reducing it to an easily solved linear equation: the only positive root is $$p = \frac{1}{1+\mu} + \frac{1}{1+\nu}.\tag{3}$$ For the positive part $Y$, we obtain nothing useful if $Y$ is degenerate, so let's give it some probability at just two distinct positive values $a \lt b$, say $\Pr(X=b)=q$. In this case the definition of expectation gives $$\mu = E[Y] = (1-q)a + qb;\ \nu = E[1/Y] = (1-q)/a + q/b.$$ To make this even simpler, let's make $Y$ and $1/Y$ identical: this forces $q=1-q=1/2$ and $a=1/b$. Now $$\mu = \nu = \frac{b + 1/b}{2}.$$ The solution $(3)$ simplifies to $$p = \frac{2}{1+\mu} = \frac{4}{2 + b + 1/b}.$$ How can we make this involve simple numbers? Since $a\lt b$ and $ab=1$, necessarily $b\gt 1$. Let's choose the simplest number greater than $1$ for $b$; namely, $b=2$. The foregoing formula yields $p = 4/(2+2+1/2) = 8/9$ and our candidate for the simplest possible example therefore is $$\eqalign{ \Pr(X=2) = \Pr(X=b) = \Pr(Y=b)p = qp = \frac{1}{2}\frac{8}{9} = \frac{4}{9};\\ \Pr(X=1/2) = \Pr(X=a) = \Pr(Y=a)p = qp = \cdots = \frac{4}{9};\\ \Pr(X=-1) = \Pr(Z=1)(1-p) = 1-p = \frac{1}{9}. }$$ This is the very example offered in the textbook.
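A quick numerical check of both the textbook example and the general recipe (a sketch; the particular $Y$ and $Z$ are just the choices made above):

import numpy as np

# The textbook example: P(X=2) = P(X=1/2) = 4/9, P(X=-1) = 1/9.
vals = np.array([2.0, 0.5, -1.0])
prob = np.array([4/9, 4/9, 1/9])
print(prob @ vals, prob @ (1 / vals))        # both equal 1, so E[X] E[1/X] = 1

# The general recipe: pick positive Y and Z (here Y uniform on {1/2, 2} and Z identically 1),
# compute mu = E[Y], nu = E[1/Y], lambda = E[1/Z], and solve equation (*) for the weight p.
mu, nu, lam = 1.25, 1.25, 1.0
coeffs = [(mu + 1) * (nu + lam),             # quadratic in p obtained by expanding (*)
          -((mu + 1) * lam + (nu + lam)),
          lam - 1]
print(max(np.roots(coeffs)))                 # 8/9, the mixing weight found above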
{ "source": [ "https://stats.stackexchange.com/questions/305713", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/119261/" ] }
305,725
I am comparing the means of male and female blood sugar levels to see if males have lower blood sugar levels than females. My hypothesis is as follows: H0 : µf - µm = 0 Ha : µf - µm >0 on a 5% significance level, H0 should be rejected, since p-value (Prob>|t|) = 0.001/2 < 0.05. Is this correct? Because when I look at the output and compare the means the male mean is higher than the female mean.
{ "source": [ "https://stats.stackexchange.com/questions/305725", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/179028/" ] }
306,276
I am trying to learn how to use Neural Networks. I was reading this tutorial . After fitting a Neural Network on a Time Series using the value at $t$ to predict the value at $t+1$, the author obtains the following plot, where the blue line is the time series, the green is the prediction on the train data, red is the prediction on the test data (he used a test-train split), and comments: "We can see that the model did a pretty poor job of fitting both the training and the test datasets. It basically predicted the same input value as the output." Then the author decides to use $t$, $t-1$ and $t-2$ to predict the value at $t+1$. In doing so he obtains another plot and says "Looking at the graph, we can see more structure in the predictions." My question: Why is the first one "poor"? It looks almost perfect to me; it predicts every single change perfectly! And similarly, why is the second better? Where is the "structure"? To me it seems much poorer than the first one. In general, when is a prediction on a time series good and when is it bad?
It's sort of an optical illusion: the eye looks at the graph, and sees that the red and blue graphs are right next to each other. The problem is that they are right next to each other horizontally , but what matters is the vertical distance. The eye most easily sees the distance between the curves in the two-dimensional space of the Cartesian graph, but what matters is the one-dimensional distance within a particular t value. For example, suppose we had points A1 = (10,100), A2 = (10.1, 90), A3 = (9.8,85), P1 = (10.1,100.1), and P2 = (9.8, 88). The eye is naturally going to compare P1 to A1, because that is the closest point, while P2 is going to be compared to A2. Since P1 is closer to A1 than P2 is to A3, P1 is going to look like a better prediction. But when you compare P1 to A1, you're just looking at how well P1 is able to simply repeat what it saw earlier; with respect to A1, P1 isn't a prediction . The proper comparison is between P1 v. A2, and P2 v. A3, and in this comparison P2 is better than P1. It would have been clearer if, in addition to plotting y_actual and y_pred against t, there had been graphs of (y_pred-y_actual) against t.
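To put the illusion into numbers (a sketch with an assumed random-walk series rather than the tutorial's dataset), compare a lag-1 "prediction" against the actual values at the same time index and against the values it merely copied:

import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))   # a wiggly series (random walk)

pred = y[:-1]      # "predict" y[t] by simply repeating y[t-1]
actual = y[1:]

# Compared at the SAME time index, the error is the size of a typical one-step change:
print(np.sqrt(np.mean((pred - actual) ** 2)))    # about 1

# The horizontal comparison the eye makes pairs pred[t] with the value it copied, y[t-1],
# so the apparent "error" is exactly zero -- that is the illusion:
print(np.sqrt(np.mean((pred - y[:-1]) ** 2)))    # 0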
{ "source": [ "https://stats.stackexchange.com/questions/306276", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146552/" ] }
306,827
In basic under-grad statistics courses, students are (usually?) taught hypothesis testing for the mean of a population. Why is it that the focus is on the mean and not on the median? My guess is that it is easier to test the mean due to the central limit theorem, but I'd love to read some educated explanations.
Because Alan Turing was born after Ronald Fisher. In the old days, before computers, all this stuff had to be done by hand or, at best, with what we would now call calculators. Tests for comparing means can be done this way - it's laborious, but possible. Tests for quantiles (such as the median) would be pretty much impossible to do this way. For example, quantile regression relies on minimizing a relatively complicated function. This would not be possible by hand. It is possible with programming. See e.g. Koenker or Wikipedia . Quantile regression has fewer assumptions than OLS regression and provides more information.
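To see why the median is computationally heavier by hand, note that the mean minimizes squared loss and has a closed form, while the median minimizes absolute loss, a non-smooth objective you would normally hand to an optimizer (a small sketch, not tied to any particular test):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1001)

# Squared loss: the minimizer is the sample mean (closed form, easy by hand):
print(x.mean(), minimize_scalar(lambda m: np.sum((x - m) ** 2)).x)

# Absolute loss: the minimizer is the sample median (no closed form to work out by hand;
# the optimizer agrees with np.median up to its tolerance):
print(np.median(x), minimize_scalar(lambda m: np.sum(np.abs(x - m))).x)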
{ "source": [ "https://stats.stackexchange.com/questions/306827", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/34756/" ] }
306,829
Recently, I have been working on the DBSCAN algorithm; the original paper is M. Ester, H. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. However, there are debates on whether the clustering result is deterministic or not. I read through the paper and relevant materials, but no solid proof can be found. It's quite easy to say that given epsilon and min samples, the set of core points should be unique, so the clustering should be deterministic as well. However, can anyone take a shot at proving this in a more rigorous mathematical style?
{ "source": [ "https://stats.stackexchange.com/questions/306829", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/150501/" ] }
306,840
Does anyone know if I have a chance to get a closed form for the following integral (and if yes, what would be the trick)? $$\int \left\{1-\Phi(x)\right\}^k \Phi(x)^{n-k} \varphi\left(\frac{x-\mu}{\sigma}\right) dx,$$ where $\Phi$ and $\varphi$ are the normal cumulative distribution and density functions, $k \leq n$ are integers, and $\mu$ and $\sigma >0$ are real numbers. I had a look at this page https://en.wikipedia.org/wiki/List_of_integrals_of_Gaussian_functions , but I'm not sure it contains what I need.
{ "source": [ "https://stats.stackexchange.com/questions/306840", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79097/" ] }
307,210
Here, have a look: You can see exactly where the training data ends. The training data goes from $-1$ to $1$. I used Keras and a 1-100-100-2 dense network with tanh activation. I calculate the result from two values, p and q, as p / q. This way I can achieve numbers of any size using only values smaller than 1. Please note I am still a beginner in this field, so go easy on me.
You're using a feed-forward network; the other answers are correct that FFNNs are not great at extrapolation beyond the range of the training data. However, since the data has a periodic quality, the problem may be amenable to modeling with an LSTM. LSTMs are a variety of neural network cell that operate on sequences, and have a "memory" of what they have "seen" before. The abstract of this book chapter suggests an LSTM approach is a qualified success on periodic problems. In this case, the training data would be a sequence of tuples $(x_i, \sin(x_i))$, and the task is to make accurate predictions for new inputs $x_{i+1} \dots x_{i+n}$ for some $n$, where $i$ indexes some increasing sequence. The length of each input sequence, the width of the interval that the sequences cover, and their spacing are up to you. Intuitively, I'd expect a regular grid covering one period to be a good place to start, with training sequences covering a wide range of values rather than being restricted to some interval. (Jimenez-Guarneros, Magdiel and Gomez-Gil, Pilar and Fonseca-Delgado, Rigoberto and Ramirez-Cortes, Manuel and Alarcon-Aquino, Vicente, "Long-Term Prediction of a Sine Function Using a LSTM Neural Network", in Nature-Inspired Design of Hybrid Intelligent Systems )
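If you want to try this, here is a minimal Keras sketch (not from the cited chapter; the window length, layer size, and number of epochs are arbitrary choices):

import numpy as np
import tensorflow as tf

# Training sequences: sliding windows of sin(x) sampled on a regular grid.
x = np.arange(0, 40 * np.pi, 0.1)
y = np.sin(x)
window = 64
X = np.stack([y[i:i + window] for i in range(len(y) - window)])[..., None]  # (samples, window, 1)
T = y[window:]                                                              # next value to predict

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, T, epochs=5, batch_size=64, verbose=0)

# Roll the model forward beyond the training range, feeding its own predictions back in;
# with enough training it should keep oscillating roughly like a sine wave.
seq = list(y[-window:])
for _ in range(100):
    nxt = model.predict(np.array(seq[-window:])[None, :, None], verbose=0)[0, 0]
    seq.append(nxt)
print(seq[-5:])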
{ "source": [ "https://stats.stackexchange.com/questions/307210", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/180413/" ] }
308,397
In RCTs, randomisation balances unmeasured confounders and, I'm told, ATE and ATT would be the same. In observational studies, this is not possible and Propensity Scores are used in various ways to estimate ATT and/or ATE. The analyses that I've performed and examples that I've seen (eg this helpful text ) show different ATT and ATE (albeit only slightly different). Can anyone please help me understand why they are different and, more importantly, what the differences mean (eg. if ATE>ATT or ATT>ATE), if anything?
The Average Treatment Effect ( ATE ) and the Average Treatment Effect on Treated ( ATT ) are commonly defined across the different groups of individuals. In addition, ATE and ATT are often different because they might measure outcomes ($Y$) that are not affected by the treatment $D$ in the same manner. First, some additional notation: $Y^0$: population-level random variable for outcome $Y$ in control state. $Y^1$: population-level random variable for outcome $Y$ in treatment state. $\delta$: individual-level causal effect of the treatment. $\pi$: proportion of population that takes treatment. Given the above, the ATT is defined as: $\mathrm{E}[\delta|D=1]$ ie. what is the expected causal effect of the treatment for individuals in the treatment group. This can be decomposed more meaningfully as: \begin{align} \mathrm{E}[\delta|D=1] = & \mathrm{E}[Y^1 - Y^0|D=1] \\ = & \mathrm{E}[Y^1|D=1] - \mathrm{E}[Y^0|D=1] \end{align} (Notice that $\mathrm{E}[Y^0|D=1]$ is unobserved so it refers to a counterfactual variable which is not realised in our observed sample.) Similarly the ATE is defined as: $\mathrm{E}[\delta]$, ie. what is the expected causal effect of the treatment across all individuals in the population. Again we can decompose this more meaningfully as: \begin{align} \mathrm{E}[\delta] =& \{ \pi \mathrm{E}[Y^1|D=1] + (1-\pi) \mathrm{E}[Y^1|D=0] \} \\ -& \{ \pi \mathrm{E}[Y^0|D=1] + (1-\pi) \mathrm{E}[Y^0|D=0] \} \end{align} As you see, the ATT and the more general ATE refer by definition to different portions of the population of interest. More importantly, in the ideal scenario of a randomised control trial ( RCT ) ATE equals ATT because we assume that: $\mathrm{E}[Y^0|D=1] = \mathrm{E}[Y^0|D=0]$ and $\mathrm{E}[Y^1|D=1] = \mathrm{E}[Y^1|D=0]$, ie. we believe, respectively, that: the baseline of the treatment group equals the baseline of the control group (layman terms: people in the treatment group would do as badly as the control group if they were not treated) and the treatment effect on the treated group equals the treatment effect on the control group (layman terms: people in the control group would do as well as the treatment group if they were treated). These are very strong assumptions which are commonly violated in observational studies and therefore the ATT and the ATE are not expected to be equal. (Notice that if only the baselines are equal, you can still get an ATT through simple differences: $\mathrm{E}[Y^1|D=1] - \mathrm{E}[Y^0|D=0]$.) Especially in the cases where the individuals self-select to enter the treatment group or not (eg. an e-shop providing a cash bonus where a customer can redeem a bonus coupon for $X$ amount given she shops items worth at least $Y$ amount) the baselines as well as the treatment effects can be different (eg. repeat buyers are more likely to redeem such a bonus, low-value customers might find the threshold $Y$ unrealistically high or high-value customers might be indifferent to the bonus amount $X$ - this also relates to SUTVA ). In scenarios like this even talking about ATE is probably ill-defined (eg. it is unrealistic to expect that all the customers of an e-shop will ever shop items worth $Y$). ATT being unequal to ATE is not unexpected. Whether ATT is smaller or greater than ATE is application-specific. The inequality of the two suggests that the treatment assignment mechanism was potentially not random. 
In general, in an observational study, because the above-mentioned assumptions do not generally hold, we either partition our sample accordingly or we control for differences through "regression-like" techniques. For a more detailed but easy-to-follow exposition of the matter I recommend looking into Morgan & Winship's Counterfactuals and Causal Inference .
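A small simulation (a sketch with made-up numbers) shows how self-selection drives the two apart; here individuals with larger gains are more likely to take the treatment, so ATT exceeds ATE:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

delta = rng.normal(1.0, 2.0, n)                # heterogeneous individual causal effects
d = delta + rng.normal(0, 1.0, n) > 1.5        # self-selection on gains

y0 = rng.normal(0, 1.0, n)                     # potential outcome without treatment
y1 = y0 + delta                                # potential outcome with treatment

print(np.mean(y1 - y0))                        # ATE: effect over everyone (about 1)
print(np.mean((y1 - y0)[d]))                   # ATT: effect over the treated (clearly larger)

# The naive observed-data difference in means equals the ATT here only because the
# baselines happen to be equal (y0 is independent of d); in general it equals neither.
print(y1[d].mean() - y0[~d].mean())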
{ "source": [ "https://stats.stackexchange.com/questions/308397", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/112640/" ] }
308,468
I understand that, for example, maximizing the log-likelihood is equivalent to minimizing the negative log-likelihood. It is indeed a simple change, but still an extra step taken (it seems) for the sole purpose of designing a loss function that will be minimized instead of maximized. I wonder why this has become the standard in Machine Learning. Is there any numerical consideration that favors function minimization instead of maximization? Why has gradient descent become such a universal standard? (I have never seen a Deep Learning paper in which they use gradient ascent to directly maximize the likelihood.) Disclaimer : I came across many similar questions, but none of them has been truly answered. People typically just explain how both approaches are equivalent, or explain why we use the logarithm for numerical stability, but without explaining why minimization is favored over maximization. (See these two questions: 1 , 2 )
Minimising $f(x)$ is entirely equivalent to maximising $-f(x)$ , in every aspect: result, numerical precision, computational complexity... everything. Historically, the convention might have been established because of the "least squares" in linear regression (but don't take my word for it). If it were the other way round, you'd be asking why we don't minimise some cost function...
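A tiny sketch of that equivalence (a toy quadratic, nothing more): gradient descent on $f$ and gradient ascent on $-f$ produce exactly the same iterates.

f_grad = lambda x: 2 * (x - 3)        # gradient of f(x) = (x - 3)^2

x_desc, x_asc = 10.0, 10.0
for _ in range(25):
    x_desc = x_desc - 0.1 * f_grad(x_desc)      # gradient DESCENT on f
    x_asc = x_asc + 0.1 * (-f_grad(x_asc))      # gradient ASCENT on -f
print(x_desc, x_asc, x_desc == x_asc)           # identical, bit for bit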
{ "source": [ "https://stats.stackexchange.com/questions/308468", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/149208/" ] }
308,777
Computers have for a long time been able to play chess using a "brute-force" technique, searching to a certain depth and then evaluating the position. The AlphaGo computer, however, only uses an ANN to evaluate the positions (it does not do any depth search as far as I know). Is it possible to create a chess engine that plays chess in the same way as AlphaGo plays Go? Why has no one done this? Would such a program perform better than the top chess engines (and chess players) of today?
EDIT (after reading the paper): I've read the paper thoughtfully. Let's start off with what Google claimed in the paper: They defeated Stockfish with Monte-Carlo-Tree-Search + Deep neural networks The match was absolutely one-sided, many wins for AlphaZero but none for Stockfish They were able to do it in just four hours AlphaZero played like a human Unfortunately, I don't think it's a good journal paper. I'm going to explain with links (so you know I'm not dreaming): https://chess.stackexchange.com/questions/19360/how-is-alpha-zero-more-human has my answer on how AlphaZero played like a human The match was unfair , strongly biased. I quote Tord Romstad, the original programmer for Stockfish. https://www.chess.com/news/view/alphazero-reactions-from-top-gms-stockfish-author The match results by themselves are not particularly meaningful because of the rather strange choice of time controls and Stockfish parameter settings: The games were played at a fixed time of 1 minute/move, which means that Stockfish has no use of its time management heuristics (lot of effort has been put into making Stockfish identify critical points in the game and decide when to spend some extra time on a move; at a fixed time per move, the strength will suffer significantly). Stockfish couldn't have played the best chess with only a minute per move. The program was not designed for that. Stockfish was running on a regular commercial machine, while AlphaZero was on a 4 millions+ TPU machine tuned for AlphaZero. This is a like matching your high-end desktop against a cheap Android phone. Tord wrote: One is a conventional chess program running on ordinary computers, the other uses fundamentally different techniques and is running on custom designed hardware that is not available for purchase (and would be way out of the budget of ordinary users if it were). Google inadvertently gave 64 threads to a 32 core machine for Stockfish. I quote GM Larry Kaufman (world class computer chess expert): http://talkchess.com/forum/viewtopic.php?p=741987&highlight=#741987 I agree that the test was far from fair; another issue that hurt SF was that it was apparently run on 64 threads on a 32 core machine, but it would play much better running just 32 threads on that machine, since there is almost no SMP benefit to offset the roughly 5 to 3 slowdown. Also the cost ratio was more than I said; I was thinking it was a 64 core machine, but a 32 core machine costs about half what I guessed. So maybe all in all 30 to 1 isn't so bad an estimate. On the other hand I think you underestimate how much it could be further improved. Stockfish gave only 1GB hash table. This is a joke... I have a larger hash table for my Stockfish iOS app (Disclaimer: I'm the author) on my iPhone! Tord wrote: ... way too small hash tables for the number of threads ... 1GB hash table is absolutely unacceptable for a match like this. Stockfish would frequently encounter hash collision. It takes CPU cycles to replace old hash entries. Stockfish is not designed to run with that many number of threads. In my iOS chess app, only a few threads are used. Tord wrote: ... was playing with far more search threads than has ever received any significant amount of testing ... Stockfish was running without an opening book or 6-piece Syzygy endgame tablebase. The sample size was insufficient. The Stockfish version was not the latest. Discussion here . CONCLUSION Google has not proven without doubts their methods are superior to Stockfish. 
Their numbers are superficial and strongly biased to AlphaZero. Their methods are not reproducible by an independent third party. It's still a bit too early to say Deep Learning is a superior method to traditional chess programming. EDIT (Dec 2017): There is a new paper from Google Deepmind ( https://arxiv.org/pdf/1712.01815.pdf ) for deep reinforcement learning in chess. From the abstract, the world number one Stockfish chess engine was "convincingly" defeated. I think this is the most significant achievement in computer chess since the 1997 Deep Blue match. I'll update my answer once I read the paper in details. Original (before Dec 2017) Let's clarify your question: No, chess engines don't use brute-force. AlphaGo does use tree searching, it uses Monte Carlo Tree Search . Google " Monte Carlo Tree Search alphaGo " if you want to be convinced. ANN can be used for chess engines: Giraffe (the link posted by @Tim) NeuroChess Would this program perform better than the top chess-engines (and chess players) of today? Giraffe plays at about Internation Master level, which is about FIDE 2400 rating. However, Stockfish, Houdini and Komodo all play at about FIDE 3000. This is a big gap. Why? Why not Monte-Carlo Tree Search? Material heuristic in chess is simple. Most of the time, a chess position is winning/losing by just counting materials on the board. Please recall counting materials doesn't work for Go. Material counting is orders of magnitude faster than running neural networks - this can be done by bitboards represented by a 64-bit integer. On the 64 bits system, it can be done by only several machine instructions. Searching with the traditional algorithm is much faster than machine learning. Higher nodes per second translate to deeper search. Similarly, there're very useful and cheap techniques such as null move pruning, late move reduction and killer moves etc. They are cheap to run, and much efficient to the approach used in AlphaGo. Static evaluation in chess is fast and useful Machine learning is useful for optimizating parameters, but we also have SPSA and CLOP for chess. There are lots of useful metrics for tree reduction in chess. Much less so for Go. There was research that Monte Carlo Tree Search don't scale well for chess. Go is a different game to chess. The chess algorithms don't work for Go because chess relies on brutal tactics. Tactics is arguably more important in chess. Now, we've established that MCTS work well for AlphaGo but less so for chess. Deep learning would be more useful if: The tuned NN evaluation is better than the traditional algorithms. However ... deep learning is not magic, you as the programmer would still need to do the programming. As mentioned, we have something like SPSA for self-playing for parameters tuning in chess. Investment, money! There's not much money for machine learning in chess. Stockfish is free and open source, but strong enough to defeat all human players. Why would Google spend millions if anybody can just download Stockfish for free? Why's going to pay for the CPU clusters? Who's going to pay for talents? Nobody wants to do it, because chess is considered a "solved" game. If deep learning can achieve the following, it'll beat the traditional algorithm: Given a chess position, "feel" it like a human grandmaster. For example, a human grandmaster wouldn't go into lines that are bad - by experience. Neither the traditional algorithm nor deep learning can achieve that. 
Your NN model might give you a probability [0..1] for your position, but that's not good enough.

Let me point out: No, Giraffe (the link posted by @Tim) doesn't use Monte Carlo Tree Search. It uses the regular negamax algorithm. All it does is replace the regular evaluation function with a NN, and it's very slow.

One more point: although Kasparov was beaten by Deep Blue in the 1997 match, "humanity" was really lost around 2003-2005, when Kramnik lost a match to Deep Fritz without a win and Michael Adams lost to a cluster machine in a one-sided match. Around that time, Rybka proved too strong for even the best players in the world.

Reference: http://www.talkchess.com/forum/viewtopic.php?t=64096&postdays=0&postorder=asc&highlight=alphago+chess&topic_view=flat&start=0

I quote: In chess we have the concept of materiality, which already gives a reasonable estimate of how well an engine is doing and can be computed quickly. Furthermore, there are a lot of other aspects of the game that can be encoded in a static evaluation function which couldn't be done in Go. Due to the many heuristics and good evaluation, the EBF (effective branching factor) is quite small. Using a neural network as a replacement for the static evaluation function would definitely slow down the engine by quite a lot.
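To illustrate the bitboard material-counting point above, here is a small Python sketch. The piece values, dictionary layout and hex board encodings are my own illustrative choices, not from any particular engine; real engines typically do the counting with hardware popcount instructions on raw 64-bit integers.

PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}  # centipawns, illustrative

def popcount(bb: int) -> int:
    # Number of set bits = number of pieces of that type on the board
    return bin(bb).count("1")   # or bb.bit_count() on Python 3.10+

def material_score(white_bbs: dict, black_bbs: dict) -> int:
    """Material balance in centipawns, positive if White is ahead."""
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * (popcount(white_bbs.get(piece, 0)) - popcount(black_bbs.get(piece, 0)))
    return score

# Starting position: each 64-bit integer marks the occupied squares of one piece type.
white = {"P": 0x000000000000FF00, "N": 0x42, "B": 0x24, "R": 0x81, "Q": 0x8}
black = {"P": 0x00FF000000000000, "N": 0x42 << 56, "B": 0x24 << 56, "R": 0x81 << 56, "Q": 0x8 << 56}
print(material_score(white, black))   # 0: both sides have identical material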
{ "source": [ "https://stats.stackexchange.com/questions/308777", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/181463/" ] }
310,119
I was wondering exactly why collecting data until a significant result (e.g., $p \lt .05$) is obtained (i.e., p-hacking) increases the Type I error rate? I would also highly appreciate an R demonstration of this phenomenon.
The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog: I'll flip you to see who pays for dinner. OK, I call heads. Rats, you won. Best two out of three? To understand this better, consider a simplified--but realistic--model of this sequential procedure . Suppose you will start with a "trial run" of a certain number of observations, but are willing to continue experimenting longer in order to get a p-value less than $0.05$. The null hypothesis is that each observation $X_i$ comes (independently) from a standard Normal distribution. The alternative is that the $X_i$ come independently from a unit-variance normal distribution with a nonzero mean. The test statistic will be the mean of all $n$ observations, $\bar X$, divided by their standard error, $1/\sqrt{n}$. For a two-sided test, the critical values are the $0.025$ and $0.975$ percentage points of the standard Normal distribution, $ Z_\alpha=\pm 1.96$ approximately. This is a good test --for a single experiment with a fixed sample size $n$. It has exactly a $5\%$ chance of rejecting the null hypothesis, no matter what $n$ might be. Let's algebraically convert this to an equivalent test based on the sum of all $n$ values, $$S_n=X_1+X_2+\cdots+X_n = n\bar X.$$ Thus, the data are "significant" when $$\left| Z_\alpha\right| \le \left| \frac{\bar X}{1/\sqrt{n}} \right| = \left| \frac{S_n}{n/\sqrt{n}} \right| = \left| S_n \right| / \sqrt{n};$$ that is, $$\left| Z_\alpha\right| \sqrt{n} \le \left| S_n \right| .\tag{1}$$ If we're smart, we'll cut our losses and give up once $n$ grows very large and the data still haven't entered the critical region. This describes a random walk $S_n$. The formula $(1)$ amounts to erecting a curved parabolic "fence," or barrier, around the plot of the random walk $(n, S_n)$: the result is "significant" if any point of the random walk hits the fence. It is a property of random walks that if we wait long enough, it's very likely that at some point the result will look significant. Here are 20 independent simulations out to a limit of $n=5000$ samples. They all begin testing at $n=30$ samples, at which point we check whether the each point lies outside the barriers that have been drawn according to formula $(1)$. From the point at which the statistical test is first "significant," the simulated data are colored red. You can see what's going on: the random walk whips up and down more and more as $n$ increases. The barriers are spreading apart at about the same rate--but not fast enough always to avoid the random walk. In 20% of these simulations, a "significant" difference was found--usually quite early on--even though in every one of them the null hypothesis is absolutely correct! Running more simulations of this type indicates that the true test size is close to $25\%$ rather than the intended value of $\alpha=5\%$: that is, your willingness to keep looking for "significance" up to a sample size of $5000$ gives you a $25\%$ chance of rejecting the null even when the null is true. Notice that in all four "significant" cases, as testing continued, the data stopped looking significant at some points. In real life, an experimenter who stops early is losing the chance to observe such "reversions." This selectiveness through optional stopping biases the results. In honest-to-goodness sequential tests, the barriers are lines. They spread faster than the curved barriers shown here. 
library(data.table)
library(ggplot2)

alpha <- 0.05    # Test size
n.sim <- 20      # Number of simulated experiments
n.buffer <- 5e3  # Maximum experiment length
i.min <- 30      # Initial number of observations
#
# Generate data.
#
set.seed(17)
X <- data.table(
  n = rep(0:n.buffer, n.sim),
  Iteration = rep(1:n.sim, each=n.buffer+1),
  X = rnorm((1+n.buffer)*n.sim)
)
#
# Perform the testing.
#
Z.alpha <- -qnorm(alpha/2)
X[, Z := Z.alpha * sqrt(n)]
X[, S := c(0, cumsum(X))[-(n.buffer+1)], by=Iteration]
X[, Trigger := abs(S) >= Z & n >= i.min]
X[, Significant := cumsum(Trigger) > 0, by=Iteration]
#
# Plot the results.
#
ggplot(X, aes(n, S, group=Iteration)) +
  geom_path(aes(n,Z)) + geom_path(aes(n,-Z)) +
  geom_point(aes(color=!Significant), size=1/2) +
  facet_wrap(~ Iteration)
{ "source": [ "https://stats.stackexchange.com/questions/310119", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/124093/" ] }
310,676
The negative binomial (NB) distribution is defined on non-negative integers and has probability mass function $$f(k;r,p)={\binom {k+r-1}{k}}p^{k}(1-p)^{r}.$$ Does it make sense to consider a continuous distribution on non-negative reals defined by the same formula (replacing $k\in \mathbb N_0$ by $x\in\mathbb R_{\ge 0}$)? The binomial coefficient can be rewritten as a product of $(k+1)\cdot\ldots\cdot(k+r-1)$, which is well-defined for any real $k$. So we would have a PDF $$f(x;r,p)\propto\prod_{i=1}^{r-1}(x+i)\cdot p^{x}(1-p)^{r}.$$ More generally, we can replace the binomial coefficient with Gamma functions, allowing for non-integer values of $r$: $$f(x;r,p)\propto\frac{\Gamma(x+r)}{\Gamma(x+1)\Gamma(r)}\cdot p^{x}(1-p)^{r}.$$ Is it a valid distribution? Does it have a name? Does it have any uses? Is it maybe some compound or a mixture? Are there closed formulas for the mean and the variance (and the proportionality constant in the PDF)? (I am currently studying a paper that uses an NB mixture model (with fixed $r=2$) and fits it via EM. However, the data have been normalized and so are not integers. Nevertheless, the authors apply the standard NB formula to compute the likelihood and get very reasonable results, so everything seems to work out just fine. I found it very puzzling. Note that this question is not about NB GLM.)
That's an interesting question. My research group has been using the distribution you refer to for some years in our publicly available bioinformatics software. As far as I know, the distribution does not have a name and there is no literature on it. While the paper by Chandra et al (2012) cited by Aksakal is closely related, the distribution they consider seems to be restricted to integer values for $r$ and they don't seem to give an explicit expression for the pdf. To give you some background, the NB distribution is very heavily used in genomic research to model gene expression data arising from RNA-seq and related technologies. The count data arises as the number of DNA or RNA sequence reads extracted from a biological sample that can be mapped to each gene. Typically there are tens of millions of reads from each biological sample that are mapped to about 25,000 genes. Alternatively one might have DNA samples from which reads are mapped to genomic windows. We and others have popularized an approach whereby NB glms are fitted to the sequence reads for each gene, and empirical Bayes methods are used to moderate the genewise dispersion estimators (dispersion $\phi=1/r$). This approach has been cited in tens of thousands of journal articles in the genomic literature, so you can get an idea of how much it gets used. My group maintains the edgeR R sofware package . Some years ago we revised the whole package so that it works with fractional counts, using a continuous version of the NB pmf. We simply converted all the binomial coefficients in the NB pmf to ratios of gamma functions and used it as a (mixed) continuous pdf. The motivation for this was that sequence read counts can sometimes be fractional because of (1) ambiguous mapping of reads to the transcriptome or genome and/or (2) normalization of counts to correct for technical effects. So the counts are sometimes expected counts or estimated counts rather than observed counts. And of course the read counts can be exactly zero with positive probability. Our approach ensures that the inference results from our software are continuous in the counts, matching exactly with discrete NB results when the estimated counts happen to be integers. As far as I know, there is no closed form for the normalizing constant in the pdf, nor are there closed forms for the mean or variance. When one considers that there is no closed form for the integral $$\int_0^\infty \frac{1}{\Gamma(x)}dz$$ (the Fransen-Robinson constant) it is clear that there cannot be for the integral of the continuous NB pdf either. However it seems to me that traditional mean and variance formulas for the NB should continue to be good approximations for the continuous NB. Moreover the normalizing constant should vary slowly with the parameters and therefore can be ignored as having negligible influence in the maximum likelihood calculations. One can confirm these hypotheses by numerical integration. The NB distribution arises in bioinformatics as a gamma mixture of Poisson distributions (see the Wikipedia negative binomial article or McCarthy et al below). The continuous NB distribution arises simply by replacing the Poisson distribution with its continuous analog with pdf $$f(x;\lambda)=a(\lambda)\frac{e^{-\lambda}\lambda^x}{\Gamma(x+1)}$$ for $x\ge 0$ where $a(\lambda)$ is a normalizing constant to ensure the density integrates to 1. Suppose for example that $\lambda=10$. 
The Poisson distribution has pmf equal the above pdf on the non-negative integers and, with $\lambda=10$, the Poisson mean and variance are equal to 10. Numerical integration shows that $a(10)=1/0.999875$ and the mean and variance of the continuous distribution are equal to 10 to about 4 significant figures. So the normalizing constant is virtually 1 and the mean and variance are almost exactly the same as for the discrete Poisson distribution. The approximation is improved even more if we add a continuity correction, integrating from $-1/2$ to $\infty$ instead of from 0. With the continuity correction, everything is correct (normalizing constant is 1 and moments agree with discrete Poisson) to about 6 figures. In our edgeR package, we do not need to make any adjustment for the fact that there is mass at zero, because we always work with conditional log-likelihoods or with log-likelihood differences and any delta functions cancel out of the calculations. This is typical BTW for glms with mixed probability distributions. Alternatively, we could consider the distribution to have no mass at zero but to have support starting at -1/2 instead of at zero. Either theoretical perspective leads to the same calculations in practice. Although we make active use of the continuous NB distribution, we haven't published anything on it explicitly. The articles cited below explain the NB approach to genomic data but don't discuss the continuous NB distribution explicitly. In summary, I am not surprised that the article you are studying obtained reasonable results from a continualized version of the NB pdf, because that is our experience also. The key requirement is that we should be modelling the means and variances correctly and that will be fine provided the data, whether integer or not, exhibits the same form of quadratic mean-variance relationship that the NB distribution does. References Robinson, M., and Smyth, G. K. (2008). Small sample estimation of negative binomial dispersion, with applications to SAGE data . Biostatistics 9, 321-332. Robinson, MD, and Smyth, GK (2007). Moderated statistical tests for assessing differences in tag abundance . Bioinformatics 23, 2881-2887. McCarthy, DJ, Chen, Y, Smyth, GK (2012). Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation . Nucleic Acids Research 40, 4288-4297. Chen, Y, Lun, ATL, and Smyth, GK (2014). Differential expression analysis of complex RNA-seq experiments using edgeR. In: Statistical Analysis of Next Generation Sequence Data, Somnath Datta and Daniel S Nettleton (eds), Springer, New York, pages 51--74. Preprint Lun, ATL, Chen, Y, and Smyth, GK (2016). It's DE-licious: a recipe for differential expression analyses of RNA-seq experiments using quasi-likelihood methods in edgeR. Methods in Molecular Biology 1418, 391-416. Preprint Chen Y, Lun ATL, and Smyth, GK (2016). From reads to genes to pathways: differential expression analysis of RNA-Seq experiments using Rsubread and the edgeR quasi-likelihood pipeline . F1000Research 5, 1438.
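Here is a rough numerical sketch of the check described above for the continuous Poisson analog with $\lambda=10$, with and without the continuity correction; it assumes scipy is available and should reproduce the quoted normalizing constant and moments to a few figures.

import numpy as np
from scipy import integrate, special

lam = 10.0

def kernel(x):
    # Unnormalized continuous analog of the Poisson pmf: exp(-lam) * lam^x / Gamma(x+1)
    return np.exp(-lam + x * np.log(lam) - special.gammaln(x + 1.0))

for lower in (0.0, -0.5):   # integrate from 0, then with the continuity correction from -1/2
    total, _ = integrate.quad(kernel, lower, np.inf)
    mean, _ = integrate.quad(lambda x: x * kernel(x), lower, np.inf)
    m2, _ = integrate.quad(lambda x: x**2 * kernel(x), lower, np.inf)
    mean = mean / total
    var = m2 / total - mean**2
    print(f"lower={lower:+.1f}: normalizing constant a = {1/total:.6f}, mean = {mean:.4f}, var = {var:.4f}")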
{ "source": [ "https://stats.stackexchange.com/questions/310676", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28666/" ] }
311,494
I'm working on a Kaggle problem (the problem closed some time ago, but doing it for self-study/practice) where the output is clearly affected by both the year and the month. The original datetime data provides year/month/day/hour information and I felt that year and month were probably the only necessary data. So I've currently modified the feature such that the data is represented only by year and month ( ex) March of 2016 would be 201603) and graphed each outcome with respect to the modified time variable consisting of year/month pair. As you can see here, the 1st outcome has some minor seasonal fluctuations whereas the 3rd and 4th outcomes have clear seasonal trends. On the other hand, the 2nd outcome drastically decreases after May of 2015 (201505). For my model prediction, I'd like to somehow incorporate time as a variable in a way that makes sense. What would be the best approach here? Can I just assume the earliest time period in the data to equal to 1 and increment by 1 for every month and treat the variable as a nominal category variable? Or something else?
You want to preserve the cyclical nature of your inputs. One approach is to cut the datetime variable into four variables: year, month, day, and hour. Then, decompose each of these variables (except for year) into two: you create a sine and a cosine facet of each of these three variables (i.e., month, day, hour), which will retain the fact that hour 24 is closer to hour 0 than to hour 21, and that month 12 is closer to month 1 than to month 10. A quick Google search got me a few links on how to do it:
https://ianlondon.github.io/blog/encoding-cyclical-features-24hour-time/
Optimal construction of day feature in neural networks
https://datascience.stackexchange.com/questions/5990/what-is-a-good-way-to-transform-cyclic-ordinal-attributes
https://medium.com/towards-data-science/top-6-errors-novice-machine-learning-engineers-make-e82273d394db
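A minimal sketch of the month encoding with pandas and numpy (the column names and the toy date range are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame({"timestamp": pd.date_range("2015-01-01", periods=6, freq="7D")})
df["year"] = df["timestamp"].dt.year           # kept as-is, since year is not cyclical
month = df["timestamp"].dt.month               # 1..12
df["month_sin"] = np.sin(2 * np.pi * month / 12)
df["month_cos"] = np.cos(2 * np.pi * month / 12)

# December (12) and January (1) end up close together in (sin, cos) space,
# whereas the raw month numbers 12 and 1 are far apart.
print(df[["year", "month_sin", "month_cos"]])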
{ "source": [ "https://stats.stackexchange.com/questions/311494", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/181479/" ] }
311,813
My understanding of the algorithm is the following: No U-Turn Sampler (NUTS) is a Hamiltonian Monte Carlo Method. This means that it is not a Markov Chain method and thus, this algorithm avoids the random walk part, which is often deemed as inefficient and slow to converge. Instead of doing the random walk, NUTS does jumps of length x. Each jump doubles as the algorithm continues to run. This happens until the trajectory reaches a point where it wants to return to the starting point. My questions: What is so special about the U-turn? How does doubling the trajectory not skip the optimized point? Is my above description correct?
The no U-turn bit is how proposals are generated. HMC generates a hypothetical physical system: imagine a ball with a certain kinetic energy rolling around a landscape with valleys and hills (the analogy breaks down with more than 2 dimensions) defined by the posterior you want to sample from. Every time you want to take a new MCMC sample, you randomly pick the kinetic energy and start the ball rolling from where you are. You simulate in discrete time steps, and to make sure you explore the parameter space properly you simulate steps in one direction and then twice as many in the other direction, turn around again, etc. At some point you want to stop this, and a good way of doing that is when you have done a U-turn (i.e. appear to have gone all over the place). At this point the proposed next step of your Markov chain gets picked (with certain limitations) from the points you have visited. I.e. that whole simulation of the hypothetical physical system was "just" to get a proposal that then gets accepted (the next MCMC sample is the proposed point) or rejected (the next MCMC sample is the starting point). The clever thing about it is that proposals are made based on the shape of the posterior and can be at the other end of the distribution. In contrast, Metropolis-Hastings makes proposals within a (possibly skewed) ball, and Gibbs sampling only moves along one (or at least very few) dimensions at a time.
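For concreteness, here is a minimal sketch of the "discrete time steps" simulation: a plain HMC step using a leapfrog integrator with a fixed number of steps and a standard normal target. It deliberately omits the doubling and U-turn machinery that NUTS adds on top, and the step size and step count are arbitrary choices.

import numpy as np

def leapfrog(theta, r, grad_log_p, eps, n_steps):
    """Simulate the 'ball rolling' in discrete time; theta is the position, r the momentum."""
    r = r + 0.5 * eps * grad_log_p(theta)
    for _ in range(n_steps - 1):
        theta = theta + eps * r
        r = r + eps * grad_log_p(theta)
    theta = theta + eps * r
    r = r + 0.5 * eps * grad_log_p(theta)
    return theta, r

grad = lambda th: -th                     # target: standard normal, log p(theta) = -theta^2/2
rng = np.random.default_rng(0)
theta = 0.0
for _ in range(5):
    r0 = rng.normal()                     # randomly pick the kinetic energy
    theta_new, r_new = leapfrog(theta, r0, grad, eps=0.1, n_steps=20)
    # Accept/reject based on the change in total (potential + kinetic) energy:
    log_accept = (theta**2 / 2 + r0**2 / 2) - (theta_new**2 / 2 + r_new**2 / 2)
    if np.log(rng.uniform()) < log_accept:
        theta = theta_new
    print(theta)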
{ "source": [ "https://stats.stackexchange.com/questions/311813", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86174/" ] }
311,828
I don't know which of the classification or clustering should I use for my data.could anyone explain which conditions should exist for implementing each of them(classification , clustering)?In addition I should mention my data has both categorical and continuous variables. Any little help would be greatly appreciated.
{ "source": [ "https://stats.stackexchange.com/questions/311828", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174353/" ] }
312,119
I have a question regarding classification in general. Let $f$ be a classifier, which outputs a set of probabilities given some data D. Normally, one would say: well, if $P(c|D) > 0.5$ , we will assign a class 1, otherwise 0 (let this be a binary classification). My question is, what if I find out, that if I classify the class as 1 also when the probabilities are larger than, for instance 0.2, and the classifier performs better. Is it legitimate to then use this new threshold when doing classification? I would interpret the necessity for lower classification bound in the context of the data emitting a smaller signal; yet still significant for the classification problem. I realize this is one way to do it. However, if this is not correct thinking of reducing the threshold, what would be some data transformations, which emphasize individual features in a similar manner, so that the threshold can remain at 0.5?
Frank Harrell has written about this on his blog: Classification vs. Prediction , which I agree with wholeheartedly. Essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component. And here, you need the probabilistic output of your model - but also considerations like: What are the consequences of deciding to treat a new observation as class 1 vs. 0? Do I then send out a cheap marketing mail to all 1s? Or do I apply an invasive cancer treatment with big side effects? What are the consequences of treating a "true" 0 as 1, and vice versa? Will I tick off a customer? Subject someone to unnecessary medical treatment? Are my "classes" truly discrete? Or is there actually a continuum (e.g., blood pressure), where clinical thresholds are in reality just cognitive shortcuts? If so, how far beyond a threshold is the case I'm "classifying" right now? Or does a low-but-positive probability to be class 1 actually mean "get more data", "run another test"? So, to answer your question: talk to the end consumer of your classification, and get answers to the questions above. Or explain your probabilistic output to her or him, and let her or him walk through the next steps. Here is another way of looking at this. You ask: what if I find out, that if I classify the class as 1 also when the probabilities are larger than, for instance 0.2, and the classifier performs better. They key word in this question is "better". What does it mean that your classifier performs "better"? This of course depends on your evaluation metric, and depending on your metric, a "better" performing classifier may look very different. In a numerical prediction framework, I have written a short paper on this ( Kolassa, 2020 ), but the exact same thing happens for classification. Importantly, this is the case even if we have perfect probabilistic classifications. That is, they are calibrated : if an instance is predicted to have a probability $\hat{p}$ to belong to the target class, then that is indeed its true probability to be of that class. As an illustration, suppose you have applied your probabilistic classifier to a new set of instances. Some of them have a high predicted probability to belong to the target class, more not. Perhaps the distribution of these predicted probabilities looks like this: Now suppose you need to make hard 0-1 classifications. For that, you need to decide on a threshold such that you will classify each instance into the target class if its predicted probability exceeds that threshold. What is the optimal threshold to use? Based on my paragraph above, it should not come as a surprise that this optimal threshold (where the classifier performs "best") depends on the evaluation measure. In this case, we can simulate: we draw $10^7$ samples for the predicted probability as above, then for each sample $\hat{p}$ assign it to the target class with probability $\hat{p}$ , as the ground truth. In parallel, we can compare the probabilities to all possible thresholds $0\leq t\leq 1$ and evaluate common error measures for such thresholded hard classifications: These plots are unsurprising. Using a threshold of $t=0$ (assigning everything to the target class) yields a perfect recall of $1$ . 
Precision is undefined for high thresholds where there are no instances whose predicted probabilities exceed that threshold, and it is unstable just below that high threshold, depending on whether the highest-scoring instances are in the target class or not. Finally, since we have an unbalanced dataset with more negatives than positives, assigning everything to the non-target class (i.e., using a threshold of $t=1$) maximizes accuracy. So, these three measures elicit classifications that are probably not very useful. In practice, people often use combinations of precision and recall. One very common such combination is the F1 score, which will indeed elicit an "optimal" threshold that is not $0$ or $1$, but in between. Sounds better, right? However, note that this again depends on the particular weight between precision and recall we want. The F1 score uses equal weighting, but it is just one member of an entire family of evaluation metrics parameterized by the relative weights of precision and recall. And, again unsurprisingly, the "optimal" threshold depends on which $F_\beta$ score we use, i.e., on which weight we use, and we are back to square one: in order to find the "optimal" classifier, we need to tailor our evaluation metric to the business problem at hand.

R code:

aa <- 2
bb <- 10
n_sims <- 1e7
set.seed(1)
sim_probs <- rbeta(n_sims,aa,bb)
sim_actuals <- runif(n_sims)<sim_probs
summary(sim_probs)
summary(sim_actuals)

par(mai=c(.5,.5,.5,.1))
xx <- seq(0,1,by=.01)
plot(xx,dbeta(xx,aa,bb),type="l",xlab="",ylab="",
  las=1,main="Distribution of predicted probabilities")

thresholds <- seq(0,1,by=0.01)
recall <- sapply(thresholds,function(tt) sum(sim_probs>=tt & sim_actuals)/sum(sim_actuals))
precision <- sapply(thresholds,function(tt) sum(sim_probs>=tt & sim_actuals)/sum(sim_probs>=tt))
accuracy <- sapply(thresholds,function(tt) (sum(sim_probs>=tt & sim_actuals)+sum(sim_probs<tt & !sim_actuals))/n_sims)

opar <- par(mfrow=c(1,3),mai=c(.7,.5,.5,.1))
plot(thresholds,recall,type="l",xlab="Threshold", ylab="",las=1,main="Recall")
plot(thresholds,precision,type="l",xlab="Threshold", ylab="",las=1,main="Precision")
plot(thresholds,accuracy,type="l",xlab="Threshold", ylab="",las=1,main="Accuracy")

betas <- c(0.5,1,2)
FF <- sapply(betas,function(bb) sapply(thresholds,function(tt)
  (1+bb^2)*sum(sim_probs>=tt & sim_actuals)/
  ((1+bb^2)*sum(sim_probs>=tt & sim_actuals)+
    sum(sim_probs>=tt & !sim_actuals)+bb^2*sum(sim_probs<tt & sim_actuals))))
for ( ii in seq_along(betas) ) {
  plot(thresholds,FF[,ii],type="l",xlab="Threshold",
    ylab="",las=1,main=paste0("F",betas[ii]," score"))
  abline(v=thresholds[which.max(FF[,ii])],col="red")
}
{ "source": [ "https://stats.stackexchange.com/questions/312119", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93932/" ] }
312,424
I'm studying this Tutorial on Variational Autoencoders by Carl Doersch . In the second page it states: One of the most popular such frameworks is the Variational Autoencoder [1, 3], the subject of this tutorial. The assumptions of this model are weak, and training is fast via backpropagation. VAEs do make an approximation, but the error introduced by this approximation is arguably small given high-capacity models . These characteristics have contributed to a quick rise in their popularity. I've read in the past these sort of claims about high-capacity models , but I don't seem to find any clear definition for it. I also found this related stackoverflow question but to me the answer is very unsatisfying. Is there a definition for the capacity of a model? Can you measure it?
Capacity is an informal term. It's very close (if not a synonym) for model complexity. It's a way to talk about how complicated a pattern or relationship a model can express. You could expect a model with higher capacity to be able to model more relationships between more variables than a model with a lower capacity. Drawing an analogy from the colloquial definition of capacity, you can think of it as the ability of a model to learn from more and more data, until it's been completely "filled up" with information. There are various ways to formalize capacity and compute a numerical value for it, but importantly these are just some possible "operationalizations" of capacity (in much the same way that, if someone came up with a formula to compute beauty, you would realize that the formula is just one fallible interpretation of beauty). VC dimension is a mathematically rigorous formulation of capacity. However, there can be a large gap between the VC dimension of a model and the model's actual ability to fit the data. Even though knowing the VC dim gives a bound on the generalization error of the model, this is usually too loose to be useful with neural networks. Another line of research see here is to use the spectral norm of the weight matrices in a neural network as a measure of capacity. One way to understand this is that the spectral norm bounds the Lipschitz constant of the network. The most common way to estimate the capacity of a model is to count the number of parameters. The more parameters, the higher the capacity in general. Of course, often a smaller network learns to model more complex data better than a larger network, so this measure is also far from perfect. Another way to measure capacity might be to train your model with random labels ( Neyshabur et. al ) -- if your network can correctly remember a bunch of inputs along with random labels, it essentially shows that the model has the ability to remember all those data points individually. The more input/output pairs which can be "learned", the higher the capacity. Adapting this to an auto-encoder, you might generate random inputs, train the network to reconstruct them, and then count how many random inputs you can successfully reconstruct with less than $\epsilon$ error.
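Here is a rough sketch of the random-label probe described in the last paragraph, using scikit-learn; the network size, input dimension and sample sizes are arbitrary choices.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d = 20                                        # input dimension

for n in (50, 200, 1000):                     # number of random input/label pairs
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)            # labels carry no information at all
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=5000, random_state=0)
    model.fit(X, y)
    # Training accuracy on pure noise: the largest n for which this stays near 1.0
    # gives a rough sense of how many arbitrary input/output pairs the model can memorize.
    print(n, model.score(X, y))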
{ "source": [ "https://stats.stackexchange.com/questions/312424", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/183008/" ] }
312,474
Often, in the course of my (self-)study of statistics, I've met the terminology "$\sigma$-algebra generated by a random variable". I don't understand the definition on Wikipedia , but most importantly I don't get the intuition behind it. Why/when do we need $\sigma-$algebras generated by random variables? What is their meaning? I know the following: a $\sigma$-algebra on a set $\Omega$ is a nonempty collection of subsets of $\Omega$ which contains $\Omega$, is closed under complement and under countable union. we introduce $\sigma$-algebras to build probability spaces on infinite sample spaces. In particular, if $\Omega$ is uncountably infinite, we know there can exist unmeasurable subsets (sets for which we cannot define a probability). Thus, we can't just use the power set of $\Omega$ $\mathcal{P}(\Omega)$ as our set of events $\mathcal{F}$. We need a smaller set, which is still large enough so that we can define the probability of interesting events, and we can talk about convergence of a sequence of random variables. In short, I think I have a fair intuitive understanding of $\sigma-$algebras. I would like to have a similar understanding for the $\sigma-$algebras generated by random variables: definition, why we need them, intuition, an example...
Consider a random variable $X$ . We know that $X$ is nothing but a measurable function from $\left(\Omega, \mathcal{A} \right)$ into $\left(\mathbb{R}, \mathcal{B}(\mathbb{R}) \right)$ , where $\mathcal{B}(\mathbb{R})$ are the Borel sets of the real line. By definition of measurability we know that we have $$X^{-1} \left(B \right) \in \mathcal{A}, \quad \forall B \in \mathcal{B}\left(\mathbb{R}\right)$$ But in practice the preimages of the Borel sets may not be all of $\mathcal{A}$ but instead they may constitute a much coarser subset of it. To see this, let us define $$\mathcal{\Sigma} = \left\{ S \in \mathcal{A}: S = X^{-1}(B), \ B \in \mathcal{B}(\mathbb{R}) \right\}$$ Using the properties of preimages, it is not too difficult to show that $\mathcal{\Sigma}$ is a sigma-algebra. It also follows immediately that $\mathcal{\Sigma} \subset \mathcal{A}$ , hence $\mathcal{\Sigma}$ is a sub-sigma-algebra. Further, by the definitions it is easy to see that the mapping $X: \left( \Omega, \mathcal{\Sigma} \right) \to \left( \mathbb{R}, \mathcal{B} \left(\mathbb{R} \right) \right)$ is measurable. $\mathcal{\Sigma}$ is in fact the smallest sigma-algebra that makes $X$ a random variable as all other sigma-algebras of that kind would at the very least include $\mathcal{\Sigma}$ . For the reason that we are dealing with preimages of the random variable $X$ , we call $\mathcal{\Sigma}$ the sigma-algebra induced by the random variable $X$ . Here is an extreme example: consider a constant random variable $X$ , that is, $X(\omega) \equiv \alpha$ . Then $X^{-1} \left(B \right), \ B \in \mathcal{B} \left(\mathbb{R} \right)$ equals either $\Omega$ or $\varnothing$ depending on whether $\alpha \in B$ . The sigma-algebra thus generated is trivial and as such, it is definitely included in $\mathcal{A}$ . Hope this helps.
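As one more worked example along the same lines (a constructed illustration): let $\Omega=\{1,2,3,4,5,6\}$ model a die roll with $\mathcal{A}=\mathcal{P}(\Omega)$, and let $X(\omega)=1$ if $\omega$ is even and $X(\omega)=0$ otherwise. For any Borel set $B$, the preimage $X^{-1}(B)$ depends only on which of the two values $0$ and $1$ lie in $B$, so $$\mathcal{\Sigma}=\sigma(X)=\left\{ \varnothing,\ \{1,3,5\},\ \{2,4,6\},\ \Omega \right\}.$$ This sub-sigma-algebra has just $4$ elements, far coarser than the $2^6=64$ subsets in $\mathcal{A}$: it consists of exactly those events whose occurrence can be decided by observing $X$ alone.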
{ "source": [ "https://stats.stackexchange.com/questions/312474", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/58675/" ] }
312,552
Some statistical tests are robust and some are not. What exactly does robustness mean? Surprisingly, I couldn't find such a question on this site. Moreover, sometimes, robustness and powerfulness of a test are discussed together. And intuitively, I couldn't differentiate between the two concepts. What is a powerful test? How is it different from a robust statistical test?
Robustness has various meanings in statistics, but all imply some resilience to changes in the type of data used. This may sound a bit ambiguous, but that is because robustness can refer to different kinds of insensitivities to changes. For example: Robustness to outliers Robustness to non-normality Robustness to non-constant variance (or heteroscedasticity) In the case of tests , robustness usually refers to the test still being valid given such a change. In other words, whether the outcome is significant or not is only meaningful if the assumptions of the test are met. When such assumptions are relaxed (i.e. not as important), the test is said to be robust. The power of a test is its ability to detect a significant difference if there is a true difference. The reason specific tests and models are used with various assumptions is that these assumptions simplify the problem (e.g. require less parameters to be estimated). The more assumptions a test makes, the less robust it is, because all these assumptions must be met for the test to be valid. On the other hand, a test with fewer assumptions is more robust. However, robustness generally comes at the cost of power, because either less information from the input is used, or more parameters need to be estimated. Robust A $t$-test could be said to be robust, because while it assumes normally distributed groups, it is still a valid test for comparing approximately normally distributed groups. A Wilcoxon test is less powerful when the assumptions of the $t$-test are met, but it is more robust, because it does not assume an underlying distribution and is thus valid for non-normal data. Its power is generally lower because it uses the ranks of the data, rather than the original numbers and thus essentially discards some information. Not Robust An $F$-test is a comparison of variances, but it is very sensitive to non-normality and therefore invalid for approximate normality. In other words, the $F$-test is not robust.
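Here is a small simulation sketch of this robustness/power trade-off, assuming scipy; the sample size, number of simulations and location shift are arbitrary choices, and the two-sample Wilcoxon rank-sum test is run via its Mann-Whitney U form.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sim, shift = 30, 2000, 0.5

for name, sampler in [("normal", rng.standard_normal),
                      ("t(2), heavy-tailed", lambda size: rng.standard_t(2, size=size))]:
    rej_t, rej_w = 0, 0
    for _ in range(n_sim):
        x = sampler(size=n)
        y = sampler(size=n) + shift        # a true location difference is present
        rej_t += stats.ttest_ind(x, y).pvalue < 0.05
        rej_w += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < 0.05
    # Estimated power: the t-test tends to win under normality,
    # while the rank-based test tends to win under heavy tails.
    print(f"{name}: power t-test = {rej_t / n_sim:.2f}, power Wilcoxon = {rej_w / n_sim:.2f}")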
{ "source": [ "https://stats.stackexchange.com/questions/312552", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/179171/" ] }
312,573
I am given the model $Y_i=\alpha_0+\beta_0 X_i+\epsilon_0$, where $i=1,2,...,n$, $X_i$ are fixed numbers and $\epsilon \sim N(0, \sigma^2).$ I am also given that $\sigma^2$ and the parameters $(\alpha_0,\beta_0)$ for $E(Y_i)=\alpha_0+\beta_0 X_i$ are unknown. I have to estimate $(\alpha_0,\beta_0)$ using $(\alpha^*,\beta^*)$ which are found by minimizing $\sum_{i=1}^n (Y_i-\alpha-\beta X_i)^2.$ To find $\alpha^*$, I set $\frac{d}{d\alpha}\sum_{i=1}^n (Y_i-\alpha-\beta X_i)^2=0$ and got that $\alpha^*=\widehat{Y}-\beta^* \widehat{X}$, where $\widehat{Y}=\frac{Y_1+Y_2+...+Y_n}{n}$ and $\widehat{X}=\frac{X_1+X_2+...+X_n}{n}$. However I am having a hard time finding $\beta^*$. Here is my process and where I get stuck: $\frac{d}{d\beta}\sum_{i=1}^n (Y_i-\alpha-\beta X_i)^2=0$ $\sum_{i=1}^n \frac{d}{d\beta}(Y_i-\alpha-\beta X_i)^2=0$ $\sum_{i=1}^n X_i(Y_i-\alpha-\beta X_i)=0$ $\sum_{i=1}^n (Y_i X_i-\alpha X_i-\beta X_i^2)=0$ $\sum_{i=1}^n (Y_i X_i-(\widehat{Y}-\beta \widehat{X}) X_i-\beta X_i^2)=0$ $\sum_{i=1}^n (Y_i X_i-\widehat{Y} X_i+\beta \widehat{X} X_i-\beta X_i^2)=0$ $\sum_{i=1}^n (Y_i X_i-\widehat{Y} X_i+\beta (\widehat{X} X_i-X_i^2))=0$ $\sum_{i=1}^n (Y_i X_i-\widehat{Y} X_i+\beta (\widehat{X} X_i-X_i^2))=0$ $\sum_{i=1}^n \beta (\widehat{X} X_i-X_i^2)=\sum_{i=1}^n(\widehat{Y} X_i-Y_i X_i)$ $\beta^* =\frac{\sum_{i=1}^n(\widehat{Y} X_i-Y_i X_i)}{\sum_{i=1}^n(\widehat{X} X_i-X_i^2)}$ However, $\beta^*$ should equal $\frac{\sum_{i=1}^n(X_i-\widehat{X})Y_i}{\sum_{i=1}^n(X_i-\widehat{X})^2}$. Could someone please explain what went wrong? Thank you. Ps I am sorry for any mistakes in typing the given information, I am not too familiar with the topic.
{ "source": [ "https://stats.stackexchange.com/questions/312573", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/177983/" ] }
312,780
This is a general question that was asked indirectly multiple times in here, but it lacks a single authoritative answer. It would be great to have a detailed answer to this for the reference. Accuracy , the proportion of correct classifications among all classifications, is very simple and very "intuitive" measure, yet it may be a poor measure for imbalanced data . Why does our intuition misguide us here and are there any other problems with this measure?
Most of the other answers focus on the example of unbalanced classes. Yes, this is important. However, I argue that accuracy is problematic even with balanced classes. Frank Harrell has written about this on his blog: Classification vs. Prediction and Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules . Essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. Mapping these predicted probabilities $(\hat{p}, 1-\hat{p})$ to a 0-1 classification, by choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component. And here, you need the probabilistic output of your model - but also considerations like: What are the consequences of deciding to treat a new observation as class 1 vs. 0? Do I then send out a cheap marketing mail to all 1s? Or do I apply an invasive cancer treatment with big side effects? What are the consequences of treating a "true" 0 as 1, and vice versa? Will I tick off a customer? Subject someone to unnecessary medical treatment? Are my "classes" truly discrete? Or is there actually a continuum (e.g., blood pressure), where clinical thresholds are in reality just cognitive shortcuts? If so, how far beyond a threshold is the case I'm "classifying" right now? Or does a low-but-positive probability to be class 1 actually mean "get more data", "run another test"? Depending on the consequences of your decision, you will use a different threshold to make the decision. If the action is invasive surgery, you will require a much higher probability for your classification of the patient as suffering from something than if the action is to recommend two aspirin. Or you might even have three different decisions although there are only two classes (sick vs. healthy): "go home and don't worry" vs. "run another test because the one we have is inconclusive" vs. "operate immediately". The correct way of assessing predicted probabilities $(\hat{p}, 1-\hat{p})$ is not to compare them to a threshold, map them to $(0,1)$ based on the threshold and then assess the transformed $(0,1)$ classification. Instead, one should use proper scoring-rules . These are loss functions that map predicted probabilities and corresponding observed outcomes to loss values, which are minimized in expectation by the true probabilities $(p,1-p)$ . The idea is that we take the average over the scoring rule evaluated on multiple (best: many) observed outcomes and the corresponding predicted class membership probabilities, as an estimate of the expectation of the scoring rule. Note that "proper" here has a precisely defined meaning - there are improper scoring rules as well as proper scoring rules and finally strictly proper scoring rules . Scoring rules as such are loss functions of predictive densities and outcomes. Proper scoring rules are scoring rules that are minimized in expectation if the predictive density is the true density. Strictly proper scoring rules are scoring rules that are only minimized in expectation if the predictive density is the true density. As Frank Harrell notes , accuracy is an improper scoring rule. (More precisely, accuracy is not even a scoring rule at all : see my answer to Is accuracy an improper scoring rule in a binary classification setting? ) This can be seen, e.g., if we have no predictors at all and just a flip of an unfair coin with probabilities $(0.6,0.4)$ . 
Accuracy is maximized if we classify everything as the first class and completely ignore the 40% probability that any outcome might be in the second class. (Here we see that accuracy is problematic even for balanced classes.) Proper scoring-rules will prefer a $(0.6,0.4)$ prediction to the $(1,0)$ one in expectation. In particular, accuracy is discontinuous in the threshold: moving the threshold a tiny little bit may make one (or multiple) predictions change classes and change the entire accuracy by a discrete amount. This makes little sense. More information can be found at Frank's two blog posts linked to above, as well as in Chapter 10 of Frank Harrell's Regression Modeling Strategies . (This is shamelessly cribbed from an earlier answer of mine .) EDIT. My answer to Example when using accuracy as an outcome measure will lead to a wrong conclusion gives a hopefully illustrative example where maximizing accuracy can lead to wrong decisions even for balanced classes .
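A small numerical sketch of the unfair-coin example above, comparing accuracy with the Brier score (a proper scoring rule), assuming numpy:

import numpy as np

rng = np.random.default_rng(0)
y = rng.random(100_000) < 0.6           # outcomes of the unfair coin, P(class 1) = 0.6

for p_hat in (0.6, 1.0):                # honest probability vs. overconfident "certain" prediction
    acc = np.mean((p_hat > 0.5) == y)   # accuracy after thresholding at 0.5
    brier = np.mean((p_hat - y) ** 2)   # Brier score, lower is better
    print(f"predicted probability {p_hat}: accuracy = {acc:.3f}, Brier score = {brier:.3f}")

# Both predictions earn the same accuracy (about 0.6), but the Brier score correctly
# prefers the honest 0.6 forecast (about 0.24) to the overconfident 1.0 (about 0.40).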
{ "source": [ "https://stats.stackexchange.com/questions/312780", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/35989/" ] }
312,789
I have a time series. I would like to have a model that takes in past and current values and outputs something like a probability / number that tells me: If the value is going up by more than 25% If the value is going down by more than 25% Neither of the above I wanted to try Neural Networks (Not interested in LSTMs), but are there other techniques? If not, do you know of a good resource?
{ "source": [ "https://stats.stackexchange.com/questions/312789", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/146552/" ] }
313,605
I have learned that, when dealing with data using model-based approach, the first step is modeling data procedure as a statistical model. Then the next step is developing efficient/fast inference/learning algorithm based on this statistical model. So I want to ask which statistical model is behind the support vector machine (SVM) algorithm?
You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple) For example, in a linear model, if your loss function is $\sum_i g(\varepsilon_i) = \sum_i g(y_i-x_i'\beta)$ then minimizing that will correspond to maximum likelihood for $f\propto \exp(-a\,g(\varepsilon))$ $= \exp(-a\,g(y-x'\beta))$. (Here I have a linear kernel) If I recall correctly SVM-regression has a loss function like this: That corresponds to a density that is uniform in the middle with exponential tails (as we see by exponentiating its negative, or some multiple of its negative). There's a 3 parameter family of these: corner-location (relative insensitivity threshold) plus location and scale. It's an interesting density; if I recall rightly from looking at that particular distribution a few decades ago, a good estimator for location for it is the average of two symmetrically-placed quantiles corresponding to where the corners are (e.g. midhinge would give a good approximation to MLE for one particular choice of the constant in the SVM loss); a similar estimator for the scale parameter would be based on their difference, while the third parameter corresponds basically to working out which percentile the corners are at (this might be chosen rather than estimated as it often is for SVM). So at least for SVM regression it seems pretty straightforward, at least if we're choosing to get our estimators by maximum likelihood. (In case you're about to ask ... I have no reference for this particular connection to SVM: I just worked that out now. It's so simple, however, that dozens of people will have worked it out before me so no doubt there are references for it -- I've just never seen any.)
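As a sketch of the loss-density correspondence described above: the epsilon-insensitive loss used in SVM regression and the (unnormalized) density proportional to $\exp(-a\,g(\varepsilon))$, which is flat in the middle with exponential tails. The corner location and the constant $a$ are arbitrary illustrative choices.

import numpy as np
import matplotlib.pyplot as plt

eps_tube, a = 1.0, 2.0                      # illustrative corner location and scale constant

def eps_insensitive(r):
    # SVM-regression loss: zero inside the tube, linear outside
    return np.maximum(np.abs(r) - eps_tube, 0.0)

r = np.linspace(-4, 4, 401)
loss = eps_insensitive(r)
density = np.exp(-a * loss)
density = density / (density.sum() * (r[1] - r[0]))   # normalize numerically

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].plot(r, loss)
axes[0].set_title("epsilon-insensitive loss")
axes[1].plot(r, density)
axes[1].set_title("implied density: flat middle, exponential tails")
plt.tight_layout()
plt.show()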
{ "source": [ "https://stats.stackexchange.com/questions/313605", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243687/" ] }
313,681
I'm trying to understand the history of Gradient descent and Stochastic gradient descent . Gradient descent was invented in Cauchy in 1847. Méthode générale pour la résolution des systèmes d'équations simultanées . pp. 536–538 For more information about it see here . Since then gradient descent methods kept developing and I'm not familiar with their history. In particular I'm interested in the invention of stochastic gradient descent. A reference that can be used in an academic paper in more than welcomed.
Stochastic Gradient Descent is preceded by Stochastic Approximation as first described by Robbins and Monro in their paper, A Stochastic Approximation Method . Kiefer and Wolfowitz subsequently published their paper, * Stochastic Estimation of the Maximum of a Regression Function* which is more recognizable to people familiar with the ML variant of Stochastic Approximation (i.e Stochastic Gradient Descent), as pointed out by Mark Stone in the comments. The 60's saw plenty of research along that vein -- Dvoretzky, Powell, Blum all published results that we take for granted today. It is a relatively minor leap to get from the Robbins and Monro method to the Kiefer Wolfowitz method, and merely a reframing of the problem to then get to Stochastic Gradient Descent (for regression problems). The above papers are widely cited as being the antecedents of Stochastic Gradient Descent, as mentioned in this review paper by Nocedal, Bottou, and Curtis , which provides a brief historical perspective from a Machine Learning point of view. I believe that Kushner and Yin in their book Stochastic Approximation and Recursive Algorithms and Applications suggest that the notion had been used in control theory as far back as the 40's, but I don't recall if they had a citation for that or if it was anecdotal, nor do I have access to their book to confirm this. Herbert Robbins and Sutton Monro A Stochastic Approximation Method The Annals of Mathematical Statistics, Vol. 22, No. 3. (Sep., 1951), pp. 400-407, DOI: 10.1214/aoms/1177729586 J. Kiefer and J. Wolfowitz Stochastic Estimation of the Maximum of a Regression Function Ann. Math. Statist. Volume 23, Number 3 (1952), 462-466, DOI: 10.1214/aoms/1177729392 Leon Bottou and Frank E. Curtis and Jorge Nocedal Optimization Methods for Large-Scale Machine Learning , Technical Report, arXiv:1606.04838
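As a minimal toy sketch of that reframing (an illustration of the idea, not taken from the cited papers): the Robbins-Monro recursion finds a root of an unknown function from noisy evaluations, and SGD applies the same recursion to single-sample estimates of the gradient of an expected loss.

import numpy as np

rng = np.random.default_rng(0)

# Robbins-Monro: find the root of M(x) = x - 2 from noisy evaluations of M.
x = 0.0
for n in range(1, 2001):
    noisy_M = (x - 2.0) + rng.normal()
    x = x - (1.0 / n) * noisy_M            # step sizes a_n = 1/n satisfy the classical conditions
print("Robbins-Monro estimate of the root:", x)

# SGD: minimizing E[(y - w z)^2] means finding the root of its gradient,
# and each data point gives a noisy evaluation of that gradient.
w = 0.0
for n in range(1, 2001):
    z = rng.normal()
    y = 3.0 * z + rng.normal(scale=0.1)    # data generated with true slope 3
    grad_estimate = -2.0 * (y - w * z) * z
    w = w - (1.0 / n) * grad_estimate
print("SGD estimate of the regression slope:", w)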
{ "source": [ "https://stats.stackexchange.com/questions/313681", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81056/" ] }
315,402
This is a question of terminology. Sometimes I see people refer to deep neural networks as "multi-layered perceptrons", why is this? A perceptron, I was taught, is a single layer classifier (or regressor) with a binary threshold output using a specific way of training the weights (not back-prop). If the output of the perceptron doesn't match the target output, we add or subtract the input vector to the weights (depending on if the perceptron gave a false positive or a false negative). It's a quite primitive machine learning algorithm. The training procedure doesn't appear to generalize to a multi-layer case (at least not without modification). A deep neural network is trained via backprop which uses the chain rule to propagate gradients of the cost function back through all of the weights of the network. So, the question is. Is a "multi-layer perceptron" the same thing as a "deep neural network"? If so, why is this terminology used? It seems to be unnecessarily confusing. In addition, assuming the terminology is somewhat interchangeable, I've only seen the terminology "multi-layer perceptron" when referring to a feed-forward network made up of fully connected layers (no convolutional layers, or recurrent connections). How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP?
One can consider multi-layer perceptrons (MLPs) to be a subset of deep neural networks (DNNs), but the two terms are often used interchangeably in the literature. The assumption that perceptrons are named based on their learning rule is incorrect. The classical "perceptron update rule" is one of the ways that can be used to train one. The early rejection of neural networks was because of this very reason, as the perceptron update rule was prone to vanishing and exploding gradients, making it impossible to train networks with more than a layer. The use of back-propagation in training networks led to using alternate squashing activation functions such as tanh and sigmoid. So, to answer the questions: Is a "multi-layer perceptron" the same thing as a "deep neural network"? An MLP is a subset of DNNs. While a DNN can have loops, an MLP is always feed-forward, i.e., a multi-layer perceptron (MLP) is a finite acyclic graph. Why is this terminology used? A lot of the terminology used in the scientific literature has to do with trends of the time that have caught on. How broad is this terminology? Would one use the term "multi-layered perceptron" when referring to, for example, Inception net? How about for a recurrent network using LSTM modules used in NLP? So, yes, Inception, convolutional networks, ResNet, etc. are all MLPs because there is no cycle between connections. Even if there are shortcut connections skipping layers, as long as they are in the forward direction, it can be called a multilayer perceptron. But LSTMs, vanilla RNNs, etc. have cyclic connections, hence cannot be called MLPs, though they are a subset of DNNs. This is my understanding of things. Please correct me if I am wrong. Reference Links: https://cs.stackexchange.com/questions/53521/what-is-difference-between-multilayer-perceptron-and-multilayer-neural-network https://en.wikipedia.org/wiki/Multilayer_perceptron https://en.wikipedia.org/wiki/Perceptron http://ml.informatik.uni-freiburg.de/former/_media/teaching/ss10/05_mlps.printer.pdf
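A minimal numpy sketch of what "a finite acyclic graph of fully connected layers" means in practice; the layer sizes, initialization and activation are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A small MLP: input -> hidden -> hidden -> output. All connections point forward
# (no recurrent loops), which is the property that separates an MLP from, say, an LSTM.
sizes = [4, 8, 8, 2]
weights = [rng.normal(scale=0.1, size=(m, k)) for m, k in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(k) for k in sizes[1:]]

def forward(x):
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:    # nonlinearity on the hidden layers only
            x = relu(x)
    return x

print(forward(rng.normal(size=(3, 4))).shape)   # (3, 2)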
{ "source": [ "https://stats.stackexchange.com/questions/315402", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/185965/" ] }
315,502
The question (slightly modified) goes as follows, and if you have never encountered it before you can check it in example 6a, chapter 2, of Sheldon Ross's A First Course in Probability : Suppose that we possess an infinitely large urn and an infinite collection of balls labeled ball number 1, number 2, number 3, and so on. Consider an experiment performed as follows: At 1 minute to 12 P.M., balls numbered 1 through 10 are placed in the urn and one ball removed at random. (Assume that the withdrawal takes no time.) At 1/2 minute to 12 P.M., balls numbered 11 through 20 are placed in the urn and another ball removed at random. At 1/4 minute to 12 P.M., balls numbered 21 through 30 are placed in the urn and another ball removed at random... and so on. The question of interest is, How many balls are in the urn at 12 P.M.? This question, as it's posed, forces basically everyone to get it wrong --- usually the intuition is to say there will be infinitely many balls at 12 P.M. The answer provided by Ross, however, is that with probability one the urn will be empty at 12 P.M. When teaching probability theory, this problem is one of those for which it is very hard to give a good intuitive explanation. On the one hand, you could try to explain it like this: "think of the probability of any ball i being in the urn at 12 P.M. During the infinite random draws, it will eventually be removed. Since this holds for all balls, none of them can be there at the end". However, students will correctly argue with you: "but I'm putting in 10 balls and removing 1 ball each time. It's impossible there will be zero balls at the end". What's the best explanation we can give to them to resolve these conflicting intuitions? I'm also open to the argument that the question is ill-posed and that if we formulate it better the "paradox" disappears, or to the argument that the paradox is "purely mathematical" (but please try to be precise about it).
Ross describes three versions of this "paradox" in Example 6a of his textbook . In each version, 10 balls are added to the urn and 1 ball is removed at each step of the procedure. In the first version, the $10n$-th ball is removed at the $n$-th step. There are infinitely many balls left after midnight because all balls with numbers not ending in zero are still in there. In the second version, the $n$-th ball is removed at the $n$-th step. There are zero balls left after midnight because each ball is eventually going to be removed at the corresponding step. In the third version, balls are removed uniformly at random. Ross computes the probability of each ball being removed by step $n$ and finds that it converges to $1$ as $n\to\infty$ (note that this is not evident! one actually has to perform the computation). This means, by Boole's inequality , that the probability of having zero balls in the end is also $1$. You are saying that this last conclusion is not intuitive and hard to explain; this is wonderfully supported by many confused answers and comments in this very thread. However, the conclusion of the second version is exactly as unintuitive! And it has absolutely nothing to do with probability or statistics. I think that after one accepts the second version, there is nothing particularly surprising about the third version anymore. So whereas the "probabilistic" discussion must be about the third version [see very insightful answers by @paw88789, @Paul, and @ekvall], the "philosophical" discussion should rather focus on the second version, which is much easier and is similar in spirit to Hilbert's hotel . The second version is known as the Ross-Littlewood paradox . I link to the Wikipedia page, but the discussion there is horribly confusing and I do not recommend reading it at all. Instead, take a look at this MathOverflow thread from years ago . It is closed by now but contains several very perceptive answers. A short summary of the answers that I find most crucial is as follows. We can define a set $S_n$ of the balls present in the urn after step $n$. We have that $S_1=\{2,\ldots 10\}$, $S_2=\{3,\ldots 20\}$, etc. There is a mathematically well-defined notion of the limit of a sequence of sets and one can rigorously prove that the limit of this sequence exists and is the empty set $\varnothing$. Indeed, what balls can be in the limit set? Only the ones that are never removed. But every ball is eventually removed. So the limit is empty. We can write $S_n \to \varnothing$. At the same time, the number $|S_n|$ of the balls in the set $S_n$, also known as the cardinality of this set, is equal to $10n-n=9n$. The sequence $9n$ is obviously diverging, meaning that the cardinality converges to the cardinality of $\mathbb N$, also known as aleph-zero $\aleph_0$. So we can write that $|S_n|\to \aleph_0$. The "paradox" now is that these two statements seem to contradict each other: \begin{align} S_n &\to \varnothing \\ |S_n| &\to \aleph_0 \ne 0 \end{align} But of course there is no real paradox and no contradiction. Nobody said that taking cardinality is a "continuous" operation on sets, so we cannot exchange it with the limit: $$\lim |S_n| \ne |\lim S_n|.$$ In other words, from the fact that $|S_n|=9n$ for every integer $n\in \mathbb N$ we cannot conclude that $|S_\omega|$ (the value at the first ordinal ) is equal to $\infty$. Instead, $|S_\omega|$ has to be computed directly and turns out to be zero.
So I think what we get out of this really is the conclusion that taking cardinalities is a discontinuous operation... [@HarryAltman] So I think this paradox is just the human tendency to assume that "simple" operations are continuous. [@NateEldredge] This is easier to understand with functions instead of sets. Consider the characteristic (aka indicator) function $f_n(x)$ of the set $S_n$, which is defined to be equal to one on the $[n, 10n]$ interval and zero elsewhere. The first ten functions look like that (compare the ASCII art from @Hurkyl's answer): [figure omitted: plots of the first ten indicator functions] Everybody will agree that for each point $a\in\mathbb R$, we have $\lim f_n(a) = 0$. This by definition means that the functions $f_n(x)$ converge to the function $g(x)=0$. Again, everybody will agree to that. However, observe that the integrals of these functions $\int_0^\infty f_n(x)\,dx = 9n$ get larger and larger and the sequence of integrals diverges. In other words, $$\lim\int f_n(x)dx \ne \int \lim f_n(x) dx.$$ This is a completely standard and familiar analysis result. But it is an exact reformulation of our paradox! A good way to formalize the problem is to describe the state of the jug not as a set (a subset of $\mathbb N$), because those are hard to take limits of, but as its characteristic function. The first "paradox" is that pointwise limits are not the same as uniform limits. [@TheoJohnson-Freyd] The crucial point is that "at noon" the whole infinite sequence has already passed, i.e. we made a "transfinite jump" and arrived at the transfinite state $f_\omega = \lim f_n(x)$. The value of the integral "at noon" has to be the value of the integral of $\lim f_n$, not the other way around. Please note that some of the answers in this thread are misleading despite being highly upvoted. In particular, @cmaster computes $\lim_{n\to\infty} \operatorname{ballCount}(S_n)$ which is indeed infinite, but this is not what the paradox asks about. The paradox asks about what happens after the whole infinite sequence of steps; this is a transfinite construction and so we need to compute $\operatorname{ballCount}(S_\omega)$, which is equal to zero as explained above.
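As a quick numerical aside on the third (probabilistic) version -- a sketch only, with arbitrarily chosen step counts -- the probability that ball 1 is still in the urn after $n$ steps is $\prod_{k=1}^{n} \frac{9k}{9k+1}$, since just before the $k$-th removal the urn holds $9k+1$ balls. The product tends to zero, but only like $n^{-1/9}$, which is part of why the limiting answer feels so counterintuitive:

import numpy as np

def prob_ball1_survives(n):
    # Just before the k-th removal the urn holds 9(k-1) + 10 = 9k + 1 balls,
    # so ball 1 survives that removal with probability 9k / (9k + 1).
    k = np.arange(1, n + 1)
    return np.prod(9 * k / (9 * k + 1))

for n in [10, 100, 10_000, 1_000_000]:
    print(n, prob_ball1_survives(n))

Even after a million steps the survival probability is still roughly 0.2; only in the transfinite limit does it reach zero.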
{ "source": [ "https://stats.stackexchange.com/questions/315502", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/39630/" ] }
315,626
I am using the multilayer perceptron MLPClassifier to train a classification model for my problem. I noticed that using the solver lbfgs (which I assume means Limited-memory BFGS in scikit-learn) outperforms ADAM when the dataset is relatively small (fewer than 100K examples). Can someone provide a concrete justification for that? I couldn't find a good resource that explains the reason behind it. Any input is appreciated. Thank you
There are a lot of reasons that this could be the case. Off the top of my head I can think of one plausible cause, but without knowing more about the problem it is difficult to say that it is the one. An L-BFGS solver is a true quasi-Newton method in that it estimates the curvature of the parameter space via an approximation of the Hessian. So if your parameter space has plenty of long, nearly-flat valleys then L-BFGS would likely perform well. It has the downside of additional cost in performing a rank-two update to the (inverse) Hessian approximation at every step. While this is reasonably fast, it does begin to add up, particularly as the input space grows. This may account for the fact that ADAM outperforms L-BFGS for you as you get more data. ADAM is a first-order method that attempts to compensate for the fact that it doesn't estimate the curvature by adapting the step size in every dimension. In some sense, this is similar to constructing a diagonal Hessian at every step, but it does so cleverly by simply using past gradients. In this way it is still a first-order method, though it has the benefit of acting as though it is second-order. The estimate is cruder than that of L-BFGS in that it is only along each dimension and doesn't account for what would be the off-diagonals in the Hessian. If your Hessian is nearly singular then these off-diagonals may play an important role in the curvature, and ADAM is likely to underperform relative to L-BFGS.
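For anyone who wants to reproduce the comparison from the question, here is a minimal sketch (the dataset size, network width, and other settings below are arbitrary choices, not a prescription):

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# A small synthetic problem, standing in for a "relatively small" dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)

for solver in ["lbfgs", "adam"]:
    clf = MLPClassifier(hidden_layer_sizes=(50,), solver=solver,
                        max_iter=2000, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(solver, round(scores.mean(), 3))

On small problems like this, lbfgs often reaches a good optimum quickly, but the outcome depends on the data and the hyperparameters, so it is worth cross-validating both solvers as above.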
{ "source": [ "https://stats.stackexchange.com/questions/315626", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/150552/" ] }
316,086
Is there a distribution, or can I work from another distribution, to create a distribution like the one in the image below (apologies for the bad drawings)? I want to give a number (0.2, 0.5 and 0.9 in the examples) for where the peak should be, and a standard deviation (sigma) that makes the function wider or narrower. P.S.: When the given number is 0.5, the distribution is a normal distribution.
One possible choice is the beta distribution , but re-parametrized in terms of mean $\mu$ and precision $\phi$, that is, "for fixed $\mu$, the larger the value of $\phi$, the smaller the variance of $y$" (see Ferrari and Cribari-Neto, 2004). The probability density function is constructed by replacing the standard parameters of the beta distribution with $\alpha = \phi\mu$ and $\beta = \phi(1-\mu)$ $$ f(y) = \frac{1}{\mathrm{B}(\phi\mu,\; \phi(1-\mu))}\; y^{\phi\mu-1} (1-y)^{\phi(1-\mu)-1} $$ where $E(Y) = \mu$ and $\mathrm{Var}(Y) = \frac{\mu(1-\mu)}{1+\phi}$. Alternatively, you can calculate appropriate $\alpha$ and $\beta$ parameters that would lead to a beta distribution with pre-defined mean and variance. However, notice that there are restrictions on the possible values of the variance that are valid for the beta distribution. For me personally, the parametrization using precision is more intuitive (think of $x\,/\,\phi$ proportions in binomially distributed $X$, with sample size $\phi$ and probability of success $\mu$). The Kumaraswamy distribution is another bounded continuous distribution, but it would be harder to re-parametrize like above. As others have noticed, it is not normal, since the normal distribution has support on $(-\infty, \infty)$, so at best you could use the truncated normal as an approximation. Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815.
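For concreteness, here is a minimal sketch of this mean--precision parametrization in Python (the particular values of $\mu$ and $\phi$ are arbitrary; scipy's beta takes the standard $\alpha$, $\beta$ shape parameters, so we convert):

import numpy as np
from scipy import stats

def beta_mean_precision(mu, phi):
    # Convert (mean, precision) back to the standard shape parameters.
    return stats.beta(a=mu * phi, b=(1 - mu) * phi)

for mu in [0.2, 0.5, 0.9]:                    # desired mean, as in the question
    dist = beta_mean_precision(mu, phi=20.0)  # larger phi -> smaller variance
    print(mu, dist.mean(), dist.var())        # mean is mu, var is mu(1-mu)/(1+phi)
    print(dist.rvs(size=5, random_state=0))   # a few random draws on (0, 1)

Note that $\mu$ here is the mean rather than the mode; for large $\phi$ the two are close.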
{ "source": [ "https://stats.stackexchange.com/questions/316086", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/185859/" ] }
317,073
The definition of the min_child_weight parameter in xgboost is given as: minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, then the building process will give up further partitioning. In linear regression mode, this simply corresponds to minimum number of instances needed to be in each node. The larger, the more conservative the algorithm will be. I have read quite a few things on xgboost, including the original paper (see formula 8 and the one just after equation 9), this question, and most things to do with xgboost that appear on the first few pages of a google search. ;) Basically, I'm still not clear on why we impose a constraint on the sum of the hessian. My only thought at the minute, from the original paper, is that it relates to the weighted quantile sketch section (and the reformulation, as of equation 3, as a weighted squared loss), which has $h_i$ as the 'weight' of each instance. A further question is why it is simply the number of instances in linear regression mode. I guess this is related to the second derivative of the sum-of-squares loss?
For a regression, the loss of each point in a node is $\frac{1}{2}(y_i - \hat{y_i})^2$ The second derivative of this expression with respect to $\hat{y_i}$ is $1$. So when you sum the second derivative over all points in the node, you get the number of points in the node. Here, min_child_weight means something like "stop trying to split once your sample size in a node goes below a given threshold". For a binary logistic regression, the hessian for each point in a node is going to contain terms like $\sigma(\hat{y_i})(1 - \sigma(\hat{y_i}))$ where $\sigma$ is the sigmoid function. Say you're at a pure node (e.g., all of the training examples in the node are 1's). Then all of the $\hat{y_i}$'s will probably be large positive numbers, so all of the $\sigma(\hat{y_i})$'s will be near 1, so all of the hessian terms will be near 0. Similar logic holds if all of the training examples in the node are 0. Here, min_child_weight means something like "stop trying to split once you reach a certain degree of purity in a node and your model can fit it". The Hessian's a sane thing to use for regularization and limiting tree depth. For regression, it's easy to see how you might overfit if you're always splitting down to nodes with, say, just 1 observation. Similarly, for classification, it's easy to see how you might overfit if you insist on splitting until each node is pure.
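A small sketch of the arithmetic above (this only mirrors the formulas, not xgboost's actual split-finding code; the predicted scores are made-up values):

import numpy as np

# Squared-error loss 0.5 * (y - yhat)^2: the per-point hessian is 1,
# so the hessian sum in a node is just the number of points in it.
yhat_reg = np.array([0.3, -1.2, 2.5, 0.0])
print(np.ones_like(yhat_reg).sum())      # 4.0, i.e. the node size

# Logistic loss: the per-point hessian is sigma(yhat) * (1 - sigma(yhat)).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

yhat_pure = np.array([4.0, 5.0, 6.0])    # confident predictions in a nearly pure node
yhat_mixed = np.array([0.1, -0.2, 0.3])  # uncertain predictions in an impure node
for name, yhat in [("pure", yhat_pure), ("mixed", yhat_mixed)]:
    p = sigmoid(yhat)
    print(name, (p * (1 - p)).sum())     # small for "pure", close to n/4 for "mixed"

So the same min_child_weight threshold acts as a minimum node size in regression and as a purity-based stopping rule in classification.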
{ "source": [ "https://stats.stackexchange.com/questions/317073", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/137387/" ] }
317,504
I have two regressions of the same Y on a three-level X. Overall n=15, with n=5 in each group or level of X. The first regression treats X as categorical, assigning indicator variables to levels 2 and 3, with level one being the reference. Indicators / dummies are like so: X1 = 1 if level = 2, 0 otherwise; X2 = 1 if level = 3, 0 otherwise. As a result my fitted model looks something like this: y = b0 + b1(x1) + b2(x2) I run the regression, and the output includes an Analysis of Variance table (image not reproduced here). The rest of the output is irrelevant here. Okay, so now I run a different regression on the same data. I ditch the categorical analysis and treat X as continuous, but I add a variable to the equation: X^2, the square of X. So now I have the following model: y = b0 + b1(X) + b2(X^2) If I run it, it spits out exactly the same Analysis of Variance table that I showed you above. Why do these two regressions give rise to the same tables? [Credit for this little conundrum goes to Thomas Belin in the Dept of Biostatistics at the University of California Los Angeles.]
In matrix terms your models are in the usual form $E[Y]=X\beta$. The first model represents an element of the first group by the row $(1,0,0)$ in $X$, corresponding to the intercept, the indicator for category 2, and the indicator for category 3. It represents an element of the second group by the row $(1,1,0)$ and an element of the third group by $(1,0,1)$. The second model instead uses rows $(1,1,1^2)=(1,1,1)$, $(1,2,2^2)=(1,2,4)$, and $(1,3,3^2)=(1,3,9)$, respectively. Let's call the resulting model matrices $X_1$ and $X_2$. They are simply related: the columns of one are linear combinations of the columns of the other. For instance, let $$V = \pmatrix{1&1&1 \\ 0&1&3 \\ 0&2&8}.$$ Then since $$\pmatrix{1&0&0 \\ 1&1&0 \\ 1&0&1} V = \pmatrix{1&1&1 \\ 1&2&4 \\ 1&3&9},$$ it follows that $$X_1 V = X_2.$$ The models themselves therefore are related by $$X_1\beta_1 = E[Y] = X_2\beta_2 = (X_1V)\beta_2 = X_1(V\beta_2).$$ That is, the coefficients $\beta_2$ for the second model must be related to those of the first one via $$\beta_1 = V\beta_2.$$ The same relationship therefore holds for their least squares estimates. This shows that the models have identical fits: they merely express them differently. Since the first columns of the two model matrices are the same, any ANOVA table that decomposes variance between the first column and the remaining columns will not change. An ANOVA table that distinguishes between the second and third columns, though, will depend on how the data are encoded. Geometrically (and somewhat more abstractly), the three-dimensional subspace of $\mathbb{R}^{15}$ generated by the columns of $X_1$ coincides with the subspace generated by the columns of $X_2$. Therefore the models will have identical fits. The fits are expressed differently only because the spaces are described with two different bases. To illustrate, here are data just like yours (but with different responses) and the corresponding analyses as generated in R.

set.seed(17)
D <- data.frame(group=rep(1:3, each=5), y=rnorm(3*5, rep(1:3, each=5), sd=2))

Fit the two models:

fit.1 <- lm(y ~ factor(group), D)
fit.2 <- lm(y ~ group + I(group^2), D)

Display their ANOVA tables:

anova(fit.1)
anova(fit.2)

The output for the first model is

              Df Sum Sq Mean Sq F value   Pr(>F)
factor(group)  2 51.836  25.918  14.471 0.000634 ***
Residuals     12 21.492   1.791

For the second model it is

           Df Sum Sq Mean Sq F value    Pr(>F)
group       1 50.816  50.816 28.3726 0.0001803 ***
I(group^2)  1  1.020   1.020  0.5694 0.4650488
Residuals  12 21.492   1.791

You can see that the residual sums of squares are the same. By adding the first two rows in the second model you will obtain the same DF and sum of squares, from which the same mean square, F value, and p-value can be computed. Finally, let's compare the coefficient estimates.

beta.1.hat <- coef(fit.1)
beta.2.hat <- coef(fit.2)

The output is

   (Intercept) factor(group)2 factor(group)3
     0.4508762      2.8073697      4.5084944

(Intercept)       group  I(group^2)
 -3.4627385   4.4667371  -0.5531225

Even the intercepts are completely different. That's because the estimates of any variable in a multiple regression depend on the estimates of all other variables (unless they are all mutually orthogonal, which is not the case for either model). However, look at what multiplication by $V$ accomplishes: $$\pmatrix{1&1&1 \\ 0&1&3 \\ 0&2&8}\pmatrix{-3.4627385 \\ 4.4667371 \\-0.5531225} = \pmatrix{ 0.4508762 \\ 2.8073697 \\ 4.5084944 }.$$ The fits really are the same just as claimed.
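The same check can be done with a few lines of linear algebra (a sketch in Python/numpy rather than R; the matrices are exactly the ones above, with each row repeated five times):

import numpy as np

X1 = np.repeat(np.array([[1., 0., 0.],
                         [1., 1., 0.],
                         [1., 0., 1.]]), 5, axis=0)   # dummy coding, n = 15
X2 = np.repeat(np.array([[1., 1., 1.],
                         [1., 2., 4.],
                         [1., 3., 9.]]), 5, axis=0)   # linear + quadratic coding
V = np.array([[1., 1., 1.],
              [0., 1., 3.],
              [0., 2., 8.]])

print(np.allclose(X1 @ V, X2))            # True: X1 V = X2

# Same column space => same projection (hat) matrix => identical fitted values.
H1 = X1 @ np.linalg.pinv(X1)
H2 = X2 @ np.linalg.pinv(X2)
print(np.allclose(H1, H2))                # True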
{ "source": [ "https://stats.stackexchange.com/questions/317504", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79385/" ] }
317,541
My understanding of what an estimator and an estimate are: Estimator: a rule to calculate an estimate. Estimate: the value calculated from a set of data based on the estimator. Between these two terms, if I am asked to point out the random variable, I would say the estimate is the random variable, since its value will change randomly based on the samples in the dataset. But the answer I was given is that the estimator is the random variable and the estimate is not a random variable. Why is that?
Somewhat loosely -- I have a coin in front of me. The value of the next toss of the coin (let's take {Head=1, Tail=0}, say) is a random variable. It has some probability of taking the value $1$ ($\frac12$ if the experiment is "fair"). But once I have tossed it and observed the outcome, it's an observation, and that observation doesn't vary; I know what it is. Consider now that I will toss the coin twice ($X_1, X_2$). Both of these are random variables and so is their sum (the total number of heads in two tosses). So is their average (the proportion of heads in two tosses) and their difference, and so forth. That is, functions of random variables are in turn random variables. So an estimator -- which is a function of random variables -- is itself a random variable. But once you observe that random variable -- like when you observe a coin toss or any other random variable -- the observed value is just a number. It doesn't vary -- you know what it is. So an estimate -- the value you have calculated based on a sample -- is an observation on a random variable (the estimator) rather than a random variable itself.
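The distinction is easy to see in a small simulation (a sketch only; the fair coin and the sample size of two are arbitrary choices): the estimator -- here the proportion of heads in two tosses -- takes different values across repeated samples, whereas a single computed estimate is just a fixed number.

import numpy as np

rng = np.random.default_rng(1)

# The estimator: proportion of heads in two tosses. Across repeated samples
# it varies, so it is a random variable with its own distribution.
samples = rng.integers(0, 2, size=(10_000, 2))
estimator_values = samples.mean(axis=1)
print(np.unique(estimator_values, return_counts=True))   # values 0.0, 0.5, 1.0

# One observed sample gives one estimate: a plain number that no longer varies.
observed = rng.integers(0, 2, size=2)
print(observed, observed.mean())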
{ "source": [ "https://stats.stackexchange.com/questions/317541", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/149964/" ] }
319,215
Axiomatically, probability is a function $P$ that assigns a real number $P(A)$ to each event $A$ if it satisfies the three fundamental assumptions (Kolmogorov's assumptions): (1) $P(A) \geq 0$ for every $A$; (2) $P(\Omega) = 1$; (3) if $A_1, A_2, \ldots$ are disjoint, then $P\left(\bigcup_{i=1}^{\infty}A_i\right) = \sum_{i=1}^{\infty}P(A_i)$. My question is: in the last assumption, is the converse assumed? If I show that the probabilities of a certain number of events can be added to get the probability of their union, can I directly use this axiom to claim that the events are disjoint?
No, but you can conclude that the probability of any shared events is zero. Disjoint means that $A_i \cap A_j=\emptyset$ for any $i\ne j$. You cannot conclude that, but you can conclude that $P(A_i \cap A_j)=0$ for all $i\ne j$. Any shared elements must have probability zero. Same goes for all higher-order intersections as well. In other words, you can say, with probability 1, that none of the sets can occur together. I have seen such sets called almost disjoint or almost surely disjoint but such terminology is not standard I think.
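A standard counterexample makes this concrete (it is not needed for the conclusion above, but it shows why the converse fails): let $U$ be uniform on $[0,1]$ and take $A=\{U \in [0,\tfrac12]\}$, $B=\{U \in [\tfrac12,1]\}$. Then $P(A\cup B) = 1 = \tfrac12 + \tfrac12 = P(A) + P(B)$, even though $A\cap B=\{U=\tfrac12\}$ is not empty -- additivity holds here because the shared outcome has probability zero, not because the events are disjoint.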
{ "source": [ "https://stats.stackexchange.com/questions/319215", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59355/" ] }
319,323
In TensorFlow's implementation of ResNet, I see that they use the variance scaling initializer; I also find that the Xavier initializer is popular. I don't have much experience with this. Which is better in practice?
Historical perspective: Xavier initialization , originally proposed by Xavier Glorot and Yoshua Bengio in "Understanding the difficulty of training deep feedforward neural networks" , is a weight-initialization technique that tries to make the variance of a layer's outputs equal to the variance of its inputs. This idea turned out to be very useful in practice. Naturally, this initialization depends on the layer's activation function, and in their paper Glorot and Bengio considered the logistic sigmoid activation, which was the default choice at the time. Later on, sigmoid was surpassed by ReLU, because it helped to alleviate the vanishing/exploding gradients problem. Consequently, a new initialization technique appeared, which applied the same idea (balancing the variance of the activations) to this new activation function. It was proposed by Kaiming He et al. in "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification" , and is now often referred to as He initialization . In TensorFlow, He initialization is implemented in the variance_scaling_initializer() function (which is, in fact, a more general initializer, but performs He initialization by default), while the Xavier initializer is, logically, xavier_initializer() . Summary: the main difference for machine learning practitioners is the following: He initialization works better for layers with ReLU activation; Xavier initialization works better for layers with sigmoid activation.
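As a rough sketch of what these initializers do (a simplification; the actual TensorFlow functions also support uniform versus truncated-normal draws and other scaling modes), the two schemes differ only in the variance of the random weights:

import numpy as np

rng = np.random.default_rng(0)

def glorot_normal(fan_in, fan_out):
    # Xavier/Glorot: balance the variance of forward activations and backward gradients.
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # He: compensate for ReLU zeroing out roughly half of the activations.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W_tanh_layer = glorot_normal(256, 128)   # suited to sigmoid/tanh layers
W_relu_layer = he_normal(256, 128)       # suited to ReLU layers
print(W_tanh_layer.std(), W_relu_layer.std())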
{ "source": [ "https://stats.stackexchange.com/questions/319323", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/112537/" ] }