source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
210,830 | Why is the F-test for difference in variance so sensitive to the assumption of normal distribution, even for large $N$? I have tried to search the web and visited the library, but none of it gave any good answers. It says that the test is very sensitive to violation of the assumption of normal distribution, but I do not understand why. Does anyone have a good answer for this? | I presume you mean the F-test for the ratio of variances when testing a pair of sample variances for equality (because that's the simplest one that's quite sensitive to normality; the F-test for ANOVA is less sensitive). If your samples are drawn from normal distributions, the sample variance has a scaled chi-square distribution. Imagine that instead of data drawn from normal distributions, you had a distribution that was heavier-tailed than normal. Then you'd get too many large variances relative to that scaled chi-square distribution, and the probability of the sample variance getting out into the far right tail is very responsive to the tails of the distribution from which the data were drawn. (There will also be too many small variances, but the effect is a bit less pronounced.) Now if both samples are drawn from that heavier-tailed distribution, the larger tail on the numerator will produce an excess of large F values and the larger tail on the denominator will produce an excess of small F values (and vice versa for the left tail). Both of these effects will tend to lead to rejection in a two-tailed test, even though both samples have the same variance. This means that when the true distribution is heavier-tailed than normal, actual significance levels tend to be higher than we want. Conversely, drawing a sample from a lighter-tailed distribution produces a distribution of sample variances that's got too short a tail -- variance values tend to be more "middling" than you get with data from normal distributions. Again, the impact is stronger in the far upper tail than the lower tail. Now if both samples are drawn from that lighter-tailed distribution, this results in an excess of F values near the median and too few in either tail (actual significance levels will be lower than desired). These effects don't seem to necessarily reduce much with larger sample size; in some cases they seem to get worse. By way of partial illustration, here are 10000 sample variances (for $n=10$) for normal, $t_5$ and uniform distributions, scaled to have the same mean as a $\chi^2_9$: It's a bit hard to see the far tail since it's relatively small compared to the peak (and for the $t_5$ the observations in the tail extend out a fair way past where we have plotted to), but we can see something of the effect on the distribution of the variance. It's perhaps even more instructive to transform these by the chi-square cdf, which in the normal case looks uniform (as it should), in the t-case has a big peak in the upper tail (and a smaller peak in the lower tail), and in the uniform case is more hill-like, with a broad peak around 0.6 to 0.8 and extremes that have much lower probability than they should if we were sampling from normal distributions. These in turn produce the effects on the distribution of the ratio of variances I described before.
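Here is a minimal R sketch of this kind of simulation (not the original figures; it mirrors the description above with $n=10$ and 10000 sample variances, and my own choice of rescaling each parent distribution to unit variance so that $(n-1)s^2$ has the same mean as a $\chi^2_9$):
# Sample variances under three parent distributions, each rescaled to variance 1
set.seed(1)
n <- 10; reps <- 1e4
v.normal  <- replicate(reps, var(rnorm(n)))
v.t5      <- replicate(reps, var(rt(n, df = 5) / sqrt(5/3)))     # t_5 rescaled to variance 1
v.uniform <- replicate(reps, var(runif(n, -sqrt(3), sqrt(3))))   # uniform rescaled to variance 1
# Transform (n - 1) * s^2 by the chi-squared_9 cdf: roughly uniform under normality,
# piled up in the tails for the t_5 parent, hill-shaped for the uniform parent
par(mfrow = c(1, 3))
hist(pchisq((n - 1) * v.normal,  df = n - 1), breaks = 40, main = "normal",  xlab = "cdf transform")
hist(pchisq((n - 1) * v.t5,      df = n - 1), breaks = 40, main = "t_5",     xlab = "cdf transform")
hist(pchisq((n - 1) * v.uniform, df = n - 1), breaks = 40, main = "uniform", xlab = "cdf transform")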
Again, to improve our ability to see the effect on the tails (which can be hard to see), I've transformed these by the cdf (in this case, the cdf of the $F_{9,9}$ distribution): In a two-tailed test, we look at both tails of the F distribution; both tails are over-represented when drawing from the $t_5$ and both are under-represented when drawing from a uniform. There would be many other cases to investigate for a full study, but this at least gives a sense of the kind and direction of the effect, as well as how it arises. | {
"source": [
"https://stats.stackexchange.com/questions/210830",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/114516/"
]
} |
211,419 | Intuitively, the mean is just the average of observations. The variance is how much these observations vary from the mean. I would like to know why the inverse of the variance is known as the precision. What intuition can we draw from this? And why is the precision matrix as useful as the covariance matrix in the multivariate (normal) distribution? Insights please? | Precision is often used in Bayesian software by convention. It gained popularity because the gamma distribution can be used as a conjugate prior for the precision. Some say that precision is more "intuitive" than variance because it says how concentrated the values are around the mean rather than how spread out they are. It is said that we are more interested in how precise some measurement is rather than how imprecise it is (but honestly I do not see how it would be more intuitive). The more spread out the values are around the mean (high variance), the less precise they are (small precision). The smaller the variance, the greater the precision. Precision is just the inverse of the variance, $\tau = 1/\sigma^2$. There is really nothing more to it than this. | {
"source": [
"https://stats.stackexchange.com/questions/211419",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67413/"
]
} |
211,436 | There are some variations on how to normalize the images, but most seem to use these two methods: Subtract the mean per channel calculated over all images (e.g. VGG_ILSVRC_16_layers ) Subtract the mean per pixel/channel calculated over all images (e.g. CNN_S , also see Caffe's reference network ) The natural approach, in my mind, would be to normalize each image. An image taken in broad daylight will cause more neurons to fire than a night-time image, and while it may inform us of the time, we usually care about more interesting features present in the edges etc. Pierre Sermanet refers in section 3.3.3 to local contrast normalization, which would be per-image based, but I haven't come across this in any of the examples/tutorials that I've seen. I've also seen an interesting Quora question and Xiu-Shen Wei's post, but they don't seem to support the two above approaches. What exactly am I missing? Is this a color normalization issue, or is there a paper that actually explains why so many use this approach? | Subtracting the dataset mean serves to "center" the data. Additionally, you would ideally like to divide by the standard deviation of that feature or pixel as well if you want to normalize each feature value to a z-score. The reason we do both of those things is that in the process of training our network, we're going to be multiplying (weights) and adding to (biases) these initial inputs in order to cause activations that we then backpropagate with the gradients to train the model. We'd like each feature to have a similar range in this process so that our gradients don't go out of control (and so that we only need one global learning rate multiplier). Another way you can think about it is that deep learning networks traditionally share many parameters - if you didn't scale your inputs in a way that resulted in similarly-ranged feature values (i.e., over the whole dataset, by subtracting the mean), sharing wouldn't happen very easily, because a weight w that suits one part of the image would be far too large or too small for another. You will see in some CNN models that per-image whitening is used, which is more along the lines of your thinking. | {
"source": [
"https://stats.stackexchange.com/questions/211436",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5429/"
]
} |
211,848 | I am studying about maximum likelihood estimation and I read that the likelihood function is the product of the probabilities of each variable. Why is it the product? Why not the sum? I have been trying to search on Google but I can't find any meaningful answers. https://en.wikipedia.org/wiki/Maximum_likelihood | This is a very basic question, and instead of using formal language and mathematical notation, I will try to answer it at a level at which everybody who can understand the question can also understand the answer. Imagine that we have a race of cats. They have a 75% probability of being born white, and 25% probability of being born grey, no other colors. Also, they have a 50% probability of having green eyes and 50% probability of having blue eyes, and coat color and eye color are independent. Now let us look at a litter of eight kittens: You will see that 1 out of 4, or 25%, are grey. Also, 1 out of 2, or 50% have blue eyes. Now the question is, how many kittens have grey fur and blue eyes? You can count them, the answer is one. That is, $\frac{1}{4} \times \frac{1}{2} = \frac{1}{8}$, or 12.5% of 8 kittens. Why does it happen? Because any cat has a 1 in 4 probability to be grey. So, pick four cats, and you can expect one of them to be grey. But if you only pick four cats out of many (and get the expected value of 1 grey cat), the one which is grey has a 1 in 2 probability to have blue eyes. This means, of the total of cats you pick, you first multiply the total by 25% to get the grey cats, and then you multiply the selected 25% of all cats by 50% to get those of them which have blue eyes. This gives you the probability of getting blue-eyed grey cats. Summing them up would give you $\frac{1}{4} + \frac{1}{2}$, which makes $\frac{3}{4}$ or 6 out of 8. In our picture, it corresponds to summing up the cats which have blue eyes with the cats which have grey fur - and counting the one grey blue-eyed kitten twice! Such a calculation can have its place, but it is rather unusual in probability calculations, and it is certainly not the one you are asking about. | {
"source": [
"https://stats.stackexchange.com/questions/211848",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/110023/"
]
} |
211,967 | What is the expected number of times you must roll a die until each side has appeared 3 times? This question was asked in primary school in New Zealand and it was solved using simulations. What is the analytical solution for this problem? | Suppose all $d=6$ sides have equal chances. Let's generalize and find the expected number of rolls needed until side $1$ has appeared $n_1$ times, side $2$ has appeared $n_2$ times, ..., and side $d$ has appeared $n_d$ times. Because the identities of the sides do not matter (they all have equal chances), the description of this objective can be condensed: let us suppose that $i_0$ sides don't have to appear at all, $i_1$ of the sides need to appear just once, ..., and $i_n$ of the sides have to appear $n=\max(n_1,n_2,\ldots,n_d)$ times. Let $$\mathbf{i}=(i_0,i_1,\ldots,i_n)$$ designate this situation and write $$e(\mathbf{i})$$ for the expected number of rolls. The question asks for $e(0,0,0,6)$: $i_3 = 6$ indicates all six sides need to be seen three times each. An easy recurrence is available. At the next roll, the side that appears corresponds to one of the $i_j$: that is, either we didn't need to see it, or we needed to see it once, ..., or we needed to see it $n$ more times; $j$ is the number of times we needed to see it. When $j=0$, we didn't need to see it and nothing changes. This happens with probability $i_0/d$. When $j \gt 0$, then we did need to see this side. Now there is one less side that needs to be seen $j$ times and one more side that needs to be seen $j-1$ times. Thus, $i_j$ becomes $i_j-1$ and $i_{j-1}$ becomes $i_{j-1}+1$. Let this operation on the components of $\mathbf{i}$ be designated $\mathbf{i}\cdot j$, so that $$\mathbf{i}\cdot j = (\color{gray}{i_0, \ldots, i_{j-2}}, i_{j-1}+1, i_j-1, \color{gray}{i_{j+1},\ldots, i_n}).$$ This happens with probability $i_j/d$. We merely have to count this die roll and use recursion to tell us how many more rolls are expected. By the laws of expectation and total probability, $$e(\mathbf{i}) = 1 + \frac{i_0}{d}e(\mathbf{i}) + \sum_{j=1}^n \frac{i_j}{d}e(\mathbf{i}\cdot j)$$ (Let's understand that whenever $i_j=0$, the corresponding term in the sum is zero.) If $i_0=d$, we are done and $e(\mathbf{i})=0$. Otherwise we may solve for $e(\mathbf{i})$, giving the desired recursive formula $$e(\mathbf{i}) = \frac{d + i_1 e(\mathbf{i}\cdot 1) + \cdots + i_n e(\mathbf{i}\cdot n)}{d - i_0}.\tag{1}$$ Notice that $$|\mathbf{i}| = 0(i_0) + 1(i_1) + \cdots + n(i_n)$$ is the total number of events we wish to see. The operation $\cdot j$ reduces that quantity by one for any $j\gt 0$ provided $i_j \gt 0$, which is always the case. Therefore this recursion terminates at a depth of precisely $|\mathbf{i}|$ (equal to $3(6) = 18$ in the question). Moreover (as is not difficult to check) the number of possibilities at each recursion depth in this question is small (never exceeding $8$). Consequently, this is an efficient method, at least when the combinatorial possibilities are not too numerous and we memoize the intermediate results (so that no value of $e$ is calculated more than once). I compute that $$e(0,0,0,6) = \frac{2\,286\,878\,604\,508\,883}{69\,984\,000\,000\,000}\approx 32.677.$$ That seemed awfully small to me, so I ran a simulation (using R). After over three million rolls of the dice, this game had been played to its completion over 100,000 times, with an average length of $32.669$.
The standard error of that estimate is $0.027$: the difference between this average and the theoretical value is insignificant, confirming the accuracy of the theoretical value. The distribution of lengths may be of interest. (Obviously it must begin at $18$, the minimum number of rolls needed to collect all six sides three times each.) # Specify the problem
d <- 6 # Number of faces
k <- 3 # Number of times to see each
N <- 3.26772e6 # Number of rolls
# Simulate many rolls
set.seed(17)
x <- sample(1:d, N, replace=TRUE)
# Use these rolls to play the game repeatedly.
totals <- sapply(1:d, function(i) cumsum(x==i))
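# totals is an N x d matrix: totals[i, j] = number of times face j has appeared in the first i rolls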
n <- 0
base <- rep(0, d)
i.last <- 0
n.list <- list()
for (i in 1:N) {
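# A game completes once every face has appeared at least k more times than at the previous completion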
if (min(totals[i, ] - base) >= k) {
base <- totals[i, ]
n <- n+1
n.list[[n]] <- i - i.last
i.last <- i
}
}
# Summarize the results
sim <- unlist(n.list)
mean(sim)
sd(sim) / sqrt(length(sim))
length(sim)
hist(sim, main="Simulation results", xlab="Number of rolls", freq=FALSE, breaks=0:max(sim)) Implementation Although the recursive calculation of $e$ is simple, it presents some challenges in some computing environments. Chief among these is storing the values of $e(\mathbf{i})$ as they are computed. This is essential, for otherwise each value will be (redundantly) computed a very large number of times. However, the storage potentially needed for an array indexed by $\mathbf{i}$ could be enormous. Ideally, only values of $\mathbf{i}$ that are actually encountered during the computation should be stored. This calls for a kind of associative array. To illustrate, here is working R code. The comments describe the creation of a simple "AA" (associative array) class for storing intermediate results. Vectors $\mathbf{i}$ are converted to strings and those are used to index into a list E that will hold all the values. The $\mathbf{i}\cdot j$ operation is implemented as %.% . These preliminaries enable the recursive function $e$ to be defined rather simply in a way that parallels the mathematical notation. In particular, the line x <- (d + sum(sapply(1:n, function(i) j[i+1]*e.(j %.% i))))/(d - j[1]) is directly comparable to the formula $(1)$ above. Note that all indexes have been increased by $1$ because R starts indexing its arrays at $1$ rather than $0$. Timing shows it takes $0.01$ seconds to compute e(c(0,0,0,6)) ; its value is 32.6771634160506 Accumulated floating point roundoff error has destroyed the last two digits (which should be 68 rather than 06 ). e <- function(i) {
#
# Create a data structure to "memoize" the values.
#
`[[<-.AA` <- function(x, i, value) {
class(x) <- NULL
x[[paste(i, collapse=",")]] <- value
class(x) <- "AA"
x
}
`[[.AA` <- function(x, i) {
class(x) <- NULL
x[[paste(i, collapse=",")]]
}
E <- list()
class(E) <- "AA"
#
# Define the "." operation.
#
`%.%` <- function(i, j) {
i[j+1] <- i[j+1]-1
i[j] <- i[j] + 1
return(i)
}
#
# Define a recursive version of this function.
#
e. <- function(j) {
#
# Detect initial conditions and return initial values.
#
if (min(j) < 0 || sum(j[-1])==0) return(0)
#
# Look up the value (if it has already been computed).
#
x <- E[[j]]
if (!is.null(x)) return(x)
#
# Compute the value (for the first and only time).
#
d <- sum(j)
n <- length(j) - 1
x <- (d + sum(sapply(1:n, function(i) j[i+1]*e.(j %.% i))))/(d - j[1])
#
# Store the value for later re-use.
#
E[[j]] <<- x
return(x)
}
#
# Do the calculation.
#
e.(i)
}
e(c(0,0,0,6)) Finally, here is the original Mathematica implementation that produced the exact answer. The memoization is accomplished via the idiomatic e[i_] := e[i] = ... expression, eliminating almost all the R preliminaries. Internally, though, the two programs are doing the same things in the same way. shift[j_, x_List] /; Length[x] >= j >= 2 := Module[{i = x},
i[[j - 1]] = i[[j - 1]] + 1;
i[[j]] = i[[j]] - 1;
i];
e[i_] := e[i] = With[{i0 = First@i, d = Plus @@ i},
(d + Sum[If[i[[k]] > 0, i[[k]] e[shift[k, i]], 0], {k, 2, Length[i]}])/(d - i0)];
e[{x_, y__}] /; Plus[y] == 0 := e[{x, y}] = 0
e[{0, 0, 0, 6}] $\frac{2286878604508883}{69984000000000}$ | {
"source": [
"https://stats.stackexchange.com/questions/211967",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/114350/"
]
} |
212,813 | I am reading about the quantile function, but it is not clear to me. Could you provide a more intuitive explanation than the one provided below? Since the cdf $F$ is a monotonically increasing function, it has an inverse; let us denote this by $F^{−1}$. If $F$ is the cdf of $X$, then $F^{−1}(\alpha)$ is the value of $x_\alpha$ such that $P(X \le x_\alpha) = \alpha$; this is called the $\alpha$ quantile of $F$. The value $F^{−1}(0.5)$ is the median of the distribution, with half of the probability mass on the left, and half on the right. The values
$F^{−1}(0.25)$ and $F^{−1}(0.75)$ are the lower and upper quartiles. | All this may sound complicated at first, but it is essentially about something very simple. By cumulative distribution function we denote the function that returns probabilities of $X$ being smaller than or equal to some value $x$ , $$ \Pr(X \le x) = F(x).$$ This function takes as input $x$ and returns values from the $[0, 1]$ interval (probabilities)—let's denote them as $p$ . The inverse of the cumulative distribution function (or quantile function) tells you what $x$ would make $F(x)$ return some value $p$ , $$ F^{-1}(p) = x.$$ This is illustrated in the diagram below which uses the normal cumulative distribution function (and its inverse) as an example. Example As an simple example, you can take a standard Gumbel distribution. Its cumulative distribution function is $$ F(x) = e^{-e^{-x}} $$ and it can be easily inverted: recall natural logarithm function is an inverse of exponential function, so it is instantly obvious that quantile function for Gumbel distribution is $$ F^{-1}(p) = -\ln(-\ln(p)) $$ As you can see, the quantile function, according to its alternative name, "inverts" the behaviour of cumulative distribution function. Generalized inverse distribution function Not every function has an inverse. That is why the quotation you refer to says "monotonically increasing function". Recall that from the definition of the function , it has to assign for each input value exactly one output. Cumulative distribution functions for continuous random variables satisfy this property since they are monotonically increasing. For discrete random variables cumulative distribution functions are not continuous and increasing, so we use generalized inverse distribution functions which need to be non-decreasing. More formally, the generalized inverse distribution function is defined as $$ F^{-1}(p) = \inf \big\{x \in \mathbb{R}: F(x) \ge p \big\}. $$ The definition, translated to plain English, says that for given probability value $p$ , we are looking for some $x$ , that results in $F(x)$ returning value greater or equal then $p$ , but since there could be multiple values of $x$ that meet this condition (e.g. $F(x) \ge 0$ is true for any $x$ ), so we take the smallest $x$ of those. Functions with no inverses In general, there are no inverses for functions that can return same value for different inputs, for example density functions (e.g., the standard normal density function is symmetric, so it returns the same values for $-2$ and $2$ etc.). The normal distribution is an interesting example for one more reason—it is one of the examples of cumulative distribution functions that do not have a closed-form inverse . Not every cumulative distribution function has to have a closed-form inverse! Hopefully in such cases the inverses can be found using numerical methods. Use-case The quantile function can be used for random generation as described in How does the inverse transform method work? | {
"source": [
"https://stats.stackexchange.com/questions/212813",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/108408/"
]
} |
213,017 | This is not homework. I am interested in understanding if my logic is correct with this simple stats problem. Let's say I have a 2 sided coin where the probability of flipping a head is $P(H)$ and the probability of flipping a tail is $1-P(H)$. Let's assume all flips have independent probabilities. Now, let's say I want to maximize my chances of predicting whether the coin will be a head or tail on the next flip. If $P(H) = 0.5$, I can guess heads or tails at random and the probability of me being correct is $0.5$. Now, suppose that $P(H) = 0.2$, if I want to maximize my chances of guessing correctly, should I always guess tails where the probability is $0.8$? Taking this one step further, if I had a 3-sided die, and the probability of rolling a 1, 2, or, 3 was $P(1)=0.1$, $P(2)=0.5$, and $P(3)=0.4$, should I always guess 2 to maximize my chances of guessing correctly? Is there another approach that would allow me to guess more accurately? | You're right. If $P(H) = 0.2$, and you're using zero-one loss (that is, you need to guess an actual outcome as opposed to a probability or something, and furthermore, getting heads when you guessed tails is equally as bad as getting tails when you guessed heads), you should guess tails every time. People often mistakenly think that the answer is to guess tails on a randomly selected 80% of trials and heads on the remainder. This strategy is called " probability matching " and has been studied extensively in behavioral decision-making. See, for example, West, R. F., & Stanovich, K. E. (2003). Is probability matching smart? Associations between probabilistic choices and cognitive ability. Memory & Cognition, 31 , 243–251. doi:10.3758/BF03194383 | {
"source": [
"https://stats.stackexchange.com/questions/213017",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13090/"
]
} |
213,464 | In statistical learning, implicitly or explicitly, one always assumes that the training set $\mathcal{D} = \{ \bf {X}, \bf{y} \}$ is composed of $N$ input/response tuples $({\bf{X}}_i,y_i)$ that are independently drawn from the same joint distribution $\mathbb{P}({\bf{X}},y)$ with $$ p({\bf{X}},y) = p( y \vert {\bf{X}}) p({\bf{X}}) $$ and $p( y \vert {\bf{X}})$ the relationship we are trying to capture through a particular learning algorithm. Mathematically, this i.i.d. assumption writes: \begin{gather}
({\bf{X}}_i,y_i) \sim \mathbb{P}({\bf{X}},y), \forall i=1,...,N \\
({\bf{X}}_i,y_i) \text{ independent of } ({\bf{X}}_j,y_j), \forall i \ne j \in \{1,...,N\}
\end{gather} I think we can all agree that this assumption is rarely satisfied in practice, see this related SE question and the wise comments of @Glen_b and @Luca. My question is therefore: Where exactly does the i.i.d. assumption becomes critical in practice? [Context] I'm asking this because I can think of many situations where such a stringent assumption is not needed to train a certain model (e.g. linear regression methods), or at least one can work around the i.i.d. assumption and obtain robust results. Actually the results will usually stay the same, it is rather the inferences that one can draw that will change (e.g. heteroskedasticity and autocorrelation consistent HAC estimators in linear regression: the idea is to re-use the good old OLS regression weights but to adapt the finite-sample behaviour of the OLS estimator to account for the violation of the Gauss-Markov assumptions). My guess is therefore that the i.i.d. assumption is required not to be able to train a particular learning algorithm, but rather to guarantee that techniques such as cross-validation can indeed be used to infer a reliable measure of the model's capability of generalising well , which is the only thing we are interested in at the end of the day in statistical learning because it shows that we can indeed learn from the data. Intuitively, I can indeed understand that using cross-validation on dependent data could be optimistically biased (as illustrated/explained in this interesting example ). For me i.i.d. has thus nothing to do with training a particular model but everything to do with that model's generalisability . This seems to agree with a paper I found by Huan Xu et al, see "Robustness and Generalizability for Markovian Samples" here . Would you agree with that? [Example] If this can help the discussion, consider the problem of using the LASSO algorithm to perform a smart selection amongst $P$ features given $N$ training samples $({\bf{X}}_i,y_i)$ with $\forall i=1,...,N$
$$ {\bf{X}}_i=[X_{i1},...,X_{iP}] $$
We can further assume that: The inputs ${\bf{X}}_i$ are dependent hence leading to a violation of the i.i.d. assumption (e.g. for each feature $j=1,..,P$ we observe a $N$ point time series, hence introducing temporal auto-correlation) The conditional responses $y_i \vert {\bf{X}}_i$ are independent. We have $P \gg N$. In what way(s) does the violation of the i.i.d. assumption can pose problem in that case assuming we plan to determine the LASSO penalisation coefficient $\lambda$ using a cross-validation approach (on the full data set) + use a nested cross-validation to get a feel for the generalisation error of this learning strategy (we can leave the discussion concerning the inherent pros/cons of the LASSO aside, except if it is useful). | The i.i.d. assumption about the pairs $(\mathbf{X}_i, y_i)$ , $i = 1, \ldots, N$ , is often made in statistics and in machine learning. Sometimes for a good reason, sometimes out of convenience and sometimes just because we usually make this assumption. To satisfactorily answer if the assumption is really necessary, and what the consequences are of not making this assumption, I would easily end up writing a book (if you ever easily end up doing something like that). Here I will try to give a brief overview of what I find to be the most important aspects. A fundamental assumption Let's assume that we want to learn a probability model of $y$ given $\mathbf{X}$ , which we call $p(y \mid \mathbf{X})$ . We do not make any assumptions about this model a priory, but we will make the minimal assumption that such a model exists such that the conditional distribution of $y_i$ given $\mathbf{X}_i$ is $p(y_i \mid \mathbf{X}_i)$ . What is worth noting about this assumption is that the conditional distribution of $y_i$ depends on $i$ only through $\mathbf{X}_i$ . This is what makes the model useful, e.g. for prediction. The assumption holds as a consequence of the identically distributed part under the i.i.d. assumption, but it is weaker because we don't make any assumptions about the $\mathbf{X}_i$ 's. In the following the focus will mostly be on the role of independence. Modelling There are two major approaches to learning a model of $y$ given $\mathbf{X}$ . One approach is known as discriminative modelling and the other as generative modelling. Discriminative modelling : We model $p(y \mid \mathbf{X})$ directly, e.g. a logistic regression model, a neural network, a tree or a random forest. The working modelling assumption will typically be that the $y_i$ 's are conditionally independent given the $\mathbf{X}_i$ 's, though estimation techniques relying on subsampling or bootstrapping make most sense under the i.i.d. or the weaker exchangeability assumption (see below). But generally, for discriminative modelling we don't need to make distributional assumptions about the $\mathbf{X}_i$ 's. Generative modelling : We model the joint distribution, $p(\mathbf{X}, y)$ , of $(\mathbf{X}, y)$ typically by modelling the conditional distribution $p(\mathbf{X} \mid y)$ and the marginal distribution $p(y)$ . Then we use Bayes's formula for computing $p(y \mid \mathbf{X})$ . Linear discriminant analysis and naive Bayes methods are examples. The working modelling assumption will typically be the i.i.d. assumption. For both modelling approaches the working modelling assumption is used to derive or propose learning methods (or estimators). That could be by maximising the (penalised) log-likelihood, minimising the empirical risk or by using Bayesian methods. 
Even if the working modelling assumption is wrong, the resulting method can still provide a sensible fit of $p(y \mid \mathbf{X})$ . Some techniques used together with discriminative modelling, such as bagging (bootstrap aggregation), work by fitting many models to data sampled randomly from the dataset. Without the i.i.d. assumption (or exchangeability) the resampled datasets will not have a joint distribution similar to that of the original dataset. Any dependence structure has become "messed up" by the resampling. I have not thought deeply about this, but I don't see why that should necessarily break the method as a method for learning $p(y \mid \mathbf{X})$ . At least not for methods based on the working independence assumptions. I am happy to be proved wrong here. Consistency and error bounds A central question for all learning methods is whether they result in models close to $p(y \mid \mathbf{X})$ . There is a vast theoretical literature in statistics and machine learning dealing with consistency and error bounds. A main goal of this literature is to prove that the learned model is close to $p(y \mid \mathbf{X})$ when $N$ is large. Consistency is a qualitative assurance, while error bounds provide (semi-) explicit quantitative control of the closeness and give rates of convergence. The theoretical results all rely on assumptions about the joint distribution of the observations in the dataset. Often the working modelling assumptions mentioned above are made (that is, conditional independence for discriminative modelling and i.i.d. for generative modelling). For discriminative modelling, consistency and error bounds will require that the $\mathbf{X}_i$ 's fulfil certain conditions. In classical regression one such condition is that $\frac{1}{N} \mathbb{X}^T \mathbb{X} \to \Sigma$ for $N \to \infty$ , where $\mathbb{X}$ denotes the design matrix with rows $\mathbf{X}_i^T$ . Weaker conditions may be enough for consistency. In sparse learning another such condition is the restricted eigenvalue condition, see e.g. On the conditions used to prove oracle results for the Lasso . The i.i.d. assumption together with some technical distributional assumptions imply that some such sufficient conditions are fulfilled with large probability, and thus the i.i.d. assumption may prove to be a sufficient but not a necessary assumption to get consistency and error bounds for discriminative modelling. The working modelling assumption of independence may be wrong for either of the modelling approaches. As a rough rule-of-thumb one can still expect consistency if the data comes from an ergodic process , and one can still expect some error bounds if the process is sufficiently fast mixing . A precise mathematical definition of these concepts would take us too far away from the main question. It is enough to note that there exist dependence structures besides the i.i.d. assumption for which the learning methods can be proved to work as $N$ tends to infinity. If we have more detailed knowledge about the dependence structure, we may choose to replace the working independence assumption used for modelling with a model that captures the dependence structure as well. This is often done for time series. A better working model may result in a more efficient method. Model assessment Rather than proving that the learning method gives a model close to $p(y \mid \mathbf{X})$ it is of great practical value to obtain a (relative) assessment of "how good a learned model is". 
Such assessment scores are comparable for two or more learned models, but they will not provide an absolute assessment of how close a learned model is to $p(y \mid \mathbf{X})$ . Estimates of assessment scores are typically computed empirically based on splitting the dataset into a training and a test dataset or by using cross-validation. As with bagging, a random splitting of the dataset will "mess up" any dependence structure. However, for methods based on the working independence assumptions, ergodicity assumptions weaker than i.i.d. should be sufficient for the assessment estimates to be reasonable, though standard errors on these estimates will be very difficult to come up with. [ Edit: Dependence among the variables will result in a distribution of the learned model that differs from the distribution under the i.i.d. assumption. The estimate produced by cross-validation is not obviously related to the generalization error. If the dependence is strong, it will most likely be a poor estimate.] Summary (tl;dr) All the above is under the assumption that there is a fixed conditional probability model, $p(y \mid \mathbf{X})$ . Thus there cannot be trends or sudden changes in the conditional distribution not captured by $\mathbf{X}$ . When learning a model of $y$ given $\mathbf{X}$ , independence plays a role as a useful working modelling assumption that allows us to derive learning methods a sufficient but not necessary assumption for proving consistency and providing error bounds a sufficient but not necessary assumption for using random data splitting techniques such as bagging for learning and cross-validation for assessment. To understand precisely what alternatives to i.i.d. that are also sufficient is non-trivial and to some extent a research subject. | {
"source": [
"https://stats.stackexchange.com/questions/213464",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/109618/"
]
} |
213,966 | I wish to better understand how the continuity correction to the binomial distribution for the normal approximation was derived. What method was used to decide we should add 1/2 (why not another number?). Any explanation (or a link to suggested reading, other than this , would be appreciated). | In fact it doesn't always "work" (in the sense of always improving the approximation of the binomial cdf by the normal at any $x$ ). If the binomial $p$ is 0.5 I think it always helps, except perhaps for the most extreme tail. If $p$ is not too far from 0.5, for reasonably large $n$ it generally works very well except in the far tail, but if $p$ is near 0 or 1 it might not help at all (see point 6. below) One thing to keep in mind (in spite of illustrations almost always involving pmfs and pdfs) is that the thing we're trying to approximate is the cdf. It can be useful to ponder what's going on with the cdf of the binomial and the approximating normal (e.g. here's $n=20,p=0.5$ ): In the limit the cdf of a standardized binomial will go to a standard normal (note that standardizing affects the scale on the x-axis but not the y-axis); along the way to increasingly large $n$ the binomial cdf's jumps tend to more evenly straddle the normal cdf. Let's zoom in and look at this in the above simple example: Notice that since the approximating normal passes close to the middle of the vertical jumps*, while in the limit the normal cdf is locally approximately linear and (as is the progression of the binomial cdf at the top of each jump); as a result the cdf tends to cross the horizontal steps near $x+\frac{_1}{^2}$ . If you want to approximate the value of the binomial cdf, $F(x)$ at integer $x$ , the normal cdf reaches that height near to $x+\frac{_1}{^2}$ . * If we apply Berry-Esseen to mean-corrected Bernoulli variables, the Berry-Esseen bounds allow for very little wiggle room when $p$ is near $\frac12$ and $x$ is near $\mu$ -- the normal cdf must pass reasonably close to the middle of the jumps there because otherwise the absolute difference in cdfs will exceed the best Berry-Esseen bound on one side or the other. This in turn relates to how far from $x+\frac{_1}{^2}$ the normal cdf can cross horizontal part of the binomial cdf's step-function. Expanding on the motivation that in 1. let's consider how we'd use a normal approximation to the binomial cdf to work out $P(X=k)$ . E.g. $n=20, p=0.5, k=9$ (see the second diagram above). So our normal with the same mean and sd is $N(10,(\sqrt{5})^2)$ . Note that we would approximate the jump in cdf at 9 by the change in normal cdf between about 8.5 and 9.5. Doing the same thing under the less formal but more "usual" textbook motivation (which is perhaps more intuitive, especially for beginning students), we're trying to approximate a discrete variable by a continuous one. We can make a continuous version of the binomial by replacing each probability spike of height $p(x)$ by a rectangle of width 1 centered at $x$ , giving it height $p(x)$ (see the blue rectangle below; imagine one for every x-value) and then approximating that by the normal density with the same mean and sd as the original binomial: The area under the box is approximated by the normal between $x-\frac12$ and $x+\frac12$ ; the two almost-triangular parts that lie above and below the horizontal step are close together in area. Some sum of binomial probabilities in an interval will reduce to a collection of these approximations. 
(Drawing a diagram like this is often very useful if it's not instantly clear whether you need to go up or down by 0.5 for a particular calculation ... work out which binomial values you want in your calculation and go either side by $\frac12$ for each one.) One can motivate this approach algebraically using a derivation [along the lines of de Moivre's -- see here or here for example] to derive the normal approximation (though it can be performed somewhat more directly than de Moivre's approach). That essentially proceeds via several approximations, including using Stirling's approximation on the ${n \choose x}$ term and using that $\log(1+x)\approx x-x^2/2$ to obtain that $$P(X=x)\approx \frac{1}{\sqrt{2\pi np(1-p)}}\exp(-\frac{(x-np)^2}{2np(1-p)})$$ which is to say that the density of a normal with mean $\mu=np$ and variance $\sigma^2 = np(1-p)$ at $x$ is approximately the height of the binomial pmf at $x$ . This is essentially where de Moivre got to. So now consider that we have a midpoint-rule approximation for normal areas in terms of binomial heights ... that is, for $Y\sim N(np,np(1-p))$ , the midpoint rule says that $F(y+\frac12)-F(y-\frac12) = \int_{y-\frac12}^{y+\frac12}f_Y(u)du\approx f_Y(y)$ and we have from de Moivre that $f_Y(x)\approx P(X=x)$ . Flipping that about, $P(X=x)\approx F(x+\frac12)-F(x-\frac12)$ . [A similar "midpoint rule" type approximation can be used to motivate other such approximations of continuous pmfs by densities using a continuity correction, but one must always be careful to pay attention to where it makes sense to invoke that approximation] Historical note: the continuity correction seems to have originated with Augustus de Morgan in 1838 as an improvement of de Moivre's approximation. See, for example Hald (2007)[1]. From Hald's description, his reasoning was along the lines of item 4. above (i.e. essentially in terms of trying to approximate the pmf by replacing the probability spike with a "block" of width 1 centered at the x-value). An illustration of a situation where continuity correction doesn't help: In the plot on the left (where as before, $X$ is the binomial, $Y$ is the normal approximation), $F_X(x)\approx F_Y(x+\frac12)$ and so $p(x) \approx F_Y(x+\frac12)-F_Y(x-\frac12)$ . In the plot on the right (the same binomial but further into the tail), $F_X(x)\approx F_Y(x)$ and so $p(x) \approx F_Y(x)-F_Y(x-1)$ -- which is to say that ignoring the continuity correction is better than using it in this region. [1]: Hald, Anders (2007), "A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713-1935", Sources and Studies in the History of Mathematics and Physical Sciences, Springer-Verlag New York | {
"source": [
"https://stats.stackexchange.com/questions/213966",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
214,485 | My stat prof basically said, if given one of the following three, you can find the other two: Cumulative distribution function Moment Generating Function Probability Density Function But my econometrics professor said CDFs are more fundamental than PDFs because there are examples where you can have a CDF but the PDF isn't defined. Are CDFs more fundamental than PDFs? How do I know whether a PDF or an MGF can be derived from a CDF? | Every probability distribution on (a subset of) $\mathbb R^n$ has a cumulative distribution function, and it uniquely defines the distribution. So, in this sense, the CDF is indeed as fundamental as the distribution itself. A probability density function, however, exists only for (absolutely) continuous probability distributions. The simplest example of a distribution lacking a PDF is any discrete probability distribution, such as the distribution of a random variable that only takes integer values. Of course, such discrete probability distributions can be characterized by a probability mass function instead, but there are also distributions that have neither a PDF nor a PMF, such as any mixture of a continuous and a discrete distribution: (Diagram shamelessly stolen from Glen_b's answer to a related question.) There are even singular probability distributions, such as the Cantor distribution, which cannot be described even by a combination of a PDF and a PMF. Such distributions still have a well-defined CDF, though. For example, here is the CDF of the Cantor distribution, also sometimes called the "Devil's staircase": (Image from Wikimedia Commons by users Theon and Amirki, used under the CC-By-SA 3.0 license.) The CDF, known as the Cantor function, is continuous but not absolutely continuous. In fact, it is constant everywhere except on a Cantor set of zero Lebesgue measure, which nonetheless contains infinitely many points. Thus, the entire probability mass of the Cantor distribution is concentrated on this vanishingly small subset of the real number line, but every point in the set still individually has zero probability. There are also probability distributions that do not have a moment-generating function. Probably the best known example is the Cauchy distribution, a fat-tailed distribution which has no well-defined moments of order 1 or higher (thus, in particular, having no well-defined mean or variance!). All probability distributions on $\mathbb R^n$ do, however, have a (possibly complex-valued) characteristic function, whose definition differs from that of the MGF only by a multiplication with the imaginary unit. Thus, the characteristic function may be regarded as being as fundamental as the CDF. | {
"source": [
"https://stats.stackexchange.com/questions/214485",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68473/"
]
} |
214,733 | I am doing some numerical experiment that consists in sampling a lognormal distribution $X\sim\mathcal{LN}(\mu, \sigma)$, and trying to estimate the moments $\mathbb{E}[X^n]$ by two methods: Looking at the sample mean of the $X^n$ Estimating $\mu$ and $\sigma^2$ by using the sample means for $\log(X), \log^2(X)$, and then using the fact that for a lognormal distribution, we have $\mathbb{E}[X^n]=\exp(n \mu + (n \sigma)^2/2)$. The question is : I find experimentally that the second method performs much better then the first one, when I keep the number of samples fixed, and increase $\mu, \sigma^2$ by some factor T. Is there some simple explanation for this fact? I'm attaching a figure in which the x-axis is T, while the y axis are the values of $\mathbb{E}[X^2]$ comparing the true values of $\mathbb{E}[X^2] = \exp(2 \mu + 2 \sigma^2)$ (orange line), to the estimated values. method 1 - blue dots, method 2 - green dots. y-axis is in log scale EDIT: Below is a minimal Mathematica code to produce the results for one T, with the output: ClearAll[n,numIterations,sigma,mu,totalTime,data,rmomentFromMuSigma,rmomentSample,rmomentSample]
(* Define variables *)
n=2; numIterations = 10^4; sigma = 0.5; mu=0.1; totalTime = 200;
(* Create log normal data*)
data=RandomVariate[LogNormalDistribution[mu*totalTime,sigma*Sqrt[totalTime]],numIterations];
(* the moment by theory:*)
rmomentTheory = Exp[(n*mu+(n*sigma)^2/2)*totalTime];
(*Calculate directly: *)
rmomentSample = Mean[data^n];
(*Calculate through estimated mu and sigma *)
muNumerical = Mean[Log[data]]; (*numerical \[Mu] (gaussian mean) *)
sigmaSqrNumerical = Mean[Log[data]^2]-(muNumerical)^2; (* numerical gaussian variance *)
rmomentFromMuSigma = Exp[ muNumerical*n + (n ^2sigmaSqrNumerical)/2];
(*output*)
Log@{rmomentTheory, rmomentSample,rmomentFromMuSigma} Output: (*Log of {analytic, sample mean of r^2, using mu and sigma} *)
{140., 91.8953, 137.519} above, the second result is the sample mean of $r^2$, which is below the two other results | There is something puzzling in those results since the first method provides an unbiased estimator of $\mathbb{E}[X^2]$, namely$$\frac{1}{N}\sum_{i=1}^N X_i^2$$has $\mathbb{E}[X^2]$ as its mean. Hence the blue dots should be around the expected value (orange curve); the second method provides a biased estimator of $\mathbb{E}[X^2]$, namely$$\mathbb{E}[\exp(n \hat\mu + n^2 \hat{\sigma}^2/2)]>\exp(n \mu + (n \sigma)^2/2)$$when $\hat\mu$ and $\hat\sigma²$ are unbiased estimators of $\mu$ and $\sigma²$ respectively, and it is thus strange that the green dots are aligned with the orange curve. but they are due to the problem and not to the numerical computations: I repeated the experiment in R and got the following picture with the same colour code and the same sequence of $\mu_T$'s and $\sigma_T$'s, which represents each estimator divided by the true expectation: Here is the corresponding R code: moy1=moy2=rep(0,200)
mus=0.14*(1:200)
sigs=sqrt(0.13*(1:200))
tru=exp(2*mus+2*sigs^2)
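# For each T, draw 1e5 standard normals; moy1 is the direct Monte Carlo mean of X^2,
# moy2 the lognormal plug-in estimate exp(2*muhat + 2*sigmahat^2)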
for (t in 1:200){
x=rnorm(1e5)
moy1[t]=mean(exp(2*sigs[t]*x+2*mus[t]))
moy2[t]=exp(2*mean(sigs[t]*x+mus[t])+2*var(sigs[t]*x+mus[t]))}
plot(moy1/tru,col="blue",ylab="relative mean",xlab="T",cex=.4,pch=19)
abline(h=1,col="orange")
lines((moy2/tru),col="green",cex=.4,pch=19) Hence there is indeed a collapse of the second empirical moment as $\mu$ and $\sigma$ increase that I would attribute to the enormous increase in the variance of the said second empirical moment as $\mu$ and $\sigma$ increase. My explanation of this curious phenomenon is that, while $\mathbb{E}[X^2]$ obviously is the mean of
$X^2$, it is not a central value: actually the median of $X^2$ is equal to $e^{2\mu}$. When representing the random variable $X^2$ as $\exp\{2\mu+2\sigma\epsilon\}$ where $\epsilon\sim\mathcal{N}(0,1)$, it is clear that,
when $\sigma$ is large enough, the random variable $\sigma\epsilon$ is almost never of the
magnitude of $\sigma^2$. In other words if
$X$ is $\mathcal{LN}(\mu,\sigma)$ $$\begin{align*}\mathbb{P}(X^2>\mathbb{E}[X^2])&=\mathbb{P}(\log\{X^2\}>2\mu+2\sigma^2)\\&=\mathbb{P}(\mu+\sigma\epsilon>\mu+\sigma^2)\\&=\mathbb{P}(\epsilon>\sigma)\\
&=1-\Phi(\sigma)\end{align*}$$
which can be arbitrarily small. | {
"source": [
"https://stats.stackexchange.com/questions/214733",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/116872/"
]
} |
215,349 | Why would we use odds instead of probabilities when performing logistic regression? | The advantage is that the odds, defined on $(0,\infty)$, map to log-odds on $(-\infty, \infty)$, while this is not the case for probabilities. As a result, you can use regression equations like
$$\log \left(\frac{p_i}{1-p_i}\right) = \beta_0 + \sum_{j=1}^J \beta_j x_{ij}$$
for the log-odds without any problem (i.e. for any value of the regression coefficients and covariates, a valid value for the odds is predicted). You would need extremely complicated multi-dimensional constraints on the regression coefficients $\beta_0,\beta_1,\ldots$ if you wanted to do the same for the log probability (and of course this would not work in a straightforward way for the untransformed probability or odds, either). As a consequence, you get effects like being unable to have a constant risk ratio across all baseline probabilities (some risk ratios would result in probabilities > 1), while this is not an issue with an odds ratio. | {
"source": [
"https://stats.stackexchange.com/questions/215349",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/117323/"
]
} |
215,497 | For example, in R , the MASS::mvrnorm() function is useful for generating data to demonstrate various things in statistics. It takes a mandatory Sigma argument which is a symmetric matrix specifying the covariance matrix of the variables. How would I create a symmetric $n\times n$ matrix with arbitrary entries? | Create an $n\times n$ matrix $A$ with arbitrary values and then use $\Sigma = A^T A$ as your covariance matrix. For example n <- 4
A <- matrix(runif(n^2)*2-1, ncol=n)
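# the crossproduct t(A) %*% A is symmetric and positive semi-definite, hence a valid covariance matrix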
Sigma <- t(A) %*% A | {
"source": [
"https://stats.stackexchange.com/questions/215497",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/78807/"
]
} |
215,696 | I am an enthusiast of programming and machine learning. Only a few months back I started learning about machine learning programming. Like many who don't have a quantitative science background, I also started learning about ML by tinkering with the algorithms and datasets in the widely used ML package (caret in R). A while back I read a blog post in which the author talks about the usage of linear regression in ML. If I am remembering correctly, he talked about how all machine learning in the end uses some kind of "linear regression" (not sure whether he used this exact term), even for linear or non-linear problems. At the time I didn't understand what he meant by that. My understanding of using machine learning for non-linear data is to use a non-linear algorithm to separate the data. This was my thinking: let's say to classify linear data we use a linear equation $y=mx+c$, and for non-linear data we use a non-linear equation, say $y=\sin(x)$. This image is taken from the scikit-learn website's page on support vector machines. In SVM we use different kernels for the ML purpose. So my initial thinking was that the linear kernel separates the data using a linear function and the RBF kernel uses a non-linear function to separate the data. But then I saw this blog where the author talks about neural networks. To classify the non-linear problem in the left subplot, the neural network transforms the data in such a way that in the end we can apply a simple linear separation to the transformed data in the right subplot. My question is whether all machine learning algorithms in the end use a linear separation for classification (on linear or non-linear datasets)? | The answer is No. user20160 has a perfect answer; I will add 3 examples with visualization to illustrate the idea. Note that these plots may not be helpful for you to see if the "final decision" is in linear form, but they give you some sense of tree, boosting and KNN models. We will start with decision trees. With many splits, it is a non-linear decision boundary. And we cannot think of all the previous splits as "feature transformations" with a final decision line at the end. Another example is the boosting model, which aggregates many "weak classifiers", and the final decision boundary is not linear. You can think of it as a complicated algorithm that makes the final prediction. Finally, think about K Nearest Neighbors (KNN). It is also not a linear decision function at the end layer. In addition, there are no "feature transformations" in KNN. Here are three visualizations in 2D space (Tree, Boosting and KNN from top to bottom). The ground truth is two spirals representing two classes; the left subplot shows the predictions from the model and the right subplot shows the decision boundaries from the model. EDIT: @ssdecontrol's answer in this post gives another perspective. It depends on how we define the "transformation". Any function that partitions the data into two pieces can be transformed into a linear model of this form, with an intercept and a single input (an indicator of which "side" of the partition the data point is on). It is important to take note of the difference between a decision function and a decision boundary. | {
"source": [
"https://stats.stackexchange.com/questions/215696",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/15973/"
]
} |
216,003 | Suppose I have a random sample $\lbrace X_i, Y_i\rbrace_{i=1}^n$. Assume this sample is such that the Gauss-Markov assumptions are satisfied such that I can construct an OLS estimator where $$\hat{\beta}_1^{OLS} = \frac{\text{Cov}(X,Y)}{\text{Var(X)}}$$
$$\hat{\beta}_0^{OLS} = \bar{Y} - \bar{X} \hat{\beta}_1^{OLS}$$ Now suppose I take my data set and double it, meaning there is an exact copy for each of the $n$ $(X_i,Y_i)$ pairs. My Question How does this affect my ability to use OLS? Is it still consistent and identified? | Do you have a good reason to do the doubling (or duplication?) It doesn't make much statistical sense, but still it is interesting to see what happens algebraically. In matrix form your linear model is $$ \DeclareMathOperator{\V}{\mathbb{V}}
Y = X \beta + E,
$$ the least square estimator is $\hat{\beta}_{\text{ols}} = (X^T X)^{-1} X^T Y $ and the variance matrix is $ \V \hat{\beta}_{\text{ols}}= \sigma^2 (X^t X)^{-1} $ . "Doubling the data" means that $Y$ is replaced by $\begin{pmatrix} Y \\ Y \end{pmatrix}$ and $X$ is replaced by $\begin{pmatrix} X \\ X \end{pmatrix}$ . The ordinary least squares estimator then becomes $$
\left(\begin{pmatrix}X \\ X \end{pmatrix}^T \begin{pmatrix} X \\ X \end{pmatrix} \right )^{-1} \begin{pmatrix} X \\ X \end{pmatrix}^T \begin{pmatrix} Y \\ Y \end{pmatrix} = \\
(X^T X + X^T X)^{-1} (X^T Y + X^T Y) = (2 X^T X)^{-1} 2 X^T Y = \\
\hat{\beta}_{\text{ols}}
$$ so the calculated estimator doesn't change at all. But the calculated variance matrix becomes wrong: Using the same kind of algebra as above, we get the variance matrix $\frac{\sigma^2}{2}(X^T X)^{-1}$ , half of the correct value. A consequence is that confidence intervals will shrink with a factor of $\frac{1}{\sqrt{2}}$ . The reason is that we have calculated as if we still have iid data, which is untrue: the pair of doubled values obviously have a correlation equal to $1.0$ . If we take this into account and use weighted least squares correctly, we will find the correct variance matrix. From this, more consequences of the doubling will be easy to find as an exercise, for instance, the value of R-squared will not change. | {
"source": [
"https://stats.stackexchange.com/questions/216003",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68473/"
]
} |
217,374 | Reading about the true meaning of 95% confidence ellipse, I tend to come across 2 explanations : The ellipse that contains 95% of the data Not the above, but the ellipse that explains the variance of the data. I am not sure I understand correctly but they seem to mean that if a new data point coming in, there is a 95% chance that the new variance will stay in the ellipse. Can you shed some light? | Actually, neither explanation is correct. A confidence ellipse has to do with unobserved population parameters , like the true population mean of your bivariate distribution. A 95% confidence ellipse for this mean is really an algorithm with the following property: if you were to replicate your sampling from the underlying distribution many times and each time calculate a confidence ellipse, then 95% of the ellipses so constructed would contain the underlying mean. (Note that each sample would of course yield a different ellipse.) Thus, a confidence ellipse will usually not contain 95% of the observations. In fact, as the number of observations increases, the mean will usually be better and better estimated, leading to smaller and smaller confidence ellipses, which in turn contain a smaller and smaller proportion of the actual data. (Unfortunately, some people calculate the smallest ellipse that contains 95% of their data, reminiscent of a quantile, which by itself is quite OK... but then go on to call this "quantile ellipse" a "confidence ellipse", which, as you see, leads to confusion.) The variance of the underlying population relates to the confidence ellipse. High variance will mean that the data are all over the place, so the mean is not well estimated, so the confidence ellipse will be larger than if the variance were smaller. Of course, we can calculate confidence ellipses also for any other population parameter we may wish to estimate. Or we could look at other confidence regions than ellipses, especially if we don't know the estimated parameter to be (asymptotically) normally distributed. The one-dimensional analogue of the confidence ellipse is the confidence-interval , and browsing through previous questions in this tag is helpful. Our current top-voted question in this tag is particularly nice: Why does a 95% CI not imply a 95% chance of containing the mean? Most of the discussion there holds just as well for higher dimensional analogues of the one-dimensional confidence interval. | {
"source": [
"https://stats.stackexchange.com/questions/217374",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/117323/"
]
} |
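As a rough numerical companion to the answer above, here is the one-dimensional analogue in R (a confidence interval for the mean): coverage of the true mean stays near 95%, while the fraction of raw observations inside the interval shrinks as $n$ grows. All settings below are illustrative.
set.seed(1)
coverage <- function(n, reps = 5000) {
  res <- replicate(reps, {
    x  <- rnorm(n)                                   # true mean is 0
    ci <- mean(x) + c(-1, 1) * qt(0.975, n - 1) * sd(x) / sqrt(n)
    c(mean_covered = ci[1] < 0 & 0 < ci[2],
      frac_data_inside = mean(x > ci[1] & x < ci[2]))
  })
  rowMeans(res)
}
coverage(10)
coverage(1000)   # mean coverage ~0.95 in both cases; data fraction much smaller here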
217,995 | I've read a lot about PCA, including various tutorials and questions (such as this one , this one , this one , and this one ). The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. When I first read that, I immediately thought of something like linear regression; maybe you can solve it using gradient descent if needed. However, then my mind was blown when I read that the optimization problem is solved by using linear algebra and finding eigenvectors and eigenvalues. I simply do not understand how this use of linear algebra comes into play. So my question is: How can PCA turn from a geometric optimization problem to a linear algebra problem? Can someone provide an intuitive explanation? I am not looking for an answer like this one that says "When you solve the mathematical problem of PCA, it ends up being equivalent to finding the eigenvalues and eigenvectors of the covariance matrix." Please explain why eigenvectors come out to be the principal components and why the eigenvalues come out to be variance of the data projected onto them I am a software engineer and not a mathematician, by the way. Note: the figure above was taken and modified from this PCA tutorial . | Problem statement The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data. That's right. I explain the connection between these two formulations in my answer here (without math) or here (with math). Let's take the second formulation: PCA is trying the find the direction such that the projection of the data on it has the highest possible variance. This direction is, by definition, called the first principal direction. We can formalize it as follows: given the covariance matrix $\mathbf C$ , we are looking for a vector $\mathbf w$ having unit length, $\|\mathbf w\|=1$ , such that $\mathbf w^\top \mathbf{Cw}$ is maximal. (Just in case this is not clear: if $\mathbf X$ is the centered data matrix, then the projection is given by $\mathbf{Xw}$ and its variance is $\frac{1}{n-1}(\mathbf{Xw})^\top \cdot \mathbf{Xw} = \mathbf w^\top\cdot (\frac{1}{n-1}\mathbf X^\top\mathbf X)\cdot \mathbf w = \mathbf w^\top \mathbf{Cw}$ .) On the other hand, an eigenvector of $\mathbf C$ is, by definition, any vector $\mathbf v$ such that $\mathbf{Cv}=\lambda \mathbf v$ . It turns out that the first principal direction is given by the eigenvector with the largest eigenvalue. This is a nontrivial and surprising statement. Proofs If one opens any book or tutorial on PCA, one can find there the following almost one-line proof of the statement above. We want to maximize $\mathbf w^\top \mathbf{Cw}$ under the constraint that $\|\mathbf w\|=\mathbf w^\top \mathbf w=1$ ; this can be done introducing a Lagrange multiplier and maximizing $\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1)$ ; differentiating, we obtain $\mathbf{Cw}-\lambda\mathbf w=0$ , which is the eigenvector equation. We see that $\lambda$ has in fact to be the largest eigenvalue by substituting this solution into the objective function, which gives $\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1) = \mathbf w^\top \mathbf{Cw} = \lambda\mathbf w^\top \mathbf{w} = \lambda$ . 
By virtue of the fact that this objective function should be maximized, $\lambda$ must be the largest eigenvalue, QED. This tends to be not very intuitive for most people. A better proof (see e.g. this neat answer by @cardinal ) says that because $\mathbf C$ is symmetric matrix, it is diagonal in its eigenvector basis. (This is actually called spectral theorem .) So we can choose an orthogonal basis, namely the one given by the eigenvectors, where $\mathbf C$ is diagonal and has eigenvalues $\lambda_i$ on the diagonal. In that basis, $\mathbf w^\top \mathbf{C w}$ simplifies to $\sum \lambda_i w_i^2$ , or in other words the variance is given by the weighted sum of the eigenvalues. It is almost immediate that to maximize this expression one should simply take $\mathbf w = (1,0,0,\ldots, 0)$ , i.e. the first eigenvector, yielding variance $\lambda_1$ (indeed, deviating from this solution and "trading" parts of the largest eigenvalue for the parts of smaller ones will only lead to smaller overall variance). Note that the value of $\mathbf w^\top \mathbf{C w}$ does not depend on the basis! Changing to the eigenvector basis amounts to a rotation, so in 2D one can imagine simply rotating a piece of paper with the scatterplot; obviously this cannot change any variances. I think this is a very intuitive and a very useful argument, but it relies on the spectral theorem. So the real issue here I think is: what is the intuition behind the spectral theorem? Spectral theorem Take a symmetric matrix $\mathbf C$ . Take its eigenvector $\mathbf w_1$ with the largest eigenvalue $\lambda_1$ . Make this eigenvector the first basis vector and choose other basis vectors randomly (such that all of them are orthonormal). How will $\mathbf C$ look in this basis? It will have $\lambda_1$ in the top-left corner, because $\mathbf w_1=(1,0,0\ldots 0)$ in this basis and $\mathbf {Cw}_1=(C_{11}, C_{21}, \ldots C_{p1})$ has to be equal to $\lambda_1\mathbf w_1 = (\lambda_1,0,0 \ldots 0)$ . By the same argument it will have zeros in the first column under the $\lambda_1$ . But because it is symmetric, it will have zeros in the first row after $\lambda_1$ as well. So it will look like that: $$\mathbf C=\begin{pmatrix}\lambda_1 & 0 & \ldots & 0 \\ 0 & & & \\ \vdots & & & \\ 0 & & & \end{pmatrix},$$ where empty space means that there is a block of some elements there. Because the matrix is symmetric, this block will be symmetric too. So we can apply exactly the same argument to it, effectively using the second eigenvector as the second basis vector, and getting $\lambda_1$ and $\lambda_2$ on the diagonal. This can continue until $\mathbf C$ is diagonal. That is essentially the spectral theorem. (Note how it works only because $\mathbf C$ is symmetric.) Here is a more abstract reformulation of exactly the same argument. We know that $\mathbf{Cw}_1 = \lambda_1 \mathbf w_1$ , so the first eigenvector defines a 1-dimensional subspace where $\mathbf C$ acts as a scalar multiplication. Let us now take any vector $\mathbf v$ orthogonal to $\mathbf w_1$ . Then it is almost immediate that $\mathbf {Cv}$ is also orthogonal to $\mathbf w_1$ . Indeed: $$ \mathbf w_1^\top \mathbf{Cv} = (\mathbf w_1^\top \mathbf{Cv})^\top = \mathbf v^\top \mathbf C^\top \mathbf w_1 = \mathbf v^\top \mathbf {Cw}_1=\lambda_1 \mathbf v^\top \mathbf w_1 = \lambda_1\cdot 0 = 0.$$ This means that $\mathbf C$ acts on the whole remaining subspace orthogonal to $\mathbf w_1$ such that it stays separate from $\mathbf w_1$ . 
This is the crucial property of symmetric matrices. So we can find the largest eigenvector there, $\mathbf w_2$ , and proceed in the same manner, eventually constructing an orthonormal basis of eigenvectors. | {
"source": [
"https://stats.stackexchange.com/questions/217995",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29072/"
]
} |
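The claim above, that the leading eigenvector of the covariance matrix maximizes the projected variance $\mathbf w^\top \mathbf{Cw}$ over unit vectors, can be checked numerically with a short R sketch (the simulated data are arbitrary):
set.seed(1)
X <- matrix(rnorm(500 * 5), ncol = 5) %*% matrix(runif(25), 5, 5)  # correlated data
C <- cov(X)
proj_var <- function(w) { w <- w / sqrt(sum(w^2)); drop(t(w) %*% C %*% w) }
e <- eigen(C)
proj_var(e$vectors[, 1])                  # variance along the first eigenvector ...
e$values[1]                               # ... equals the largest eigenvalue
max(replicate(1e4, proj_var(rnorm(5))))   # no random unit direction does better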
218,542 | While the choice of activation functions for the hidden layer is quite clear (mostly sigmoid or tanh), I wonder how to decide on the activation function for the output layer. Common choices are linear functions, sigmoid functions and softmax functions. However, when should I use which one? | Regression: linear (because values are unbounded) Classification: softmax (simple sigmoid works too but softmax works better) Use simple sigmoid only if your output admits multiple "true" answers, for instance, a network that checks for the presence of various objects in an image. In other words, the output is not a probability distribution (does not need to sum to 1). | {
"source": [
"https://stats.stackexchange.com/questions/218542",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17025/"
]
} |
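A tiny R illustration of the contrast drawn above: softmax outputs form a probability distribution, whereas independent sigmoid outputs need not sum to one (the logits below are arbitrary).
z <- c(2.0, 1.0, 0.1)             # raw network outputs (logits)
softmax <- exp(z) / sum(exp(z))
sigmoid <- 1 / (1 + exp(-z))
sum(softmax)                      # exactly 1
sum(sigmoid)                      # generally not 1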
219,241 | I would ask a question related to this one . I found an example of writing custom loss function for xgboost here : loglossobj <- function(preds, dtrain) {
# dtrain is the internal format of the training data
# We extract the labels from the training data
labels <- getinfo(dtrain, "label")
# We compute the 1st and 2nd gradient, as grad and hess
preds <- 1/(1 + exp(-preds))
grad <- preds - labels
hess <- preds * (1 - preds)
# Return the result as a list
return(list(grad = grad, hess = hess))
} Logistic loss function is $$log(1+e^{-yP})$$ where $P$ is log-odds and $y$ is labels (0 or 1). My question is: how we can get gradient (first derivative) simply equal to difference between true values and predicted probabilities (calculated from log-odds as preds <- 1/(1 + exp(-preds)) )? | My answer for my question: yes, it can be shown that gradient for logistic loss is equal to difference between true values and predicted probabilities. Brief explanation was found here . First, logistic loss is just negative log-likelihood, so we can start with expression for log-likelihood ( p. 74 - this expression is log-likelihood itself, not negative log-likelihood): $$L=y_{i}\cdot log(p_{i})+(1-y_{i})\cdot log(1-p_{i})$$ $p_{i}$ is logistic function: $p_{i}=\frac{1}{1+e^{-\hat{y}_{i}}}$, where $\hat{y}_{i}$ is predicted values before logistic transformation (i.e., log-odds): $$L=y_{i}\cdot log\left(\frac{1}{1+e^{-\hat{y}_{i}}}\right)+(1-y_{i})\cdot log\left(\frac{e^{-\hat{y}_{i}}}{1+e^{-\hat{y}_{i}}}\right)$$ First derivative obtained using Wolfram Alpha: $${L}'=\frac{y_{i}-(1-y_{i})\cdot e^{\hat{y}_{i}}}{1+e^{\hat{y}_{i}}}$$ After multiplying by $\frac{e^{-\hat{y}_{i}}}{e^{-\hat{y}_{i}}}$: $${L}'=\frac{y_{i}\cdot e^{-\hat{y}_{i}}+y_{i}-1}{1+e^{-\hat{y}_{i}}}=
\frac{y_{i}\cdot (1+e^{-\hat{y}_{i}})}{1+e^{-\hat{y}_{i}}}-\frac{1}{1+e^{-\hat{y}_{i}}}=y_{i}-p_{i}$$ After changing the sign, we obtain the expression for the gradient of the logistic loss function: $$p_{i}-y_{i}$$ | {
"source": [
"https://stats.stackexchange.com/questions/219241",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/81185/"
]
} |
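The derivation above can also be checked numerically: a central-difference approximation of the logistic loss gradient with respect to the log-odds matches $p - y$. The values below are arbitrary.
logloss <- function(yhat, y) {               # yhat = log-odds, y = 0/1 label
  p <- 1 / (1 + exp(-yhat))
  -(y * log(p) + (1 - y) * log(1 - p))
}
yhat <- 0.7; y <- 1; eps <- 1e-6
(logloss(yhat + eps, y) - logloss(yhat - eps, y)) / (2 * eps)   # numerical gradient
1 / (1 + exp(-yhat)) - y                                        # analytic p - y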
219,471 | I am referring to practices that still maintain their presence, even though the problems (usually computational) they were designed to cope with have been mostly solved. For example, Yates' continuity correction was invented to approximate Fisher's exact test with $\chi^2$ test, but it is no longer practical since software can now handle Fisher's test even with large samples (I know this may not be a good example of "maintaining its presence", since textbooks, like Agresti's Categorical Data Analysis , often acknowledge that Yates' correction "is no longer needed"). What are some other examples of such practices? | It's strongly arguable that the use of threshold significance levels such as $P = 0.05$ or $P = 0.01$ is a historical hangover from a period when most researchers depended on previously calculated tables of critical values. Now good software will give $P$-values directly. Indeed, good software lets you customise your analysis and not depend on textbook tests. This is contentious if only because some significance testing problems do require decisions, as in quality control where accepting or rejecting a batch is the decision needed, followed by an action either way. But even there the thresholds to be used should grow out of a risk analysis, not depend on tradition. And often in the sciences, analysis of quantitative indications is more appropriate than decisions: thinking quantitatively implies attention to sizes of $P$-values and not just to a crude dichotomy, significant versus not significant. I will flag that I here touch on an intricate and controversial issue which is the focus of entire books and probably thousands of papers, but it seems a fair example for this thread. | {
"source": [
"https://stats.stackexchange.com/questions/219471",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29905/"
]
} |
219,579 | I remember sitting in stats courses as an undergrad hearing about why extrapolation was a bad idea. Furthermore, there are a variety of sources online which comment on this. There's also a mention of it here . Can anyone help me understand why extrapolation is a bad idea?
If it is, how is it that forecasting techniques aren't statistically invalid? | A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger associated with extrapolation is illustrated in the following figure. The regression model is “by construction” an interpolation model, and should not be used for extrapolation, unless this is properly justified. | {
"source": [
"https://stats.stackexchange.com/questions/219579",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/115244/"
]
} |
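A small R sketch of the danger described above, with invented numbers: a straight line fitted to gently curved data has modest error inside the observed range of the predictor but is wildly off outside it.
set.seed(1)
x <- runif(50, 0, 10)
y <- 5 + 2 * x - 0.3 * x^2 + rnorm(50)              # curved truth
fit <- lm(y ~ x)                                     # linear fit to curved data
predict(fit, newdata = data.frame(x = c(5, 20)))     # x = 5 is inside the range, x = 20 outside
5 + 2 * c(5, 20) - 0.3 * c(5, 20)^2                  # true values: 7.5 and -75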
220,907 | Recently I read that a recurrent neural network can approximate any algorithm. So my question is: what does this exactly mean and can you give me a reference where this is proved? | Background We first have to go over some concepts from the theory of computation. An algorithm is a procedure for calculating a function. Given the input, the algorithm must produce the correct output in a finite number of steps and then terminate. To say that a function is computable means that there exists an algorithm for calculating it. Among the infinite set of all functions, most are not computable. Turing machines are a mathematical model that formalizes the notion of computation. Other equivalent models exist, but Turing machines are the standard 'reference model'. According to the Church-Turing thesis , any algorithm can be implemented by a Turing machine, and all computable functions can be computed thusly. Any particular instance of a Turing machine only computes a particular function. But, there exist a special class of Turing machines called universal Turing machines that can simulate any other Turing machine for any input. They do this by taking a description of the machine to be simulated (and its input) as part of their own input. Any particular instance of a Universal Turing machine can therefore compute any computable function (i.e. can implement any algorithm). Any system that shares this ability is called Turing complete . One way to prove that a system is Turing complete is to show that it can simulate a universal Turing machine. Many systems have been shown to be Turing complete (e.g. most programming languages, certain cellular automata , and quantum mechanics ). Recurrent neural networks The following paper shows that, for any computable function, there exists a finite recurrent neural network (RNN) that can compute it. Furthermore, there exist finite RNNs that are Turing complete, and can therefore implement any algorithm. Siegelmann and Sontag (1992) . On the computational power of neural nets They use networks containing a finite number of recurrently connected units, which receive external inputs at each time point. The state of each unit is given by a weighted sum of its inputs (plus a bias), run through a nonlinear activation function. The activation function is a saturated linear function, which is a piecewise linear approximation of a sigmoid. The weights and biases are fixed, so no learning occurs. The network performs a mapping from a binary input sequence to a binary output sequence. There are two external inputs to the network, which are fed to all units: a 'data line' and a 'validation line'. The data line contains the input sequence of zeros and ones, then zero after the input sequence is finished. The validation line lets the network know when the input sequence is happening. It contains one for the duration of the input sequence, then zero after it has finished. One unit is considered to be the 'output unit'. It outputs zeros for some arbitrary delay, then the output sequence of zeros and ones, then zero after the output sequence has finished. Another unit is considered to be the 'validation unit', which let's us know when the output sequence is happening. It outputs one while the output sequence is happening, and zero otherwise. Although these RNNs map binary input sequences to binary output sequences, we might be interested in functions defined on various other mathematical objects (other types of numbers, vectors, images, graphs, etc.). 
But, for any computable function, these other types of objects can be encoded as binary sequences (e.g. see here for a description of encoding other objects using natural numbers, which can in turn be represented in binary). Result They show that, for every computable function, there exists a finite RNN (of the form described above) that can compute it. They do this by showing that it's possible to use a RNN to explicitly simulate a pushdown automaton with two stacks. This is another model that's computationally equivalent to a Turing machine. Any computable function can be computed by a Turing machine. Any Turing machine can be simulated by a pushdown automaton with two stacks. Any pushdown automaton with two stacks can be simulated by a RNN. Therefore, any computable function can be computed by a RNN. Furthermore, because some Turing machines are universal, the RNNs that simulate them are Turing complete, and can therefore implement any algorithm. In particular, they show that there exist Turing complete RNNs with 1058 or fewer units. Other consequences An interesting consequence of the simulation results is that certain questions about the behavior of RNNs are undecidable. This means that there exists no algorithm that can answer them for arbitrary RNNs (although they may be answerable in the case of particular RNNs). For example, the question of whether a given unit ever takes the value 0 is undecidable; if one could answer this question in general, it would be possible to solve the halting problem for Turing machines, which is undecidable. Computational power In the above paper, all network parameters and states are rational numbers. This is important because it constrains the power of the RNNs, and makes the resulting networks more realistic. The reason is that the rationals are computable numbers , which means that there exists an algorithm for calculating them to arbitrary precision. Most real numbers are uncomputable, and therefore inaccessible--even the most powerful Turing machine can't represent them, and many people doubt that they could even be represented in the physical world. When we deal with 'real numbers' on digital computers, we're accessing an even smaller subset (e.g. 64 bit floating point numbers). Representing arbitrary real numbers would require infinite information. The paper says that giving the network access to real numbers would boost the computational power even further, beyond Turing machines. Siegelmann wrote a number of other papers exploring this 'super-Turing' capability. However, it's important to note that these are mathematical models, and the results don't mean that such a machine could actually exist in the physical world. There are good reasons to think that it couldn't, although it's an open question. | {
"source": [
"https://stats.stackexchange.com/questions/220907",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/121446/"
]
} |
221,358 | I'll explain my problem with an example. Suppose you want to predict the income of an individual given some attributes: {Age, Gender, Country, Region, City}. You have a training dataset like so train <- data.frame(CountryID=c(1,1,1,1, 2,2,2,2, 3,3,3,3),
RegionID=c(1,1,1,2, 3,3,4,4, 5,5,5,5),
CityID=c(1,1,2,3, 4,5,6,6, 7,7,7,8),
Age=c(23,48,62,63, 25,41,45,19, 37,41,31,50),
Gender=factor(c("M","F","M","F", "M","F","M","F", "F","F","F","M")),
Income=c(31,42,71,65, 50,51,101,38, 47,50,55,23))
train
CountryID RegionID CityID Age Gender Income
1 1 1 1 23 M 31
2 1 1 1 48 F 42
3 1 1 2 62 M 71
4 1 2 3 63 F 65
5 2 3 4 25 M 50
6 2 3 5 41 F 51
7 2 4 6 45 M 101
8 2 4 6 19 F 38
9 3 5 7 37 F 47
10 3 5 7 41 F 50
11 3 5 7 31 F 55
12 3 5 8 50 M 23 Now suppose I want to predict the income of a new person who lives in City 7. My training set has a whopping 3 samples with people in City 7 (assume this is a lot) so I can probably use the average income in City 7 to predict the income of this new individual. Now suppose I want to predict the income of a new person who lives in City 2. My training set only has 1 sample with City 2 so the average income in City 2 probably isn't a reliable predictor. But I can probably use the average income in Region 1. Extrapolating this idea a bit, I can transform my training dataset as Age Gender CountrySamples CountryIncome RegionSamples RegionIncome CitySamples CityIncome
1: 23 M 4 52.25 3 48.00 2 36.5000
2: 48 F 4 52.25 3 48.00 2 36.5000
3: 62 M 4 52.25 3 48.00 1 71.0000
4: 63 F 4 52.25 1 65.00 1 65.0000
5: 25 M 4 60.00 2 50.50 1 50.0000
6: 41 F 4 60.00 2 50.50 1 51.0000
7: 45 M 4 60.00 2 69.50 2 69.5000
8: 19 F 4 60.00 2 69.50 2 69.5000
9: 37 F 4 43.75 4 43.75 3 50.6667
10: 41 F 4 43.75 4 43.75 3 50.6667
11: 31 F 4 43.75 4 43.75 3 50.6667
12: 50 M 4 43.75 4 43.75 1 23.0000 So, the goal is to somehow combine the average CityIncome, RegionIncome, and CountryIncome while using the number of training samples for each to give a weight/credibility to each value. (Ideally, still including information from Age and Gender.) What are tips for solving this type of problem? I prefer to use tree based models like random forest or gradient boosting, but I'm having trouble getting these to perform well. UPDATE For anyone willing to take a stab at this problem, I've generated sample data to test your proposed solution here . | I have been thinking about this problem for a while, with inspirations from the following questions on this site. How can I include random effects into a randomForest? Random forest on grouped data Random Forests / adaboost in panel regression setting Random forest for binary panel data Modelling clustered data using boosted regression trees Let me first introduce the mixed-effects models for hierarchical/nested data and start from a simple two-level model (samples nested within cities). For the $j$-th sample in the $i$-th city, we write the outcome $y_{ij}$ as a function of covariates $\boldsymbol x_{ij}$ (a list of variables including gender and age),
$$ y_{ij}=f(\boldsymbol x_{ij})+{u_i}+\epsilon_{ij},$$
where $u_i$ is the random intercept for each city and $j=1,\ldots,n_i$. If we assume $u_i$ and $\epsilon_{ij}$ follow normal distributions with mean 0 and variances $\sigma^2_u$ and $\sigma^2$, the empirical Bayesian (EB) estimate of $u_i$ is $$\hat{u}_i=\frac{\sigma^2_u}{\sigma^2_u+\sigma^2/n_i}(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.})),$$ where $\bar{\mathbf{y}}_{i.}=\frac{1}{n_i}\sum_{j=1}^{n_i}y_{ij}$ and $f(\bar{\boldsymbol x}_{i.})=\frac{1}{n_i}\sum_{j=1}^{n_i}f(\boldsymbol x_{ij}).$ If we treat $(\bar{\mathbf{y}}_{i.}-f(\bar{\boldsymbol x}_{i.}))$ as the OLS (ordinary least squares) estimate of $u_i$, then the EB estimate is a weighted sum of 0 and the OLS estimate, and the weight is an increasing function of the sample size $n_i$. The final prediction is $$\hat{f}(\boldsymbol x_{ij})+\hat{u}_{i},$$ where $\hat{f}(\boldsymbol x_{ij})$ is the estimate of the fixed effect from linear regression or a machine learning method like random forest. This can easily be extended to any number of levels, say samples nested in cities, then regions, then countries. Other than the tree-based methods, there is a method based on SVM. For a random-forest-based method, you can try MixRF() in our R package MixRF on CRAN. | {
"source": [
"https://stats.stackexchange.com/questions/221358",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31542/"
]
} |
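A minimal sketch of the random-intercept idea above in R, using the toy train data frame from the question (far too small for a serious fit, so this is purely illustrative of the syntax):
library(lme4)   # assumed available; install.packages("lme4") if not
fit <- lmer(Income ~ Age + Gender + (1 | CountryID/RegionID/CityID), data = train)
ranef(fit)      # empirical-Bayes (shrunken) intercepts at the country, region and city levels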
221,402 | I'm teaching myself about reinforcement learning, and trying to understand the concept of discounted reward. So the reward is necessary to tell the system which state-action pairs are good, and which are bad. But what I don't understand is why the discounted reward is necessary. Why should it matter whether a good state is reached soon rather than later? I do understand that this is relevant in some specific cases. For example, if you are using reinforcement learning to trade in the stock market, it is more beneficial to make profit sooner rather than later. This is because having that money now allows you to do things with that money now, which is more desirable than doing things with that money later. But in most cases, I don't see why the discounting is useful. For example, let's say you wanted a robot to learn how to navigate around a room to reach the other side, where there are penalties if it collides with an obstacle. If there was no discount factor, then it would learn to reach the other side perfectly, without colliding with any obstacles. It may take a long time to get there, but it will get there eventually. But if we give a discount to the reward, then the robot will be encouraged to reach the other side of the room quickly, even if it has to collide with objects along the way. This is clearly not a desirable outcome. Sure, you want the robot to get to the other side quickly, but not if this means that it has to collide with objects along the way. So my intuition is that any form of discount factor, will actually lead to a sub-optimal solution. And the choice of the discount factor often seems arbitrary -- many methods I have seen simply set it to 0.9. This appears to be very naive to me, and seems to give an arbitrary trade-off between the optimum solution and the fastest solution, whereas in reality this trade-off is very important. Please can somebody help me to understand all this? Thank you :) | TL;DR. The fact that the discount rate is bounded to be smaller than 1 is a mathematical trick to make an infinite sum finite. This helps proving the convergence of certain algorithms. In practice, the discount factor could be used to model the fact that the decision maker is uncertain about if in the next decision instant the world (e.g., environment / game / process ) is going to end. For example: If the decision maker is a robot, the discount factor could be the
probability that the robot is switched off in the next time instant
(the world ends in the previous terminology). That is the reason why the robot is
short sighted and does not optimize the sum reward but the discounted sum reward. Discount factor smaller than 1 (In Detail) In order to answer more precisely, why the discount rate has to be smaller than one I will first introduce the Markov Decision Processes (MDPs). Reinforcement learning techniques can be used to solve MDPs. An MDP provides a mathematical framework for modeling decision-making situations where outcomes are partly random and partly under the control of the decision maker. An MDP is defined via a state space $\mathcal{S}$ , an action space $\mathcal{A}$ , a function of transition probabilities between states (conditioned to the action taken by the decision maker), and a reward function. In its basic setting, the decision maker takes and action, and gets a reward from the environment, and the environment changes its state. Then the decision maker senses the state of the environment, takes an action, gets a reward, and so on so forth. The state transitions are probabilistic and depend solely on the actual state and the action taken by the decision maker. The reward obtained by the decision maker depends on the action taken, and on both the original and the new state of the environment. A reward $R_{a_i}(s_j,s_k)$ is obtained when taking action $a_i$ in state $s_j$ and the environment/system changes to state $s_k$ after the decision maker takes action $a_i$ . The decision maker follows a policy, $\pi$ $\pi(\cdot):\mathcal{S}\rightarrow\mathcal{A}$ , that for each state $s_j \in \mathcal{S}$ takes an action $a_i \in \mathcal{A}$ . So that the policy is what tells the decision maker which actions to take in each state. The policy $\pi$ may be randomized as well but it does not matter for now. The objective is to find a policy $\pi$ such that \begin{equation} \label{eq:1}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\} (1),
\end{equation} where $\beta$ is the discount factor and $\beta<1$. Note that the optimization problem above has an infinite time horizon ($T\rightarrow \infty$), and the objective is to maximize the $discounted$ sum reward (the reward $R$ is multiplied by $\beta^n$).
This is usually called an MDP problem with an infinite horizon discounted reward criterion. The problem is called discounted because $\beta<1$. If it were not a discounted problem ($\beta=1$), the sum would not converge: all policies that obtain on average a positive reward at each time instant would sum up to infinity. That would be an infinite horizon sum reward criterion, which is not a good optimization criterion. Here is a toy example to show you what I mean: assume that there are only two possible actions $a=\{0,1\}$ and that the reward function $R$ is equal to $1$ if $a=1$, and $0$ if $a=0$ (the reward does not depend on the state). It is clear that the policy that gets more reward is to always take action $a=1$ and never action $a=0$.
I'll call this policy $\pi^*$. I'll compare $\pi^*$ to another policy $\pi'$ that takes action $a=1$ with small probability $\alpha \ll 1$, and action $a=0$ otherwise. In the infinite horizon discounted reward criterion, equation (1) becomes $\frac{1}{1-\beta}$ (the sum of a geometric series) for policy $\pi^*$, while for policy $\pi'$ equation (1) becomes $\frac{\alpha}{1-\beta}$. Since $\frac{1}{1-\beta} > \frac{\alpha}{1-\beta}$, we say that $\pi^*$ is a better policy than $\pi'$. Actually $\pi^*$ is the optimal policy. In the infinite horizon sum reward criterion ($\beta=1$), equation (1) does not converge for any of the policies (it sums up to infinity). So although policy $\pi^*$ achieves higher rewards than $\pi'$, both policies are equal according to this criterion. That is one reason why the infinite horizon sum reward criterion is not useful. As I mentioned before, $\beta<1$ does the trick of making the sum in equation (1) converge. Other optimality criteria There are other optimality criteria that do not impose $\beta<1$: In the finite horizon criterion, the objective is to maximize the discounted reward up to the time horizon $T$ \begin{equation} \label{eq:2}
\max_{\pi:S(n)\rightarrow a_i} E \left\{ \sum_{n=1}^T \beta^n R_{x_i}(S(n),S(n+1)) \right\},
\end{equation} for $\beta \leq 1$ and $T$ finite. In the infinite horizon average reward criteria the objective is \begin{equation}
\max_{\pi:S(n)\rightarrow a_i} \lim_{T\rightarrow \infty } E \left\{ \sum_{n=1}^T \frac{1}{T} R_{x_i}(S(n),S(n+1)) \right\},
\end{equation} End note Depending on the optimality criterion, one would use a different algorithm to find the optimal policy. For instance, the optimal policies of the finite horizon problems depend on both the state and the actual time instant. Most Reinforcement Learning algorithms (such as SARSA or Q-learning) converge to the optimal policy only for the discounted reward infinite horizon criterion (the same happens for the Dynamic Programming algorithms). For the average reward criterion there is no algorithm that has been shown to converge to the optimal policy; however, one can use R-learning, which has good performance albeit not good theoretical convergence. | {
"source": [
"https://stats.stackexchange.com/questions/221402",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72307/"
]
} |
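A throwaway R computation of the toy comparison above: with $\beta<1$ both discounted sums are finite and can be ranked, whereas with $\beta=1$ the undiscounted sum simply grows with the horizon. All numbers are illustrative.
beta <- 0.9; alpha <- 0.1
discounted <- function(r, beta, T = 1e5) sum(beta^(1:T) * r)
discounted(1, beta)        # policy always taking a = 1: finite (beta / (1 - beta) = 9)
discounted(alpha, beta)    # inferior policy: finite and smaller
sum(rep(1, 1e5))           # with beta = 1 the sum keeps growing as T increases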
221,513 | I've recently become interested in LSTMs and I was surprised to learn that the weights are shared across time. I know that if you share the weights across time, then your input time sequences can be a variable length. With shared weights you have many fewer parameters to train. From my understanding, the reason one would turn to an LSTM vs. some other learning method is because you believe there is some sort of temporal/sequential structure/dependence in your data that you would like to learn. If you sacrifice the variable length ‘luxury’, and accept long computation time, wouldn’t an RNN/LSTM without shared weights (i.e. for every time step you have different weights) perform way better or is there something I’m missing? | The accepted answer focuses on the practical side of the question: it would require a lot of resources, if there parameters are not shared. However, the decision to share parameters in an RNN has been made when any serious computation was a problem (1980s according to wiki ), so I believe it wasn't the main argument (though still valid). There are pure theoretical reasons for parameter sharing: It helps in applying the model to examples of different lengths. While reading a sequence, if RNN model uses different parameters for each step during training, it won't generalize to unseen sequences of different lengths. Oftentimes, the sequences operate according to the same rules across the sequence.
For instance, in NLP: "On Monday it was snowing" "It was snowing on Monday" ...these two sentences mean the same thing, though the details are in different parts of the sequence. Parameter sharing reflects the fact that we are performing the same task at each step, as a result, we don't have to relearn the rules at each point in the sentence. LSTM is no different in this sense, hence it uses shared parameters as well. | {
"source": [
"https://stats.stackexchange.com/questions/221513",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/121817/"
]
} |
221,936 | I know that correlation does not imply causality but does an absence of correlation imply absence of causality? | does an absence of correlation imply absence of causality? No. Any controlled system is a counterexample. Without causal relationships control is clearly impossible, but successful control means - roughly speaking - that some quantity is being maintained constant, which implies it won't be correlated with anything, including whatever things are causing it to be constant. So in this situation, concluding no causal relationship from lack of correlation would be a mistake. Here's a somewhat topical example . | {
"source": [
"https://stats.stackexchange.com/questions/221936",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/91744/"
]
} |
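The controlled-system point above is easy to simulate in R: below, x has a direct causal effect on y, but a controller almost exactly cancels it, so the observed correlation is essentially zero (a thermostat-style toy example with made-up noise levels).
set.seed(1)
n <- 1e5
x <- rnorm(n)                      # disturbance, e.g. outdoor temperature
u <- -x + rnorm(n, sd = 0.01)      # control action chosen to cancel the disturbance
y <- x + u + rnorm(n, sd = 0.01)   # controlled quantity, e.g. indoor temperature
cor(x, y)                          # approximately zero despite x causally driving y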
222,179 | Imagine a standard machine-learning scenario: You are confronted with a large multivariate dataset and you have a
pretty blurry understanding of it. What you need to do is to make
predictions about some variable based on what you have. As usual, you
clean the data, look at descriptive statistics, run some models,
cross-validate them etc., but after several attempts, going back and
forth and trying multiple models nothing seems to work and your
results are miserable. You can spend hours, days, or weeks on such a
problem... The question is: when to stop? How do you know that your data actually is hopeless and all the fancy models wouldn't do you any more good than predicting the average outcome for all cases or some other trivial solution? Of course, this is a forecastability issue, but as far as I know, it is hard to assess forecastability for multivariate data before trying something on it. Or am I wrong? Disclaimer: this question was inspired by this one When have I to stop looking for a model? that did not attract much attention. It would be nice to have detailed answer to such question for reference. | Forecastability You are right that this is a question of forecastability. There have been a few articles on forecastability in the IIF's practitioner-oriented journal Foresight . (Full disclosure: I'm an Associate Editor.) The problem is that forecastability is already hard to assess in "simple" cases. A few examples Suppose you have a time series like this but don't speak German: How would you model the large peak in April, and how would you include this information in any forecasts? Unless you knew that this time series is the sales of eggs in a Swiss supermarket chain, which peaks right before western calendar Easter , you would not have a chance. Plus, with Easter moving around the calendar by as much as six weeks, any forecasts that don't include the specific date of Easter (by assuming, say, that this was just some seasonal peak that would recur in a specific week next year) would probably be very off. Similarly, assume you have the blue line below and want to model whatever happened on 2010-02-28 so differently from "normal" patterns on 2010-02-27: Again, without knowing what happens when a whole city full of Canadians watches an Olympic ice hockey finals game on TV, you have no chance whatsoever to understand what happened here, and you won't be able to predict when something like this will recur. Finally, look at this: This is a time series of daily sales at a cash and carry store. (On the right, you have a simple table: 282 days had zero sales, 42 days saw sales of 1... and one day saw sales of 500.) I don't know what item it is. To this day, I don't know what happened on that one day with sales of 500. My best guess is that some customer pre-ordered a large amount of whatever product this was and collected it. Now, without knowing this, any forecast for this particular day will be far off. Conversely, assume that this happened right before Easter, and we have a dumb-smart algorithm that believes this could be an Easter effect (maybe these are eggs?) and happily forecasts 500 units for the next Easter. Oh my, could that go wrong. Summary In all cases, we see how forecastability can only be well understood once we have a sufficiently deep understanding of likely factors that influence our data. The problem is that unless we know these factors, we don't know that we may not know them. As per Donald Rumsfeld : [T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know. If Easter or Canadians' predilection for Hockey are unknown unknowns to us, we are stuck - and we don't even have a way forward, because we don't know what questions we need to ask. The only way of getting a handle on these is to gather domain knowledge. 
Conclusions I draw three conclusions from this: You always need to include domain knowledge in your modeling and prediction. Even with domain knowledge, you are not guaranteed to get enough information for your forecasts and predictions to be acceptable to the user. See that outlier above. If "your results are miserable", you may be hoping for more than you can achieve. If you are forecasting a fair coin toss, then there is no way to get above 50% accuracy. Don't trust external forecast accuracy benchmarks, either. The Bottom Line Here is how I would recommend building models - and noticing when to stop: Talk to someone with domain knowledge if you don't already have it yourself. Identify the main drivers of the data you want to forecast, including likely interactions, based on step 1. Build models iteratively, including drivers in decreasing order of strength as per step 2. Assess models using cross-validation or a holdout sample. If your prediction accuracy does not increase any further, either go back to step 1 (e.g., by identifying blatant mis-predictions you can't explain, and discussing these with the domain expert), or accept that you have reached the end of your models' capabilities. Time-boxing your analysis in advance helps. Note that I am not advocating trying different classes of models if your original model plateaus. Typically, if you started out with a reasonable model, using something more sophisticated will not yield a strong benefit and may simply be "overfitting on the test set". I have seen this often, and other people agree . | {
"source": [
"https://stats.stackexchange.com/questions/222179",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35989/"
]
} |
222,238 | I have just started reading about GPs and analogous to the regular Gaussian distribution it is characterized by a mean function and the covariance function or the kernel. I was at a talk and the speaker said that the mean function is usually quite uninteresting and all the inference effort is spent on estimating the correct covariance function. Can someone explain to me why that should be the case? | I think I know what the speaker was getting at. Personally I don't completely agree with her/him, and there's a lot of people who don't. But to be fair, there are also many who do :) First of all, note that specifying the covariance function (kernel) implies specifying a prior distribution over functions. Just by changing the kernel, the realizations of the Gaussian Process change drastically, from the very smooth, infinitely differentiable, functions generated by the Squared Exponential kernel to the "spiky", nondifferentiable functions corresponding to an Exponential kernel (or Matern kernel with $\nu=1/2$) Another way to see it is to write the predictive mean (the mean of the Gaussian Process predictions, obtained by conditioning the GP on the training points) in a test point $x^*$, in the simplest case of a zero mean function: $$y^*=\mathbf{k}^{*T}(K+\sigma^{2}I)^{-1}\mathbf{y}$$ where $\mathbf{k}^*$ is the vector of covariances between the test point $x^*$ and the training points $x_1,\ldots,x_n$, $K$ is the covariance matrix of the training points, $\sigma$ is the noise term (just set $\sigma=0$ if your lecture concerned noise-free predictions, i.e., Gaussian Process interpolation), and $\mathbf{y}=(y_1,\ldots,y_n)$ is the vector of observations in the training set. As you can see, even if the mean of the GP prior is zero, the predictive mean is not zero at all, and depending on the kernel and on the number of training points, it can be a very flexible model, able to learn extremely complex patterns. More generally, it's the kernel which defines the generalization properties of the GP. Some kernels have the universal approximation property , i.e., they are in principle capable to approximate any continuous function on a compact subset, to any prespecified maximum tolerance, given enough training points. Then, why should you care at all about the mean function? First of all, a simple mean function (a linear or orthogonal polynomial one) makes the model much more interpretable, and this advantage must not be underestimated for model as flexible (thus, complicated) as the GP. Secondly, in some way the zero mean (or, for what's worth, also the constant mean) GP kind of sucks at prediction far away from the training data. Many stationary kernels (except the periodic kernels) are such that $k(x_i-x^*) \to 0 $ for $\operatorname{dist}(x_i,x^*)\to\infty$. This convergence to 0 can happen surprisingly quickly, expecially with the Squared Exponential kernel, and particularly when a short correlation length is necessary to fit the training set well. Thus a GP with zero mean function will invariably predict $y^*\approx 0$ as soon as you get away from the training set. Now, this could make sense in your application: after all, it is often a bad idea to use a data-driven model to perform predictions away from the set of data points used to train the model. See here for many interesting and fun examples of why this can be a bad idea. 
In this respect, the zero mean GP, which always converges to 0 away from the training set, is safer than a model (such as, for example, a high-degree multivariate orthogonal polynomial model), which will happily shoot out insanely large predictions as soon as you get away from the training data. In other cases, however, you may want your model to have a certain asymptotic behavior, which is not to converge to a constant. Maybe physical considerations tell you that for $x^*$ sufficiently large, your model must become linear. In that case you want a linear mean function. In general, when the global properties of the model are of interest for your application, you have to pay attention to the choice of the mean function. When you are interested only in the local (close to the training points) behavior of your model, then a zero or constant mean GP may be more than enough. | {
"source": [
"https://stats.stackexchange.com/questions/222238",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36540/"
]
} |
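A bare-bones R version of the predictive-mean formula quoted in the answer above, with a squared exponential kernel and a zero mean function: close to the training inputs the prediction tracks the data, far away it decays back towards zero. Kernel settings and data are arbitrary.
sqexp <- function(a, b, ell = 1) exp(-0.5 * outer(a, b, "-")^2 / ell^2)
x  <- c(-2, -1, 0, 1, 2)                    # training inputs
y  <- sin(x) + 2                            # training targets, well above zero
xs <- c(0.5, 5, 10)                         # test inputs: near, far, very far
K  <- sqexp(x, x) + 1e-6 * diag(length(x))  # small jitter in place of the noise term
drop(sqexp(xs, x) %*% solve(K, y))          # ~2.5 near the data, ~0 far from it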
222,584 | I am trying to understand different Recurrent Neural Network (RNN) architectures to be applied to time series data and I am getting a bit confused with the different names that are frequently used when describing RNNs. Is the structure of Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) essentially an RNN with a feedback loop? | All RNNs have feedback loops in the recurrent layer. This lets them maintain information in 'memory' over time. But, it can be difficult to train standard RNNs to solve problems that require learning long-term temporal dependencies. This is because the gradient of the loss function decays exponentially with time (called the vanishing gradient problem). LSTM networks are a type of RNN that uses special units in addition to standard units. LSTM units include a 'memory cell' that can maintain information in memory for long periods of time. A set of gates is used to control when information enters the memory, when it's output, and when it's forgotten. This architecture lets them learn longer-term dependencies. GRUs are similar to LSTMs, but use a simplified structure. They also use a set of gates to control the flow of information, but they don't use separate memory cells, and they use fewer gates. This paper gives a good overview: Chung et al. (2014) . Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. | {
"source": [
"https://stats.stackexchange.com/questions/222584",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/122509/"
]
} |
222,883 | In recent years, convolutional neural networks (or perhaps deep neural networks in general) have become deeper and deeper, with state-of-the-art networks going from 7 layers ( AlexNet ) to 1000 layers ( Residual Nets) in the space of 4 years. The reason behind the boost in performance from a deeper network, is that a more complex, non-linear function can be learned. Given sufficient training data, this enables the networks to more easily discriminate between different classes. However, the trend seems to not have followed with the number of parameters in each layer. For example, the number of feature maps in the convolutional layers, or the number of nodes in the fully-connected layers, has remained roughly the same and is still relatively small in magnitude, despite the large increase in the number of layers. From my intuition though, it would seem that increasing the number of parameters per layer would give each layer a richer source of data from which to learn its non-linear function; but this idea seems to have been overlooked in favour of simply adding more layers, each with a small number of parameters. So whilst networks have become "deeper", they have not become "wider". Why is this? | As a disclaimer, I work on neural nets in my research, but I generally use relatively small, shallow neural nets rather than the really deep networks at the cutting edge of research you cite in your question. I am not an expert on the quirks and peculiarities of very deep networks and I will defer to someone who is. First, in principle, there is no reason you need deep neural nets at all. A sufficiently wide neural network with just a single hidden layer can approximate any (reasonable) function given enough training data. There are, however, a few difficulties with using an extremely wide, shallow network. The main issue is that these very wide, shallow networks are very good at memorization, but not so good at generalization . So, if you train the network with every possible input value, a super wide network could eventually memorize the corresponding output value that you want. But that's not useful because for any practical application you won't have every possible input value to train with. The advantage of multiple layers is that they can learn features at various levels of abstraction . For example, if you train a deep convolutional neural network to classify images, you will find that the first layer will train itself to recognize very basic things like edges, the next layer will train itself to recognize collections of edges such as shapes, the next layer will train itself to recognize collections of shapes like eyes or noses, and the next layer will learn even higher-order features like faces. Multiple layers are much better at generalizing because they learn all the intermediate features between the raw data and the high-level classification. So that explains why you might use a deep network rather than a very wide but shallow network. But why not a very deep, very wide network? I think the answer there is that you want your network to be as small as possible to produce good results. As you increase the size of the network, you're really just introducing more parameters that your network needs to learn, and hence increasing the chances of overfitting. If you build a very wide, very deep network, you run the chance of each layer just memorizing what you want the output to be, and you end up with a neural network that fails to generalize to new data. 
Aside from the specter of overfitting, the wider your network, the longer it will take to train . Deep networks already can be very computationally expensive to train, so there's a strong incentive to make them wide enough that they work well, but no wider. | {
"source": [
"https://stats.stackexchange.com/questions/222883",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72307/"
]
} |
223,315 | As announced in https://www.youtube.com/watch?v=xAoljeRJ3lU , Matplotlib changes the default colormap from jet to viridis. However, I don't understand it pretty well. Maybe because I'm color blind? The original colormap jet looks very strong, I can feel the contrast: While the new colormap viridis lacks that contrast: Can anyone please explain it simpler for me? I need the plot for my paper. And I need a good reason to convince my supervisor (and myself) that the viridis is a better one. | See this video . You could also google it because there are a lot of (reasonable) jet-bashing everywhere. Jet is very pleasing because it is flashy, colorful, and it does not require you to think about your color scale: even if you have just a few outliers, you still get "all the features" in your plot. You said it yourself: jet almost never lacks contrast. However this comes at a very high price: jet literally shows things that do not exist . It creates contrast out of nowhere: just change your color scale a little bit in jet and you should see that the picture is change dramatically. Do the same thing in viridis, and you would merely have the impression that you are putting more or less light on the exact same thing. If you don't like viridis, use the other colormaps that were discussed in the video above: they have the same nice properties, and they won't make your data lie. Also change the color scale: starting at 0, even if it is logical from a scientific point of view, may not be a good idea to represent these specific data (but change your colorbar to reflect that, e.g. "<25"). But again, see the video, there are a lot of examples in there as well as complete explanations. | {
"source": [
"https://stats.stackexchange.com/questions/223315",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/80763/"
]
} |
223,316 | I have researched multiple related questions( here , here ) but it lacks detailed context and solutions. My goal is to improve my daily sales forecast accuracy after having incorporated a simple holiday dummy for lunar new year. y <- msts(train$Sales, seasonal.periods=c(7,365.25))
# precomputed optimal fourier terms
bestfit$i <- 3
bestfit$j <- 20
z <- fourier(y, K=c(bestfit$i, bestfit$j))
fit <- auto.arima(y, xreg=cbind(z,train_df$cny), seasonal=FALSE)
# forecasting
horizon <- length(test_ts)
zf <- fourier(y, K=c(bestfit$i, bestfit$j), h=horizon)
fc <- forecast(fit, xreg=cbind(zf,test_df$cny), h=horizon)
plot(fc, include=365, type="l", xlab="Days", ylab="Sales", main="Comparing arimax forecast and actuals")
lines(test_ts, col='green') However, this does not reflect the lagged effect of the holiday. An approach will be to model the effects with a continuous variable(fitted to the effect curve above), but will like to heard other suggestions. | See this video . You could also google it because there are a lot of (reasonable) jet-bashing everywhere. Jet is very pleasing because it is flashy, colorful, and it does not require you to think about your color scale: even if you have just a few outliers, you still get "all the features" in your plot. You said it yourself: jet almost never lacks contrast. However this comes at a very high price: jet literally shows things that do not exist . It creates contrast out of nowhere: just change your color scale a little bit in jet and you should see that the picture is change dramatically. Do the same thing in viridis, and you would merely have the impression that you are putting more or less light on the exact same thing. If you don't like viridis, use the other colormaps that were discussed in the video above: they have the same nice properties, and they won't make your data lie. Also change the color scale: starting at 0, even if it is logical from a scientific point of view, may not be a good idea to represent these specific data (but change your colorbar to reflect that, e.g. "<25"). But again, see the video, there are a lot of examples in there as well as complete explanations. | {
"source": [
"https://stats.stackexchange.com/questions/223316",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/123011/"
]
} |
223,486 | I'm fairly new to Machine Learning/Modelling and I'd like some background to this problem.
I have a dataset where the number of observations is $n<200$; however, the number of variables is $p\sim 8000$.
Firstly does it even make sense to consider building a model on a dataset like this or should one consider a variable selection technique to start with such as ridge regression or Lasso? I've read that this situation can lead to over-fitting. Is that the case for all ML techniques or do some techniques handle this better than others? Without too much maths a simple explanation on why the maths start to breakdown for $p>n$ would be appreciated. | It's certainly possible to fit good models when there are more variables than data points, but this must be done with care. When there are more variables than data points, the problem may not have a unique solution unless it's further constrained. That is, there may be multiple (perhaps infinitely many) solutions that fit the data equally well. Such a problem is called 'ill-posed' or 'underdetermined'. For example, when there are more variables than data points, standard least squares regression has infinitely many solutions that achieve zero error on the training data. Such a model would certainly overfit because it's 'too flexible' for the amount of training data. As model flexibility increases (e.g. more variables in a regression model) and the amount of training data shrinks, it becomes increasingly likely that the model will be able to achieve a low error by fitting random fluctuations in the training data that don't represent the true, underlying distribution. Performance will therefore be poor when the model is run on future data drawn from the same distribution. The problems of ill-posedness and overfitting can both be addressed by imposing constraints. This can take the form of explicit constraints on the parameters, a penalty/regularization term, or a Bayesian prior. Training then becomes a tradeoff between fitting the data well and satisfying the constraints. You mentioned two examples of this strategy for regression problems: 1) LASSO constrains or penalizes the $\ell_1$ norm of the weights, which is equivalent to imposing a Laplacian prior. 2) Ridge regression constrains or penalizes the $\ell_2$ norm of the weights, which is equivalent to imposing a Gaussian prior. Constraints can yield a unique solution, which is desirable when we want to interpret the model to learn something about the process that generated the data. They can also yield better predictive performance by limiting the model's flexibility, thereby reducing the tendency to overfit. However, simply imposing constraints or guaranteeing that a unique solution exists doesn't imply that the resulting solution will be good. Constraints will only produce good solutions when they're actually suited to the problem. A couple miscellaneous points: The existence of multiple solutions isn't necessarily problematic. For example, neural nets can have many possible solutions that are distinct from each other but near equally good. The existence of more variables than data points, the existence of multiple solutions, and overfitting often coincide. But, these are distinct concepts; each can occur without the others. | {
"source": [
"https://stats.stackexchange.com/questions/223486",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/73403/"
]
} |
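The answer above points to LASSO and ridge regression as the standard constrained approaches when there are more variables than observations. As a quick illustrative sketch (not part of the original question or answer, with made-up simulation settings), this is how one might fit both with the glmnet package in R:
library(glmnet)
set.seed(1)
n <- 150; p <- 2000                      # many more predictors than observations
x <- matrix(rnorm(n * p), nrow = n)
beta <- c(rep(2, 5), rep(0, p - 5))      # only 5 predictors truly matter
y <- drop(x %*% beta + rnorm(n))
# LASSO (alpha = 1): L1 penalty, performs variable selection by setting coefficients exactly to 0
cv_lasso <- cv.glmnet(x, y, alpha = 1)
sum(coef(cv_lasso, s = "lambda.min") != 0)   # number of non-zero coefficients retained
# Ridge (alpha = 0): L2 penalty, shrinks all coefficients but keeps them non-zero
cv_ridge <- cv.glmnet(x, y, alpha = 0)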
223,799 | I started off learning about neural networks with the neuralnetworksanddeeplearning dot com tutorial. In particular, in the 3rd chapter there is a section about the cross entropy function, which defines the cross entropy loss as: $C = -\frac{1}{n} \sum\limits_x \sum\limits_j (y_j \ln a^L_j + (1-y_j) \ln (1 - a^L_j))$ However, reading the Tensorflow introduction , the cross entropy loss is defined as: $C = -\frac{1}{n} \sum\limits_x \sum\limits_j (y_j \ln a^L_j)$ (when using the same symbols as above) Then, searching around to find out what was going on, I found another set of notes ( https://cs231n.github.io/linear-classify/#softmax-classifier ) that uses a completely different definition of the cross entropy loss, albeit this time for a softmax classifier rather than for a neural network. Can someone explain to me what is going on here? Why are there discrepancies between what people define the cross-entropy loss as? Is there just some overarching principle? | These three definitions are essentially the same. 1) The Tensorflow introduction ,
$$C = -\frac{1}{n} \sum\limits_x\sum\limits_{j} (y_j \ln a_j).$$ 2) For binary classifications $j=2$, it becomes
$$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + y_2 \ln a_2)$$
and because of the constraints $\sum_ja_j=1$ and $\sum_jy_j=1$, it can be rewritten as
$$C = -\frac{1}{n} \sum\limits_x (y_1 \ln a_1 + (1-y_1) \ln (1-a_1))$$
which is the same as in the 3rd chapter . 3) Moreover, if $y$ is a one-hot vector (which is commonly the case for classification labels) with $y_k$ being the only non-zero element, then the cross entropy loss of the corresponding sample is
$$C_x=-\sum\limits_{j} (y_j \ln a_j)=-(0+0+...+y_k\ln a_k)=-\ln a_k.$$ In the cs231 notes , the cross entropy loss of one sample is given together with softmax normalization as
$$C_x=-\ln(a_k)=-\ln\left(\frac{e^{f_k}}{\sum_je^{f_j}}\right).$$ | {
"source": [
"https://stats.stackexchange.com/questions/223799",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/123326/"
]
} |
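To make the equivalence described in the answer above concrete, here is a small numeric check in R (an illustrative sketch, not taken from the original post): for a two-class problem, the "binary" form with the $(1-y)$ term and the "categorical" form summed over both classes give the same number.
a <- c(0.7, 0.3)   # predicted class probabilities; they sum to 1
y <- c(1, 0)       # one-hot label
ce_categorical <- -sum(y * log(a))                               # -sum_j y_j * log(a_j)
ce_binary <- -(y[1] * log(a[1]) + (1 - y[1]) * log(1 - a[1]))    # uses a_2 = 1 - a_1 and y_2 = 1 - y_1
all.equal(ce_categorical, ce_binary)                             # TRUE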
223,808 | Background I'm doing clinical research in medicine and have taken several statistics courses. I've never published a paper using linear/logistic regression and would like to do variable selection correctly. Interpretability is important, so no fancy machine learning techniques. I've summarized my understanding of variable selection - would someone mind shedding light on any misconceptions? I found two (1) similar (2) CV posts to this one, but they didn't quite fully answer my concerns. Any thoughts would be much appreciated! I have 3 primary questions at the end. Problem and Discussion My typical regression/classification problem has 200-300 observations, an adverse event rate of 15% (if classification), and info on 25 out of 40 variables that have been claimed to have a "statistically significant" effect in the literature or make plausible sense by domain knowledge. I put "statistically significant" in quotes, because it seems like everyone and their mother uses stepwise regression, but Harrell (3) and Flom (4) don't appear to like it for a number of good reasons. This is further supported by a Gelman blog post discussion (5). It seems like the only real time that stepwise is acceptable is if this is truly exploratory analysis, or one is interested in prediction and has a cross-validation scheme involved. Especially since many medical comorbidities suffer from collinearity AND studies suffer from small sample size, my understanding is that there will be a lot of false positives in the literature; this also makes me less likely to trust the literature for potential variables to include. Another popular approach is to use a series of univariate regressions/associations between the predictors and the independent variable as a starting point, keeping only those predictors whose univariate p-value falls below a particular threshold (say, p < 0.2). This seems incorrect, or at least misleading, for the reasons outlined in this StackExchange post (6). Lastly, an automated approach that appears popular in machine learning is to use penalization like L1 (Lasso), L2 (Ridge), or the L1+L2 combo (Elastic Net). My understanding is that these do not have the same easy interpretations as OLS or logistic regression. Gelman + Hill propose the following: In my Stats course, I also recall using F tests or Analysis of Deviance to compare full and nested models to do model/variable selection variable by variable. This seems reasonable, but fitting sequential nested models systematically to find the variables that cause the largest drop in deviance per df seems like it could be easily automated (so I'm a bit concerned) and also seems like it suffers from problems of the order in which you test variable inclusion. My understanding is that this should also be supplemented by investigating multicollinearity and residual plots (residual vs. predicted). Questions: Is the Gelman summary the way to go? What would you add or change in his proposed strategy? Aside from purely thinking about potential interactions and transformations (which seems very bias/error/omission prone), is there another way to discover potential ones? Multivariate adaptive regression splines (MARS) were recommended to me, but I was informed that the nonlinearities/transformations don't translate into the same variables in a standard regression model. Suppose my goal is very simple: say, "I'd like to estimate the association of X1 on Y, only accounting for X2".
Is it adequate to simply regress Y ~ X1 + X2, report the outcome, without reference to actual predictive ability (as might be measured by cross-validation RMSE or accuracy measures)? Does this change depending on event rate or sample size or if R^2 is super low (I'm aware that R^2 is not good because you can always increase it by overfitting)? I am generally more interested in inference/interpretability than optimizing predictive power. Example conclusions: "Controlling for X2, X1 was not statistically significantly associated with Y relative to X1's reference level." (logistic regression coefficient) "X1 was not a statistically significant predictor of Y since in the model drop in deviance was not enough relative to the change in df." (Analysis of Deviance) Is cross-validation always necessary? In which case, one might also want to do some balancing of classes via SMOTE, sampling, etc. | Andrew Gelman is definitely a respected name in the statistical world. His principles closely align with some of the causal modeling research that has been done by other "big names" in the field. But I think given your interest in clinical research, you should be consulting other sources. I am using the word "causal" loosely (as do others) because there is a fine line we must draw between performing "causal inference" from observational data, and asserting causal relations between variables. We all agree RCTs are the main way of assessing causality. We rarely adjust for anything in such trials per the randomization assumption, with few exceptions ( Senn, 2004 ). Observational studies have their importance and utility ( Weiss, 1989 ) and the counterfactual based approach to making inference from observational data is accepted as a philosophically sound approach to doing so ( Höfler, 2005 ). It often approximates very closely the use-efficacy measured in RCTs ( Anglemyer, 2014 ). Therefore, I'll focus on studies from observational data. My point of contention with Gelman's recommendations is: all predictors in a model and their posited causal relationship between a single exposure of interest and a single outcome of interest should be specified apriori . Throwing in and excluding covariates based on their relationship between a set of main findings is actually inducing a special case of 'Munchausen's statistical grid' ( Martin, 1984 ). Some journals (and the trend is catching on) will summarily reject any article which uses stepwise regression to identify a final model ( Babyak, 2004 ), and I think the problem is seen in similar ways here. The rationale for inclusion and exclusion of covariates in a model is discussed in: Judea Pearl's Causality ( Pearl, 2002 ). It is perhaps one of the best texts around for understanding the principles of statistical inference, regression, and multivariate adjustment. Also practically anything by Sanders and Greenland is illuminating, in particular their discussion on confounding which is regretfully omitted from this list of recommendations ( Greenland et al. 1999 ). Specific covariates can be assigned labels based on a graphical relation with a causal model. Designations such as prognostic, confounder, or precision variables warrant inclusion as covariates in statistical models. Mediators, colliders, or variables beyond the causal pathway should be omitted. The definitions of these terms are made rigorous with plenty of examples in Causality. Given this little background I'll address the points one-by-one. 
This is generally a sound approach with one MAJOR caveat: these variables must NOT be mediators of the outcome. If, for instance, you are inspecting the relationship between smoking and physical fitness, and you adjust for lung function, that is attenuating the effect of smoking because it's direct impact on fitness is that of reducing lung function. This should NOT be confused with confounding where the third variable is causal of the predictor of interest AND the outcome of interest. Confounders must be included in models. Additionally, overadjustment can cause multiple forms of bias in analyses. Mediators and confounders are deemed as such NOT because of what is found in analyses, but because of what is BELIEVED by YOU as the subject-matter-expert (SME). If you have 20 observations per variable or fewer, or 20 observations per event in time-to-event or logistic analyses, you should consider conditional methods instead. This is an excellent power saving approach that is not so complicated as propensity score adjustment or SEM or factor analysis. I would definitely recommend doing this whenever possible. I disagree wholeheartedly. The point of adjusting for other variables in analyses is to create strata for which comparisons are possible. Misspecifying confounder relations does not generally lead to overbiased analyses, so residual confounding from omitted interaction terms is, in my experience, not a big issue. You might, however, consider interaction terms between the predictor of interest and other variables as a post-hoc analysis. This is a hypothesis generating procedure that is meant to refine any possible findings (or lack thereof) as a. potentially belonging to a subgroup or b. involving a mechanistic interaction between two environmental and/or genetic factors. I also disagree with this wholeheartedly. It does not coincide with the confirmatory analysis based approach to regression. You are the SME. The analyses should be informed by the QUESTION and not the DATA. State with confidence what you believe to be happening, based on a pictoral depiction of the causal model (using a DAG and related principles from Pearl et. al), then choose the predictors for your model of interest, fit, and discuss. Only as a secondary analysis should you consider this approach, even at all. The role of machine learning in all of this is highly debatable. In general, machine learning is focused on prediction and not inference which are distinct approaches to data analysis. You are right that the interpretation of effects from penalized regression are not easily interpreted for a non-statistical community, unlike estimates from an OLS, where 95% CIs and coefficient estimates provide a measure of association. The interpretation of the coefficient from an OLS model Y~X is straightforward: it is a slope, an expected difference in Y comparing groups differing by 1 unit in X. In a multivariate adjusted model Y~X1+X2 we modify this as a conditional slope: it is an expected difference in Y comparing groups differing by 1 unit in X1 who have the same value of X2. Geometrically, adjusting for X2 leads to distinct strata or "cross sections" of the three space where we compare X1 to Y, then we average up the findings over each of those strata. In R, the coplot function is very useful for visualizing such relations. | {
"source": [
"https://stats.stackexchange.com/questions/223808",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/123319/"
]
} |
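The answer above closes by recommending R's coplot for visualizing a conditional slope. A minimal sketch of that idea on simulated data (the variable names and data-generating settings are invented purely for illustration):
set.seed(42)
n  <- 300
x2 <- rnorm(n)                      # covariate we adjust for
x1 <- 0.5 * x2 + rnorm(n)           # exposure of interest, correlated with x2
y  <- 1 + 2 * x1 - x2 + rnorm(n)
d  <- data.frame(y, x1, x2)
coef(lm(y ~ x1 + x2, data = d))["x1"]                 # conditional slope of y on x1, holding x2 fixed
coplot(y ~ x1 | x2, data = d, panel = panel.smooth)   # y ~ x1 within strata ("shingles") of x2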
224,005 | I know the definition of a symmetric positive definite (SPD) matrix, but want to understand more. Why are they so important, intuitively? Here is what I know. What else? For given data, the co-variance matrix is SPD. The co-variance matrix is an important metric, see this excellent post for an intuitive explanation. The quadratic form $\frac 1 2 x^\top Ax-b^\top x +c$ is convex, if $A$ is SPD. Convexity is a nice property for a function that can make sure a local solution is a global solution. For convex problems, there are many good algorithms to solve them, but not for non-convex problems. When $A$ is SPD, the optimization solution for the quadratic form $$\text{minimize}~~~ \frac 1 2 x^\top Ax-b^\top x +c$$ and the solution for the linear system $$Ax=b$$ are the same. So we can run conversions between two classical problems. This is important because it enables us to use tricks discovered in one domain in the other. For example, we can use the conjugate gradient method to solve a linear system. There are many good algorithms (fast, numerically stable) that work better for an SPD matrix, such as Cholesky decomposition. EDIT: I am not trying to ask about identities for SPD matrices, but the intuition behind the property to show its importance. For example, as mentioned by @Matthew Drury, if a matrix is SPD, the eigenvalues are all positive real numbers, but why being all positive matters. @Matthew Drury had a great answer to flow and that is what I was looking for. | A (real) symmetric matrix has a complete set of orthogonal eigenvectors for which the corresponding eigenvalues are all real numbers. For non-symmetric matrices this can fail. For example, a rotation in two dimensional space has no eigenvectors or eigenvalues in the real numbers; you must pass to a vector space over the complex numbers to find them. If the matrix is additionally positive definite, then these eigenvalues are all positive real numbers. This fact is much easier than the first, for if $v$ is an eigenvector with unit length, and $\lambda$ the corresponding eigenvalue, then $$ \lambda = \lambda v^t v = v^t A v > 0 $$ where the last equality uses the definition of positive definiteness. The importance here for intuition is that the eigenvectors and eigenvalues of a linear transformation describe the coordinate system in which the transformation is most easily understood. A linear transformation can be very difficult to understand in a "natural" basis like the standard coordinate system, but each comes with a "preferred" basis of eigenvectors in which the transformation acts as a scaling in all directions. This makes the geometry of the transformation much easier to understand. For example, the second derivative test for the local extrema of a function $R^2 \rightarrow R$ is often given as a series of mysterious conditions involving an entry in the second derivative matrix and some determinants. In fact, these conditions simply encode the following geometric observation: If the matrix of second derivatives is positive definite, you're at a local minimum. If the matrix of second derivatives is negative definite, you're at a local maximum. Otherwise, you are at neither, a saddle point. You can understand this with the geometric reasoning above in an eigenbasis. The first derivative at a critical point vanishes, so the rates of change of the function here are controlled by the second derivative. Now we can reason geometrically: In the first case there are two eigen-directions, and if you move along either the function increases.
In the second, there are two eigen-directions, and if you move in either the function decreases. In the last, there are two eigen-directions, but in one of them the function increases, and in the other it decreases. Since the eigenvectors span the whole space, any other direction is a linear combination of eigen-directions, so the rates of change in those directions are linear combinations of the rates of change in the eigen-directions. So in fact, this holds in all directions (this is more or less what it means for a function defined on a higher dimensional space to be differentiable). Now if you draw a little picture in your head, this makes a lot of sense out of something that is quite mysterious in beginner calculus texts. This applies directly to one of your bullet points: The quadratic form $\frac 1 2 x^\top Ax-b^\top x +c$ is convex, if $A$ is SPD. Convexity is a nice property that can make sure a local solution is a global solution. The matrix of second derivatives is $A$ everywhere, which is symmetric positive definite. Geometrically, this means that if we move away in any eigen-direction (and hence any direction, because any other is a linear combination of eigen-directions) the function itself will bend away above its tangent plane. This means the whole surface is convex. | {
"source": [
"https://stats.stackexchange.com/questions/224005",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113777/"
]
} |
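A quick numerical companion to the answer above (an illustrative sketch, not part of the original post): construct an SPD matrix in R, confirm that its eigenvalues are positive, and check that the quadratic form lies above its tangent plane at a point, which is what convexity means geometrically.
set.seed(1)
M <- matrix(rnorm(9), 3, 3)
A <- crossprod(M) + 0.1 * diag(3)   # t(M) %*% M is positive semi-definite; the small ridge makes it SPD
eigen(A)$values                      # all strictly positive
b <- rnorm(3)
f  <- function(x) 0.5 * drop(t(x) %*% A %*% x) - drop(t(b) %*% x)   # the quadratic form (with c = 0)
x0 <- rnorm(3); g0 <- drop(A %*% x0 - b)                             # gradient of f at x0
x1 <- rnorm(3)
f(x1) >= f(x0) + sum(g0 * (x1 - x0))   # TRUE: the surface bends away above its tangent plane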
224,014 | A disclaimer: By using an informal term such as "generalize", I am aware I am getting close to philosophical territory, and that my question could be seen as unsuitable for CV. I will do my best to be specific enough in phrasing it, to allow for meaningful answers by the standards of this community. I am trying to gain some broad overview of the methods with which ML researchers and practitioners, statisticians and mathematicians, tackle the question whether a model successfully learned to generalize -- or whether it failed to do so. Or maybe better, to avoid framing the question categorically: to which degree a model learned to generalize? In other words, I am asking which formally specified (and to remain practical: computable and computationally tractable) methods exist that can be seen as addressing the informal question posed in the title, that of a model's or method's ability to 'generalize'. Does that question make any sense up to this point? And is it possible to answer it in the context of CV? Some additional remarks, trying to clarify the question further: What I'm asking about is maybe a taxonomy of sorts (happily accepting your personal taxonomy, in case no canonical one exists) of such methods, of both 'hard' formal results , and methods relying on empirical evaluation . Closely related to 'generalization', and, I'm afraid, equally underspecified: the notion of systematicity . A model's ability to generalize often seems to be mentioned alongside the question whether the model found a systematical solution for a task it was trained on. Does that help in any way? (Probably not.) Maybe the following distinction needs to be made: mentioning "models" above, I somewhat conflate the general learning algorithm or method, and particular instances of these methods, i.e. models that are constructed by the algorithm from training data. My question then contains at least two sub-questions: ways to speak about the 'generalization' ability of the algorithm itself, and which to evaluate the same for a trained model? At least in the context of neural networks (the family of models I'm most familiar with), it seems to me that the 'generalization' question is answered mostly empirically, and mostly by one particular method only (perhaps the only one available, in reality?): by separating the data into distinct sets (standard being 2 for train/test, or 3 for train/test/evaluation), keeping data used for training and performance evaluation separate. As a consequence, we can then consider overfitting of a model (as measured by the model's performance on the withheld data), and find ways to combat it (regularization). Which brings me to consider another distinction: (i) 'generalization' as performance over unseen data that we (plausibly) know was generated by the same function that also generated the training data, in contrast to: (ii) generalization to (unseen) data that we might only hypothesize or believe to be produced by the same underlying, general function. By my own (still very incomplete) understanding, it appears then that we are usually only concerned with evaluating 'generalization' of the first type (i.e. 
by measuring and comparing performance over unseen data generated with some certainty by the same function as the training data -- I say "with some certainty", because we either chose the function generating the data ourselves, or following from the relatively natural assumption that the same function generated the particular data set we used, say, a set of 1 million 400 by 600 px pictures of dancing cats), while generalization of the second type is not usually measured or considered (i.e. performance over unseen examples that are in a sense truly new and different from the ones encountered during training, but that we might believe are the product of the same function that generated the training data). Here's another strong possibility: I am completely wrong with that characterization (not really surprising, considering how confused I still am about all things ML). If that's the case, my apologies for misrepresenting (and misunderstanding) the current approaches. | A (real) symmetric matrix has a complete set of orthogonal eigenvectors for which the corresponding eigenvalues are are all real numbers. For non-symmetric matrices this can fail. For example, a rotation in two dimensional space has no eigenvector or eigenvalues in the real numbers, you must pass to a vector space over the complex numbers to find them. If the matrix is additionally positive definite, then these eigenvalues are all positive real numbers. This fact is much easier than the first, for if $v$ is an eigenvector with unit length, and $\lambda$ the corresponding eigenvalue, then $$ \lambda = \lambda v^t v = v^t A v > 0 $$ where the last equality uses the definition of positive definiteness. The importance here for intuition is that the eigenvectors and eigenvalues of a linear transformation describe the coordinate system in which the transformation is most easily understood. A linear transformation can be very difficult to understand in a "natural" basis like the standard coordinate system, but each comes with a "preferred" basis of eigenvectors in which the transformation acts as a scaling in all directions. This makes the geometry of the transformation much easier to understand. For example, the second derivative test for the local extrema of a function $R^2 \rightarrow R$ is often given as a series of mysterious conditions involving an entry in the second derivative matrix and some determinants. In fact, these conditions simply encode the following geometric observation: If the matrix of second derivatives is positive definite, you're at a local minimum. If the matrix of second derivatives is negative definite, you're at a local maximum. Otherwise, you are at neither, a saddle point. You can understand this with the geometric reasoning above in an eigenbasis. The first derivative at a critical point vanishes, so the rates of change of the function here are controlled by the second derivative. Now we can reason geometrically In the first case there are two eigen-directions, and if you move along either the function increases. In the second, two eigen-directions, and if you move in either the function decreases. In the last, there are two eigen-directions, but in one of them the function increases, and in the other it decreases. Since the eigenvectors span the whole space, any other direction is a linear combination of eigen-directions, so the rates of change in those directions are linear combinations of the rates of change in the eigen directions. 
So in fact, this holds in all directions (this is more or less what it means for a function defined on a higher dimensional space to be differentiable). Now if you draw a little picture in your head, this makes a lot of sense out of something that is quite mysterious in beginner calculus texts. This applies directly to one of your bullet points The quadratic form $\frac 1 2 x^\top Ax-b^\top x +c$ is convex, if $A$ is SPD. Convex is a nice property that can make sure the local solution is global solution The matrix of second derivatives is $A$ everywhere, which is symmetric positive definite. Geometrically, this means that if we move away in any eigen-direction (and hence any direction, because any other is a linear combination of eigen-directions) the function itself will bend away above it's tangent plane. This means the whole surface is convex. | {
"source": [
"https://stats.stackexchange.com/questions/224014",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/94656/"
]
} |
224,037 | I have a simple question regarding "conditional probability" and "Likelihood". (I have already surveyed this question here but to no avail.) It starts from the Wikipedia page on likelihood . They say this: The likelihood of a set of parameter values, $\theta$, given
outcomes $x$, is equal to the probability of those observed outcomes
given those parameter values, that is $$\mathcal{L}(\theta \mid x) = P(x \mid \theta)$$ Great! So in English, I read this as: "The likelihood of parameters equaling theta, given data X = x, (the left-hand-side), is equal to the probability of the data X being equal to x, given that the parameters are equal to theta". ( Bold is mine for emphasis ). However, no less than 3 lines later on the same page, the Wikipedia entry then goes on to say: Let $X$ be a random variable with a discrete probability distribution
$p$ depending on a parameter $\theta$. Then the function $$\mathcal{L}(\theta \mid x) = p_\theta (x) = P_\theta (X=x), \, $$ considered as a function of $\theta$, is called the likelihood
function (of $\theta$, given the outcome $x$ of the random variable
$X$). Sometimes the probability of the value $x$ of $X$ for the
parameter value $\theta$ is written as $P(X=x\mid\theta)$; often
written as $P(X=x;\theta)$ to emphasize that this differs from
$\mathcal{L}(\theta \mid x) $ which is not a conditional probability ,
because $\theta$ is a parameter and not a random variable. ( Bold is mine for emphasis ). So, in the first quote, we are literally told about a conditional probability of $P(x\mid\theta)$, but immediately afterwards, we are told that this is actually NOT a conditional probability, and should be in fact written as $P(X = x; \theta)$? So, which one is is? Does the likelihood actually connote a conditional probability ala the first quote? Or does it connote a simple probability ala the second quote? EDIT: Based on all the helpful and insightful answers I have received thus far, I have summarized my question - and my understanding thus far as so: In English , we say that: "The likelihood is a function of parameters, GIVEN the observed data." In math , we write it as: $L(\mathbf{\Theta}= \theta \mid \mathbf{X}=x)$. The likelihood is not a probability. The likelihood is not a probability distribution. The likelihood is not a probability mass. The likelihood is however, in English : "A product of probability distributions, (continuous case), or a product of probability masses, (discrete case), at where $\mathbf{X} = x$, and parameterized by $\mathbf{\Theta}= \theta$." In math , we then write it as such: $L(\mathbf{\Theta}= \theta \mid \mathbf{X}=x) = f(\mathbf{X}=x ; \mathbf{\Theta}= \theta) $ (continuous case, where $f$ is a PDF), and as $L(\mathbf{\Theta}= \theta \mid \mathbf{X}=x) = P(\mathbf{X}=x ; \mathbf{\Theta}= \theta) $ (discrete case, where $P$ is a probability mass). The takeaway here is that at no point here whatsoever is a conditional probability coming into play at all. In Bayes theorem, we have: $P(\mathbf{\Theta}= \theta \mid \mathbf{X}=x) = \frac{P(\mathbf{X}=x \mid \mathbf{\Theta}= \theta) \ P(\mathbf{\Theta}= \theta)}{P(\mathbf{X}=x)}$. Colloquially, we are told that "$P(\mathbf{X}=x \mid \mathbf{\Theta}= \theta)$ is a likelihood", however, this is not true , since $\mathbf{\Theta}$ might be an actual random variable. Therefore, what we can correctly say however, is that this term $P(\mathbf{X}=x \mid \mathbf{\Theta}= \theta)$ is simply "similar" to a likelihood. (?) [On this I am not sure.] EDIT II: Based on @amoebas answer, I have drawn his last comment. I think it's quite elucidating, and I think it clears up the main contention I was having. (Comments on the image). EDIT III: I extended @amoebas comments to the Bayesian case just now as well: | I think this is largely unnecessary splitting hairs. Conditional probability $P(x\mid y)\equiv P(X=x \mid Y=y)$ of $x$ given $y$ is defined for two random variables $X$ and $Y$ taking values $x$ and $y$. But we can also talk about probability $P(x\mid\theta)$ of $x$ given $\theta$ where $\theta$ is not a random variable but a parameter. Note that in both cases the same term "given" and the same notation $P(\cdot\mid\cdot)$ can be used. There is no need to invent different notations. Moreover, what is called "parameter" and what is called "random variable" can depend on your philosophy, but the math does not change. The first quote from Wikipedia states that $\mathcal{L}(\theta \mid x) = P(x \mid \theta)$ by definition. Here it is assumed that $\theta$ is a parameter. The second quote says that $\mathcal{L}(\theta \mid x)$ is not a conditional probability. This means that it is not a conditional probability of $\theta$ given $x$; and indeed it cannot be, because $\theta$ is assumed to be a parameter here. In the context of Bayes theorem $$P(a\mid b)=\frac{P(b\mid a)P(a)}{P(b)},$$ both $a$ and $b$ are random variables. 
But we can still call $P(b\mid a)$ "likelihood" (of $a$), and now it is also a bona fide conditional probability (of $b$). This terminology is standard in Bayesian statistics. Nobody says it is something "similar" to the likelihood; people simply call it the likelihood. Note 1: In the last paragraph, $P(b\mid a)$ is obviously a conditional probability of $b$. As a likelihood $\mathcal L(a\mid b)$ it is seen as a function of $a$; but it is not a probability distribution (or conditional probability) of $a$! Its integral over $a$ does not necessarily equal $1$. (Whereas its integral over $b$ does.) Note 2: Sometimes likelihood is defined up to an arbitrary proportionality constant, as emphasized by @MichaelLew (because most of the time people are interested in likelihood ratios ). This can be useful, but is not always done and is not essential. See also What is the difference between "likelihood" and "probability"? and in particular @whuber's answer there. I fully agree with @Tim's answer in this thread too (+1). | {
"source": [
"https://stats.stackexchange.com/questions/224037",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27158/"
]
} |
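As a small companion to the answer above (an illustrative sketch, not from the original post), the point in Note 1 can be seen numerically with a binomial model in R: as a function of $x$ for fixed $\theta$ the probabilities sum to 1, but the same expression viewed as a likelihood in $\theta$ for fixed $x$ does not integrate to 1.
n <- 10; x_obs <- 7
sum(dbinom(0:n, size = n, prob = 0.4))                         # P(X = x | theta) over all x: sums to 1
lik <- function(theta) dbinom(x_obs, size = n, prob = theta)   # L(theta | x), a function of theta
integrate(lik, 0, 1)$value                                     # = 1/(n + 1), about 0.091, not 1
theta_grid <- seq(0, 1, by = 0.001)
theta_grid[which.max(lik(theta_grid))]                         # maximum-likelihood estimate: x/n = 0.7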
224,051 | There are two different ways to encoding categorical variables. Say, one categorical variable has n values. One-hot encoding converts it into n variables, while dummy encoding converts it into n-1 variables. If we have k categorical variables, each of which has n values. One hot encoding ends up with kn variables, while dummy encoding ends up with kn-k variables. I hear that for one-hot encoding, intercept can lead to collinearity problem, which makes the model not sound. Someone call it " dummy variable trap ". My questions: Scikit-learn's linear regression model allows users to disable intercept. So for one-hot encoding, should I always set fit_intercept=False? For dummy encoding, fit_intercept should always be set to True? I do not see any "warning" on the website. Since one-hot encoding generates more variables, does it have more degree of freedom than dummy encoding? | Scikit-learn's linear regression model allows users to disable intercept. So for one-hot encoding, should I always set fit_intercept=False? For dummy encoding, fit_intercept should always be set to True? I do not see any "warning" on the website. For an unregularized linear model with one-hot encoding, yes, you need to set the intercept to be false or else incur perfect collinearity. sklearn also allows for a ridge shrinkage penalty, and in that case it is not necessary, and in fact you should include both the intercept and all the levels. For dummy encoding you should include an intercept, unless you have standardized all your variables, in which case the intercept is zero. Since one-hot encoding generates more variables, does it have more degree of freedom than dummy encoding? The intercept is an additional degree of freedom, so in a well specified model it all equals out. For the second one, what if there are k categorical variables? k variables are removed in dummy encoding. Is the degree of freedom still the same? You could not fit a model in which you used all the levels of both categorical variables, intercept or not. For, as soon as you have one-hot-encoded all the levels in one variable in the model, say with binary variables $x_1, x_2, \ldots, x_n$, then you have a linear combination of predictors equal to the constant vector $$ x_1 + x_2 + \cdots + x_n = 1 $$ If you then try to enter all the levels of another categorical $x'$ into the model, you end up with a distinct linear combination equal to a constant vector $$ x_1' + x_2' + \cdots + x_k' = 1 $$ and so you have created a linear dependency $$ x_1 + x_2 + \cdots x_n - x_1' - x_2' - \cdots - x_k' = 0$$ So you must leave out a level in the second variable, and everything lines up properly. Say, I have 3 categorical variables, each of which has 4 levels. In dummy encoding, 3*4-3=9 variables are built with one intercept. In one-hot encoding, 3*4=12 variables are built without an intercept. Am I correct? The second thing does not actually work. The $3 \times 4 = 12$ column design matrix you create will be singular. You need to remove three columns, one from each of three distinct categorical encodings, to recover non-singularity of your design. | {
"source": [
"https://stats.stackexchange.com/questions/224051",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35802/"
]
} |
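The rank argument in the answer above is easy to verify directly. The question is about scikit-learn, but the same "dummy variable trap" can be demonstrated with R's model.matrix (this sketch is only an illustration, not the sklearn API): a full one-hot encoding of every factor plus an intercept gives a rank-deficient design, while dropping one level per factor restores full rank.
set.seed(1)
d <- data.frame(f1 = factor(sample(letters[1:4], 100, replace = TRUE)),
                f2 = factor(sample(LETTERS[1:4], 100, replace = TRUE)))
# One-hot: keep all 4 levels of both factors plus an intercept -> 9 columns but only rank 7
X_onehot <- model.matrix(~ f1 + f2, d,
                         contrasts.arg = list(f1 = contrasts(d$f1, contrasts = FALSE),
                                              f2 = contrasts(d$f2, contrasts = FALSE)))
c(ncol(X_onehot), qr(X_onehot)$rank)
# Dummy coding: one reference level dropped per factor -> 7 columns, full rank 7
X_dummy <- model.matrix(~ f1 + f2, d)
c(ncol(X_dummy), qr(X_dummy)$rank)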
224,098 | Suppose $X$ is uniformly distributed on $[0, 2\pi]$. Let $Y = \sin X$ and $Z = \cos X$. Show that the correlation between $Y$ and $Z$ is zero. It seems I would need to know the standard deviation of the sine and cosine, and their covariance. How can I calculate these? I think I need to assume $X$ has a uniform distribution, and then look at the transformed variables $Y=\sin(X)$ and $Z=\cos(X)$. Then the law of the unconscious statistician would give the expected values $$E[Y] = \frac{1}{b-a}\int_{-\infty}^{\infty} \sin(x)dx$$ and $$E[Z] = \frac{1}{b-a}\int_{-\infty}^{\infty} \cos(x)dx$$ (the density is constant since it is a uniform distribution, and can thus be moved out of the integral). However, those integrals are not defined (but have Cauchy principal values of zero, I think). How could I solve this problem? I think I know the solution (the correlation is zero because sine and cosine have opposite phases) but I cannot figure out how to derive it. | Since $$\begin{align}
\operatorname{Cov}(Y, Z)
&= E[(Y - E[Y])(Z - E[Z])] \\
&= E[(Y - \tfrac{1}{2\pi}{\textstyle \int}_0^{2\pi} \sin x \;dx)(Z - \tfrac{1}{2\pi}{\textstyle \int}_0^{2\pi} \cos x \;dx)] \\
&= E[(Y - 0)(Z - 0)] \\
&= E[YZ] \\
&= \frac{1}{2\pi}\int_0^{2\pi} \sin x \cos x \;dx \\
&= 0 ,
\end{align}$$ the correlation must also be 0. | {
"source": [
"https://stats.stackexchange.com/questions/224098",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/123521/"
]
} |
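A quick Monte Carlo sanity check of the result above in R (an illustrative sketch): the sample correlation between $\sin X$ and $\cos X$ for $X$ uniform on $[0, 2\pi]$ is essentially zero, and the integral for $E[YZ]$ can be confirmed numerically.
set.seed(123)
x <- runif(1e6, 0, 2 * pi)
cor(sin(x), cos(x))                                                   # close to 0
integrate(function(t) sin(t) * cos(t) / (2 * pi), 0, 2 * pi)$value    # E[YZ]: numerically 0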
224,330 | Based on the little knowledge that I have on MCMC (Markov chain Monte Carlo) methods, I understand that sampling is a crucial part of the aforementioned technique. The most commonly used sampling methods are Hamiltonian and Metropolis. Is there a way to utilise machine learning or even deep learning to construct a more efficient MCMC sampler? | Yes. Unlike what other answers state, 'typical' machine-learning methods such as nonparametrics and (deep) neural networks can help create better MCMC samplers. The goal of MCMC is to draw samples from an (unnormalized) target distribution $f(x)$. The obtained samples are used to approximate $f$ and mostly allow to compute expectations of functions under $f$ (i.e., high-dimensional integrals) and, in particular, properties of $f$ (such as moments). Sampling usually requires a large number of evaluations of $f$, and possibly of its gradient, for methods such as Hamiltonian Monte Carlo (HMC).
If $f$ is costly to evaluate, or the gradient is unavailable, it is sometimes possible to build a less expensive surrogate function that can help guide the sampling and is evaluated in place of $f$ (in a way that still preserves the properties of MCMC). For example, a seminal paper ( Rasmussen 2003 ) proposes to use Gaussian Processes (a nonparametric function approximation) to build an approximation to $\log f$ and perform HMC on the surrogate function, with only the acceptance/rejection step of HMC based on $f$. This reduces the number of evaluation of the original $f$, and allows to perform MCMC on pdfs that would otherwise too expensive to evaluate. The idea of using surrogates to speed up MCMC has been explored a lot in the past few years, essentially by trying different ways to build the surrogate function and combine it efficiently/adaptively with different MCMC methods (and in a way that preserves the 'correctness' of MCMC sampling). Related to your question, these two very recent papers use advanced machine learning techniques -- random networks ( Zhang et al. 2015 ) or adaptively learnt exponential kernel functions ( Strathmann et al. 2015 ) -- to build the surrogate function. HMC is not the only form of MCMC that can benefit from surrogates. For example, Nishiara et al. (2014) build an approximation of the target density by fitting a multivariate Student's $t$ distribution to the multi-chain state of an ensemble sampler, and use this to perform a generalized form of elliptical slice sampling . These are only examples. In general, a number of distinct ML techniques (mostly in the area of function approximation and density estimation) can be used to extract information that might improve the efficiency of MCMC samplers. Their actual usefulness -- e.g. measured in number of "effective independent samples per second" -- is conditional on $f$ being expensive or somewhat hard to compute; also, many of these methods may require tuning of their own or additional knowledge, restricting their applicability. References: Rasmussen, Carl Edward. "Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian integrals." Bayesian Statistics 7. 2003. Zhang, Cheng, Babak Shahbaba, and Hongkai Zhao. "Hamiltonian Monte Carlo Acceleration using Surrogate Functions with Random Bases." arXiv preprint arXiv:1506.05555 (2015). Strathmann, Heiko, et al. "Gradient-free Hamiltonian Monte Carlo with efficient kernel exponential families." Advances in Neural Information Processing Systems. 2015. Nishihara, Robert, Iain Murray, and Ryan P. Adams. "Parallel MCMC with generalized elliptical slice sampling." Journal of Machine Learning Research 15.1 (2014): 2087-2112. | {
"source": [
"https://stats.stackexchange.com/questions/224330",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/120218/"
]
} |
225,175 | I have a dataset. There are lots of missing values. For some columns, the missing value was replaced with -999, but in other columns, the missing value was marked as 'NA'. Why would we use -999 to replace the missing value? | This is a holdover from earlier times, when computer software stored numerical vectors as plain numerical vectors. No real number has the semantics "I'm missing". So when early statistical software had to differentiate between "true" numbers and missing values, they put in something that was "obviously" not a valid number, like -999 or -9999. Of course, that -999 or -9999 stood for a missing value is not "obvious" at all. Quite often, it can certainly be a valid value. Unless you explicitly check for such values, you can have all kinds of "interesting" errors in your analyses. Nowadays, numerical vectors that can contain missing values are internally represented as "enriched" numerical vectors, i.e., numerical vectors with additional information as to which values are missing. This of course is much better, because then missing values will be treated as such and not mistakenly treated as valid. Unfortunately, some software still uses such a convention, perhaps for compatibility. And some users have soaked up this convention through informal osmosis and enter -999 instead of NA even if their software supports cleanly entering missing values. Moral: don't encode missing values as -999. | {
"source": [
"https://stats.stackexchange.com/questions/225175",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/96184/"
]
} |
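A practical footnote to the answer above (an illustrative sketch): most import functions let you declare such sentinel codes as missing at read time, so they never masquerade as valid numbers. In base R, for example (the file name here is just a placeholder):
d <- read.csv("my_data.csv", na.strings = c("NA", "-999", "-9999"))   # treat sentinel codes as missing on import
colSums(is.na(d))                                                     # missing-value count per column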
225,353 | As the election is a one time event, it is not an experiment that can be repeated. So exactly what does the statement "Hillary has a 75% chance of winning" technically mean? I am seeking a statistically correct definition not an intuitive or conceptual one. I am an amateur statistics fan who is trying to respond to this question that came up in a discussion. I am pretty sure there's a good objective response to it but I can't come up with it myself... | All the answers so far provided are helpful, but they aren't very statistically precise, so I'll take a shot at that. At the same time, I'm going to give a general answer rather than focusing on this election. The first thing to keep in mind when we're trying to answer questions about real-world events like Clinton winning the election, as opposed to made-up math problems like taking balls of various colors out of an urn, is that there isn't a unique reasonable way to answer the question, and hence not a unique reasonable answer. If somebody just says "Hillary has a 75% chance of winning" and doesn't go on to describe their model of the election, the data they used to make their estimates, the results of their model validation, their background assumptions, whether they're referring to the popular vote or the electoral vote, etc., then they haven't really told you what they mean, much less provided enough information for you to evaluate whether their prediction is any good. Besides, it isn't beneath some people to do no data analysis at all and simply draw a precise-sounding number out of thin air. So, what are some procedures a statistician might use to estimate Clinton's chances? Indeed, how might they frame the problem? At a high level, there are various notions of probability itself, two of the most important of which are frequentist and Bayesian. In a frequentist view, a probability represents the limiting frequency of an event over many independent trials of the same experiment, as in the law of large numbers (strong or weak). Even though any particular election is a unique event, its outcome can be seen as a draw from an infinite population of events both historical and hypothetical, which could comprise all American presidential elections, or all elections worldwide in 2016, or something else. A 75% chance of a Clinton victory means that if $X_1, X_2, …$ is a sequence of outcomes (0 or 1) of independent elections that are entirely equivalent to this election so far as our model is concerned, then the sample mean of $X_1, X_2, …, X_n$ converges in probability to .75 as $n$ goes to infinity. In a Bayesian view, a probability represents a degree of believability or credibility (which may or may not be actual belief, depending on whether you're a subjectivist Bayesian). A 75% chance of a Clinton victory means that it is 75% credible she will win. Credibilities, in turn, can be chosen freely (based on a model's or analyst's preexisting beliefs) within the constraints of basic laws of probability (like Bayes's theorem , and the fact that the probability of a joint event cannot exceed the marginal probability of either of the component events). One way to summarize these laws is that if you take bets on the outcome of an event, offering odds to gamblers according to your credibilities, then no gambler can construct a Dutch book against you, that is, a set of bets that guarantees you will lose money no matter how the event actually works out. 
Whether you take a frequentist or Bayesian view on probability, there are still a lot of decisions to be made about how to analyze the data and estimate the probability. Possibly the most popular method is based on parametric regression models, such as linear regression. In this setting, the analyst chooses a parametric family of distributions (that is, probability measures ) that is indexed by a vector of numbers called parameters. Each outcome is an independent random variable drawn from this distribution, transformed according to the covariates, which are known values (such as the unemployment rate) that the analyst wants to use to predict the outcome. The analyst chooses estimates of the parameter values using the data and a criterion of model fit such as least squares or maximum likelihood . Using these estimates, the model can produce a prediction of the outcome (possibly just a single value, possibly an interval or other set of values) for any given value of the covariates. In particular, it can predict the outcome of an election. Besides parametric models, there are nonparametric models (that is, models defined by a family of distributions that is indexed with an infinitely long parameter vector), and also methods of deciding on predicted values that use no model by which the data was generated at all, such as nearest-neighbor classifiers and random forests . Coming up with predictions is one thing, but how do you know whether they're any good? After all, sufficiently inaccurate predictions are worse than useless. Testing predictions is part of the larger practice of model validation, that is, quantifying how good a given model is for a given purpose. Two popular methods for validating predictions are cross-validation and splitting the data into training and testing subsets before fitting any models. To the degree that the elections included in the data are representative of the 2016 US presidential election, the estimates of predictive accuracy we get from validating predictions will inform us how accurate our prediction will be of the 2016 US presidential election. | {
"source": [
"https://stats.stackexchange.com/questions/225353",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/124322/"
]
} |
225,434 | If the data is 1d, the variance shows the extent to which the data points are different from each other. If the data is multi-dimensional, we'll get a covariance matrix. Is there a measure that gives a single number describing how different the data points are from each other in general for multi-dimensional data? I feel that there might be many solutions already, but I'm not sure of the correct term to use to search for them. Maybe I can do something like adding up the eigenvalues of the covariance matrix; does that sound sensible? | (The answer below merely introduces and states the theorem proven in [0]. The beauty in that paper is that most of the arguments are made in terms of basic linear algebra. To answer this question it will be enough to state the main results, but by all means, go check the original source). In any situation where the multivariate pattern of the data can be described by a $k$ -variate elliptical distribution, statistical inference will, by definition, reduce to the problem of fitting (and characterizing) a $k$ -variate location vector (say $\boldsymbol\theta$ ) and a $k\times k$ symmetric semi-positive definite (SPSD) matrix (say $\boldsymbol\varSigma$ ) to the data. For reasons explained below (which are assumed as premises) it will often be more meaningful to decompose $\boldsymbol\varSigma$ into its shape component (a SPSD matrix of the same size as $\boldsymbol\varSigma$ ) accounting for the shape of the density contours of your multivariate distribution, and a scalar $\sigma_S$ expressing the scale of these contours. In univariate data ( $k=1$ ), $\boldsymbol\varSigma$ , the covariance matrix of your data, is a scalar and, as will follow from the discussion below, the shape component of $\boldsymbol\varSigma$ is 1, so that $\boldsymbol\varSigma$ equals its scale component $\boldsymbol\varSigma=\sigma_S$ always and no ambiguity is possible. In multivariate data, there are many possible choices for the scaling function $\sigma_S$ . One in particular ( $\sigma_S=|\pmb\varSigma|^{1/k}$ ) stands out in having a key desirable property, making it the preferred choice of scaling function in the context of elliptical families. Many problems in MV-statistics involve the estimation of a scatter matrix, defined as a function(al) SPSD matrix in $\mathbb{R}^{k\times k}$ ( $\boldsymbol\varSigma$ ) satisfying: $$(0)\quad\boldsymbol\varSigma(\boldsymbol A\boldsymbol X+\boldsymbol b)=\boldsymbol A\boldsymbol\varSigma(\boldsymbol X)\boldsymbol A^\top$$ (for non-singular matrices $\boldsymbol A$ and vectors $\boldsymbol b$ ). For example the classical estimate of covariance satisfies (0), but it is by no means the only one. In the presence of elliptically distributed data, where all the density contours are ellipses defined by the same shape matrix, up to multiplication by a scalar, it is natural to consider normalized versions of $\boldsymbol\varSigma$ of the form: $$\boldsymbol V_S = \boldsymbol\varSigma / S(\boldsymbol\varSigma)$$ where $S$ is a 1-homogeneous function satisfying: $$(1)\quad S(\lambda \boldsymbol\varSigma)=\lambda S(\boldsymbol\varSigma) $$ for all $\lambda>0$ . Then, $\boldsymbol V_S$ is called the shape component of the scatter matrix (in short, the shape matrix) and $\sigma_S=S^{1/2}(\boldsymbol\varSigma)$ is called the scale component of the scatter matrix. Examples of multivariate estimation problems where the loss function only depends on $\boldsymbol\varSigma$ through its shape component $\boldsymbol V_S$ include tests of sphericity, PCA and CCA, among others.
Of course, there are many possible scaling functions, so this still leaves open the question of which (if any) of the several choices of normalization function $S$ are in some sense optimal. For example: $S=\text{tr}(\boldsymbol\varSigma)/k$ (for example the one proposed by @amoeba in his comment below the OP's question as well as in @HelloGoodbye's answer below; see also [1], [2], [3]); $S=|\boldsymbol\varSigma|^{1/k}$ ([4], [5], [6], [7], [8]); $\boldsymbol\varSigma_{11}$ (the first entry of the covariance matrix); $\lambda_1(\boldsymbol\varSigma)$ (the first eigenvalue of $\boldsymbol\varSigma$ ), which is called the spectral norm and is discussed in @Aksakal's answer below. Among these, $S=|\boldsymbol\varSigma|^{1/k}$ is the only scaling function for which the Fisher Information matrix for the corresponding estimates of scale and shape, in locally asymptotically normal families, is block diagonal (that is, the scale and shape components of the estimation problem are asymptotically orthogonal) [0]. This means, among other things, that the scale functional $S=|\boldsymbol\varSigma|^{1/k}$ is the only choice of $S$ for which the non-specification of $\sigma_S$ does not cause any loss of efficiency when performing inference on $\boldsymbol V_S$ . I do not know of any comparably strong optimality characterization for any of the many possible choices of $S$ that satisfy (1).
[0] Paindaveine, D. (2008). A canonical definition of shape. Statistics & Probability Letters 78(14), 2240–2247. Ungated link
[1] Dumbgen, L. (1998). On Tyler's M-functional of scatter in high dimension. Ann. Inst. Statist. Math. 50, 471–491.
[2] Ollila, E., T.P. Hettmansperger, and H. Oja (2004). Affine equivariant multivariate sign methods. Preprint, University of Jyvaskyla.
[3] Tyler, D.E. (1983). Robustness and efficiency properties of scatter matrices. Biometrika 70, 411–420.
[4] Dumbgen, L., and D.E. Tyler (2005). On the breakdown properties of some multivariate M-functionals. Scand. J. Statist. 32, 247–264.
[5] Hallin, M. and D. Paindaveine (2008). Optimal rank-based tests for homogeneity of scatter. Ann. Statist., to appear.
[6] Salibian-Barrera, M., S. Van Aelst, and G. Willems (2006). Principal components analysis based on multivariate MM-estimators with fast and robust bootstrap. J. Amer. Statist. Assoc. 101, 1198–1211.
[7] Taskinen, S., C. Croux, A. Kankainen, E. Ollila, and H. Oja (2006). Influence functions and efficiencies of the canonical correlation and vector estimates based on scatter and shape matrices. J. Multivariate Anal. 97, 359–384.
[8] Tatsuoka, K.S., and D.E. Tyler (2000). On the uniqueness of S-functionals and M-functionals under nonelliptical distributions. Ann. Statist. 28, 1219–1243. | {
"source": [
"https://stats.stackexchange.com/questions/225434",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/95569/"
]
} |
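To connect the answer above back to the original question, here is a small R sketch (illustrative only) computing the competing one-number summaries it discusses for a sample covariance matrix: the trace-based scale tr(S)/k (the mean of the eigenvalues, which is what "adding up the eigenvalues" amounts to after dividing by k), the determinant-based scale |S|^(1/k), and the largest eigenvalue.
set.seed(7)
X <- matrix(rnorm(200 * 3), ncol = 3) %*% matrix(c(2, 0, 0, 1, 1, 0, 0.5, 0.5, 0.5), 3, 3)
S <- cov(X)            # sample covariance (scatter) matrix
k <- ncol(S)
sum(diag(S)) / k       # tr(S)/k : arithmetic mean of the eigenvalues
det(S)^(1 / k)         # |S|^(1/k): geometric mean of the eigenvalues, the scale singled out in [0]
max(eigen(S)$values)   # spectral norm: the largest eigenvalue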
225,734 | This may be a simple question for many, but here it is: Why isn't variance defined as the difference between every value following each other instead of the difference to the average of the values? This would be the more logical choice to me; I guess I'm obviously overlooking some disadvantages. Thanks EDIT: Let me rephrase as clearly as possible. This is what I mean: Assume you have a range of numbers, ordered: 1,2,3,4,5 Calculate and sum up the (absolute) differences (continuously, between every following value, not pairwise) between values (without using the average). Divide by the number of differences. (Follow-up: would the answer be different if the numbers were un-ordered?) -> What are the disadvantages of this approach compared to the standard formula for variance? | The most obvious reason is that there is often no time sequence in the values. So if you jumble the data, it makes no difference in the information conveyed by the data. If we follow your method, then every time you jumble the data you get a different sample variance. The more theoretical answer is that the sample variance estimates the true variance of a random variable. The true variance of a random variable $X$ is
$$E\left[ (X - EX)^2 \right]. $$ Here $E$ represents expectation or "average value". So the definition of the variance is the average squared distance between the variable from its average value. When you look at this definition, there is no "time order" here since there is no data. It is just an attribute of the random variable. When you collect iid data from this distribution, you have realizations $x_1, x_2, \dots, x_n$. The best way to estimate the expectation is to take the sample averages. The key here is that we got iid data, and thus there is no ordering to the data. The sample $x_1, x_2, \dots, x_n$ is the same as the sample $x_2, x_5, x_1, x_n..$ EDIT Sample variance measures a specific kind of dispersion for the sample, the one that measures the average distance from the mean. There are other kinds of dispersion like range of data, and Inter-Quantile range. Even if you sort your values in ascending order, that does not change the characteristics of the sample. The sample (data) you get are realizations from a variable. Calculating the sample variance is akin to understanding how much dispersion is in the variable. So for example, if you sample 20 people, and calculate their height, then those are 20 "realizations" from the random variable $X = $ height of people. Now the sample variance is supposed to measure the variability in the height of individuals in general. If you order the data
$$ 100, 110, 123, 124, \dots,$$ that does not change the information in the sample. Let's look at one more example. Let's say you have 100 observations from a random variable ordered in this way $$1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, ... 100.$$ Then the average subsequent distance is 1 unit, so by your method the variance will be 1. The way to interpret "variance" or "dispersion" is to understand what range of values are likely for the data. In this case you will get a range of .99 unit, which of course does not represent the variation well. If instead of taking the average you just sum the subsequent differences, then your variance will be 99. Of course that does not represent the variability in the sample, because 99 gives you the range of the data, not a sense of variability. | {
"source": [
"https://stats.stackexchange.com/questions/225734",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/107356/"
]
} |
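As a footnote to the answer above, here is a minimal R sketch (data and function name are invented for illustration) contrasting the proposed mean absolute successive difference with the usual sample variance: the former changes when the same values are shuffled and badly understates the spread of 1, 2, ..., 100, while the latter is unaffected by ordering.
x <- 1:100
mean_succ_diff <- function(v) mean(abs(diff(v)))  # the "average successive difference" idea
mean_succ_diff(x)          # 1 for the ordered data
set.seed(1)
mean_succ_diff(sample(x))  # a different, larger value for the same data shuffled
var(x)                     # sample variance, about 841.7, identical for any ordering
var(sample(x))             # same value as var(x)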
225,949 | I've been reading about k-fold validation, and I want to make sure I understand how it works. I know that for the holdout method, the data is split into three sets, and the test set is only used at the very end to assess the performance of the model, while the validation set is used for tuning hyperparameters, etc. In the k-fold method, do we still hold out a test set for the very end, and only use the remaining data for training and hyperparameter tuning, i.e. we split the remaining data into k folds, and then use the average accuracy after training with each fold (or whatever performance metric we choose to tune our hyperparameters)? Or do we not use a separate test set at all, and simply split the entire dataset into k folds (if this is the case, I assume that we just consider the average accuracy on the k folds to be our final accuracy)? | In the K-Fold method, do we still hold out a test set for the very end, and only use the remaining data for training and hyperparameter tuning (ie. we split the remaining data into k folds, and then use the average accuracy after training with each fold (or whatever performance metric we choose) to tune our hyperparameters)? Yes. As a rule, the test set should never be used to change your model (e.g., its hyperparameters). However, cross-validation can sometimes be used for purposes other than hyperparameter tuning, e.g. determining to what extent the train/test split impacts the results. | {
"source": [
"https://stats.stackexchange.com/questions/225949",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/124743/"
]
} |
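A base-R sketch of the workflow described in the answer above — hold out a test set first, tune with k-fold cross-validation on the remaining data, and touch the test set only once at the end. The data, the polynomial-degree "hyperparameter" and all settings below are made up for illustration.
set.seed(42)
n <- 500
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
dat <- data.frame(x, y)
test_idx <- sample(n, 0.2 * n)              # held-out test set, untouched until the end
test <- dat[test_idx, ]; train <- dat[-test_idx, ]
k <- 5
folds <- sample(rep(1:k, length.out = nrow(train)))
degrees <- 1:6                              # the hyperparameter grid being tuned
cv_mse <- sapply(degrees, function(d) {
  mean(sapply(1:k, function(f) {
    fit <- lm(y ~ poly(x, d), data = train[folds != f, ])
    mean((predict(fit, train[folds == f, ]) - train$y[folds == f])^2)
  }))
})
best_d <- degrees[which.min(cv_mse)]        # chosen by CV, never by the test set
final_fit <- lm(y ~ poly(x, best_d), data = train)
mean((predict(final_fit, test) - test$y)^2) # test error, reported once at the very end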
226,109 | How does the randomForest package estimate class probabilities when I use predict(model, data, type = "prob") ? I was using ranger for training random forests using the probability = T argument to predict probabilities. ranger says in its documentation that it: Grow a probability forest as in Malley et al. (2012). I simulated some data and tried both packages and obtained very different results (see code below), so I know that it uses a different technique (than ranger) to estimate probabilities. But which one? simulate_data <- function(n){
X <- data.frame(matrix(runif(n*10), ncol = 10))
Y <- data.frame(Y = rbinom(n, size = 1, prob = apply(X, 1, sum) %>%
pnorm(mean = 5)
) %>%
as.factor()
)
dplyr::bind_cols(X, Y)
}
treino <- simulate_data(10000)
teste <- simulate_data(10000)
library(ranger)
modelo_ranger <- ranger(Y ~., data = treino,
num.trees = 100,
mtry = floor(sqrt(10)),
write.forest = T,
min.node.size = 100,
probability = T
)
modelo_randomForest <- randomForest(Y ~., data = treino,
ntree = 100,
mtry = floor(sqrt(10)),
nodesize = 100
)
pred_ranger <- predict(modelo_ranger, teste)$predictions[,1]
pred_randomForest <- predict(modelo_randomForest, teste, type = "prob")[,2]
prob_real <- apply(teste[,1:10], 1, sum) %>% pnorm(mean = 5)
data.frame(prob_real, pred_ranger, pred_randomForest) %>%
tidyr::gather(pacote, prob, -prob_real) %>%
ggplot(aes(x = prob, y = prob_real)) + geom_point(size = 0.1) + facet_wrap(~pacote) | It's just the proportion of votes of the trees in the ensemble. library(randomForest)
rf = randomForest(Species~., data = iris, norm.votes = TRUE, proximity = TRUE)
p1 = predict(rf, iris, type = "prob")
p2 = predict(rf, iris, type = "vote", norm.votes = TRUE)
identical(p1,p2)
#[1] TRUE Alternatively, if you multiply your probabilities by ntree , you get the same result, but now in counts instead of proportions. p1 = predict(rf, iris, type = "prob")
p2 = predict(rf, iris, type = "vote", norm.votes = FALSE)
identical(500*p1,p2)
#[1] TRUE | {
"source": [
"https://stats.stackexchange.com/questions/226109",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/44359/"
]
} |
226,118 | I am using MLP neural network. My question is for training the neural network and testing it how much splitting of data is needed like is there any rule that I always have to split data 70% for training and 30% for testing when I did this my accuracy was not good as when I split it into 10% for training and 90% for testing I got more accuracy... Is this valid? | It's just the proportion of votes of the trees in the ensemble. library(randomForest)
rf = randomForest(Species~., data = iris, norm.votes = TRUE, proximity = TRUE)
p1 = predict(rf, iris, type = "prob")
p2 = predict(rf, iris, type = "vote", norm.votes = TRUE)
identical(p1,p2)
#[1] TRUE Alternatively, if you multiply your probabilities by ntree , you get the same result, but now in counts instead of proportions. p1 = predict(rf, iris, type = "prob")
p2 = predict(rf, iris, type = "vote", norm.votes = FALSE)
identical(500*p1,p2)
#[1] TRUE | {
"source": [
"https://stats.stackexchange.com/questions/226118",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/123778/"
]
} |
226,553 | When doing regression, for example, two hyper parameters to choose are often the capacity of the function (eg. the largest exponent of a polynomial), and the amount of regularisation. What I'm confused about, is why not just choose a low capacity function, and then ignore any regularisation? In that way, it will not overfit. If I have a high capacity function together with regularisation, isn't that just the same as having a low capacity function and no regularisation? | I recently made a little in browser app that you can use to play with these ideas: Scatterplot Smoothers (*). Here's some data I made up, with a low degree polynomial fit It's clear that the quadratic polynomial is just not flexible enough to give a good fit to the data. We have regions of very high bias, between $0.6$ and $0.85$ all the data is below the fit, and after $0.85$ all the data is above the curve. To rid ourselves of bias, we can increase the degree of the curve to three, but the problem remains, the cubic curve is still too rigid So we continue to increase the degree, but now we incur the opposite problem This curve tracks the data too closely, and has a tendency to fly off in directions not so well borne out by general patterns in the data. This is where regularization comes in. With the same degree curve (ten) and some well chosen regularization We get a really nice fit! It's worth a little focus on one aspect of well chosen above. When you are fitting polynomials to data you have a discrete set of choices for degree. If a degree three curve is underfit and a degree four curve is overfit, you have nowhere to go in the middle. Regularization solves this problem, as it gives you a continuous range of complexity parameters to play with. how do you claim "We get a really nice fit!". For me they all look the same, namely, inconclusive. Which rational are you using to decide what is a nice and a bad fit? Fair point. The assumption I'm making here is that a well fit model should have no discernable pattern in the residuals. Now, I'm not plotting the residuals, so you have to do a little bit of work when looking at the pictures, but you should be able to use your imagination. In the first picture, with the quadratic curve fit to the data, I can see the following pattern in the residuals From 0.0 to 0.3 they are about evenly placed above and below the curve. From 0.3 to about 0.55 all the data points are above the curve. From 0.55 to about 0.85 all the data points are below the curve. From 0.85 on, they are all above the curve again. I'd refer to these behaviours as local bias , there are regions where the curve is not well approximating the conditional mean of the data. Compare this to the last fit, with the cubic spline. I can't pick out any regions by eye where the fit does not look like it's running precisely through the center of mass of the data points. This is generally (though imprecisely) what I mean by a good fit. Final Note : Take all this as illustration. In practice, I do not recommend using polynomial basis expansions for any degree higher than $2$ . Their problems are well discussed elsewhere, but, for example: Their behaviour at the boundaries of your data can be very chaotic,
even with regularization. They are not local in any sense. Changing your data in one place can significantly affect the fit in a very different place. Instead, in a situation like you describe, I recommend using natural cubic splines along with regularization, which give the best compromise between flexibility and stability. You can see for yourself by fitting some splines in the app. (*) I believe this only works in Chrome and Firefox due to my use of some modern javascript features (and overall laziness to fix it in Safari and IE). The source code is here , if you are interested. | {
"source": [
"https://stats.stackexchange.com/questions/226553",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72307/"
]
} |
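A small base-R sketch of the trade-off discussed in the answer above, using a hand-rolled ridge penalty on a polynomial basis (the variable names, degrees and penalty values are invented for illustration): the quadratic is too rigid, the unpenalised degree-10 fit is wiggly, and a penalised degree-10 fit with a reasonably chosen lambda is smoother while keeping the flexibility.
set.seed(1)
x <- sort(runif(60)); y <- sin(2 * pi * x) + rnorm(60, sd = 0.25)
ridge_poly <- function(x, y, degree, lambda) {
  X <- cbind(1, poly(x, degree))            # intercept plus orthogonal polynomial basis
  pen <- diag(c(0, rep(lambda, degree)))    # penalise everything except the intercept
  beta <- solve(t(X) %*% X + pen, t(X) %*% y)
  as.vector(X %*% beta)                     # fitted values
}
fit_rigid  <- ridge_poly(x, y, degree = 2,  lambda = 0)    # underfits: high bias
fit_wiggly <- ridge_poly(x, y, degree = 10, lambda = 0)    # tracks the noise
fit_ridge  <- ridge_poly(x, y, degree = 10, lambda = 0.5)  # flexible basis, damped coefficients
plot(x, y, pch = 16, col = "grey")
lines(x, fit_rigid, col = "blue"); lines(x, fit_wiggly, col = "red"); lines(x, fit_ridge, col = "darkgreen")
Varying lambda continuously is exactly the extra knob the answer describes, which a discrete choice of polynomial degree does not give you.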
226,923 | Why do we use rectified linear units (ReLU) with neural networks? How does that improve neural network? Why do we say that ReLU is an activation function? Isn't softmax activation function for neural networks? I am guessing that we use both, ReLU and softmax, like this: neuron 1 with softmax output ----> ReLU on the output of neuron 1, which is input of neuron 2 ---> neuron 2 with softmax output --> ... so that the input of neuron 2 is basically ReLU(softmax(x1)). Is this correct? | The ReLU function is $f(x)=\max(0, x).$ Usually this is applied element-wise to the output of some other function, such as a matrix-vector product. In MLP usages, rectifier units replace all other activation functions except perhaps the readout layer. But I suppose you could mix-and-match them if you'd like. One way ReLUs improve neural networks is by speeding up training. The gradient computation is very simple (either 0 or 1 depending on the sign of $x$ ). Also, the computational step of a ReLU is easy: any negative elements are set to 0.0 -- no exponentials, no multiplication or division operations. Gradients of logistic and hyperbolic tangent networks are smaller than the positive portion of the ReLU. This means that the positive portion is updated more rapidly as training progresses. However, this comes at a cost. The 0 gradient on the left-hand side is has its own problem, called "dead neurons," in which a gradient update sets the incoming values to a ReLU such that the output is always zero; modified ReLU units such as ELU (or Leaky ReLU, or PReLU, etc.) can ameliorate this. $\frac{d}{dx}\text{ReLU}(x)=1\forall x > 0$ . By contrast, the gradient of a sigmoid unit is at most $0.25$ ; on the other hand, $\tanh$ fares better for inputs in a region near 0 since $0.25 < \frac{d}{dx}\tanh(x) \le 1 \forall x \in [-1.31, 1.31]$ (approximately). | {
"source": [
"https://stats.stackexchange.com/questions/226923",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/125388/"
]
} |
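A few lines of R restating the definitions used in the answer above (purely illustrative): the ReLU, its gradient, and the sigmoid whose derivative never exceeds 0.25.
relu <- function(x) pmax(0, x)
relu_grad <- function(x) as.numeric(x > 0)          # 1 for x > 0, 0 otherwise ("dead" region)
sigmoid <- function(x) 1 / (1 + exp(-x))
sigmoid_grad <- function(x) sigmoid(x) * (1 - sigmoid(x))
z <- seq(-4, 4, by = 0.01)
max(sigmoid_grad(z))   # about 0.25, attained at z = 0
range(relu_grad(z))    # 0 or 1: the positive side passes gradients through unshrunk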
227,034 | This is probably an amateur question, but I am interested in how did the scientists come up with the shape of the normal distribution probability density function? Basically what bugs me is that for someone it would perhaps be more intuitive that the probability function of normally distributed data has a shape of an isosceles triangle rather than a bell curve, and how would you prove to such a person that the probability density function of all normally distributed data has a bell shape? By experiment? Or by some mathematical derivation? After all what do we actually consider normally distributed data? Data that follows the probability pattern of a normal distribution, or something else? Basically my question is why does the normal distribution probability density function has a bell shape and not any other? And how did scientists figure out on which real life scenarios can the normal distribution be applied, by experiment or by studying the nature of various data itself? So I've found this link to be really helpful in explaining the derivation of the functional form of the normal distribution curve, and thus answering the question "Why does the normal distribution look like it does and not anything else?". Truly mindblowing reasoning, at least for me. | " The Evolution of the Normal Distribution " by SAUL STAHL is the best source of information to answer pretty much all the questions in your post. I'll recite a few points for your convenience only, because you'll find the detailed discussion inside the paper. This is probably an amateur question No, it's an interesting question to anyone who uses statistics, because this is not covered in detail anywhere in standard courses. Basically what bugs me is that for someone it would perhaps be more intuitive that the probability function of normally distributed data has a shape of an isosceles triangle rather than a bell curve, and how would you prove to such a person that the probability density function of all normally distributed data has a bell shape? Look at this picture from the paper. It shows the error curves that Simpson came up with before Gaussian (Normal) was discovered to analyze experimental data. So, your intuition is spot on. By experiment? Yes, that's why they were called "error curves". The experiment was astronomical measurements. Astronomers struggled with measurement errors for centuries. Or by some mathematical derivation? Again, YES! Long story short: the analysis of errors in astronomical data led Gauss to his (aka Normal) distribution. These are the assumptions he used: By the way, Laplace used a few different approaches, and also came up with his distribution too while working with astronomical data: As to why normal distribution shows in experiment as measurement errors, here's a typical "hand-wavy" explanation physicist are used to give (a quote from Gerhard Bohm, Günter Zech, Introduction to Statistics and Data
Analysis for Physicists p.85): Many experimental signals follow to a very good approximation a normal
distribution. This is due to the fact that they consist of the sum of
many contributions and a consequence of the central limit theorem. | {
"source": [
"https://stats.stackexchange.com/questions/227034",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/125486/"
]
} |
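The closing point of the answer above (measurement errors as sums of many small contributions) is easy to see in a quick simulation; the settings below are arbitrary.
set.seed(2016)
n_sim <- 10000
n_terms <- 50
errors <- replicate(n_sim, sum(runif(n_terms, min = -0.5, max = 0.5)))  # 50 small non-normal contributions
hist(errors, breaks = 50, freq = FALSE, main = "Sum of 50 uniform contributions")
curve(dnorm(x, mean = 0, sd = sqrt(n_terms / 12)), add = TRUE, lwd = 2) # normal curve predicted by the CLT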
227,088 | I had an online course, where I learned, that unbalanced classes in the training data might lead to problems, because classification algorithms go for the majority rule, as it gives good results if the unbalance is too much. In an assignment one had to balance the data via undersampling the majority class. In this blog however, someone claims that balanced data is even worse: https://matloff.wordpress.com/2015/09/29/unbalanced-data-is-a-problem-no-balanced-data-is-worse/ So which one is it? Should I balance the data or not? Does it depend on the algorithm used, as some might be able to adept to the unbalanced proportions of classes? If so, which ones are reliable on unbalanced data? | The intuitive reasoning has been explained in the blogpost: If our goal is Prediction, this will cause a definite bias. And worse,
it will be a permanent bias, in the sense that we will not have
consistent estimates as the sample size grows. So, arguably the problem of (artificially) balanced data is worse than
the unbalanced case. Balanced data are good for classification, but you obviously lose information about appearance frequencies, which is going to affect the accuracy metrics themselves, as well as production performance. Let's say you're recognizing hand-written letters from the English alphabet (26 letters). Overbalancing every letter's appearance will give every letter a probability of being classified (correctly or not) of roughly 1/26, so the classifier will forget about the actual distribution of letters in the original sample. And that's fine when the classifier is able to generalize and recognize every letter with high accuracy. But if accuracy and, most importantly, generalization aren't "so high" (I can't give you a definition - you can think of it just as a "worst case"), the misclassified points will most likely be distributed roughly equally among all letters, something like: "A" was misclassified 10 times
"B" was misclassified 10 times
"C" was misclassified 11 times
"D" was misclassified 10 times
...and so on As opposed to without balancing (assuming that "A" and "C" have much higher probabilities of appearance in text) "A" was misclassified 3 times
"B" was misclassified 14 times
"C" was misclassified 3 times
"D" was misclassified 14 times
...and so on So frequent cases will get fewer misclassifications. Whether it's good or not depends on your task. For natural text recognition, one could argue that letters with higher frequencies are more viable, as they would preserve semantics of the original text, bringing the recognition task closer to prediction (where semantics represent tendencies ). But if you're trying to recognize something like screenshot of ECDSA-key (more entropy -> less prediction) - keeping data unbalanced wouldn't help. So, again, it depends. The most important distinction is that the accuracy estimate is, itself, getting biased (as you can see in the balanced alphabet example), so you don't know how the model's behavior is getting affected by most rare or most frequent points. P.S. You can always track performance of unbalanced classification with Precision/Recall metrics first and decide whether you need to add balancing or not. EDIT : There is additional confusion that lies in estimation theory precisely in the difference between sample mean and population mean. For instance, you might know (arguably) actual distribution of English letters in the alphabet $p(x_i | \theta)$ , but your sample (training set) is not large enough to estimate it correctly (with $p(x_i | \hat \theta)$ ). So in order to compensate for a $\hat \theta_i - \theta_i$ , it is sometimes recommended to rebalance classes according to either population itself or parameters known from a larger sample (thus better estimator). However, in practice there is no guarantee that "larger sample" is identically distributed due to risk of getting biased data on every step (let's say English letters collected from technical literature vs fiction vs the whole library) so balancing could still be harmful. This answer should also clarify applicability criteria for balancing: The class imbalance problem is caused by there not being enough
patterns belonging to the minority class, not by the ratio of positive
and negative patterns itself per se. Generally if you have enough data, the "class imbalance problem" doesn't arise As a conclusion, artificial balancing is rarely useful if training set is large enough. Absence of statistical data from a larger identically distributed sample also suggests no need for artificial balancing (especially for prediction), otherwise the quality of estimator is as good as "probability to meet a dinosaur": What is the probability to meet a dinosaur out in the street? 1/2 you either meet a dinosaur or you do not meet a dinosaur | {
"source": [
"https://stats.stackexchange.com/questions/227088",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/124558/"
]
} |
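A small simulation in the spirit of the answer and the quoted blog post (all numbers are invented): after artificially balancing a rare-event data set by undersampling, a logistic regression's predicted probabilities no longer reflect the true prevalence, which is the permanent bias being warned about.
set.seed(123)
n <- 20000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-3 + x))            # true event rate around 5%
full <- data.frame(x, y)
fit_full <- glm(y ~ x, family = binomial, data = full)
pos <- which(y == 1); neg <- sample(which(y == 0), length(pos))
bal <- full[c(pos, neg), ]                   # undersampled to a 50/50 split
fit_bal <- glm(y ~ x, family = binomial, data = bal)
mean(y)                                              # true prevalence
mean(predict(fit_full, full, type = "response"))     # close to the true prevalence
mean(predict(fit_bal, full, type = "response"))      # far too high after balancing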
228,763 | Regularization using methods such as Ridge, Lasso, ElasticNet is quite common for linear regression. I wanted to know the following:
Are these methods applicable for logistic regression? If so, are there any differences in the way they need to be used for logistic regression? If these methods are not applicable, how does one regularize a logistic regression? | Yes, Regularization can be used in all linear methods, including both regression and classification. I would like to show you that there are not too much difference between regression and classification: the only difference is the loss function. Specifically, there are three major components of linear method, Loss Function, Regularization, Algorithms . Where loss function plus regularization is the objective function in the problem in optimization form and the algorithm is the way to solve it (the objective function is convex, we will not discuss in this post). In loss function setting, we can have different loss in both regression and classification cases. For example, Least squares and least absolute deviation loss can be used for regression. And their math representation are $L(\hat y,y)=(\hat y -y)^2$ and $L(\hat y,y)=|\hat y -y|$. (The function $L( \cdot ) $ is defined on two scalar, $y$ is ground truth value and $\hat y$ is predicted value.) On the other hand, logistic loss and hinge loss can be used for classification. Their math representations are $L(\hat y, y)=\log (1+ \exp(-\hat y y))$ and $L(\hat y, y)= (1- \hat y y)_+$. (Here, $y$ is the ground truth label in $\{-1,1\}$ and $\hat y$ is predicted "score". The definition of $\hat y$ is a little bit unusual, please see the comment section.) In regularization setting, you mentioned about the L1 and L2 regularization, there are also other forms, which will not be discussed in this post. Therefore, in a high level a linear method is $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} L(w^{\top} x,y)+\lambda h(w)$$ If you replace the Loss function from regression setting to logistic loss, you get the logistic regression with regularization. For example, in ridge regression, the optimization problem is $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} (w^{\top} x-y)^2+\lambda w^\top w$$ If you replace the loss function with logistic loss, the problem becomes $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} \log(1+\exp{(-w^{\top}x \cdot y)})+\lambda w^\top w$$ Here you have the logistic regression with L2 regularization. This is how it looks like in a toy synthesized binary data set. The left figure is the data with the linear model (decision boundary). The right figure is the objective function contour (x and y axis represents the values for 2 parameters.). The data set was generated from two Gaussian, and we fit the logistic regression model without intercept, so there are only two parameters we can visualize in the right sub-figure. The blue lines are the logistic regression without regularization and the black lines are logistic regression with L2 regularization. The blue and black points in right figure are optimal parameters for objective function. In this experiment, we set a large $\lambda$, so you can see two coefficients are close to $0$. In addition, from the contour, we can observe the regularization term is dominated and the whole function is like a quadratic bowl. Here is another example with L1 regularization. Note that, the purpose of this experiment is trying to show how the regularization works in logistic regression, but not argue regularized model is better. Here are some animations about L1 and L2 regularization and how it affects the logistic loss objective. 
In each frame, the title suggests the regularization type and $\lambda$, the plot is objective function (logistic loss + regularization) contour. We increase the regularization parameter $\lambda$ in each frame and the optimal solution will shrink to $0$ frame by frame. Some notation comments. $w$ and $x$ are column vectors,$y$ is a scalar. So the linear model $\hat y = f(x)=w^\top x$. If we want to include the intercept term, we can append $1$ as a column to the data. In regression setting, $y$ is a real number and in classification setting $y \in \{-1,1\}$. Note it is a little bit strange for the definition of $\hat y=w^{\top} x$ in classification setting. Since most people use $\hat y$ to represent a predicted value of $y$. In our case, $\hat y = w^{\top} x$ is a real number, but not in $\{-1,1\}$. We use this definition of $\hat y$ because we can simplify the notation on logistic loss and hinge loss. Also note that, in some other notation system, $y \in \{0,1\}$, the form of the logistic loss function would be different. The code can be found in my other answer here. Is there any intuitive explanation of why logistic regression will not work for perfect separation case? And why adding regularization will fix it? | {
"source": [
"https://stats.stackexchange.com/questions/228763",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60606/"
]
} |
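For completeness, a minimal example of the penalised logistic regression described above using the glmnet package (alpha = 0 gives the L2/ridge penalty, alpha = 1 the L1/lasso penalty); the simulated data are only for illustration.
library(glmnet)
set.seed(7)
n <- 200; p <- 10
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))
fit_ridge <- glmnet(X, y, family = "binomial", alpha = 0)  # logistic loss + L2 penalty
fit_lasso <- glmnet(X, y, family = "binomial", alpha = 1)  # logistic loss + L1 penalty
cv <- cv.glmnet(X, y, family = "binomial", alpha = 1)      # pick lambda by cross-validation
coef(cv, s = "lambda.min")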
229,619 | At what point do we start classifying multi layered neural networks as deep neural networks or to put it in another way 'What is the minimum number of layers in a deep neural network?' | "Deep" is a marketing term: you can therefore use it whenever you need to market your multi-layered neural network. | {
"source": [
"https://stats.stackexchange.com/questions/229619",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/127022/"
]
} |
230,097 | I am looking at some lecture slides on a data science course which can be found here: https://github.com/cs109/2015/blob/master/Lectures/01-Introduction.pdf I, unfortunately, cannot see the video for this lecture and at one point on the slide, the presenter has the following text: Some Key Principles Think like a Bayesian, check like a Frequentist (reconciliation) Does anyone know what that actually means? I have a feeling there is a good insight about these two schools of thought to be gathered from this. | The main difference between the Bayesian and frequentist schools of statistics arises due to a difference in interpretation of probability. A Bayesian probability is a statement about personal belief that an event will (or has) occurred. A frequentist probability is a statement about the proportion of similar events that occur in the limit as the number of those events increases. For me, to "think like a Bayesian" means to update your personal belief as new information arises and to "check [or worry] like a frequentist" means to be concerned with performance of statistical procedures aggregated across the times those procedures are used, e.g. what is the coverage of credible intervals, what is the Type I/II error rates, etc. | {
"source": [
"https://stats.stackexchange.com/questions/230097",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36540/"
]
} |
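One way to turn the slogan into code (an illustrative simulation, not taken from the course): form Bayesian credible intervals for a binomial proportion, then check their frequentist coverage over repeated data sets.
set.seed(1)
n <- 30; true_p <- 0.3; n_rep <- 5000
covered <- replicate(n_rep, {
  x <- rbinom(1, n, true_p)
  ci <- qbeta(c(0.025, 0.975), x + 1, n - x + 1)  # 95% credible interval under a Beta(1, 1) prior
  ci[1] <= true_p && true_p <= ci[2]
})
mean(covered)  # long-run coverage of the Bayesian interval, close to 0.95 here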
230,415 | It says on Wikipedia that: the mathematics [of probability] is largely independent of any interpretation of probability. Question: Then if we want to be mathematically correct, shouldn't we disallow any interpretation of probability? I.e., are both Bayesian and frequentism mathematically incorrect? I don't like philosophy, but I do like math, and I want to work exclusively within the framework of Kolmogorov's axioms. If this is my goal, should it follow from what it says on Wikipedia that I should reject both Bayesianism and frequentism? If the concepts are purely philosophical and not at all mathematical, then why do they appear in statistics in the first place? Background/Context: This blog post doesn't quite say the same thing, but it does argue that attempting to classify techniques as "Bayesian" or "frequentist" is counter-productive from a pragmatic perspective. If the quote from Wikipedia is true, then it seems like from a philosophical perspective attempting to classify statistical methods is also counter-productive -- if a method is mathematically correct, then it is valid to use the method when the assumptions of the underlying mathematics hold, otherwise, if it is not mathematically correct or if the assumptions do not hold, then it is invalid to use it. On the other hand, a lot of people seem to identify "Bayesian inference" with probability theory (i.e. Kolmogorov's axioms), although I'm not quite sure why. Some examples are Jaynes's treatise on Bayesian inference called "Probability", as well as James Stone's book "Bayes' Rule". So if I took these claims at face value, that means I should prefer Bayesianism. However, Casella and Berger's book seems like it is frequentist because it discusses maximum likelihood estimators but ignores maximum a posteriori estimators, but it also seems like everything therein is mathematically correct. So then wouldn't it follow that the only mathematically correct version of statistics is that which refuses to be anything but entirely agnostic with respect to Bayesianism and frequentism? If methods with both classifications are mathematically correct, then isn't it improper practice to prefer some over the others, because that would be prioritizing vague, ill-defined philosophy over precise, well-defined mathematics? Summary: In short, I don't understand what the mathematical basis is for the Bayesian versus frequentist debate, and if there is no mathematical basis for the debate (which is what Wikipedia claims), I don't understand why it is tolerated at all in academic discourse. | Stats is not Math First, I steal @whuber's words from a comment in Stats is not maths? (applied in a different context, so I'm stealing words, not citing): If you were to replace "statistics" by "chemistry," "economics," "engineering," or any other field that employs mathematics (such as home economics), it appears none of your argument would change. All these fields are allowed to exist and to have questions that are not solved only by checking which theorems are correct. Though some answers at Stats is not maths? disagree, I think it is clear that statistics is not (pure) mathematics. If you want to do probability theory, a branch of (pure) mathematics, you may indeed ignore all debates of the kind you ask about. If you want to apply probability theory into modeling some real-world questions, you need something more to guide you than just the axioms and theorems of the mathematical framework. The remainder of the answer is rambling about this point. 
The claim "if we want to be mathematically correct, shouldn't we disallow any interpretation of probability" also seems unjustified. Putting an interpretation on top of a mathematical framework does not make the mathematics incorrect (as long as the interpretation is not claimed to be a theorem in the mathematical framework). The debate is not (mainly) about axioms Though there are some alternative axiomatizations*, the(?) debate is not about disputing Kolmogorov axioms. Ignoring some subtleties with zero-measure conditioning events, leading to regular conditional probability etc., about which I don't know enough, the Kolmogorov axioms and conditional probability imply the Bayes rule, which no-one disputes. However, if $X$ is not even a random variable in your model (model in the sense of the mathematical setup consisting of a probability space or a family of them, random variables, etc.), it is of course not possible to compute the conditional distribution $P(X\mid Y)$. No-one also disputes that the frequency properties, if correctly computed, are consequences of the model. For example, the conditional distributions $p(y\mid \theta)$ in a Bayesian model define an indexed family of probability distributions $p(y; \theta)$ by simply letting $p(y \mid \theta) = p(y; \theta)$ and if some results hold for all $\theta$ in the latter, they hold for all $\theta$ in the former, too. The debate is about how to apply the mathematics The debates (as much as any exist**), are instead about how to decide what kind of probability model to set up for a (real-life, non-mathematical) problem and which implications of the model are relevant for drawing (real-life) conclusions. But these questions would exist even if all statisticians agreed. To quote from the blog post you linked to [1], we want to answer questions like How should I design a roulette so my casino makes $? Does this fertilizer increase crop yield? Does streptomycin cure pulmonary tuberculosis? Does smoking cause cancer? What movie would would this user enjoy? Which baseball player should the Red Sox give a contract to? Should this patient receive chemotherapy? The axioms of probability theory do not even contain a definition of baseball, so it is obvious that "Red Sox should give a contract to baseball player X" is not a theorem in probability theory. Note about mathematical justifications of the Bayesian approach There are 'mathematical justifications' for considering all unknowns as probabilistic such as the Cox theorem that Jaynes refers to, (though I hear it has mathematical problems, that may or not have been fixed, I don't know, see [2] and references therein) or the (subjective Bayesian) Savage approach (I've heard this is in [3] but haven't ever read the book) that proves that under certain assumptions, a rational decision-maker will have a probability distribution over states of world and select his action based on maximizing the expected value of a utility function. However, whether or not the manager of Red Sox should accept the assumptions, or whether we should accept the theory that smoking causes cancer, cannot be deduced from any mathematical framework, so the debate cannot be (only) about the correctness of these justifications as theorems. Footnotes *I have not studied it, but I've heard de Finetti has an approach where conditional probabilities are primitives rather than obtained from the (unconditional) measure by conditioning. 
[4] mentions a debate between (Bayesians) José Bernardo, Dennis Lindley and Bruno de Finetti in a cosy French restaurant about whether $\sigma$-additivity is needed. **as mentioned in the blog post you link to [1], there might be no clear cut debate with every statistician belonging to one team and despising the other team. I have heard it said that we are all pragmatics nowadays and the useless debate is over. However, in my experience these differences exist in, for example, whether someone's first approach is to model all unknowns as random variables or not and how interested someone is in frequency guarantees. References [1] Simply Statistics, a statistical blog by Rafa Irizarry, Roger Peng, and Jeff Leek, "I declare the Bayesian vs. Frequentist debate over for data scientists", 13 Oct 2014, http://simplystatistics.org/2014/10/13/as-an-applied-statistician-i-find-the-frequentists-versus-bayesians-debate-completely-inconsequential/ [2] Dupré, M. J., & Tipler, F. J. (2009). New axioms for rigorous Bayesian probability. Bayesian Analysis, 4(3), 599-606. http://projecteuclid.org/download/pdf_1/euclid.ba/1340369856 [3] Savage, L. J. (1972). The foundations of statistics. Courier Corporation. [4] Bernardo, J.M. The Valencia Story - Some details of the origin and development of the Valencia International Meetings on Bayesian Statistics. http://www.uv.es/bernardo/ValenciaStory.pdf | {
"source": [
"https://stats.stackexchange.com/questions/230415",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113090/"
]
} |
230,750 | When should we discretize/bin independent variables/features and when should not? My attempts to answer the question: In general, we should not bin, because binning will lose information. Binning is actually increasing the degree of freedom of the model, so, it is possible to cause over-fitting after binning. If we have a "high bias" model, binning may not be bad, but if we have a "high variance" model, we should avoid binning. It depends on what model we are using. If it is a linear mode, and data has a lot of "outliers" binning probability is better. If we have a tree model, then, outlier and binning will make too much difference. Am I right? and what else? I thought this question should be asked many times but I cannot find it in CV only these posts Should we bin continuous variables? What is the benefit of breaking up a continuous predictor variable? | Aggregation is substantively meaningful (whether or not the researcher is aware of that). One should bin data, including independent variables, based on the data itself when one wants: To hemorrhage statistical power. To bias measures of association. A literature starting, I believe, with Ghelke and Biehl (1934—definitely worth a read, and suggestive of some easy enough computer simulations that one can run for one's self), and continuing especially in the 'modifiable areal unit problem' literature (Openshaw, 1983; Dudley, 1991; Lee and Kemp, 2000) makes both these points clear. Unless one has an a priori theory of the scale of aggregation (how many units to aggregate to) and the categorization function of aggregation (which individual observations will end up in which aggregate units), one should not aggregate. For example, in epidemiology, we care about the health of individuals , and about the health of populations . The latter are not simply random collections of the former, but defined by, for example, geopolitical boundaries, social circumstances like race-ethnic categorization, carceral status and history categories, etc. (See, for example Krieger, 2012) References Dudley, G. (1991). Scale, aggregation, and the modifiable areal unit problem . [pay-walled] The Operational Geographer, 9(3):28–33. Gehlke, C. E. and Biehl, K. (1934). Certain Effects of Grouping Upon the Size of the Correlation Coefficient in Census Tract Material . [pay-walled] Journal of the American Statistical Association , 29(185):169–170. Krieger, N. (2012). Who and what is a “population”? historical debates, current controversies, and implications for understanding “population health” and rectifying health inequities . The Milbank Quarterly , 90(4):634–681. Lee, H. T. K. and Kemp, Z. (2000). Hierarchical reasoning and on-line analytical processing of spatial and temporal data . In Proceedings of the 9th International Symposium on Spatial Data Handling , Beijing, P.R. China. International Geographic Union. Openshaw, S. (1983). The modifiable areal unit problem. Concepts and Techniques in Modern Geography . Geo Books, Norwich, UK. | {
"source": [
"https://stats.stackexchange.com/questions/230750",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113777/"
]
} |
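A tiny simulation sketch of the loss of statistical power mentioned in the answer above (sample size and effect size are invented): dichotomising a continuous predictor at its median makes the association harder to detect.
set.seed(10)
sim_once <- function(n = 100, beta = 0.3) {
  x <- rnorm(n)
  y <- beta * x + rnorm(n)
  x_bin <- as.numeric(x > median(x))                  # median split
  c(cont = summary(lm(y ~ x))$coefficients[2, 4],     # p-value with the continuous predictor
    bin  = summary(lm(y ~ x_bin))$coefficients[2, 4]) # p-value with the binned predictor
}
pvals <- replicate(2000, sim_once())
rowMeans(pvals < 0.05)  # rejection rate (power) is noticeably lower for the binned version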
231,285 | My understanding is that in machine learning it can be a problem if your dataset has highly correlated features, as they effectively encode the same information. Recently someone pointed out that when you do one-hot encoding on a categorical variable you end up with correlated features, so you should drop one of them as a "reference". For example, encoding gender as two variables, is_male and is_female , produces two features which are perfectly negatively correlated, so they suggested just using one of them, effectively setting the baseline to say male, and then seeing if the is_female column is important in the predictive algorithm. That made sense to me but I haven't found anything online to suggest this may be the case, so is this wrong or am I missing something? Possible (unanswered) duplicate: Does collinearity of one-hot encoded features matter for SVM and LogReg? | This depends on the models (and maybe even software) you want to use. With linear regression, or generalized linear models estimated by maximum likelihood (or least squares) (in R this means using functions lm or glm ), you need to leave out one column. Otherwise you will get a message about some columns "left out because of singularities" $^\dagger$ . But if you estimate such models with regularization , for example ridge, lasso er the elastic net, then you should not leave out any columns. The regularization takes care of the singularities, and more important, the prediction obtained may depend on which columns you leave out. That will not happen when you do not use regularization $^\ddagger$ . See the answer at How to interpret coefficients of a multinomial elastic net (glmnet) regression which supports this view (with a direct quote from one of the authors of glmnet ). With other models, use the same principles. If the predictions obtained depends on which columns you leave out, then do not do it. Otherwise it is fine. So far, this answer only mentions linear (and some mildly non-linear) models. But what about very non-linear models, like trees and randomforests? The ideas about categorical encoding, like one-hot, stems mainly from linear models and extensions. There is little reason to think that ideas derived from that context should apply without modification for trees and forests! For some ideas see Random Forest Regression with sparse data in Python . $^\dagger$ But, using factor variables, R will take care of that for you. $^\ddagger$ Trying to answer extra question in comment: When using regularization, most often iterative methods are used (as with lasso or elasticnet) which do not need matrix inversion, so that the design matrix do not have full rank is not a problem. With ridge regularization, matrix inversion may be used, but in that case the regularization term added to the matrix before inversion makes it invertible. That is a technical reason, a more profound reason is that removing one column changes the optimization problem , it changes the meaning of the parameters, and it will actually lead to different optimal solutions . As a concrete example, say you have a categorical variable with three levels, 1,2 and 3. The corresponding parameters is $\beta_, \beta_2, \beta_3$ . Leaving out column 1 leads to $\beta_1=0$ , while the other two parameters change meaning to $\beta_2-\beta_1, \beta_3-\beta_1$ . So those two differences will be shrunk. If you leave out another column, other contrasts in the original parameters will be shrunk. 
So this changes the criterion function being optimized, and there is no reason to expect equivalent solutions! If this is not clear enough, I can add a simulated example (but not today). | {
"source": [
"https://stats.stackexchange.com/questions/231285",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/94687/"
]
} |
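A short base-R illustration of the unregularised case discussed above (toy data): with an intercept plus both gender dummies the design matrix is singular, so lm() returns one coefficient as NA, which is exactly why one level is normally left out; R's factor handling does this automatically.
set.seed(3)
n <- 100
gender <- factor(sample(c("male", "female"), n, replace = TRUE))
y <- 1 + 0.5 * (gender == "female") + rnorm(n)
is_male <- as.numeric(gender == "male")
is_female <- as.numeric(gender == "female")
coef(lm(y ~ is_male + is_female))  # one dummy comes back NA: perfectly collinear with the intercept
coef(lm(y ~ gender))               # the factor coding drops a reference level for you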
231,292 | Decision stump is a decision tree with only one split. It can also be written as a piecewise function. For example, assume $x$ is a vector, and $x_1$ is the first component of $x$, in regression setting, some decision stump can be $f(x)= \begin{cases}
3& x_1\leq 2 \\
5 & x_1 > 2 \\
\end{cases}
$ But is it a linear model, i.e., one that can be written as $f(x)=\beta^T x$? This question may sound strange because, as mentioned in the answers and comments, if we plot the piecewise function it is not a line. Please see the next section for why I am asking this question. EDIT: The reason I ask is that logistic regression is a (generalized) linear model and its decision boundary is a line, as it is for a decision stump. Note, we also have this question: Why is logistic regression a linear model? . On the other hand, it does not seem true that a decision stump is a linear model. Another reason I asked this is because of this question: In boosting, if the base learner is a linear model, does the final model is just a simple linear model? where, if we use a linear model as a base learner, we get nothing more than linear regression. But if we choose a decision stump as the base learner, we get a very interesting model. Here is one example of decision stump boosting on regression with 2 features and 1 continuous response. | No, unless you transform the data. It is a linear model if you transform $x$ using an indicator function:
$$
x' = \mathbb{I}\left(\{x>2\}\right) = \begin{cases} 0, & x\leq 2\\ 1, & x>2 \end{cases}
$$ Then $f(x) = 2x' + 3 = \left(\matrix{3 \\2}\right)^T \left(\matrix{1 \\x'}\right)$ Edit: this was mentioned in the comments but I want to emphasize it here as well. Any function that partitions the data into two pieces can be transformed into a linear model of this form, with an intercept and a single input (an indicator of which "side" of the partition the data point is on). It is important to take note of the difference between a decision function and a decision boundary . | {
"source": [
"https://stats.stackexchange.com/questions/231292",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113777/"
]
} |
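The transformation in the answer above written out as a few lines of R (toy numbers): after creating the indicator, the stump is an ordinary linear model in x'.
x <- seq(0, 4, by = 0.5)
f_stump <- ifelse(x <= 2, 3, 5)           # the piecewise-constant stump on x_1
x_prime <- as.numeric(x > 2)              # indicator transform
fit <- lm(f_stump ~ x_prime)
coef(fit)                                 # intercept 3, slope 2, i.e. f = 3 + 2 x'
all.equal(unname(fitted(fit)), f_stump)   # the linear model reproduces the stump exactly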
231,425 | In statistics, does independent and random describe the same characteristics? What's the difference between them? We often come across the description like "two independent random variables" or "random sampling". I am wondering what's the exact difference between them. Can someone explain this and give some examples? for instance non-independent but random process? | I'll try to explain it in non-technical terms: A random variable describes an outcome of an experiment; you can not know in advance what the exact outcome will be but you have some information: you know which outcomes are possible and you know, for each outcome, its probability. For example, if you toss a fair coin then you do not know in advance whether you will get head or tail, but you know that these are the possible outcomes and you know that each has 50% chance of occurrence. To explain independence you have to toss two fair coins. After tossing the first coin you know that for the second toss the probabilities of head is still 50% and for tail also. If the first toss has no influence on the probabilities of the second one then both tosses are independent. If the first toss has an influence on the probabilities of the second toss then they are dependent. An example of dependent tosses is when you glue the two coins together. | {
"source": [
"https://stats.stackexchange.com/questions/231425",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13702/"
]
} |
232,056 | Standard Gradient Descent would compute gradient for the entire training dataset. for i in range(nb_epochs):
params_grad = evaluate_gradient(loss_function, data, params)
params = params - learning_rate * params_grad For a pre-defined number of epochs, we first compute the gradient vector weights_grad of the loss function for the whole dataset w.r.t. our parameter vector params. Stochastic Gradient Descent in contrast performs a parameter update for each training example x(i) and label y(i). for i in range(nb_epochs):
np.random.shuffle(data)
for example in data:
params_grad = evaluate_gradient(loss_function, example, params)
params = params - learning_rate * params_grad SGD is said to be much faster. However, I do not understand how it can be much faster if we still have a loop over all data points. Is the computation of the gradient in GD much slower than the computation of the gradient for each data point separately? Code comes from here . | Short answer: In many big-data settings (say several million data points), calculating the cost or the gradient takes a very long time, because we need to sum over all data points. We do NOT need the exact gradient to reduce the cost in a given iteration; some approximation of the gradient works fine. Stochastic gradient descent (SGD) approximates the gradient using only one data point, so evaluating the gradient saves a lot of time compared to summing over all the data. With a "reasonable" number of iterations (perhaps a couple of thousand, which is much less than the number of data points, which may be millions), stochastic gradient descent may reach a reasonably good solution. Long answer: My notation follows Andrew Ng's machine learning Coursera course. If you are not familiar with it, you can review the lecture series here . Let's assume regression with squared loss; the cost function is \begin{align}
J(\theta)= \frac 1 {2m} \sum_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})^2
\end{align} and the gradient is \begin{align}
\frac {d J(\theta)}{d \theta}= \frac 1 {m} \sum_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})x^{(i)}
\end{align} For gradient descent (GD), we update the parameter by \begin{align}
\theta_{new} &=\theta_{old} - \alpha \frac 1 {m} \sum_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})x^{(i)}
\end{align} For stochastic gradient descent we get rid of the sum and the $1/m$ constant and use only the gradient for the current data point $x^{(i)},y^{(i)}$, which is where the time saving comes from. \begin{align}
\theta_{new}=\theta_{old} - \alpha \cdot (h_{\theta}(x^{(i)})-y^{(i)})x^{(i)}
\end{align} Here is why we are saving time: suppose we have 1 billion data points. In GD, in order to update the parameters once, we need the (exact) gradient, which requires summing over these 1 billion data points to perform 1 update. In SGD, we can think of it as trying to get an approximated gradient instead of the exact gradient. The approximation comes from one data point (or several data points, called a mini-batch). Therefore, in SGD, we can update the parameters very quickly. In addition, if we "loop" over all the data (called one epoch), we actually perform 1 billion updates. The trick is that, in SGD, you do not need 1 billion iterations/updates, but many fewer, say 1 million, and you will have a "good enough" model to use. Here is some code to demo the idea. We first solve the linear system by the normal equations, then solve it with SGD. Then we compare the results in terms of parameter values and final objective function values. In order to visualize it later, we will have 2 parameters to tune. set.seed(0);n_data=1e3;n_feature=2;
A=matrix(runif(n_data*n_feature),ncol=n_feature)
b=runif(n_data)
res1=solve(t(A) %*% A, t(A) %*% b)
sq_loss<-function(A,b,x){
e=A %*% x -b
v=crossprod(e)
return(v[1])
}
sq_loss_gr_approx<-function(A,b,x){
# note, in GD, we need to sum over all data
# here i is just one random index sample
i=sample(1:n_data, 1)
gr=2*(crossprod(A[i,],x)-b[i])*A[i,]
return(gr)
}
x=runif(n_feature)
alpha=0.01
N_iter=300
loss=rep(0,N_iter)
for (i in 1:N_iter){
x=x-alpha*sq_loss_gr_approx(A,b,x)
loss[i]=sq_loss(A,b,x)
} The results: as.vector(res1)
[1] 0.4368427 0.3991028
x
[1] 0.3580121 0.4782659 Note that although the parameters are not too close, the loss values are $124.1343$ and $123.0355$, which are very close. Here are the cost function values over the iterations; we can see the algorithm effectively decreases the loss, which illustrates the idea: we can use a subset of the data to approximate the gradient and get "good enough" results. Now let's compare the computational effort of the two approaches. In the experiment we have $1000$ data points; using GD, evaluating the gradient once needs to sum over all of them. BUT in SGD, the sq_loss_gr_approx function only uses 1 data point, and overall we see the algorithm converges in fewer than $300$ iterations (note, not $1000$ iterations). This is the computational saving. | {
"source": [
"https://stats.stackexchange.com/questions/232056",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/82889/"
]
} |
232,106 | A player is given a fair, six-sided die. To win, she must roll a number greater than 4 (i.e., a 5 or a 6). If she rolls a 4, she must roll again. What are her odds of winning? I think the probability of winning $P(W)$, can be expressed recursively as: $$
P(W) = P(r = 5 \cup r = 6) + P(r = 4) \cdot P(W)
$$ I've approximated $P(W)$ as $0.3999$ by running 1 million trials in Java, like this: import java.util.Random;
public class Dice {
public static void main(String[] args) {
int runs = 1000000000;
int wins = 0;
for (int i = 0; i < runs; i++) {
wins += playGame();
}
System.out.println(wins / (double)runs);
}
static Random r = new Random();
private static int playGame() {
int roll;
while ((roll = r.nextInt(6) + 1) == 4);
return (roll == 5 || roll == 6) ? 1 : 0;
}
} And I see that one could expand $P(W)$ like this: $$
P(W) = \frac{1}{3} + \frac{1}{6} \left(\frac{1}{3} + \frac{1}{6}\left(\frac{1}{3} + \frac{1}{6}\right)\right)...
$$ But I don't know how to solve this type of recurrence relation without resorting to this sort of approximation. Is it possible? | Just solve it using algebra: \begin{aligned}
P(W) &= \tfrac 2 6 + \tfrac 1 6 \cdot P(W) \\[7pt]
\tfrac 5 6 \cdot P(W) &= \tfrac 2 6 \\[7pt]
P(W) &= \tfrac 2 5.
\end{aligned} | {
"source": [
"https://stats.stackexchange.com/questions/232106",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55745/"
]
} |
232,500 | I believe that the title of this question says it all. | It helps to think about what The Curse of Dimensionality is. There are several very good threads on CV that are worth reading. Here is a place to start: Explain “Curse of dimensionality” to a child . I note that you are interested in how this applies to $k$-means clustering. It is worth being aware that $k$-means is a search strategy to minimize (only) the squared Euclidean distance. In light of that, it's worth thinking about how Euclidean distance relates to the curse of dimensionality (see: Why is Euclidean distance not a good metric in high dimensions? ). The short answer from these threads is that the volume (size) of the space increases at an incredible rate relative to the number of dimensions. Even $10$ dimensions (which doesn't seem like it's very 'high-dimensional' to me) can bring on the curse. If your data were distributed uniformly throughout that space, all objects become approximately equidistant from each other. However, as @Anony-Mousse notes in his answer to that question, this phenomenon depends on how the data are arrayed within the space; if they are not uniform, you don't necessarily have this problem. This leads to the question of whether uniformly-distributed high-dimensional data are very common at all (see: Does “curse of dimensionality” really exist in real data? ). I would argue that what matters is not necessarily the number of variables (the literal dimensionality of your data), but the effective dimensionality of your data. Under the assumption that $10$ dimensions is 'too high' for $k$-means, the simplest strategy would be to count the number of features you have. But if you wanted to think in terms of the effective dimensionality, you could perform a principle components analysis (PCA) and look at how the eigenvalues drop off. It is quite common that most of the variation exists in a couple of dimensions (which typically cut across the original dimensions of your dataset). That would imply you are less likely to have a problem with $k$-means in the sense that your effective dimensionality is actually much smaller. A more involved approach would be to examine the distribution of pairwise distances in your dataset along the lines @hxd1011 suggests in his answer . Looking at simple marginal distributions will give you some hint of the possible uniformity. If you normalize all the variables to lie within the interval $[0,\ 1]$, the pairwise distances must lie within the interval $[0,\ \sqrt{\sum D}]$. Distances that are highly concentrated will cause problems; on the other hand, a multi-modal distribution may be hopeful (you can see an example in my answer here: How to use both binary and continuous variables together in clustering? ). However, whether $k$-means will 'work' is still a complicated question. Under the assumption that there are meaningful latent groupings in your data, they don't necessarily exist in all of your dimensions or in constructed dimensions that maximize variation (i.e., the principle components). The clusters could be in the lower-variation dimensions (see: Examples of PCA where PCs with low variance are “useful” ). 
That is, you could have clusters with points that are close within and well-separated between on just a few of your dimensions or on lower-variation PCs, but aren't remotely similar on high-variation PCs, which would cause $k$-means to ignore the clusters you're after and pick out faux clusters instead (some examples can be seen here: How to understand the drawbacks of K-means ). | {
"source": [
"https://stats.stackexchange.com/questions/232500",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/129490/"
]
} |
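A quick way to probe the distance-concentration effect described above on simulated (or your own) data; the dimensions and sample size below are arbitrary.
set.seed(5)
rel_spread <- function(d, n = 500) {
  X <- matrix(runif(n * d), n, d)   # uniform data in d dimensions
  dd <- as.numeric(dist(X))         # all pairwise Euclidean distances
  sd(dd) / mean(dd)                 # relative spread: small values mean the distances all look alike
}
sapply(c(2, 10, 100, 1000), rel_spread)  # shrinks steadily as the dimension grows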
232,504 | Currently reading Platt's paper, Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines , I got stuck in section 2.3 Computing the Threshold : SVM notation objective function:
\begin{array}{l}
\max_{\alpha}\sum_{i=1}^{N}\alpha_{i}-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}y_{i}y_{j}K_{ij}\alpha_{i}\alpha_{j}\\
0\leqslant \alpha_i \leqslant C \quad \text{(Lagrange multipliers)}\\
\sum_{i=1}^{N} y_i\alpha_i=0\\
\end{array} KKT condition: \begin{array}{l}
\quad {a_i} = 0 \quad \Leftrightarrow \quad {y_i}u_i \ge 1\\
0 < {a_i} < C \quad \Leftrightarrow \quad {y_i}u_i = 1\\
\quad {a_i} = C \quad \Leftrightarrow \quad {y_i}u_i \le 1
\end{array} $b$: threshold in SVM model $w^Tx-b$ $u_i=\sum_{j=1}^Ny_j\alpha_jK_{ij}-b$: predict value using SVM $E_i=u_i-y_i$: difference between target and prediction $K_{ij}=K(x_i, x_j)=K(x_j,x_i)$: the kernel matrix Brief description about SMO According to Platt, SMO optimize two Lagrange multipliers one time, for example: $y_1\alpha_1+y_2\alpha_2=-\sum_{i=3}^Ny_i\alpha_i=Const$ ... Update $\alpha_i$ ... The question if $\alpha_i$ is not at bound, threshold $b$ can be computed by forcing the output to be $y_i$:
$b_i=E_i+y_i(\alpha^{new}_1-\alpha_1)K_{11}+y_2(\alpha_2^{new,clipped}-\alpha_2)K_{12}+b^{old}$ (eq.1) if both $\alpha_1$ and $\alpha_2$ are at bound, then using eq.1 computing $b_1$ and $b_2$, all thresholds between $b_1$ and $b_2$ are consistent with KKT conditions. I understand case 1 since $0<\alpha_i<C$,we get $y_iu_i=1$, prediction error must be 0, but I failed to understand case 2... | It helps to think about what The Curse of Dimensionality is. There are several very good threads on CV that are worth reading. Here is a place to start: Explain “Curse of dimensionality” to a child . I note that you are interested in how this applies to $k$-means clustering. It is worth being aware that $k$-means is a search strategy to minimize (only) the squared Euclidean distance. In light of that, it's worth thinking about how Euclidean distance relates to the curse of dimensionality (see: Why is Euclidean distance not a good metric in high dimensions? ). The short answer from these threads is that the volume (size) of the space increases at an incredible rate relative to the number of dimensions. Even $10$ dimensions (which doesn't seem like it's very 'high-dimensional' to me) can bring on the curse. If your data were distributed uniformly throughout that space, all objects become approximately equidistant from each other. However, as @Anony-Mousse notes in his answer to that question, this phenomenon depends on how the data are arrayed within the space; if they are not uniform, you don't necessarily have this problem. This leads to the question of whether uniformly-distributed high-dimensional data are very common at all (see: Does “curse of dimensionality” really exist in real data? ). I would argue that what matters is not necessarily the number of variables (the literal dimensionality of your data), but the effective dimensionality of your data. Under the assumption that $10$ dimensions is 'too high' for $k$-means, the simplest strategy would be to count the number of features you have. But if you wanted to think in terms of the effective dimensionality, you could perform a principle components analysis (PCA) and look at how the eigenvalues drop off. It is quite common that most of the variation exists in a couple of dimensions (which typically cut across the original dimensions of your dataset). That would imply you are less likely to have a problem with $k$-means in the sense that your effective dimensionality is actually much smaller. A more involved approach would be to examine the distribution of pairwise distances in your dataset along the lines @hxd1011 suggests in his answer . Looking at simple marginal distributions will give you some hint of the possible uniformity. If you normalize all the variables to lie within the interval $[0,\ 1]$, the pairwise distances must lie within the interval $[0,\ \sqrt{\sum D}]$. Distances that are highly concentrated will cause problems; on the other hand, a multi-modal distribution may be hopeful (you can see an example in my answer here: How to use both binary and continuous variables together in clustering? ). However, whether $k$-means will 'work' is still a complicated question. Under the assumption that there are meaningful latent groupings in your data, they don't necessarily exist in all of your dimensions or in constructed dimensions that maximize variation (i.e., the principle components). The clusters could be in the lower-variation dimensions (see: Examples of PCA where PCs with low variance are “useful” ). 
That is, you could have clusters with points that are close within and well-separated between on just a few of your dimensions or on lower-variation PCs, but aren't remotely similar on high-variation PCs, which would cause $k$-means to ignore the clusters you're after and pick out faux clusters instead (some examples can be seen here: How to understand the drawbacks of K-means ). | {
"source": [
"https://stats.stackexchange.com/questions/232504",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/129493/"
]
} |
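A small R sketch of the two diagnostics suggested in the answer above (PCA eigenvalue drop-off and the spread of pairwise distances); the simulated 10-dimensional data set here is purely hypothetical, built so that its effective dimensionality is about 2:
# Simulate 500 points in 10 dimensions whose variation lives mostly in 2 latent directions
set.seed(1)
n <- 500
latent <- matrix(rnorm(n * 2), n, 2)
X <- latent %*% matrix(rnorm(2 * 10), 2, 10) + matrix(rnorm(n * 10, sd = 0.1), n, 10)

# 1) Effective dimensionality: how quickly do the PCA eigenvalues drop off?
pca <- prcomp(X, scale. = TRUE)
round(pca$sdev^2 / sum(pca$sdev^2), 3)      # most of the variance sits in the first two components

# 2) Distribution of pairwise distances after scaling each variable to [0, 1]
X01 <- apply(X, 2, function(v) (v - min(v)) / (max(v) - min(v)))
d <- as.numeric(dist(X01))                  # Euclidean distances, bounded above by sqrt(ncol(X))
hist(d, breaks = 40, main = "Pairwise distances", xlab = "distance")
If the eigenvalues drop off sharply and the distances are not all piled up around a single value, the curse of dimensionality is less of a worry for $k$-means than the raw count of 10 variables would suggest.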
232,967 | Many PDFs range from minus to positive infinity, yet some means are defined and some are not. What common trait makes some computable? | The mean of a distribution is defined in terms of an integral (I'll write it as if for a continuous distribution - as a Riemann integral, say - but the issue applies more generally; we can proceed to Stieltjes or Lebesgue integration to deal with these properly and all at once): $$E(X) = \int_{-\infty}^\infty x f(x)\, dx$$ But what does that mean? It's effectively a shorthand for $$\lim_{a\to\infty,\,b\to\infty} \int_{-a}^b x\, f(x)\, dx$$ or $$\lim_{a\to\infty} \int_{-a}^0 x f(x)\, dx \, +\, \lim_{b\to\infty} \int_{0}^b x f(x)\, dx$$ (though you could break it anywhere, not just at 0). The problem comes when the limits of those integrals are not finite. So for example, consider the standard Cauchy density, which is proportional to $\frac{1}{1+x^2}$ ... note that $$\lim_{b\to\infty} \int_{0}^b \frac{x}{1+x^2}\, dx$$ let $u=1+x^2$, so $du=2x\,dx$ $$=\,\lim_{b\to\infty}\frac12 \int_{1}^{1+b^2} \frac{1}{u}\, du$$ $$=\,\lim_{b\to\infty} \tfrac{1}{2}\ln(u)\Bigg |_{1}^{1+b^2} $$ $$=\,\lim_{b\to\infty} \tfrac{1}{2}\ln(1+b^2)$$ which isn't finite. The limit in the lower half is also not finite; the expectation is thereby undefined. Or if we had as our random variable the absolute value of a standard Cauchy, its entire expectation would be proportional to that limit we just looked at (i.e. $\lim_{b\to\infty} \frac12\ln(1+b^2)$). On the other hand, some other densities do continue out "to infinity" but their integral does have a limit. | {
"source": [
"https://stats.stackexchange.com/questions/232967",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/82897/"
]
} |
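As a quick numerical companion to the Cauchy example above: when the defining integral has no finite limit, running sample means never settle down. A small R sketch (seed and sample size are arbitrary):
set.seed(42)
n <- 1e4
running_mean <- function(v) cumsum(v) / seq_along(v)
plot(running_mean(rcauchy(n)), type = "l", xlab = "sample size", ylab = "running mean")  # keeps jumping
lines(running_mean(rnorm(n)), col = "red")   # finite mean exists: settles near 0
The normal running mean converges by the law of large numbers; the Cauchy one is repeatedly dragged away by single huge observations, reflecting the undefined expectation.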
233,232 | I am learning about splines from the book "The Elements of
Statistical Learning: Data Mining, Inference, and Prediction" by Hastie et al. I found on page 145 that natural cubic splines are linear beyond the boundary knots. There are $K$ knots, $\xi_1, \xi_2, \ldots, \xi_K$, in the spline, and the following is given about such a spline in the book. Question 1: How are 4 degrees of freedom freed up? I don't get this part. Question 2: In the definition of $d_k(X)$, when $k=K$ then $d_K(X) = \frac 0 0$. What is the author trying to do in this formula? How does this help make sure that the splines are linear beyond the boundary knots? | Let's start by considering ordinary cubic splines. They're cubic between every pair of knots and cubic outside the boundary knots. We start with 4 df for the first cubic (left of the first boundary knot), and each knot adds one new parameter (because continuity of the spline and of its first and second derivatives adds three constraints, leaving one free parameter), making a total of $K+4$ parameters for $K$ knots. A natural cubic spline is linear at both ends. This constrains the cubic and quadratic parts there to 0, each reducing the df by 1. That's 2 df at each of the two ends of the curve, reducing $K+4$ to $K$. Imagine you decide you can spend some total number of degrees of freedom ($p$, say) on your non-parametric curve estimate. Since imposing a natural spline uses 4 fewer degrees of freedom than an ordinary cubic spline (for the same number of knots), with those $p$ parameters you can have 4 more knots (and so 4 more parameters) to model the curve between the boundary knots. Note that the definition for $N_{k+2}$ is for $k=1,2,...,K-2$ (since there are $K$ basis functions in all). So the last basis function in that list is $N_{K}=d_{K-2}-d_{K-1}$. So the highest $k$ needed for the definitions of $d_k$ is $k=K-1$. (That is, we don't need to try to figure out what some $d_K$ might do, since we don't use it.)
"source": [
"https://stats.stackexchange.com/questions/233232",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55227/"
]
} |
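To see the "linear beyond the boundary knots" behaviour and the degrees-of-freedom accounting concretely, here is a small R sketch using the splines package (the data and knot positions are made up). Note that bs() and ns() differ by only 2 columns here because these bases describe the curve between the boundary knots; the remaining 2 of the 4 freed-up degrees of freedom correspond, in the book's accounting, to the two pieces beyond the boundary knots.
library(splines)
set.seed(1)
x <- sort(runif(100))
y <- sin(2 * pi * x) + rnorm(100, sd = 0.2)
k <- c(0.25, 0.5, 0.75)                       # interior knots

ncol(bs(x, knots = k))                        # ordinary cubic spline basis: 6 columns
ncol(ns(x, knots = k))                        # natural cubic spline basis: 4 columns

# A natural spline extrapolates linearly beyond the boundary knots:
fit <- lm(y ~ ns(x, knots = k))
xx <- seq(1.05, 1.5, by = 0.01)               # beyond the upper boundary knot (max(x))
p <- predict(fit, newdata = data.frame(x = xx))
range(diff(p, differences = 2))               # second differences ~ 0, i.e. a straight line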
233,356 | I'm about to introduce the standard normal table in my introductory statistics class, and that got me wondering: who created the first standard normal table? How did they do it before computers came along? I shudder to think of someone brute-force computing a thousand Riemann sums by hand. | Laplace was the first to recognize the need for tabulation, coming up with the approximation (valid for large $x$): $$\begin{align}G(x)&=\int_x^\infty e^{-t^2}dt\\[2ex]&\approx\small \frac{e^{-x^2}}{2}\left(\frac1 x- \frac{1}{2x^3}+\frac{1\cdot3}{4x^5} -\frac{1\cdot 3\cdot5}{8x^7}+\frac{1\cdot 3\cdot 5\cdot 7}{16x^9}-\cdots\right)\tag{1}
\end{align}$$ The first modern table of the normal distribution was later built by the French astronomer Christian Kramp in Analyse des Réfractions Astronomiques et Terrestres (Par le citoyen Kramp, Professeur de Chymie et de Physique expérimentale à l'école centrale du Département de la Roer, 1799) . From Tables Related to the Normal Distribution: A Short History Author(s): Herbert A. David Source: The American Statistician, Vol. 59, No. 4 (Nov., 2005), pp. 309-311 : Ambitiously, Kramp gave eight-decimal ( $8$ D) tables up to $x = 1.24,$ $9$ D to $1.50,$ $10$ D to $1.99,$ and $11$ D to $3.00$ together with the
differences needed for interpolation. Writing down the first six derivatives of $G(x),$ he simply uses a Taylor series expansion of $G(x + h)$ about $G(x),$ with $h = .01,$ up to the term in $h^3.$ This enables him to proceed step by step from $x = 0$ to $x = h, 2h, 3h,\dots,$ upon multiplying $h\,e^{-x^2}$ by $$1-hx+ \frac 1 3 \left(2x^2 - 1\right)h^2 - \frac 1 6 \left(2x^3 - 3x\right)h^3.$$ Thus, at $x = 0$ this product reduces to $$.01 \left(1 - \frac 1 3 \times .0001 \right) = .00999967,$$ so that at $G(.01) = .88622692 - .00999967 = .87622725.$ $$\vdots$$ But... how accurate could he be? OK, let's take $2.97$ as an example: Amazing! Let's move on to the modern (normalized) expression of the Gaussian pdf: The pdf of $\mathscr N(0,1)$ is: $$f_X(X=x)=\large \frac{1}{\sqrt{2\pi}}\,e^{-\frac {x^2}{2}}= \frac{1}{\sqrt{2\pi}}\,e^{-\left(\frac {x}{\sqrt{2}}\right)^2}= \frac{1}{\sqrt{2\pi}}\,e^{-\left(z\right)^2}$$ where $z = \frac{x}{\sqrt{2}}$ . And hence, $x = z \times \sqrt{2}$ . So let's go to R, and look up the $P_Z(Z>z=2.97)$ ... OK, not so fast. First we have to remember that when there is a constant multiplying the exponent in an exponential function $e^{ax}$ , the integral will be divided by that exponent: $1/a$ . Since we are aiming at replicating the results in the old tables, we are actually multiplying the value of $x$ by $\sqrt{2}$ , which will have to appear in the denominator. Further, Christian Kramp did not normalize, so we have to correct the results given by R accordingly, multiplying by $\sqrt{2\pi}$ . The final correction will look like this: $$\frac{\sqrt{2\pi}}{\sqrt{2}}\,\mathbb P(X>x)=\sqrt{\pi}\,\,\mathbb P(X>x)$$ In the case above, $z=2.97$ and $x=z\times \sqrt{2}=4.200214$ . Now let's go to R: (R = sqrt(pi) * pnorm(x, lower.tail = F))
[1] 2.363235e-05 Fantastic! Let's go to the top of the table for fun, say $0.06$ ... z = 0.06
(x = z * sqrt(2))
(R = sqrt(pi) * pnorm(x, lower.tail = F))
[1] 0.8262988 What says Kramp? $0.82629882$ . So close... The thing is... how close, exactly? After all the up-votes received, I couldn't leave the actual answer hanging. The problem was that all the optical character recognition (OCR) applications I tried were incredibly off - not surprising if you have taken a look at the original. So, I learned to appreciate Christian Kramp for the tenacity of his work as I personally typed each digit in the first column of his Table Première . After some valuable help from @Glen_b, now it may very well be accurate, and it's ready to copy and paste on the R console in this GitHub link . Here is an analysis of the accuracy of his calculations. Brace yourself... Absolute cumulative difference between [R] values and Kramp's approximation: $0.000001200764$ - in the course of $301$ calculations, he managed to accumulate an error of approximately $1$ millionth! Mean absolute error (MAE) , or mean(abs(difference)) with difference = R - kramp : $0.000000003989249$ - he managed to make an outrageously ridiculous $3$ one-billionth error on average! On the entry in which his calculations were most divergent as compared to [R] the first different decimal place value was in the eighth position (hundred millionth). On average (median) his first "mistake" was in the tenth decimal digit (tenth billionth!). And, although he didn't fully agree with with [R] in any instances, the closest entry doesn't diverge until the thirteen digital entry. Mean relative difference or mean(abs(R - kramp)) / mean(R) (same as all.equal(R[,2], kramp[,2], tolerance = 0) ): $0.00000002380406$ Root mean squared error (RMSE) or deviation (gives more weight to large mistakes), calculated as sqrt(mean(difference^2)) : $0.000000007283493$ If you find a picture or portrait of Chistian Kramp, please edit this post and place it here. | {
"source": [
"https://stats.stackexchange.com/questions/233356",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/130023/"
]
} |
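For readers who want to try Kramp's scheme themselves, here is a rough R sketch of the step-by-step Taylor recursion described in the quoted passage, checked against the value obtained from pnorm as in the answer (the step size and target value follow the text; everything else is just illustration):
h <- 0.01
x <- seq(0, 2.99, by = h)                          # step from x = 0 in increments of h
step <- h * exp(-x^2) *
  (1 - h * x + (2 * x^2 - 1) * h^2 / 3 - (2 * x^3 - 3 * x) * h^3 / 6)
G <- sqrt(pi) / 2 - cumsum(step)                   # G(0) = 0.88622692..., then G(h), G(2h), ...

kramp_style <- G[round(2.97 / h)]                  # approximation of G(2.97)
exact <- sqrt(pi) * pnorm(2.97 * sqrt(2), lower.tail = FALSE)
c(kramp_style = kramp_style, exact = exact)        # the two agree to many decimal places
This is only a sanity check of the recursion, not a reconstruction of Kramp's actual computations, which of course were done by hand.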
233,658 | What decides the choice of function (softmax vs sigmoid) in a logistic classifier? Suppose there are 4 output classes. Each of the above functions gives the probability of each class being the correct output. So which one should be taken for a classifier? | The sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression (a.k.a. MaxEnt, multinomial logistic regression, softmax Regression, Maximum Entropy Classifier). In the two-class logistic regression, the predicted probabilities are as follows, using the sigmoid function: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\Pr(Y_i=1) &= 1 - \Pr(Y_i=0) = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}
\end{align}
$$ In the multiclass logistic regression, with $K$ classes, the predicted probabilities are as follows, using the softmax function: $$
\begin{align}
\Pr(Y_i=k) &= \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} \, \\
\end{align}
$$ One can observe that the softmax function is an extension of the sigmoid function to the multiclass case, as explained below. Let's look at the multiclass logistic regression, with $K=2$ classes: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i}}{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \\ \, \\
\Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{1}{e^{(\boldsymbol\beta_0-\boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\end{align}
$$ with $\boldsymbol\beta = - (\boldsymbol\beta_0 - \boldsymbol\beta_1)$ . We see that we obtain the same probabilities as in the two-class logistic regression using the sigmoid function. Wikipedia expands a bit more on that. | {
"source": [
"https://stats.stackexchange.com/questions/233658",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/91102/"
]
} |
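A tiny R sketch of the reduction shown above: with $K=2$ the softmax probability of class 1 equals the sigmoid of the score difference (the scores used here are arbitrary numbers standing in for $\boldsymbol\beta_k \cdot \mathbf{X}_i$):
softmax <- function(s) exp(s - max(s)) / sum(exp(s - max(s)))   # subtracting max(s) avoids overflow
sigmoid <- function(t) 1 / (1 + exp(-t))

s <- c(0.3, -1.2)               # hypothetical scores for K = 2 classes
softmax(s)[1]                   # P(class 1)
sigmoid(s[1] - s[2])            # identical value: sigmoid of the score difference

softmax(c(2.0, 0.5, -0.3, 1.1)) # K = 4 classes: a full probability vector summing to 1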
233,667 | What is the actual difference between scale-space and the wavelet transform? It seems that wavelets require an orthonormal basis of kernels, whereas scale-space does not. Is that the only difference? Can scale-space be considered a particular case of the wavelet transform? UPDATE Example. Suppose I convolve a 1d signal with several Gaussian kernels of different widths. I got: What was it? A wavelet decomposition or a scale-space? Are they technically the same? | The sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression (a.k.a. MaxEnt, multinomial logistic regression, softmax Regression, Maximum Entropy Classifier). In the two-class logistic regression, the predicted probabilities are as follows, using the sigmoid function: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\Pr(Y_i=1) &= 1 - \Pr(Y_i=0) = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}
\end{align}
$$ In the multiclass logistic regression, with $K$ classes, the predicted probabilities are as follows, using the softmax function: $$
\begin{align}
\Pr(Y_i=k) &= \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} \, \\
\end{align}
$$ One can observe that the softmax function is an extension of the sigmoid function to the multiclass case, as explained below. Let's look at the multiclass logistic regression, with $K=2$ classes: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i}}{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \\ \, \\
\Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{1}{e^{(\boldsymbol\beta_0-\boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\end{align}
$$ with $\boldsymbol\beta = - (\boldsymbol\beta_0 - \boldsymbol\beta_1)$ . We see that we obtain the same probabilities as in the two-class logistic regression using the sigmoid function. Wikipedia expands a bit more on that. | {
"source": [
"https://stats.stackexchange.com/questions/233667",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/117136/"
]
} |
233,674 | I am fitting a GLM on my data set. But in my case I don't have any competing models to decide between, i.e., I have only one model and have to estimate its coefficients. In such a case, shouldn't I use all the data to train the model, or should I still divide the data set into training and testing? Also cross-posted to the data science site. | The sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression (a.k.a. MaxEnt, multinomial logistic regression, softmax Regression, Maximum Entropy Classifier). In the two-class logistic regression, the predicted probabilities are as follows, using the sigmoid function: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\Pr(Y_i=1) &= 1 - \Pr(Y_i=0) = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}}
\end{align}
$$ In the multiclass logistic regression, with $K$ classes, the predicted probabilities are as follows, using the softmax function: $$
\begin{align}
\Pr(Y_i=k) &= \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} \, \\
\end{align}
$$ One can observe that the softmax function is an extension of the sigmoid function to the multiclass case, as explained below. Let's look at the multiclass logistic regression, with $K=2$ classes: $$
\begin{align}
\Pr(Y_i=0) &= \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i}}{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \\ \, \\
\Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{1}{e^{(\boldsymbol\beta_0-\boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\
\end{align}
$$ with $\boldsymbol\beta = - (\boldsymbol\beta_0 - \boldsymbol\beta_1)$ . We see that we obtain the same probabilities as in the two-class logistic regression using the sigmoid function. Wikipedia expands a bit more on that. | {
"source": [
"https://stats.stackexchange.com/questions/233674",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113309/"
]
} |
233,804 | I'm reading the article Error propagation by the Monte Carlo method in geochemical calculations, Anderson (1976) and there's something I don't quite understand. Consider some measured data $\{A\pm\sigma_A, B\pm\sigma_B, C\pm\sigma_C\}$ and a program that processes it and returns a given value. In the article, this program is used to first obtain the best value using the means of the data (ie: $\{A, B, C\}$). The author then uses a Monte Carlo method to assign an uncertainty to this best value, by varying the input parameters within their uncertainty limits (given by a Gaussian distribution with means $\{A, B, C\}$ and standard deviations $\{\sigma_A, \sigma_B, \sigma_C\}$) before feeding them to the program. This is illustrated in the figure below: ( Copyright: ScienceDirect ) where the uncertainty can be obtained from the final $Z$ distribution. What would happen if, instead of this Monte Carlo method, I applied a bootstrap method? Something like this: This is: instead of varying the data within their uncertainties before feeding it to the program, I sample with replacement from them. What are the differences between these two methods in this case? What caveats should I be aware of before applying any of them? I'm aware of this question Bootstrap, Monte Carlo , but it doesn't quite solve my doubt since, in this case, the data contains assigned uncertainties. | As far as I understand your question, the difference between the "Monte Carlo" approach and the bootstrap approach is essentially the difference between parametric and non-parametric statistics. In the parametric framework, one knows exactly how the data $x_1,\ldots,x_N$ is generated, that is, given the parameters of the model ($A$, $\sigma_A$, &tc. in your description), you can produce new realisations of such datasets, and from them new realisations of your statistical procedure (or "output"). It is thus possible to describe entirely and exactly the probability distribution of the output $Z$, either by mathematical derivations or by a Monte Carlo experiment returning a sample of arbitrary size from this distribution. In the non-parametric framework, one does not wish to make such assumptions on the data and thus uses the data and only the data to estimate its distribution, $F$. The bootstrap is such an approach in that the unknown distribution is estimated by the empirical distribution $\hat F$ made by setting a probability weight of $1/n$ on each point of the sample (in the simplest case when the data is iid). Using this empirical distribution $\hat F$ as a replacement for the true distribution $F$, one can derive by Monte Carlo simulations the estimated distribution of the output $Z$. Thus, the main difference between both approaches is whether or not
one makes this parametric assumption about the distribution of the
data. | {
"source": [
"https://stats.stackexchange.com/questions/233804",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10416/"
]
} |
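To make the parametric/non-parametric contrast above concrete, here is an R sketch. The "program" is a made-up function of the inputs, and all means, standard deviations and replicate counts are invented for illustration:
program <- function(A, B, C) A * exp(B) / C        # hypothetical stand-in for the geochemical program
n_sim <- 1e4

# Parametric ("Monte Carlo") approach: draw the inputs from the assumed Gaussians
Z_mc <- program(rnorm(n_sim, 2.0, 0.1), rnorm(n_sim, 0.5, 0.05), rnorm(n_sim, 1.5, 0.2))
quantile(Z_mc, c(0.025, 0.975))                    # uncertainty on the output Z

# Non-parametric (bootstrap) approach: resample the raw replicate measurements themselves
a_obs <- rnorm(30, 2.0, 0.1)                       # stand-ins for repeated measurements of A, B, C
b_obs <- rnorm(30, 0.5, 0.05)
c_obs <- rnorm(30, 1.5, 0.2)
Z_boot <- replicate(n_sim,
  program(sample(a_obs, 1), sample(b_obs, 1), sample(c_obs, 1)))
quantile(Z_boot, c(0.025, 0.975))
The first version draws inputs from the assumed parametric (Gaussian) distributions; the second draws them from the empirical distribution $\hat F$ of the observed replicates, which is exactly the distinction drawn in the answer. The bootstrap version therefore needs the raw measurements, not just the reported $A \pm \sigma_A$ summaries.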
234,280 | Tikhonov regularization and ridge regression are terms often used as if they were identical. Is it possible to specify exactly what the difference is? | Tikhonov regularization is a larger set than ridge regression. Here is my attempt to spell out exactly how they differ. Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that
: $A\mathbf{x}=\mathbf{b}$. The standard approach is ordinary least squares linear regression. However, if no $x$ satisfies the equation or more than one $x$ does—that is, the solution is not unique—the problem is said to be ill-posed. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as: $\|A\mathbf{x}-\mathbf{b}\|^2 $ where $\left \| \cdot \right \|$ is the Euclidean norm. In matrix notation the solution, denoted by $\hat{x}$, is given by: $\hat{x} = (A^{T}A)^{-1}A^{T}\mathbf{b}$ Tikhonov regularization minimizes $\|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2$ for some suitably chosen Tikhonov matrix, $\Gamma $. An explicit matrix form solution, denoted by $\hat{x}$, is given by: $\hat{x} = (A^{T}A+ \Gamma^{T} \Gamma )^{-1}A^{T}\mathbf{b}$ The effect of regularization may be varied via the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least squares solution, provided that $(A^{T}A)^{-1}$ exists. Typically, for ridge regression, two departures from Tikhonov regularization are described. First, the Tikhonov matrix is replaced by a multiple of the identity matrix $\Gamma= \alpha I $, giving preference to solutions with smaller norm, i.e., the $L_2$ norm. Then $\Gamma^{T} \Gamma$ becomes $\alpha^2 I$ leading to $\hat{x} = (A^{T}A+ \alpha^2 I )^{-1}A^{T}\mathbf{b}$ Finally, for ridge regression, it is typically assumed that the variables in $A$ (now written $X$) are scaled so that $X^{T}X$ has the form of a correlation matrix, and $X^{T}\mathbf{b}$ is the correlation vector between the $x$ variables and $\mathbf{b}$, leading to $\hat{x} = (X^{T}X+ \alpha^2 I )^{-1}X^{T}\mathbf{b}$ Note in this form the Lagrange multiplier $\alpha^2$ is usually replaced by $k$, $\lambda$, or some other symbol but retains the property $\lambda\geq0$. In formulating this answer, I acknowledge borrowing liberally from Wikipedia and from Ridge estimation of transfer function weights
"source": [
"https://stats.stackexchange.com/questions/234280",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/99274/"
]
} |
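A short R sketch of the closed-form solutions above, showing ridge as the special case $\Gamma = \alpha I$ and one example of a more general Tikhonov matrix (all data here are simulated, and the penalty values are arbitrary):
set.seed(1)
n <- 100; p <- 5
A <- matrix(rnorm(n * p), n, p)
b <- A %*% c(1, -2, 0, 0.5, 3) + rnorm(n)                       # made-up true coefficients plus noise
alpha <- 2

ols   <- solve(t(A) %*% A, t(A) %*% b)                          # (A'A)^{-1} A'b
ridge <- solve(t(A) %*% A + alpha^2 * diag(p), t(A) %*% b)      # Gamma = alpha * I
Gamma <- diff(diag(p))                                          # first-difference penalty matrix
tikh  <- solve(t(A) %*% A + t(Gamma) %*% Gamma, t(A) %*% b)     # a non-ridge Tikhonov choice

round(cbind(ols = c(ols), ridge = c(ridge), tikhonov = c(tikh)), 3)
The ridge column shrinks all coefficients toward zero, whereas the first-difference $\Gamma$ instead penalizes adjacent coefficients for being different, illustrating that ridge is one member of the larger Tikhonov family.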
234,282 | I am new to machine learning and have a very large dataset for a set of 100 people over a period of 1 year, and the goal is to find out who are buddies based on their lunch times. I have the following dataset: Person StartTime EndTime Duration (difference between start and end times)
Person1 Time11 Time12 diff1
Person2 Time21 Time22 diff2
Person3 Time31 Time32 diff3
Person4 Time41 Time42 diff4 Now I would like to cluster/group people together based on their times (with a +/- 5 minute time difference, meaning if the start and end times of person 1 are 12:00 - 1:00 PM and person 2's are 11:55 - 1:05, they fall under the same group relative to Person 1) | Tikhonov regularization is a larger set than ridge regression. Here is my attempt to spell out exactly how they differ. Suppose that for a known matrix $A$ and vector $\mathbf{b}$, we wish to find a vector $\mathbf{x}$ such that
: $A\mathbf{x}=\mathbf{b}$. The standard approach is ordinary least squares linear regression. However, if no $x$ satisfies the equation or more than one $x$ does—that is, the solution is not unique—the problem is said to be ill-posed. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as: $\|A\mathbf{x}-\mathbf{b}\|^2 $ where $\left \| \cdot \right \|$ is the Euclidean norm. In matrix notation the solution, denoted by $\hat{x}$, is given by: $\hat{x} = (A^{T}A)^{-1}A^{T}\mathbf{b}$ Tikhonov regularization minimizes $\|A\mathbf{x}-\mathbf{b}\|^2+ \|\Gamma \mathbf{x}\|^2$ for some suitably chosen Tikhonov matrix, $\Gamma $. An explicit matrix form solution, denoted by $\hat{x}$, is given by: $\hat{x} = (A^{T}A+ \Gamma^{T} \Gamma )^{-1}A^{T}\mathbf{b}$ The effect of regularization may be varied via the scale of matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least squares solution, provided that $(A^{T}A)^{-1}$ exists. Typically, for ridge regression, two departures from Tikhonov regularization are described. First, the Tikhonov matrix is replaced by a multiple of the identity matrix $\Gamma= \alpha I $, giving preference to solutions with smaller norm, i.e., the $L_2$ norm. Then $\Gamma^{T} \Gamma$ becomes $\alpha^2 I$ leading to $\hat{x} = (A^{T}A+ \alpha^2 I )^{-1}A^{T}\mathbf{b}$ Finally, for ridge regression, it is typically assumed that the variables in $A$ (now written $X$) are scaled so that $X^{T}X$ has the form of a correlation matrix, and $X^{T}\mathbf{b}$ is the correlation vector between the $x$ variables and $\mathbf{b}$, leading to $\hat{x} = (X^{T}X+ \alpha^2 I )^{-1}X^{T}\mathbf{b}$ Note in this form the Lagrange multiplier $\alpha^2$ is usually replaced by $k$, $\lambda$, or some other symbol but retains the property $\lambda\geq0$. In formulating this answer, I acknowledge borrowing liberally from Wikipedia and from Ridge estimation of transfer function weights
"source": [
"https://stats.stackexchange.com/questions/234282",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/130564/"
]
} |
234,492 | From what I read here https://onlinecourses.science.psu.edu/stat510/node/47 , there is no seasonality as the data are annual data. "Is there seasonality, meaning that there is a regularly repeating pattern of highs and lows related to calendar time such as seasons, quarters, months, days of the week, and so on." But for example if I have a time series for rain with data in years, and the data show a pattern that is repeated in the same months during the year, this is not seasonal because the period is in years? How do you know if it's seasonality or period cycle? Look the example below Look what they said for the housing sales series "The monthly housing sales (top left) show strong seasonality within each year, as well as some strong cyclic behaviour with period about 6–10 years. There is no apparent trend in the data over this period." But how they know it is seasonality? I can see nothing. | The difference between seasonal and cyclical behavior has to do with how regular the period of change is. A seasonal behavior is very strictly regular, meaning there is a precise amount of time between the peaks and troughs of the data. For instance temperature would have a seasonal behavior. The coldest day of the year and the warmest day of the year may move (because of factors other than time than influence the data) but you will never see drift over time where eventually winter comes in June in the northern hemisphere. Cyclical behavior on the other hand can drift over time because the time between periods isn't precise. For example, the stock market tends to cycle between periods of high and low values, but there is no set amount of time between those fluctuations. Series can show both cyclical and seasonal behavior. In the home prices example above, there is a cyclical effect due to the market, but there is also a seasonal effect because most people would rather move in the summer when their kids are between grades of school. You can also have multiple seasonal (or cyclical) effects. For example, people tend to try and make positive behavioral changes on the "1st" of something, so you see spikes in gym attendance of course on the 1st of the year, but also the first of each month and each week, so gym attendance has yearly, monthly, and weekly seasonality. When you are looking for a second seasonal pattern or a cyclical pattern in seasonal data, it can help to take a moving average at the higher seasonal frequency to remove those seasonal effects. For instance, if you take a moving average of the housing data with a window size of 12 you will see the cyclical pattern more clearly. This only works though to remove a higher frequency pattern from a lower frequency one. Also, for the record, seasonal behavior does not have to happen only on sub-year time units. For example, the sun goes through what are called "solar cycles" which are periods of time where it puts out more or less heat. This behavior shows a seasonality of almost exactly 11 years, so a yearly time series of the heat put out by the sun would have a seasonality of 11. In many cases the difference in seasonal vs cyclical behavior can be known or measured with reasonable accuracy by looking at the regularity of the peaks in your data and looking for a drift the timing peaks from the mean distance between them. 
A series with strong seasonality will show clear peaks in the partial auto-correlation function as well as the auto-correlation function, whereas a cyclical series will only have the strong peaks in the auto-correlation function. However if you don't have enough data to determine this or if the data is very noisy making the measurements difficult, the best way to determine if a behavior is cyclical or seasonal can be by thinking about the cause of the fluctuation in the data. If the cause is dependent directly on time then the data are likely seasonal (ex. it takes ~365.25 days for the earth to travel around the sun, the position of the earth around the sun effects temperature, therefore temperature shows a yearly seasonal pattern). If on the other hand, the cause is based on previous values of the series rather than directly on time, the series is likely cyclical (ex. when the value of stocks go up, it gives confidence in the market, so more people invest making prices go up, and vice versa, therefore stocks show a cyclical pattern). | {
"source": [
"https://stats.stackexchange.com/questions/234492",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
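A brief R illustration of the "moving average at the seasonal frequency" idea from the answer above, using the built-in monthly AirPassengers series as a stand-in for the housing data (which isn't available here):
plot(AirPassengers)                                    # strong within-year seasonality is visible
ma12 <- stats::filter(AirPassengers, rep(1 / 12, 12), sides = 2)
lines(ma12, col = "red", lwd = 2)                      # 12-month moving average: the seasonal swings cancel,
                                                       # leaving the trend and any longer cyclical movement
acf(diff(log(AirPassengers)), lag.max = 48)            # seasonal series: ACF peaks at the 12-month lag and its multiples
Because the seasonal period is fixed at exactly 12 months, averaging over a 12-month window cancels the seasonal highs and lows; an irregular cycle would not be removed this way.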
234,544 | This is probably a trivial question, but my search has been fruitless so far, including this wikipedia article , and the "Compendium of Distributions" document . If $X$ has a uniform distribution, does it mean that $e^X$ follows an exponential distribution? Similarly, if $Y$ follows an exponential distribution, does it mean $ln(Y)$ follows a uniform distribution? | It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq x) = P(U\leq \ln x) = \ln x\,,\quad 1<x<e$ So $f_x(x) = \frac{d}{dx} \ln x = \frac{1}{x}\,,\quad 1<x<e$. This is not an exponential variate. A similar calculation shows that the log of an exponential is not uniform. Let $Y$ be standard exponential, so $F_Y(y)=P(Y\leq y) = 1-e^{-y}\,,\quad y>0$. Let $V=\ln Y$. Then $F_V(v) = P(V\leq v) = P(\ln Y\leq v) = P(Y\leq e^v) = 1-e^{-e^v}\,,\quad v<0$. This is not a uniform. (Indeed $-V$ is a Gumbel -distributed random variable, so you might call the distribution of $V$ a 'flipped Gumbel'.) However, in each case we can see it more quickly by simply considering the bounds on random variables. If $U$ is uniform(0,1) it lies between 0 and 1 so $X=\exp(U)$ lies between $1$ and $e$ ... so it's not exponential. Similarly, for $Y$ exponential, $\ln Y$ is on $(-\infty,\infty)$, so that can't be uniform(0,1), nor indeed any other uniform. We could also simulate, and again see it right away: First, exponentiating a uniform -- [the blue curve is the density (1/x on the indicated interval) we worked out above...] Second, the log of a exponential: Which we can see is far from uniform! (If we differentiate the cdf we worked out before, which would give the density, it matches the shape we see here.) Indeed the inverse cdf method indicates that taking the negative of the log of a uniform(0,1) variate gives a standard exponential variate, and conversely, exponentiating the negative of a standard exponential gives a uniform. [Also see probability integral transform ] This method tells us that if $U=F_Y(Y)$, $Y = F^{-1}(U)$. If we apply the inverse of the cdf as a transformation on $U$, a standard uniform, the resulting random variable has distribution function $F_Y$. If we let $U$ be uniform(0,1), then $P(U\leq u) = u$. Let $Y=-\ln (1-U)$. (Note that $1-U$ is also uniform on (0,1) so you could actually let $Y=-\ln U$, but we're following the inverse cdf method in full here) Then $P(Y\leq y) = P(-\ln (1-U) \leq y) = P( 1-U \geq e^{-y}) = P( U \leq 1-e^{-y}) = 1-e^{-y}$, which is the cdf of a standard exponential. [This property of the inverse cdf transform is why the $\log$ transform is actually required to obtain an exponential distribution, and the probability integral transform is why exponentiating the negative of a negative exponential gets back to a uniform.] | {
"source": [
"https://stats.stackexchange.com/questions/234544",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/100369/"
]
} |
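A short simulation in R confirming both directions of the answer above (the sample size and seed are arbitrary):
set.seed(7)
u <- runif(1e5)

range(exp(u))                                   # exp(U) lives on (1, e), so it cannot be exponential
hist(exp(u), breaks = 50, freq = FALSE, main = "exp(U), U ~ Uniform(0,1)")
curve(1 / x, from = 1, to = exp(1), add = TRUE, col = "red", lwd = 2)   # the density 1/x derived above

# The inverse-cdf route: -log(U) does follow a standard exponential
qqplot(qexp(ppoints(1e4)), -log(u[1:1e4]), xlab = "Exp(1) quantiles", ylab = "-log(U)")
abline(0, 1, col = "red")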
234,561 | I have an 'expected distribution' between two variables X,Y that I have observed for 54000 examples. An example distribution that I observe between these variables looks like this: Through some software, I process separate examples such that I generate a new dataset containing a 'processed distribution'. Which may look something like this, for example: Originally I was measuring how successfully the software was able to generate a dataset which has a similar form to what I expect, by ensuring that the correlations that I get are the same. However, this clearly doesn't make sense in cases like this where the distribution is multi-modal, messy and/or non-linear. I was then thinking that I could determine the function that maps X to Y, and then ensure that this function is satisfied by the data points in my processed set. However, I am not sure of any techniques which can map a single X to multiple defined Ys, in the case of (for example) two clear relationships between the variables. Is there any metric which I could use to judge that these two distributions are similar? I can clearly see that: The shapes of the distributions are similar (Y increases for a fixed X near 900, and X increases for a fixed Y around 2000). Some of the data is missing, particularly some of higher X-values are not found in the processed dataset I can't think of any sensible metric which would reflect the above, and hopefully increase as the problems are alleviated e.g. as my processed distribution incorporates the higher X value ranges at Y ~ 2000, the metric would increase (although if these higher-X examples also had a wildly different Y e.g. 5000, I would lose 'similarity'). Thanks very much for any advice. | It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq x) = P(U\leq \ln x) = \ln x\,,\quad 1<x<e$ So $f_x(x) = \frac{d}{dx} \ln x = \frac{1}{x}\,,\quad 1<x<e$. This is not an exponential variate. A similar calculation shows that the log of an exponential is not uniform. Let $Y$ be standard exponential, so $F_Y(y)=P(Y\leq y) = 1-e^{-y}\,,\quad y>0$. Let $V=\ln Y$. Then $F_V(v) = P(V\leq v) = P(\ln Y\leq v) = P(Y\leq e^v) = 1-e^{-e^v}\,,\quad v<0$. This is not a uniform. (Indeed $-V$ is a Gumbel -distributed random variable, so you might call the distribution of $V$ a 'flipped Gumbel'.) However, in each case we can see it more quickly by simply considering the bounds on random variables. If $U$ is uniform(0,1) it lies between 0 and 1 so $X=\exp(U)$ lies between $1$ and $e$ ... so it's not exponential. Similarly, for $Y$ exponential, $\ln Y$ is on $(-\infty,\infty)$, so that can't be uniform(0,1), nor indeed any other uniform. We could also simulate, and again see it right away: First, exponentiating a uniform -- [the blue curve is the density (1/x on the indicated interval) we worked out above...] Second, the log of a exponential: Which we can see is far from uniform! (If we differentiate the cdf we worked out before, which would give the density, it matches the shape we see here.) Indeed the inverse cdf method indicates that taking the negative of the log of a uniform(0,1) variate gives a standard exponential variate, and conversely, exponentiating the negative of a standard exponential gives a uniform. [Also see probability integral transform ] This method tells us that if $U=F_Y(Y)$, $Y = F^{-1}(U)$. 
If we apply the inverse of the cdf as a transformation on $U$, a standard uniform, the resulting random variable has distribution function $F_Y$. If we let $U$ be uniform(0,1), then $P(U\leq u) = u$. Let $Y=-\ln (1-U)$. (Note that $1-U$ is also uniform on (0,1) so you could actually let $Y=-\ln U$, but we're following the inverse cdf method in full here) Then $P(Y\leq y) = P(-\ln (1-U) \leq y) = P( 1-U \geq e^{-y}) = P( U \leq 1-e^{-y}) = 1-e^{-y}$, which is the cdf of a standard exponential. [This property of the inverse cdf transform is why the $\log$ transform is actually required to obtain an exponential distribution, and the probability integral transform is why exponentiating the negative of a negative exponential gets back to a uniform.] | {
"source": [
"https://stats.stackexchange.com/questions/234561",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/93593/"
]
} |
234,602 | I have about 40 variables for each subject in a human population. For each time period, people join and exit the study. As a made up example, I want to see whether there are increases in average spending on movies as time progresses. The problem is that my population is very volatile, there could be 15% male in one time period and 99% male in another. Given that this is the case, how can I figure you whether the increase I observe is due to actual increase, population change or just variance? What I'm looking for is what subject I should be learning to address this problem. A particular textbook on clustering, regression? Or something like that. I cannot change or resample the data I'm given and I'm looking for something that's masters or bachelors level. | It is not the case that exponentiating a uniform random variable gives an exponential, nor does taking the log of an exponential random variable yield a uniform. Let $U$ be uniform on $(0,1)$ and let $X=\exp(U)$. $F_X(x) = P(X \leq x) = P(\exp(U)\leq x) = P(U\leq \ln x) = \ln x\,,\quad 1<x<e$ So $f_x(x) = \frac{d}{dx} \ln x = \frac{1}{x}\,,\quad 1<x<e$. This is not an exponential variate. A similar calculation shows that the log of an exponential is not uniform. Let $Y$ be standard exponential, so $F_Y(y)=P(Y\leq y) = 1-e^{-y}\,,\quad y>0$. Let $V=\ln Y$. Then $F_V(v) = P(V\leq v) = P(\ln Y\leq v) = P(Y\leq e^v) = 1-e^{-e^v}\,,\quad v<0$. This is not a uniform. (Indeed $-V$ is a Gumbel -distributed random variable, so you might call the distribution of $V$ a 'flipped Gumbel'.) However, in each case we can see it more quickly by simply considering the bounds on random variables. If $U$ is uniform(0,1) it lies between 0 and 1 so $X=\exp(U)$ lies between $1$ and $e$ ... so it's not exponential. Similarly, for $Y$ exponential, $\ln Y$ is on $(-\infty,\infty)$, so that can't be uniform(0,1), nor indeed any other uniform. We could also simulate, and again see it right away: First, exponentiating a uniform -- [the blue curve is the density (1/x on the indicated interval) we worked out above...] Second, the log of a exponential: Which we can see is far from uniform! (If we differentiate the cdf we worked out before, which would give the density, it matches the shape we see here.) Indeed the inverse cdf method indicates that taking the negative of the log of a uniform(0,1) variate gives a standard exponential variate, and conversely, exponentiating the negative of a standard exponential gives a uniform. [Also see probability integral transform ] This method tells us that if $U=F_Y(Y)$, $Y = F^{-1}(U)$. If we apply the inverse of the cdf as a transformation on $U$, a standard uniform, the resulting random variable has distribution function $F_Y$. If we let $U$ be uniform(0,1), then $P(U\leq u) = u$. Let $Y=-\ln (1-U)$. (Note that $1-U$ is also uniform on (0,1) so you could actually let $Y=-\ln U$, but we're following the inverse cdf method in full here) Then $P(Y\leq y) = P(-\ln (1-U) \leq y) = P( 1-U \geq e^{-y}) = P( U \leq 1-e^{-y}) = 1-e^{-y}$, which is the cdf of a standard exponential. [This property of the inverse cdf transform is why the $\log$ transform is actually required to obtain an exponential distribution, and the probability integral transform is why exponentiating the negative of a negative exponential gets back to a uniform.] | {
"source": [
"https://stats.stackexchange.com/questions/234602",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/130790/"
]
} |
234,876 | I am new to Bayesian Statistics research. I heard from researchers that Bayesian researchers better implement MCMC by themselves rather than using tools like JAGS/Stan. May I ask what is the benefit of implementing MCMC algorithm by oneself (in a "not quite fast" languages like R), except for learning purpose? | In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. This is both a good deal of work and time and very likely to introduce bugs in the code. Blackbox samplers, such as Stan, already use very sophisticated samplers. Trust me, you will not code a sampler of this caliber just for one analysis! There are special cases in which in this will not be sufficient. For example, if you needed to do an analysis in real time (i.e. computer decision based on incoming data), these programs would not be a good idea. This is because Stan requires compiling C++ code, which may take considerably more time than just running an already prepared sampler for relatively simple models. In that case, you may want to write your own code. In addition, I believe there are special cases where packages like Stan do very poorly, such as Non-Gaussian state-space models (full disclosure: I believe Stan does poorly in this case, but do not know). In that case, it may be worth it to implement a custom MCMC. But this is the exception, not the rule! To be quite honest, I think most researchers who write samplers for a single analysis (and this does happen, I have seen it) do so because they like to write their own samplers. At the very least, I can say that I fall under that category (i.e. I'm disappointed that writing my own sampler is not the best way to do things). Also, while it does not make sense to write your own sampler for a single analysis , it can make a lot of sense to write your own code for a class of analyses . Being that JAGs, Stan, etc. are black-box samplers, you can always make things faster by specializing for a given model, although the amount of improvement is model dependent. But writing an extremely efficient sampler from the ground up is maybe 10-1,000 hours of work, depending on experience, model complexity etc. If you're doing research in Bayesian methods or writing statistical software, that's fine; it's your job. But if your boss says "Hey, you can you analyze this repeated measures data set?" and you spend 250 hours writing an efficient sampler, your boss is likely to be upset. In contrast, you could have written this model in Stan in, say, 2 hours, and had 2 minutes of run time
instead of the 1 minute run time achieved by the efficient sampler. | {
"source": [
"https://stats.stackexchange.com/questions/234876",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/100855/"
]
} |
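To give a concrete sense of what "writing your own sampler" means in the answer above, here is a deliberately minimal random-walk Metropolis sampler in R for the mean of normal data with known unit variance and a flat prior (the data, step size and iteration counts are all arbitrary). A toy like this is a few lines; an efficient, general-purpose sampler of the kind Stan or JAGS provides is a very different undertaking:
set.seed(1)
y <- rnorm(50, mean = 3, sd = 1)                          # hypothetical data
log_post <- function(mu) sum(dnorm(y, mu, 1, log = TRUE)) # log-likelihood (flat prior on mu)

n_iter <- 5000
mu <- numeric(n_iter)                                     # chain starts at mu = 0
for (i in 2:n_iter) {
  prop <- mu[i - 1] + rnorm(1, sd = 0.3)                  # random-walk proposal
  accept <- log(runif(1)) < log_post(prop) - log_post(mu[i - 1])
  mu[i] <- if (accept) prop else mu[i - 1]
}
mean(mu[-(1:1000)])                                       # posterior mean after burn-in, close to mean(y)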
234,881 | I've got a table with 59 different values for my response variable, with a different number of observations for each one of them (from 1 to 5e5, so that I observed the least frequent value once, and over 5e5 times the most frequent). This follows a normal/Gauss distribution for which I've already calculated mean, median, mode, variance and standard deviation. I've got around 5.8 million observations on this dataset. The mean is 64.833 while the mode and median are both 65, but I found out that 44% of the observations are higher than 65 and 48% of them are lower (8% of the observations have the value of 65). My standard deviation (for a mean of 64.833) is 5.305. Getting to the point, I want to know how significant this difference (those 4%) is, considering my standard deviation. How can I calculate that? Is it significant at, say, 5 or 10%, and, if not, what is the lowest p-value at which it would actually be significant? I'm currently thinking of using a hypothesis test with 64.833 as my sample mean (x) against 65 as the population mean (u), as H0: x=u
H1: x!=u z=(x-u)/(s/(sqrt(n)) My problem here is what I should use as 's' (standard deviation with 65 or 64.833 as mean?) and what I should be using as 'n' (the 5.8 million observations in the whole dataset?). Would that work, calculating z with those values? X as 64.833, u as 65, s as 5,305 (my standard deviation with 64.833 as mean) and n as 5.8 million (my sample size that lead me to 64.833 as mean)? Using that, I got z=-75.505, which got me a p-value of 0.00001... but I don't know if that should be it. It seems that due to the high n, pretty much anything would be significant, so I'm not sure if I should be using 5.8 million as my n or not... | In general, I would strongly suggest not coding your own MCMC for a real applied Bayesian analysis. This is both a good deal of work and time and very likely to introduce bugs in the code. Blackbox samplers, such as Stan, already use very sophisticated samplers. Trust me, you will not code a sampler of this caliber just for one analysis! There are special cases in which in this will not be sufficient. For example, if you needed to do an analysis in real time (i.e. computer decision based on incoming data), these programs would not be a good idea. This is because Stan requires compiling C++ code, which may take considerably more time than just running an already prepared sampler for relatively simple models. In that case, you may want to write your own code. In addition, I believe there are special cases where packages like Stan do very poorly, such as Non-Gaussian state-space models (full disclosure: I believe Stan does poorly in this case, but do not know). In that case, it may be worth it to implement a custom MCMC. But this is the exception, not the rule! To be quite honest, I think most researchers who write samplers for a single analysis (and this does happen, I have seen it) do so because they like to write their own samplers. At the very least, I can say that I fall under that category (i.e. I'm disappointed that writing my own sampler is not the best way to do things). Also, while it does not make sense to write your own sampler for a single analysis , it can make a lot of sense to write your own code for a class of analyses . Being that JAGs, Stan, etc. are black-box samplers, you can always make things faster by specializing for a given model, although the amount of improvement is model dependent. But writing an extremely efficient sampler from the ground up is maybe 10-1,000 hours of work, depending on experience, model complexity etc. If you're doing research in Bayesian methods or writing statistical software, that's fine; it's your job. But if your boss says "Hey, you can you analyze this repeated measures data set?" and you spend 250 hours writing an efficient sampler, your boss is likely to be upset. In contrast, you could have written this model in Stan in, say, 2 hours, and had 2 minutes of run time
instead of the 1 minute run time achieved by the efficient sampler. | {
"source": [
"https://stats.stackexchange.com/questions/234881",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/131000/"
]
} |
234,891 | I want to use deep learning in my project. I went through a couple of papers and a question occurred to me: is there any difference between convolution neural network and deep learning? Are these things the same or do they have any major differences, and which is better? | Deep Learning is the branch of Machine Learning based on Deep Neural Networks (DNNs), meaning neural networks with at the very least 3 or 4 layers (including the input and output layers). But for some people (especially non-technical), any neural net qualifies as Deep Learning, regardless of its depth. And others consider a 10-layer neural net as shallow. Convolutional Neural Networks (CNNs) are one of the most popular neural network architectures. They are extremely successful at image processing, but also for many other tasks (such as speech recognition, natural language processing, and more). The state of the art CNNs are pretty deep (dozens of layers at least), so they are part of Deep Learning. But you can build a shallow CNN for a simple task, in which case it's not (really) Deep Learning. But CNNs are not alone, there are many other neural network architectures out there, including Recurrent Neural Networks (RNN), Autoencoders, Transformers, Deep Belief Nets (DBN = a stack of Restricted Boltzmann Machines, RBM), and more. They can be shallow or deep. Note: even shallow RNNs can be considered part of Deep Learning since training them requires unrolling them through time, resulting in a deep net. | {
"source": [
"https://stats.stackexchange.com/questions/234891",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/64799/"
]
} |
235,007 | Except decision trees and logistic regression, what other classification models provide good interpretation? I am not interested in the accuracy or other parameters, only the interpretation of the results is important. | 1) I would argue that decision trees are not as interpretable as people make them out to be. They look interpretable, since each node is a simple, binary decision. The problem is that as you go down the tree, each node is conditional on every node above it. If your tree is only four or five levels deep, it's still not too difficult to convert one terminal node's path (four or five splits) into something interpretable (e.g. "this node reflects long-term customers who are high-income males with multiple accounts"), but trying to keep track of multiple terminal nodes is difficult. If all you have to do is convince a client that your model is interpretable ("look, each circle here has a simple yes/no decision in it, easy to understand, no?") then I'd keep decision trees in your list. If you want actionable interpretability, I'd suggest they might not make the cut. 2) Another issue is clarifying what you mean by "interpretability of results". I've run into interpretability in four contexts: The client being able to understand the methodology. (Not what you're asking about.) A Random Forest is pretty straightforwardly explainable by analogy, and most clients feel comfortable with it once it's explained simply. Explaining how the methodology fits a model. (I had a client who insisted I explain how a decision tree is fitted because they felt it would help them understand how to use the results more intelligently. After I did a very nice writeup, with lots of nice diagrams, they dropped the subject. It's not helpful to interpreting/understanding at all.) Again, I believe this is not what you're asking about. Once a model is fitted, interpreting what the model "believes" or "says" about the predictors. Here's where a decision tree looks interpretable, but is much more complex than first impressions. Logistic regression is fairly straightforward here. When a particular data point is classified, explaining why that decision was made. Why does your logistic regression say it's an 80% chance of fraud? Why does your decision tree say it's low-risk? If the client is satisfied with printing out the decision nodes leading to the terminal node, this is easy for a decision tree. If "why" needs to be summarized into human speak ("this person is rated a low risk because they are a long-term male customer who has high-income and multiple accounts with our firm"), it's a lot harder. So at one level of interpretability or explainability (#1 with a little #4, above), K-Nearest Neighbor is easy: "this customer was judged to be high risk because 8 out of 10 customers who have been previously evaluated and were most similar to them in terms of X, Y, and Z, were found to be high risk." At actionable, full level #4, it's not so interpretable. (I've thought of actually presenting the other 8 customers to them, but that would require them to drill down into those customers to manually figure out what those customers have in common, and thus what the rated customer has in common with them.) I've read a couple of papers recently about using sensitivity-analysis-like methods to try to come up with automated explanations of type #4. I don't have any at hand, though. Perhaps someone can throw some links into comments? | {
"source": [
"https://stats.stackexchange.com/questions/235007",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14730/"
]
} |
235,070 | I am a beginner in machine learning. I can do programming fine but the theory confuses me a lot of the times. What is the relation between Maximum Likelihood Estimation (MLE), Maximum A posteriori (MAP) estimate, and Expectation-Maximization (EM) algorithm? I see them used as the methods that actually do the optimization. | Imagine that you have some data $X$ and probabilistic model parametrized by $\theta$ , you are interested in learning about $\theta$ given your data. The relation between data, parameter and model is described using likelihood function $$ \mathcal{L}(\theta \mid X) = p(X \mid \theta) $$ To find the best fitting $\theta$ you have to look for such value that maximizes the conditional probability of $\theta$ given $X$ . Here things start to get complicated, because you can have different views on what $\theta$ is. You may consider it as a fixed parameter, or as a random variable. If you consider it as fixed, then to find it's value you need to find such value of $\theta$ that maximizes the likelihood function ( maximum likelihood method [ML]). On another hand, if you consider it as a random variable, then this means that it also has some distribution, so you need to make one more assumption about prior distribution of $\theta$ , i.e. $p(\theta)$ , and you will be using Bayes theorem for estimation $$ p(\theta \mid X) \propto p(X \mid \theta) \, p(\theta) $$ If you are not interested in estimating the posterior distribution of $\theta$ but only about point estimate that maximizes the posterior probability, then you will be using maximum a posteriori (MAP) method for estimating it. As about expectation-maximalization (EM), it is an algorithm that can be used in maximum likelihood approach for estimating certain kind of models (e.g. involving latent variables, or in missing data scenarios). Check the following threads to learn more: Maximum Likelihood Estimation (MLE) in layman terms What is the difference between Maximum Likelihood Estimation & Gradient Descent? Bayesian and frequentist reasoning in plain English Who Are The Bayesians? Is there a difference between the "maximum probability" and the "mode" of a parameter? What is the difference between "likelihood" and "probability"? Wikipedia entry on likelihood seems ambiguous Numerical example to understand Expectation-Maximization | {
"source": [
"https://stats.stackexchange.com/questions/235070",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/131017/"
]
} |
235,082 | I'm struggling with understanding confidence intervals and the common fallacies that I'm obviously not alone in doing. For example, why can't I say that I'm 95% confidence that the true value lies within my 95% confidence interval if I've made a measurement with a device that has a normal distributed error with known standard deviation? In an attempt to understand this better I read the submarine example in the article The Fallacy of Placing Confidence in Confidence Intervals by Morey, R.D., Hoekstra, R., Lee, M.D., Rouder, J.N., Wagenmakers, E-J, the example goes like this: "A 10-meter-long research submersible with several people on board has lost contact with its surface support vessel.
The submersible has a rescue hatch exactly halfway along its length, to which the support vessel will drop a rescue line. Because the rescuers only get one rescue attempt, it is crucial that when the line is dropped to the craft in the deep water that the line be as close as possible to this hatch. The researchers on the support vessel do not know where the submersible is, but they do know that it forms two distinctive bubbles. These bubbles could form anywhere along the craft’s length, independently, with equal probability, and float to the surface where they can be seen by the support
vessel." They go on by showing several different ways of calculating a 50% confidence interval which highlights the fallacies of confidence and precision very well. However I can't really translate the example into a real-world scenario. Say that I'm performing a measurement with a device that has a normal distributed error with known standard deviation. Are there other ways to calculate the 95% confidence interval besides $[x-1.96\sigma, x+1.96\sigma]$ in that scenario as well? | Imagine that you have some data $X$ and probabilistic model parametrized by $\theta$ , you are interested in learning about $\theta$ given your data. The relation between data, parameter and model is described using likelihood function $$ \mathcal{L}(\theta \mid X) = p(X \mid \theta) $$ To find the best fitting $\theta$ you have to look for such value that maximizes the conditional probability of $\theta$ given $X$ . Here things start to get complicated, because you can have different views on what $\theta$ is. You may consider it as a fixed parameter, or as a random variable. If you consider it as fixed, then to find it's value you need to find such value of $\theta$ that maximizes the likelihood function ( maximum likelihood method [ML]). On another hand, if you consider it as a random variable, then this means that it also has some distribution, so you need to make one more assumption about prior distribution of $\theta$ , i.e. $p(\theta)$ , and you will be using Bayes theorem for estimation $$ p(\theta \mid X) \propto p(X \mid \theta) \, p(\theta) $$ If you are not interested in estimating the posterior distribution of $\theta$ but only about point estimate that maximizes the posterior probability, then you will be using maximum a posteriori (MAP) method for estimating it. As about expectation-maximalization (EM), it is an algorithm that can be used in maximum likelihood approach for estimating certain kind of models (e.g. involving latent variables, or in missing data scenarios). Check the following threads to learn more: Maximum Likelihood Estimation (MLE) in layman terms What is the difference between Maximum Likelihood Estimation & Gradient Descent? Bayesian and frequentist reasoning in plain English Who Are The Bayesians? Is there a difference between the "maximum probability" and the "mode" of a parameter? What is the difference between "likelihood" and "probability"? Wikipedia entry on likelihood seems ambiguous Numerical example to understand Expectation-Maximization | {
"source": [
"https://stats.stackexchange.com/questions/235082",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/73593/"
]
} |
235,528 | I'm trying to understand how backpropagation works for a softmax/cross-entropy output layer. The cross entropy error function is $$E(t,o)=-\sum_j t_j \log o_j$$ with $t$ and $o$ as the target and output at neuron $j$, respectively. The sum is over each neuron in the output layer. $o_j$ itself is the result of the softmax function: $$o_j=softmax(z_j)=\frac{e^{z_j}}{\sum_j e^{z_j}}$$ Again, the sum is over each neuron in the output layer and $z_j$ is the input to neuron $j$: $$z_j=\sum_i w_{ij}o_i+b$$ That is the sum over all neurons in the previous layer with their corresponding output $o_i$ and weight $w_{ij}$ towards neuron $j$ plus a bias $b$. Now, to update a weight $w_{ij}$ that connects a neuron $j$ in the output layer with a neuron $i$ in the previous layer, I need to calculate the partial derivative of the error function using the chain rule: $$\frac{\partial E} {\partial w_{ij}}=\frac{\partial E} {\partial o_j} \frac{\partial o_j} {\partial z_{j}} \frac{\partial z_j} {\partial w_{ij}}$$ with $z_j$ as the input to neuron $j$. The last term is quite simple. Since there's only one weight between $i$ and $j$, the derivative is: $$\frac{\partial z_j} {\partial w_{ij}}=o_i$$ The first term is the derivation of the error function with respect to the output $o_j$: $$\frac{\partial E} {\partial o_j} = \frac{-t_j}{o_j}$$ The middle term is the derivation of the softmax function with respect to its input $z_j$ is harder: $$\frac{\partial o_j} {\partial z_{j}}=\frac{\partial} {\partial z_{j}} \frac{e^{z_j}}{\sum_j e^{z_j}}$$ Let's say we have three output neurons corresponding to the classes $a,b,c$ then $o_b = softmax(b)$ is: $$o_b=\frac{e^{z_b}}{\sum e^{z}}=\frac{e^{z_b}}{e^{z_a}+e^{z_b}+e^{z_c}} $$ and its derivation using the quotient rule: $$\frac{\partial o_b} {\partial z_{b}}=\frac{e^{z_b}*\sum e^z - (e^{z_b})^2}{(\sum_j e^{z})^2}=\frac{e^{z_b}}{\sum e^z}-\frac{(e^{z_b})^2}{(\sum e^z)^2}$$
$$=softmax(b)-softmax^2(b)=o_b-o_b^2=o_b(1-o_b)$$
Back to the middle term for backpropagation this means:
$$\frac{\partial o_j} {\partial z_{j}}=o_j(1-o_j)$$ Putting it all together I get $$\frac{\partial E} {\partial w_{ij}}= \frac{-t_j}{o_j}*o_j(1-o_j)*o_i=-t_j(1-o_j)*o_i$$ which means, if the target for this class is $t_j=0$, then I will not update the weights for this. That does not sound right. Investigating on this I found people having two variants for the softmax derivation, one where $i=j$ and the other for $i\ne j$, like here or here . But I can't make any sense out of this. Also I'm not even sure if this is the cause of my error, which is why I'm posting all of my calculations. I hope someone can clarify me where I am missing something or going wrong. | Note: I am not an expert on backprop, but now having read a bit, I think the following caveat is appropriate. When reading papers or books on neural nets, it is not uncommon for derivatives to be written using a mix of the standard summation/index notation , matrix notation , and multi-index notation (include a hybrid of the last two for tensor-tensor derivatives). Typically the intent is that this should be "understood from context", so you have to be careful! I noticed a couple of inconsistencies in your derivation. I do not do neural networks really, so the following may be incorrect. However, here is how I would go about the problem. First, you need to take account of the summation in $E$, and you cannot assume each term only depends on one weight. So taking the gradient of $E$ with respect to component $k$ of $z$, we have
$$E=-\sum_jt_j\log o_j\implies\frac{\partial E}{\partial z_k}=-\sum_jt_j\frac{\partial \log o_j}{\partial z_k}$$ Then, expressing $o_j$ as
$$o_j=\tfrac{1}{\Omega}e^{z_j} \,,\, \Omega=\sum_ie^{z_i} \implies \log o_j=z_j-\log\Omega$$
we have
$$\frac{\partial \log o_j}{\partial z_k}=\delta_{jk}-\frac{1}{\Omega}\frac{\partial\Omega}{\partial z_k}$$
where $\delta_{jk}$ is the Kronecker delta . Then the gradient of the softmax-denominator is
$$\frac{\partial\Omega}{\partial z_k}=\sum_ie^{z_i}\delta_{ik}=e^{z_k}$$
which gives
$$\frac{\partial \log o_j}{\partial z_k}=\delta_{jk}-o_k$$
or, expanding the log
$$\frac{\partial o_j}{\partial z_k}=o_j(\delta_{jk}-o_k)$$
Note that the derivative is with respect to $z_k$, an arbitrary component of $z$, which gives the $\delta_{jk}$ term ($=1$ only when $k=j$). So the gradient of $E$ with respect to $z$ is then
$$\frac{\partial E}{\partial z_k}=\sum_jt_j(o_k-\delta_{jk})=o_k\left(\sum_jt_j\right)-t_k \implies \frac{\partial E}{\partial z_k}=o_k\tau-t_k$$
where $\tau=\sum_jt_j$ is constant (for a given $t$ vector). This shows a first difference from your result: the $t_k$ no longer multiplies $o_k$. Note that for the typical case where $t$ is "one-hot" we have $\tau=1$ (as noted in your first link). A second inconsistency, if I understand correctly, is that the "$o$" that is input to $z$ seems unlikely to be the "$o$" that is output from the softmax. I would think that it makes more sense that this is actually "further back" in network architecture? Calling this vector $y$, we then have
$$z_k=\sum_iw_{ik}y_i+b_k \implies \frac{\partial z_k}{\partial w_{pq}}=\sum_iy_i\frac{\partial w_{ik}}{\partial w_{pq}}=\sum_iy_i\delta_{ip}\delta_{kq}=\delta_{kq}y_p$$ Finally, to get the gradient of $E$ with respect to the weight-matrix $w$, we use the chain rule
$$\frac{\partial E}{\partial w_{pq}}=\sum_k\frac{\partial E}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}=\sum_k(o_k\tau-t_k)\delta_{kq}y_p=y_p(o_q\tau-t_q)$$
giving the final expression (assuming a one-hot $t$, i.e. $\tau=1$)
$$\frac{\partial E}{\partial w_{ij}}=y_i(o_j-t_j)$$
where $y$ is the input on the lowest level (of your example). So this shows a second difference from your result: the "$o_i$" should presumably be from the level below $z$, which I call $y$, rather than the level above $z$ (which is $o$). Hopefully this helps. Does this result seem more consistent? Update: In response to a query from the OP in the comments, here is an expansion of the first step.
First, note that the vector chain rule requires summations (see here ). Second, to be certain of getting all gradient components, you should always introduce a new subscript letter for the component in the denominator of the partial derivative. So to fully write out the gradient with the full chain rule, we have
$$\frac{\partial E}{\partial w_{pq}}=\sum_i \frac{\partial E}{\partial o_i}\frac{\partial o_i}{\partial w_{pq}}$$
and
$$\frac{\partial o_i}{\partial w_{pq}}=\sum_k \frac{\partial o_i}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}$$
so
$$\frac{\partial E}{\partial w_{pq}}=\sum_i \left[ \frac{\partial E}{\partial o_i}\left(\sum_k \frac{\partial o_i}{\partial z_k}\frac{\partial z_k}{\partial w_{pq}}\right) \right]$$
In practice the full summations reduce, because you get a lot of $\delta_{ab}$ terms. Although it involves a lot of perhaps "extra" summations and subscripts, using the full chain rule will ensure you always get the correct result. | {
"source": [
"https://stats.stackexchange.com/questions/235528",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/131435/"
]
} |
235,673 | I'm wondering if there is any relationship among these 3 measures. I can't seem to make a connection among them by referring to the definitions (possibly because I am new to these definitions and am having a bit of a rough time grasping them). I know the range of the cosine similarity can be from 0 - 1, and that the pearson correlation can range from -1 to 1, and I'm not sure on the range of the z-score. I don't know, however, how a certain value of cosine similarity could tell you anything about the pearson correlation or the z-score, and vice versa? | The cosine similarity between two vectors $a$ and $b$ is just the angle between them $$\cos\theta = \frac{a\cdot b}{\lVert{a}\rVert \, \lVert{b}\rVert}$$ In many applications that use cosine similarity, the vectors are non-negative (e.g. a term frequency vector for a document), and in this case the cosine similarity will also be non-negative. For a vector $x$ the " $z$ -score" vector would typically be defined as $$z=\frac{x-\bar{x}}{s_x}$$ where $\bar{x}=\frac{1}{n}\sum_ix_i$ and $s_x^2=\overline{(x-\bar{x})^2}$ are the mean and variance of $x$ . So $z$ has mean 0 and standard deviation 1, i.e. $z_x$ is the standardized version of $x$ . For two vectors $x$ and $y$ , their correlation coefficient would be $$\rho_{x,y}=\overline{(z_xz_y)}$$ Now if the vector $a$ has zero mean, then its variance will be $s_a^2=\frac{1}{n}\lVert{a}\rVert^2$ , so its unit vector and z-score will be related by $$\hat{a}=\frac{a}{\lVert{a}\rVert}=\frac{z_a}{\sqrt n}$$ So if the vectors $a$ and $b$ are centered (i.e. have zero means), then their cosine similarity will be the same as their correlation coefficient. TL;DR Cosine similarity is a dot product of unit vectors. Pearson correlation is cosine similarity between centered vectors. The "Z-score transform" of a vector is the centered vector scaled to a norm of $\sqrt n$ . | {
"source": [
"https://stats.stackexchange.com/questions/235673",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/85959/"
]
} |
235,808 | I have a data set in the form of (features, binary output 0 or 1), but 1 happens pretty rarely, so just by always predicting 0, I get accuracy between 70% and 90% (depending on the particular data I look at). The ML methods give me about the same accuracy, and I feel, there should be some standard methods to apply in this situation, that would improve the accuracy over the obvious prediction rule. | Both hxd1011 and Frank are right (+1).
Essentially, resampling and/or cost-sensitive learning are the two main ways of getting around the problem of imbalanced data; a third is to use kernel methods, which sometimes might be less affected by the class imbalance.
Let me stress that there is no silver-bullet solution. By definition you have one class that is represented inadequately in your samples. Having said the above I believe that you will find the algorithms SMOTE and ROSE very helpful. SMOTE effectively uses a $k$-nearest neighbours approach to exclude members of the majority class while in a similar way creating synthetic examples of a minority class. ROSE tries to create estimates of the underlying distributions of the two classes using a smoothed bootstrap approach and sample them for synthetic examples. Both are readily available in R, SMOTE in the package DMwR and ROSE in the package with the same name . Both SMOTE and ROSE result in a training dataset that is smaller than the original one. I would probably argue that a better (or less bad) metric for the case of imbalanced data is using Cohen's $k$ and/or Receiver operating characteristic's Area under the curve .
Cohen's kappa directly controls for the expected (chance-level) accuracy, while AUC, being a function of sensitivity and specificity, is insensitive to disparities in the class proportions. Again, notice that these are just metrics that should be used with a large grain of salt. You should ideally adapt them to your specific problem, taking into account the gains and costs that correct and wrong classifications convey in your case. I have found that looking at lift-curves is actually rather informative for this matter.
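To make this concrete, here is a minimal R sketch (simulated data and arbitrary parameter choices of my own, purely for illustration) that rebalances only the training split with ROSE and then judges the fitted model on an untouched test set via AUC rather than raw accuracy; roc.curve() is ROSE's own helper:
library(ROSE)                      # ROSE() for rebalancing, roc.curve() for AUC
set.seed(1)
n  <- 2000
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(-3 + 1.5 * x1))          # positive class is rare
dat <- data.frame(y = factor(y), x1 = x1, x2 = x2)
idx   <- sample(n, 1500)
train <- dat[idx, ]
test  <- dat[-idx, ]                               # held out, left untouched
train_bal <- ROSE(y ~ ., data = train, seed = 1)$data   # rebalance the training data only
fit  <- glm(y ~ ., data = train_bal, family = binomial)
prob <- predict(fit, newdata = test, type = "response")
roc.curve(test$y, prob)                            # AUC computed on the untouched test set
An analogous DMwR::SMOTE() call could replace the ROSE() line, and a thresholded version of prob could be fed to caret::confusionMatrix() to obtain Cohen's kappa.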
Irrespective of your metric, you should use a separate test set to assess the performance of your algorithm; exactly because of the class imbalance, over-fitting is even more likely, so out-of-sample testing is crucial. Probably the most popular recent paper on the matter is Learning from Imbalanced Data by He and Garcia. It gives a very nice overview of the points raised by myself and in other answers. In addition I believe that the walk-through on Subsampling For Class Imbalances, presented by Max Kuhn as part of the caret package, is an excellent resource for a structured example of how under-/over-sampling as well as synthetic data creation measure against each other. | {
"source": [
"https://stats.stackexchange.com/questions/235808",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30893/"
]
} |
235,862 | Many neural network books and tutorials spend a lot of time on the backpropagation algorithm, which is essentially a tool to compute the gradient. Let's assume we are building a model with ~10K parameters / weights. Is it possible to run the optimization using some gradient free optimization algorithms? I think computing the numerical gradient would be too slow, but how about other methods such as Nelder-Mead, Simulated Annealing or a Genetic Algorithm? All the algorithms would suffer from local minima, why obsessed with gradient? | The first two algorithms you mention (Nelder-Mead and Simulated Annealing) are generally considered pretty much obsolete in optimization circles, as there are much better alternatives which are both more reliable and less costly. Genetic algorithms covers a wide range, and some of these can be reasonable. However, in the broader class of derivative-free optimization (DFO) algorithms, there are many which are significantly better than these "classics", as this has been an active area of research in recent decades. So, might some of these newer approaches be reasonable for deep learning? A relatively recent paper comparing the state of the art is the following: Rios, L. M., & Sahinidis, N. V. (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization. This is a nice paper which has many interesting insights into recent techniques. For example, the results clearly show that the best local optimizers are all "model-based", using different forms of sequential quadratic programming (SQP). However, as noted in their abstract "We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size." To give an idea of the numbers, for all problems the solvers were given a budget of 2500 function evaluations, and problem sizes were a maximum of ~300 parameters to optimize. Beyond O[10] parameters, very few of these optimizers performed very well, and even the best ones showed a noticable decay in performance as problem size was increased. So for very high dimensional problems, DFO algorithms just are not competitive with derivative based ones. To give some perspective, PDE (partial differential equation)-based optimization is another area with very high dimensional problems (e.g. several parameter for each cell of a large 3D finite element grid). In this realm, the " adjoint method " is one of the most used methods. This is also a gradient-descent optimizer based on automatic differentiation of a forward model code. The closest to a high-dimensional DFO optimizer is perhaps the Ensemble Kalman Filter , used for assimilating data into complex PDE simulations, e.g. weather models. Interestingly, this is essentially an SQP approach, but with a Bayesian-Gaussian interpretation (so the quadratic model is positive definite, i.e. no saddle points). But I do not think that the number of parameters or observations in these applications is comparable to what is seen in deep learning. Side note (local minima): From the little I have read on deep learning, I think the consensus is that it is saddle points rather than local minima, which are most problematic for high dimensional NN-parameter spaces. For example, the recent review in Nature says "Recent theoretical and empirical results strongly suggest that local minima are not a serious issue in general. 
Instead, the landscape is packed with a combinatorially large number of saddle points where the gradient is zero, and the surface curves up in most dimensions and curves down in the remainder." A related concern is about local vs. global optimization (for example this question pointed out in the comments). While I do not do deep learning, in my experience overfitting is definitely a valid concern. In my opinion, global optimization methods are most suited for engineering design problems that do not strongly depend on "natural" data. In data assimilation problems, any current global minima could easily change upon addition of new data (caveat: My experience is concentrated in geoscience problems, where data is generally "sparse" relative to model capacity). An interesting perspective is perhaps O. Bousquet & L. Bottou (2008) The tradeoffs of large scale learning. NIPS. which provides semi-theoretical arguments on why and when approximate optimization may be preferable in practice. End note (meta-optimization): While gradient based techniques seem likely to be dominant for training networks, there may be a role for DFO in associated meta-optimization tasks. One example would be hyper-parameter tuning. (Interestingly, the successful model-based DFO optimizers from Rios & Sahinidis could be seen as essentially solving a sequence of design-of-experiments/ response-surface problems.) Another example might be designing architectures, in terms of the set-up of layers (e.g. number, type, sequence, nodes/layer). In this discrete-optimization context genetic-style algorithms may be more appropriate. Note that here I am thinking of the case where connectivity is determined implicitly by these factors (e.g. fully-connected layers, convolutional layers, etc.). In other words the $\mathrm{O}[N^2]$ connectivity is $not$ meta-optimized explicitly. (The connection strength would fall under training, where e.g. sparsity could be promoted by $L_1$ regularization and/or ReLU activations ... these choices could be meta-optimized however.) | {
"source": [
"https://stats.stackexchange.com/questions/235862",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113777/"
]
} |
235,882 | Am I misunderstanding something? This is my code using sklearn: import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn import decomposition
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
pca = decomposition.PCA(n_components=3)
x = np.array([
[0.387,4878, 5.42],
[0.723,12104,5.25],
[1,12756,5.52],
[1.524,6787,3.94],
])
pca.fit_transform(x) Output: array([[ -4.25324997e+03, -8.41288672e-01, -8.37858943e-03],
[ 2.97275001e+03, -1.25977271e-01, 1.82476780e-01],
[ 3.62475003e+03, -1.56843494e-01, -1.65224286e-01],
[ -2.34425007e+03, 1.12410944e+00, -8.87390454e-03]]) Using numpy methods x_std = StandardScaler().fit_transform(x)
cov = np.cov(x_std.T)
ev , eig = np.linalg.eig(cov)
a = eig.dot(x_std.T) Output array([[ 0.06406894, 0.94063993, -1.62373172],
[-0.35357757, 0.7509653 , 0.63365168],
[ 0.29312477, 0.6710958 , 1.11766206],
[-0.00361615, -2.36270102, -0.12758202]]) I have kept all 3 components but it doesnt seem to allow me to retain my original data. May I know why is it so? If I want to obtain back my original matrix what should I do? | The difference is because decomposition.PCA does not standardize your variables before doing PCA, whereas in your manual computation you call StandardScaler to do the standardization. Hence, you are observing this difference: PCA on correlation or covariance? If you replace pca.fit_transform(x) with x_std = StandardScaler().fit_transform(x)
pca.fit_transform(x_std) you will get the same result as with manual computation... ...but only up to the order of the PCs. That is because when you run ev , eig = np.linalg.eig(cov) you get eigenvalues not necessarily in the decreasing order. I get array([ 0.07168571, 2.49382602, 1.43448827]) So you will want to order them manually. Sklearn does that for you. Regarding reconstructing original variables, please see How to reverse PCA and reconstruct original variables from several principal components? | {
"source": [
"https://stats.stackexchange.com/questions/235882",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/45720/"
]
} |
235,885 | I have a mixed model (using lme in R) with a random intercept. model <- lme(HCA ~ time, random=~1|subject, data=mydata) my supervisor asked me to extract the slopes of HCA for each individual, so that I can use this in another model.
I am in doubt here: as I have no random slope, wouldn't the slope be the same for each subject? Or should I add the intercept and the slope of time? Thanks! Here is the spaghetti plot | The difference is because decomposition.PCA does not standardize your variables before doing PCA, whereas in your manual computation you call StandardScaler to do the standardization. Hence, you are observing this difference: PCA on correlation or covariance? If you replace pca.fit_transform(x) with x_std = StandardScaler().fit_transform(x)
pca.fit_transform(x_std) you will get the same result as with manual computation... ...but only up to the order of the PCs. That is because when you run ev , eig = np.linalg.eig(cov) you get eigenvalues not necessarily in the decreasing order. I get array([ 0.07168571, 2.49382602, 1.43448827]) So you will want to order them manually. Sklearn does that for you. Regarding reconstructing original variables, please see How to reverse PCA and reconstruct original variables from several principal components? | {
"source": [
"https://stats.stackexchange.com/questions/235885",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/113780/"
]
} |
237,037 | How do we go about calculating a posterior with a prior N~(a, b) after observing n data points? I assume that we have to calculate the sample mean and variance of the data points and do some sort of calculation that combines the posterior with the prior, but I'm not quite sure what the combination formula looks like. | The basic idea of Bayesian updating is that given some data $X$ and prior over parameter of interest $\theta$, where the relation between data and parameter is described using likelihood function, you use Bayes theorem to obtain posterior $$ p(\theta \mid X) \propto p(X \mid \theta) \, p(\theta) $$ This can be done sequentially, where after seeing first data point $x_1$ prior $\theta$ becomes updated to posterior $\theta'$, next you can take second data point $x_2$ and use posterior obtained before $\theta'$ as your prior , to update it once again etc. Let me give you an example. Imagine that you want to estimate mean $\mu$ of normal distribution and $\sigma^2$ is known to you. In such case we can use normal-normal model. We assume normal prior for $\mu$ with hyperparameters $\mu_0,\sigma_0^2:$ \begin{align}
X\mid\mu &\sim \mathrm{Normal}(\mu,\ \sigma^2) \\
\mu &\sim \mathrm{Normal}(\mu_0,\ \sigma_0^2)
\end{align} Since normal distribution is a conjugate prior for $\mu$ of normal distribution, we have closed-form solution to update the prior \begin{align}
E(\mu' \mid x) &= \frac{\sigma^2\mu + \sigma^2_0 x}{\sigma^2 + \sigma^2_0} \\[7pt]
\mathrm{Var}(\mu' \mid x) &= \frac{\sigma^2 \sigma^2_0}{\sigma^2 + \sigma^2_0}
\end{align} Unfortunately, such simple closed-form solutions are not available for more sophisticated problems and you have to rely on optimization algorithms (for point estimates using maximum a posteriori approach), or MCMC simulation. Below you can see data example: n <- 1000
set.seed(123)
x <- rnorm(n, 1.4, 2.7)
mu <- numeric(n)
sigma <- numeric(n)
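# first update: combine the prior N(0, 10000) with the first observation x[1]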
mu[1] <- (10000*x[1] + (2.7^2)*0)/(10000+2.7^2)
sigma[1] <- (10000*2.7^2)/(10000+2.7^2)
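# afterwards, each step reuses the previous posterior N(mu[i-1], sigma[i-1]) as the prior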
for (i in 2:n) {
mu[i] <- ( sigma[i-1]*x[i] + (2.7^2)*mu[i-1] )/(sigma[i-1]+2.7^2)
sigma[i] <- ( sigma[i-1]*2.7^2 )/(sigma[i-1]+2.7^2)
} If you plot the results, you'll see how the posterior approaches the estimated value (its true value is marked by the red line) as new data is accumulated. For learning more you can check those slides and the Conjugate Bayesian analysis of the Gaussian distribution paper by Kevin P. Murphy. Check also Do Bayesian priors become irrelevant with large sample size? You can also check those notes and this blog entry for an accessible step-by-step introduction to Bayesian inference. | {
"source": [
"https://stats.stackexchange.com/questions/237037",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/132459/"
]
} |
237,086 | I read this article about Palantir's case where Deparment of Labor is accusing them in discrimination against Asians. Does anyone know where did they get these probability estimates from? I'm not getting 1/741 in item (a). (a) For the QA Engineer position, from a pool of more than 730 qualified applicants—approximately 77% of whom were Asian—Palantir hired six non-Asian applicants and only one Asian applicant. The adverse impact calculated by OFCCP exceeds three standard deviations. The likelihood that this result occurred according to chance is approximately one in 741. (b) For the Software Engineer position, from a pool of more than 1,160 qualified applicants—approximately 85% of whom were Asian—Palantir hired 14 non-Asian applicants and only 11 Asian applicants. The adverse impact calculated by OFCCP exceeds five standard deviations. The likelihood that this result occurred according to chance is approximately one in 3.4 million. (c) For the QA Engineer Intern position, from a pool of more than 130 qualified applicants—approximately 73% of whom were Asian—Palantir hired 17 non-Asian applicants and only four Asian applicants. The adverse impact calculated by OFCCP exceeds six standard deviations. The likelihood that this result occurred according to chance is approximately one in a
billion. | I am going to reverse-engineer this from experience with discrimination cases. I can definitely establish where the values of "one in 741," etc , came from. However, so much information was lost in translation that the rest of my reconstruction relies on having seen how people do statistics in courtroom settings. I can only guess at some of the details. Since the time anti-discrimination laws were passed in the 1960's (Title VI), the courts in the United States have learned to look at p-values and compare them to thresholds of $0.05$ and $0.01$. They have also learned to look at standardized effects, typically referred to as "standard deviations," and compare them to a threshold of "two to three standard deviations." In order to establish a prima facie case for a discrimination suit, plaintiffs typically attempt a statistical calculation showing a "disparate impact" that exceeds these thresholds. If such a calculation cannot be supported, the case usually cannot advance. Statistical experts for plaintiffs often attempt to phrase their results in these familiar terms. Some of the experts conduct a statistical test in which the null hypothesis expresses "no adverse impact," assuming employment decisions were purely random and ungoverned by any other characteristics of the employees. (Whether it is a one-tailed or two-tailed alternative may depend on the expert and the circumstances.) They then convert the p-value of this test into a number of "standard deviations" by referring it to the standard Normal distribution-- even when the standard Normal is irrelevant to the original test. In this roundabout way they hope to communicate their conclusions clearly to the judge. The favored test for data that can be summarized in contingency tables is Fisher's Exact Test. The occurrence of "Exact" in its name is particularly pleasing to the plaintiffs, because it connotes a statistical determination that has been made without error (whatever that might be!). Here, then, is my (speculative reconstruction) of the Department of Labor's calculations. They ran Fisher's Exact Test, or something like it (such as a $\chi^2$ test with a p-value determined via randomization). This test assumes a hypergeometric distribution as described in Matthew Gunn's answer. (For the small numbers of people involved in this complaint, the hypergeometric distribution is not well approximated by a Normal distribution.) They converted its p-value to a normal Z score ("number of standard deviations"). They rounded the Z score to the nearest integer: "exceeds three standard deviations," "exceeds five standard deviations," and "exceeds six standard deviations." (Because some of these Z-scores rounded the up to more standard deviations, I cannot justify the "exceeds"; all I can do is quote it.) In the complaint these integral Z scores were converted back to p-values! Again the standard Normal distribution was used. These p-values are described (arguably in a misleading way) as "the likelihood that this result occurred according to chance." To support this speculation, note that the p-values for Fisher's Exact Test in the three instances are approximately $1/1280$, $1/565000$, and $1/58000000$. These are based on assuming pools of $730$, $1160$, and $130$ corresponding to "more than" $730$, $1160$, and $130$, respectively. These numbers have normal Z scores of $-3.16$, $-4.64$, and $-5.52$, respectively, which when rounded are three, five, and six standard deviations, exactly the numbers appearing in the complaint. 
They correspond to (one-tailed) normal p-values of $1/741$, $1/3500000$, and $1/1000000000$: precisely the values cited in the complaint. Here is some R code used to perform these calculations. f <- function(total, percent.asian, hired.asian, hired.non.asian) {
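# rebuild the 2x2 race-by-hiring-status table and return the Fisher exact test p-value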
asian <- round(percent.asian/100 * total)
non.asian <- total-asian
x <- matrix(c(asian-hired.asian, non.asian-hired.non.asian, hired.asian, hired.non.asian),
nrow = 2,
dimnames=list(Race=c("Asian", "non-Asian"),
Status=c("Not hired", "Hired")))
s <- fisher.test(x)
s$p.value
}
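# convert each exact p-value to a Z score, round to a whole number of SDs, then map back
# to a one-tailed normal probability (reproducing roughly 1/741, 1/3.5 million, 1/1 billion)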
1/pnorm(round(qnorm(f(730, 77, 1, 6))))
1/pnorm(round(qnorm(f(1160, 85, 11, 14))))
1/pnorm(round(qnorm(f(130, 73, 4, 17)))) | {
"source": [
"https://stats.stackexchange.com/questions/237086",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36041/"
]
} |
237,512 | This is my data frame: Group <- c("G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G1","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G2","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3","G3")
Subject <- c("S1","S2","S3","S4","S5","S6","S7","S8","S9","S10","S11","S12","S13","S14","S15","S1","S2","S3","S4","S5","S6","S7","S8","S9","S10","S11","S12","S13","S14","S15","S1","S2","S3","S4","S5","S6","S7","S8","S9","S10","S11","S12","S13","S14","S15")
Value <- c(9.832217741,13.62390117,13.19671612,14.68552076,9.26683366,11.67886655,14.65083473,12.20969772,11.58494621,13.58474896,12.49053635,10.28208078,12.21945867,12.58276212,15.42648969,9.466436017,11.46582655,10.78725485,10.66159358,10.86701127,12.97863424,12.85276916,8.672953949,10.44587257,13.62135205,13.64038394,12.45778874,8.655142642,10.65925259,13.18336949,11.96595556,13.5552118,11.8337142,14.01763101,11.37502161,14.14801305,13.21640866,9.141392359,11.65848845,14.20350364,14.1829714,11.26202565,11.98431285,13.77216009,11.57303893)
data <- data.frame(Group, Subject, Value) Then I run a linear-mixed effects model to compare the 3 Groups' difference on "Value", where "Subject" is the random factor: library(lme4)
library(lmerTest)
model <- lmer (Value~Group + (1|Subject), data = data)
summary(model) The results are: Fixed effects:
Estimate Std. Error df t value Pr(>|t|)
(Intercept) 12.48771 0.42892 31.54000 29.114 <2e-16 ***
GroupG2 -1.12666 0.46702 28.00000 -2.412 0.0226 *
GroupG3 0.03828 0.46702 28.00000 0.082 0.9353 However, how to compare Group2 with Group3? What is the convention in academic article? | You could use emmeans::emmeans() or lmerTest::difflsmeans() , or multcomp::glht() . I prefer emmeans (previously lsmeans ). library(emmeans)
emmeans(model, list(pairwise ~ Group), adjust = "tukey") The next option is difflsmeans . Note difflsmeans cannot correct for multiple comparisons, and uses the Satterthwaite method for calculating degrees of freedom as default instead of the Kenward-Roger method used by default by emmeans , so it might be best to explicitly specify the method you prefer. library(lmerTest)
difflsmeans(model, test.effs = "Group", ddf="Kenward-Roger") The multcomp::glht() method is described in the other answer to this question, by Hack-R. Also, you can get the ANOVA p-values by loading lmerTest and then using anova . library(lmerTest)
lmerTest::anova(model) Just to be clear, you intended for the Value to be assessed three times for each subject, right? It looks like Group is "within-subjects", not "between-subjects." | {
"source": [
"https://stats.stackexchange.com/questions/237512",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56707/"
]
} |
237,674 | I know this question has been asked with the case mean=median, but I did not find anything related to mean=mode. If the mode equals the mean, can I always conclude this is a symmetric distribution? Will I be forced to know also the median for this manner? | Mean = mode doesn't imply symmetry. Even if mean = median = mode you still don't necessarily have symmetry. And in anticipation of the potential followup -- even if mean=median=mode and the third central moment is zero (so moment-skewness is 0), you still don't necessarily have symmetry. ... but there was a followup to that one. NickT asked in comments if having all odd moments zero was enough to require symmetry. The answer to that is also no. [See the discussion at the end.$^\dagger$] Those various things are all implied by symmetry (assuming the relevant moments are finite) but the implication doesn't go the other way - in spite of many an elementary text clearly saying otherwise about one or more of them. Counterexamples are pretty trivial to construct. Consider the following discrete distribution: x -4 0 1 5
P(X=x) 0.2 0.4 0.3 0.1 It has mean, median, mode and third central moment (and hence moment-skewness) all 0 but it is asymmetric. This sort of example can be done with a purely continuous distribution as well. For example, here's a density with the same properties: This is a mixture of symmetric triangular densities (each with range 2) with means at
-6, -4, -3, -1, 0, 1, 2, 5 and mixture weights 0.08, 0.08, 0.12, 0.08, 0.28, 0.08, 0.08, 0.20 respectively. The fact that I just made this now -- having never seen it before -- suggests how simple these cases are to construct. [I chose triangular mixture components in order that the mode would be visually unambiguous -- a smoother distribution could have been used.] Here's an additional discrete example to address Hong Ooi's questions about how far from symmetry these conditions allow you to get. This is by no means a limiting case, it's just illustrating that it's simple to make a less symmetric looking example: x -2 0 1 6
P(X=x) 0.175 0.5 0.32 0.005 The spike at 0 can be made relatively higher or lower without changing the conditions; similarly the point out to the right can be placed further away (with a reduction in probability) without changing the relative heights at 1 and -2 by much (i.e. their relative probability will stay close to the 2:1 ratio as you move the rightmost element about). More detail on the response to NickT's question $\dagger$ The all-odd-moments zero case is addressed in a number of questions on site. There's an example here (see the plot) based on the details here (see toward the end of the answer). That is a continuous unimodal asymmetric density with all odd moments 0 and mean=median=mode. The median is 0 by the 50-50 mixture construction, the mode is 0 by inspection -- all members of the family on the real half-line from which the example is constructed have a density that's monotonic decreasing from a finite value at the origin, and the mean is zero because all odd moments are 0. | {
"source": [
"https://stats.stackexchange.com/questions/237674",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74669/"
]
} |
238,214 | I used to think that "random effects model" in econometrics corresponds to a "mixed model with random intercept" outside of econometrics, but now I am not sure. Does it? Econometrics uses terms like "fixed effects" and "random effects" somewhat differently from the literature on mixed models, and this causes a notorious confusion. Let us consider a simple situation where $y$ linearly depends on $x$ but with a different intercept in different groups of measurements: $$y_{it} = \beta x_{it} + u_i + \epsilon_{it}.$$ Here each unit/group $i$ is observed at different timepoints $t$. Econometricians call it "panel data". In mixed models terminology, we can treat $u_i$ as a fixed effect or as a random effect (in this case, it's random intercept). Treating it as fixed means fitting $\hat \beta$ and $\hat u_i$ to minimize squared error (i.e. running OLS regression with dummy group variables). Treating it as random means that we additionally assume that $u_i\sim\mathcal N(u_0,\sigma^2_u)$ and use maximum likelihood to fit $u_0$ and $\sigma^2_u$ instead of fitting each $u_i$ on its own. This leads to the "partial pooling" effect, where the estimates $\hat u_i$ get shrunk toward their mean $\hat u_0$. R formula when treating group as fixed: y ~ x + group
R formula when treating group as random: y ~ x + (1|group) In econometrics terminology, we can treat this whole model as a fixed effects model or as a random effects model. The first option is equivalent to the fixed effect above (but econometrics has its own way of estimating $\beta$ in this case, called "within" estimator ). I used to think that the second option is equivalent to the random effect above; e.g. @JiebiaoWang in his highly upvoted answer to What is a difference between random effects-, fixed effects- and marginal model? says that In econometrics, the random-effects model may only refer to random intercept model as in biostatistics Okay --- let us test if this understanding is correct. Here is some random data generated by @ChristophHanck in his answer to What is the difference between fixed effect, random effect and mixed effect models? (I put the data here on pastebin for those who do not use R): @Christoph does two fits using econometrics approaches: fe <- plm(stackY~stackX, data = paneldata, model = "within")
re <- plm(stackY~stackX, data = paneldata, model = "random") The first one yields the estimate of beta equal to -1.0451 , the second one 0.77031 (yes, positive!). I tried to reproduce it with lm and lmer : l1 = lm(stackY ~ stackX + as.factor(unit), data = paneldata)
l2 = lmer(stackY ~ stackX + (1|as.factor(unit)), data = paneldata) The first one yields -1.045 in perfect agreement with the within estimator above. Cool. But the second yields -1.026 , which is miles away from the random effects estimator. Heh? What is going on? In fact, what is plm even doing , when called with model = "random" ? Whatever it is doing, can one somehow understand it via the mixed models perspective? And what is the intuition behind whatever it is doing? I read in a couple of econometrics places that random effects estimator is a weighted average between the fixed effects estimator and the "between" estimator which is more or less regression slope if we do not include group identity in the model at all (this estimate is strongly positive in this case, around 4 .) E.g. @Andy writes here : The random effects estimator then uses a matrix weighted average of the within and between variation of your data. [...] This makes random effects more efficient[.] Why? Why would we want this weighted average? And in particular, why would we want it instead of running a mixed model? | Summary: the "random-effects model" in econometrics and a "random intercept mixed model" are indeed the same models, but they are estimated in different ways. The econometrics way is to use FGLS, and the mixed model way is to use ML. There are different algorithms of doing FGLS, and some of them (on this dataset) produce results that are very close to ML. 1. Differences between estimation methods in plm I will answer with my testing on plm(..., model = "random") and lmer() , using the data generated by @ChristophHanck. According to the plm package manual , there are four options for random.method : the method of estimation for the variance components in the random effects model. @amoeba used the default one swar (Swamy and Arora, 1972). For random effects models, four estimators of the transformation
parameter are available by setting random.method to one of "swar"
(Swamy and Arora (1972)) (default), "amemiya" (Amemiya (1971)),
"walhus" (Wallace and Hussain (1969)), or "nerlove" (Nerlove (1971)). I tested all the four options using the same data, getting an error for amemiya , and three totally different coefficient estimates for the variable stackX . The ones from using random.method='nerlove' and 'amemiya' are nearly equivalent to that from lmer() , -1.029 and -1.025 vs -1.026. They are also not very different from that obtained in the "fixed-effects" model, -1.045. # "amemiya" only works using the most recent version:
# install.packages("plm", repos="http://R-Forge.R-project.org")
re0 <- plm(stackY~stackX, data = paneldata, model = "random") #random.method='swar'
re1 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='amemiya')
re2 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='walhus')
re3 <- plm(stackY~stackX, data = paneldata, model = "random", random.method='nerlove')
l2 <- lmer(stackY~stackX+(1|as.factor(unit)), data = paneldata)
coef(re0) # (Intercept) stackX 18.3458553 0.7703073
coef(re1) # (Intercept) stackX 30.217721 -1.025186
coef(re2) # (Intercept) stackX -1.15584 3.71973
coef(re3) # (Intercept) stackX 30.243678 -1.029111
fixef(l2) # (Intercept) stackX 30.226295 -1.026482 Unfortunately I do not have time right now, but interested readers can find the four references, to check their estimation procedures. It would be very helpful to figure out why they make such a difference. I expect that for some cases, the plm estimation procedure using the lm() on transformed data should be equivalent to the maximum likelihood procedure utilized in lmer() . 2. Comparison between GLS and ML The authors of plm package did compare the two in Section 7 of their paper: Yves Croissant and Giovanni Millo, 2008, Panel Data Econometrics in R: The plm package . Econometrics deal mostly with non-experimental data. Great emphasis is put on specification procedures and misspecification testing. Model specifications tend therefore to be very simple, while great attention is put on the issues of endogeneity of the regressors, dependence
structures in the errors and robustness of the estimators under deviations from normality.
The preferred approach is often semi- or non-parametric, and heteroskedasticity-consistent
techniques are becoming standard practice both in estimation and testing. For all these reasons, [...] panel model estimation in econometrics is mostly
accomplished in the generalized least squares framework based on Aitken’s Theorem [...]. On the contrary, longitudinal data
models in nlme and lme4 are estimated by (restricted or unrestricted) maximum likelihood. [...] The econometric GLS approach has closed-form analytical solutions computable by standard linear algebra and, although the latter can sometimes get computationally heavy on
the machine, the expressions for the estimators are usually rather simple. ML estimation of
longitudinal models, on the contrary, is based on numerical optimization of nonlinear functions without closed-form solutions and is thus dependent on approximations and convergence
criteria. 3. Update on mixed models I appreciate that @ChristophHanck provided a thorough introduction about the four random.method used in plm and explained why their estimates are so different. As requested by @amoeba, I will add some thoughts on the mixed models (likelihood-based) and its connection with GLS. The likelihood-based method usually assumes a distribution for both the random effect and the error term. A normal distribution assumption is commonly used, but there are also some studies assuming a non-normal distribution. I will follow @ChristophHanck's notations for a random intercept model, and allow unbalanced data, i.e., let $T=n_i$. The model is
\begin{equation}
y_{it}= \boldsymbol x_{it}^{'}\boldsymbol\beta + \eta_i + \epsilon_{it}\qquad i=1,\ldots,m,\quad t=1,\ldots,n_i
\end{equation}
with $\eta_i \sim N(0,\sigma^2_\eta), \epsilon_{it} \sim N(0,\sigma^2_\epsilon)$. For each $i$, $$\boldsymbol y_i \sim N(\boldsymbol X_{i}\boldsymbol\beta, \boldsymbol\Sigma_i), \qquad\boldsymbol\Sigma_i = \sigma^2_\eta \boldsymbol 1_{n_i} \boldsymbol 1_{n_i}^{'} + \sigma^2_\epsilon \boldsymbol I_{n_i}.$$
So the log-likelihood function is $$const -\frac{1}{2} \sum_i\mathrm{log}|\boldsymbol\Sigma_i| - \frac{1}{2} \sum_i(\boldsymbol y_i - \boldsymbol X_{i}\boldsymbol\beta)^{'}\boldsymbol\Sigma_i^{-1}(\boldsymbol y_i - \boldsymbol X_{i}\boldsymbol\beta).$$ When all the variances are known, as shown in Laird and Ware (1982), the MLE is
$$\hat{\boldsymbol\beta} = \left(\sum_i\boldsymbol X_i^{'} \boldsymbol\Sigma_i^{-1} \boldsymbol X_i \right)^{-1} \left(\sum_i \boldsymbol X_i^{'} \boldsymbol\Sigma_i^{-1} \boldsymbol y_i \right),$$
which is equivalent to the GLS $\hat\beta_{RE}$ derived by @ChristophHanck. So the key difference is in the estimation for the variances. Given that there is no closed-form solution, there are several approaches: directly maximization of the log-likelihood function using optimization algorithms; Expectation-Maximization (EM) algorithm: closed-form solutions exist, but the estimator for $\boldsymbol \beta$ involves empirical Bayesian estimates of the random intercept; a combination of the above two, Expectation/Conditional Maximization Either (ECME) algorithm (Schafer, 1998; R package lmm ). With a different parameterization, closed-form solutions for $\boldsymbol \beta$ (as above) and $\sigma^2_\epsilon$ exist. The solution for $\sigma^2_\epsilon$ can be written as $$\sigma^2_\epsilon = \frac{1}{\sum_i n_i}\sum_i(\boldsymbol y_i - \boldsymbol X_{i} \hat{\boldsymbol\beta})^{'}(\hat\xi \boldsymbol 1_{n_i} \boldsymbol 1_{n_i}^{'} + \boldsymbol I_{n_i})^{-1}(\boldsymbol y_i - \boldsymbol X_{i} \hat{\boldsymbol\beta}),$$ where $\xi$ is defined as $\sigma^2_\eta/\sigma^2_\epsilon$ and can be estimated in an EM framework. In summary, MLE has distribution assumptions, and it is estimated in an iterative algorithm. The key difference between MLE and GLS is in the estimation for the variances. Croissant and Millo (2008) pointed out that While under normality, homoskedasticity and no serial correlation of the errors OLS are also the maximum likelihood estimator, in all the other cases there are important differences. In my opinion, for the distribution assumption, just as the difference between parametric and non-parametric approaches, MLE would be more efficient when the assumption holds, while GLS would be more robust. | {
"source": [
"https://stats.stackexchange.com/questions/238214",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
238,246 | Typically when one takes random sample averages of a distribution (with sample size greater than 30) one obtains a normal distribution centering around the mean value. However, I heard that the Cauchy distribution has no mean value. What distribution does one obtain then when obtaining sample means of the Cauchy distribution? Basically for a Cauchy distribution $\mu_x$ is undefined so what is $\mu_{\bar{x}}$ and what is the distribution of $\bar{x}$? | If $X_1, \ldots, X_n$ are i.i.d. Cauchy $(0, 1)$ then we can show that $\bar{X}$ is also Cauchy $(0, 1)$ using a characteristic function argument: \begin{align}
\varphi_{\bar{X}}(t) &= \text{E} \left (e^{it \bar{X}} \right ) \\
&= \text{E} \left ( \prod_{j=1}^{n} e^{it X_j / n} \right ) \\
&= \prod_{j=1}^{n} \text{E} \left ( e^{it X_j / n} \right ) \\
&= \text{E} \left (e^{it X_1 / n} \right )^n \\
&= \left( e^{- |t| / n} \right)^n \\
&= e^{- |t|}
\end{align} which is the characteristic function of the standard Cauchy distribution. The proof for the more general Cauchy $(\mu, \sigma)$ case is basically identical. | {
"source": [
"https://stats.stackexchange.com/questions/238246",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/111109/"
]
} |
238,458 | I understand the procedure and what it controls. So what's the formula for the adjusted p-value in the BH procedure for multiple comparisons? Just now I realized the original BH didn't produce adjusted p-values, only adjusted the (non) rejection condition: https://www.jstor.org/stable/2346101 . Gordon Smyth introduced adjusted BH p-values in 2002 anyways, so the question still applies. It's implemented in R as p.adjust with method BH . | The famous seminal Benjamini & Hochberg (1995) paper described the procedure for accepting/rejecting hypotheses based on adjusting the alpha levels. This procedure has a straightforward equivalent reformulation in terms of adjusted $p$ -values, but it was not discussed in the original paper. According to Gordon Smyth , he introduced adjusted $p$ -values in 2002 when implementing p.adjust in R. Unfortunately, there is no corresponding citation, so it has always been unclear to me what one should cite if one uses BH-adjusted $p$ -values. Turns out, the procedure is described in the Benjamini, Heller, Yekutieli (2009) : An alternative way of presenting the results of this procedure is by presenting the adjusted $p$ -values. The BH-adjusted $p$ -values are defined as $$p^\mathrm{BH}_{(i)} = \min\Big\{\min_{j\ge i}\big\{\frac{mp_{(j)}}{j}\big\},1\Big\}.$$ This formula looks more complicated than it really is. It says: First, order all $p$ -values from small to large. Then multiply each $p$ -value by the total number of tests $m$ and divide by its rank order. Second, make sure that the resulting sequence is non-decreasing: if it ever starts decreasing, make the preceding $p$ -value equal to the subsequent (repeatedly, until the whole sequence becomes non-decreasing). If any $p$ -value ends up larger than 1, make it equal to 1. This is a straightforward reformulation of the original BH procedure from 1995. There might exist an earlier paper that explicitly introduced the concept of BH-adjusted $p$ -values, but I am not aware of any. Update. @Zenit found that Yekutieli & Benjamini (1999) described the same thing already back in 1999: | {
"source": [
"https://stats.stackexchange.com/questions/238458",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60613/"
]
} |
238,529 | I'm trying to understand why the sum of two (or more) lognormal random variables approaches a lognormal distribution as you increase the number of observations. I've looked online and not found any results concerning this. Clearly if $X$ and $Y$ are independent lognormal variables, then by properties of exponents and gaussian random variables, $X \times Y$ is also lognormal. However, there is no reason to suggest that $X+Y$ is also lognormal. HOWEVER If you generate two independent lognormal random variables $X$ and $Y$, and let $Z=X+Y$, and repeat this process many many times, the distribution of $Z$ appears lognormal. It even appears to get closer to a lognormal distribution as you increase the number of observations. For example: After generating 1 million pairs, the distribution of the natural log of Z is given in the histogram below. This very clearly resembles a normal distribution, suggesting $Z$ is indeed lognormal. Does anyone have any insight or references to texts that may be of use in understanding this? | This approximate lognormality of sums of lognormals is a well-known rule of thumb; it's mentioned in numerous papers -- and in a number of posts on site. A lognormal approximation for a sum of lognormals by matching the first two moments is sometimes called a Fenton-Wilkinson approximation. You may find this document by Dufresne useful (available here , or here ). I have also in the past sometimes pointed people to Mitchell's paper Mitchell, R.L. (1968), "Permanence of the log-normal distribution." J. Optical Society of America . 58: 1267-1272. But that's now covered in the references of Dufresne. But while it holds in a fairly wide set of not-too-skew cases, it doesn't hold in general, not even for i.i.d. lognormals, not even as $n$ gets quite large. Here's a histogram of 1000 simulated values, each the log of the sum of fifty-thousand i.i.d lognormals: As you see ... the log is quite skew, so the sum is not very close to lognormal. Indeed, this example would also count as a useful example for people thinking (because of the central limit theorem) that some $n$ in the hundreds or thousands will give very close to normal averages; this one is so skew that its log is considerably right skew, but the central limit theorem nevertheless applies here; an $n$ of many millions* would be necessary before it begins to look anywhere near symmetric. * I have not tried to figure out how many but, because of the way that skewness of sums (equivalently, averages) behaves, a few million will clearly be insufficient Since more details were requested in comments, you can get a similar-looking result to the example with the following code, which produces 1000 replicates of the sum of 50,000 lognormal random variables with scale parameter $\mu=0$ and shape parameter $\sigma=4$ : res <- replicate(1000,sum(rlnorm(50000,0,4)))
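# if the sum were close to lognormal, this histogram of the logs would look normal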
hist(log(res),n=100) (I have since tried $n=10^6$ . Its log is still heavily right skew) | {
"source": [
"https://stats.stackexchange.com/questions/238529",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/129126/"
]
} |
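For reference, a hedged R sketch (not from the answer above) of the Fenton-Wilkinson idea: match the first two moments of a sum of n i.i.d. lognormals with a single lognormal and compare against simulation. The function name and the chosen parameters (n = 10, sigma = 1, a mildly skewed case where the approximation should work reasonably well) are mine, not the answer's.

# Moment-matched lognormal for the sum of n i.i.d. lognormal(mu, sigma^2) variables
fenton_wilkinson <- function(n, mu = 0, sigma = 1) {
  m1 <- n * exp(mu + sigma^2 / 2)                       # E[S]
  v  <- n * (exp(sigma^2) - 1) * exp(2 * mu + sigma^2)  # Var[S], using independence
  s2 <- log(1 + v / m1^2)                               # matched log-scale variance
  c(meanlog = log(m1) - s2 / 2, sdlog = sqrt(s2))
}
set.seed(2)
sums <- replicate(5000, sum(rlnorm(10, 0, 1)))          # n = 10, sigma = 1: mild skew
fw <- fenton_wilkinson(10, 0, 1)
qqplot(qlnorm(ppoints(5000), fw["meanlog"], fw["sdlog"]), sums,
       xlab = "Fenton-Wilkinson quantiles", ylab = "simulated sums")
abline(0, 1)                                            # rough agreement expected in this case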
238,538 | I want to see how 7 measures of text correction behaviour (time spent correcting the text, number of keystrokes, etc.) relate to each other. The measures are correlated. I ran a PCA to see how the measures projected onto PC1 and PC2, which avoided the overlap of running separate two-way correlation tests between the measures. I was asked why I was not using t-SNE, since the relationship between some of the measures might be non-linear. I can see how allowing for non-linearity would improve this, but I wonder if there is any good reason to use PCA in this case and not t-SNE? I'm not interested in clustering the texts according to their relationship to the measures, but rather in the relationship between the measures themselves. (I guess EFA could also be a better/another approach, but that's a different discussion.)
Compared to other methods, there are few posts on here about t-SNE, so the question seems worth asking. | $t$-SNE is a great piece of Machine Learning but one can find many reasons to use PCA instead of it. Off the top of my head, I will mention five. Like most other computational methodologies in use, $t$-SNE is no silver bullet and there are quite a few reasons that make it a suboptimal choice in some cases. Let me mention some points in brief: Stochasticity of final solution . PCA is deterministic; $t$-SNE is not. One gets a nice visualisation, then her colleague gets another visualisation, and then they start debating which looks better and whether a difference of $0.03\%$ in the $KL(P||Q)$ divergence is meaningful... In PCA the correct answer to the question posed is guaranteed. $t$-SNE might have multiple minima that might lead to different solutions. This necessitates multiple runs and raises questions about the reproducibility of the results. Interpretability of mapping . This relates to the above point but let's assume that a team has agreed on a particular random seed/run. Now the question becomes what this shows... $t$-SNE tries to map only local neighbourhoods correctly, so our insights from that embedding should be very cautious; global trends are not accurately represented (and that can potentially be a great thing for visualisation). On the other hand, PCA is just a rotation that diagonalises our initial covariance matrix, and the eigenvectors represent a new axis system in the space spanned by our original data. We can directly explain what a particular PCA does. Application to new/unseen data . $t$-SNE is not learning a function from the original space to the new (lower) dimensional one, and that's a problem. On that matter, $t$-SNE is a non-parametric learning algorithm, so approximating it with a parametric algorithm is an ill-posed problem. The embedding is learned by directly moving the data across the low-dimensional space. That means one does not get an eigenvector or a similar construct to use on new data. In contrast, with PCA the eigenvectors offer a new axis system that can be directly used to project new data (a small R illustration of this point is given below). [Apparently one could try training a deep network to learn the $t$-SNE mapping (you can hear Dr. van der Maaten at ~46' of this video suggesting something along these lines) but clearly no easy solution exists.] Incomplete data . Natively $t$-SNE does not deal with incomplete data. In fairness, PCA does not deal with them either, but numerous extensions of PCA for incomplete data (e.g. probabilistic PCA ) are out there and are almost standard modelling routines. $t$-SNE currently cannot handle incomplete data (aside from, obviously, training a probabilistic PCA first and passing the PC scores to $t$-SNE as inputs). The $k$ is not (too) small case. $t$-SNE solves a problem known as the crowding problem, effectively that somewhat similar points in higher dimensions collapse on top of each other in lower dimensions (more here ). Now as you increase the dimensions used, the crowding problem gets less severe, i.e. the problem you are trying to solve through the use of $t$-SNE gets attenuated. You can work around this issue but it is not trivial. Therefore if you need a $k$-dimensional vector as the reduced set and $k$ is not quite small, the optimality of the produced solution is in question. PCA on the other hand always offers the $k$ best linear combinations in terms of variance explained. (Thanks to @amoeba for noticing I made a mess when first trying to outline this point.)
I do not mention issues about computational requirements (e.g. speed or memory size) nor issues about selecting relevant hyperparameters (e.g. perplexity). I think these are internal issues of the $t$-SNE methodology and are irrelevant when comparing it to another algorithm. To summarise, $t$-SNE is great but, like all algorithms, it has its limitations when it comes to its applicability. I use $t$-SNE on almost any new dataset I get my hands on as an exploratory data analysis tool. I think, though, that it has certain limitations that do not make it nearly as applicable as PCA. Let me stress that PCA is not perfect either; for example, the PCA-based visualisations are often inferior to those of $t$-SNE. | {
"source": [
"https://stats.stackexchange.com/questions/238538",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/48460/"
]
} |
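To illustrate the new/unseen data point above, a small R sketch (mine, not the answer's; the use of the iris data and the split into training and held-out rows are arbitrary choices for illustration): PCA produces a mapping that predict() can apply to unseen rows, whereas a t-SNE embedding, e.g. from the Rtsne package, embeds only the rows it was given and has no built-in out-of-sample transform.

fit <- prcomp(iris[1:100, 1:4], scale. = TRUE)            # learn rotation on 100 rows
new_scores <- predict(fit, newdata = iris[101:150, 1:4])  # project the held-out rows
head(new_scores[, 1:2])                                   # PC1/PC2 scores for new data
# library(Rtsne)
# emb <- Rtsne(as.matrix(iris[1:100, 1:4]))$Y   # embedding of the 100 training rows only;
# # there is no predict() method to place iris[101:150, ] into the same embedding.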
238,581 | In my study I will be measuring workload with several metrics: heart-rate variability (HRV), electrodermal activity (EDA) and a subjective scale (IWS). After normalization the IWS has three values: workload lower than normal, workload is average, workload is higher than normal. I want to see how well the physiological measures can predict subjective workload. Therefore I want to use ratio data to predict ordinal values. According to: How do I run Ordinal Logistic Regression analysis in R with both numerical / categorical values? this is easily done by using the MASS::polr function. However, I also want to account for random effects such as between-subject differences, gender, smoking etc. Looking at this tutorial , I don't see how I can add random effects to MASS::polr . Alternatively lme4::glmer would then be an option, but this function only allows the prediction of binary data. Is it possible to add random effects to an ordinal logistic regression? | In principle you can make the machinery of any logistic mixed model software perform ordinal logistic regression by expanding the ordinal response variable into a series of binary contrasts between successive levels (e.g. see Dobson and Barnett, Introduction to Generalized Linear Models, section 8.4.6). However, this is a pain, and luckily there are a few options in R: the ordinal package, via the clmm and clmm2 functions (clmm = Cumulative Link Mixed Model); the mixor package, via the mixor function; the MCMCglmm package, via family="ordinal" (see ?MCMCglmm); and the brms package, e.g. via family="cumulative" (see ?brmsfamily). The latter two options are implemented within Bayesian MCMC frameworks. As far as I know, all of the functions quoted (with the exception of ordinal::clmm2) can handle multiple random effects (intercepts, slopes, etc.); most of them (maybe not MCMCglmm?) can handle choices of link function (logit, probit, etc.). A small clmm sketch on the ordinal package's built-in wine data is given below. (If I have time I will come back and revise this answer with a worked example of setting up ordinal models from scratch using lme4.) | {
"source": [
"https://stats.stackexchange.com/questions/238581",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/120367/"
]
} |
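As a concrete, hedged illustration of the first option (not part of the original answer), here is a minimal clmm call on the wine data shipped with the ordinal package. The mapping to the questioner's setting is only an analogy: the ordinal rating plays the role of the IWS category, the fixed effects stand in for the physiological measures and covariates such as gender, and judge stands in for the subject identifier.

library(ordinal)
data(wine)                    # bitterness ratings of wines by 9 judges (ordered factor 'rating')
fit <- clmm(rating ~ temp + contact + (1 | judge), data = wine, link = "logit")
summary(fit)                  # fixed-effect coefficients are on the cumulative logit scale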
238,726 | In many tutorials or manuals the narrative seems to imply that R and Python coexist as complementary components of the analysis process. To my untrained eye, however, it seems that both languages sort of do the same thing. So my question is whether there are really specialized niches for the two languages or if it's just a personal preference whether to use one or the other? | They are complementary. It is true that both can do the same things, yet this can be said of most languages. Each has its strengths and weaknesses. The common outlook seems to be that Python is best for data gathering and preparation, as well as for textual analysis. R is considered best for data analysis, as it is a statistical language first and foremost. R has a smorgasbord of packages for anything you can think of, but its staple is statistical analysis - from basic chi-square to factor analysis and hazard models, it is easy and robust. Some of the biggest names in statistics create R packages, and it has a lively community to help with your every need. ggplot2 is a standard in data visualization (graphs, etc.). R is a vectorized language, built to operate on whole vectors of data efficiently rather than through explicit loops. It also stores all data in RAM, which is a double-edged sword - it is snappy on smaller data sets (although some might argue with me), but it can't handle big data well (although it has packages to bypass this, such as ff ). Python is considerably easier to learn than R - especially for those who have previous programming experience. R is just... weird. Python is great at data retrieval, and is the language to use for web scraping (with the amazing beautifulsoup ). Python is known for its strength in string parsing and text manipulation. pandas is a great library for data manipulation, merging, transforming, etc., and is fast (and probably inspired by R). Python is great when you need to do some programming. This is not surprising as it is a general-purpose language. R, however, with all its extensions, was built by statisticians for statisticians. So while Python may be easier and better and faster at many applications, R would be the go-to platform for statistical analysis. | {
"source": [
"https://stats.stackexchange.com/questions/238726",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/103231/"
]
} |