Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?

Part 1. Pictures & theory
In this answer (a second one, additional to my other answer here) I will try to show in pictures that PCA does not restore covariance well (whereas it restores - maximizes - variance optimally).
As in a number of my answers on PCA and factor analysis, I will turn to the vector representation of variables in subject space. In this instance it is just a loading plot showing the variables and their component loadings. So we have the variables $X_1$ and $X_2$ (only two in the dataset), $F$ their 1st principal component, with loadings $a_1$ and $a_2$. The angle between the variables is also marked. The variables were centered beforehand, so their squared lengths, $h_1^2$ and $h_2^2$, are their respective variances.
The covariance between $X_1$ and $X_2$ is their scalar product, $h_1 h_2 \cos \phi$ (this cosine is the correlation value, by the way). The loadings of PCA, of course, capture the maximum possible share of the overall variance $h_1^2+h_2^2$ via $a_1^2+a_2^2$, the variance of component $F$.
Now, the covariance $h_1 h_2 \cos \phi = g_1 h_2$, where $g_1$ is the projection of variable $X_1$ on variable $X_2$ (the projection which is the regression prediction of the first by the second). And so the magnitude of the covariance could be rendered by the area of the rectangle below (with sides $g_1$ and $h_2$).
According to the so-called "factor theorem" (you might know it if you have read something on factor analysis), the covariance(s) between variables should be (closely, if not exactly) reproduced by the product of the loadings of the extracted latent variable(s) (read), that is, by $a_1 a_2$ in our particular case (if we recognize the principal component as our latent variable). That value of the reproduced covariance can be rendered by the area of a rectangle with sides $a_1$ and $a_2$. Let us draw that rectangle, aligned with the previous one, to compare. It is shown hatched below, and its area is nicknamed cov* (reproduced cov).
It's obvious that the two areas are quite dissimilar, with cov* being considerably larger in our example. The covariance got overestimated by the loadings of $F$, the 1st principal component. This contradicts anyone who might expect that PCA, by the 1st component alone of the two possible, would restore the observed value of the covariance.
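To make the overestimation concrete, here is a minimal numeric sketch of my own (not part of the original argument) in Python/NumPy. It uses two made-up centered variables, takes the PC1 loadings from their covariance matrix and compares the reproduced covariance $a_1 a_2$ with the observed one; the data and seed are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two correlated variables (hypothetical data, n = 1000 cases), centered
    n = 1000
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    X -= X.mean(axis=0)

    C = np.cov(X, rowvar=False)            # 2x2 covariance matrix
    eigval, eigvec = np.linalg.eigh(C)     # eigenvalues in ascending order
    # loadings of the 1st principal component: eigenvector * sqrt(eigenvalue)
    a = eigvec[:, -1] * np.sqrt(eigval[-1])

    cov_observed   = C[0, 1]               # h1 * h2 * cos(phi)
    cov_reproduced = a[0] * a[1]           # a1 * a2, the "factor theorem" product

    print(cov_observed, cov_reproduced)    # the reproduced value is larger in magnitude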
What could we do with our plot to enhance the reproduction? We can, for example, rotate the $F$ beam clockwise a bit, even until it superposes with $X_2$. When their lines coincide, it means that we have forced $X_2$ to be our latent variable. Then the loading $a_2$ (the projection of $X_2$ on it) will be $h_2$, and the loading $a_1$ (the projection of $X_1$ on it) will be $g_1$. Then the two rectangles become the same one - the one that was labeled cov - and so the covariance is reproduced perfectly. However, $g_1^2 + h_2^2$, the variance explained by the new "latent variable", is smaller than $a_1^2 + a_2^2$, the variance explained by the old latent variable, the 1st principal component (square and stack the sides of each of the two rectangles on the picture to compare). So we managed to reproduce the covariance, but at the expense of the amount of variance explained, by selecting another latent axis instead of the first principal component.
Our imagination or guess may suggest (I won't and possibly cannot prove it by math, I'm not a mathematician) that if we release the latent axis from the space defined by $X_1$ and $X_2$, the plane, allowing it to swing a bit towards us, we can find some optimal position of it - call it, say, $F^*$ - whereby the covariance is again reproduced perfectly by the emergent loadings ($a_1^* a_2^*$) while the variance explained ($a_1^{*2} + a_2^{*2}$) will be bigger than $g_1^2 + h_2^2$, albeit not as big as $a_1^2 + a_2^2$ of the principal component $F$.
I believe that this condition is achievable, particularly in that case when the latent axis $F^*$ gets drawn extending out of the plane in such a way as to pull a "hood" of two derived orthogonal planes, one containing the axis and $X_1$ and the other containing the axis and $X_2$. Then this latent axis we'll call the common factor, and our entire "attempt at originality" will be named factor analysis.
Part 2. A reply to @amoeba's "Update 2" in respect to PCA.
@amoeba is right and relevant in recalling the Eckart-Young theorem, which is fundamental to PCA and its congeneric techniques (PCoA, biplot, correspondence analysis) based on SVD or eigen-decomposition. According to it, the $k$ first principal axes of $\bf X$ optimally minimize $\bf ||X-X_k||^2$ - a quantity equal to $\bf tr(X'X)-tr(X_k'X_k)$ - as well as $\bf ||X'X-X_k'X_k||^2$. Here $\bf X_k$ stands for the data as reproduced by the $k$ first principal axes. $\bf X_k'X_k$ is known to be equal to $\bf W_k W_k'$, with $\bf W_k$ being the variable loadings of the $k$ components.
Does this minimization of $\bf ||X'X-X_k'X_k||^2$ remain optimal if we consider only the off-diagonal portions of the two symmetric matrices? Let's inspect it by experimenting.
500 random 10x6 matrices $\bf X$ were generated (uniform distribution). For each, after centering its columns, PCA was performed, and two reconstructed data matrices $\bf X_k$ were computed: one reconstructed by components 1 through 3 (the $k$ first, as usual in PCA), and the other reconstructed by components 1, 2, and 4 (that is, component 3 was replaced by the weaker component 4). The reconstruction error $\bf ||X'X-X_k'X_k||^2$ (sum of squared differences = squared Euclidean distance) was then computed for each of the two $\bf X_k$'s. These two values form one pair to show on a scatterplot.
The reconstruction error was computed each time in two versions: (a) whole matrices $\bf X'X$ and $\bf X_k'X_k$ compared; (b) only off-diagonals of the two matrices compared. Thus, we have two scatterplots, with 500 points each.
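The experiment is easy to replicate with a short script. Here is a minimal sketch of my own (not the original code, whose random numbers of course differ) under the stated setup: 500 random 10x6 matrices, components {1,2,3} vs {1,2,4}, whole-matrix vs off-diagonal error.

    import numpy as np

    rng = np.random.default_rng(1)

    def recon_error(G, G_k, offdiag_only=False):
        """Squared Euclidean distance between two scalar-product matrices."""
        D = G - G_k
        if offdiag_only:
            D = D - np.diag(np.diag(D))
        return np.sum(D ** 2)

    n_below = {"whole": 0, "offdiag": 0}
    for _ in range(500):
        X = rng.uniform(size=(10, 6))
        X -= X.mean(axis=0)                           # center columns
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        G = X.T @ X                                   # scalar-product matrix

        def reconstruct(idx):
            return (U[:, idx] * s[idx]) @ Vt[idx, :]  # reconstruction by the chosen components

        X_123 = reconstruct([0, 1, 2])                # components 1 through 3
        X_124 = reconstruct([0, 1, 3])                # components 1, 2 and 4

        for key, off in (("whole", False), ("offdiag", True)):
            e123 = recon_error(G, X_123.T @ X_123, off)
            e124 = recon_error(G, X_124.T @ X_124, off)
            if e124 < e123:                           # the "weaker" set fits better
                n_below[key] += 1

    print(n_below)   # "whole" should be 0; "offdiag" is typically greater than 0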
We see that on the "whole matrix" plot all points lie above the y=x line, which means that the reconstruction of the whole scalar-product matrix is always more accurate by "components 1 through 3" than by "components 1, 2, 4". This is in line with what the Eckart-Young theorem says: the first $k$ principal components are the best fitters.
However, when we look at the "off-diagonals only" plot we notice a number of points below the y=x line. It turns out that sometimes the reconstruction of the off-diagonal portions by "components 1 through 3" was worse than by "components 1, 2, 4". This automatically leads to the conclusion that the first $k$ principal components are not invariably the best fitters of the off-diagonal scalar products among the fitters available in PCA. For example, taking a weaker component instead of a stronger one may sometimes improve the reconstruction.
So, even within the domain of PCA itself, the senior principal components - which, as we know, do approximate the overall variance, and even the whole covariance matrix too - do not necessarily approximate the off-diagonal covariances. A better optimization of those is therefore required; and we know that factor analysis is the technique (or one of the techniques) that can offer it.
Part 3. A follow-up to @amoeba's "Update 3": Does PCA approach FA as the number of variables grows? Is PCA a valid substitute of FA?
I've conducted a lattice of simulation studies. A small number of population factor structures, loading matrices $\bf A$, were constructed from random numbers and converted into their corresponding population covariance matrices as $\bf R=AA'+ U^2$, with $\bf U^2$ being diagonal noise (unique variances). These covariance matrices were made with all variances equal to 1, so they were equal to their correlation matrices.
Two types of factor structure were designed - sharp and diffuse. A sharp structure has clear simple structure: loadings are either "high" or "low", with nothing intermediate, and (in my design) each variable is highly loaded by exactly one factor. The corresponding $\bf R$ is hence noticeably block-like. A diffuse structure does not differentiate between high and low loadings: they can be any random value within a bound, and no pattern within the loadings is imposed. Consequently, the corresponding $\bf R$ comes out smoother. Examples of the population matrices:
The number of factors was either $2$ or $6$. The number of variables was determined by the ratio k = number of variables per factor; k took the values $4,7,10,13,16$ in the study.
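As an illustration of this design (not the author's original code, which is not shown), here is a sketch in Python/NumPy that builds a sharp and a diffuse population loading matrix and the corresponding correlation matrix $\bf R = AA' + U^2$; the exact loading ranges are my own guesses.

    import numpy as np

    rng = np.random.default_rng(2)

    def population_R(m, k, sharp=True):
        """Population correlation matrix R = A A' + U^2 from a random loading matrix A.
        m = number of factors, k = variables per factor (so p = m*k).
        The loading ranges below are illustrative assumptions, not the study's values."""
        p = m * k
        if sharp:
            A = rng.uniform(0.0, 0.2, size=(p, m))          # "low" loadings
            for j in range(m):                              # one "high" block per factor
                A[j * k:(j + 1) * k, j] = rng.uniform(0.6, 0.8, size=k)
        else:
            bound = 0.9 / np.sqrt(m)                        # keeps communalities below 1
            A = rng.uniform(-bound, bound, size=(p, m))     # diffuse: no pattern
        U2 = np.diag(1.0 - np.sum(A ** 2, axis=1))          # unique variances
        return A @ A.T + U2                                 # diagonal equals 1

    R_sharp   = population_R(m=2, k=4, sharp=True)
    R_diffuse = population_R(m=6, k=4, sharp=False)
    print(np.round(R_sharp, 2))                             # visibly block-like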
For each of the few constructed population matrices $\bf R$, $50$ random realizations from the Wishart distribution (under sample size n=200) were generated. These were the sample covariance matrices. Each was factor-analyzed by FA (principal axis extraction) as well as by PCA. Additionally, each such covariance matrix was converted into the corresponding sample correlation matrix, which was also factored in the same ways. Lastly, I also performed factoring of the "parent" population covariance (= correlation) matrix itself. The Kaiser-Meyer-Olkin measure of sampling adequacy was always above 0.7.
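The following is a hedged sketch of one pass of this procedure (my own re-implementation, using a textbook iterated principal-axis scheme that may differ in detail from the software actually used): draw a sample covariance matrix from the Wishart distribution, convert it to a correlation matrix, factor it by PCA and by PAF, and compute the off-diagonal residuals against the population matrix.

    import numpy as np
    from scipy.stats import wishart

    def paf_loadings(R, m, n_iter=50):
        """Iterated principal-axis factoring: m factors from correlation matrix R."""
        h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))       # initial communalities (SMC)
        for _ in range(n_iter):
            Rr = R.copy()
            np.fill_diagonal(Rr, h2)                     # reduced correlation matrix
            val, vec = np.linalg.eigh(Rr)
            val, vec = val[::-1][:m], vec[:, ::-1][:, :m]
            A = vec * np.sqrt(np.clip(val, 0, None))     # factor loadings
            h2 = np.sum(A ** 2, axis=1)                  # updated communalities
        return A

    def pca_loadings(R, m):
        val, vec = np.linalg.eigh(R)
        return vec[:, ::-1][:, :m] * np.sqrt(val[::-1][:m])

    # a small sharp-structure population matrix: 2 factors, k=4 variables each
    m, k, p, n = 2, 4, 8, 200
    A_pop = np.full((p, m), 0.1)
    A_pop[:k, 0], A_pop[k:, 1] = 0.7, 0.7
    R_pop = A_pop @ A_pop.T + np.diag(1.0 - np.sum(A_pop ** 2, axis=1))

    # one Wishart realization of a sample covariance matrix, then its correlation matrix
    S = wishart.rvs(df=n - 1, scale=R_pop, random_state=3) / (n - 1)
    d = np.sqrt(np.diag(S))
    Rs = S / np.outer(d, d)

    off = ~np.eye(p, dtype=bool)
    for name, A in (("PCA", pca_loadings(Rs, m)), ("PAF", paf_loadings(Rs, m))):
        resid = (R_pop - A @ A.T)[off]                   # "population minus reproduced sample"
        print(name, round(float(np.mean(resid ** 2)), 5))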
For data with 2 factors, the analyses extracted 2, and also 1 as well as 3 factors ("underestimation" and "overestimation" of the correct number of factors regimes). For data with 6 factors, the analyses likewise extracted 6, and also 4 as well as 8 factors.
The aim of the study was to assess the covariance/correlation restoration quality of FA vs PCA. Therefore residuals of the off-diagonal elements were obtained. I registered residuals between the reproduced elements and the population matrix elements, as well as residuals between the former and the elements of the analyzed sample matrix. The residuals of the 1st type were conceptually more interesting.
Results obtained from analyses of sample covariance matrices and of sample correlation matrices showed certain differences, but all the principal findings turned out to be similar. Therefore I'm discussing (and showing results for) only the "correlation-mode" analyses.
1. Overall off-diagonal fit by PCA vs FA
The plots below show, against the various numbers of factors and different k, the ratio of the mean squared off-diagonal residual yielded by PCA to the same quantity yielded by FA. This is similar to what @amoeba showed in "Update 3". The lines on the plot represent average tendencies across the 50 simulations (I omit showing standard-error bars on them).
(Note: the results are about factoring of random sample correlation matrices, not about factoring the population matrix parental to them: it is silly to compare PCA with FA as to how well they explain a population matrix - FA will always win, and if the correct number of factors is extracted, its residuals will be almost zero, and so the ratio would rush towards infinity.)
Commenting these plots:
General tendency: as k (the number of variables per factor) grows, the PCA/FA overall misfit ratio decays towards 1. That is, with more variables PCA approaches FA in explaining off-diagonal correlations/covariances. (Documented by @amoeba in his answer.) Presumably the law approximating the curves is ratio = exp(b0 + b1/k) with b0 close to 0.
Ratio is greater w.r.t. residuals “sample minus reproduced sample” (left plot) than w.r.t. residuals “population minus reproduced sample” (right plot). That is (trivially), PCA is inferior to FA in fitting the matrix being immediately analyzed. However, lines on the left plot have faster rate of decrease, so by k=16 the ratio is below 2, too, as it is on the right plot.
With residuals “population minus reproduced sample”, the trends are not always convex or even monotonic (the unusual elbows are shown circled). So, as long as we are talking about explaining a population matrix of coefficients via factoring a sample, raising the number of variables does not uniformly bring PCA closer to FA in fitting quality, though the tendency is there.
Ratio is greater for m=2 factors than for m=6 factors in population (bold red lines are below bold green lines). Which means that with more factors acting in the data PCA sooner catches up with FA. For example, on the right plot k=4 yields ratio about 1.7 for 6 factors, while the same value for 2 factors is reached at k=7.
The ratio is higher if we extract more factors relative to the true number of factors. That is, PCA is only slightly worse a fitter than FA if at extraction we underestimate the number of factors, and it loses more to FA if the number of factors is correct or overestimated (compare thin lines with bold lines).
There is an interesting effect of the sharpness of the factor structure which appears only if we consider the residuals “population minus reproduced sample”: compare the grey and yellow plots on the right. If the population factors load the variables diffusely, the red lines (m=6 factors) sink to the bottom. That is, with a diffuse structure (such as loadings of chaotic numbers) PCA (performed on a sample) is only a little worse than FA in reconstructing the population correlations - even under small k, provided that the number of factors in the population isn't very small. This is probably the condition under which PCA is closest to FA and most warranted as its cheaper substitute. Whereas in the presence of a sharp factor structure PCA isn't so good at reconstructing the population correlations (or covariances): it approaches FA only in the perspective of large k.
2. Element-level fit by PCA vs FA: distribution of residuals
For every simulation experiment where factoring (by PCA or FA) of 50 random sample matrices from the population matrix was performed, distribution of residuals "population correlation minus reproduced (by the factoring) sample's correlation" was obtained for every off-diagonal correlation element. Distributions followed clear patterns, and examples of typical distributions are depicted right below. Results after PCA factoring are blue left sides and results after FA factoring are green right sides.
The principal finding is that
Population correlations that are pronounced in absolute magnitude are restored by PCA inadequately: the reproduced values are overestimated in magnitude.
But the bias vanishes as k (the ratio of the number of variables to the number of factors) increases. In the picture, when there are only k=4 variables per factor, PCA's residuals spread at an offset from 0. This is seen both when there are 2 factors and when there are 6 factors. But with k=16 the offset is hardly seen - it has almost disappeared and the PCA fit approaches the FA fit. No difference in the spread (variance) of residuals between PCA and FA is observed.
A similar picture is seen also when the number of factors extracted does not match the true number of factors: only the variance of the residuals changes somewhat.
The distributions shown above on a grey background pertain to the experiments with a sharp (simple) factor structure present in the population. When all the analyses were done in the situation of a diffuse population factor structure, it was found that the bias of PCA fades away not only with the rise of k, but also with the rise of m (the number of factors). Please see the scaled-down yellow-background attachments to the column "6 factors, k=4": there is almost no offset from 0 observed for the PCA results (the offset is still present with m=2, which is not shown in the picture).
Thinking that the described findings are important, I decided to inspect those residual distributions more deeply and plotted scatterplots of the residuals (Y axis) against the element (population correlation) value (X axis). Each of these scatterplots combines the results of all the many (50) simulations/analyses. The LOESS fit line (50% of local points used, Epanechnikov kernel) is highlighted. The first set of plots is for the case of a sharp factor structure in the population (the trimodality of the correlation values is therefore apparent):
Commenting:
We clearly see the reconstruction bias (described above) which is characteristic of PCA, as the skewed, negatively trending loess line: population correlations that are big in absolute value are overestimated by PCA of sample datasets. FA is unbiased (horizontal loess).
As k grows, PCA's bias diminishes.
PCA is biased irrespective of how many factors there are in the population: with 6 factors existent (and 6 extracted at analyses) it is similarly defective as with 2 factors existent (2 extracted).
The second set of plots below is for the case of diffuse factor structure in the population:
Again we observe the bias by PCA. However, as opposed to sharp factor structure case, the bias fades as the number of factors increases: with 6 population factors, PCA's loess line is not very far from being horizontal even under k only 4. This is what we've expressed by "yellow histograms" earlier.
One interesting phenomenon on both sets of scatterplots is that the loess lines for PCA are S-curved. This curvature shows up under other population factor structures (loadings) randomly constructed by me (I checked), although its degree varies and is often weak. It follows from the S-shape that PCA starts to distort correlations rapidly as they depart from 0 (especially under small k), but from some value on - around .30 or .40 - it stabilizes. I will not speculate at this time on the possible reason for that behavior, although I believe the "sinusoid" stems from the trigonometric nature of correlation.
3. Fit by PCA vs FA: Conclusions
As the overall fitter of the off-diagonal portion of a correlation/covariance matrix, PCA - when applied to analyze a sample matrix from a population - can be a fairly good substitute for factor analysis. This happens when the ratio of the number of variables to the number of expected factors is big enough. (The geometrical reason for the beneficial effect of the ratio is explained in the bottom Footnote $^1$.) When more factors exist, the required ratio may be smaller than with only a few factors. The presence of a sharp factor structure (simple structure existing in the population) hampers PCA from approaching the quality of FA.
The effect of sharp factor structure on the overall fit ability of PCA is apparent only as long as the residuals "population minus reproduced sample" are considered. Therefore one can fail to recognize it outside a simulation-study setting - in an observational study of a sample we don't have access to these important residuals.
Unlike factor analysis, PCA is a (positively) biased estimator of the magnitude of population correlations (or covariances) that are away from zero. The biasedness of PCA, however, decreases as the ratio of the number of variables to the number of expected factors grows. The biasedness also decreases as the number of factors in the population grows, but this latter tendency is hampered when a sharp factor structure is present.
I would remark that PCA fit bias and the effect of sharp structure on it can be uncovered also in considering residuals "sample minus reproduced sample"; I simply omitted showing such results because they seem not to add new impressions.
My very tentative, broad advice in the end might be to refrain from using PCA instead of FA for typical (i.e. with 10 or fewer factors expected in the population) factor-analytic purposes unless you have some 10+ times more variables than factors. And the fewer the factors, the larger the necessary ratio. I would further not recommend using PCA in place of FA at all whenever data with a well-established, sharp factor structure are analyzed - such as when factor analysis is done to validate a psychological test or questionnaire (under development or already launched) with articulated constructs/scales. PCA may be used as a tool for initial, preliminary selection of items for a psychometric instrument.
Limitations of the study. 1) I used only PAF method of factor extraction. 2) The sample size was fixed (200). 3) Normal population was assumed in sampling the sample matrices. 4) For sharp structure, there was modeled equal number of variables per factor. 5) Constructing population factor loadings I borrowed them from roughly uniform (for sharp structure - trimodal, i.e. 3-piece uniform) distribution. 6) There could be oversights in this instant examination, of course, as anywhere.
Footnote $1$. PCA will mimic results of FA and become the equivalent fitter of the correlations when - as said here - error variables of the model, called unique factors, become uncorrelated. FA seeks to make them uncorrelated, but PCA doesn't, they may happen to be uncorrelated in PCA. The major condition when it may occur is when the number of variables per number of common factors (components kept as common factors) is large.
Consider the following pics (if you need first to learn how to understand them, please read this answer):
For factor analysis to be able to restore the correlations successfully with few ($m$) common factors, the unique factors $U$, characterizing the statistically unique portions of the $p$ manifest variables $X$, must be uncorrelated. When PCA is used, the $p$ $U$s have to lie in the $(p-m)$-dimensional subspace of the $p$-dimensional space spanned by the $X$s, because PCA does not leave the space of the analyzed variables. Thus - see the left picture - with $m=1$ (the principal component $P_1$ is the extracted factor) and $p=2$ ($X_1$, $X_2$) analyzed, the unique factors $U_1$, $U_2$ are forced to superimpose on the remaining second component (which serves as the error of the analysis). Consequently they have to be correlated with $r=-1$. (In the picture, correlations equal the cosines of the angles between vectors.) The required orthogonality is impossible, and the observed correlation between the variables can never be restored (unless the unique factors are zero vectors, a trivial case).
But if you add one more variable ($X_3$), right picture, and still extract one principal component as the common factor, the three $U$s have to lie in a plane (defined by the remaining two principal components). Three arrows can span a plane so that the angles between them are smaller than 180 degrees. Thus freedom for the angles emerges. As a possible particular case, the angles can be about equal, 120 degrees. That is already not very far from 90 degrees, that is, from uncorrelatedness. This is the situation shown in the picture.
As we add a 4th variable, the 4 $U$s will span a 3D space; with 5 variables, 5 $U$s span a 4D space, and so on. The room for many of the angles simultaneously to get closer to 90 degrees expands. Which means that the room for PCA to approach FA in its ability to fit the off-diagonal triangles of the correlation matrix also expands.
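A small numeric check of this geometric argument (my own sketch, under an assumed one-factor population): keep one principal component, treat the residuals as the "unique factors", and watch their average absolute intercorrelation shrink as the number of variables p grows.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 2000

    for p in (2, 3, 5, 10, 20):
        # data from an assumed 1-factor model: X = f * loading + unique noise
        f = rng.normal(size=(n, 1))
        X = f @ np.full((1, p), 0.7) + 0.7 * rng.normal(size=(n, p))
        X -= X.mean(axis=0)

        # PCA, keep m=1 component; the residuals play the role of the unique factors U
        U_svd, s, Vt = np.linalg.svd(X, full_matrices=False)
        X1 = np.outer(U_svd[:, 0] * s[0], Vt[0])
        resid = X - X1

        RU = np.corrcoef(resid, rowvar=False)
        mean_abs_r = np.abs(RU[~np.eye(p, dtype=bool)]).mean()
        print(p, round(mean_abs_r, 3))   # tends towards 0 as p grows (p=2 gives 1.0)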
But true FA is usually able to restore the correlations even under a small ratio "number of variables / number of factors" because, as explained here (and see the 2nd picture there), factor analysis allows all the factor vectors (the common factor(s) and the unique ones) to deviate from lying in the variables' space. Hence there is room for the orthogonality of the $U$s even with only 2 variables $X$ and one factor.
The more diffuse (the less sharp) the factor structure - that is, the more unimodal or even the distribution of the factor loadings - the closer the above-said orthogonality comes to being realized, and the better PCA performs the role of FA.
The pictures above also give an obvious clue to why PCA overestimates correlations. In the left picture, for example, $r_{X_1X_2}= a_1a_2 - u_1u_2$, where the $a$s are the projections of the $X$s on $P_1$ (loadings of $P_1$) and the $u$s are the lengths of the $U$s (loadings of $P_2$). But the correlation as reconstructed by $P_1$ alone equals just $a_1a_2$, i.e. it is bigger than $r_{X_1X_2}$.
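For the two-variable case this identity is easy to verify numerically. A minimal sketch of my own follows; it assumes standardized variables (so the $h$'s equal 1), and note that with signed loadings the product $u_1 u_2$ comes out negative, matching the text's $-u_1 u_2$ written with positive lengths.

    import numpy as np

    for r in (0.2, 0.5, 0.8):
        R = np.array([[1.0, r], [r, 1.0]])
        val, vec = np.linalg.eigh(R)            # eigenvalues in ascending order
        a = vec[:, 1] * np.sqrt(val[1])         # loadings of P1: a1, a2
        u = vec[:, 0] * np.sqrt(val[0])         # loadings of P2: u1, u2 (the U's)
        print(r,
              round(a[0] * a[1], 3),            # reconstruction by P1 alone: (1+r)/2 > r
              round(a[0] * a[1] + u[0] * u[1], 3))   # full reconstruction: equals r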
(This is really a comment to @ttnphns's second answer)
As far as the different types of reproduction of covariance (assuming error) by PC and by FA are concerned, I've simply printed out the loadings/components of variance which occur in the two procedures; for the examples I took just 2 variables.
We assume the two items are constructed from one common factor plus item-specific factors. Here is that factor loadings matrix:
L_fa:
         f1      f2      f3
  X1:  0.894   0.447       .
  X2:  0.894       .   0.447
The correlation matrix by this is
C:
X1 X2
X1: 1.000 0.800
X2: 0.800 1.000
If we look at the loadings-matrix L_fa and interpret it as usual in FA that f2 and f3 are error terms/itemspecific error, we reproduce C without that error, receiving
C1_Fa
X1 X2
X1: 0.800 0.800
X2: 0.800 0.800
So we have perfectly reproduced the off-diagonal element, which is the covariance (and the diagonal is reduced)
If we look at the pca-solution (can be done by simple rotations) we get the two factors from the same correlation-matrix:
L_pca :
f1 f2
X1: 0.949 -0.316
X2: 0.949 0.316
Assuming the second factor as error we get the reproduced matrix of covariances
C1_PC :
X1 X2
X1: 0.900 0.900
X2: 0.900 0.900
where we've overestimated the true correlation. This is because we ignored the correcting negative partial covariance in the second factor = error.
Note that the PPCA would be identical with the first example.
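The numbers above are easy to reproduce. Here is a short sketch of my own (not the answerer's code) that rebuilds C from the stated FA loadings and compares the two rank-1 "common part" reconstructions; the PCA loadings come out as stated, up to arbitrary signs.

    import numpy as np

    # FA-style loadings: one common factor f1, two item-specific factors f2, f3
    L_fa = np.array([[0.894, 0.447, 0.000],
                     [0.894, 0.000, 0.447]])
    C = L_fa @ L_fa.T                     # approximately [[1.0, 0.8], [0.8, 1.0]]

    # reproduction keeping only the common factor f1 (error columns dropped)
    C1_FA = np.outer(L_fa[:, 0], L_fa[:, 0])          # off-diagonal = 0.8, exact

    # PCA loadings of the same C (two components)
    val, vec = np.linalg.eigh(C)
    L_pca = vec[:, ::-1] * np.sqrt(val[::-1])         # ~ [[0.949, -0.316], [0.949, 0.316]], up to sign

    # reproduction treating the 2nd component as "error" and dropping it
    C1_PC = np.outer(L_pca[:, 0], L_pca[:, 0])        # off-diagonal ~ 0.9, overestimated

    print(np.round(C, 3), np.round(C1_FA, 3), np.round(C1_PC, 3), sep="\n\n")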
With more items this is no longer so obvious, but it remains an inherent effect. That is why there is also the concept of MinRes extraction (or rotation?), and I've also seen something like maximum-determinant extraction, and...
[update] As for the question of @amoeba:
I understood the concept of "Minimal Residuals" ("MinRes") rotation as a method competing with the earlier methods of CFA computation, aiming to achieve the best reproduction of the off-diagonal elements of a correlation matrix. I learned this in the 80s/90s and didn't follow the development of factor analysis in recent years (as in depth as before), so possibly "MinRes" is out of fashion.
To compare it with the PCA solution: one can think of finding the PC solution by rotations of the factors when they are thought of as axes in a Euclidean space and the loadings are the coordinates of the items in that vector space.
Then for a pair of axes say x,y the sums-of-squares from the loadings of the x-axis and that of the y-axis are computed.
From this one can find the rotation angle by which we should rotate so that the sum of squares becomes maximal on the rotated x°-axis and minimal on the y°-axis (where the little circle indicates the rotated axes).
Doing this for all pairs of axes (where the x-axis is always the earlier one of the pair and the y-axis the later one, so for 4 factors we have only 6 pairs to rotate) and then repeating the whole process until the result is stable realizes the so-called "Jacobi method" for finding the principal-components solution: it will locate the first axis such that it collects the maximum possible sum of squared loadings ("SSqL", which also means "of the variance") on one axis in the current correlational configuration.
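Here is a small illustrative sketch of that Jacobi-style scheme as I read it (my own code, not the answerer's program): start from an arbitrary orthogonal loading basis of a correlation matrix and repeatedly rotate pairs of axes so that the earlier axis of each pair gets the maximal SSqL; the column sums of squares then converge to the eigenvalues, i.e. to the PCA solution.

    import numpy as np

    def jacobi_pca(R, sweeps=30):
        """Pairwise ("Jacobi") rotations of an arbitrary loading basis of R:
        each rotation gives the earlier axis of the pair the maximal sum of
        squared loadings (SSqL).  The columns converge to the PCA loadings."""
        L = np.linalg.cholesky(R)              # any L with L @ L.T == R will do
        p = L.shape[1]
        for _ in range(sweeps):
            for i in range(p - 1):
                for j in range(i + 1, p):
                    x = L[:, i].copy()
                    y = L[:, j].copy()
                    # angle that maximizes the SSqL on axis i
                    theta = 0.5 * np.arctan2(2 * (x @ y), x @ x - y @ y)
                    c, s = np.cos(theta), np.sin(theta)
                    L[:, i] = c * x + s * y
                    L[:, j] = -s * x + c * y
        return L

    rng = np.random.default_rng(5)
    X = rng.normal(size=(100, 6))
    R = np.corrcoef(X, rowvar=False)           # some 6x6 correlation matrix

    L = jacobi_pca(R)
    print(np.round(np.sum(L ** 2, axis=0), 4))       # column SSqL ...
    print(np.round(np.linalg.eigvalsh(R)[::-1], 4))  # ... equal the eigenvalues of R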
As far as I understood things, "MinRes" should look at the partial correlations instead of the SSqL; so it does not sum up the squares of the loadings (as done in the Jacobi PC rotation) but sums up the cross-products of the loadings within each factor - except for the "cross-products" (= squares) of the loadings of each item with itself.
After the criteria for the x and for the y-axis are computed it proceeds the same way as described for the iterative Jacobi-rotation.
Since the rotation criterion is numerically different from the maximum-SSqL criterion, the result/the rotational position will be different from the PCA solution. If it converges, it should instead provide the maximum possible partial correlation on one axis for the first factor, the next-maximal correlation on the next factor, and so on. The idea then seems to be to assume as many axes/factors as needed so that the remaining/residual partial covariance becomes marginal.
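Whatever the exact rotation mechanics, the MinRes criterion itself can be stated and checked directly: find loadings $\bf L$ (p x m) minimizing the sum of squared off-diagonal elements of $\bf R - LL'$. Below is a hedged sketch of that direct formulation of the criterion (my own, not the rotation procedure described above), using a generic optimizer.

    import numpy as np
    from scipy.optimize import minimize

    def minres_loadings(R, m):
        """Loadings minimizing the squared off-diagonal residuals of R - L L'."""
        p = R.shape[0]
        mask = ~np.eye(p, dtype=bool)

        def objective(l_flat):
            L = l_flat.reshape(p, m)
            return np.sum((R - L @ L.T)[mask] ** 2)

        # start from the PCA loadings of the first m components
        val, vec = np.linalg.eigh(R)
        L0 = vec[:, ::-1][:, :m] * np.sqrt(val[::-1][:m])
        res = minimize(objective, L0.ravel(), method="L-BFGS-B")
        return res.x.reshape(p, m)

    # example: a 1-factor population correlation matrix with all loadings 0.7
    p = 5
    R = np.full((p, p), 0.49)
    np.fill_diagonal(R, 1.0)

    val, vec = np.linalg.eigh(R)
    L_pca = vec[:, -1:] * np.sqrt(val[-1])
    L_mr  = minres_loadings(R, 1)

    mask = ~np.eye(p, dtype=bool)
    print(np.sum((R - L_mr  @ L_mr.T )[mask] ** 2))   # ~ 0: off-diagonals fit exactly
    print(np.sum((R - L_pca @ L_pca.T)[mask] ** 2))   # > 0: PCA overestimates them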
(Note this is only how I interpreted things; I've not seen that procedure explicitly written out, or cannot remember it at the moment. A description at MathWorld seems to express it rather in terms of formulae, like in amoeba's answer, and is likely more authoritative. I just found another reference in the R-project documentation, and a likely very good reference in the Gorsuch book on factor analysis, page 116, available via Google Books.)
As far as the different type of reproduction of covariance assuming error by PC and by FA is concerned, I've simply printed out the loadings/com | Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
(This is really a comment to @ttnphns's second answer)
As far as the different type of reproduction of covariance assuming error by PC and by FA is concerned, I've simply printed out the loadings/components of variance which occur in the two procedures; just for the examples I took 2 variables.
We assume the construction of the two items as of one common factor and itemspecific factors. Here is that factor-loadingsmatrix:
L_fa:
f1 f2 f3
X1: 0.894 0.447 .
X1: 0.894 . 0.447
The correlation matrix by this is
C:
X1 X2
X1: 1.000 0.800
X2: 0.800 1.000
If we look at the loadings-matrix L_fa and interpret it as usual in FA that f2 and f3 are error terms/itemspecific error, we reproduce C without that error, receiving
C1_Fa
X1 X2
X1: 0.800 0.800
X2: 0.800 0.800
So we have perfectly reproduced the off-diagonal element, which is the covariance (and the diagonal is reduced)
If we look at the pca-solution (can be done by simple rotations) we get the two factors from the same correlation-matrix:
L_pca :
f1 f2
X1: 0.949 -0.316
X2: 0.949 0.316
Assuming the second factor as error we get the reproduced matrix of covariances
C1_PC :
X1 X2
X1: 0.900 0.900
X2: 0.900 0.900
where we've overestimated the true correlation. This is because we ignored the correcting negative partial covariance in the second factor = error.
Note that the PPCA would be identical with the first example.
With more items this is no more so obvious but still an inherent effect. Therefore there is also the concept of MinRes-extraction (or -rotation?) and I've also seen something like maximum-determinant extraction and...
[update] As for the question of @amoeba:
I understood the concept of "Minimal Residuals" ("MinRes")-rotation as a concurring method to the earlier methods of CFA-computation, to achieve the best reproduction of the off-diagonal elements of a correlation matrix. I learned this in the 80'ies/90'ies and didn't follow the development of factor-analysis (as indepth as before in the recent years), so possibly "MinRes" is out of fashion.
To compare it with the PCA-solution: one can think of finding the pc-solution by rotations of the factors when they are thought as axes in an euclidean space and the loadings are the coordinates of the items in that vectorspace.
Then for a pair of axes say x,y the sums-of-squares from the loadings of the x-axis and that of the y-axis are computed.
From this one can find a rotation angle, by which we should rotate, to get the sums-of-squares in the rotated axes maximal on the x° and minimal on the y°-axis (where the little circle indicates the rotated axes).
Doing this for all pairs of axes (where only always the x-axis is the left and the y-axis is the right (so for 4 factors we have only 6 pairs of rotation)) and then repeat the whole process to a stable result realizes the so-called "Jacobi-method" for the finding of the principal components solution: it will locate the first axis such that it collects the maximum possible sum of squares of loadings ("SSqL") (which means also "of the variance") on one axis in the current correlational configuration.
As far as I understood things, "MinRes" should look at the partial correlations instead of the SSqL; so it does not sum up the squares of the loadings (as done in the Jacobi-pc-rotation) but is sums up the crossproducts of the loadings in each factor - except of the "crossproducts" (=squares) of the loadings of each item with itself.
After the criteria for the x and for the y-axis are computed it proceeds the same way as described for the iterative Jacobi-rotation.
Since the rotation-criterion is numerically different from the maximum-SSqL-criterion the result/the rotational position shall be different from the PCA-solution. If it converges it should instead provide the maximum possible partial correlation on one axis in the first factor, the next maximal correlation on the next factor and so on. The idea seems to be, then to assume so many axes/factors such that the remaining/residual partial covariance becomes marginal.
(Note this is only how I interpreted things, I've not seen that procedure explicitly written out (or cannot remember at the moment); a description at mathworld seems to express it rather in terms of the formulae like in amoeba's answer) and is likely more authoritative. Just found another reference in the R-project documentation and a likely very good reference in the Gorsuch book on factoranalysis, page 116, available via google-books) | Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysi
2,503 | Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis? | In my view, the notions of "PCA" and "FA" lie on a different dimension from the notions of "exploratory", "confirmatory" or maybe "inferential". So each of the two mathematical/statistical methods can be applied with any of the three approaches.
For instance, why should it be nonsensical to have a hypothesis that my data have a general factor and also the structure of a set of principal components (because my experiment with my electronic apparatus gave me nearly error-free data), and to test my hypothesis that the eigenvalues of the successive factors occur with a ratio of 75%? This is then PCA in a confirmatory framework.
On the other hand, it seems ridiculous that in our research team we create, with much work, an item battery for measuring violence between pupils, assuming 3 main behaviors (physical aggression, depression, searching for help from authorities/parents) and putting the corresponding questions into that battery ... and then "exploratorily" work out how many factors we have... instead of looking at how well our scale contains three recognizable factors (besides negligible item-specific and possibly even spuriously correlated error). And after that, when I've confirmed that our item battery indeed serves its intention, we might test the hypothesis that in the classes of younger children the loadings on the factor indicating "searching for help from authorities" are higher than those of older pupils. Hmm, again confirmatory...
And exploratory? I have a set of measures taken from microbiology research from 1960; the researchers had not much theory but sampled everything they could manage because their field was still very young, and I re-explore the dominant factor structure, assuming (for example) that all errors are of the same amount because of the optical precision of the microscope used (the PPCA ansatz, as I have just learned). Then I use the statistical (and subsequently the mathematical) model of FA, but in this case in an exploratory manner.
This is, at least, how I understand the terms.
Maybe I'm completely on the wrong track here, but I don't assume it.
P.S. In the 90s I wrote a small interactive program to explore the method of PCA and factor analysis down to the nuts and bolts. It was written in Turbo Pascal and can still only be run in a DOS window ("DOSBox" under Win7), but it has a really nice appeal: interactively switching factors to be included or not, then rotating, separating item-specific error variance (according to the SMC criterion or the equal-variances criterion (PPCA?)), switching the Kaiser option on and off, the use of the covariances on and off - all while the factor-loadings matrix is visible like in a spreadsheet and can be rotated with the basic rotation methods.
It is not highly sophisticated - no chi-square test, for instance - and is just intended for self-learning of the internal mathematical mechanics. It also has a "demo mode", where the program runs itself, showing explanatory comments on the screen and simulating the keyboard inputs that the user would normally make.
Whoever is interested in self-study or teaching with it can download it from my small software pages: inside-(R).zip. Just expand the files in the zip into a directory accessible by the DOS box and call "demoall.bat". In the third part of the "demoall" I've made a demonstration of how to model item-specific errors by rotations from an initial PCA solution...
2,504 | Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis? | Just one additional remark to @amoeba's long (and really great) answer on the character of the $\Psi$-estimate.
In your initial statements you have three $\Psi$: for PCA it is $\Psi = 0$, for PPCA it is $\Psi=\sigma^2 I$, and for FA you left $\Psi$ indeterminate.
But it should be mentioned that there is an infinite number of possible $\Psi$ (surely restricted), yet exactly one which minimizes the rank of the factor matrix. Let's call this $\Psi_{opt}$. The standard (automatic) estimate $\Psi_{std}$ is the diagonal matrix based on the SMCs, so let's write it as $\Psi_{std}= \alpha^2 D_{smc}$ (and some software does not even seem to attempt to optimize $\alpha$ down from $1$, although $\alpha \lt 1$ is generally required to prevent Heywood cases/negative definiteness). Moreover, even such an optimized $\alpha^2$ would not guarantee minimal rank of the remaining covariances; thus the two are usually not equal: in general $\Psi_{std} \ne \Psi_{opt}$.
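For reference, the SMCs themselves are easy to obtain from the inverse of the correlation matrix; a sketch (how they are then scaled into $\Psi_{std}$, e.g. the choice of $\alpha$, differs between programs, so the toy matrix and both diagonals below are purely illustrative):

```python
import numpy as np

def smc(R):
    """Squared multiple correlation of each variable with all the others:
    SMC_i = 1 - 1 / (R^{-1})_{ii}."""
    return 1.0 - 1.0 / np.diag(np.linalg.inv(R))

# toy correlation matrix of three items
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

s = smc(R)
D_smc = np.diag(s)            # diagonal matrix built from the SMCs
Psi_smc = np.diag(1.0 - s)    # corresponding uniqueness estimate (communality initialized at SMC)
print(np.round(s, 3))
```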
To really find $\Psi_{opt}$ is a very difficult game, and as far as I know (though that knowledge no longer reaches as "far" as it did, say, 20 years ago when I was more involved and closer to the books) this is still an unsolved problem.
Well, this reflects the ideal, mathematical side of the problem, and my distinction between $\Psi_{std}$ and $\Psi_{opt}$ might actually be small. A more general caveat, however, is that it treats the whole factorization machinery from the viewpoint that I study only my sample or have data for the whole population. In the model of inferential statistics, where I infer from an imperfect sample to the population, my empirical covariance matrix - and thus also the factor matrix - is only an estimate, only a shadow of the "true" covariance/factor matrix. Thus in such a framework/model we should even consider that our "errors" are not ideal and might be spuriously correlated. So in fact in such models we should/would leave the somewhat idealistic assumption of uncorrelated error, and thus of a strictly diagonal form of $\Psi$, behind us.
2,505 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | All the answers so far provided are helpful, but they aren't very statistically precise, so I'll take a shot at that. At the same time, I'm going to give a general answer rather than focusing on this election.
The first thing to keep in mind when we're trying to answer questions about real-world events like Clinton winning the election, as opposed to made-up math problems like taking balls of various colors out of an urn, is that there isn't a unique reasonable way to answer the question, and hence not a unique reasonable answer. If somebody just says "Hillary has a 75% chance of winning" and doesn't go on to describe their model of the election, the data they used to make their estimates, the results of their model validation, their background assumptions, whether they're referring to the popular vote or the electoral vote, etc., then they haven't really told you what they mean, much less provided enough information for you to evaluate whether their prediction is any good. Besides, it isn't beneath some people to do no data analysis at all and simply draw a precise-sounding number out of thin air.
So, what are some procedures a statistician might use to estimate Clinton's chances? Indeed, how might they frame the problem? At a high level, there are various notions of probability itself, two of the most important of which are frequentist and Bayesian.
In a frequentist view, a probability represents the limiting frequency of an event over many independent trials of the same experiment, as in the law of large numbers (strong or weak). Even though any particular election is a unique event, its outcome can be seen as a draw from an infinite population of events both historical and hypothetical, which could comprise all American presidential elections, or all elections worldwide in 2016, or something else. A 75% chance of a Clinton victory means that if $X_1, X_2, …$ is a sequence of outcomes (0 or 1) of independent elections that are entirely equivalent to this election so far as our model is concerned, then the sample mean of $X_1, X_2, …, X_n$ converges in probability to .75 as $n$ goes to infinity.
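In code, that limiting-frequency reading is just the law of large numbers applied to a long run of hypothetical, model-equivalent elections (the 0.75 here is of course the quantity being asserted, not something we could ever observe directly):

```python
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.random(1_000_000) < 0.75   # hypothetical i.i.d. elections won with probability 0.75
print(outcomes.mean())                    # the sample mean settles near 0.75
```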
In a Bayesian view, a probability represents a degree of believability or credibility (which may or may not be actual belief, depending on whether you're a subjectivist Bayesian). A 75% chance of a Clinton victory means that it is 75% credible she will win. Credibilities, in turn, can be chosen freely (based on a model's or analyst's preexisting beliefs) within the constraints of basic laws of probability (like Bayes's theorem, and the fact that the probability of a joint event cannot exceed the marginal probability of either of the component events). One way to summarize these laws is that if you take bets on the outcome of an event, offering odds to gamblers according to your credibilities, then no gambler can construct a Dutch book against you, that is, a set of bets that guarantees you will lose money no matter how the event actually works out.
Whether you take a frequentist or Bayesian view on probability, there are still a lot of decisions to be made about how to analyze the data and estimate the probability. Possibly the most popular method is based on parametric regression models, such as linear regression. In this setting, the analyst chooses a parametric family of distributions (that is, probability measures) that is indexed by a vector of numbers called parameters. Each outcome is an independent random variable drawn from this distribution, transformed according to the covariates, which are known values (such as the unemployment rate) that the analyst wants to use to predict the outcome. The analyst chooses estimates of the parameter values using the data and a criterion of model fit such as least squares or maximum likelihood. Using these estimates, the model can produce a prediction of the outcome (possibly just a single value, possibly an interval or other set of values) for any given value of the covariates. In particular, it can predict the outcome of an election. Besides parametric models, there are nonparametric models (that is, models defined by a family of distributions that is indexed with an infinitely long parameter vector), and also methods of deciding on predicted values that use no model by which the data was generated at all, such as nearest-neighbor classifiers and random forests.
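As a toy version of that recipe (the data, covariates and coefficients below are entirely invented, and logistic rather than linear regression is used since the outcome is binary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# invented "historical elections": two covariates (say, a poll lead and an
# economic indicator) and a binary outcome (1 = the candidate won)
X = rng.normal(size=(60, 2))
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = (rng.random(60) < p).astype(int)

model = LogisticRegression().fit(X, y)

# predicted winning probability for a hypothetical new election's covariate values
print(model.predict_proba([[0.4, -0.2]])[0, 1])
```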
Coming up with predictions is one thing, but how do you know whether they're any good? After all, sufficiently inaccurate predictions are worse than useless. Testing predictions is part of the larger practice of model validation, that is, quantifying how good a given model is for a given purpose. Two popular methods for validating predictions are cross-validation and splitting the data into training and testing subsets before fitting any models. To the degree that the elections included in the data are representative of the 2016 US presidential election, the estimates of predictive accuracy we get from validating predictions will inform us how accurate our prediction will be of the 2016 US presidential election.
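And a matching sketch of the validation step just described (same invented data-generating setup as the previous snippet; the scores reported here are plain classification accuracy):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                                  # invented covariates
y = (rng.random(200) < 1 / (1 + np.exp(-X @ np.array([1.5, -1.0])))).astype(int)

# 5-fold cross-validation ...
print(cross_val_score(LogisticRegression(), X, y, cv=5).mean())

# ... or a single test set held out before any fitting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te))
```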
2,506 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | When statisticians want to predict a binary outcome (Hillary wins vs Hillary does not win), they imagine that the universe is tossing an imaginary coin - Heads, Hillary wins; tails, she loses. To some statisticians, the coin represents their degree of belief in the outcome; to others, the coin represents what might happen if we reran the election under the same circumstances over and over. Philosophically speaking, it's hard to know what we mean when we speak of uncertain future events, even before we drag numbers into it. But we can look at where the number comes from.
At this point in the election, we have a sequence of poll results. These are of the form: 1000 people were polled in, say, Ohio. 40% support Trump, 39% support Hillary, 21% are undecided. There would be similar polls from previous elections for the respective Democratic, Republican (and other trace party) candidates. For previous years, there are also outcomes. You might know that, say, candidates with 40% of the vote in a poll in July, won 8 out of the 10 previous elections. Or the results might say, in 7 out of 10 elections, Democrats took Ohio. You might know how Ohio compares to Texas (perhaps they never choose the same candidate) - you might have information on how the undecided vote breaks down - and you might have interesting models of what happens when a candidate begins to "surge". They might also look at who tends to actually get out and vote, and what happens if it's snowing on The Day.
So when you take previous elections into account, you can say that the election coin has already been tossed a number of times. The same election is not being rerun every 4 years, but we can pretend that it sort of is. From all this information, the pollsters build complex models to predict the outcome for this year.
Hillary's 75% chance of winning is relative to our state of knowledge "today". It's saying that a candidate with the kind of poll results she has "now", in the states that she has them, and given the trends in her polls throughout the campaign, wins the election in 3 election years out of 4. A month from now, her probability of winning will have changed, because the model will be based on the state of polls in August.
The US hasn't had a statistically large number of elections in its history, much less since polling began. Nor can we be sure that polling trends from, say, the 70's, still apply. So it's all a bit dodgy.
The bottom line is that Hillary should begin working on her inauguration speech.
2,507 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | When statisticians say this they are not referring to the margin of victory or the share of the vote. They are running a large number of simulations of the election and counting what percentage of the vote each candidate gains in each one. Many robust presidential models have forecasts for each state. Some states are close, and if the race were run multiple times, either candidate could win them. Because the prediction intervals often overlap a margin of victory of 0, the outcome is not a simple binary response; instead, simulation tells us more precisely what to expect.
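A stripped-down sketch of what such a simulation loop looks like (the state probabilities and electoral-vote counts below are invented, and real models also correlate the state-level errors rather than treating states as independent):

```python
import numpy as np

rng = np.random.default_rng(2016)

# invented per-state win probabilities for candidate A and electoral votes
p_state = np.array([0.90, 0.70, 0.55, 0.45, 0.20])
ev      = np.array([  55,   29,   18,   20,   38])
to_win  = ev.sum() // 2 + 1

n_sim = 100_000
state_wins = rng.random((n_sim, p_state.size)) < p_state   # one row per simulated election
ev_totals = state_wins.astype(int) @ ev
print((ev_totals >= to_win).mean())   # share of simulations in which A reaches a majority
```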
FiveThirtyEight's methodology page may help understand a little more what is under the hood: http://fivethirtyeight.com/features/a-users-guide-to-fivethirtyeights-2016-general-election-forecast/
2,508 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | There's an episode of Freakonomics Radio that is very relevant to this question (in general, not in the specifics of an election). In it, Stephen Dubner interviews the lead of a project from a United States defense agency to determine the best way to forecast global political events.
It [also] helps a lot to know more about politics than most people do. I would say they’re almost necessary conditions for doing well. But they’re not sufficient, because there are plenty of people who are very smart and close-minded. There are plenty of people who are very smart and think that it’s impossible to attach probabilities to unique events.
Then they discuss what not to do
if you ask those types of questions, most people say, “How could you possibly assign probabilities to what seem to be unique historical events?” There just doesn’t seem to be any way to do that. The best we can really do is, use vague verbiage, make vague-verbiage forecasts. We can say things like, “Well, this might happen. This could happen. This may happen.” And to say something could happen isn’t to say a lot.
Then the episode goes into the methodologies that the most successful forecasters used to estimate these probabilities, advocating an informal Bayesian approach
So, knowing nothing about the African dictator or the country even, let’s say I’ve never heard of this dictator, I’ve never heard of this country, and I just look at the base rate and I say, “hmm, looks like about 87 percent.” That would be my initial hunch estimate. Then the question is, “What do I do?” Well, then I start to learn something about the country and the dictator. And if I learn that the dictator in question is 91 years old and has advanced prostate cancer, I should adjust my probability. And if I learn that there are riots in the capital city and there are hints of military coups in the offing, I should again adjust my probability. But starting with the base-rate probability is a good way to at least ensure that you’re going to be in the plausibility ballpark initially.
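The recipe in that quote (start from the base rate, then adjust for case-specific evidence) is essentially Bayes' rule in odds form; a toy version with invented likelihood ratios:

```python
def update(prior_prob, likelihood_ratios):
    """Multiply the prior odds by the likelihood ratio of each piece of evidence
    (treated, naively, as independent), then convert back to a probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# base rate ~87% that such a regime survives the period (the quoted figure);
# invented likelihood ratios for "aged and seriously ill" and "riots / hints of a coup"
print(round(update(0.87, [0.5, 0.6]), 2))   # the estimate drops well below the base rate
```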
The episode is called How to Be Less Terrible at Predicting the Future, and is a very fun listen. I encourage you to check it out if you're interested in this sort of thing!
2,509 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | The 2016 election is indeed a one time event. But so is the flip of a coin or the toss of a die.
When someone claims they know a candidate has a 75% chance of winning they are not predicting the outcome. They are claiming they know the shape of the die.
The outcome of the election can't invalidate this. But if the model they use to arrive at 75% is tested against many elections it could be shown to have limited predictive value. Or it may be borne out as valuable.
Of course, once a valuable predictor is known to the candidates they can change their behavior and the model can be made irrelevant. Or it can be blown all out of proportion. Just look at what happens in Iowa.
2,510 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | When someone says that "Hillary has a 75% chance of winning", they mean that if you offered them a bet where one person gets 25 dollars if Hillary wins and the other person gets 75 dollars if Hillary does not win, they would consider that a fair bet and have no particular reason to prefer either side.
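That is just the zero-expected-value condition for the stated stakes; a quick check:

```python
p = 0.75                       # claimed probability that Hillary wins
# one side collects 25 dollars if she wins, the other collects 75 dollars if she does not
ev_side_a = p * 25 - (1 - p) * 75
ev_side_b = (1 - p) * 75 - p * 25
print(ev_side_a, ev_side_b)    # both 0.0: at exactly 75%, neither side of the bet is favored
```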
These percentages typically come from prediction markets. These summarize all the information available and typically outperform analytical methods of predicting most events.
Prediction markets offer people the opportunity to wager on whether or not a particular event will occur. The payoffs are set by negotiation between the people on both sides of the proposition. Generally, people who have special knowledge about a proposition will try to leverage that knowledge to make money, which has the side effect of leaking that information.
For example, suppose there's a prediction market on whether a particular celebrity will live until the end of this year. The public knows the celebrity's age and anyone can look up the basic probability that the celebrity will die by the end of the year. If that was all that was known, you would expect people to be willing to bet on one side or the other of this proposition at roughly that probability.
Now, suppose someone knew that celebrity was in poor health but was concealing it. Or even say lots of people knew that that celebrity's family had a history of heart disease that would reduce their odds of surviving. The people with that information will be willing to take one side of that proposition, causing the rate to adjust just as buyers push a stock price up and sellers push it down.
In other words, when the odds are too low, people looking to profit push them up. And when they are too high, people looking to profit push them down. The price of the bet ultimately reflects the collective wisdom of everyone on the odds of the proposition occurring just as all prices reflect collective wisdom on the costs and values of things.
2,511 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | The key question is how you assign a probability to a unique event. The answer is that you develop a model by which it is no longer unique. I think an easier example is: what is the probability of the president dying in office? You may view the president as a person of a certain age, as a person of a certain age and sex, etc. Each model gives you a different prediction. A priori there is no correct model; it is up to the statistician to select which model is most appropriate.
2,512 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | Given the polls show a very tight race, the 75% may or may not be accurate.
You are asking what it means, not how they calculated it. The implication is that (if we ignore anyone else except Clinton and her one major opponent) you would need to bet \$3 to get a \$4 return if she wins. Alternatively, a \$1 bet on the other runner would return \$4 if he wins.
My answer makes a small distinction between the actual chance for either candidate to win and what people (gamblers, or odds makers) are expecting. I suspect that when you see numbers like this, e.g. 75%, you are seeing the odds makers' numbers; when you see 49 to 48%, you are seeing poll results.
2,513 | Probability of a single real-life future event: What does it mean when they say that "Hillary has a 75% chance of winning"? | If they're doing it right, something happens approximately three-fourths of those times when they say it had a 75% chance of happening. (or more generally, the same idea adapted over all percentage forecasts)
It is possible to ascribe more meaning than that depending on our philosophical opinions and how much we believe the models, but this pragmatic point of view is something of a lowest common denominator — at the very least, statistical methods try (although possibly as a side effect rather than directly) to make forecasts obeying this pragmatic point of view.
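One pragmatic way to check that property against a track record of forecasts is a calibration table; a toy sketch with simulated, perfectly calibrated forecasts (a real evaluation would of course use actual past forecasts and outcomes):

```python
import numpy as np

rng = np.random.default_rng(7)
forecasts = rng.uniform(0.05, 0.95, size=20_000)           # stated probabilities
happened = (rng.random(20_000) < forecasts).astype(int)    # simulated outcomes

# within each forecast bin, how often did the events actually happen?
for lo in np.arange(0.1, 0.9, 0.2):
    in_bin = (forecasts >= lo) & (forecasts < lo + 0.2)
    print(f"forecast {lo:.1f}-{lo + 0.2:.1f}: happened {happened[in_bin].mean():.2f} of the time")
```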
2,514 | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)? | Well, I think it is really difficult to present a visual explanation of Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression. The latter two are often explained and compared by means of a 2D or 3D data scatterplots, but I doubt if that is possible with CCA. Below I've drawn pictures which might explain the essence and the differences in the three procedures, but even with these pictures - which are vector representations in the "subject space" - there are problems with capturing CCA adequately. (For algebra/algorithm of canonical correlation analysis look in here.)
Drawing individuals as points in a space where the axes are variables, a usual scatterplot, is a variable space. If you draw the opposite way - variables as points and individuals as axes - that will be a subject space. Drawing the many axes is actually needless because the space has the number of non-redundant dimensions equal to the number of non-collinear variables. Variable points are connected with the origin and form vectors, arrows, spanning the subject space; so here we are (see also). In a subject space, if variables have been centered, the cosine of the angle between their vectors is Pearson correlation between them, and the vectors' lengths squared are their variances. On the pictures below the variables displayed are centered (no need for a constant arises).
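A quick numerical confirmation of that geometry (made-up data; the only requirement is that the variables are centered):

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=300)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=300)

v1, v2 = x1 - x1.mean(), x2 - x2.mean()         # the two variable vectors in subject space

cos_angle = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(np.isclose(cos_angle, np.corrcoef(x1, x2)[0, 1]))        # cosine = Pearson correlation
print(np.isclose(v1 @ v1, (len(x1) - 1) * x1.var(ddof=1)))     # squared length = (n-1) * variance
```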
Principal Components
Variables $X_1$ and $X_2$ positively correlate: they have an acute angle between them. Principal components $P_1$ and $P_2$ lie in the same space "plane X" spanned by the two variables. The components are variables too, only mutually orthogonal (uncorrelated). The direction of $P_1$ is such as to maximize the sum of the two squared loadings of this component; and $P_2$, the remaining component, goes orthogonally to $P_1$ in plane X. The squared lengths of all the four vectors are their variances (the variance of a component is the aforementioned sum of its squared loadings). Component loadings are the coordinates of variables onto the components - $a$'s shown on the left pic. Each variable is the error-free linear combination of the two components, with the corresponding loadings being the regression coefficients. And vice versa, each component is the error-free linear combination of the two variables; the regression coefficients in this combination are given by the skew coordinates of the components onto the variables - $b$'s shown on the right pic. The actual regression coefficient magnitude will be $b$ divided by the product of lengths (standard deviations) of the predicted component and the predictor variable, e.g. $b_{12}/(|P_1|*|X_2|)$. [Footnote: The components' values appearing in the two linear combinations mentioned above are standardized values, st. dev. = 1. This is because the information about their variances is captured by the loadings. To speak in terms of unstandardized component values, $a$'s on the pic above should be eigenvectors' values, the rest of the reasoning being the same.]
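In numbers, for a toy two-variable correlation matrix (the 0.6 and the component signs are arbitrary):

```python
import numpy as np

R = np.array([[1.0, 0.6],
              [0.6, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
A = eigvecs[:, order] * np.sqrt(eigvals[order])   # loadings a: variables' coordinates on P1, P2

print(np.round(A, 3))
print(np.allclose(A @ A.T, R))            # the two components restore the correlations exactly
print(np.round((A ** 2).sum(axis=1), 3))  # row sums of squared loadings = variables' variances (1, 1)
print(np.round((A ** 2).sum(axis=0), 3))  # column sums = components' variances (eigenvalues 1.6, 0.4)
```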
Multiple Regression
Whereas in PCA everything lies in plane X, in multiple regression there appears a dependent variable $Y$ which usually doesn't belong to plane X, the space of the predictors $X_1$, $X_2$. But $Y$ is perpendicularly projected onto plane X, and the projection $Y'$, the $Y$'s shade, is the prediction by or linear combination of the two $X$'s. On the picture, the squared length of $e$ is the error variance. The cosine between $Y$ and $Y'$ is the multiple correlation coefficient. Like it was with PCA, the regression coefficients are given by the skew coordinates of the prediction ($Y'$) onto the variables - $b$'s. The actual regression coefficient magnitude will be $b$ divided by the length (standard deviation) of the predictor variable, e.g. $b_{2}/|X_2|$.
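The same picture in numbers (made-up data; everything is centered, so no constant term is needed):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 2))                    # predictors X1, X2
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

Xc = X - X.mean(axis=0)
yc = y - y.mean()

b, *_ = np.linalg.lstsq(Xc, yc, rcond=None)    # least squares = orthogonal projection onto plane X
y_hat = Xc @ b                                 # Y', the shadow of Y in plane X
e = yc - y_hat                                 # error vector

print(np.allclose(Xc.T @ e, 0, atol=1e-8))     # the error is perpendicular to the predictors
cosine = yc @ y_hat / (np.linalg.norm(yc) * np.linalg.norm(y_hat))
print(round(cosine, 3))                        # cosine between Y and Y' = multiple correlation
```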
Canonical Correlation
In PCA, a set of variables predict themselves: they model principal components which in turn model back the variables, you don't leave the space of the predictors and (if you use all the components) the prediction is error-free. In multiple regression, a set of variables predict one extraneous variable and so there is some prediction error. In CCA, the situation is similar to that in regression, but (1) the extraneous variables are multiple, forming a set of their own; (2) the two sets predict each other simultaneously (hence correlation rather than regression); (3) what they predict in each other is rather an extract, a latent variable, than the observed predictand of a regression (see also).
Let's involve the second set of variables $Y_1$ and $Y_2$ to correlate canonically with our $X$'s set. We have spaces - here, planes - X and Y. It should be noted that, in order for the situation to be nontrivial - as it was above with regression, where $Y$ stands outside plane X - planes X and Y must intersect only in one point, the origin. Unfortunately it is impossible to draw on paper because 4D presentation is necessary. Anyway, the grey arrow indicates that the two origins are one point and the only one shared by the two planes. If that is accepted, the rest of the picture resembles the regression case. $V_x$ and $V_y$ are the pair of canonical variates. Each canonical variate is the linear combination of the respective variables, like $Y'$ was. $Y'$ was the orthogonal projection of $Y$ onto plane X. Here $V_x$ is a projection of $V_y$ on plane X and simultaneously $V_y$ is a projection of $V_x$ on plane Y, but they are not orthogonal projections. Instead, they are found (extracted) so as to minimize the angle $\phi$ between them. The cosine of that angle is the canonical correlation. Since projections need not be orthogonal, lengths (hence variances) of the canonical variates are not automatically determined by the fitting algorithm and are subject to conventions/constraints which may differ in different implementations. The number of pairs of canonical variates (and hence the number of canonical correlations) is min(number of $X$s, number of $Y$s). And here comes the time when CCA resembles PCA. In PCA, you skim mutually orthogonal principal components (as if) recursively until all the multivariate variability is exhausted. Similarly, in CCA mutually orthogonal pairs of maximally correlated variates are extracted until all the multivariate variability that can be predicted in the lesser space (lesser set) is used up. In our example with $X_1$ $X_2$ vs $Y_1$ $Y_2$ there remains the second and weaker correlated canonical pair $V_{x(2)}$ (orthogonal to $V_x$) and $V_{y(2)}$ (orthogonal to $V_y$).
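For completeness, the canonical correlations themselves can be computed in a few lines via the principal-angles route (orthonormalize each centered set, then take the singular values of their cross-product). This is only one of several equivalent algorithms, and the data below are invented; the CCA class in sklearn.cross_decomposition can be used to obtain the variates as well.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
z = rng.normal(size=n)                                   # a shared latent signal
X = np.c_[z + rng.normal(size=n), rng.normal(size=n)]    # set X: X1, X2
Y = np.c_[z + rng.normal(size=n), rng.normal(size=n)]    # set Y: Y1, Y2

def orthonormal_basis(M):
    """Orthonormal columns spanning the centered column space of M."""
    U, _, _ = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    return U

# singular values of Qx' Qy = cosines of the principal angles between plane X
# and plane Y = the canonical correlations, largest first
canon_corr = np.linalg.svd(orthonormal_basis(X).T @ orthonormal_basis(Y), compute_uv=False)
print(np.round(canon_corr, 3))
```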
For the difference between CCA and PCA+regression see also Doing CCA vs. building a dependent variable with PCA and then doing regression.
What is the benefit of canonical correlation over individual Pearson correlations of pairs of variables from the two sets? (my answer's in comments). | How to visualize what canonical correlation analysis does (in comparison to what principal component | Well, I think it is really difficult to present a visual explanation of Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression. The latter two are ofte | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)?
Well, I think it is really difficult to present a visual explanation of Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression. The latter two are often explained and compared by means of a 2D or 3D data scatterplots, but I doubt if that is possible with CCA. Below I've drawn pictures which might explain the essence and the differences in the three procedures, but even with these pictures - which are vector representations in the "subject space" - there are problems with capturing CCA adequately. (For algebra/algorithm of canonical correlation analysis look in here.)
Drawing individuals as points in a space where the axes are variables, a usual scatterplot, is a variable space. If you draw the opposite way - variables as points and individuals as axes - that will be a subject space. Drawing the many axes is actually needless because the space has the number of non-redundant dimensions equal to the number of non-collinear variables. Variable points are connected with the origin and form vectors, arrows, spanning the subject space; so here we are (see also). In a subject space, if variables have been centered, the cosine of the angle between their vectors is Pearson correlation between them, and the vectors' lengths squared are their variances. On the pictures below the variables displayed are centered (no need for a constant arises).
Principal Components
Variables $X_1$ and $X_2$ positively correlate: they have acute angle between them. Principal components $P_1$ and $P_2$ lie in the same space "plane X" spanned by the two variables. The components are variables too, only mutually orthogonal (uncorrelated). The direction of $P_1$ is such as to maximize the sum of the two squared loadings of this component; and $P_2$, the remaining component, goes orthogonally to $P_1$ in plane X. The squared lengths of all the four vectors are their variances (the variance of a component is the aforementioned sum of its squared loadings). Component loadings are the coordinates of variables onto the components - $a$'s shown on the left pic. Each variable is the error-free linear combination of the two components, with the corresponding loadings being the regression coefficients. And vice versa, each component is the error-free linear combination of the two variables; the regression coefficients in this combination are given by the skew coordinates of the components onto the variables - $b$'s shown on the right pic. The actual regression coefficient magnitude will be $b$ divided by the product of lengths (standard deviations) of the predicted component and the predictor variable, e.g. $b_{12}/(|P_1|*|X_2|)$. [Footnote: The components' values appearing in the mentioned above two linear combinations are standardized values, st. dev. = 1. This because the information about their variances is captured by the loadings. To speak in terms of unstandardized component values, $a$'s on the pic above should be eigenvectors' values, the rest of the reasoning being the same.]
Multiple Regression
Whereas in PCA everything lies in plane X, in multiple regression there appears a dependent variable $Y$ which usually doesn't belong to plane X, the space of the predictors $X_1$, $X_2$. But $Y$ is perpendicularly projected onto plane X, and the projection $Y'$, the $Y$'s shade, is the prediction by or linear combination of the two $X$'s. On the picture, the squared length of $e$ is the error variance. The cosine between $Y$ and $Y'$ is the multiple correlation coefficient. Like it was with PCA, the regression coefficients are given by the skew coordinates of the prediction ($Y'$) onto the variables - $b$'s. The actual regression coefficient magnitude will be $b$ divided by the length (standard deviation) of the predictor variable, e.g. $b_{2}/|X_2|$.
Canonical Correlation
In PCA, a set of variables predict themselves: they model principal components which in turn model back the variables, you don't leave the space of the predictors and (if you use all the components) the prediction is error-free. In multiple regression, a set of variables predict one extraneous variable and so there is some prediction error. In CCA, the situation is similar to that in regression, but (1) the extraneous variables are multiple, forming a set of their own; (2) the two sets predict each other simultaneously (hence correlation rather than regression); (3) what they predict in each other is rather an extract, a latent variable, than the observed predictand of a regression (see also).
Let's involve the second set of variables $Y_1$ and $Y_2$ to correlate canonically with our $X$'s set. We have spaces - here, planes - X and Y. It should be notified that in order the situation to be nontrivial - like that was above with regression where $Y$ stands out of plane X - planes X and Y must intersect only in one point, the origin. Unfortunately it is impossible to draw on paper because 4D presentation is necessary. Anyway, the grey arrow indicates that the two origins are one point and the only one shared by the two planes. If that is taken, the rest of the picture resembles what was with regression. $V_x$ and $V_y$ are the pair of canonical variates. Each canonical variate is the linear combination of the respective variables, like $Y'$ was. $Y'$ was the orthogonal projection of $Y$ onto plane X. Here $V_x$ is a projection of $V_y$ on plane X and simultaneously $V_y$ is a projection of $V_x$ on plane Y, but they are not orthogonal projections. Instead, they are found (extracted) so as to minimize the angle $\phi$ between them. Cosine of that angle is the canonical correlation. Since projections need not be orthogonal, lengths (hence variances) of the canonical variates are not automatically determined by the fitting algorithm and are subject to conventions/constraints which may differ in different implementations. The number of pairs of canonical variates (and hence the number of canonical correlations) is min(number of $X$s, number of $Y$s). And here comes the time when CCA resembles PCA. In PCA, you skim mutually orthogonal principal components (as if) recursively until all the multivariate variability is exhausted. Similarly, in CCA mutually orthogonal pairs of maximally correlated variates are extracted until all the multivariate variability that can be predicted in the lesser space (lesser set) is up. In our example with $X_1$ $X_2$ vs $Y_1$ $Y_2$ there remains the second and weaker correlated canonical pair $V_{x(2)}$ (orthogonal to $V_x$) and $V_{y(2)}$ (orthogonal to $V_y$).
For the difference between CCA and PCA+regression see also Doing CCA vs. building a dependent variable with PCA and then doing regression.
What is the benefit of canonical correlation over individual Pearson correlations of pairs of variables from the two sets? (my answer's in comments). | How to visualize what canonical correlation analysis does (in comparison to what principal component
Well, I think it is really difficult to present a visual explanation of Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression. The latter two are ofte |
2,515 | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)? | For me it was very helpful to read in the book of S. Mulaik "The Foundations of Factoranalysis" (1972) that there is a method purely of rotations of a matrix of factor loadings to arrive at a canonical correlation, so I could locate it within the ensemble of concepts which I had already understood from principal components analysis and factor analysis.
Perhaps you're interested in this example (which I've rebuilt from a first implementation/discussion of about 1998 just a couple of days ago to crosscheck and re-verify the method against the computation by SPSS). See here . I'm using my small matrix/pca-tools Inside-[R] and Matmate for this, but I think it can be reconstructed in R without too much effort. | How to visualize what canonical correlation analysis does (in comparison to what principal component | For me it was much helpful to read in the book of S. Mulaik "The Foundations of Factoranalysis" (1972), that there is a method purely of rotations of a matrix of factor loadings to arrive at a canonic | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)?
For me it was much helpful to read in the book of S. Mulaik "The Foundations of Factoranalysis" (1972), that there is a method purely of rotations of a matrix of factor loadings to arrive at a canonical correlation, so I could locate it in that ensemble of concepts which I had already understood so far from principal components analysis and factor analysis.
Perhaps you're interested in this example (which I've rebuilt from a first implementation/discussion of about 1998 just a couple of days ago to crosscheck and re-verify the method against the computation by SPSS). See here . I'm using my small matrix/pca-tools Inside-[R] and Matmate for this, but I think it can be reconstructed in R without too much effort. | How to visualize what canonical correlation analysis does (in comparison to what principal component
For me it was much helpful to read in the book of S. Mulaik "The Foundations of Factoranalysis" (1972), that there is a method purely of rotations of a matrix of factor loadings to arrive at a canonic |
2,516 | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)? | This answer doesn't provide a visual aid for understanding CCA; however, a good geometric interpretation of CCA is presented in Chapter 12 of Anderson-1958 [1]. The gist of it is as follows:
Consider $N$ data points $x_1, x_2, ..., x_N$, all of dimension $p$. Let $X$ be the $p\times N$ matrix containing $x_i$. One way of looking at the data is to interpret the rows of $X$ as a collection of $p$ vectors (one per variable) in the $(N-1)$-dimensional subspace$^*$. In that case, if we separate the first $p_1$ of these vectors from the remaining $p_2$ vectors, CCA tries to find a linear combination of the first $p_1$ vectors that is parallel (as parallel as possible) to a linear combination of the remaining $p_2$ vectors.
I find this perspective interesting for these reasons:
It provides an interesting geometric interpretation about the entries of CCA canonical variables.
The correlation coefficient is linked to the angle between the two CCA projections.
The ratios $\frac{p_1}{N}$ and $\frac{p_2}{N}$ can be directly related to the ability of CCA to find maximally correlated data points. Therefore, the relationship between overfitting and CCA solutions is clear. $\rightarrow$ Hint: The data points are able to span the $(N-1)$-dimensional space when $N$ is too small (the sample-poor case).
Here I've added an example with some code where you can change $p_1$ and $p_2$ and see that when they are too high, the CCA projections fall on top of each other.
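The linked example is not reproduced in this dump; below is a hedged sketch in the same spirit (plain NumPy, invented random data), showing how the first canonical correlation between two independent blocks creeps toward 1 as $p_1 + p_2$ approaches $N$:
import numpy as np

def first_canonical_corr(X, Y):
    # First canonical correlation via the whitened cross-covariance matrix.
    X, Y = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxx, Syy = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
    Sxy = np.cov(np.hstack([X, Y]), rowvar=False)[:X.shape[1], X.shape[1]:]
    def inv_sqrt(S):
        val, vec = np.linalg.eigh(S)
        return vec @ np.diag(val ** -0.5) @ vec.T
    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)[0]

rng = np.random.default_rng(3)
N = 50
for p1, p2 in [(2, 2), (10, 10), (20, 20)]:
    # X and Y are independent by construction, so any correlation found is spurious.
    rho = first_canonical_corr(rng.standard_normal((N, p1)), rng.standard_normal((N, p2)))
    print(p1, p2, round(rho, 3))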
* Note that the sub-space is $(N-1)$-dimensional and not $N$-dimensional, because of the centering constraint (i.e., $\text{mean}(x_i) = 0$).
[1] Anderson, T. W. An introduction to multivariate statistical analysis. Vol. 2. New York: Wiley, 1958. | How to visualize what canonical correlation analysis does (in comparison to what principal component | This answer doesn't provide a visual aid for understanding CCA, however a good geometric interpretation of CCA is presented in Chapter 12 of Anderson-1958 [1]. The gist of it is as follows:
Consider $ | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)?
This answer doesn't provide a visual aid for understanding CCA, however a good geometric interpretation of CCA is presented in Chapter 12 of Anderson-1958 [1]. The gist of it is as follows:
Consider $N$ data points $x_1, x_2, ..., x_N$, all of dimension $p$. Let $X$ be the $p\times N$ matrix containing $x_i$. One way of looking at the data is to interpret $X$ as a collection of $p$ data points in the $(N-1)$-dimensional subspace$^*$. In that case, if we separate the first $p_1$ data points from the remaining $p_2$ data points, CCA tries to find a linear combination of $x_1,...,x_{p_1}$ vectors that is parallel (as parallel as possible) with the linear combination of the remaining $p_2$ vectors $x_{p_1+1}, ..., x_p$.
I find this perspective interesting for these reasons:
It provides an interesting geometric interpretation about the entries of CCA canonical variables.
The correlation coefficients is linked to the angle between the two CCA projections.
The ratios of $\frac{p_1}{N}$ and $\frac{p_2}{N}$ can be directly related to the ability of CCA to find maximally correlated data points. Therefore, the relationship between overfitting and CCA solutions is clear. $\rightarrow$ Hint: The data points are able to span the $(N-1)$-dimensional space, when $N$ is too small (sample-poor case).
Here I've added an example with some code where you can change $p_1$ and $p_2$ and see when they are too high, CCA projections fall on top of each other.
* Note that the sub-space is $(N-1)$-dimensional and not $N$-dimensional, because of the centering constraint (i.e., $\text{mean}(x_i) = 0$).
[1] Anderson, T. W. An introduction to multivariate statistical analysis. Vol. 2. New York: Wiley, 1958. | How to visualize what canonical correlation analysis does (in comparison to what principal component
This answer doesn't provide a visual aid for understanding CCA, however a good geometric interpretation of CCA is presented in Chapter 12 of Anderson-1958 [1]. The gist of it is as follows:
Consider $ |
2,517 | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)? | The best way to teach statistics is with data. Multivariate statistical techniques are often made very complicated with matrices which are not intuitive. I would explain CCA using Excel. Create two samples, add new variates (columns basically) and show the calculation. And as far as the matrix construction of CCA is concerned, best way is to teach with a bivariate case first and then expand it. | How to visualize what canonical correlation analysis does (in comparison to what principal component | The best way to teach statistics is with data. Multivariate statistical techniques are often made very complicated with matrices which are not intuitive. I would explain CCA using Excel. Create two sa | How to visualize what canonical correlation analysis does (in comparison to what principal component analysis does)?
The best way to teach statistics is with data. Multivariate statistical techniques are often made very complicated with matrices which are not intuitive. I would explain CCA using Excel. Create two samples, add new variates (columns basically) and show the calculation. And as far as the matrix construction of CCA is concerned, best way is to teach with a bivariate case first and then expand it. | How to visualize what canonical correlation analysis does (in comparison to what principal component
The best way to teach statistics is with data. Multivariate statistical techniques are often made very complicated with matrices which are not intuitive. I would explain CCA using Excel. Create two sa |
2,518 | What is global max pooling layer and what is its advantage over maxpooling layer? | Global max pooling = an ordinary max pooling layer with pool size equal to the size of the input (minus filter size + 1, to be precise). You can see that MaxPooling1D takes a pool_length argument, whereas GlobalMaxPooling1D does not.
For example, if the input of the max pooling layer is $0,1,2,2,5,1,2$, global max pooling outputs $5$, whereas an ordinary max pooling layer with pool size equal to 3 outputs $2,2,5,5,5$ (assuming stride=1).
This can be seen in the code:
class GlobalMaxPooling1D(_GlobalPooling1D):
"""Global max pooling operation for temporal data.
# Input shape
3D tensor with shape: `(samples, steps, features)`.
# Output shape
2D tensor with shape: `(samples, features)`.
"""
def call(self, x, mask=None):
return K.max(x, axis=1)
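A quick check of the numeric example above, in plain NumPy rather than Keras (just an illustration):
import numpy as np
x = np.array([0, 1, 2, 2, 5, 1, 2])
pool = 3
print([x[i:i + pool].max() for i in range(len(x) - pool + 1)])  # [2, 2, 5, 5, 5] -- pool size 3, stride 1
print(x.max())                                                  # 5 -- global max pooling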
In some domains, such as natural language processing, it is common to use global max pooling. In some other domains, such as computer vision, it is common to use a max pooling that isn't global. | What is global max pooling layer and what is its advantage over maxpooling layer? | Global max pooling = ordinary max pooling layer with pool size equals to the size of the input (minus filter size + 1, to be precise). You can see that MaxPooling1D takes a pool_length argument, whe | What is global max pooling layer and what is its advantage over maxpooling layer?
Global max pooling = ordinary max pooling layer with pool size equals to the size of the input (minus filter size + 1, to be precise). You can see that MaxPooling1D takes a pool_length argument, whereas GlobalMaxPooling1D does not.
For example, if the input of the max pooling layer is $0,1,2,2,5,1,2$, global max pooling outputs $5$, whereas ordinary max pooling layer with pool size equals to 3 outputs $2,2,5,5,5$ (assuming stride=1).
This can be seen in the code:
class GlobalMaxPooling1D(_GlobalPooling1D):
"""Global max pooling operation for temporal data.
# Input shape
3D tensor with shape: `(samples, steps, features)`.
# Output shape
2D tensor with shape: `(samples, features)`.
"""
def call(self, x, mask=None):
return K.max(x, axis=1)
In some domains, such as natural language processing, it is common to use global max pooling. In some other domains, such as computer vision, it is common to use a max pooling that isn't global. | What is global max pooling layer and what is its advantage over maxpooling layer?
Global max pooling = ordinary max pooling layer with pool size equals to the size of the input (minus filter size + 1, to be precise). You can see that MaxPooling1D takes a pool_length argument, whe |
2,519 | What is global max pooling layer and what is its advantage over maxpooling layer? | As described in this paper that proposed global average pooling (GAP):
Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the feature maps of the last convolutional layer are vectorized and fed into fully connected layers followed by a softmax logistic regression layer. This structure bridges the convolutional structure with traditional neural network classifiers. It treats the convolutional layers as feature extractors, and the resulting feature is classified in a traditional way.
However, the fully connected layers are prone to overfitting, thus hampering the generalization ability of the overall network. Dropout is proposed by Hinton et al as a regularizer which randomly sets half of the activations to the fully connected layers to zero during training. It has improved the
generalization ability and largely prevents overfitting.
In this paper, we propose another strategy called global average pooling to replace the traditional fully connected layers in CNN. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the softmax layer. One advantage of global average pooling over the fully connected layers is that it is more native to the convolution structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling, thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input. We can see global average pooling as a structural regularizer that explicitly enforces feature maps to be confidence maps of concepts (categories). This is made possible by the mlpconv layers, as they make a better approximation to the confidence maps than GLMs.
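To make the operation in the quote concrete, here is a minimal NumPy sketch of global average pooling (shapes and values are invented for the illustration):
import numpy as np

# Toy final-layer feature maps: batch of 2, spatial size 7x7, one map per category (10 classes).
feature_maps = np.random.rand(2, 7, 7, 10)

# Global average pooling: average each map over its spatial dimensions, leaving one
# confidence value per category, which is then fed directly into the softmax.
gap = feature_maps.mean(axis=(1, 2))
print(gap.shape)  # (2, 10)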
Edit:
As suggested by @MaxLawnboy, here is another paper on the same topic . | What is global max pooling layer and what is its advantage over maxpooling layer? | As described in this paper that proposed global average pooling (GAP):
Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the featur | What is global max pooling layer and what is its advantage over maxpooling layer?
As described in this paper that proposed global average pooling (GAP):
Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the feature maps of the last convolutional layer are vectorized and fed into fully connected layers followed by a softmax logistic regression layer. This structure bridges the convolutional structure with traditional neural network classifiers. It treats the convolutional layers as feature extractors, and the resulting feature is classified in a traditional way.
However, the fully connected layers are prone to overfitting, thus hampering the generalization ability of the overall network. Dropout is proposed by Hinton et al as a regularizer which randomly sets half of the activations to the fully connected layers to zero during training. It has improved the
generalization ability and largely prevents overfitting.
In this paper, we propose another strategy called global average pooling to replace the traditional fully connected layers in CNN. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the softmax layer. One advantage of global average pooling over the fully connected layers is that it is more native to the convolution structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Futhermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input. We can see global average pooling as a structural regularizer that explicitly enforces feature maps to be confidence maps of concepts (categories). This is made possible by the mlpconv layers, as they makes better approximation to the confidence maps than GLMs.
Edit:
As suggested by @MaxLawnboy, here is another paper on the same topic . | What is global max pooling layer and what is its advantage over maxpooling layer?
As described in this paper that proposed global average pooling (GAP):
Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the featur |
2,520 | Help me understand Support Vector Machines | I think you are trying to start from a bad end. What one should know about SVM to use it is just that this algorithm finds a hyperplane in the hyperspace of attributes that separates two classes best, where best means with the biggest margin between the classes (the knowledge of how it is done is your enemy here, because it blurs the overall picture), as illustrated by a famous picture like this:
Now, there are some problems left.
First of all, what to do with those nasty outliers lying shamelessly in the center of a cloud of points of a different class?
To this end we allow the optimizer to leave certain samples mislabelled, yet punish each such example. To avoid multiobjective optimization, the penalties for mislabelled cases are merged with the margin size with the use of an additional parameter C, which controls the balance between those aims.
Next, sometimes the problem is just not linear and no good hyperplane can be found. Here, we introduce the kernel trick -- we just project the original, nonlinear space into a higher-dimensional one with some nonlinear transformation, of course defined by a bunch of additional parameters, hoping that in the resulting space the problem will be suitable for a plain SVM:
Yet again, with some math we can see that this whole transformation procedure can be elegantly hidden by modifying the objective function, replacing the dot product of objects with a so-called kernel function.
Finally, this all works for 2 classes, and you have 3; what to do with it? Here we create 3 two-class classifiers (sitting -- no sitting, standing -- no standing, walking -- no walking) and in classification combine those by voting.
Ok, so the problems seem solved, but we have to select a kernel (here we consult our intuition and pick RBF) and fit at least a few parameters (C + kernel parameters). And we must have an overfit-safe objective function for it, for instance an error approximation from cross-validation. So we leave the computer working on that, go for a coffee, come back and see that there are some optimal parameters. Great! Now we just run nested cross-validation to get an error approximation, and voila.
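A hedged scikit-learn sketch of that workflow (toy data generated on the spot as a stand-in for the three-class activity data; note that scikit-learn's SVC handles the multi-class case internally with one-vs-one voting rather than the one-vs-rest scheme described above):
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Toy stand-in for the features and the three activity labels (sitting, standing, walking).
X, y = make_classification(n_samples=200, n_features=6, n_informative=3,
                           n_classes=3, random_state=0)

# Inner loop: pick C and the RBF kernel width gamma by cross-validation.
search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}, cv=5)

# Outer loop: nested cross-validation for an (almost) unbiased error approximation.
print(cross_val_score(search, X, y, cv=5).mean())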
This brief workflow is of course too simplified to be fully correct, but it shows why I think you should first try a random forest, which is almost parameter-independent, natively multiclass, provides an unbiased error estimate and performs almost as well as a well-fitted SVM. | Help me understand Support Vector Machines | I think you are trying to start from a bad end. What one should know about SVM to use it is just that this algorithm is finding a hyperplane in hyperspace of attributes that separates two classes best | Help me understand Support Vector Machines
I think you are trying to start from a bad end. What one should know about SVM to use it is just that this algorithm is finding a hyperplane in hyperspace of attributes that separates two classes best, where best means with biggest margin between classes (the knowledge how it is done is your enemy here, because it blurs the overall picture), as illustrated by a famous picture like this:
Now, there are some problems left.
First of all, what to with those nasty outliers laying shamelessly in a center of cloud of points of a different class?
To this end we allow the optimizer to leave certain samples mislabelled, yet punish each of such examples. To avoid multiobjective opimization, penalties for mislabelled cases are merged with margin size with an use of additional parameter C which controls the balance among those aims.
Next, sometimes the problem is just not linear and no good hyperplane can be found. Here, we introduce kernel trick -- we just project the original, nonlinear space to a higher dimensional one with some nonlinear transformation, of course defined by a bunch of additional parameters, hoping that in the resulting space the problem will be suitable for a plain SVM:
Yet again, with some math and we can see that this whole transformation procedure can be elegantly hidden by modifying objective function by replacing dot product of objects with so-called kernel function.
Finally, this all works for 2 classes, and you have 3; what to do with it? Here we create 3 2-class classifiers (sitting -- no sitting, standing -- no standing, walking -- no walking) and in classification combine those with voting.
Ok, so problems seems solved, but we have to select kernel (here we consult with our intuition and pick RBF) and fit at least few parameters (C+kernel). And we must have overfit-safe objective function for it, for instance error approximation from cross-validation. So we leave computer working on that, go for a coffee, come back and see that there are some optimal parameters. Great! Now we just start nested cross-validation to have error approximation and voila.
This brief workflow is of course too simplified to be fully correct, but shows reasons why I think you should first try with random forest, which is almost parameter-independent, natively multiclass, provides unbiased error estimate and perform almost as good as well fitted SVMs. | Help me understand Support Vector Machines
I think you are trying to start from a bad end. What one should know about SVM to use it is just that this algorithm is finding a hyperplane in hyperspace of attributes that separates two classes best |
2,521 | What's so 'moment' about 'moments' of a probability distribution? | According to the paper "First (?) Occurrence of Common Terms in Mathematical Statistics" by H.A. David, the first use of the word 'moment' in this situation was in an 1893 letter to Nature by Karl Pearson entitled "Asymmetrical Frequency Curves".
Neyman's 1938 Biometrika paper "A Historical Note on Karl Pearson's Deduction of the Moments of the Binomial" gives a good synopsis of the letter and Pearson's subsequent work on moments of the binomial distribution and the method of moments. It's a really good read. Hopefully you have access to JSTOR, for I don't have the time now to give a good summary of the paper (though I will this weekend). Though I will mention one piece that may give insight as to why the term 'moment' was used. From Neyman's paper:
It [Pearson's memoir] deals primarily with methods of approximating
continuous frequency curves by means of some processes involving the
calculation of easy formulae. One of these formulae considered was the
"point-binomial" or the "binomial with loaded ordinates". The formula
differs from what to-day we call a binomial, viz. (4), only by a factor
$\alpha$, representing the area under the continuous curve which it is desired
to fit.
This is what eventually led to the 'method of moments.' Neyman goes over Pearson's derivation of the binomial moments in the above paper.
And from Pearson's letter:
We shall now proceed to find the first four moments of the system of
rectangles round GN. If the inertia of each rectangle might be considered
as concentrated along its mid vertical, we should have for the $s^{\text{th}}$ moment
round NG, writing $d = c(1 + nq)$.
This hints at the fact that Pearson used the term 'moment' as an allusion to 'moment of inertia,' a term common in physics.
Here's a scan of most of Pearson's Nature letter:
You can view the entire article on page 615 here. | What's so 'moment' about 'moments' of a probability distribution? | According to the paper "First (?) Occurrence of Common Terms in Mathematical Statistics" by H.A. David, the first use of the word 'moment' in this situation was in a 1893 letter to Nature by Karl Pear | What's so 'moment' about 'moments' of a probability distribution?
According to the paper "First (?) Occurrence of Common Terms in Mathematical Statistics" by H.A. David, the first use of the word 'moment' in this situation was in a 1893 letter to Nature by Karl Pearson entitled "Asymmetrical Frequency Curves".
Neyman's 1938 Biometrika paper "A Historical Note on Karl Pearson's Deduction of the Moments of the Binomial" gives a good synopsis of the letter and Pearson's subsequent work on moments of the binomial distribution and the method of moments. It's a really good read. Hopefully you have access JSTOR for I don't have the time now to give a good summary of the paper (though I will this weekend). Though I will mention one piece that may give insight as to why the term 'moment' was used. From Neyman's paper:
It [Pearson's memoir] deals primarily with methods of approximating
continuous frequency curves by means of some processes involving the
calculation of easy formulae. One of these formulae considered was the
"point-binomial" or the "binomial with loaded ordinates". The formula
differs from what to-day we call a binomial, viz. (4), only by a factor
$\alpha$, representing the area under the continuous curve which it is desired
to fit.
This is what eventually led to the 'method of moments.' Neyman goes over the Pearson's derivation of the binomial moments in the above paper.
And from Pearson's letter:
We shall now proceed to find the first four moments of the system of
rectangles round GN. If the inertia of each rectangle might be considered
as concentrated along its mid vertical, we should have for the $s^{\text{th}}$ moment
round NG, writing $d = c(1 + nq)$.
This hints at the fact that Pearson used the term 'moment' as an allusion to 'moment of inertia,' a term common in physics.
Here's a scan of most of Pearson's Nature letter:
You can view the entire article on page 615 here. | What's so 'moment' about 'moments' of a probability distribution?
According to the paper "First (?) Occurrence of Common Terms in Mathematical Statistics" by H.A. David, the first use of the word 'moment' in this situation was in a 1893 letter to Nature by Karl Pear |
2,522 | What's so 'moment' about 'moments' of a probability distribution? | Everybody has their moment on moments. I had mine in Cumulant and moment names beyond variance, skewness and kurtosis, and spent some time reading this gorgeous thread.
Oddly, I did not find the "moment mention" in H. A. David's paper. So I went to
Karl Pearson: The Scientific Life in a Statistical Age, a book by T. M. Porter, and Karl Pearson and the Origins of Modern Statistics: An Elastician becomes a Statistician. He, for instance, edited A History of the Theory of Elasticity and of the Strength of Materials from Galilei to the Present Time.
His background was very wide, and he was notably a professor of engineering and elastician, who was involved in determining the bending moments of a bridge span and calculating stresses on masonry dams.
In elasticity, one only observes what is going on (rupture) in a limited manner. He seemingly was interested in (from Porter's book):
graphical calculation or, in its most dignified and mathematical form,
graphical statics.
Later :
From the beginning of his statistical career, and even before that, he
fit curves using the "method of moments." In mechanics, this meant
matching a complicated body to a simple or abstract one that had the
same center of mass and "swing radius," respectively the first and
second moments. These quantities corresponded in statistics to the
mean and the spread or dispersion of measurements around the mean.
And since:
Pearson dealt in discrete measurement intervals, this was a sum rather
than an integral
Inertial moments can stand for a summary of a moving body: computations can be carried out as if the body was reduced to a single point.
Pearson set up these five equalities as a system of equations, which
combined into one of the ninth degree. A numerical solution was only
possible by successive approximations. There could have been as many
as nine real solutions, though in the present instance there were only
two. He graphed both results alongside the original, and was generally
pleased with the appearance of the result. He did not, however, rely
on visual inspection to decide between them, but calculated the sixth
moment to decide the best match
Let us go back to physics. A moment is a physical quantity that takes into account the local arrangement of a physical property, generally with respect to a certain ordinal point or axis (classically in space or time). It summarizes physical quantities as measured at some distance from a reference. If the quantity is not concentrated at a single point, the moment is "averaged" over the whole space, by means of integrals or sums.
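In the statistical transcription of that idea, the $k$-th (central) moment of a sample is just such an average over the data; a minimal NumPy sketch with an invented sample:
import numpy as np

x = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=10_000)  # toy sample

mean = x.mean()                                   # first moment: the "center of mass"
central = lambda k: np.mean((x - mean) ** k)      # k-th moment about the mean
print(mean, central(2), central(3))               # location, spread, asymmetry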
Apparently, the concept of moments can be traced back to the discovery of the operating principle of the lever "discovered" by Archimedes. One of the first known occurrences is the Latin word "momentorum" with the presently accepted sense (moment about a center of rotation). In 1565, Federico Commandino translated Archimedes' work (Liber de Centro Gravitatis Solidorum) as:
The center of gravity of each solid figure is that point within it,
about which on all sides parts of equal moment stand.
or
Centrum gravitatis uniuscuiusque solidae figurae est punctum illud
intra positum, circa quod undique partes aequalium momentorum
So apparently, the analogy with physics is quite strong: from a complicated discrete physical shape, find quantities that approximate it sufficiently, a form of compression or parsimony. | What's so 'moment' about 'moments' of a probability distribution? | Everybody has its moment on moments. I had mine in Cumulant and moment names beyond variance, skewness and kurtosis, and spent some time reading this gorgious thread.
Oddly, I did not find the "mom | What's so 'moment' about 'moments' of a probability distribution?
Everybody has its moment on moments. I had mine in Cumulant and moment names beyond variance, skewness and kurtosis, and spent some time reading this gorgious thread.
Oddly, I did not find the "moment mention" in " H. A. David's paper. So I went to
Karl Pearson: The Scientific Life in a Statistical Age, a book by T. M. Porter. and Karl Pearson and the Origins of Modern Statistics: An Elastician becomes a Statistician. He for instance edited A History of the Theory of Elasticity and of the Strength of Materials from Galilei to the Present Time.
His background was very wide, and he was notably a professor of engineering and elastician, who was involved in determining the bending moments of a bridge span and calculating stresses on masonry dams.
In elasticity, one only observe what is is going on (rupture) in a limited manner. He seemingly was interested in (from Porter's book):
graphical calculation or, in its most dignified and mathematical form,
graphical statics.
Later :
From the beginning of his statistical career, and even before that, he
fit curves using the "method of moments." In mechanics, this meant
matching a complicated body to a simple or abstract one that had the
same center of mass and "swing radius," respectively the first and
second moments. These quantities corresponded in statistics to the
mean and the spread or dispersion of measurements around the mean.
And since:
Pearson dealt in discrete measurement intervals, this was a sum rather
than an integral
Inertial moments can stand for a summary of a moving body: computations can be carried out as if the body was reduced to a single point.
Pearson set up these five equalities as a system of equations, which
combined into one of the ninth degree. A numerical solution was only
possible by successive approximations. There could have been as many
as nine real solutions, though in the present instance there were only
two. He graphed both results alongside the original, and was generally
pleased with the appearance of the result. He did not, however, rely
on visual inspection to decide between them, but calculated the sixth
moment to decide the best match
Let us go back to physics. A moment is a physical quantity that takes into account the local arrangement of a physical property, generally with respect to a certain ordinal point or axis (classically in space or time). It summarizes physical quantities as measured at some distance from a reference. If the quantity is not concentrated at a single point, the moment is "averaged" over the whole space, by means of integrals or sums.
Apparently, the concept of moments can be traced back to the discovery of the operating principle of the lever "discovered" by Archimedes. One of the first known occurrence is the Latin word "momentorum" with the present accepted sense (moment about a center of rotation). In 1565, Federico Commandino translated Archimedes' work (Liber de Centro Gravitatis Solidorum) as:
The center of gravity of each solid figure is that point within it,
about which on all sides parts of equal moment stand.
or
Centrum gravitatis uniuscuiusque solidae figurae est punctum illud
intra positum, circa quod undique partes aequalium momentorum
So apparently, the analogy with physics is quite strong: from a complicated discrete physical shape, find quantities that approximate it sufficiently, a form of compression or parsimony. | What's so 'moment' about 'moments' of a probability distribution?
Everybody has its moment on moments. I had mine in Cumulant and moment names beyond variance, skewness and kurtosis, and spent some time reading this gorgious thread.
Oddly, I did not find the "mom |
2,523 | What's so 'moment' about 'moments' of a probability distribution? | Question: So what does the word "moment" mean in this case? Why this choice of word? It doesn't sound intuitive to me (or I never heard it that way back in college :) Come to think of it I am equally curious with its usage in "moment of inertia" ;) but let's not focus on that for now.
Answer: Actually, in a historical sense, moment of inertia is probably where the sense of the word moments comes from. Indeed, one can (as below) show how the moment of inertia relates to variance. This also yields a physical interpretation of higher moments.
In physics, a moment (sic, paraphrased:) is the product of a distance and a force, and in this way it accounts for how the force is located or arranged. Moments are usually defined with respect to a fixed reference point; they deal with forces as measured at some distance from that reference point. For example, the moment of force acting on an object, often called torque, is the product of the force and the distance from a reference point, as in the example below.
Less confusing than the names usually given to higher moments (e.g., hyperflatness) would be moments from circular motion, e.g., moments of inertia for the circular motion of rigid bodies, which is a simple conversion. Angular acceleration is the derivative of angular velocity, which is the derivative of angle with respect to time, i.e., $ \dfrac{d\omega}{dt}=\alpha,\,\dfrac{d\theta}{dt}=\omega$. Consider that the second moment is analogous to torque applied to a circular motion, or if you will an acceleration/deceleration (also a second derivative) of that circular (i.e., angular, $\theta$) motion. Similarly, the third moment would be a rate of change of torque, and so on and so forth for yet higher moments, making rates of change of rates of change of rates of change, i.e., sequential derivatives of circular motion. This is perhaps easier to visualize with actual examples.
There are limits to physical plausibility, e.g., where an object begins and ends, i.e., its support, which renders the comparison more or less realistic. Let us take the example of a beta distribution, which has (finite) support on [0,1] and show the correspondence for that. The beta distribution density function (pdf) is
$$\beta(x;\alpha,\beta)=\begin{cases}\dfrac{x^{\alpha -1} (1-x)^{\beta -1}}{B(\alpha ,\beta )} & 0<x<1 \\ 0 & \text{otherwise} \end{cases}\,,$$
where $B(\alpha,\beta)=\dfrac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$, and $\Gamma(.)$ is the gamma function, $\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\,dx$.
The mean is then the first moment of rotation around the $z$-axis for the beta function plotted as a rigidly rotating thin sheet of uniform area density with the minimum $x$-value affixed to the (0,0,0) origin, with its base in the $x,y$ plane.
$$\mu=\int_0^1r\,\beta(r;\alpha,\beta)\,dr=\frac{\alpha}{\alpha+\beta}\,,$$
as illustrated for $\beta(r;2,2)$, i.e., $\mu=\dfrac{1}{2}$, below
Note that there is nothing preventing us from moving the beta distribution thin sheet to another location and re-scaling it, e.g., from $0\leq r\leq1$ to $2\leq r\leq4$, or changing the vertical shape, for example to be a paddle rather than a hump.
To calculate the beta distribution variance, we would calculate the moment of inertia for a shifted beta distribution with the $r$-value mean placed on the $z$-axis of rotation,
$$\sigma^2=\int_0^1 (r-\mu)^2 \beta(r;\alpha,\beta) \, dr =\frac{\alpha \beta }{(\alpha +\beta )^2 (\alpha +\beta +1)}\,,$$
which for $\beta(r;2,2)$, i.e., $I=\sigma^2=\dfrac{1}{20}$, where $I$ is the moment of inertia, looks like this,
Now for higher, so-called 'central' moments, i.e., moments about the mean, like skewness and kurtosis, we calculate the $n^{\text{th}}$ moment around the mean from
$$\int_0^1 (r-\mu)^n \beta(r;\alpha,\beta) \, dr\,.$$
This can also be understood to be the $n^{\text{th}}$ derivative of circular motion.
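A small numerical check of the formulas above with SciPy (quadrature against the Beta(2,2) density; not part of the original answer):
from scipy import stats, integrate

a, b = 2, 2
pdf = stats.beta(a, b).pdf

mu = integrate.quad(lambda r: r * pdf(r), 0, 1)[0]
var = integrate.quad(lambda r: (r - mu) ** 2 * pdf(r), 0, 1)[0]
print(mu, var)   # 0.5 and 0.05 = 1/20, matching alpha/(alpha+beta) and the variance formula above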
What if we want to calculate backwards, that is, take a 3D solid object and turn it into a probability function? Things then get a bit trickier. For example, let us take a torus.
First we take its circular cross section, then we make it into a half ellipse to show the density of any flat coin like slice, then we convert the coin into a wedge-shaped coin to account for the increasing density with increasing distance ($r$) from the $z$-axis, and finally we normalize for the area to make a density function. This is outlined graphically below with the mathematics left to the reader.
Finally, how do these equivalences relate to motion? Note that, as above, the moment of inertia, $I$, can be related to the second central moment, $\sigma^2$, a.k.a. the variance. Then $I=\dfrac{\tau}{a}$, that is, the ratio of the torque, $\tau$, and the angular acceleration, $a$. We would then differentiate to obtain higher order rates of change in time. | What's so 'moment' about 'moments' of a probability distribution? | Question: So what does the word "moment" mean in this case? Why this choice of word? It doesn't sound intuitive to me (or I never heard it that way back in college :) Come to think of it I am equally | What's so 'moment' about 'moments' of a probability distribution?
Question: So what does the word "moment" mean in this case? Why this choice of word? It doesn't sound intuitive to me (or I never heard it that way back in college :) Come to think of it I am equally curious with its usage in "moment of inertia" ;) but let's not focus on that for now.
Answer: Actually, in a historical sense, moment of inertia is probably where the sense of the word moments comes from. Indeed, one can (as below) show how the moment of inertia relates to variance. This also yields a physical interpretation of higher moments.
In physics, a moment (sic, paraphrased:) is the product of a distance and a force, and in this way it accounts for how the force is located or arranged. Moments are usually defined with respect to a fixed reference point; they deal with forces as measured at some distance from that reference point. For example, the moment of force acting on an object, often called torque, is the product of the force and the distance from a reference point, as in the example below.
Less confusing than the names usually given, e.g., hyperflatness etc. for higher moments would be moments from circular motion e.g., moments of inertia for circular motion, of rigid bodies which is an simple conversion. Angular acceleration is the derivative of angular velocity, which is the derivative of angle with respect to time, i.e., $ \dfrac{d\omega}{dt}=\alpha,\,\dfrac{d\theta}{dt}=\omega$. Consider that the second moment is analogous to torque applied to a circular motion, or if you will an acceleration/deceleration (also second derivative) of that circular (i.e., angular, $\theta$) motion. Similarly, the third moment would be a rate of change of torque, and so on and so forth for yet higher moments to make rates of change of rates of change of rates of change, i.e., sequential derivatives of circular motion. This is perhaps easier to visualize this with actual examples.
There are limits to physical plausibility, e.g., where an object begins and ends, i.e., its support, which renders the comparison more or less realistic. Let us take the example of a beta distribution, which has (finite) support on [0,1] and show the correspondence for that. The beta distribution density function (pdf) is
$$\beta(x;\alpha,\beta)=\begin{array}{cc}
\Bigg\{ &
\begin{array}{cc}
\dfrac{x^{\alpha -1} (1-x)^{\beta -1}}{B(\alpha ,\beta )} & 0<x<1 \\
0 & \text{True} \\
\end{array}
\\
\end{array}\,,$$
where $B(\alpha,\beta)=\dfrac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}$, and $\Gamma(.)$ is the gamma function, $\Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\,dx$.
The mean is then the first moment of rotation around the $z$-axis for the beta function plotted as a rigidly rotating thin sheet of uniform area density with the minimum $x$-value affixed to the (0,0,0) origin, with its base in the $x,y$ plane.
$$\mu=\int_0^1r\,\beta(r;\alpha,\beta)\,dr=\frac{\alpha}{\alpha+\beta}\,,$$
as illustrated for $\beta(r;2,2)$, i.e., $\mu=\dfrac{1}{2}$, below
Note that there is nothing preventing us from moving the beta distribution thin sheet to another location and re-scaling it, e.g., from $0\leq r\leq1$ to $2\leq r\leq4$, or changing the vertical shape, for example to be a paddle rather than a hump.
To calculate the beta distribution variance, we would calculate the moment of inertia for a shifted beta distribution with the $r$-value mean placed on the $z$-axis of rotation,
$$\sigma^2=\int_0^1 (r-\mu)^2 \beta(r;\alpha,\beta) \, dr =\frac{\alpha \beta }{(\alpha +\beta )^2 (\alpha +\beta +1)}\,,$$
which for $\beta(r;2,2)$, i.e., $I=\sigma^2=\dfrac{1}{20}$, where $I$ is the moment of inertia, looks like this,
Now for higher so called 'central' moments, i.e., moments about the mean, like skewness, and kurtosis we calculate the $n^{\text{th}}$ moment around the mean from
$$\int_0^1 (r-\mu)^n \beta(r;\alpha,\beta) \, dr\,.$$
This can also be understood to be the $n^{\text{th}}$ derivative of circular motion.
What if we want to calculate backwards, that is, take a 3D solid object and turn it into a probability function? Things then get a bit trickier. For example, let us take a torus.
First we take its circular cross section, then we make it into a half ellipse to show the density of any flat coin like slice, then we convert the coin into a wedge-shaped coin to account for the increasing density with increasing distance ($r$) from the $z$-axis, and finally we normalize for the area to make a density function. This is outlined graphically below with the mathematics left to the reader.
Finally, we ask how these equivalences relate to motion? Note that as above the moment of inertia, $I$, can be made related to the second central moment, $\sigma^2$, A.K.A., the variance. Then $I=\dfrac{\tau}{a}$, that is, the ratio of the torque, $\tau$, and the angular acceleration, $a$. We would then differentiate to obtain higher order rates of change in time. | What's so 'moment' about 'moments' of a probability distribution?
Question: So what does the word "moment" mean in this case? Why this choice of word? It doesn't sound intuitive to me (or I never heard it that way back in college :) Come to think of it I am equally |
2,524 | What's so 'moment' about 'moments' of a probability distribution? | Being overly simplistic, statistical moments are additional descriptors of a curve/distribution. We are familiar with the first two moments and these are generally useful for continuous normal distributions or similar curves. However these first two moments lose their informational value for other distributions. Thus other moments provide additional information on the shape/form of the distribution. | What's so 'moment' about 'moments' of a probability distribution? | Being overly simplistic, statistical moments are additional descriptors of a curve/distribution. We are familiar with the first two moments and these are generally useful for continuous normal distrib | What's so 'moment' about 'moments' of a probability distribution?
Being overly simplistic, statistical moments are additional descriptors of a curve/distribution. We are familiar with the first two moments and these are generally useful for continuous normal distributions or similar curves. However these first two moments lose their informational value for other distributions. Thus other moments provide additional information on the shape/form of the distribution. | What's so 'moment' about 'moments' of a probability distribution?
Being overly simplistic, statistical moments are additional descriptors of a curve/distribution. We are familiar with the first two moments and these are generally useful for continuous normal distrib |
2,525 | What is wrong with extrapolation? | A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger associated with extrapolation is illustrated in the following figure.
The regression model is “by construction” an interpolation model, and should not be used for extrapolation, unless this is properly justified. | What is wrong with extrapolation? | A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger a | What is wrong with extrapolation?
A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger associated with extrapolation is illustrated in the following figure.
The regression model is “by construction” an interpolation model, and should not be used for extrapolation, unless this is properly justified. | What is wrong with extrapolation?
A regression model is often used for extrapolation, i.e. predicting the response to an input which lies outside of the range of the values of the predictor variable used to fit the model. The danger a |
2,526 | What is wrong with extrapolation? | This xkcd comic explains it all.
Using the data points Cueball (the man with the stick) has, he has extrapolated that the woman will have "four dozen" husbands by late next month, and used this extrapolation to conclude that the wedding cake should be bought in bulk.
Edit 3: For those of you who say "he doesn't have enough data points", here's another xkcd comic:
Here, the usage of the word "sustainable" over time is shown on a semi-log plot, and extrapolating the data points we receive unreasonable estimates of how often the word "sustainable" will occur in the future.
Edit 2: For those of you who say "you need all past data points too", yet another xkcd comic:
Here, we have all past data points but we fail to accurately predict the resolution of Google Earth. Note that this is a semi-log graph too.
Edit: Sometimes, even the strongest of correlations (r=.9979 in this case) are just plain wrong.
If you extrapolate without other supporting evidence, you are also violating "correlation does not imply causation"; another great sin in the world of statistics.
If you do extrapolate X with Y, however, you must make sure that you can accurately (enough to satisfy your requirements) predict X with only Y. Almost always, there are multiple factors that impact X.
I would like to share a link to another answer that explains it in the words of Nassim Nicholas Taleb.
2,527 | What is wrong with extrapolation? | "Prediction is very difficult, especially if it's about the future". The quote is attributed to many people in some form. I restrict in the following "extrapolation" to "prediction outside the known range", and in a one-dimensional setting, extrapolation from a known past to an unknown future.
So what is wrong with extrapolation? First, it is not easy to model the past. Second, it is hard to know whether a model from the past can be used for the future. Behind both assertions dwell deep questions about causality or ergodicity, sufficiency of explanatory variables, etc. that are quite case dependent. What is wrong is that it is difficult to choose a single extrapolation scheme that works well in different contexts, without a lot of extra information.
This generic mismatch is clearly illustrated in the Anscombe quartet dataset shown below. The linear regression is also (outside the $x$-coordinate range) an instance of extrapolation. The same line regresses four sets of points with the same standard statistics. However, the underlying models are quite different: the first one is quite standard; the second reflects a parametric model error (a second- or third-degree polynomial could be better suited); the third shows a perfect fit except for one value (an outlier?); the fourth a lack of smooth relationships (hysteresis?).
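For readers who want to see this numerically, here is a minimal sketch (not part of the original answer) in Python/NumPy using the standard published Anscombe values: all four datasets give essentially the same fitted line, the same correlation, and therefore the same extrapolated prediction, even though only the first one plausibly supports it.

```python
import numpy as np

# Anscombe's quartet (standard published values).
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)   # roughly 0.5x + 3.0 in all four cases
    r = np.corrcoef(x, y)[0, 1]              # roughly 0.82 in all four cases
    yhat_20 = slope * 20 + intercept         # extrapolation far outside the observed x-range
    print(f"{name:>3}: slope={slope:.2f}  intercept={intercept:.2f}  r={r:.3f}  yhat(x=20)={yhat_20:.1f}")
```

The identical summaries, and identical extrapolated predictions, hide four very different generating mechanisms.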
However, forecasting can be rectified to some extent.
Adding to other answers, a couple of ingredients can help practical extrapolation:
You can weight the samples according to their distance (index $n$) from the location $p$ where you want to extrapolate. For instance, use an increasing function $f_p(n)$ (with $p\ge n$), like exponential weighting or smoothing, or sliding windows of samples, to give less importance to older values (a minimal sketch follows these two points).
You can use several extrapolation models, and combine them or select the best (Combining forecasts, J. Scott Armstrong, 2001). Recently, there have been a number of works on their optimal combination (I may provide references if needed).
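As a minimal sketch of the first ingredient only (recency weighting), assuming made-up data and an arbitrary decay rate of 0.8 — this is not the CHOPtrey scheme mentioned below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signal observed at times 0..19; we want a one-step-ahead value at t = 20.
t = np.arange(20)
y = 0.3 * t + np.sin(t / 2.0) + rng.normal(scale=0.1, size=t.size)

# Exponential recency weights: 1 for the newest sample, decaying into the past.
w = 0.8 ** (t.max() - t)

# Low-order polynomial fit; np.polyfit's `w` argument weights the residuals.
coeffs = np.polyfit(t, y, deg=2, w=w)
print(f"one-step-ahead extrapolation at t=20: {np.polyval(coeffs, 20):.2f}")
```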
Recently, I have been involved in a project for extrapolating values for the communication of simulation subsystems in a real-time environment. The dogma in this domain was that extrapolation may cause instability. We actually realized that combining the two above ingredients was very efficient, without noticeable instability (without a formal proof yet: CHOPtrey: contextual online polynomial extrapolation for enhanced multi-core co-simulation of complex systems, Simulation, 2017). And the extrapolation worked with simple polynomials, with a very low computational burden, most of the operations being computed beforehand and stored in look-up tables.
Finally, as extrapolation suggests funny drawings, the following is the backward effect of linear regression:
2,528 | What is wrong with extrapolation? | Although the fit of a model might be "good", extrapolation beyond the range of the data must be treated skeptically. The reason is that in many cases extrapolation (unfortunately and unavoidably) relies on untestable assumptions about the behaviour of the data beyond their observed support.
When extrapolating one must make two judgement calls: First, from a quantitative perspective, how valid is the model outside the range of the data? Second, from a qualitative perspective, how plausible is it that a point $x_{out}$ lying outside the observed sample range is a member of the population we assume for the sample? Because both questions entail a certain degree of ambiguity, extrapolation is considered an ambiguous technique too. If you have reasons to accept that these assumptions hold, then extrapolation is usually a valid inferential procedure.
An additional caveat is that many non-parametric estimation techniques do not permit extrapolation natively. This problem is particularly noticeable in the case of spline smoothing where there are no more knots to anchor the fitted spline.
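As a small illustration of that caveat (my addition): scipy's CubicSpline will happily return values outside the knot range by continuing the end polynomial pieces, but those values quickly stop tracking the underlying function.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline through sin(x) sampled on [0, 10].
x = np.linspace(0, 10, 11)
spline = CubicSpline(x, np.sin(x))   # extrapolate=True by default: end pieces are simply continued

for xq in (5.5, 10.5, 15.0):         # one interior query, then two queries past the last knot
    print(f"x={xq:5.1f}   spline={spline(xq):10.3f}   true={np.sin(xq):7.3f}")
```

Inside the data the spline is an excellent interpolant; a few units past the last knot it is just one cubic polynomial carried onwards, and the error grows rapidly.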
Let me stress that extrapolation is far from evil. For example, numerical methods widely used in Statistics (such as Aitken's delta-squared process and Richardson extrapolation) are essentially extrapolation schemes based on the idea that the underlying behaviour of the function analysed for the observed data remains stable across the function's support.
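A minimal sketch (my addition, not part of the original answer) of Aitken's delta-squared process, applied to the slowly converging Leibniz series for $\pi$:

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration of a convergent sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0
        out.append(s2 if denom == 0 else s2 - (s2 - s1) ** 2 / denom)
    return out

# Partial sums of 4 * (1 - 1/3 + 1/5 - 1/7 + ...), which converge to pi very slowly.
partial, total = [], 0.0
for k in range(10):
    total += 4 * (-1) ** k / (2 * k + 1)
    partial.append(total)

print(f"pi               : {math.pi:.6f}")
print(f"plain partial sum: {partial[-1]:.6f}")          # still wrong in the first decimal place
print(f"Aitken estimate  : {aitken(partial)[-1]:.6f}")  # much closer, from the same ten terms
```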
2,529 | What is wrong with extrapolation? | Contrary to other answers, I'd say that there is nothing wrong with extrapolation as long as it is not used in a mindless way. First, notice that extrapolation is:
the process of estimating, beyond the original observation range, the value of a variable on the basis of its relationship with another variable.
...so it's a very broad term, and many different methods, ranging from simple linear extrapolation to linear regression, polynomial regression, or even some advanced time-series forecasting methods, fit such a definition. In fact, extrapolation, prediction and forecasting are closely related. In statistics we often make predictions and forecasts. This is also what the link you refer to says:
We’re taught from day 1 of statistics that extrapolation is a big no-no, but that’s exactly what forecasting is.
Many extrapolation methods are used for making predictions; moreover, some simple methods often work pretty well with small samples, so they can be preferred to more complicated ones. The problem is, as noticed in other answers, when you use an extrapolation method improperly.
For example, many studies show that the age of sexual initiation decreases over time in western countries. Take a look at the plot below about age at first intercourse in the US. If we blindly used linear regression to predict age at first intercourse, we would predict it to go below zero after some number of years (with, accordingly, first marriage and first birth happening some time after death)... However, if you needed to make a one-year-ahead forecast, then I'd guess that linear regression would lead to pretty accurate short-term predictions of the trend.
(source guttmacher.org)
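A toy version of the same point, with invented numbers rather than the Guttmacher data: the fitted line is a reasonable one-step-ahead forecaster, but pushed far enough it predicts an impossible negative age.

```python
import numpy as np

# Invented illustration data (not the Guttmacher figures): median age at first intercourse.
year = np.array([1960, 1970, 1980, 1990, 2000], dtype=float)
age = np.array([19.0, 18.3, 17.6, 17.2, 17.0])

coeffs = np.polyfit(year, age, deg=1)   # simple linear trend

print(f"forecast for 2001 (one year ahead): {np.polyval(coeffs, 2001):5.1f}")  # plausible
print(f"forecast for 2340 (far outside)   : {np.polyval(coeffs, 2340):5.1f}")  # negative age
```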
Another great example comes from a completely different domain, since it is about "extrapolating" as done by Microsoft Excel, as shown below (I don't know if this is already fixed or not). I don't know the author of this image; it comes from Giphy.
All models are wrong, and extrapolation is also wrong, since it won't enable you to make precise predictions. As with other mathematical/statistical tools, it will enable you to make approximate predictions. How accurate they will be depends on the quality of the data that you have, on using methods adequate for your problem, on the assumptions you made while defining your model, and on many other factors. But this doesn't mean that we can't use such methods. We can, but we need to remember their limitations and should assess their quality for a given problem.
2,530 | What is wrong with extrapolation? | I quite like the example by Nassim Taleb (which was an adaptation of an earlier example by Bertrand Russell):
Consider a turkey that is fed every day. Every single feeding will firm up the bird's belief that it is the general rule of life to be fed every day by friendly members of the human race "looking out for its best interests," as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.
Some mathematical analogs are the following:
knowledge of the first few Taylor coefficients of a function does not always guarantee that the succeeding coefficients will follow your presumed pattern (a classic counterexample is written out just after this list).
knowledge of a differential equation's initial conditions does not always guarantee knowledge of its asymptotic behavior (e.g. Lorenz's equations, sometimes distorted into the so-called "butterfly effect")
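A classic counterexample for the first point (my addition, not part of the original answer) is the flat function
$$ f(x) \;=\; \begin{cases} e^{-1/x^{2}}, & x \neq 0,\\ 0, & x = 0, \end{cases} \qquad f^{(k)}(0) = 0 \ \text{ for every } k \ge 0 . $$
Its Taylor data at the origin are identical to those of the zero function, yet $f(x) > 0$ for every $x \neq 0$: no list of local coefficients, finite or infinite, forces the "obvious" continuation.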
Here is a nice MO thread on the matter.
2,531 | What is wrong with extrapolation? | Ponder the following story, if you will.
I also remember sitting in a Statistics course, and the professor told us extrapolation was a bad idea. Then during the next class he told us it was a bad idea again; in fact, he said it twice.
I was sick for the rest of the semester, but I was certain I couldn't have missed a lot of material, because by the last week the guy must surely have been doing nothing but telling people again and again how extrapolation was a bad idea.
Strangely enough, I didn't score very high on the exam.
2,532 | What is wrong with extrapolation? | The question is not just statistical, it's also epistemological. Extrapolation is one of the ways we learn about nature; it's a form of induction.
Let's say we have data on the electrical conductivity of a material over a range of temperatures from 0 to 20 degrees Celsius; what can we say about the conductivity at 40 degrees Celsius?
It's closely related to small-sample inference: what can we say about the entire population from measurements conducted on a small sample? This was started by Gosset at Guinness, who came up with the Student t-distribution. Before him, statisticians didn't bother to think about small samples, assuming that the sample size could always be made large. Gosset worked at Guinness and had to judge an entire batch of beer, and decide whether to ship it, from a small sample.
So, in practice (business), engineering and science we always have to extrapolate in some way. It could be extrapolating from small samples to large populations, from a limited range of input conditions to a wider set of conditions, from what's going on in an accelerator to what happened to a black hole billions of miles away, etc. It's especially important in science, though, as we really learn by studying the discrepancies between our extrapolation estimates and actual measurements. Often we find new phenomena when the discrepancies are large or consistent.
Hence, I say there is no problem with extrapolation. It's something we have to do every day. It's just difficult.
2,533 | What is wrong with extrapolation? | Extrapolation itself isn't necessarily evil, but it is a process which lends itself to conclusions that are less reasonable than those you arrive at with interpolation.
Extrapolation is often done to explore values quite far from the sampled region. If I'm sampling 100 values from 0-10, and then extrapolate out just a little bit, merely to 11, my new point is likely 10 times further away from any datapoint than any interpolation could ever get. This means that there's that much more space for a variable to get out of hand (qualitatively). Note that I intentionally chose only a minor extrapolation. It can get far worse.
Extrapolation must be done with curve fits that were intended to do extrapolation. For example, many polynomial fits are very poor for extrapolation because terms which behave well over the sampled range can explode once you leave it (a small numeric sketch of this appears at the end of this answer). Good extrapolation depends on a "good guess" as to what happens outside of the sampled region. Which brings me to...
It is often extremely difficult to use extrapolation due to the presence of phase transitions. Many processes which one may wish to extrapolate have decidedly nonlinear properties which are not sufficiently exposed over the sampled region. Aeronautics around the speed of sound is an excellent example. Many extrapolations from lower speeds fall apart as you reach and exceed the speed of information transfer in the air. This also occurs quite often in the soft sciences, where the policy itself can impact the success of the policy. Keynesian economics extrapolated how the economy would behave at different levels of inflation, and predicted the best possible outcome. Unfortunately, there were second-order effects and the result was not economic prosperity, but rather some of the highest inflation rates the US has seen.
People like extrapolations. Generally speaking, people really want someone to peer into a crystal ball and tell them the future. They will accept surprisingly bad extrapolations simply because it's all the information they have. This may not make extrapolation itself bad, per se, but it is definitely something one should account for when using it.
For the ultimate in extrapolation, consider the Manhattan Project. The physicists there were forced to work with extremely small-scale tests before constructing the real thing. They simply didn't have enough uranium to waste on tests. They did the best they could, and they were smart. However, when the final test occurred, it was decided that each scientist would decide how far away from the blast he wanted to be when it went off. There were substantial differences of opinion as to how far away was "safe", because every scientist knew they were extrapolating quite far from their tests. There was even a non-trivial consideration that they might set the atmosphere on fire with the nuclear bomb, an issue also put to rest with substantial extrapolation!
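A small numeric sketch of the second point above (my addition; the data, degrees, and seed are arbitrary): a flexible polynomial that fits the sampled region [0, 10] nicely will typically run away from anything the data supports as soon as you step outside it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of a bounded function over the sampled region [0, 10].
x = np.linspace(0, 10, 40)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

line = np.polynomial.Polynomial.fit(x, y, deg=1)    # rigid model
poly = np.polynomial.Polynomial.fit(x, y, deg=10)   # flexible model, fits the sampled range well

for xq in (5.5, 11.0, 15.0):   # inside the range, just outside it, and well outside it
    print(f"x={xq:4.1f}  degree-1={line(xq):9.2f}  degree-10={poly(xq):12.2f}  true={np.sin(xq):5.2f}")
```

Inside the range both fits look sensible; beyond it the high-degree terms dominate and the prediction departs rapidly, while the degree-1 fit stays tame but is simply answering a different (and also unsupported) question.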
2,534 | What is wrong with extrapolation? | Lots of good answers here; I just want to try and synthesize what I see as the core of the issue: it is dangerous to extrapolate beyond the data generating process that gave rise to the estimation sample. A change in that process is sometimes called a 'structural change'.
Forecasting comes with assumptions, the main one being that the data generating process is (as near as makes no significant difference) the same as the one that generated the sample (except for the rhs variables, whose changes you explicitly account for in the model). If a structural change occurs (e.g. Thanksgiving in Taleb's example), all bets are off.
2,535 | Examples for teaching: Correlation does not mean causation | It might be useful to explain that "causes" is an asymmetric relation (X causes Y is different from Y causes X), whereas "is correlated with" is a symmetric relation.
For instance, homeless population and crime rate might be correlated, in that both tend to be high or low in the same locations. It is equally valid to say that the homeless population is correlated with the crime rate, or that the crime rate is correlated with the homeless population. To say that crime causes homelessness, or that homeless populations cause crime, are different statements. And correlation does not imply that either is true. For instance, the underlying cause could be a third variable such as drug abuse or unemployment.
The mathematics of statistics is not good at identifying underlying causes, which requires some other form of judgement.
2,536 | Examples for teaching: Correlation does not mean causation | My favorites:
1) The more firemen are sent to a fire, the more damage is done.
2) Children who get tutored get worse grades than children who do not get tutored
and (this is my top one)
3) In the early elementary school years, astrological sign is correlated with IQ, but this correlation weakens with age and disappears by adulthood.
2,537 | Examples for teaching: Correlation does not mean causation | I've always liked this one:
source: http://pubs.acs.org/doi/abs/10.1021/ci700332k
2,538 | Examples for teaching: Correlation does not mean causation | Sometimes correlation is enough. For example, in car insurance, male drivers are correlated with more accidents, so insurance companies charge them more. There is no way you could actually test this for causation. You cannot change the genders of the drivers experimentally. Google has made hundreds of billions of dollars not caring about causation.
To find causation, you generally need experimental data, not observational data. In economics, though, they often use observed "shocks" to the system to test for causation: for example, if a CEO dies suddenly and the stock price goes up, you can infer causation.
Correlation is a necessary but not sufficient condition for causation. To show causation requires a counter-factual.
2,539 | Examples for teaching: Correlation does not mean causation | The number of Nobel prizes won by a country (adjusting for population) correlates well with per capita chocolate consumption. (New England Journal of Medicine)
2,540 | Examples for teaching: Correlation does not mean causation | I have a few examples I like to use.
When investigating the cause of crime in New York City in the 80s, when they were trying to clean up the city, an academic found a strong correlation between the amount of serious crime committed and the amount of ice cream sold by street vendors! (Which is the cause and which is the effect?) Obviously, there was an unobserved variable causing both. Summers are when crime is the greatest and when the most ice cream is sold.
The size of your palm is negatively correlated with how long you will live (really!). In fact, women tend to have smaller palms and live longer.
[My favorite] I heard of a study a few years ago that found the amount of soda a person drinks is positively correlated with the likelihood of obesity. (I said to myself - that makes sense, since it must be due to people drinking the sugary soda and getting all those empty calories.) A few days later more details came out. Almost all the correlation was due to an increased consumption of diet soft drinks. (That blew my theory!) So, which way is the causation? Do the diet soft drinks cause one to gain weight, or does a gain in weight cause an increased consumption of diet soft drinks? (Before you conclude it is the latter, see the study where controlled experiments with rats showed that the group fed yogurt with artificial sweetener gained more weight than the group fed normal yogurt.) Two references: Drink More Diet Soda, Gain More Weight?; Diet sodas linked to obesity. I think they are still trying to sort this one out.
2,541 | Examples for teaching: Correlation does not mean causation | Although it's more of an illustration of the problem of multiple comparisons, it is also a good example of misattributed causation:
Rugby (the religion of Wales) and its influence on the Catholic church: should Pope Benedict XVI be worried?
"every time Wales win the rugby grand slam, a Pope dies, except for 1978 when Wales were really good, and two Popes died." | Examples for teaching: Correlation does not mean causation | Although it's more of an illustration of the problem of multiple comparisons, it is also a good example of misattributed causation:
Rugby (the religion of Wales) and its influence on the Catholic chur | Examples for teaching: Correlation does not mean causation
Although it's more of an illustration of the problem of multiple comparisons, it is also a good example of misattributed causation:
Rugby (the religion of Wales) and its influence on the Catholic church: should Pope Benedict XVI be worried?
"every time Wales win the rugby grand slam, a Pope dies, except for 1978 when Wales were really good, and two Popes died." | Examples for teaching: Correlation does not mean causation
Although it's more of an illustration of the problem of multiple comparisons, it is also a good example of misattributed causation:
Rugby (the religion of Wales) and its influence on the Catholic chur |
2,542 | Examples for teaching: Correlation does not mean causation | There are two aspects to this post hoc ergo propter hoc problem that I like to cover: (i) reverse causality and (ii) endogeneity.
An example of "possible" reverse causality:
Social drinking and earnings - drinkers earn more money, according to Bethany L. Peters & Edward Stringham (2006, "No Booze? You May Lose: Why Drinkers Earn More Money Than Nondrinkers," Journal of Labor Research, Transaction Publishers, vol. 27(3), pages 411-421, June). Or do people who earn more money drink more, either because they have a greater disposable income or due to stress? This is a great paper to discuss for all sorts of reasons, including measurement error, response bias, causality, etc.
An example of "possible" endogeneity:
The Mincer equation explains log earnings by education, experience and experience squared. There is a long literature on this topic. Labour economists want to estimate the causal relationship of education on earnings, but perhaps education is endogenous because "ability" could increase the amount of education an individual has (by lowering the cost of obtaining it) and could lead to an increase in earnings, irrespective of the level of education. A potential solution to this could be an instrumental variable. Angrist and Pischke's book, Mostly Harmless Econometrics, covers this and related topics in great detail and clarity.
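In its textbook form (standard notation, not quoted from the original answer) the equation is
$$ \ln w_i \;=\; \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^{2} + \varepsilon_i , $$
where $w_i$ is earnings, $s_i$ years of schooling and $x_i$ years of labour-market experience. The endogeneity worry is that unobserved ability sits in $\varepsilon_i$ and is correlated with $s_i$, which biases the OLS estimate of $\beta_1$; an instrumental variable for schooling is one way out.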
Other silly examples that I have no support for include:
- Number of televisions per capita and the mortality rate. So let's send TVs to developing countries. Obviously both are endogenous to something like GDP.
- Number of shark attacks and ice cream sales. Both are endogenous to temperature, perhaps?
I also like to tell the terrible joke about the lunatic and the spider. A lunatic is wandering the corridors of an asylum with a spider he's carrying in the palm of his hand. He sees the doctor and says, "Look Doc, I can talk to spiders. Watch this. "Spider, go left!" The spider duly moves to the left. He continues, "Spider, go right." The spider shuffles to the right of his palm. The doctor replies, "Interesting, maybe we should talk about this in the next group session." The lunatic retorts, "That's nothing Doc. Watch this." He pulls off each of the spider's legs one by one and then shouts, "Spider, go left!" The spider lies motionless on his palm and the lunatic turns to the doctor and concludes, "If you pull off a spider's legs he'll go deaf."
2,543 | Examples for teaching: Correlation does not mean causation | The best one I've been taught is that the number of drownings and ice cream sales may be highly correlated, but that doesn't imply that one causes the other. Drownings and ice cream sales are obviously both higher in the summer months, when the weather is good. A third variable, good weather, causes both.
2,544 | Examples for teaching: Correlation does not mean causation | As a generalization of 'pirates cause global warming': Pick any two quantities which are (monotonically) increasing or decreasing with time and you should see some correlation.
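A quick simulation of this point (my addition; the series are invented): two independently generated series whose only shared feature is a time trend still show a strong correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100)

# Independently generated series that merely trend in time (one down, one up).
pirates = 1000 - 8 * t + rng.normal(scale=40, size=t.size)
temperature = 14 + 0.01 * t + rng.normal(scale=0.2, size=t.size)

r = np.corrcoef(pirates, temperature)[0, 1]
print(f"correlation between two causally unrelated trending series: {r:.2f}")  # strongly negative
```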
2,545 | Examples for teaching: Correlation does not mean causation | I work with students in teaching correlation vs causation in my Algebra One classes. We examine a lot of possible examples. I found the article Bundled-Up Babies and Dangerous Ice Cream: Correlation Puzzlers from the February 2013 Mathematics Teacher to be useful. I like the idea of talking about "lurking variables". Also this cartoon is a cute conversation starter:
We identify the independent and dependent variable in the cartoon and talk about whether this is an example of causation, and if not, why not.
2,546 | Examples for teaching: Correlation does not mean causation | You can spend a few minutes on Google Correlate, and come up with all kinds of spurious correlations.
2,547 | Examples for teaching: Correlation does not mean causation | The standard citation pointing out the correlation between the number of newborn babies and breeding-pairs of storks in West Germany is A new parameter for sex education, Nature 332, 495 (07 April 1988); doi:10.1038/332495a0
2,548 | Examples for teaching: Correlation does not mean causation | A correlation on its own can never establish a causal link. David Hume (1711-1776) argued quite effectively that we cannot obtain certain knowledge of causality by purely empirical means. Kant attempted to address this; the Wikipedia page for Kant seems to sum it up quite nicely:
Kant believed himself to be creating a compromise between the empiricists and the rationalists. The empiricists believed that knowledge is acquired through experience alone, but the rationalists maintained that such knowledge is open to Cartesian doubt and that reason alone provides us with knowledge. Kant argues, however, that using reason without applying it to experience will only lead to illusions, while experience will be purely subjective without first being subsumed under pure reason.
In other words, Hume tells us that we can never know a causal relationship exists just by observing a correlation, but Kant suggests that we may be able to use our reason to distinguish correlations that do imply a causal link from those that don't. I don't think Hume would have disagreed, as long as Kant were writing in terms of plausibility rather than certain knowledge.
In short, a correlation provides circumstantial evidence implying a causal link, but the weight of the evidence depends greatly on the particular circumstances involved, and we can never be absolutely sure. The ability to predict the effects of interventions is one way to gain confidence (we can't prove anything, but we can disprove by observational evidence, so we have then at least attempted to falsify the theory of a causal link). Having a simple model that explains why we should observe a correlation, and that also explains other forms of evidence, is another way we can apply our reasoning as Kant suggests.
Caveat emptor: It is entirely possible I have misunderstood the philosophy; however, it remains the case that a correlation can never provide proof of a causal link.
2,549 | Examples for teaching: Correlation does not mean causation | I read (a long time ago) of an interesting example about a decline in birth rates (or fertility rates if you prefer that measure), especially in the US, starting in the early 1960s, as nuclear weapons testing was at an all-time high (in 1961 the biggest nuclear bomb ever detonated was tested in the USSR). Rates continued to decline until towards the end of the twentieth century, when most countries finally stopped doing this.
I can't find a reference which combines these figures now, but this Wikipedia article has figures on nuclear weapons test numbers by country.
Of course, it might make better sense to look at the correlation of birth rate with the introduction and legalisation of the contraceptive pill, 'coincidentally' starting in the early 1960s (in only some states first, then all states for married women only, then some for unmarried, then across the board). But even that could only be part of the cause; lots of other aspects of equality, economic changes and other factors play a significant part.
2,550 | Examples for teaching: Correlation does not mean causation | When $x_t$ and $y_t$ are stationary time series, correlation between $y_t$ and $x_{t-1}$ implies causation of $y_t$ by $x_{t-1}$. For some reason this is not mentioned here.
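A minimal simulation sketch of the lagged correlation described in the answer above (an added illustration, not part of the answer): $x_t$ is stationary white noise and, by construction, $x_{t-1}$ drives $y_t$; the coefficient 0.8 and the noise scale are arbitrary assumptions.
import numpy as np
rng = np.random.default_rng(0)
T = 5_000
x = rng.normal(size=T)                                    # stationary white noise
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.5, size=T)   # y_t depends on x_(t-1) by construction
y[0] = rng.normal()                                       # overwrite the wrap-around term from np.roll
lagged = np.corrcoef(y[1:], x[:-1])[0, 1]                 # corr(y_t, x_(t-1)): large by construction
contemp = np.corrcoef(y, x)[0, 1]                         # corr(y_t, x_t): near zero here
print(f"corr(y_t, x_(t-1)) = {lagged:.2f},  corr(y_t, x_t) = {contemp:.2f}")
Here the causal ordering is known because we built it in; with real data, such a lagged correlation is evidence consistent with that ordering (the Granger sense of "causation"), not a proof of it.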
2,551 | Examples for teaching: Correlation does not mean causation | Sperm count in males in Slovene villages and the number of bears (also in Slovenia) show a negative correlation. Some people find this very worrying. I'll try and get the study that did this.
2,552 | Examples for teaching: Correlation does not mean causation | I've recently been to a conference and one of the speakers gave this very interesting example (although the point was to illustrate something else):
Americans and English eat a lot of fat food. There is a high rate of cardiovascular diseases in US and UK.
French eat a lot of fat food, but they have a low(er) rate of cardiovascular diseases.
Americans and English drink a lot of alcohol. There is a high rate of cardiovascular diseases in US and UK.
Italians drink a lot of alcohol but, again, they have a low(er) rate of cardiovascular diseases.
The conclusion? Eat and drink what you want.
And you have a higher chance of getting a heart attack if you speak English!
2,553 | Examples for teaching: Correlation does not mean causation | This cartoon from XKCD is also posted elsewhere at CrossValidated.
2,554 | Examples for teaching: Correlation does not mean causation | Another example of correlation I've used is the large increase in the number of people eating organic food and the increase in the number of children diagnosed with autism in the U.S. There's a parody graph on the web.
2,555 | Examples for teaching: Correlation does not mean causation | http://tylervigen.com/
This shows a ton of correlations that obviously have nothing to do with causation - or do you have any good idea what the causal story is when
Age of Miss America
correlates with
Murders by steam, hot vapours and hot objects
??
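Correlations of this kind are easy to manufacture. The sketch below is an added illustration, not part of the answer: it draws many pairs of independent random walks (trending series, like most of those on tylervigen.com) and reports how often the sample correlation looks impressive even though the pairs are unrelated by construction. The number of pairs and the series length are arbitrary assumptions.
import numpy as np
rng = np.random.default_rng(42)
n_pairs, T = 1_000, 120                                  # e.g. 120 "monthly" observations per series
a = np.cumsum(rng.normal(size=(n_pairs, T)), axis=1)     # independent random walks
b = np.cumsum(rng.normal(size=(n_pairs, T)), axis=1)
def rowwise_corr(u, v):
    # sample correlation of each pair (row) of series
    u = u - u.mean(axis=1, keepdims=True)
    v = v - v.mean(axis=1, keepdims=True)
    return (u * v).sum(axis=1) / np.sqrt((u ** 2).sum(axis=1) * (v ** 2).sum(axis=1))
r = rowwise_corr(a, b)
print(f"share of unrelated pairs with |r| > 0.5: {np.mean(np.abs(r) > 0.5):.0%}")
print(f"largest |r| among {n_pairs} unrelated pairs: {np.max(np.abs(r)):.2f}")
Trending (non-stationary) series correlate with each other as a matter of course, so picking the best match out of thousands of candidate series all but guarantees a striking but meaningless coincidence.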
2,556 | Examples for teaching: Correlation does not mean causation | Teaching "Correlation does not mean causation" doesn't really help anyone because at the end of the day all deductive arguments are based in part on correlation.
Humans are very bad at learning not to do something.
The goal should rather be constructive: Always think about alternatives to your starting assumptions that might produce the same data.
2,557 | Examples for teaching: Correlation does not mean causation | Well my Prof. used these in Introductory probability class:
1) Shoe size are correlated with reading ability
2) Shark attack is correlated with sale of ice cream. | Examples for teaching: Correlation does not mean causation | Well my Prof. used these in Introductory probability class:
2,558 | Examples for teaching: Correlation does not mean causation | The more fire engines sent to a fire, the bigger the damage.
2,559 | Examples for teaching: Correlation does not mean causation | I think a better paradigm might be that causation requires correlation associated with a credible and preferably proven mechanism. I think the word "imply" should be used very sparingly in this context, as it has several meanings, including that of suggestion.
2,560 | Examples for teaching: Correlation does not mean causation | The storks example is on page 8 of the first edition (1978) of Box, Hunter & Hunter's book entitled "Statistics for Experimenters..." (Wiley). I don't know whether it's in the 2nd edition. They identify the city as Oldenburg and the time period as 1930-1936.
They reference Ornithologische Monatsberichte, 44, No 2, Jahrgang, 1936, Berlin, and 48, No 1, Jahrgang, 1940, Berlin, and Statistisches Jahrbuch Deutscher Gemeinden, 27-33, 1932-1938, Gustav Fischer, Jena.
2,561 | Examples for teaching: Correlation does not mean causation | I saw a funny one in an article.
Butter production in Bangladesh has one of the highest correlations with the S&P 500 over a ten-year period.
2,562 | Examples for teaching: Correlation does not mean causation | Here is a perfect one. And unfortunately, it can be used as a great teaching point because neither the Washington Post's staff nor the Centers for Disease Control and Prevention show any inkling that the article reads like a satire piece from The Onion.
https://www.washingtonpost.com/health/trumps-presidency-may-be-making-latinos-sick/2019/07/19/4e89b9f0-a97f-11e9-9214-246e594de5d5_story.html?utm_term=.9dd329c2e837
2,563 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | There are some terminological differences where the same thing is called different names in different disciplines:
Longitudinal data in biostatistics are repeated observations of the same individuals = panel data in econometrics.
The model for a binary dependent variable in which the probability of 1 is modeled as $1/(1+\exp[-x'\beta])$ is called a logit model in econometrics, and logistic model in biostatistics. Biostatisticians tend to work with logistic regression in terms of odds ratios, as their $x$s are often binary, so the odds ratios represent the relative frequencies of the outcome of interest in the two groups in the population. This is such a common interpretation that you will often see a continuous variable transformed into two categories (low vs high blood pressure) to make this interpretation easier.
Statisticians' "estimating equations" are econometricians' "moment conditions". Statisticians' $M$-estimates are econometricians' extremum estimators.
There are terminological differences where the same term is used to mean different things in different disciplines:
Fixed effects stand for the $x'\beta$ in the regression equation for ANOVA statisticians, and for a "within" estimator in longitudinal/panel data models for econometricians. (Random effects are cursed for econometricians, for good.)
Robust inference means heteroskedasticity-corrected standard errors for economists (with extensions to clustered standard errors and/or autocorrelation-corrected standard errors) and methods robust to far outliers to statisticians.
It seems that economists have a ridiculous idea that stratified samples are those in which probabilities of selection vary between observations. These should be called unequal probability samples. Stratified samples are those in which the population is split into pre-defined groups according to characteristics known before sampling takes place.
Econometricians' "data mining" (at least in the 1980s literature) used to mean multiple testing and pitfalls related to it that have been wonderfully explained in Harrell's book. Computer scientists' (and statisticians') data mining procedures are non-parametric methods of finding patterns in the data, also known as statistical learning.
Horvitz-Thompson estimator is a non-parametric estimator of a finite population total in sampling statistics that relies on fixed probabilities of selection, with variance determined by the second order selection probabilities. In econometrics, it had grown to denote inverse propensity weighting estimators that rely on a moderately long list of the standard causal inference assumptions (conditional independence, SUTVA, overlap, all that stuff that makes Rubin's counterfactuals work). Yeah, there is some sort of probability in the denominator in both, but understanding the estimator in one context gives you zero ability to understand the other context.
I view the unique contributions of econometrics to be
Ways to deal with endogeneity and poorly specified regression models, recognizing, as mpiktas has explained in another answer, that (i) the explanatory variables may themselves be random (and hence correlated with regression errors producing bias in parameter estimates), (ii) the models can suffer from omitted variables (which then become part of the error term), (iii) there may be unobserved heterogeneity of how economic agents react to the stimuli, thus complicating the standard regression models. Angrist & Pischke is a wonderful review of these issues, and statisticians will learn a lot about how to do regression analysis from it. At the very least, statisticians should learn and understand instrumental variables regression.
More generally, economists want to make as few assumptions as possible about their models, so as to make sure that their findings do not hinge on something as ridiculous as multivariate normality. That's why GMM and empirical likelihood are hugely popular with economists, and never caught on in statistics (GMM was first described as minimum $\chi^2$ by Ferguson, and empirical likelihood, by Jon Rao, both famous statisticians, in the late 1960s). That's why economists run their regressions with "robust" standard errors, and statisticians, with the default OLS $s^2 (X'X)^{-1}$ standard errors.
There's been a lot of work in the time domain with regularly spaced processes -- that's how macroeconomic data are collected. The unique contributions include integrated and cointegrated processes and autoregressive conditional heteroskedasticity ( (G)ARCH ) methods. Being generally a micro person, I am less familiar with these.
Overall, economists tend to look for strong interpretation of coefficients in their models. Statisticians would take a logistic model as a way to get to the probability of the positive outcome, often as a simple predictive device, and may also note the GLM interpretation with nice exponential family properties that it possesses, as well as connections with discriminant analysis. Economists would think about the utility interpretation of the logit model, and be concerned that only $\beta/\sigma$ is identified in this model, and that heteroskedasticity can throw it off. (Statisticians will be wondering what $\sigma$ the economists are talking about, of course.) Of course, a utility that is linear in its inputs is a very funny thing from the perspective of Microeconomics 101, although some generalizations to semi-concave functions are probably done in Mas-Colell.
What economists generally tend to miss, but, IMHO, would benefit from, are aspects of multivariate analysis (including latent variable models as a way to deal with measurement errors and multiple proxies... statisticians are oblivious to these models, though, too), regression diagnostics (all these Cook's distances, Mallows' $C_p$, DFBETA, etc.), analysis of missing data (Manski's partial identification is surely fancy, but the mainstream MCAR/MAR/NMAR breakdown and multiple imputation are more useful), and survey statistics. A lot of other contributions from mainstream statistics have been entertained by econometrics and either adopted as a standard methodology, or passed by as a short-term fashion: ARMA models of the 1960s are probably better known in econometrics than in statistics, as some graduate programs in statistics may fail to offer a time series course these days; shrinkage estimators/ridge regression of the 1970s have come and gone; the bootstrap of the 1980s is a knee-jerk reaction for any complicated situation, although economists need to be better aware of the limitations of the bootstrap; the empirical likelihood of the 1990s has seen more methodology development from theoretical econometricians than from theoretical statisticians; computational Bayesian methods of the 2000s are being entertained in econometrics, but my feeling is that they are just too parametric, too heavily model-based, to be compatible with the robustness paradigm I mentioned earlier. (EDIT: that was the view on the scene in 2012; by 2020, Bayesian models have become standard in empirical macro, where people probably care a little less about robustness, and are making their presence heard in empirical micro as well. They are just too easy to run these days to pass by.) Whether economists will find any use for the statistical learning/bioinformatics or spatio-temporal stuff that is extremely hot in modern statistics is an open call.
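The contrast drawn above between economists' "robust" standard errors and the default OLS $s^2 (X'X)^{-1}$ ones can be made concrete by computing both by hand. The sketch below is an added illustration, not part of the answer: the robust variance is the plain HC0 sandwich, and the heteroskedastic data-generating process is an arbitrary assumption.
import numpy as np
rng = np.random.default_rng(1)
n = 2_000
x = rng.uniform(0, 4, size=n)
e = rng.normal(scale=0.5 + x, size=n)          # error variance grows with x (heteroskedasticity)
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                       # OLS coefficients
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
V_classical = s2 * XtX_inv                     # default OLS variance: s^2 (X'X)^{-1}
meat = (X * resid[:, None] ** 2).T @ X         # X' diag(e_i^2) X
V_robust = XtX_inv @ meat @ XtX_inv            # heteroskedasticity-robust HC0 sandwich
print("slope:", beta[1].round(3),
      "classical SE:", np.sqrt(V_classical[1, 1]).round(3),
      "robust SE:", np.sqrt(V_robust[1, 1]).round(3))
Packaged implementations (for example statsmodels' cov_type="HC1" or Stata's robust option) apply small-sample scalings of the same sandwich.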
2,564 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | It is best to explain in terms of linear regression, since it is the main tool of econometrics. In linear regression we have a model:
$$Y=X\beta+\varepsilon$$
The main difference between other statistical fields and econometrics is that $X$ is treated as fixed in other fields and is treated as a random variable in econometrics. The extra care you have to use to adjust for this difference produces different jargon and different methods. In general you can say that all the methods used in econometrics are the same methods as in other statistical fields, with adjustment for the randomness of explanatory variables. The notable exception is GMM, which is a uniquely econometric tool.
Another way of looking at the difference is that the data in other statistical fields can be considered an iid sample. In econometrics the data in a lot of cases is a sample from a stochastic process, of which iid is only a special case. Hence, again, different jargon.
Knowing the above is usually enough to easily jump from other statistical fields to econometrics. Since usually the model is given, it is not hard to figure out what is what. In my personal opinion the jargon difference between machine learning and classical statistics is much bigger than that between econometrics and classical statistics.
Note though that there are terms which have a convoluted meaning in statistics even without econometrics. The prime example is fixed and random effects. Wikipedia articles about these terms are a mess, mixing econometrics with statistics.
2,565 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | Of course, any broad statements are bound to be overly broad. But my experience has been that econometrics is concerned with causal relationships, while statistics has become more interested in prediction.
On the economics side, you can't avoid the "credibility revolution" literature (Mostly Harmless Econometrics, etc). Economists are focused on the impact of some treatment on some outcome with an eye towards policy evaluation and recommendation.
On the statistics side, you see the rise of data mining/machine learning with applications to online analytics and genetics being notable examples. Here, researchers are more interested in predicting behavior or relationships, rather than precisely explaining them; they look for patterns, rather than causes.
I would also mention that statisticians were traditionally more interested in experimental design, going back to the agricultural experiments in the 1930s.
2,566 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | I've noticed that, compared with what I'd call mainstream statistical science, econometricians seem reluctant to use graphs, either schematic or data-based. The coverage of regression, which is naturally even more central in econometrics than elsewhere, is a major case in point. Modern introductions to regression by statisticians emphasise throughout the value of plotting the data and plotting the results of regression, including diagnostic plots, whereas the treatment in econometrics texts is distinctly more formal. Leading texts in econometrics don't include many graphs and don't promote their value strongly.
It's difficult to analyse this without the risk of seeming undiplomatic or worse, but I'd guess at some combination of the following as contributory.
Desire for rigour. Econometricians tend to be suspicious or hostile to learning from the data and strongly prefer decisions to be based on formal tests (whenever they don't come out of a theorem). This is linked to a preference for models to be based on "theory" (although this can mean just that a predictor was mentioned previously in a paper by some economist not talking about data).
Publication practices. Papers for economics or econometrics journals are heavy with highly stylised tables of coefficients, standard errors, t-statistics and P-values. Adding graphs does not even seem to be thought about in many cases and if offered would possibly be suggested for cutting by reviewers. These practices have become embedded over a generation or more to the extent that they have become automatic, with rigid conventions over starring significance levels, etc.
Graphics for complex models. Tacitly graphs are ignored whenever it does not seem as if there is a graph that matches a complex model with many predictors, etc., etc. (which indeed is often difficult to decide).
Naturally, what I am suggesting is a difference of means, as it were, and I recognise much variability in both cases. But as a non-economist who has much occasion to observe what economists do, it seems to me that even introductory econometrics texts cover a great deal of formal territory in a rather stiff way. They are also correspondingly weak on what one author called "plain data analysis".
2,567 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | One subtle difference is that economists sometimes ascribe meaning to the error terms in models. This is especially true among "structural" economists who believe that you can estimate structural parameters that represent interest or individual heterogeneity.
A classic example of this is the probit. While statisticians are generally agnostic about what causes the error term, economists frequently view the error terms in regressions as representing heterogeneity of preferences. For the probit case, you might model a woman's decision to join the labor force. This will be determined by a variety of variables, but the error term will represent an unobserved degree to which individual preferences for work may vary.
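The latent-utility reading of the probit described above can be written out directly. A minimal sketch, added here as an illustration: the "error" is an unobserved taste for work, and only the sign of the latent index is observed. The covariates and coefficients are made-up assumptions, not estimates from any data set.
import numpy as np
rng = np.random.default_rng(3)
n = 10_000
educ = rng.normal(14, 2, size=n)             # years of education (made up)
kids = rng.poisson(1.0, size=n)              # number of young children (made up)
taste = rng.normal(size=n)                   # the probit "error": unobserved preference for work
u_star = -4.0 + 0.35 * educ - 0.8 * kids + taste   # latent utility of joining the labor force
works = (u_star > 0).astype(int)             # only the binary decision is observed
print("simulated labor-force participation rate:", works.mean().round(3))
Because only the zeros and ones are observed, the index coefficients are identified only relative to the scale of the unobserved taste, which is why economists worry about heteroskedasticity in these latent-variable models (a point an earlier answer also makes about the logit).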
2,568 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | Unlike most other quantitative disciplines, economics deals with things at the MARGIN. That is, marginal utility, marginal rate of substitution, etc. In calculus terms, economics deals with "first" (and higher-order) derivatives.
Many statistical disciplines deal with non-derivative quantities such as means and variances. Of course, you can go into the area of marginal and conditional probability distributions, but some of these applications also go into economics (e.g. "expected value").
2,569 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | Econometricians are almost exclusively interested in causal inference, whereas statisticians also use models for predicting outcomes. As a result, econometricians focus more on exogeneity (as others have mentioned). Reduced-form econometricians and structural econometricians get at these causal interpretations in different ways.
Reduced form econometricians frequently deal with exogeneity using instrumental variables techniques (while IV is used much less frequently by statisticians.)
Structural econometricians get causal interpretations of parameters by relying on an amount of theory that is rare in work by statisticians.
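The instrumental-variables technique mentioned above has a compact just-identified form, $\hat\beta_{IV}=(Z'X)^{-1}Z'y$. The sketch below is an added illustration on simulated data (all numbers are assumptions): the regressor is endogenous, so OLS is biased, while a valid instrument recovers the true coefficient.
import numpy as np
rng = np.random.default_rng(11)
n, beta_true = 200_000, 1.5
u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # instrument: shifts x, unrelated to u
x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor (correlated with the error)
y = beta_true * x + u + rng.normal(size=n)    # error shares u with x
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)   # just-identified IV: (Z'X)^{-1} Z'y
print(f"true slope {beta_true}; OLS slope {beta_ols[1]:.3f} (biased); IV slope {beta_iv[1]:.3f}")
The same construction also illustrates the earlier point about treating $X$ as random: the bias comes entirely from the correlation between the regressor and the error term, not from any failure of the usual formulas.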
2,570 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | It is not econometrics, it is context. If your likelihood function does not have a unique optimum, it will concern both a statistician and an econometrician. Now if you propose an assumption that comes from economic theory and restricts the parametrization so that the parameter is identified, it might be called econometrics, but the assumption could have come from any substantive field.
Exogeneity is a philosophical matter. See e.g. http://andrewgelman.com/2009/07/disputes_about/ for a comparison of different views, where economists typically understand it the way Rubin does.
So, in short, either adopt the jargon your teacher uses, or keep an open mind and read widely.
2,571 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | As a statistician I think of this in more general terms. We have biometrics and econometrics. These are both areas where statistics is used to solve problems. With biometrics we are dealing with biological/medical problems whereas econometrics deals with economics. Otherwise they would be the same except that different disciplines emphasize different statistical techniques. In biometrics survival analysis and contingency table analysis are heavily used. For econometrics time series is heavily used. Regression analysis is common to both.
Having seen the answers about terminology differences between econometrics and biostatistics, it seems that the actual question was mainly about terminology and I really only addressed the other two. The answers are so good that I can't add anything to them. I particularly liked StasK's answers. But as a biostatistician I do think that we use logit model and logistic model interchangeably. We do call log(p/[1-p]) the logit transformation.
2,572 | What are the major philosophical, methodological, and terminological differences between econometrics and other statistical fields? | I'd like to add one thing to give greater detail on something that was only mentioned in the other answers. Having taken classes in both areas, one of the main differences between the two is that statistics tends to stress the rigidity of the assumptions for regression models. The regression course I took in the statistics department spent a great deal of time pounding into our heads regression assumptions and taught techniques to check those assumptions. However, what the stats department course lacked was a focus on what to do if the assumptions are not met. In the real world, data does not always fit all required assumptions. Econometrics tends to be more real-world driven in the sense that economic data is typically only observable and cannot be generated by an experiment, which is where the idea that $X$ is fixed in statistics vs. not fixed in econometrics comes in. Consider macroeconomic data. It is very difficult to perform an experiment on an entire economy and observe the results. As such, economists are at the mercy of what data is observed and reported by the government, not what they can generate in a lab by changing the dose of a medication to see what happens to their dependent variable. That is why econometrics spends a significant amount of time focusing on what to do when the assumptions are not met. Is a model still useful if assumptions are not met, and can we correct for issues? An econometrician would say yes. Some examples of topics that are heavier in econometrics are how to deal with heteroskedasticity and autocorrelation. Econometrics also focuses more on time series.
Although there is a decent amount of overlap, I would be hesitant to say that the concepts are the same. Statistics is the foundation from which all ideas in econometrics come, but economists focus on finding ways to apply those ideas in the real world and have their own techniques and even some different models for solving difficult problems that may not be possible from a statistician's point of view.
2,573 | Does no correlation imply no causality? | does an absence of correlation imply absence of causality?
No. Any controlled system is a counterexample.
Without causal relationships control is clearly impossible, but successful control means - roughly speaking - that some quantity is being maintained constant, which implies it won't be correlated with anything, including whatever things are causing it to be constant.
So in this situation, concluding no causal relationship from lack of correlation would be a mistake.
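To make the control argument concrete, here is a small stylized simulation I am adding (it is not the example the original answer goes on to mention): the heater output causally determines the room temperature, but because the controller makes the heating exactly offset the outdoor disturbance, the two end up essentially uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
target = 20.0                                  # setpoint the controller maintains

outdoor = rng.normal(5.0, 3.0, size=n)         # disturbance the controller reacts to
heat = 0.2 * (target - outdoor)                # controller: heat more when it is cold outside

# Room temperature is causally driven by the heating input (coefficient 5),
# plus the outdoor temperature and a little noise the controller cannot see.
temp = outdoor + 5.0 * heat + rng.normal(0.0, 0.1, size=n)

print(np.corrcoef(heat, temp)[0, 1])     # ~0: cause and effect look uncorrelated
print(np.corrcoef(heat, outdoor)[0, 1])  # ~-1: the heater tracks the disturbance instead
```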
Here's a somewhat topical example.
2,574 | Does no correlation imply no causality? | No. Mainly because by correlation you most likely mean linear correlation. Two variables can be correlated nonlinearly, and may show no linear correlation. It's easy to construct an example like that, but I'll give you an example which is closer to your (narrower) question.
Let's look at the random variable $x$, and the non-random function $f(x)=x^2$, with which we create a random variable $y=f(x)$. The latter is clearly caused by the former variable, not just correlated with it. Let's draw a scatter plot:
Nice, clear nonlinear correlation picture, but in this case it's also direct causality. However, the linear correlation coefficient is non-significant, i.e. there's no linear correlation despite the obvious nonlinear correlation, and even causality:
>> x=randn(100,1);
>> y=x.^2;
>> scatter(x,y)
>> [rho,pval]=corr(x,y)
rho =
0.0140
pval =
0.8904
UPDATE:
@Kodiologist is right in the comment. It can be shown mathematically that the linear correlation coefficient for these two variables is indeed zero. In my example $x$ is a standard normal variable, so we have the following:
$$E[x]=0$$
$$E[x^2]=1$$
$$E[x\cdot x^2]=E[x^3]=0$$
Hence, the covariance (and subsequently the correlation) is zero:
$$Cov[x,x^2]=E[x \cdot x^2]-E[x]E[x^2]=0$$
We'd get the same result for any symmetrical distribution, such as uniform $U[-1,1]$.
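A quick numerical check of that last claim (my addition, not part of the original answer), using both a standard normal and a uniform $U[-1,1]$ input:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

for name, x in [("standard normal", rng.standard_normal(n)),
                ("uniform[-1, 1]", rng.uniform(-1.0, 1.0, n))]:
    y = x ** 2                           # y is completely determined (caused) by x
    r = np.corrcoef(x, y)[0, 1]          # yet the linear correlation is essentially zero
    print(f"{name}: corr(x, x^2) = {r:+.4f}")
```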
2,575 | Does no correlation imply no causality? | No. In particular, random variables can be dependent but uncorrelated.
Here's an example. Suppose I have a machine that takes a single input $x ∈ [-1, 1]$ and produces a random number $Y$, which is equal to either $x$ or $-x$ with equal probability. Clearly $x$ causes $Y$. Now let $X$ be a random variable uniformly distributed on $[-1, 1]$ and select $Y$ with $x = X$, inducing a joint distribution on $(X, Y)$. $X$ and $Y$ are dependent, since
$$
P(X < -\tfrac{1}{2})P(|Y| < \tfrac{1}{2}) = \tfrac{1}{4} \cdot \tfrac{1}{2} = \tfrac{1}{8} ≠ 0 = P(X < -\tfrac{1}{2},\; |Y| < \tfrac{1}{2}).
$$
However, the correlation of $X$ and $Y$ is 0, because
$$
\operatorname{Corr}(X, Y)
= \frac{\operatorname{Cov}(X, Y)}{σ_Xσ_Y}
= \frac{E[XY] - E[X]E[Y]}{σ_Xσ_Y}
= \frac{0 - 0\cdot0}{σ_Xσ_Y}
= 0.
$$
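A short Monte Carlo check of both claims (my addition; the simulated figures only approximate the exact values derived above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

X = rng.uniform(-1.0, 1.0, n)
sign = rng.choice([-1.0, 1.0], size=n)    # the "machine": returns x or -x with equal probability
Y = sign * X

# Dependence: the joint probability differs from the product of the marginals
p_joint = np.mean((X < -0.5) & (np.abs(Y) < 0.5))          # 0 in theory, since |Y| = |X|
p_product = np.mean(X < -0.5) * np.mean(np.abs(Y) < 0.5)   # about 1/4 * 1/2 = 1/8
print(p_joint, p_product)

# Uncorrelatedness: the sample correlation is close to 0
print(np.corrcoef(X, Y)[0, 1])
```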
2,576 | Does no correlation imply no causality? | Maybe looking at it from a computational perspective will help.
As a concrete example, take a pseudorandom number generator.
Is there a causal relationship between the seed you set and the $k^\text{th}$ output from the generator?
Is there any measurable correlation?
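One way to see this numerically (my own sketch, not part of the original answer): seed a generator with many consecutive seeds, record the $k$-th draw from each, and correlate it with the seed. The seed fully determines the output, yet the measured correlation is negligible.

```python
import numpy as np

k = 10                              # look at the k-th output of each seeded generator
seeds = np.arange(5_000)
kth_draws = np.array([np.random.default_rng(int(s)).random(k)[-1] for s in seeds])

# The seed causally determines kth_draws exactly (rerunning gives identical values),
# but no linear correlation with the seed is detectable.
print(np.corrcoef(seeds, kth_draws)[0, 1])
```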
2,577 | Does no correlation imply no causality? | The better answer to the question is that correlation is a statistical, mathematical, and/or physical relationship while causation is a metaphysical relationship. You can't LOGICALLY get from correlation (or non-correlation) to causation, without a (large) set of assumptions binding the metaphysics to the physics. (One example is that what two people might agree to be "a rational observer" is to a large degree arbitrary and probably ambiguous). If A pays B to do C which results in D, what is D's cause? There's simply no rational reason to choose C or B or A (or any of A's precursor events).
Control theory deals with systems in realms where they are under control. One way to get a dependent variable under control is to reduce its response, over the possible range of (controlled) variation of the independent variable, to statistical noise. For instance, we know air pressure correlates to health (just try breathing vacuum), but if we control air pressure to 1 +/- 0.001 atm, how likely is ANY variation of air pressure to affect health?
2,578 | Does no correlation imply no causality? | Yes, contrary to previous replies. I'm going to take the question as nontechnical, particularly the definition of "correlation". Maybe I'm using it too broadly, but see my second bullet. I hope it will be considered appropriate to discuss other answers here, because they illuminate different portions of the question. I'm drawing on Pearl's approach to causation, and in particular my take on it in some papers with Kevin Korb. Woodward probably has the clearest nontechnical account.
@conjugateprior says "any controlled system is a counterexample". Yes, to the stronger claim that noncorrelation observed in your experiment implies no causation. I'm going to assume the question is more general. Certainly one experiment might have failed to control for masking causes, or inappropriately controlled for common effects, and hidden the correlation. But if $x$ causes $y$, there will be a controlled experiment where that relationship is revealed. Almost all definitions or accounts of causation treat it as a difference that makes a difference. Therefore no causation without (some kind of) correlation. If there is a direct link $x \rightarrow y$ in a causal Bayesian network, it does not mean that $x$ always makes a difference to $y$, only that there is some experiment fixing all other causes of $y$ where wiggling $x$ wiggles $y$.
@aksakal has a great example of why linear correlation is insufficient. Agreed, but I want to be broad and nontechnical. If $y=x^2$, it's incomplete to tell a client that $y$ is uncorrelated with $x$. So I'll use correlation very broadly to mean a difference in $x$ that is reliably associated with a difference in $y$. It can be as nonlinear or nonparametric as you like. Threshold effects are fine ($x$ makes a difference to $y$, but only over a finite range, or only by being larger or smaller than a particular value, like voltage in digital circuits).
@Kodiologist creates an example where $y = \mathrm{Unif}(\{x,-x\})$, so $|y| = |x|$ but no linear correlation. But clearly there is a discoverable relationship, so correlated in the broad sense.
@Szabolcs uses random number generators to show an output stream constructed to appear uncorrelated. Like the digits of $\pi$, the stream appears random but is deterministic. I agree you're unlikely to find the relationship if given only the data, but it's there.
@Li Zhi notes you can't logically jump from correlation to causation. Yes, no causes in, no causes out. But the question begins from causation: does it imply correlation? In the air pressure example, we have a threshold effect. There is a range where air pressure is uncorrelated with health. Indeed plausibly where it has no causal effect on health. But there is a range where it does. That is sufficient. But probably better to note ranges where there is and is not an effect. If $A \rightarrow B \rightarrow C \rightarrow D$, then there is correlation all along the chain, because there is causation. Repeated observation (or experiment) can show that $A$ does not directly cause $D$ but the correlation is there because there is a causal story.
I do not know what @user2088176 had in mind, but I think if we take the question very generally, then the answer is yes. At least I think that's the answer required of the causal discovery literature and the interventionist account of causation. Causes are differences that make a difference. And that difference will be revealed, in some experiment, as persistent association.
2,579 | What is translation invariance in computer vision and convolutional neural network? | You're on the right track.
Invariance means that you can recognize an object as an object, even when its appearance varies in some way. This is generally a good thing, because it preserves the object's identity, category, etc. across changes in the specifics of the visual input, like relative positions of the viewer/camera and the object.
The image below contains many views of the same statue. You (and well-trained neural networks) can recognize that the same object appears in every picture, even though the actual pixel values are quite different.
Note that translation here has a specific meaning in vision, borrowed from geometry. It does not refer to any type of conversion, unlike say, a translation from French to English or between file formats. Instead, it means that each point/pixel in the image has been moved the same amount in the same direction. Alternately, you can think of the origin as having been shifted an equal amount in the opposite direction. For example, we can generate the 2nd and 3rd images in the first row from the first by moving each pixel 50 or 100 pixels to the right.
One can show that the convolution operator commutes with respect to translation. If you convolve $f$ with $g$, it doesn't matter if you translate the convolved output $f*g$, or if you translate $f$ or $g$ first, then convolve them. Wikipedia has a bit more.
One approach to translation-invariant object recognition is to take a "template" of the object and convolve it with every possible location of the object in the image. If you get a large response at a location, it suggests that an object resembling the template is located at that location. This approach is often called template-matching.
Invariance vs. Equivariance
Santanu_Pattanayak's answer (here) points out that there is a difference between translation invariance and translation equivariance. Translation invariance means that the system produces exactly the same response, regardless of how its input is shifted. For example, a face-detector might report "FACE FOUND" for all three images in the top row. Equivariance means that the system works equally well across positions, but its response shifts with the position of the target. For example, a heat map of "face-iness" would have similar bumps at the left, center, and right when it processes the first row of images.
This is sometimes an important distinction, but many people call both phenomena "invariance", especially since it is usually trivial to convert an equivariant response into an invariant one (just disregard all the position information).
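A tiny one-dimensional NumPy sketch of the equivariance-vs-invariance distinction described above (my addition; the signal and "template" are made up): cross-correlating a shifted input shifts the feature map by the same amount (equivariance), while taking the global maximum of that map discards position and gives an invariant response.

```python
import numpy as np

template = np.array([1.0, 2.0, 1.0])          # a tiny feature detector
signal = np.zeros(20)
signal[5:8] = [1.0, 2.0, 1.0]                 # the pattern sits at position 5

shifted = np.roll(signal, 6)                  # the same pattern translated by 6 positions

fmap = np.correlate(signal, template, mode="valid")
fmap_shifted = np.correlate(shifted, template, mode="valid")

print(np.argmax(fmap), np.argmax(fmap_shifted))  # peak moves from 5 to 11: equivariance
print(fmap.max(), fmap_shifted.max())            # global max pooling: identical, i.e. invariance
```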
2,580 | What is translation invariance in computer vision and convolutional neural network? | I think there is some confusion about what is meant by translational invariance. Convolution provides translation equivariance, meaning that if an object in an image is at area A and, through convolution, a feature is detected at the output at area B, then the same feature would be detected when the object in the image is translated to A'. The position of the output feature would also be translated to a new area B', based on the filter kernel size. This is called translational equivariance, not translational invariance.
2,581 | What is translation invariance in computer vision and convolutional neural network? | @Santanu
While your answer is partly correct, it may lead to confusion. It is true that convolutional layers themselves, or their output feature maps, are translation equivariant. What the max-pooling layers do is provide some translation invariance, as @Matt points out.
That is to say, the equivariance in the feature maps, combined with the max-pooling layers, leads to translation invariance in the output layer (softmax) of the network. The first set of images above would still produce a prediction called "statue" even though it has been translated to the left or right. The fact that the prediction remains "statue" (i.e. the same) despite translating the input means the network has achieved some translation invariance.
2,582 | What is translation invariance in computer vision and convolutional neural network? | The answer is actually trickier than it appears at first. Generally, translational invariance means that you would recognize the object regardless of where it appears in the frame.
In the next picture, in frames A and B you would recognize the word "stressed" if your vision supports translation invariance of words.
I highlighted the term words because if your invariance is only supported on letters, then frame C will also be equal to frames A and B: it has exactly the same letters.
In practical terms, if you trained your CNN on letters, then things like MAX POOL will help to achieve translation invariance on letters, but may not necessarily lead to translation invariance on words. Pooling pulls out the feature (that's extracted by a corresponding layer) without relation to the location of other features, so it will lose the knowledge of the relative position of the letters D and T, and the words STRESSED and DESSERTS will look the same.
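The point about pooling discarding relative position can be made concrete with a toy sketch (my addition, not the answerer's code): one-hot encode each word as a (position x letter) array and apply global max pooling over the position axis; the two anagrams become indistinguishable.

```python
import numpy as np

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def global_max_pool_letters(word):
    # one-hot encoding: rows are positions in the word, columns are letters
    onehot = np.zeros((len(word), len(ALPHABET)))
    for pos, ch in enumerate(word):
        onehot[pos, ALPHABET.index(ch)] = 1.0
    # global max pooling over positions keeps only "is this letter present anywhere?"
    return onehot.max(axis=0)

a = global_max_pool_letters("STRESSED")
b = global_max_pool_letters("DESSERTS")
print(np.array_equal(a, b))   # True: all positional information is gone
```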
The term itself is probably from physics, where translational symmetry means that the equations stay the same regardless of translation in space.
2,583 | What is translation invariance in computer vision and convolutional neural network? | I will answer you briefly:
Translation invariance means that the algorithm will recognize the object even if its position is shifted from any place in the picture to any other place. Let's say we have a cat in a photo with a grass background: if you shift the cat to the corner of the image or any other place, the model still recognizes it; it is not affected by the translation (shifting).
2,584 | What are good RMSE values? [duplicate] | I think you have two different types of questions there. One thing is what you ask in the title: "What are good RMSE values?" and another thing is how to compare models with different datasets using RMSE.
For the first, i.e., the question in the title, it is important to recall that RMSE has the same unit as the dependent variable (DV). This means that there is no absolute good or bad threshold; however, you can define one based on your DV.
For data ranging from 0 to 1000, an RMSE of 0.7 is small, but if the range goes from 0 to 1, it is not that small anymore. However, although the smaller the RMSE the better, you can make theoretical claims about levels of the RMSE by knowing what is expected from your DV in your field of research.
Keep in mind that you can always normalize the RMSE.
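For instance, two small helper functions (my own sketch) showing the usual normalisations, by the range or by the standard deviation of the observed DV:

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred, by="range"):
    """RMSE divided by the range (max - min) or the standard deviation of y_true."""
    y_true = np.asarray(y_true, float)
    scale = np.ptp(y_true) if by == "range" else np.std(y_true)
    return rmse(y_true, y_pred) / scale

y_obs = [12.0, 15.5, 9.8, 20.1, 17.3]
y_hat = [11.2, 16.0, 10.5, 19.0, 18.1]
print(rmse(y_obs, y_hat), nrmse(y_obs, y_hat), nrmse(y_obs, y_hat, by="sd"))
```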
For the second question, i.e., about comparing two models with different datasets by using RMSE, you may do that provided that the DV is the same in both models. Here, the smaller the better, but remember that small differences between those RMSEs may not be relevant or even significant.
2,585 | What are good RMSE values? [duplicate] | The RMSE for your training and your test sets should be very similar if you have built a good model. If the RMSE for the test set is much higher than that of the training set, it is likely that you've badly overfit the data, i.e. you've created a model that tests well in sample, but has little predictive value when tested out of sample.
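A minimal sketch of that train/test comparison with scikit-learn (my addition; the data and the deliberately over-flexible model are just for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeRegressor().fit(X_tr, y_tr)   # unpruned tree: prone to overfitting
rmse_train = np.sqrt(mean_squared_error(y_tr, model.predict(X_tr)))
rmse_test = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(rmse_train, rmse_test)   # near zero in sample, clearly larger out of sample
```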
2,586 | What are good RMSE values? [duplicate] | Even though this is an old thread, I am hoping that my answer helps anyone who is looking for an answer to the same question.
When we talk about time series analysis, most of the time we mean the study of ARIMA models (and its variants). Hence I will start by assuming the same in my answer.
First of all, as the earlier commenter R. Astur explains, there is no such thing as a good RMSE, because it is scale-dependent, i.e. dependent on your dependent variable. Hence one cannot claim a universal number as a good RMSE.
Even if you go for scale-free measures of fit such as MAPE or MASE, you still cannot claim a threshold for being good. This is just the wrong approach. You can't say "My MAPE is such and such, hence my fit/forecast is good". How I believe you should approach your problem is as follows. First find a couple of "best possible" models, using a logic such as looping over the arima() function outputs in R, and select the best n estimated models based on the lowest RMSE, MAPE or MASE. Since we are talking about one specific series, and not trying to make a universal claim, you can pick any of these measures. Of course you have to do the residual diagnostics and make sure your best models produce white-noise residuals with well-behaved ACF plots. Now that you have a few good candidates, test the out-of-sample MAPE of each model, and pick the one with the best out-of-sample MAPE.
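A rough Python sketch of that workflow (my addition; the original answer is phrased in terms of R's arima(), so statsmodels is only a stand-in here, and the series and candidate orders are invented):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.2, 1.0, 200))        # made-up series, purely illustrative
y_train, y_test = y[:180], y[180:]

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

results = []
for order in [(1, 1, 0), (0, 1, 1), (1, 1, 1), (2, 1, 1)]:   # small candidate grid
    fit = ARIMA(y_train, order=order).fit()
    fc = fit.forecast(steps=len(y_test))
    # in practice you would also inspect fit.resid and its ACF for white noise
    results.append((order, fit.aic, mape(y_test, fc)))

for order, aic, out_mape in sorted(results, key=lambda r: r[2]):
    print(order, round(aic, 1), round(out_mape, 2))
```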
The resulting model is the best model, in the sense that it:
1. Gives you a good in-sample fit, associated with low error measures and WN residuals.
2. Avoids overfitting by giving you the best out-of-sample forecast accuracy.
Now, one crucial point is that it is possible to estimate a time series with an ARIMA (or its variants) by including enough lags of the dependent variable or the residual term. However, that fitted "best" model may just over-fit, and give you a dramatically low out-of-sample accuracy, i.e. satisfy my bullet point 1 but not 2.
In that case what you need to do is:
Add an exogenous explanatory variable and go for ARIMAX,
Add an endogenous explanatory variable and go for VAR/VECM,
Or change your approach completely to non-linear machine learning models, and fit them to your time series using a Cross-Validation approach. Fit a neural network or random forest to your time series, for example. And repeat the in-sample and out-of-sample performance comparison. This is a trending approach to time series, and the papers I've seen are applauding the machine learning models for their superior (out-of-sample) forecasting performance.
Hope this helps.
2,587 | What are good RMSE values? [duplicate] | You can't fix a particular threshold value for RMSE. We have to compare the RMSE of both the test and train datasets. If your model is good, then the RMSE on the test data is quite similar to that on the train dataset. Otherwise, one of the conditions below is met.
RMSE of test > RMSE of train => OVER FITTING of the data.
RMSE of test < RMSE of train => UNDER FITTING of the data.
2,588 | What are good RMSE values? [duplicate] | Personally I like the RMSE / standard deviation approach. Range is misleading: you could have a skewed distribution or outliers, whereas standard deviation takes care of this. Similarly, RMSE / mean is totally wrong - what if your mean is zero? However, this does not help to tell you whether you have a good model or not. This challenge is similar to working with binary classifications and asking "is my Gini of 80% good?". That depends. Maybe by doing some additional tuning or feature engineering, you could have built a better model that gave you a Gini of 90% (and still validates against the test sample). It also depends on the use case and industry. If you were developing a behaviour credit score, then a Gini of 80% is "pretty good". But if you are developing a new application credit score (which inherently has access to less data) then a Gini of 60% is pretty good. I guess when it comes to whether your model's RMSE / std dev "score" is good or not, you need to develop your own intuition by applying this and learning from many different use cases.
2,589 | Choosing a clustering method | There is no definitive answer to your question, as even within the same method the choice of the distance used to represent individuals' (dis)similarity may yield different results, e.g. when using euclidean vs. squared euclidean in hierarchical clustering. As another example, for binary data, you can choose the Jaccard index as a measure of similarity and proceed with classical hierarchical clustering; but there are alternative approaches, like the Mona (Monothetic Analysis) algorithm, which only considers one variable at a time, while other hierarchical approaches (e.g. classical HC, Agnes, Diana) use all variables at each step. The k-means approach has been extended in various ways, including partitioning around medoids (PAM) or representative objects rather than centroids (Kaufman and Rousseuw, 1990), or fuzzy clustering (Chung and Lee, 1992). For instance, the main difference between k-means and PAM is that PAM minimizes a sum of dissimilarities rather than a sum of squared euclidean distances; fuzzy clustering allows one to consider "partial membership" (we associate with each observation a weight reflecting class membership). And for methods relying on a probabilistic framework, or so-called model-based clustering (or latent profile analysis for the psychometricians), there is a great package: Mclust. So, definitely, you need to consider how to define the resemblance of individuals as well as the method for linking individuals together (recursive or iterative clustering, strict or fuzzy class membership, unsupervised or semi-supervised approach, etc.).
Usually, to assess cluster stability, it is interesting to compare several algorithms which basically "share" some similarity (e.g. k-means and hierarchical clustering, because euclidean distance works for both). For assessing the concordance between two cluster solutions, some pointers were suggested in response to this question, Where to cut a dendrogram? (see also the cross-references for other links on this website). If you are using R, you will see that several packages are already available in the Task View on Cluster Analysis, and several packages include vignettes that explain specific methods or provide case studies.
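As an aside, the concordance check between two cluster solutions is easy to sketch in code; the original answer works in R, so the scikit-learn version below is only an illustrative stand-in (the toy data and settings are invented):

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)      # toy data

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
hc = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)  # Ward, euclidean

# Agreement between the two partitions (1 means identical up to relabelling)
print(adjusted_rand_score(km, hc))
```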
Cluster Analysis: Basic Concepts and Algorithms provides a good overview of several techniques used in Cluster Analysis.
As for a good recent book with R illustrations, I would recommend chapter 12 of Izenman, Modern Multivariate Statistical Techniques (Springer, 2008). A couple of other standard references are given below:
Cormack, R., 1971. A review of classification. Journal of the Royal Statistical Society, A 134, 321–367.
Everitt, B., 1974. Cluster analysis. London: Heinemann Educ. Books.
Gordon, A., 1987. A review of hierarchical classification. Journal of the Royal Statistical Society, A 150, 119–137.
Gordon, A., 1999. Classification, 2nd Edition. Chapman and Hall.
Kaufman, L., Rousseuw, P., 1990. Finding Groups in Data: An Introduction to Cluster Analysis. New York, Wiley.
2,590 | Choosing a clustering method | A quote from Hastie, Tibshirani and Friedman, Elements of Statistical Learning, p. 506:
"An appropriate dissimilarity measure is far more important in obtaining success with clustering than choice of clustering algorithm. This aspect of the problem ... depends on domain specific knowledge and is less amenable to general research."
(That said, wouldn't it be nice if (wibni) there were a site where students could try a few algorithms and metrics on a few small standard datasets?)
2,591 | Choosing a clustering method | You can't know in advance which clustering algorithm will work best, but there are some clues: for example, if you want to cluster images there are certain algorithms you should try first, like Fuzzy ART, and if you want to group faces you should start with global geometric clustering for images (GGCI).
Anyway, this does not guarantee the best result, so what I would do is use a program that allows you to methodically run different clustering algorithms, such as Weka, RapidMiner or even R (which is non-visual). There I would set the program to launch all the different clustering algorithms I can, with all the possible different distances, and, if they need parameters, experiment with a variety of different parameter values (and, if I do not know the number of clusters, run each one with a range of cluster counts). Once you have set up the experiment, leave it running, but remember to store the results of each clustering run somewhere.
Then compare the results in order to obtain the best resulting clustering. This is tricky because there are several metrics you can compare and not all of them are provided by every algorithm. For example, fuzzy clustering algorithms have different metrics than non-fuzzy ones, but they can still be compared by treating the resulting fuzzy groups as crisp (non-fuzzy) ones.
For the comparison I would stick to the classic metrics listed below (a small R sketch computing a few of them for a k-means fit follows the list):
• SSE: sum of the squared errors of the items in each cluster.
• Inter-cluster distance: sum of the squared distances between cluster centroids.
• Intra-cluster distance for each cluster: sum of the squared distances from the items of a cluster to its centroid.
• Maximum radius: largest distance from an instance to its cluster centroid.
• Average radius: sum over clusters of the largest distance from an instance to its cluster centroid, divided by the number of clusters.
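As a small illustration (again using iris measurements purely as stand-in data), most of these quantities fall out of a k-means fit in R:

x  <- scale(iris[, 1:4])
km <- kmeans(x, centers = 3, nstart = 25)

sse   <- km$tot.withinss              # SSE: total within-cluster sum of squares
inter <- sum(dist(km$centers)^2)      # inter-cluster: squared distances between centroids
intra <- km$withinss                  # intra-cluster sum of squares, one value per cluster

d_to_centre <- sqrt(rowSums((x - km$centers[km$cluster, ])^2))  # distance of each point to its own centroid
max_radius  <- tapply(d_to_centre, km$cluster, max)             # largest distance per cluster
avg_radius  <- sum(max_radius) / length(max_radius)             # as defined in the list above

list(SSE = sse, inter = inter, intra = intra,
     max_radius = max_radius, avg_radius = avg_radius)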
2,592 | Choosing a clustering method | Choosing the right distance is not an elementary task. When we carry out a cluster analysis on a data set, different results can appear when using different distances, so it is very important to be careful about which distance to choose, because we can end up with a nice-looking artefact that captures the variability well but actually makes no sense for our problem.
The Euclidean distance is appropriate when we have continuous numerical variables and want to reflect absolute distances. This distance takes every variable into account and does not remove redundancies, so if we had three variables that explain the same thing (are correlated), we would weight this effect by three. Moreover, this distance is not scale invariant, so generally the data have to be scaled before using it.
Example (ecology): We have observations from many localities, for which experts have sampled microbiological, physical and chemical factors. We want to find patterns in ecosystems. These factors are highly correlated, but we know all of them are relevant, so we do not want to remove these redundancies. We use the Euclidean distance on scaled data to avoid the effect of units.
The Mahalanobis distance is appropriate when we have continuous numerical variables and want to reflect absolute distances, but also want to remove redundancies. If we have repeated variables, their repetitious effect will disappear.
The Hellinger, Species Profile and Chord family of distances is appropriate when we want to emphasise differences between variables, that is, when we want to differentiate profiles. These distances weight by the total quantity of each observation, in such a way that distances are small when, variable by variable, the individuals are more similar, even if their absolute magnitudes are very different. Watch out! These distances reflect the difference between profiles very well but lose the magnitude effect. They can be very useful when we have different sample sizes.
Example (ecology): We want to study the fauna of many sites and we have a data matrix from a gastropod inventory (sampling locations in rows and species names in columns). The matrix is characterized by many zeros and very different magnitudes, because some localities have some species and others have other species. We could use the Hellinger distance.
Bray-Curtis is quite similar, but it's more appropriate when we want to differentiate profiles and also take relative magnitudes into account.
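A minimal sketch of how these distances might be computed in R; the Hellinger and Bray-Curtis parts assume the vegan package is installed, and the community matrix comm is made up purely for illustration:

library(vegan)                        # assumed available, for decostand() and vegdist()

comm <- matrix(c(10, 0,  3,           # hypothetical 4 sites x 3 species abundances
                  2, 8,  0,
                  0, 1, 12,
                  5, 5,  5), nrow = 4, byrow = TRUE)

d_euc  <- dist(scale(comm))                            # euclidean distance on scaled data
S      <- cov(comm)
d_mah  <- dist(comm %*% solve(chol(S)))                # pairwise Mahalanobis via whitening
d_hel  <- dist(decostand(comm, method = "hellinger"))  # Hellinger distance
d_bray <- vegdist(comm, method = "bray")               # Bray-Curtis dissimilarity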
2,593 | Choosing a clustering method | Here is a summary of several clustering algorithms that can help answer the question "which clustering technique should I use?"
There is no objectively "correct" clustering algorithm (Ref).
Clustering algorithms can be categorized based on their "cluster model". An algorithm designed for a particular kind of model will generally fail on a different kind of model. For example, k-means cannot find non-convex clusters; it can only find roughly circular clusters.
Therefore, understanding these "cluster models" becomes the key to choosing among the various clustering algorithms / methods. Typical cluster models include the following (a brief R sketch trying a few of them appears after the list):
[1] Connectivity models: Builds models based on distance connectivity. Eg hierarchical clustering. Used when we need different partitioning based on tree cut height. R function: hclust in stats package.
[2] Centroid models: Builds models by representing each cluster by a single mean vector. Used when we need crisp partitioning (as opposed to fuzzy clustering described later). R function: kmeans in stats package.
[3] Distribution models: Builds models based on statistical distributions such as multivariate normal distributions used by the expectation-maximization algorithm. Used when cluster shapes can be arbitrary unlike k-means which assumes circular clusters. R function: emcluster in the emcluster package.
[4] Density models: Builds models based on clusters as connected dense regions in the data space. Eg DBSCAN and OPTICS. Used when cluster shapes can be arbitrary, unlike k-means which assumes circular clusters. R function dbscan in package dbscan.
[5] Subspace models: Builds models based on both cluster members and relevant attributes. Eg biclustering (also known as co-clustering or two-mode-clustering). Used when simultaneous row and column clustering is needed. R function biclust in biclust package.
[6] Group models: Builds models based on the grouping information. Eg collaborative filtering (recommender algorithm). R function Recommender in recommenderlab package.
[7] Graph-based models: Builds models based on clique. Community structure detection algorithms try to find dense subgraphs in directed or undirected graphs. Eg R function cluster_walktrap in igraph package.
[8] Kohonen Self-Organizing Feature Map: Builds models based on neural network. R function som in the kohonen package.
[9] Spectral Clustering: Builds models based on non-convex cluster structure, or when a measure of the center is not a suitable description of the complete cluster. R function specc in the kernlab package.
[10] Subspace clustering: For high-dimensional data, distance functions can be problematic, so cluster models include the relevant attributes for each cluster. Eg the hddc function in the R package HDclassif.
[11] Sequence clustering: Group sequences that are related. rBlast package.
[12] Affinity propagation: Builds models based on message passing between data points. It does not require the number of clusters to be determined before running the algorithm. It is better than k-means for certain computer vision and computational biology tasks, e.g. clustering pictures of human faces and identifying regulated transcripts (Ref). R package APCluster.
[13] Stream clustering: Builds models based on data that arrive continuously such as telephone records, financial transactions etc. Eg R package BIRCH [https://cran.r-project.org/src/contrib/Archive/birch/]
[14] Document clustering (or text clustering): Builds models based on SVD. It has been used in topic extraction. Eg Carrot [http://search.carrot2.org] is an open source search results clustering engine which can cluster documents into thematic categories.
[15] Latent class model: It relates a set of observed multivariate variables to a set of latent variables. LCA may be used in collaborative filtering. R function Recommender in recommenderlab package has collaborative filtering functionality.
[16] Biclustering: Used to simultaneously cluster rows and columns of two-mode data. Eg R function biclust in package biclust.
[17] Soft clustering (fuzzy clustering): Each object belongs to each cluster to a certain degree. Eg R Fclust function in the fclust package.
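As a minimal sketch of a few of these model families in R (assuming the mclust and dbscan packages are installed; iris is used purely as stand-in data, and the dbscan eps value is an arbitrary illustration):

library(mclust)    # distribution (model-based) clustering
library(dbscan)    # density-based clustering

x <- scale(iris[, 1:4])

hc <- cutree(hclust(dist(x)), k = 3)            # [1] connectivity model
km <- kmeans(x, centers = 3)$cluster            # [2] centroid model
mc <- Mclust(x, G = 3)$classification           # [3] distribution model
db <- dbscan(x, eps = 0.8, minPts = 5)$cluster  # [4] density model (0 = noise points)

table(kmeans = km, mclust = mc)                 # compare two of the partitions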
2,594 | Choosing a clustering method | As far as I'm concerned, if you want a safe choice, spectral clustering methods have been achieving the highest accuracy rates in recent years - at least in image clustering.
As for the distance metric, it depends a lot on how your data is organized. The safe choice is the simple euclidean distance but if you know your data contains manifolds, you should map the points via kernel methods.
PS: they're all related to numerical values, not categorical. I'm not sure how one would go about clustering categorical data.
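A minimal sketch of spectral clustering in R, assuming the kernlab package (iris is again just stand-in data):

library(kernlab)

x  <- as.matrix(scale(iris[, 1:4]))
sc <- specc(x, centers = 3)   # spectral clustering with kernlab's default kernel settings
sc                            # printing the object shows each point's cluster assignment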
2,595 | Rules of thumb for minimum sample size for multiple regression | I'm not a fan of simple formulas for generating minimum sample sizes.
At the very least, any formula should consider effect size and the questions of interest.
And the difference between either side of a cut-off is minimal.
Sample size as optimisation problem
Bigger samples are better.
Sample size is often determined by pragmatic considerations.
Sample size should be seen as one consideration in an optimisation problem where the cost in time, money, effort, and so on of obtaining additional participants is weighed against the benefits of having additional participants.
A Rough Rule of Thumb
In terms of very rough rules of thumb within the typical context of observational psychological studies involving things like ability tests, attitude scales, personality measures, and so forth, I sometimes think of:
n=100 as adequate
n=200 as good
n=400+ as great
These rules of thumb are grounded in the 95% confidence intervals associated with correlations at these respective levels and the degree of precision that I'd like to theoretically understand the relations of interest.
However, it is only a heuristic.
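As a rough check on where those numbers come from, one can look at the width of a 95% confidence interval for a correlation at each sample size using the Fisher z transformation (the r = .30 used here is just an illustrative effect size):

ci_width <- function(r, n) {
  z  <- atanh(r)                          # Fisher z transform of the correlation
  se <- 1 / sqrt(n - 3)                   # approximate standard error on the z scale
  ci <- tanh(z + c(-1.96, 1.96) * se)     # back-transformed 95% interval
  diff(ci)
}

sapply(c(100, 200, 400), function(n) ci_width(r = .30, n = n))
# the interval width roughly halves going from n = 100 to n = 400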
G Power 3
I typically use G*Power 3 to calculate power based on various assumptions (see my post).
See this tutorial from the G Power 3 site specific to multiple regression
The Power Primer is also a useful tool for applied researchers.
Multiple Regression tests multiple hypotheses
Any power analysis question requires consideration of effect sizes.
Power analysis for multiple regression is made more complicated by the fact that there are multiple effects including the overall r-squared and one for each individual coefficient.
Furthermore, most studies include more than one multiple regression.
For me, this is further reason to rely more on general heuristics, and thinking about the minimal effect size that you want to detect.
In relation to multiple regression, I'll often think more in terms of the degree of precision in estimating the underlying correlation matrix.
Accuracy in Parameter Estimation
I also like Ken Kelley and colleagues' discussion of Accuracy in Parameter Estimation.
See Ken Kelley's website for publications
As mentioned by @Dmitrij, Kelley and Maxwell (2003) (free PDF) is a useful article.
Ken Kelley developed the MBESS package in R to perform analyses relating sample size to precision in parameter estimation.
2,596 | Rules of thumb for minimum sample size for multiple regression | I prefer not to think of this as a power issue, but rather to ask the question "how large should $n$ be so that the apparent $R^2$ can be trusted?" One way to approach that is to consider the ratio or difference between $R^2$ and $R_{adj}^{2}$, the latter being the adjusted $R^2$ given by $1 - (1 - R^{2})\frac{n-1}{n-p-1}$ and forming a less biased estimate of the "true" $R^2$.
Some R code can be used to solve for the multiple of $p$ that $n-1$ should be so that $R_{adj}^{2}$ is only a factor $k$ smaller than $R^2$, or is smaller than $R^2$ by only $k$. Writing $n-1 = c\,p$ (so that $n-p-1=(c-1)p$), the relative condition $R_{adj}^{2}=kR^2$ gives $c = \frac{1/R^2 - k}{1 - k}$, and the absolute condition $R_{adj}^{2}=R^2-k$ gives $c = \frac{1 - R^2 + k}{k}$; these are the two expressions computed below.
require(Hmisc)   # provides labcurve() for drawing and labeling the curves
dop <- function(k, type) {
  z  <- list()
  R2 <- seq(.01, .99, by=.01)
  for(a in k) z[[as.character(a)]] <-
    list(R2=R2,
         # multiple of p that n-1 must be:
         #   'relative': R2adj = a*R2  ->  (1/R2 - a) / (1 - a)
         #   'absolute': R2adj = R2-a  ->  (1 - R2 + a) / a
         pfact=if(type=='relative') ((1/R2) - a) / (1 - a) else
               (1 - R2 + a) / a)
  labcurve(z, pl=TRUE, ylim=c(0,100), adj=0, offset=3,
           xlab=expression(R^2), ylab=expression(paste('Multiple of ',p)))
}
par(mfrow=c(1,2))                        # left panel: relative criteria; right panel: absolute
dop(c(.9, .95, .975), 'relative')
dop(c(.075, .05, .04, .025, .02, .01), 'absolute')
Legend: Sample size (as a multiple of $p$) needed so that the drop from $R^{2}$ to $R^{2}_{adj}$ is only the indicated relative factor (left panel, 3 factors) or absolute difference (right panel, 6 decrements).
If anyone has seen this already in print please let me know.
2,597 | Rules of thumb for minimum sample size for multiple regression | (+1) for what is, in my opinion, a crucial question.
In macro-econometrics you usually have much smaller sample sizes than in micro, financial or sociological experiments, and a researcher feels reasonably comfortable if one can provide at least feasible estimates. My personal lowest-possible rule of thumb is $4\cdot m$ ($4$ degrees of freedom per estimated parameter). In other applied fields you are usually luckier with data (if it is not too expensive, just collect more data points) and you may ask what the optimal size of a sample is (not just the minimum value). The latter issue comes from the fact that more low-quality (noisy) data is not better than a smaller sample of high-quality observations.
Most of the sample sizes are linked to the power of tests for the hypothesis you are going to test after you fit the multiple regression model.
There is a nice calculator that could be useful for multiple regression models, with some formulas behind the scenes. I think such an a priori calculator could easily be applied by a non-statistician.
The K. Kelley and S. E. Maxwell article may well be useful for answering the other questions, but I need more time to study the problem first.
2,598 | Rules of thumb for minimum sample size for multiple regression | Your rule of thumb is not particularly good if $m$ is very large. Take $m=500$: your rule says it's OK to fit $500$ variables with only $600$ observations. I hardly think so!
For multiple regression, you have some theory to suggest a minimum sample size. If you are going to be using ordinary least squares, then one of the assumptions you require is that the "true residuals" be independent. Now when you fit a least squares model to $m$ variables, you are imposing $m+1$ linear constraints on your empirical residuals (given by the least squares or "normal" equations). This implies that the empirical residuals are not independent - once we know $n-m-1$ of them, the remaining $m+1$ can be deduced, where $n$ is the sample size. So we have a violation of this assumption. Now the order of the dependence is $O\left(\frac{m+1}{n}\right)$. Hence if you choose $n=k(m+1)$ for some number $k$, then the order is given by $O\left(\frac{1}{k}\right)$. So by choosing $k$, you are choosing how much dependence you are willing to tolerate. I choose $k$ in much the same way you do for applying the "central limit theorem" - $10-20$ is good, and we have the "stats counting" rule $30\equiv\infty$ (i.e. the statistician's counting system is $1,2,\dots,26,27,28,29,\infty$).
2,599 | Rules of thumb for minimum sample size for multiple regression | In Psychology:
Green (1991) indicates that $N > 50 + 8m$ (where m is the number of independent variables) is needed for testing multiple correlation and $N > 104 + m$ for testing individual predictors.
Other rules that can be used are...
Harris (1985) says that the number of participants should exceed the number of predictors by at least $50$.
Van Voorhis & Morgan (2007) (pdf) suggest that, with 6 or more predictors, the absolute minimum number of participants should be $10$, though it is better to go for $30$ participants per variable.
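A trivial R sketch that turns these rules of thumb into numbers for a given count of predictors (m = 6 here is just an example):

m <- 6   # number of predictors (illustrative)

c(green_overall    = 50 + 8 * m,   # Green (1991), testing the multiple correlation
  green_individual = 104 + m,      # Green (1991), testing individual predictors
  harris           = m + 50)       # Harris (1985), N should exceed predictors by at least 50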
2,600 | Rules of thumb for minimum sample size for multiple regression | I agree that power calculators are useful, especially to see the effect of different factors on the power. In that sense, calculators that include more input information are much better. For linear regression, I like the regression calculator here which includes factors such as error in Xs, correlation between Xs, and more.