3,901
What is the intuition behind conditional Gaussian distributions?
Synopsis

Every statement in the question can be understood as a property of ellipses. The only property particular to the bivariate Normal distribution that is needed is the fact that in a standard bivariate Normal distribution of $X,Y$--for which $X$ and $Y$ are uncorrelated--the conditional variance of $Y$ does not depend on $X$. (This in turn is an immediate consequence of the fact that lack of correlation implies independence for jointly Normal variables.) The following analysis shows precisely what property of ellipses is involved and derives all the equations of the question using elementary ideas and the simplest possible arithmetic, in a way intended to be easily remembered.

Circularly symmetric distributions

The distribution of the question is a member of the family of bivariate Normal distributions. They are all derived from a basic member, the standard bivariate Normal, which describes two uncorrelated standard Normal distributions (forming its two coordinates). The left side is a relief plot of the standard bivariate normal density. The right side shows the same in pseudo-3D, with the front part sliced away. This is an example of a circularly symmetric distribution: the density varies with distance from a central point but not with the direction away from that point. Thus, the contours of its graph (at the right) are circles.

Most other bivariate Normal distributions are not circularly symmetric, however: their cross-sections are ellipses. These ellipses model the characteristic shape of many bivariate point clouds. These are portraits of the bivariate Normal distribution with covariance matrix $\Sigma = \left(\begin{array}{cc} 1 & -\frac{2}{3} \\ -\frac{2}{3} & 1 \\\end{array}\right).$ It is a model for data with correlation coefficient $-2/3$.

How to Create Ellipses

An ellipse--according to its oldest definition--is a conic section, which is a circle distorted by a projection onto another plane. By considering the nature of projection, just as visual artists do, we may decompose it into a sequence of distortions that are easy to understand and calculate with.

First, stretch (or, if necessary, squeeze) the circle along what will become the long axis of the ellipse until it is the correct length. Next, squeeze (or stretch) this ellipse along its minor axis. Third, rotate it around its center into its final orientation. Finally, shift it to the desired location.

These are all affine transformations. (In fact, the first three are linear transformations; the final shift makes it affine.) Because a composition of affine transformations is (by definition) still affine, the net distortion from the circle to the final ellipse is an affine transformation. But it can be somewhat complicated: Notice what happened to the ellipse's (natural) axes: after they were created by the stretch and squeeze, they (of course) rotated and shifted along with the ellipse itself. We easily see these axes even when they are not drawn, because they are axes of symmetry of the ellipse itself.

We would like to apply our understanding of ellipses to understanding distorted circularly symmetric distributions, like the bivariate Normal family. Unfortunately, there is a problem with these distortions: they do not respect the distinction between the $x$ and $y$ axes. The rotation at step 3 ruins that. Look at the faint coordinate grids in the backgrounds: these show what happens to a grid (of mesh $1/2$ in both directions) when it is distorted.
In the first image the spacing between the original vertical lines (shown solid) is doubled. In the second image the spacing between the original horizontal lines (shown dashed) is shrunk by a third. In the third image the grid spacings are not changed, but all the lines are rotated. They shift up and to the right in the fourth image. The final image, showing the net result, displays this stretched, squeezed, rotated, shifted grid. The original solid lines of constant $x$ coordinate no longer are vertical.

The key idea--one might venture to say it is the crux of regression--is that there is a way in which the circle can be distorted into an ellipse without rotating the vertical lines. Because the rotation was the culprit, let's cut to the chase and show how to create a rotated ellipse without actually appearing to rotate anything!

This is a skew transformation. It does two things at once:

It squeezes in the $y$ direction (by an amount $\lambda$, say). This leaves the $x$-axis alone.

It lifts any resulting point $(x,\lambda y)$ by an amount directly proportional to $x$. Writing that constant of proportionality as $\rho$, this sends $(x,\lambda y)$ to $(x, \lambda y+\rho x)$.

The second step lifts the $x$-axis into the line $y=\rho x$, shown in the previous figure. As shown in that figure, I want to work with a special skew transformation, one that effectively rotates the ellipse by 45 degrees and inscribes it into the unit square. The major axis of this ellipse is the line $y=x$. It is visually evident that $|\rho| \le 1$. (Negative values of $\rho$ tilt the ellipse down to the right rather than up to the right.) This is the geometric explanation of "regression to the mean."

Choosing an angle of 45 degrees makes the ellipse symmetric around the square's diagonal (part of the line $y=x$). To figure out the parameters of this skew transformation, observe: The lifting by $\rho x$ moves the point $(1,0)$ to $(1,\rho)$. The symmetry around the main diagonal then implies the point $(\rho, 1)$ also lies on the ellipse. Where did this point start out?

The original (upper) point on the unit circle (having implicit equation $x^2+y^2=1$) with $x$ coordinate $\rho$ was $(\rho, \sqrt{1-\rho^2})$. Any point of the form $(\rho, y)$ first got squeezed to $(\rho, \lambda y)$ and then lifted to $(\rho, \lambda y + \rho\times\rho)$. The unique solution to the equation $(\rho, \lambda \sqrt{1-\rho^2} + \rho^2) = (\rho, 1)$ is $\lambda = \sqrt{1-\rho^2}$. That is the amount by which all distances in the vertical direction must be squeezed in order to create an ellipse at a 45 degree angle when it is skewed vertically by $\rho$.

To firm up these ideas, here is a tableau showing how a circularly symmetric distribution is distorted into distributions with elliptical contours by means of these skew transformations. The panels show values of $\rho$ equal to $0,$ $3/10,$ $6/10,$ and $9/10,$ from left to right. The leftmost figure shows a set of starting points around one of the circular contours as well as part of the horizontal axis. Subsequent figures use arrows to show how those points are moved. The image of the horizontal axis appears as a slanted line segment (with slope $\rho$). (The colors represent different amounts of density in the different figures.)

Application

We are ready to do regression. A standard, elegant (yet simple) method to perform regression is first to express the original variables in new units of measurement: we center them at their means and use their standard deviations as the units.
This moves the center of the distribution to the origin and makes all its elliptical contours slant 45 degrees (up or down). When these standardized data form a circular point cloud, the regression is easy: the means conditional on $x$ are all $0$, forming a line passing through the origin. (Circular symmetry implies symmetry with respect to the $x$ axis, showing that all conditional distributions are symmetric, whence they have $0$ means.)

As we have seen, we may view the standardized distribution as arising from this basic simple situation in two steps: first, all the (standardized) $y$ values are multiplied by $\sqrt{1-\rho^2}$ for some value of $\rho$; next, the values at each $x$ coordinate are vertically skewed by $\rho x$. What did these distortions do to the regression line (which plots the conditional means against $x$)?

The shrinking of $y$ coordinates multiplied all vertical deviations by a constant. This merely changed the vertical scale and left all conditional means unaltered at $0$.

The vertical skew transformation added $\rho x$ to all conditional values at $x$, thereby adding $\rho x$ to their conditional mean: the curve $y=\rho x$ is the regression curve, which turns out to be a line.

Similarly, we may verify that because the $x$-axis is the least squares fit to the circularly symmetric distribution, the least squares fit to the transformed distribution also is the line $y=\rho x$: the least-squares line coincides with the regression line. These beautiful results are a consequence of the fact that the vertical skew transformation does not change any $x$ coordinates.

We can easily say more:

The first bullet (about shrinking) shows that when $(X,Y)$ has any circularly symmetric distribution, the conditional variance of $Y|X$ was multiplied by $\left(\sqrt{1-\rho^2}\right)^2 = 1 - \rho^2$. More generally: the vertical skew transformation rescales each conditional distribution by $\sqrt{1-\rho^2}$ and then recenters it by $\rho x$.

For the standard bivariate Normal distribution, the conditional variance is a constant (equal to $1$), independent of $x$. We immediately conclude that after applying this skew transformation, the conditional variance of the vertical deviations is still a constant and equals $1-\rho^2$. Because the conditional distributions of a bivariate Normal are themselves Normal, now that we know their means and variances, we have full information about them.

Finally, we need to relate $\rho$ to the original covariance matrix $\Sigma$. For this, recall that the (nicest) definition of the correlation coefficient between two standardized variables $X$ and $Y$ is the expectation of their product $XY$. (The correlation of $X$ and $Y$ is simply declared to be the correlation of their standardized versions.) Therefore, when $(X,Y)$ follows any circularly symmetric distribution and we apply the skew transformation to the variables, we may write $$\varepsilon = Y - \rho X$$ for the vertical deviations from the regression line and notice that $\varepsilon$ must have a symmetric distribution around $0$. Why? Because before the skew transformation was applied, $Y$ had a symmetric distribution around $0$ and then we (a) squeezed it and (b) lifted it by $\rho X$. The former did not change its symmetry while the latter recentered it at $\rho X$, QED.

The next figure illustrates this. The black lines trace out heights proportional to the conditional densities at various regularly-spaced values of $x$.
The thick white line is the regression line, which passes through the center of symmetry of each conditional curve. This plot shows the case $\rho = -1/2$ in standardized coordinates.

Consequently $$\mathbb{E}(XY) = \mathbb{E}\left(X(\rho X + \varepsilon)\right) = \rho\mathbb{E}(X^2) + \mathbb{E}(X\varepsilon) = \rho(1) + 0=\rho.$$ The final equality is due to two facts: (1) because $X$ has been standardized, the expectation of its square is its standardized variance, equal to $1$ by construction; and (2) the expectation of $X\varepsilon$ equals the expectation of $X(-\varepsilon)$ by virtue of the symmetry of $\varepsilon$. Because the latter is the negative of the former, both must equal $0$: this term drops out. We have identified the parameter of the skew transformation, $\rho$, as being the correlation coefficient of $X$ and $Y$.

Conclusions

By observing that any ellipse may be produced by distorting a circle with a vertical skew transformation that preserves the $x$ coordinate, we have arrived at an understanding of the contours of any distribution of random variables $(X,Y)$ that is obtained from a circularly symmetric one by means of stretches, squeezes, rotations, and shifts (that is, any affine transformation). By re-expressing the results in terms of the original units of $x$ and $y$--which amounts to adding back their means, $\mu_x$ and $\mu_y$, after multiplying by their standard deviations $\sigma_x$ and $\sigma_y$--we find that:

The least-squares line and the regression curve both pass through the origin of the standardized variables, which corresponds to the "point of averages" $(\mu_x,\mu_y)$ in original coordinates.

The regression curve, which is defined to be the locus of conditional means, $\{(x, \rho x)\},$ coincides with the least-squares line.

The slope of the regression line in standardized coordinates is the correlation coefficient $\rho$; in the original units it therefore equals $\sigma_y \rho / \sigma_x$. Consequently the equation of the regression line is $$y = \frac{\sigma_y\rho}{\sigma_x}\left(x - \mu_x\right) + \mu_y.$$

The conditional variance of $Y|X$ is $\sigma_y^2(1-\rho^2)$ times the conditional variance of $Y'|X'$ where $(X',Y')$ has a standard distribution (circularly symmetric with unit variances in both coordinates), $X'=(X-\mu_X)/\sigma_x$, and $Y'=(Y-\mu_Y)/\sigma_Y$.

None of these results is a particular property of bivariate Normal distributions! For the bivariate Normal family, the conditional variance of $Y'|X'$ is constant (and equal to $1$): this fact makes that family particularly simple to work with. In particular: because in the covariance matrix $\Sigma$ the coefficients are $\sigma_{11}=\sigma_x^2,$ $\sigma_{12}=\sigma_{21}=\rho\sigma_x\sigma_y,$ and $\sigma_{22}=\sigma_y^2,$ the conditional variance of $Y|X$ for a bivariate Normal distribution is $$\sigma_y^2(1-\rho^2)=\sigma_{22}\left(1-\left(\frac{\sigma_{12}}{\sqrt{\sigma_{11}\sigma_{22}}}\right)^2\right)=\sigma_{22} - \frac{\sigma_{12}^2}{\sigma_{11}}.$$

Technical Notes

The key idea can be stated in terms of matrices describing the linear transformations. It comes down to finding a suitable "square root" of the correlation matrix for which the $y$ direction $(0,1)'$ is an eigenvector.
Thus: $$\left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \\\end{array}\right) = \mathbb{A}\mathbb{A}'$$ where $$\mathbb{A} = \left(\begin{array}{cc} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \\\end{array}\right).$$ A much better known square root is the one initially described (involving a rotation instead of a skew transformation); it is the one produced by a singular value decomposition and it plays a prominent role in principal components analysis (PCA): $$\left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \\\end{array}\right) = \mathbb{B}\mathbb{B}';$$ $$\mathbb{B} = \mathbb{Q} \left( \begin{array}{cc} \sqrt{\rho +1} & 0 \\ 0 & \sqrt{1-\rho } \\ \end{array} \right)\mathbb{Q}'$$ where $\mathbb{Q} = \left( \begin{array}{cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \end{array} \right)$ is the rotation matrix for a $45$ degree rotation. Thus, the distinction between PCA and regression comes down to the difference between two special square roots of the correlation matrix.
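To make the skew-transformation story concrete, here is a small numerical sketch of my own (not part of the original answer) in Python/NumPy; the variable names and sample size are just illustrative. It applies the map $(x,y)\mapsto(x,\sqrt{1-\rho^2}\,y+\rho x)$ to a circular standard-Normal cloud and checks three of the claims above: the correlation becomes $\rho$, the conditional means lie on the line $y=\rho x$, and the conditional variance is the constant $1-\rho^2$. It also verifies that the skew factor $\mathbb{A}$ and the symmetric (PCA) square root both reproduce the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = -2 / 3                        # the correlation used in the answer's example
lam = np.sqrt(1 - rho**2)           # the vertical squeeze factor

# Start with a circularly symmetric cloud: the standard bivariate Normal.
n = 200_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Vertical skew transformation: squeeze y by lam, then lift by rho * x.
# The x coordinates are left untouched.
y_skewed = lam * y + rho * x

# The correlation of the distorted cloud is rho ...
print(np.corrcoef(x, y_skewed)[0, 1])        # ~ -0.667

# ... the conditional mean near x = 1 lies on the line y = rho * x ...
band = np.abs(x - 1.0) < 0.05
print(y_skewed[band].mean())                 # ~ rho = -0.667

# ... and the conditional variance is the constant 1 - rho^2.
print(y_skewed[band].var())                  # ~ 0.556

# The skew transformation is the lower-triangular square root A of the
# correlation matrix; the symmetric square root B (from the eigendecomposition)
# is the one that appears in PCA.  Both reproduce the same matrix.
M = np.array([[1.0, rho], [rho, 1.0]])
A = np.array([[1.0, 0.0], [rho, lam]])
w, Q = np.linalg.eigh(M)
B = Q @ np.diag(np.sqrt(w)) @ Q.T
print(np.allclose(A @ A.T, M), np.allclose(B @ B.T, M))   # True True
```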
3,902
What is the intuition behind conditional Gaussian distributions?
This is essentially linear (OLS) regression. In that case, you are finding the conditional distribution of $Y$ given that $X=x_i$. (Strictly speaking, OLS regression does not make assumptions about the distribution of $X$, whereas your example is a multivariate normal, but we will ignore these things.)

Now, if the covariance between $X_1$ and $X_2$ is not $0$, then the mean of the conditional distribution of $X_2$ has to shift as you change the value of $x_1$ where you are 'slicing through' the multivariate distribution. Consider the figure below: Here we see that the marginal distributions are both normal, with a positive correlation between $X_1$ and $X_2$. If you look at the conditional distribution of $X_2$ at any point on $X_1$, the distribution is a univariate normal. However, because of the positive correlation (i.e., the non-zero covariance), the mean of those conditional distributions shifts up as you move from left to right. For example, the figure shows that $\mu_{X_2|X_1=25}\ne\mu_{X_2|X_1=45}$.

(For any future readers who might be confused by the symbols, I want to state that, e.g., $\sigma_{22}$ is an element of the covariance matrix $\Sigma$. Thus, it is the variance of $X_2$, even though people will typically think of a variance as $\sigma^2$, and $\sigma$ as a standard deviation.)

Your equation for the mean is directly connected to the equation for estimating the slope in OLS regression (and remember that in regression $\hat y_i$ is the conditional mean): $$ \hat\beta_1=\frac{\text{Cov}(x,y)}{\text{Var}(x)} $$ In your equation, $^{\sigma_{12}}/_{\sigma_{22}}$ is the covariance over the variance; that is, it is the slope, just as above. Thus, your equation for the mean is just sliding your conditional mean, $\mu_{X_1|X_2=x_{2i}}$, up or down from its unconditional mean, $\mu_{X_1}$, based on how far away from $\mu_{X_2}$ $x_{2i}$ is and the slope of the relationship between $X_1$ and $X_2$.
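As an illustration of the connection to the OLS slope (a sketch of my own, not part of the original answer; the means and covariance matrix below are made up for the demo), the following NumPy snippet simulates a bivariate Normal sample and checks that the sample slope $\widehat{\text{Cov}}/\widehat{\text{Var}}$ reproduces $\sigma_{12}/\sigma_{22}$ and that the conditional-mean formula matches the empirical conditional mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up parameters, purely for the demo.
mu = np.array([3.0, 10.0])                    # (mu_1, mu_2)
Sigma = np.array([[4.0, 3.0],
                  [3.0, 9.0]])                # sigma_11, sigma_12 / sigma_21, sigma_22

X = rng.multivariate_normal(mu, Sigma, size=500_000)
x1, x2 = X[:, 0], X[:, 1]

# OLS slope for predicting X1 from X2 is Cov(x2, x1) / Var(x2) = sigma_12 / sigma_22.
slope_hat = np.cov(x2, x1)[0, 1] / np.var(x2, ddof=1)
print(slope_hat, Sigma[0, 1] / Sigma[1, 1])   # ~ 0.333 vs 0.333

# The conditional-mean formula agrees with the empirical mean of X1 near X2 = a.
a = 13.0
formula = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (a - mu[1])
band = np.abs(x2 - a) < 0.05
print(formula, x1[band].mean())               # 4.0 vs ~ 4.0
```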
3,903
What is the intuition behind conditional Gaussian distributions?
Gung's answer is good (+1). There is another way of looking at it, though. Imagine that the covariance between $X_1$ and $X_2$ were to be positive. What does it mean for $\sigma_{1,2}>0$? Well, it means that when $X_2$ is above $X_2$'s mean, $X_1$ tends to be above $X_1$'s mean, and vice versa.

Now suppose I told you that $X_2=\mathit{x}_2>\mu_2$. That is, suppose I told you that $X_2$ is above its mean. Wouldn't you conclude that $X_1$ is likely above its mean (since you know $\sigma_{1,2}>0$ and you know what covariance means)? So, now, if you take the mean of $X_1$, knowing that $X_2$ is above $X_2$'s mean, you are going to get a number above $X_1$'s mean. That is what the formula says: \begin{equation} E\{X_1 | X_2=\mathit{x}_2\} = \mu_1 + \frac{\sigma_{1,2}}{\sigma_{2,2}}\left( \mathit{x}_2-\mu_2\right) \end{equation} If the covariance is positive and $X_2$ is above its mean, then $E\{X_1 | X_2=\mathit{x}_2\} > \mu_1$.

The conditional expectation takes the form above for the normal distribution, not for all distributions. This seems a little strange given that the reasoning in the paragraph above seems pretty compelling. However, (almost) no matter what the distributions of $X_1$ and $X_2$ are, this formula is right: \begin{equation} BLP\{X_1 | X_2=\mathit{x}_2\} = \mu_1 + \frac{\sigma_{1,2}}{\sigma_{2,2}}\left( \mathit{x}_2-\mu_2\right) \end{equation} where $BLP$ means best linear predictor. The normal distribution is special in that conditional expectation and best linear predictor are the same thing.
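To see the distinction numerically (a sketch of my own with a made-up non-Normal example, not part of the original answer): take $X_2$ uniform on $(-1,1)$ and $X_1 = X_2 + X_2^2$. The $BLP$ formula coincides with the least-squares line, yet the true conditional expectation $E\{X_1 \mid X_2=x_2\} = x_2 + x_2^2$ is not linear, so the two differ once we leave the Normal case.

```python
import numpy as np

rng = np.random.default_rng(2)

# A deliberately non-Normal pair: X2 ~ Uniform(-1, 1) and X1 = X2 + X2**2.
x2 = rng.uniform(-1.0, 1.0, size=1_000_000)
x1 = x2 + x2**2

mu1, mu2 = x1.mean(), x2.mean()
s12 = np.cov(x1, x2)[0, 1]
s22 = x2.var(ddof=1)

# Best linear predictor at X2 = a, straight from the formula.
a = 0.8
blp = mu1 + s12 / s22 * (a - mu2)

# It matches the ordinary least-squares line ...
b1, b0 = np.polyfit(x2, x1, deg=1)            # slope, intercept
print(blp, b0 + b1 * a)                       # ~ 1.13 vs ~ 1.13

# ... but not the true conditional expectation, which is x2 + x2**2 here.
band = np.abs(x2 - a) < 0.01
print(x1[band].mean(), a + a**2)              # ~ 1.44 vs 1.44
```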
3,904
Machine learning cookbook / reference card / cheatsheet?
Some of the best and freely available resources are:

Hastie, Friedman et al., The Elements of Statistical Learning: Data Mining, Inference, and Prediction

David Barber, Bayesian Reasoning and Machine Learning

David MacKay, Information Theory, Inference and Learning Algorithms (http://www.inference.phy.cam.ac.uk/mackay/itila/)

As to the author's question, I haven't come across an "all in one page" solution.
3,905
Machine learning cookbook / reference card / cheatsheet?
If you want to learn machine learning, I strongly advise you to enroll in the free online ML course taught by Prof. Andrew Ng this winter. I did the previous one in the autumn: all the learning material is of exceptional quality, geared toward practical applications, and a lot easier to grok than struggling alone with a book. The course also makes the subject pretty low-hanging fruit, with good intuitive explanations and a minimum of math.
3,906
Machine learning cookbook / reference card / cheatsheet?
Yes, you are fine; Christopher Bishop's "Pattern Recognition and Machine Learning" is an excellent book for general reference, you can't really go wrong with it. A fairly recent book but also very well-written and equally broad is David Barber's "Bayesian Reasoning and Machine Learning"; a book I would feel is slightly more suitable for a newcomer to the field. I have used "The Elements of Statistical Learning" from Hastie et al. (mentioned by Macro), and while it is a very strong book I would not recommend it as a first reference; maybe it would serve you better as a second reference for more specialized topics. In that respect, David MacKay's book, Information Theory, Inference, and Learning Algorithms, can also do a splendid job.
3,907
Machine learning cookbook / reference card / cheatsheet?
Since the consensus seems to be that this question is not a duplicate, I'd like to share my favorite for machine learning beginners: I found Programming Collective Intelligence the easiest book for beginners, since the author Toby Segaran is focused on allowing the median software developer to get his/her hands dirty with data hacking as fast as possible. A typical chapter: the data problem is clearly described, followed by a rough explanation of how the algorithm works, and finally it shows how to create some insights with just a few lines of code. The use of Python lets one understand everything rather fast (you do not need to know Python beforehand; seriously, I did not know it either). Don't think that this book is focused only on building recommender systems. It also deals with text mining, spam filtering, optimization, clustering, validation, etc., and hence gives you a neat overview of the basic tools of every data miner.
3,908
Machine learning cookbook / reference card / cheatsheet?
Witten and Frank, "Data Mining", Elsevier 2005, is a good book for self-learning, as there is a Java library of code (Weka) to go with the book and it is very practically oriented. I suspect there is a more recent edition than the one I have.
3,909
Machine learning cookbook / reference card / cheatsheet?
I have Machine Learning: An Algorithmic Perspective by Stephen Marsland and find it very useful for self-learning. Python code is given throughout the book. I agree with what is said in this favourable review: http://blog.rtwilson.com/review-machine-learning-an-algorithmic-perspective-by-stephen-marsland/
3,910
Machine learning cookbook / reference card / cheatsheet?
http://scikit-learn.org/stable/tutorial/machine_learning_map/

Often the hardest part of solving a machine learning problem can be finding the right estimator for the job. Different estimators are better suited for different types of data and different problems. The flowchart below is designed to give users a bit of a rough guide on how to approach problems with regard to which estimators to try on your data. Click on any estimator in the chart below to see its documentation.
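As a minimal illustration of how one suggestion from that map translates into code (my own sketch, not taken from the linked page; the dataset and settings are just for demonstration):

```python
# Following one path through the chart: more than 50 labeled samples, predicting
# a category, fewer than 100k samples -> the map suggests starting with a linear SVC.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LinearSVC(max_iter=10_000).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # held-out accuracy, typically above 0.9
```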
3,911
Machine learning cookbook / reference card / cheatsheet?
"Elements of Statistical Learning" would be a great book for your purposes. The 5th printing (2011) of the 2nd edition (2009) of the book is freely available at http://www.stanford.edu/~hastie/local.ftp/Springer/ESLII_print5.pdf
3,912
Machine learning cookbook / reference card / cheatsheet?
The awesome-machine-learning repository seems to be a master list of resources, including code, tutorials and books.
3,913
Machine learning cookbook / reference card / cheatsheet?
Most books mentioned in other answers are very good and you can't really go wrong with any of them. Additionally, I find the following cheat sheet for Python's scikit-learn quite useful.
3,914
Machine learning cookbook / reference card / cheatsheet?
Microsoft Azure also provides a similar cheat-sheet to the scikit-learn one posted by Anton Tarasenko. (source: https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-algorithm-cheat-sheet) They accompany it with a notice: The suggestions offered in this algorithm cheat sheet are approximate rules-of-thumb. Some can be bent, and some can be flagrantly violated. This is intended to suggest a starting point. (...) Microsoft additionally provides an introductory article providing further details. Please notice that those materials are focused on the methods implemented in Microsoft Azure.
3,915
Machine learning cookbook / reference card / cheatsheet?
I like Duda, Hart and Stork "Pattern Classification". This is a recent revision of a classic text that explains everything very well. Not sure that it is updated to have much coverage of neural networks and SVMs. The book by Hastie, Tibshirani and Friedman is about the best there is but may be a bit more technical than what you are looking for and is detailed rather than an overview of the subject.
3,916
Machine learning cookbook / reference card / cheatsheet?
Don't start with Elements of Statistical Learning. It is great, but it is a reference book, which doesn't sound like what you are looking for. I would start with Programming Collective Intelligence as it's an easy read.
3,917
Machine learning cookbook / reference card / cheatsheet?
For a first book on machine learning, which does a good job of explaining the principles, I would strongly recommend Rogers and Girolami, A First Course in Machine Learning, (Chapman & Hall/CRC Machine Learning & Pattern Recognition), 2011. Chris Bishop's book, or David Barber's both make good choices for a book with greater breadth, once you have a good grasp of the principles.
3,918
Machine learning cookbook / reference card / cheatsheet?
I wrote a summary like that, but only on one machine learning task (Netflix Prize), and it has 195 pages: http://arek-paterek.com/book
3,919
Machine learning cookbook / reference card / cheatsheet?
Check this link featuring some free ebooks on machine learning: http://designimag.com/best-free-machine-learning-ebooks/. It might be useful for you.
3,920
Machine learning cookbook / reference card / cheatsheet?
A good cheatsheet is the one in Max Kuhn's book Applied Predictive Modeling. The book has a good summary table of several ML models; the table is in Appendix A, page 549:
3,921
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
First things first, I don't think there are many questions of the form "Is it a good practice to always X in machine learning" where the answer is going to be definitive. Always? Always always? Across parametric, non-parametric, Bayesian, Monte Carlo, social science, purely mathematical, and million-feature models? That'd be nice, wouldn't it! Concretely though, here are a few ways in which: it just depends. Sometimes when normalizing is good: 1) Several algorithms, in particular SVMs come to mind, can sometimes converge far faster on normalized data (although why, precisely, I can't recall). 2) When your model is sensitive to magnitude, and the units of two different features are different, and arbitrary. This is like the case you suggest, in which something gets more influence than it should. But of course -- not all algorithms are sensitive to magnitude in the way you suggest. Linear regression predictions will be identical whether or not you scale your data, because the coefficients simply rescale to compensate -- the model only cares about the proportional relationships. Sometimes when normalizing is bad: 1) When you want to interpret your coefficients, and they don't normalize well. Regression on something like dollars gives you a meaningful outcome. Regression on proportion-of-maximum-dollars-in-sample might not. 2) When, in fact, the units on your features are meaningful, and distance does make a difference! Back to SVMs -- if you're trying to find a max-margin classifier, then the units that go into that 'max' matter. Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the x-axis. Instead of four little quadrant-clusters, you're going to get the long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!) In what I'm sure is an unsatisfying summary, the most general answer is that you need to ask yourself seriously what makes sense with the data, and model, you're using.
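To make the clustering point concrete, here is a minimal scikit-learn sketch (the four-quadrant data, the tenfold stretch, and the cluster counts are made-up illustration values, not anything from the answer above): stretching one axis changes which partition k-means prefers.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    # four hypothetical clusters, one per quadrant, on comparable scales
    centers = np.array([[3, 3], [3, -3], [-3, 3], [-3, -3]], dtype=float)
    X = np.vstack([c + rng.normal(0, 0.5, size=(100, 2)) for c in centers])
    true_labels = np.repeat(np.arange(4), 100)

    labels_orig = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    X_stretched = X * np.array([1.0, 10.0])   # stretch the y-axis tenfold
    labels_str = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_stretched)

    print(adjusted_rand_score(true_labels, labels_orig))  # ~1.0: the four quadrant clusters are recovered
    print(adjusted_rand_score(true_labels, labels_str))   # much lower: k-means now slices the stretched cloud along y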
3,922
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
Well, I believe a more geometric point of view will help better decide whether normalization helps or not. Imagine your problem of interest has only two features and they range differently. Then geometrically, the data points are spread around and form an ellipsoid. However, if the features are normalized they will be more concentrated and, hopefully, form a unit circle and make the covariance diagonal or at least close to diagonal. This is the idea behind methods such as batch-normalizing the intermediate representations of data in neural networks. Using BN, the convergence speed increases amazingly (maybe 5-10 times), since the gradients can easily do what they are supposed to do in order to reduce the error. In the unnormalized case, gradient-based optimization algorithms will have a very hard time moving the weight vectors towards a good solution. However, the cost surface for the normalized case is less elongated, and gradient-based optimization methods will do much better and diverge less. This is certainly the case for linear models, and especially the ones whose cost function is a measure of divergence of the model's output and the target (e.g. linear regression with MSE cost function), but might not necessarily be the case for non-linear ones. Normalization does not hurt for the nonlinear models; not doing it for linear models will hurt. The picture below could be [roughly] viewed as an example of an elongated error surface on which gradient-based methods could have a hard time helping the weight vectors move towards a local optimum.
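A tiny numpy sketch of that geometric picture (the two features and their scales below are invented for illustration): standardizing makes both variances 1, so the cloud is no longer stretched along one axis, although any correlation between the features remains.

    import numpy as np

    rng = np.random.default_rng(0)
    x1 = rng.normal(0, 1, size=1000)
    x2 = 100 * x1 + rng.normal(0, 50, size=1000)   # same signal, wildly different scale
    X = np.column_stack([x1, x2])
    print(np.cov(X, rowvar=False))                 # a very elongated ellipsoid

    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each feature
    print(np.cov(Xs, rowvar=False))                # variances ~1; the off-diagonal is now just the correlation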
3,923
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
Let me tell you the story of how I learned the importance of normalization. I was trying to classify handwritten digits data (a simple task of classifying features extracted from images of hand-written digits) with Neural Networks as an assignment for a Machine Learning course. Just like anyone else, I started with a Neural Network library/tool, fed it the data and started playing with the parameters. I tried changing the number of layers, the number of neurons and various activation functions. None of them yielded the expected results (accuracy around 0.9). The culprit? The scaling factor $s$ in the activation function $\frac{s}{1+e^{-s x}}-1$. If the parameter $s$ is not set properly, the activation function will either activate every input or nullify every input in every iteration, which obviously led to unexpected values for the model parameters. My point is, it is not easy to set $s$ when the input $x$ varies over large values. As some of the other answers have already pointed out, the "good practice" as to whether to normalize the data or not depends on the data, model, and application. By normalizing, you are actually throwing away some information about the data such as the absolute maximum and minimum values. So, there is no rule of thumb.
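A small numpy illustration of that failure mode (the activation is the one from this answer; the input values are made up): with large raw inputs the unit saturates and every example produces nearly the same output, while standardized inputs keep it in its responsive range.

    import numpy as np

    def act(x, s=1.0):
        # the scaled sigmoid-like activation from the answer: s/(1 + exp(-s*x)) - 1
        return s / (1.0 + np.exp(-s * x)) - 1.0

    raw = np.array([250.0, 800.0, 1500.0])      # hypothetical unscaled feature values
    scaled = (raw - raw.mean()) / raw.std()

    print(act(raw))     # all outputs ~0: the unit saturates and gradients vanish
    print(act(scaled))  # clearly different outputs: the unit can still discriminate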
3,924
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
As others said, normalization is not always applicable; e.g. from a practical point of view. In order to be able to scale or normalize features to a common range like [0,1], you need to know the min/max (or mean/stdev depending on which scaling method you apply) of each feature. IOW: you need to have all the data for all features before you start training. Many practical learning problems don't provide you with all the data a-priori, so you simply can't normalize. Such problems require an online learning approach. However, note that some online (as opposed to batch learning) algorithms which learn from one example at a time, support an approximation to scaling/normalization. They learn the scales and compensate for them, iteratively. vowpal wabbit for example iteratively normalizes for scale by default (unless you explicitly disable auto-scaling by forcing a certain optimization algorithm like naive --sgd)
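This is not vowpal wabbit's actual algorithm, but a minimal Python sketch (with invented class and variable names) of the general idea: keep running estimates of each feature's mean and variance with Welford's method and scale each incoming example using only the statistics seen so far.

    import numpy as np

    class RunningScaler:
        # Welford-style running mean/variance; an approximation to batch scaling
        def __init__(self, n_features):
            self.n = 0
            self.mean = np.zeros(n_features)
            self.m2 = np.zeros(n_features)

        def update_and_scale(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-8   # avoid dividing by zero early on
            return (x - self.mean) / std

    scaler = RunningScaler(n_features=3)
    for x in np.random.default_rng(0).normal(5.0, 2.0, size=(1000, 3)):
        z = scaler.update_and_scale(x)   # z is what the online learner would see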
3,925
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
Scaling/normalizing does change your model slightly. Most of the time this corresponds to applying an affine function. So you have $Z=A_X+B_XXC_X$ where $X$ is your "input/original data" (one row for each training example, one column for each feature). Then $A_X,B_X,C_X$ are matrices that are typically functions of $X$. The matrix $Z$ is what you feed into your ML algorithm. Now, suppose you want to predict for some new sample. But you only have $X_{new}$ and not $Z_{new}$. You should be applying the function $Z_{new}=A_X+B_XX_{new}C_X$. That is, use the same $A_X,B_X,C_X$ from the training dataset, rather than re-estimate them. This makes these matrices play the same role as other parameters in your model. While the two approaches are often equivalent in terms of the predicted values you get on the training dataset, they certainly aren't on new data. A simple example: predicting for $1$ new observation and standardising it on its own (subtract its mean, divide by its sd) will always return zero.
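A minimal scikit-learn sketch of exactly this point (the data here is synthetic): the scaling parameters are estimated once on the training data and then reused, unchanged, for any new sample.

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X_train = rng.normal(50.0, 10.0, size=(200, 4))
    X_new = rng.normal(50.0, 10.0, size=(1, 4))        # one new observation

    scaler = StandardScaler().fit(X_train)             # estimate mean/scale on training data only
    Z_train = scaler.transform(X_train)
    Z_new = scaler.transform(X_new)                    # reuse the SAME training statistics here
    print(Z_new)

    # the wrong way would be to standardise the single new observation by itself:
    # its own mean is itself and its own sd is zero, so that is useless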
3,926
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
For machine learning models that include coefficients (e.g. regression, logistic regression, etc) the main reason to normalize is numerical stability. Mathematically, if one of your predictor columns is multiplied by 10^6, then the corresponding regression coefficient will get multiplied by 10^{-6} and the results will be the same. Computationally, your predictors are often transformed by the learning algorithm (e.g. the matrix X of predictors in a regression becomes X'X) and some of those transformations can result in lost numerical precision if X is very large or very small. If your predictors are on the scale of 100's then this won't matter. If you're modeling grains of sand, astronomical units, or search query counts then it might.
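A quick numpy check of the numerical-stability point (the 10^6 factor is the same hypothetical one used above): blowing up one column's scale blows up the condition number of X'X by roughly the square of that factor.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    X_bad = X.copy()
    X_bad[:, 0] *= 1e6                       # one predictor on a wildly different scale

    print(np.linalg.cond(X.T @ X))           # modest condition number
    print(np.linalg.cond(X_bad.T @ X_bad))   # roughly 1e12 times larger: precision is at risk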
3,927
Is it a good practice to always scale/normalize data for machine learning? [duplicate]
I was trying to solve a ridge regression problem using gradient descent. Without normalization, I set some appropriate step size and ran the code. In order to make sure my code was error-free, I coded the same objective in CVX too. CVX took only a few iterations to converge to a certain optimal value, while my code, run for 10k iterations with the best step size I could find, got close to the optimal value of CVX but was still not exact. After normalizing the data set and feeding it to my code and CVX, I was surprised to see that convergence now took only 100 iterations, and the optimal value to which gradient descent converged was exactly equal to that of CVX. Also, the amount of "explained variance" by the model after normalization was higher than for the original one. So just from this naive experiment I realized that, as far as regression problems are concerned, I would go for normalization of the data. BTW, here normalization means subtracting the mean and dividing by the standard deviation. For backing me up on regression, please see this relevant question and the discussion on it: When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
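Here is a small numpy sketch in the same spirit as that experiment (synthetic data, a fixed iteration budget, and a conservative step size picked from the largest eigenvalue; none of this is the original CVX code): with the same number of iterations, gradient descent on the standardized features lands essentially on the closed-form ridge solution, while on the raw features it is still far away.

    import numpy as np

    def ridge_gd(X, y, lam=1.0, iters=2000):
        n, p = X.shape
        w = np.zeros(p)
        lr = 1.0 / (np.linalg.eigvalsh(X.T @ X / n).max() + lam)   # safe step size
        for _ in range(iters):
            w -= lr * (X.T @ (X @ w - y) / n + lam * w)
        return w

    def ridge_exact(X, y, lam=1.0):
        n, p = X.shape
        return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3)) * np.array([1.0, 100.0, 1e4])    # very different feature scales
    y = X @ np.array([2.0, 0.03, 1e-4]) + rng.normal(size=300)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)                      # standardized copy

    for name, D in [("raw", X), ("standardized", Xs)]:
        gap = np.linalg.norm(ridge_gd(D, y) - ridge_exact(D, y))
        print(name, gap)   # the standardized run is essentially at its optimum; the raw one is not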
3,928
What is the objective function of PCA?
Without trying to give a full primer on PCA, from an optimization standpoint, the primary objective function is the Rayleigh quotient. The matrix that figures in the quotient is (some multiple of) the sample covariance matrix $$\newcommand{\m}[1]{\mathbf{#1}}\newcommand{\x}{\m{x}}\newcommand{\S}{\m{S}}\newcommand{\u}{\m{u}}\newcommand{\reals}{\mathbb{R}}\newcommand{\Q}{\m{Q}}\newcommand{\L}{\boldsymbol{\Lambda}} \S = \frac{1}{n} \sum_{i=1}^n \x_i \x_i^T = \m{X}^T \m{X} / n $$ where each $\x_i$ is a vector of $p$ features and $\m{X}$ is the matrix such that the $i$th row is $\x_i^T$. PCA seeks to solve a sequence of optimization problems. The first in the sequence is the unconstrained problem $$ \begin{array}{ll} \text{maximize} & \frac{\u^T \S \u}{\u^T\u} \;, \u \in \reals^p \> . \end{array} $$ Since $\u^T \u = \|\u\|_2^2 = \|\u\| \|\u\|$, the above unconstrained problem is equivalent to the constrained problem $$ \begin{array}{ll} \text{maximize} & \u^T \S \u \\ \text{subject to} & \u^T \u = 1 \>. \end{array} $$ Here is where the matrix algebra comes in. Since $\S$ is a symmetric positive semidefinite matrix (by construction!) it has an eigenvalue decomposition of the form $$ \S = \Q \L \Q^T \>, $$ where $\Q$ is an orthogonal matrix (so $\Q \Q^T = \m{I}$) and $\L$ is a diagonal matrix with nonnegative entries $\lambda_i$ such that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p \geq 0$. Hence, $\u^T \S \u = \u^T \Q \L \Q^T \u = \m{w}^T \L \m{w} = \sum_{i=1}^p \lambda_i w_i^2$. Since $\u$ is constrained in the problem to have a norm of one, then so is $\m{w}$ since $\|\m{w}\|_2 = \|\Q^T \u\|_2 = \|\u\|_2 = 1$, by virtue of $\Q$ being orthogonal. But, if we want to maximize the quantity $\sum_{i=1}^p \lambda_i w_i^2$ under the constraints that $\sum_{i=1}^p w_i^2 = 1$, then the best we can do is to set $\m{w} = \m{e}_1$, that is, $w_1 = 1$ and $w_i = 0$ for $i > 1$. Now, backing out the corresponding $\u$, which is what we sought in the first place, we get that $$ \u^\star = \Q \m{e}_1 = \m{q}_1 $$ where $\m{q}_1$ denotes the first column of $\Q$, i.e., the eigenvector corresponding to the largest eigenvalue of $\S$. The value of the objective function is then also easily seen to be $\lambda_1$. The remaining principal component vectors are then found by solving the sequence (indexed by $i$) of optimization problems $$ \begin{array}{ll} \text{maximize} & \u_i^T \S \u_i \\ \text{subject to} & \u_i^T \u_i = 1 \\ & \u_i^T \u_j = 0 \quad \forall 1 \leq j < i\>. \end{array} $$ So, the problem is the same, except that we add the additional constraint that the solution must be orthogonal to all of the previous solutions in the sequence. It is not difficult to extend the argument above inductively to show that the solution of the $i$th problem is, indeed, $\m{q}_i$, the $i$th eigenvector of $\S$. The PCA solution is also often expressed in terms of the singular value decomposition of $\m{X}$. To see why, let $\m{X} = \m{U} \m{D} \m{V}^T$. Then $n \S = \m{X}^T \m{X} = \m{V} \m{D}^2 \m{V}^T$ and so $\m{V} = \m{Q}$ (strictly speaking, up to sign flips) and $\L = \m{D}^2 / n$. The principal components are found by projecting $\m{X}$ onto the principal component vectors. From the SVD formulation just given, it's easy to see that $$ \m{X} \m{Q} = \m{X} \m{V} = \m{U} \m{D} \m{V}^T \m{V} = \m{U} \m{D} \> . 
$$ The simplicity of representation of both the principal component vectors and the principal components themselves in terms of the SVD of the matrix of features is one reason the SVD features so prominently in some treatments of PCA.
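A short numpy sketch that simply verifies the relationships derived above on synthetic data: the top eigenvector of the sample covariance matches the top right singular vector of the centered data matrix (up to sign), the top eigenvalue equals $d_1^2/n$, and the objective value at the maximizer is $\lambda_1$.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # synthetic correlated features
    X = X - X.mean(axis=0)                                    # center
    n = X.shape[0]

    S = X.T @ X / n                                           # sample covariance
    lam, Q = np.linalg.eigh(S)                                # eigenvalues in ascending order
    q1 = Q[:, -1]                                             # eigenvector of the largest eigenvalue

    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    v1 = Vt[0]                                                # top right singular vector

    print(np.abs(q1 @ v1))             # ~1: same direction up to a sign flip
    print(lam[-1], d[0] ** 2 / n)      # lambda_1 equals d_1^2 / n
    print(q1 @ S @ q1)                 # the objective u^T S u at the maximizer equals lambda_1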
3,929
What is the objective function of PCA?
The solution presented by cardinal focuses on the sample covariance matrix. Another starting point is the reconstruction error of the data by a q-dimensional hyperplane. If the p-dimensional data points are $x_1, \ldots, x_n$ the objective is to solve $$\min_{\mu, \lambda_1,\ldots, \lambda_n, \mathbf{V}_q} \sum_{i=1}^n ||x_i - \mu - \mathbf{V}_q \lambda_i||^2$$ for a $p \times q$ matrix $\mathbf{V}_q$ with orthonormal columns and $\lambda_i \in \mathbb{R}^q$. This gives the best rank q-reconstruction as measured by the euclidean norm, and the columns of the $\mathbf{V}_q$ solution are the first q principal component vectors. For fixed $\mathbf{V}_q$ the solution for $\mu$ and $\lambda_i$ (this is regression) are $$\mu = \overline{x} = \frac{1}{n}\sum_{i=1}^n x_i \qquad \lambda_i = \mathbf{V}_q^T(x_i - \overline{x})$$ For ease of notation lets assume that $x_i$ have been centered in the following computations. We then have to minimize $$\sum_{i=1}^n ||x_i - \mathbf{V}_q\mathbf{V}_q^T x_i||^2$$ over $\mathbf{V}_q$ with orthonormal columns. Note that $P = \mathbf{V}_q\mathbf{V}_q^T$ is the projection onto the q-dimensional column space. Hence the problem is equivalent to minimizing $$\sum_{i=1}^n ||x_i - P x_i||^2 = \sum_{i=1}^n ||x_i||^2 - \sum_{i=1}^n||Px_i||^2$$ over rank q projections $P$. That is, we need to maximize $$\sum_{i=1}^n||Px_i||^2 = \sum_{i=1}^n x_i^TPx_i = \text{tr}(P \sum_{i=1}^n x_i x_i^T) = n \text{tr}(P \mathbf{S})$$ over rank q projections $P$, where $\mathbf{S}$ is the sample covariance matrix. Now $$\text{tr}(P\mathbf{S}) = \text{tr}(\mathbf{V}_q^T\mathbf{S}\mathbf{V}_q) = \sum_{i=1}^q u_i^T \mathbf{S} u_i$$ where $u_1, \ldots, u_q$ are the $q$ (orthonormal) columns in $\mathbf{V}_q$, and the arguments presented in @cardinal's answer show that the maximum is obtained by taking the $u_i$'s to be $q$ eigenvectors for $\mathbf{S}$ with the $q$ largest eigenvalues. The reconstruction error suggests a number of useful generalizations, for instance sparse principal components or reconstructions by low-dimensional manifolds instead of hyperplanes. For details, see Section 14.5 in The Elements of Statistical Learning.
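A numpy sketch of the reconstruction-error view on synthetic data: projecting onto the top-q eigenvectors of the sample covariance gives a smaller squared reconstruction error than projecting onto some other orthonormal q-dimensional basis.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))
    X = X - X.mean(axis=0)                              # center, so mu is taken care of
    q = 2

    _, Q = np.linalg.eigh(X.T @ X / X.shape[0])
    V_pca = Q[:, -q:]                                   # top-q eigenvectors of S

    V_rand, _ = np.linalg.qr(rng.normal(size=(5, q)))   # a random orthonormal V_q for comparison

    def recon_error(V):
        P = V @ V.T                                     # projection onto the column space of V
        return np.sum((X - X @ P) ** 2)

    print(recon_error(V_pca), recon_error(V_rand))      # the PCA basis gives the smaller error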
3,930
What is the objective function of PCA?
See NIPALS (wiki) for one algorithm which doesn't explicitly use a matrix decomposition. I suppose that's what you mean when you say that you want to avoid matrix algebra since you really can't avoid matrix algebra here :)
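For concreteness, here is a minimal numpy sketch of the NIPALS-style alternation for the first component only (the function name and stopping rule are my own simplifications, not the reference implementation): it alternates between updating the loading vector and the score vector, with no explicit eigen- or singular value decomposition.

    import numpy as np

    def nipals_first_pc(X, tol=1e-10, max_iter=1000):
        t = X[:, 0].copy()                     # initial scores: any column works
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)              # loadings given the current scores
            p /= np.linalg.norm(p)
            t_new = X @ p                      # scores given the loadings
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        return t, p                            # scores and the first PC direction

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6)) @ rng.normal(size=(6, 6))
    X = X - X.mean(axis=0)

    t, p = nipals_first_pc(X)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    print(np.abs(p @ Vt[0]))                   # ~1: matches the SVD's leading direction

Further components are obtained by deflating X (subtracting the rank-one fit t p^T) and repeating the same loop.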
3,931
Does the optimal number of trees in a random forest depend on the number of predictors?
Random forest uses bagging (picking a sample of observations rather than all of them) and the random subspace method (picking a sample of features rather than all of them, in other words attribute bagging) to grow each tree. If the number of observations is large but the number of trees is too small, then some observations will be predicted only once or even not at all. If the number of predictors is large but the number of trees is too small, then some features can (theoretically) be missed in all the subspaces used. Both cases result in a decrease of the random forest's predictive power. But the latter is a rather extreme case, since the selection of a subspace is performed at each node. For classification the subspace dimensionality is $\sqrt{p}$ (rather small, $p$ is the total number of predictors) by default, but a tree contains many nodes. For regression the subspace dimensionality is $p/3$ (large enough) by default, though a tree contains fewer nodes. So the optimal number of trees in a random forest depends on the number of predictors only in extreme cases. The official page of the algorithm states that random forest does not overfit, and you can use as many trees as you want. But Mark R. Segal (April 14 2004. "Machine Learning Benchmarks and Random Forest Regression." Center for Bioinformatics & Molecular Biostatistics) has found that it overfits for some noisy datasets. So to obtain the optimal number you can either try training random forests over a grid of the ntree parameter (simple, but more CPU-consuming), or build one random forest with many trees with keep.inbag, calculate out-of-bag (OOB) error rates for the first $n$ trees (where $n$ goes from $1$ to ntree) and plot the OOB error rate vs. the number of trees (more complex, but less CPU-consuming).
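The keep.inbag route above refers to R's randomForest package; a rough scikit-learn equivalent of the same idea (my own sketch on synthetic data, using warm_start to grow the forest incrementally and reading off the out-of-bag score at each size) looks like this:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    clf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)

    oob_error = []
    for n in range(25, 501, 25):
        clf.set_params(n_estimators=n)
        clf.fit(X, y)                          # warm_start adds trees instead of refitting from scratch
        oob_error.append((n, 1.0 - clf.oob_score_))

    for n, err in oob_error:
        print(n, round(err, 4))                # look for where the OOB error curve flattens out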
3,932
Does the optimal number of trees in a random forest depend on the number of predictors?
The number of trees in the Random Forest depends on the number of rows in the data set. I ran an experiment tuning the number of trees on 72 classification tasks from the OpenML-CC18 benchmark and got the following dependency between the optimal number of trees and the number of rows in the data:
3,933
Does the optimal number of trees in a random forest depend on the number of predictors?
According to this article, a random forest should have between 64 and 128 trees. With that, you should have a good balance between ROC AUC and processing time.
3,934
Does the optimal number of trees in a random forest depend on the number of predictors?
I want to add something: if you have more than 1000 features and 1000 rows, you can't just pick an arbitrary number of trees. My suggestion is to first check the number of CPUs and the amount of RAM before trying to launch cross-validation, and to consider the ratio between them and the number of trees. If you use scikit-learn in Python, you have the option n_jobs=-1 to use all processors, but the cost is that each core requires a copy of the data. After that, you can try this formula: ntree = sqrt(number of rows * number of columns) / number of CPUs.
3,935
Why do Convolutional Neural Networks not use a Support Vector Machine to classify?
What is an SVM, anyway? I think the answer for most purposes is “the solution to the following optimization problem”: $$ \begin{split} \operatorname*{arg\,min}_{f \in \mathcal H} \frac{1}{n} \sum_{i=1}^n \ell_\mathit{hinge}(f(x_i), y_i) \, + \lambda \lVert f \rVert_{\mathcal H}^2 \\ \ell_\mathit{hinge}(t, y) = \max(0, 1 - t y) ,\end{split} \tag{SVM} $$ where $\mathcal H$ is a reproducing kernel Hilbert space, $y$ is a label in $\{-1, 1\}$, and $t = f(x) \in \mathbb R$ is a “decision value”; our final prediction will be $\operatorname{sign}(t)$. In the simplest case, $\mathcal H$ could be the space of affine functions $f(x) = w \cdot x + b$, and $\lVert f \rVert_{\mathcal H}^2 = \lVert w \rVert^2 + b^2$. (Handling of the offset $b$ varies depending on exactly what you’re doing, but that’s not important for our purposes.) In the ‘90s through the early ‘10s, there was a lot of work on solving this particular optimization problem in various smart ways, and indeed that’s what LIBSVM / LIBLINEAR / SVMlight / ThunderSVM / ... do. But I don’t think that any of these particular algorithms are fundamental to “being an SVM,” really. Now, how do we train a deep network? Well, we try to solve something like, say, $$ \begin{split} \operatorname*{arg\,min}_{f \in \mathcal F} \frac1n \sum_{i=1}^n \ell_\mathit{CE}(f(x_i), y) + R(f) \\ \ell_\mathit{CE}(p, y) = - y \log(p) - (1-y) \log(1 - p) ,\end{split} \tag{$\star$} $$ where now $\mathcal F$ is the set of deep nets we consider, which output probabilities $p = f(x) \in [0, 1]$. The explicit regularizer $R(f)$ might be an L2 penalty on the weights in the network, or we might just use $R(f) = 0$. Although we could solve (SVM) up to machine precision if we really wanted, we usually can’t do that for $(\star)$ when $\mathcal F$ is more than one layer; instead we use stochastic gradient descent to attempt at an approximate solution. If we take $\mathcal F$ as a reproducing kernel Hilbert space and $R(f) = \lambda \lVert f \rVert_{\mathcal F}^2$, then $(\star)$ becomes very similar to (SVM), just with cross-entropy loss instead of hinge loss: this is also called kernel logistic regression. My understanding is that the reason SVMs took off in a way kernel logistic regression didn’t is largely due to a slight computational advantage of the former (more amenable to these fancy algorithms), and/or historical accident; there isn’t really a huge difference between the two as a whole, as far as I know. (There is sometimes a big difference between an SVM with a fancy kernel and a plain linear logistic regression, but that’s comparing apples to oranges.) So, what does a deep network using an SVM to classify look like? Well, that could mean some other things, but I think the most natural interpretation is just using $\ell_\mathit{hinge}$ in $(\star)$. One minor issue is that $\ell_\mathit{hinge}$ isn’t differentiable at $\hat y = y$; we could instead use $\ell_\mathit{hinge}^2$, if we want. (Doing this in (SVM) is sometimes called “L2-SVM” or similar names.) Or we can just ignore the non-differentiability; the ReLU activation isn’t differentiable at 0 either, and this usually doesn’t matter. This can be justified via subgradients, although note that the correctness here is actually quite subtle when dealing with deep networks. 
An ICML workshop paper – Tang, Deep Learning using Linear Support Vector Machines, ICML 2013 workshop Challenges in Representation Learning – found using $\ell_\mathit{hinge}^2$ gave small but consistent improvements over $\ell_\mathit{CE}$ on the problems they considered. I’m sure others have tried (squared) hinge loss since in deep networks, but it certainly hasn’t taken off widely. (You have to modify both $\ell_\mathit{CE}$ as I’ve written it and $\ell_\mathit{hinge}$ to support multi-class classification, but in the one-vs-rest scheme used by Tang, both are easy to do.) Another thing that’s sometimes done is to train CNNs in the typical way, but then take the output of a late layer as "features" and train a separate SVM on that. This was common in early days of transfer learning with deep features, but is I think less common now. Something like this is also done sometimes in other contexts, e.g. in meta-learning by Lee et al., Meta-Learning with Differentiable Convex Optimization, CVPR 2019, who actually solved (SVM) on deep network features and backpropped through the whole thing. (They didn't, but you can even do this with a nonlinear kernel in $\mathcal H$; this is also done in some other "deep kernels" contexts.) It’s a very cool approach – one that I've also worked on – and in certain domains this type of approach makes a ton of sense, but there are some pitfalls, and I don’t think it’s very applicable to a typical "plain classification" problem.
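As a concrete illustration of using the (squared) hinge loss as the final-layer criterion, here is a framework-agnostic numpy sketch for binary labels in {-1, +1} (the function and variable names are mine; a real network would backpropagate dloss_dt into the layer that produced t):

    import numpy as np

    def squared_hinge(t, y):
        # squared hinge ("L2-SVM") loss and its derivative w.r.t. the decision values t
        margin = np.maximum(0.0, 1.0 - y * t)
        loss = np.mean(margin ** 2)
        dloss_dt = -2.0 * y * margin / len(t)   # zero for examples whose margin is already satisfied
        return loss, dloss_dt

    t = np.array([2.3, -0.4, 0.1])              # decision values from the network's last layer
    y = np.array([1.0, 1.0, -1.0])
    print(squared_hinge(t, y))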
Why do Convolutional Neural Networks not use a Support Vector Machine to classify?
What is an SVM, anyway? I think the answer for most purposes is “the solution to the following optimization problem”: $$ \begin{split} \operatorname*{arg\,min}_{f \in \mathcal H} \frac{1}{n} \sum_{i=1
Why do Convolutional Neural Networks not use a Support Vector Machine to classify?
Most of the theory behind the support vector machine assumes you are constructing a maximal margin classifier following a fixed transformation of the input space (via a kernel). The theory is less applicable if the fixed transformation has been learned from the data (as would be the case for the lower levels of the deep neural network). This means that there probably isn't much reason to expect a linear SVM to be better than a conventional (regularised) neural layer.
Why do Convolutional Neural Networks not use a Support Vector Machine to classify?
As far as I can see, there are at least a couple of differences:

1. CNNs are designed to work with image data, while the SVM is a more generic classifier;
2. CNNs extract features, while the SVM simply maps its input to some high-dimensional space where (hopefully) the differences between the classes can be revealed;
3. Similar to 2., CNNs are deep architectures while SVMs are shallow;
4. The learning objectives are different: SVMs look to maximize the margin, while CNNs do not (I would love to know more about this).

This being said, SVMs can work as well as CNNs provided good features are used with a good kernel function.
How are regression, the t-test, and the ANOVA all versions of the general linear model?
Consider that they can all be written as a regression equation (perhaps with slightly different interpretations from their traditional forms).

Regression: $$ Y=\beta_0 + \beta_1X_{\text{(continuous)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ t-test: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ ANOVA: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$

The prototypical regression is conceptualized with $X$ as a continuous variable. However, the only assumption that is actually made about $X$ is that it is a vector of known constants. It could be a continuous variable, but it could also be a dummy code (i.e., a vector of $0$'s & $1$'s that indicates whether an observation is a member of an indicated group--e.g., a treatment group). Thus, in the second equation, $X$ could be such a dummy code, and the p-value would be the same as that from a t-test in its more traditional form. The meaning of the betas would differ here, though. In this case, $\beta_0$ would be the mean of the control group (for which the entries in the dummy variable would be $0$'s), and $\beta_1$ would be the difference between the mean of the treatment group and the mean of the control group. Now, remember that it is perfectly reasonable to run an ANOVA with only two groups (although a t-test would be more common), and you have all three connected.

If you prefer to see how it would work with an ANOVA with 3 groups, it would be: $$ Y=\beta_0 + \beta_1X_{\text{(dummy code 1)}} + \beta_2X_{\text{(dummy code 2)}} + \varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, \sigma^2) $$ Note that when you have $g$ groups, you have $g-1$ dummy codes to represent them. The reference group (typically the control group) is indicated by having $0$'s for all dummy codes (in this case, both dummy code 1 & dummy code 2). In this case, you would not want to interpret the p-values of the t-tests for these betas that come with standard statistical output--they only indicate whether the indicated group differs from the control group when assessed in isolation. That is, these tests are not independent. Instead, you would want to assess whether the group means vary by constructing an ANOVA table and conducting an F-test. For what it's worth, the betas are interpreted just as with the t-test version described above: $\beta_0$ is the mean of the control / reference group, $\beta_1$ indicates the difference between the means of group 1 and the reference group, and $\beta_2$ indicates the difference between group 2 and the reference group.

In light of @whuber's comments below, these can also be represented via matrix equations: $$ \bf Y=\bf X\boldsymbol\beta + \boldsymbol\varepsilon $$ Represented this way, $\bf Y$ & $\boldsymbol\varepsilon$ are vectors of length $N$, and $\boldsymbol\beta$ is a vector of length $p+1$. $\bf X$ is now a matrix with $N$ rows and $(p+1)$ columns. In a prototypical regression you have $p$ continuous $X$ variables and the intercept. Thus, your $\bf X$ matrix is composed of a series of column vectors side by side, one for each $X$ variable, with a column of $1$'s on the far left for the intercept. If you are representing an ANOVA with $g$ groups in this way, remember that you would have $g-1$ dummy variables indicating the groups, with the reference group indicated by an observation having $0$'s in each dummy variable.
As above, you would still have an intercept. Thus, $p=g-1$.
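To make the three-group case concrete, here is a minimal R sketch (my addition, with simulated data and made-up group labels) showing that lm() with the two dummy codes described above reproduces the one-way ANOVA F-test:

set.seed(1)
y <- c(rnorm(10, 0), rnorm(10, 0.5), rnorm(10, 1))     # three simulated groups of 10
g <- factor(rep(c("control", "t1", "t2"), each = 10))  # "control" is the reference group

fit <- lm(y ~ g)      # R builds the g - 1 = 2 dummy codes behind the scenes
coef(fit)             # (Intercept) = control mean; gt1, gt2 = differences from control
anova(fit)            # F-test that the group means are equal
summary(aov(y ~ g))   # the same F and p-value from the ANOVA formulation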
How are regression, the t-test, and the ANOVA all versions of the general linear model?
They can all be written as particular cases of the general linear model. The t-test is a two-sample case of ANOVA. If you square the t-test statistic you get the corresponding $F$ in the ANOVA. An ANOVA model is basically just a regression model where the factor levels are represented by dummy (or indicator) variables. So if the model for a t-test is a subset of the ANOVA model and ANOVA is a subset of the multiple regression model, regression itself (and other things besides regression) is a subset of the general linear model, which extends regression to a more general specification of the error term than the usual regression case (which is 'independent' and 'equal-variance'), and to multivariate $Y$.

Here's an example showing the equivalence of the ordinary (equal-variance) two sample-$t$ analysis and a hypothesis test in a regression model, done in R (the actual data looks to be paired, so this isn't really a suitable analysis):

> t.test(extra ~ group, var.equal=TRUE, data = sleep)

        Two Sample t-test

data:  extra by group
t = -1.8608, df = 18, p-value = 0.07919
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -3.363874  0.203874
sample estimates:
mean in group 1 mean in group 2
           0.75            2.33

Note the p-value of 0.079 above. Here's the one way anova:

> summary(aov(extra~group,sleep))
            Df Sum Sq Mean Sq F value Pr(>F)
group        1  12.48  12.482   3.463 0.0792
Residuals   18  64.89   3.605

Now for the regression:

> summary(lm(extra ~ group, data = sleep))
(some output removed)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.7500     0.6004   1.249   0.2276
group2        1.5800     0.8491   1.861   0.0792 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.899 on 18 degrees of freedom
Multiple R-squared:  0.1613,    Adjusted R-squared:  0.1147
F-statistic: 3.463 on 1 and 18 DF,  p-value: 0.07919

Compare the p-value in the 'group2' row, and also the p-value for the F-test in the last row. For a two-tailed test, these are the same and both match the t-test result. Further, the coefficient for 'group2' represents the difference in means for the two groups.
How are regression, the t-test, and the ANOVA all versions of the general linear model?
This answer that I posted earlier is somewhat relevant, but this question is somewhat different. You might want to think about the differences and similarities between the following linear models: $$ \begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ 1 & x_3 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \alpha_1 \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \vdots \\ \varepsilon_n \end{bmatrix} $$ $$ \begin{bmatrix} Y_1 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 0 & 0 & \cdots & 0 \\ \hline 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 1 & 0 & \cdots & 0 \\ \hline 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ \vdots & & & & \vdots \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \vdots \\ \alpha_k \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \vdots \\ \varepsilon_n \end{bmatrix} $$
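If it helps to see these two design matrices concretely, here is a small R illustration (my addition, with made-up data); model.matrix() produces matrices of exactly these two forms:

x <- c(1.2, 0.7, 3.1, 2.4, 1.9)                         # made-up continuous predictor
g <- factor(rep(c("a", "b", "c"), times = c(2, 2, 1)))  # made-up group labels

model.matrix(~ x)      # a column of 1's (intercept) plus x: the first matrix above
model.matrix(~ 0 + g)  # one indicator column per group: the second matrix above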
How are regression, the t-test, and the ANOVA all versions of the general linear model?
ANOVA is similar to a t-test for the equality of means under the assumption of unknown but equal variances among treatments. This is because the MSE in ANOVA is identical to the pooled variance used in the t-test. There are other versions of the t-test, such as the one for unequal variances and the paired t-test. From this point of view, the t-test can be more flexible.
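The MSE/pooled-variance identity is easy to verify numerically; the following R sketch (my addition, with simulated two-group data) is one way to do so:

set.seed(2)
y <- c(rnorm(12, 0), rnorm(15, 1))
g <- factor(rep(c("a", "b"), c(12, 15)))

mse    <- anova(lm(y ~ g))["Residuals", "Mean Sq"]
pooled <- ((12 - 1) * var(y[g == "a"]) + (15 - 1) * var(y[g == "b"])) / (12 + 15 - 2)
all.equal(mse, pooled)   # TRUE: the ANOVA error mean square is the pooled variance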
Measuring entropy/ information/ patterns of a 2d binary matrix
There is a simple procedure that captures all the intuition, including the psychological and geometrical elements. It relies on using spatial proximity, which is the basis of our perception and provides an intrinsic way to capture what is only imperfectly measured by symmetries. To do this, we need to measure the "complexity" of these arrays at varying local scales. Although we have much flexibility to choose those scales and choose the sense in which we measure "proximity," it is simple enough and effective enough to use small square neighborhoods and to look at averages (or, equivalently, sums) within them. To this end, a sequence of arrays can be derived from any $m$ by $n$ array by forming moving neighborhood sums using $k=2$ by $2$ neighborhoods, then $3$ by $3$, etc, up to $\min(n,m)$ by $\min(n,m)$ (although by then there are usually too few values to provide anything reliable). To see how this works, let's do the calculations for the arrays in the question, which I will call $a_1$ through $a_5$, from top to bottom. Here are plots of the moving sums for $k=1,2,3,4$ ($k=1$ is the original array, of course) applied to $a_1$. Clockwise from the upper left, $k$ equals $1$, $2$, $4$, and $3$. The arrays are $5$ by $5$, then $4$ by $4$, $2$ by $2$, and $3$ by $3$, respectively. They all look sort of "random." Let's measure this randomness with their base-2 entropy. When an array $a$ contains various distinct values with proportions $p_1,$ $p_2,$ etc., its entropy (by definition) is $$H(a) = -p_1\log_2(p_1) - p_2\log_2(p_2) - \cdots$$ For instance, array $a_1$ has ten black cells and 15 white cells, whence they are in proportions of $10/25$ and $15/25,$ respectively. Its entropy therefore is $$H(a_1) = -(10/25)\log_2(10/25) - (15/25)\log_2(15/25) \approx 0.970951.$$ For $a_1$, the sequence of these entropies for $k=1,2,3,4$ is $(0.97, 0.99, 0.92, 1.5)$. Let's call this the "profile" of $a_1$. Here, in contrast, are the moving sums of $a_4$: For $k=2, 3, 4$ there is little variation, whence low entropy. The profile is $(1.00, 0, 0.99, 0)$. Its values are consistently close to or lower than the values for $a_1$, confirming the intuitive sense that there is a strong "pattern" present in $a_4$. We need a frame of reference for interpreting these profiles. A perfectly random array of binary values will have just about half its values equal to $0$ and the other half equal to $1$, for an entropy close to $1$. The moving sums within $k$ by $k$ neighborhoods will tend to have binomial distributions, giving them predictable entropies (at least for large arrays) that can be approximated by $1 + \log_2(k)$: These results are borne out by simulation with arrays up to $m=n=100$. However, they break down for small arrays (such as the $5$ by $5$ arrays here) due to correlation among neighboring windows (once the window size is about half the dimensions of the array) and due to the small amount of data. Here is a reference profile of random $5$ by $5$ arrays generated by simulation along with plots of some actual profiles: In this plot the reference profile is solid blue. The array profiles correspond to $a_1$: red, $a_2$: gold, $a_3$: green, $a_4$: light blue. (Including $a_5$ would obscure the picture because it is close to the profile of $a_4$.) Overall the profiles correspond to the ordering in the question: they get lower at most values of $k$ as the apparent ordering increases. The exception is $a_1$: until the end, for $k=4$, its moving sums tend to have among the lowest entropies. 
This reveals a surprising regularity: every $2$ by $2$ neighborhood in $a_1$ has exactly $1$ or $2$ black squares, never any more or less. It's much less "random" than one might think. (This is partly due to the loss of information that accompanies summing the values in each neighborhood, a procedure that condenses $2^{k^2}$ possible neighborhood configurations into just $k^2+1$ different possible sums. If we wanted to account specifically for the clustering and orientation within each neighborhood, then instead of using moving sums we would use moving concatenations. That is, each $k$ by $k$ neighborhood has $2^{k^2}$ possible different configurations; by distinguishing them all, we can obtain a finer measure of entropy. I suspect that such a measure would elevate the profile of $a_1$ compared to the other images.) This technique of creating a profile of entropies over a controlled range of scales, by summing (or concatenating or otherwise combining) values within moving neighborhoods, has been used in analysis of images. It is a two-dimensional generalization of the well-known idea of analyzing text first as a series of letters, then as a series of digraphs (two-letter sequences), then as trigraphs, etc. It also has some evident relations to fractal analysis (which explores properties of the image at finer and finer scales). If we take some care to use a block moving sum or block concatenation (so there are no overlaps between windows), one can derive simple mathematical relationships among the successive entropies; however, I suspect that using the moving window approach may be more powerful and is a little less arbitrary (because it does not depend on precisely how the image is divided into blocks). Various extensions are possible. For instance, for a rotationally invariant profile, use circular neighborhoods rather than square ones. Everything generalizes beyond binary arrays, of course. With sufficiently large arrays one can even compute locally varying entropy profiles to detect non-stationarity. If a single number is desired, instead of an entire profile, choose the scale at which the spatial randomness (or lack thereof) is of interest. In these examples, that scale would correspond best to a $3$ by $3$ or $4$ by $4$ moving neighborhood, because for their patterning they all rely on groupings that span three to five cells (and a $5$ by $5$ neighborhood just averages away all variation in the array and so is useless). At the latter scale, the entropies for $a_1$ through $a_5$ are $1.50$, $0.81$, $0$, $0$, and $0$; the expected entropy at this scale (for a uniformly random array) is $1.34$. This justifies the sense that $a_1$ "should have rather high entropy." To distinguish $a_3$, $a_4$, and $a_5$, which are tied with $0$ entropy at this scale, look at the next finer resolution ($3$ by $3$ neighborhoods): their entropies are $1.39$, $0.99$, $0.92$, respectively (whereas a random grid is expected to have a value of $1.77$). By these measures, the original question puts the arrays in exactly the right order.
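The profile construction is easy to prototype. Here is a rough R sketch (mine, not the answer's code), assuming overlapping square windows and the base-2 entropy defined above:

entropy <- function(x) {                       # base-2 entropy of the values in x
  p <- table(x) / length(x)
  -sum(p * log2(p))
}

moving_sums <- function(a, k) {                # k-by-k moving neighborhood sums
  m <- nrow(a); n <- ncol(a)
  out <- matrix(0, m - k + 1, n - k + 1)
  for (i in 1:(m - k + 1))
    for (j in 1:(n - k + 1))
      out[i, j] <- sum(a[i:(i + k - 1), j:(j + k - 1)])
  out
}

entropy_profile <- function(a, kmax = min(dim(a)) - 1)
  sapply(1:kmax, function(k) entropy(moving_sums(a, k)))

set.seed(3)
a <- matrix(rbinom(25, 1, 0.4), 5, 5)          # a random 5x5 binary array
round(entropy_profile(a), 2)                   # entropies for k = 1, 2, 3, 4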
Measuring entropy/ information/ patterns of a 2d binary matrix
First, my suggestion is purely intuitive: I know nothing about the pattern recognition field. Second, dozens of alternative suggestions like mine could be made. I start with the idea that a regular configuration (that is, one with low entropy) should be somehow symmetric, i.e. isomorphic to one or another of its transforms--for example, under rotation. You could rotate your matrix (by 90 degrees, then 180 degrees, etc.) until the configuration coincides with the original one. It will always coincide after 4 rotations (360 degrees), but sometimes it coincides earlier (like matrix E in the picture). At each rotation, count the number of cells whose values differ between the original configuration and the rotated one. For example, if you compare the original matrix A with its 90-degree rotation, you'll find 10 cells where there is a spot in one matrix and a blank in the other. Comparing the original matrix with its 180-degree rotation, 11 such cells are found, and 10 cells is the discrepancy between the original matrix A and its 270-degree rotation. Then 10+11+10=31 is the overall "entropy" of matrix A. For matrix B the "entropy" is 20, and for matrix E it is only 12. For matrices C and D the "entropy" is 0, because the rotations stop after 90 degrees: isomorphism is attained already.
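For concreteness, here is a small R sketch of this rotation-mismatch score (my own reading of the procedure, not the answerer's code); it assumes a square binary matrix:

rotate90 <- function(a) t(a[nrow(a):1, ])          # rotate 90 degrees clockwise

rotation_score <- function(a) {
  r90  <- rotate90(a)
  r180 <- rotate90(r90)
  r270 <- rotate90(r180)
  sum(a != r90) + sum(a != r180) + sum(a != r270)  # total number of mismatched cells
}

set.seed(4)
a <- matrix(rbinom(25, 1, 0.4), 5, 5)
rotation_score(a)   # 0 for a configuration fully symmetric under 90-degree rotation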
Measuring entropy/ information/ patterns of a 2d binary matrix
Information content is commonly defined as $h(x) = -\log p(x)$. There is some nice theory explaining that $-\log_2 p(x)$ is the number of bits you need to code $x$ using $p$. If you want to know more about this, read up on arithmetic coding. So how can that solve your problem? Easy. Find some $p$ that represents your data and use $-\log p(x)$, where $x$ is a new sample, as a measure of the surprise or information of encountering it. The hard part is to find a model $p$ that could have generated your data. Maybe you can come up with an algorithm that generates matrices which you deem 'probable'. Some ideas for fitting $p$:

- If you are only looking at 5x5 matrices, there are only $2^{25}$ possible matrices, so you can just enumerate all of them and assign a probability to each.
- Use a restricted Boltzmann machine to fit your data (then you'd have to use the free energy as a substitute for information, but that's okay).
- Use zip compression as a substitute for $-\log p(x)$ and don't worry about the whole probability story above. This is even formally okay, because zip works as an approximation to Kolmogorov complexity; this has been done by information theorists as well, leading to the normalized compression distance.
- Maybe use a graphical model to include spatial prior beliefs and use Bernoulli variables locally. To encode translational invariance, you could use an energy-based model with a convolutional network.

Some of the ideas above are quite heavy and come from machine learning. In case you want further advice, just use the comments.
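As a concrete illustration of the "use zip as a substitute" idea, here is a hedged R sketch (my addition) using base R's memCompress; for tiny 5x5 matrices the compressor overhead dominates, so the example uses 20x20 arrays:

compressed_size <- function(a) {                   # gzip length of the matrix written as a 0/1 string
  s <- charToRaw(paste(as.vector(a), collapse = ""))
  length(memCompress(s, type = "gzip"))
}

set.seed(5)
random_a  <- matrix(rbinom(400, 1, 0.5), 20, 20)                  # unstructured
striped_a <- matrix(rep(c(0, 1), each = 20, times = 10), 20, 20)  # highly regular stripes
compressed_size(random_a)    # larger: harder to compress
compressed_size(striped_a)   # smaller: the regularity is exploited by the compressor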
Measuring entropy/ information/ patterns of a 2d binary matrix
My following proposal is more intuited than deduced, so I cannot prove it, but I can at least offer some rationale. The procedure for assessing the "entropy" of the configuration of spots is:

1. Digitize the spots, that is, take their coordinates.
2. Compare the configuration with itself permuted, many times, by orthogonal Procrustes analysis.
3. Plot the results of the comparisons (the identity coefficient) and assess the jaggedness of the plot.

Digitize the spots. For example, below is your configuration D with numbered spots (the numbering order may be arbitrary) and their coordinates.

spot  x  y
   1  1  1
   2  3  1
   3  5  1
   4  2  2
   5  4  2
   6  1  3
   7  3  3
   8  5  3
   9  2  4
  10  4  4
  11  1  5
  12  3  5
  13  5  5

Do permutations and perform Procrustes analysis. Permute the spots (rows in the data) randomly and perform a Procrustes comparison of the original (not permuted) data with the permuted one; record the identity coefficient (a measure of similarity of the two configurations, output by the analysis). Repeat the permutation - Procrustes - record-the-coefficient cycle many times (e.g. 1000 times or more). What can we expect from the identity coefficients (IDc) obtained after the above operation on a regular structure? Consider for example the above configuration D. If we compare the original coordinate set with itself, we get IDc=1, of course. But if we permute some spots, the IDc between the original set and the permuted one will be some value below 1. Let us permute, just for example, one pair of spots, labeled 1 and 4. IDc=.964. Now, instead, permute spots 3 and 5. Interestingly, the IDc will be .964 again. The same value--why? Spots 3 and 5 are symmetric to 1 and 4, so that a rotation by 90 degrees superposes them. Procrustes comparison is insensitive to rotation or reflection, and therefore the permutation within pair 1-4 is the "same" as the permutation within pair 5-3, as far as it is concerned. To add one more example: if you permute just spots 4 and 7, the IDc will again be .964! It appears that for Procrustes, the permutation within pair 4-7 is the "same" as the above two, in the sense that it gives the same degree of similarity (as measured by IDc). Obviously, this is all because configuration D is regular. For a regular configuration we expect to obtain rather discrete values of IDc in our permutation/comparison experiment, while for an irregular configuration we expect the values to be more continuous. Plot the recorded IDc values. For example, sort the values and make a line plot. I did the experiment--5000 permutations--with each of your configurations A, B (both quite irregular), D, E (both regular), and here's the line plot: Note how much more jagged lines D and E are (D especially). This is because of the discreteness of the values. The values for A and B are much more continuous. You could pick some kind of statistic that estimates the degree of discreteness/continuity, instead of plotting. A seems no more continuous than B (for you, configuration A is somewhat less regular, but my line plot seems not to demonstrate it), or, if not that, it maybe shows a somewhat different pattern of IDc values. What other pattern? That is beyond the scope of my answer. The big question is whether A is indeed less regular than B: it may be so for your eye, but not necessarily for Procrustes analysis or for another person's eye. By the way, I did the whole permutation/Procrustes experiment very quickly, using my own Procrustes analysis macro for SPSS (found on my web page), with a few added lines of code to do the permutations.
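A rough way to re-create this experiment in R (the answer used a custom SPSS macro, so this is only an approximation) is with the vegan package's procrustes(); here 1 minus the symmetric Procrustes sum of squares stands in for the identity coefficient:

library(vegan)   # assumes the vegan package is installed

xy <- cbind(x = c(1, 3, 5, 2, 4, 1, 3, 5, 2, 4, 1, 3, 5),   # configuration D,
            y = c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5))   # from the table above

similarity <- function(a, b) 1 - procrustes(a, b, symmetric = TRUE)$ss

set.seed(8)
sims <- replicate(5000, similarity(xy, xy[sample(nrow(xy)), ]))
plot(sort(sims), type = "l")   # stepped/jagged for regular configurations, smoother for irregular ones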
Measuring entropy/ information/ patterns of a 2d binary matrix
Mutual information--treating each dimension as a random variable, and thus each matrix as a set of pairs of numbers--should help in all cases except for C, where I am not sure of the result. See the discussion around Fig. 8 (starting on p. 24) on regression performance analysis in the TMVA manual, or the corresponding arXiv entry.
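One way to make this concrete (my own sketch, not from the answer) is to estimate the empirical mutual information between the row and column coordinates of the black cells:

mutual_info <- function(a) {
  idx <- which(a == 1, arr.ind = TRUE)           # (row, column) coordinates of the black cells
  pxy <- table(idx[, 1], idx[, 2]) / nrow(idx)   # empirical joint distribution
  px  <- rowSums(pxy)
  py  <- colSums(pxy)
  sum(pxy * log2(pxy / outer(px, py)), na.rm = TRUE)
}

d <- matrix(0, 5, 5)                                # configuration D, coordinates as tabulated
d[cbind(c(1, 3, 5, 2, 4, 1, 3, 5, 2, 4, 1, 3, 5),   # in the Procrustes answer above
        c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5))] <- 1
set.seed(6)
r <- matrix(rbinom(25, 1, 0.5), 5, 5)               # a random pattern for comparison
mutual_info(d)   # row parity predicts column parity in D
mutual_info(r)   # note: empirical MI is biased upward for such small samples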
Measuring entropy/ information/ patterns of a 2d binary matrix
Instead of looking at global properties of the pattern (like symmetries), one can look at local ones, e.g. the number of neighbors each stone (= black circle) has. Let's denote the total number of stones by $s$. If the stones were thrown at random, the distribution of neighbors is $$P_{rand,p}(k\ \text{neighbors}|n\ \text{places} ) = {n \choose k} p^{k} (1-p)^{n-k},$$ where $p = s/25$ is the density of stones. The number of places $n$ depends on whether a stone is in the interior ($n=8$), on an edge ($n=5$) or in a corner ($n=3$). It is clearly visible that the distributions of neighbors in C), D) and E) are far from random. For example, for D) all interior stones have exactly $4$ neighbors (as opposed to the random distribution, which yields $\approx (0\%,2\%,9\%,20\%,27\%,24\%,13\%,4\%,0\%)$ instead of the measured $(0\%,0\%,0\%,0\%,100\%,0\%,0\%,0\%,0\%)$). So to quantify whether a pattern is random, you need to compare its distribution of neighbors $P_{measured}(k|n)$ with the random one $P_{rand,p}(k|n)$. For example, you can compare their means and variances. Alternatively, one can measure their distance in function space, e.g.: $$\sum_{n\in\{3,5,8\}} \sum_{k=0}^n\left[P_{measured}(k|n)P_{measured}(n) -P_{rand,p}(k|n)P_{rand,p}(n)\right]^2,$$ where $P_{measured}(n)$ is the measured proportion of points with $n$ adjacent places and $P_{rand,p}(n)$ is that predicted for a random pattern, i.e. $P_{rand,p}(3) = 4/25$, $P_{rand,p}(5) = 12/25$ and $P_{rand,p}(8) = 9/25$.
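Here is a short R sketch (my addition) that tabulates the neighbor counts for a binary matrix, so that they can be compared with the binomial reference distribution above:

neighbor_counts <- function(a) {
  m <- nrow(a); n <- ncol(a)
  idx <- which(a == 1, arr.ind = TRUE)
  apply(idx, 1, function(p) {
    i <- p[1]; j <- p[2]
    rows <- max(1, i - 1):min(m, i + 1)
    cols <- max(1, j - 1):min(n, j + 1)
    sum(a[rows, cols]) - 1   # stones in the (up to) 3x3 block, excluding the stone itself
  })
}

d <- matrix(0, 5, 5)                                 # configuration D again
d[cbind(c(1, 3, 5, 2, 4, 1, 3, 5, 2, 4, 1, 3, 5),
        c(1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5))] <- 1
table(neighbor_counts(d))   # every interior stone of D has exactly 4 neighbors;
                            # compare with dbinom(k, n, s/25) using n = 8, 5 or 3 as in the text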
Measuring entropy/ information/ patterns of a 2d binary matrix
There is a really simple way to conceptualize the information content that harks back to Shannon's (admittedly one dimensional) idea using probabilities and transition probabilities to find a least redundant representation of a text string. For an image (in this particular case a binary image defined on a square matrix) we can uniquely reconstruct from a knowledge of the x and y derivatives (-1,0,+1). We can define a 3x3 transition probability and a global probability density function, also 3x3. The Shannon information is then obtained from the classic logarithmic summation formula applied over 3x3. This is a second order Shannon information measure and nicely captures the spatial structure in the 3x3 pdf. This approach is more intuitive when applied to grayscale images with more than 2 (binary) levels, see https://arxiv.org/abs/1609.01117 for more details.
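The construction can be sketched in a few lines of R (this is my reading of the idea; see the linked paper for the exact definitions used there):

gradient_entropy <- function(a) {
  dx <- a[, -1] - a[, -ncol(a)]        # horizontal first differences, values in {-1, 0, 1}
  dy <- a[-1, ] - a[-nrow(a), ]        # vertical first differences, values in {-1, 0, 1}
  k  <- min(nrow(dx), nrow(dy))        # crop both to the common (m-1) x (n-1) grid
  l  <- min(ncol(dx), ncol(dy))
  p  <- table(dx[1:k, 1:l], dy[1:k, 1:l]) / (k * l)   # joint pdf of the (dx, dy) pairs (at most 3x3 cells)
  -sum(p[p > 0] * log2(p[p > 0]))      # second-order Shannon entropy of the gradient pairs
}

set.seed(7)
a <- matrix(rbinom(25, 1, 0.5), 5, 5)
gradient_entropy(a)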
Measuring entropy/ information/ patterns of a 2d binary matrix
In reading this, two things come to mind. The first is that a lot of the gestalt properties are quite challenging to predict, and a lot of PhD level work goes into trying to figure out models for how groupings take place. My instinct is that most easy rules that you could think of will end up with counter examples.

If you can put aside the description of gestalt groupings for now, I think a helpful abstraction is to think of your input as a special case of an image. There are a lot of algorithms in computer vision that aim to assign a signature to an image based on a set of features that are scale invariant and feature invariant. I think the most well known are the SIFT features: http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

Basically your output will be a new vector which gives the weights for these features. You could use this vector and either apply a heuristic to it (find the norm, perhaps) and hope that it describes what you're looking for. Alternatively, you could train a classifier to take the feature vector as input and just tell it what your impression of its 'entropy' is. The upside of this is that it will use the appropriate SIFT features (which are definitely overkill for your problem) and construct some sort of mapping that may very well be appropriate. The downside is that you have to do a lot of that labeling yourself, and what you get may be tougher to interpret, depending on the classifier that you use.

I hope this is helpful! A lot of traditional computer vision algorithms may also be appropriate for you here - a quick browse through wikipedia in that portal may give you some additional insight.
3,950
Measuring entropy/ information/ patterns of a 2d binary matrix
Your examples remind me of truth tables from boolean algebra and digital circuits. In this realm, Karnaugh maps (http://en.wikipedia.org/wiki/Karnaugh_map) can be used as a tool to provide the minimal boolean function to express the entire grid. Alternatively, using boolean algebra identities can help to reduce the function to its minimal form. Counting the number of terms in the minimized boolean function could be used as your entropy measure. This gives you vertical and horizontal symmetry along with compressing adjacent neighbors, but lacks diagonal symmetry. Using boolean algebra, both axes are labelled from A-E starting at the upper left corner. In this manner, example C would map to the boolean function (!A & !E). For other examples, the axes would need to be labelled separately (i.e. A-E, F-J).
3,951
Measuring entropy/ information/ patterns of a 2d binary matrix
I would point out the rank of the matrix used in binary matrix factorization as an indicator of the entropy. Although exact computation is NP-hard, the rank can be estimated in O(log2n) time. I would also point out that the method of comparing with the 3 rotations and 4 reflections has a real flaw. For a matrix with an odd number of rows/columns, there will be a center row or column which overlaps with the original data under rotations/reflections, which will cause the entropy estimate to be lowered. Additionally, for the reflections at the 90 and 270 degree positions, all of the diagonal elements will overlap with the original, also lowering the entropy. This loss should all be accounted for.
3,952
Are all values within a 95% confidence interval equally likely?
One question that needs to be answered is what does "likely" mean in this context?

If it means probability (as it is sometimes used as a synonym for) and we are using strict frequentist definitions, then the true parameter value is a single value that does not change, so the probability (likelihood) of that point is 100% and all other values are 0%. So almost all are equally likely at 0%, but if the interval contains the true value, then it is different from the others.

If we use a Bayesian approach then the CI (credible interval) comes from the posterior distribution and you can compare the likelihood at the different points within the interval. Unless the posterior is perfectly uniform within the interval (theoretically possible I guess, but that would be a strange circumstance), the values have different likelihoods.

If we use likely to be similar to confidence then think about it this way: compute a 95% confidence interval, a 90% confidence interval, and an 85% confidence interval. We would be 5% confident that the true value lies in the region inside of the 95% interval but outside of the 90% interval, so we could say that the true value is 5% likely to fall in that region. The same is true for the region that is inside the 90% interval but outside the 85% interval. So if every value were equally likely, then the sizes of the above 2 regions would need to be exactly the same, and the same would hold true for the region inside a 10% confidence interval but outside a 5% confidence interval. None of the standard distributions that intervals are constructed from have this property (except special cases with 1 draw from a uniform).

You could further prove this to yourself by simulating a large number of datasets from known populations, computing the confidence interval of interest, then comparing how often the true parameter is closer to the point estimate than to each of the end points.
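A quick version of the simulation suggested in the last paragraph (a sketch with my own choices of population and sample size, here a normal mean with $n = 20$):

    set.seed(1)
    mu <- 5; sigma <- 2; n <- 20
    res <- replicate(10000, {
      x   <- rnorm(n, mu, sigma)
      ci  <- t.test(x)$conf.int
      est <- mean(x)
      covered <- mu >= ci[1] && mu <= ci[2]
      nearer  <- abs(mu - est) < min(abs(mu - ci[1]), abs(mu - ci[2]))
      c(covered = covered, nearer_centre = covered && nearer)
    })
    mean(res["nearer_centre", ]) / mean(res["covered", ])
    # among intervals that cover the truth, the true mean is closer to the point
    # estimate than to the nearer endpoint well over half the time, not the
    # roughly 50% you would expect if every covered value were equally plausible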
3,953
Are all values within a 95% confidence interval equally likely?
Suppose someone told me that I should place equal trust in all values within a CI95 as potential indicators of the population value. (I'm deliberately avoiding the terms "likely" and "probable.") What's special about 95? Nothing: to be consistent I would also have to place equal trust in all values within a CI96, a CI97, ... and a CI99.9999999. As the CI's coverage approached its limit, virtually all real numbers would have to be included. The preposterousness of this conclusion would lead me to reject the initial claim.
3,954
Are all values within a 95% confidence interval equally likely?
This is a great question! There is a mathematical concept called likelihood that will help you understand the issues. Fisher invented likelihood but considered it to be somewhat less desirable than probability; however, likelihood turns out to be more 'primitive' than probability, and Ian Hacking (1965) considered it to be axiomatic in that it is not provable. Likelihood underpins probability rather than the reverse. Hacking, 1965. Logic of Statistical Inference.

Likelihood is not given the attention that it should have in standard textbooks of statistics, for no good reason. It differs from probability in having almost exactly the properties that one would expect, and likelihood functions and intervals are very useful for inference. Perhaps likelihood is not liked by some statisticians because there is sometimes no 'proper' way to derive the relevant likelihood functions. However, in many cases the likelihood functions are obvious and well defined. A study of likelihoods for inference should probably start with Richard Royall's small and easy-to-understand book called Statistical Evidence: a Likelihood Paradigm.

The answer to your question is that no, the points within any interval do not all have the same likelihood. Those at the edges of a confidence interval usually have lower likelihoods than others towards the centre of the interval.

Of course, the conventional confidence interval tells you nothing directly about the parameter relevant to the particular experiment. Neyman's confidence intervals are 'global' in that they are designed to have long-run properties rather than 'local' properties relevant to the experiment in hand. (Happily, good long-run performance can be interpreted locally, but that is an intellectual shortcut rather than a mathematical reality.) Likelihood intervals—in the cases where they can be constructed—directly reflect the likelihood relating to the experiment in hand. There is less about likelihood intervals that is confusing than is the case for confidence intervals, in my opinion, and they have more utility than might be expected from their almost complete disuse!
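To see the point about the edges numerically, here is a small R sketch (my own toy example, a normal mean with the sample standard deviation simply plugged in): the relative likelihood is 1 at the point estimate and drops to around 0.1–0.15 at the endpoints of the 95% confidence interval.

    set.seed(2)
    x  <- rnorm(30, mean = 10, sd = 3)
    ci <- t.test(x)$conf.int
    mu.grid <- seq(ci[1], ci[2], length.out = 5)   # endpoints, quarter points, centre
    loglik  <- sapply(mu.grid, function(m) sum(dnorm(x, m, sd(x), log = TRUE)))
    rel.lik <- exp(loglik - max(loglik))           # relative likelihood, 1 at the MLE
    round(rbind(mu = mu.grid, rel.lik = rel.lik), 3)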
3,955
Are all values within a 95% confidence interval equally likely?
Let's start with the definition of a confidence interval. If I say that a 95% confidence interval goes from this to that, I mean that statements of that nature will be true about 95% of the time and false about 5% of the time. I do not necessarily mean that I am 95% confident about this particular statement. A 90% confidence interval will be narrower and an 80% interval narrower still. Therefore, when wondering what the true value is, I have less credence in values as they get closer and closer to the edge of any particular confidence interval.

Note that all of the above is qualitative, especially "credence". (I avoided the terms "confidence" and "likelihood" in that statement because they carry mathematical baggage that may differ from our intuitive baggage.) Bayesian approaches would rephrase your question as something that has a quantitative answer, but I don't want to open that can of worms here.

Box, Hunter & Hunter's classic text ("Statistics for Experimenters", Wiley, 1978) may also help. See "Sets of Confidence Intervals" on pp 113, ff.
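The narrowing of the intervals is easy to check directly in R (using an arbitrary simulated sample of my own):

    set.seed(3)
    x <- rnorm(25, mean = 50, sd = 10)
    sapply(c(0.80, 0.90, 0.95),
           function(level) t.test(x, conf.level = level)$conf.int)
    # each column is one interval: the 80% interval sits strictly inside the 90%
    # interval, which sits inside the 95% interval, so values near the edges of
    # the 95% interval are the first to be dropped as the confidence level falls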
3,956
What is quasi-binomial distribution (in the context of GLM)?
The difference between the binomial distribution and the quasi-binomial can be seen in their probability density functions (pdf), which characterize these distributions.

Binomial pdf: $$P(X=k)={n \choose k}p^{k}(1-p)^{n-k}$$

Quasi-binomial pdf: $$P(X=k)={n \choose k}p(p+k\phi)^{k-1}(1-p-k\phi)^{n-k}$$

The quasi-binomial distribution, while similar to the binomial distribution, has an extra parameter $\phi$ (limited to $|\phi| \le \min\{p/n, (1-p)/n\}$) that attempts to describe additional variance in the data that cannot be explained by a binomial distribution alone. (Note that the mean of the quasi-binomial distribution is $p \sum_{i=0}^n \frac{n!\,\phi^i}{(n-i)!}$ rather than $p$ itself.) I am not sure on this one; perhaps the glm function in R adds weights in the quasibinomial mode in order to account for this?

The purpose of the extra parameter $\phi$ is to estimate extra variance in the data. Every generalized linear model (GLM) makes a distributional assumption for the outcome/response and maximizes the likelihood of the data based on this distribution. It is a choice the analyst makes, and if you feel you need to account for more variance in your data, then you can choose the quasi-binomial distribution to model the response for your glm. A good way to test whether we need to fit a quasi-binomial model instead of a binomial one is to fit the quasi-binomial model and test whether the $\phi$ parameter is 0.
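In R, the practical counterpart of this is to fit the same model with family = binomial and family = quasibinomial and compare the estimated dispersion (note that R's quasibinomial family parametrizes the extra variation as a multiplicative dispersion factor, with 1 meaning "no overdispersion", rather than the additive $\phi$ in the pdf above). A sketch with simulated overdispersed proportions of my own:

    set.seed(4)
    n  <- 50                                          # trials per observation
    x  <- runif(200)
    p  <- plogis(-1 + 2 * x + rnorm(200, sd = 0.8))   # extra-binomial noise
    y  <- rbinom(200, n, p)
    fit.bin   <- glm(cbind(y, n - y) ~ x, family = binomial)
    fit.quasi <- glm(cbind(y, n - y) ~ x, family = quasibinomial)
    summary(fit.bin)$dispersion      # fixed at 1 by the binomial family
    summary(fit.quasi)$dispersion    # estimated, well above 1 for these data

The coefficient estimates are identical in the two fits; only the standard errors (and hence the tests) change.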
3,957
What is quasi-binomial distribution (in the context of GLM)?
The quasi-binomial isn't necessarily a particular distribution; it describes a model for the relationship between variance and mean in generalized linear models which is $\phi$ times the variance for a binomial, in terms of the mean for a binomial. There is a distribution that fits such a specification (the obvious one - a scaled binomial), but that's not necessarily the aim when a quasi-binomial model is fitted; if you're fitting to data that's still 0-1 it can't be a scaled binomial. So the quasi-binomial variance model, via the $\phi$ parameter, can better deal with data for which the variance is larger (or, perhaps, smaller) than you'd get with binomial data, while not necessarily being an actual distribution at all.

Regarding "When the response variable is a proportion (example values include 0.23, 0.11, 0.78, 0.98), a quasibinomial model will run in R but a binomial model will not": to my recollection a binomial model can be run in R with proportions*, but you have to have it set up right.

* there are three separate ways to give binomial data to R that I'm aware of. I am pretty sure that's one.
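For reference, a small sketch of the three usual ways of supplying binomial data to glm() in R (the toy data and variable names are mine):

    set.seed(5)
    size <- 20
    x <- runif(100)
    k <- rbinom(100, size, plogis(-1 + 2 * x))

    # 1. two-column matrix of (successes, failures)
    fit1 <- glm(cbind(k, size - k) ~ x, family = binomial)
    # 2. proportions as the response, with the number of trials as weights
    fit2 <- glm(I(k / size) ~ x, family = binomial, weights = rep(size, 100))
    # 3. a 0/1 (or two-level factor) response, one row per trial
    y01  <- rbinom(100, 1, plogis(-1 + 2 * x))
    fit3 <- glm(y01 ~ x, family = binomial)

    cbind(coef(fit1), coef(fit2))   # forms 1 and 2 give identical estimates

The proportion-plus-weights form (2) is the one relevant to the quoted concern: with appropriate weights, a plain binomial fit to proportions runs fine.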
3,958
Optimal number of folds in $K$-fold cross-validation: is leave-one-out CV always the best choice?
Leave-one-out cross-validation does not generally lead to better performance than K-fold, and is more likely to be worse, as it has a relatively high variance (i.e. its value changes more for different samples of data than the value for k-fold cross-validation). This is bad in a model selection criterion as it means the model selection criterion can be optimised in ways that merely exploit the random variation in the particular sample of data, rather than making genuine improvements in performance, i.e. you are more likely to over-fit the model selection criterion. The reason leave-one-out cross-validation is used in practice is that for many models it can be evaluated very cheaply as a by-product of fitting the model.

If computational expense is not primarily an issue, a better approach is to perform repeated k-fold cross-validation, where the k-fold cross-validation procedure is repeated with different random partitions into k disjoint subsets each time. This reduces the variance.

If you have only 20 patterns, it is very likely that you will experience over-fitting the model selection criterion, which is a much neglected pitfall in statistics and machine learning (shameless plug: see my paper on the topic). You may be better off choosing a relatively simple model and try not to optimise it very aggressively, or adopt a Bayesian approach and average over all model choices, weighted by their plausibility. IMHO optimisation is the root of all evil in statistics, so it is better not to optimise if you don't have to, and to optimise with caution whenever you do.

Note also if you are going to perform model selection, you need to use something like nested cross-validation if you also need a performance estimate (i.e. you need to consider model selection as an integral part of the model fitting procedure and cross-validate that as well).
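Repeated k-fold cross-validation is easy to code by hand; here is a bare-bones R sketch (no packages; the function name and toy data are my own):

    repeated_kfold_mse <- function(formula, data, k = 5, reps = 20) {
      n <- nrow(data)
      yname <- all.vars(formula)[1]
      mse <- replicate(reps, {
        fold <- sample(rep(1:k, length.out = n))   # a fresh random partition each repeat
        mean(sapply(1:k, function(j) {
          fit  <- lm(formula, data = data[fold != j, ])
          pred <- predict(fit, newdata = data[fold == j, ])
          mean((data[[yname]][fold == j] - pred)^2)
        }))
      })
      c(mean = mean(mse), sd = sd(mse))   # averaging over repeats reduces the variance
    }

    set.seed(6)
    d <- data.frame(x = runif(100))
    d$y <- 1 + 2 * d$x + rnorm(100)
    repeated_kfold_mse(y ~ x, d, k = 5, reps = 20)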
3,959
Optimal number of folds in $K$-fold cross-validation: is leave-one-out CV always the best choice?
Choosing the number of folds $K$ by considering the learning curve

I would like to argue that choosing the appropriate number of folds $K$ depends a lot on the shape and position of the learning curve, mostly due to its impact on the bias. This argument, which extends to leave-one-out CV, is largely taken from the book "Elements of Statistical Learning", chapter 7.10, page 243. For discussions on the impact of $K$ on the variance see here.

To summarize, if the learning curve has a considerable slope at the given training set size, five- or tenfold cross-validation will overestimate the true prediction error. Whether this bias is a drawback in practice depends on the objective. On the other hand, leave-one-out cross-validation has low bias but can have high variance.

An intuitive visualization using a toy example

To understand this argument visually, consider the following toy example where we are fitting a degree-4 polynomial to a noisy sine curve. Intuitively and visually, we expect this model to fare poorly for small datasets due to overfitting. This behaviour is reflected in the learning curve, where we plot $1 -$ Mean Square Error vs training size together with $\pm$ 1 standard deviation. (Note that I chose to plot 1 - MSE here to reproduce the illustration used in ESL page 243.)

Discussing the argument

The performance of the model improves significantly as the training size increases to 50 observations. Increasing the number further to 200, for example, brings only small benefits. Consider the following two cases:

- If our training set had 200 observations, $5$-fold cross-validation would estimate the performance over a training size of 160, which is virtually the same as the performance for training set size 200. Thus cross-validation would not suffer from much bias, and increasing $K$ to larger values will not bring much benefit (left-hand plot).
- However, if the training set had $50$ observations, $5$-fold cross-validation would estimate the performance of the model over training sets of size 40, and from the learning curve this would lead to a biased result. Hence increasing $K$ in this case will tend to reduce the bias (right-hand plot).

[Update] - Comments on the methodology

You can find the code for this simulation here. The approach was the following (a condensed sketch is given after this list):

- Generate 50,000 points from the distribution $\sin(x) + \epsilon$, where the true variance of $\epsilon$ is known
- Iterate $i$ times (e.g. 100 or 200 times); at each iteration, change the dataset by resampling $N$ points from the original distribution
- For each data set $i$: perform K-fold cross-validation for one value of $K$ and store the average Mean Square Error (MSE) across the $K$ folds
- Once the loop over $i$ is complete, calculate the mean and standard deviation of the MSE across the $i$ datasets for the same value of $K$
- Repeat the above steps for all $K$ in the range $\{ 5,...,N\}$, all the way to LOOCV

An alternative approach is to not resample a new data set at each iteration and instead reshuffle the same dataset each time. This seems to give similar results.
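A condensed R version of that procedure (the linked code is the authoritative one; this is just the skeleton, with my own choices of noise level and grid of $K$ values):

    set.seed(7)
    cv_mse <- function(N, K, degree = 4) {
      x <- runif(N, 0, 2 * pi)
      y <- sin(x) + rnorm(N, sd = 0.5)
      d <- data.frame(x, y)
      fold <- sample(rep(1:K, length.out = N))
      mean(sapply(1:K, function(j) {
        fit <- lm(y ~ poly(x, degree), data = d[fold != j, ])
        mean((d$y[fold == j] - predict(fit, newdata = d[fold == j, , drop = FALSE]))^2)
      }))
    }
    # mean and sd of the K-fold MSE over 100 fresh datasets of size N = 50;
    # K = 50 corresponds to LOOCV here
    sapply(c(5, 10, 25, 50), function(K) {
      m <- replicate(100, cv_mse(N = 50, K = K))
      c(K = K, mean = mean(m), sd = sd(m))
    })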
3,960
Statistics and causal inference?
This is a broad question, but given that the Box, Hunter and Hunter quote is true, I think what it comes down to is:

1. The quality of the experimental design: randomization, sample sizes, control of confounders, ...
2. The quality of the implementation of the design: adherence to protocol, measurement error, data handling, ...
3. The quality of the model to accurately reflect the design: blocking structures are accurately represented, proper degrees of freedom are associated with effects, estimators are unbiased, ...

At the risk of stating the obvious I'll try to hit on the key points of each.

Design (1) is a large sub-field of statistics, but in its most basic form I think it comes down to the fact that when making causal inference we ideally start with identical units that are monitored in identical environments other than being assigned to a treatment. Any systematic differences between groups after assignment are then logically attributable to the treatment (we can infer cause). But the world isn't that nice: units differ prior to treatment and environments during experiments are not perfectly controlled. So we "control what we can and randomize what we can't", which helps to ensure that there won't be systematic bias due to the confounders that we controlled or randomized. One problem is that experiments tend to be difficult (to impossible) and expensive, and a large variety of designs have been developed to efficiently extract as much information as possible in as carefully controlled a setting as possible, given the costs. Some of these are quite rigorous (e.g. in medicine the double-blind, randomized, placebo-controlled trial) and others less so (e.g. various forms of 'quasi-experiments').

Implementation (2) is also a big issue, and one that statisticians generally don't think about... though we should. In applied statistical work I can recall instances where 'effects' found in the data were spurious results of inconsistency of data collection or handling. I also wonder how often information on true causal effects of interest is lost due to these issues (I believe students in the applied sciences generally have little-to-no training about ways that data can become corrupted - but I'm getting off topic here...).

Modelling (3) is another large technical subject, and another necessary step in objective causal inference. To a certain degree this is taken care of because the design crowd develop designs and models together (since inference from a model is the goal, the attributes of the estimators drive design). But this only gets us so far, because in the 'real world' we end up analysing experimental data from non-textbook designs, and then we have to think hard about things like the appropriate controls and how they should enter the model, what the associated degrees of freedom should be, whether assumptions are met and, if not, how to adjust for violations, and how robust the estimators are to any remaining violations, and...

Anyway, hopefully some of the above helps in thinking about considerations in making causal inference from a model. Did I forget anything big?
3,961
Statistics and causal inference?
What can a statistical model say about causation? What considerations should be made when making a causal inference from a statistical model?

The first thing to make clear is that you can't make causal inference from a purely statistical model. No statistical model can say anything about causation without causal assumptions. That is, to make causal inference you need a causal model.

Even in something considered the gold standard, such as randomized controlled trials (RCTs), you need to make causal assumptions to proceed. Let me make this clear. For example, suppose $Z$ is the randomization procedure, $X$ the treatment of interest and $Y$ the outcome of interest. When assuming a perfect RCT, you are assuming that the randomization $Z$ affects the treatment $X$, which affects the outcome $Y$, and that there is no confounding between $X$ and $Y$. In this case $P(Y|do(X)) = P(Y|X)$, so things are working well.

However, suppose you have imperfect compliance, resulting in a confounded relation between $X$ and $Y$: now, even though $Z$ is randomized, some unobserved common cause affects both the treatment actually taken, $X$, and the outcome $Y$. You can still do an intent-to-treat analysis. But if you want to estimate the actual effect of $X$, things are not simple anymore. This is an instrumental variable setting, and you might be able to bound or even point identify the effect if you make some parametric assumptions.

This can get even more complicated. You may have measurement error problems, subjects might drop the study or not follow instructions, among other issues. You will need to make assumptions about how those things are related to proceed with inference. With "purely" observational data this can be more problematic, because usually researchers will not have a good idea of the data generating process.

Hence, to draw causal inferences from models you need to judge not only their statistical assumptions, but most importantly their causal assumptions. Here are some common threats to causal analysis:

- Incomplete/imprecise data
- Target causal quantity of interest not well defined (What is the causal effect that you want to identify? What is the target population?)
- Confounding (unobserved confounders)
- Selection bias (self-selection, truncated samples)
- Measurement error (that can induce confounding, not only noise)
- Misspecification (e.g., wrong functional form)
- External validity problems (wrong inference to target population)

Sometimes the claim of absence of these problems (or the claim to have addressed these problems) can be backed up by the design of the study itself. That's why experimental data is usually more credible. Sometimes, however, people will assume away these problems either with theory or for convenience. If the theory is soft (like in the social sciences) it will be harder to take the conclusions at face value. Anytime you think there's an assumption that can't be backed up, you should assess how sensitive the conclusions are to plausible violations of those assumptions --- this is usually called sensitivity analysis.
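A toy simulation of the imperfect-compliance case makes the distinction between the naive, intent-to-treat and instrumental-variable analyses concrete (entirely my own illustration; the true effect of $X$ on $Y$ is set to 1):

    set.seed(8)
    n <- 1e5
    Z <- rbinom(n, 1, 0.5)                           # randomized assignment
    U <- rnorm(n)                                    # unobserved confounder
    X <- rbinom(n, 1, plogis(-1 + 2 * Z + 2 * U))    # compliance depends on Z and U
    Y <- 1 * X + U + rnorm(n)                        # true causal effect of X is 1

    coef(lm(Y ~ X))["X"]      # naive "as treated" estimate: biased upwards by U
    coef(lm(Y ~ Z))["Z"]      # intent-to-treat estimate: unbiased for the effect of Z,
                              # but a diluted version of the effect of X
    cov(Y, Z) / cov(X, Z)     # IV (Wald) estimate: close to the true effect of 1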
3,962
Statistics and causal inference?
In addition to the excellent answer above, there is a statistical method that can get you closer to demonstrating causality. It is Granger causality, which tests whether an independent variable occurring before a dependent variable has a predictive (and, in that sense, causal) effect or not. I introduce this method in an easy-to-follow presentation at the following link: http://www.slideshare.net/gaetanlion/granger-causality-presentation. I also apply this method to testing competing macroeconomic theories: http://www.slideshare.net/gaetanlion/economic-theory-testing-presentation.

Be aware that this method is not perfect. It just confirms that certain events occur before others and that those events appear to have a consistent directional relationship. This seems to entail true causality, but it is not always the case. The rooster's morning call does not cause the sun to rise.
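In R, Granger causality tests are available through the grangertest() function in the lmtest package; the chicken-and-egg data shipped with that package illustrate the idea (whether the result should really be read causally is, as noted above, another matter):

    library(lmtest)
    data(ChickEgg)
    grangertest(chicken ~ egg, order = 3, data = ChickEgg)  # does egg "Granger-cause" chicken?
    grangertest(egg ~ chicken, order = 3, data = ChickEgg)  # and the reverse direction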
3,963
Choosing between LM and GLM for a log-transformed response variable
Good effort for thinking through this issue. Here's an incomplete answer, but some starters for the next steps.

First, the AIC scores - based on likelihoods - are on different scales because of the different distributions and link functions, so aren't comparable. Your sum of squares and mean sum of squares have been calculated on the original scale and hence are on the same scale, so can be compared, although whether this is a good criterion for model selection is another question (it might be, or might not - search the cross validated archives on model selection for some good discussion of this).

For your more general question, a good way of focusing on the problem is to consider the difference between LOG.LM (your linear model with the response as log(y)) and LOG.GAUSS.GLM, the glm with the response as y and a log link function. In the first case the model you are fitting is $\log(y)=X\beta+\epsilon$, while in the glm() case it is $\log(y+\epsilon)=X\beta$, and in both cases $\epsilon$ is distributed $\mathcal{N}(0,\sigma^2)$.
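To make the contrast tangible, here are the two fits side by side on simulated data of my own (generated so that $\log(y) = x + \text{noise}$ holds exactly, which favours LOG.LM):

    set.seed(9)
    x <- rnorm(500)
    y <- exp(x + rnorm(500, sd = 0.5))                            # log(y) = x + noise
    LOG.LM        <- lm(log(y) ~ x)                               # models E[log(y)]
    LOG.GAUSS.GLM <- glm(y ~ x, family = gaussian(link = "log"))  # models log(E[y])
    rbind(lm = coef(LOG.LM), glm = coef(LOG.GAUSS.GLM))
    # the slopes agree, but the intercepts differ by roughly sigma^2 / 2 = 0.125,
    # reflecting the fact that E[log(y)] and log(E[y]) are not the same thing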
3,964
Choosing between LM and GLM for a log-transformed response variable
In a more general way, $E[\ln(Y)|X]$ and $\ln(E[Y|X])$ are not the same. Also, the variance assumptions made by a GLM are more flexible than those of OLS, and for certain modelling situations, such as counts, the variance can be handled differently by choosing a different distribution family. In my opinion, the choice of distribution family is a question about the variance and its relation to the mean. For example, in a Gaussian family we have constant variance, while in a Gamma family the variance is a quadratic function of the mean. Plot your standardized residuals against the fitted values and see how they behave (a short sketch follows below).
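As a concrete version of that last suggestion, here is a short hedged R sketch (simulated data, not the OP's): fit the model and plot standardized residuals against fitted values; under a reasonable choice of family, the spread should look roughly constant.

# simulated response whose variance grows with the mean
set.seed(1)
x   <- runif(300, 1, 10)
y   <- rgamma(300, shape = 5, rate = 5 / exp(0.2 * x))   # mean exp(0.2*x), constant CV
fit <- glm(y ~ x, family = Gamma(link = "log"))

plot(fitted(fit), rstandard(fit),
     xlab = "fitted values", ylab = "standardized deviance residuals")
abline(h = 0, lty = 2)

If the spread fans out systematically with the fitted values, that is a hint that the assumed mean-variance relationship (the family) is wrong.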
3,965
Choosing between LM and GLM for a log-transformed response variable
Unfortunately, your R code does not lead to an example where $\log(y) = x + \varepsilon$. Instead, your example is $x = \log(y) + \varepsilon$. The errors here are horizontal, not vertical; they are errors in $x$, not errors in $y$. Intuitively, it seems like this shouldn't make a difference, but it does. You may want to read my answer here: What is the difference between linear regression on y with x and x with y? Your setup complicates the issue of what the "right" model is. Strictly, the right model is reverse regression:

ly = log(y)
REVERSE.REGRESSION = lm(x~ly)
summary(REVERSE.REGRESSION)
# Call:
# lm(formula = x ~ ly)
#
# Residuals:
#      Min       1Q   Median       3Q      Max
# -2.93996 -0.64547 -0.01351  0.63133  2.92991
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  0.01563    0.03113   0.502    0.616
# ly           1.01519    0.03138  32.350   <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 0.984 on 998 degrees of freedom
# Multiple R-squared: 0.5119,  Adjusted R-squared: 0.5114
# F-statistic: 1047 on 1 and 998 DF,  p-value: < 2.2e-16

Metrics for this model (like the AIC) won't be comparable to your models. However, we know that this is the right model based on the data generating process, and notice that the estimated coefficients are right on target.
3,966
Choosing between LM and GLM for a log-transformed response variable
The choice is based on your hypothesis about your variable. The log transformation is based on $$\frac{\sqrt{\mathrm{Var}(X_t)}}{\mathrm{E}(X_t)} = \mathrm{constant},$$ while the gamma distribution is based on $$\frac{\mathrm{Var}(X_t)}{\mathrm{E}(X_t)} = \mathrm{constant}.$$ The log transformation rests on the hypothesis that $$\sqrt{\mathrm{Var}(X_t)} = \mathrm{E}(X_t)\,\sigma.$$ In this way, $$\begin{aligned} X_t &= \mathrm{E}(X_t) \cdot \frac{X_t}{\mathrm{E}(X_t)} \\ &= \mathrm{E}(X_t) \cdot \frac{X_t - \mathrm{E}(X_t) + \mathrm{E}(X_t)}{\mathrm{E}(X_t)} \\ &= \mathrm{E}(X_t) \cdot \left(1 + \frac{X_t - \mathrm{E}(X_t)}{\mathrm{E}(X_t)}\right). \end{aligned}$$ By the first-order Taylor approximation $$\log(1+x) \approx x,$$ we get $$\log\left(1 + \frac{X_t - \mathrm{E}(X_t)}{\mathrm{E}(X_t)}\right) \approx \frac{X_t - \mathrm{E}(X_t)}{\mathrm{E}(X_t)}.$$ Thus, $$\begin{aligned} \log X_t &= \log \mathrm{E}(X_t) + \log\left(1 + \frac{X_t - \mathrm{E}(X_t)}{\mathrm{E}(X_t)}\right) \\ &\approx \log \mathrm{E}(X_t) + \frac{X_t - \mathrm{E}(X_t)}{\mathrm{E}(X_t)}, \\ \mathrm{E}(\log X_t) &\approx \log \mathrm{E}(X_t). \end{aligned}$$ The gamma distribution, however, rests on the hypothesis that $$Y \sim \Gamma(\alpha, \beta), \qquad \begin{cases} \mathrm{E}(y_i) = \alpha_i \beta_i \\ \mathrm{Var}(y_i) = \alpha_i \beta_i^2 \end{cases} \;\Rightarrow\; \frac{\mathrm{Var}(y_i)}{\mathrm{E}(y_i)} = \beta_i.$$
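A small hedged simulation (my own illustration, not part of the answer) of the first hypothesis: when the standard deviation is proportional to the mean, the log transform roughly stabilizes the variance.

# groups with different means but a common coefficient of variation
set.seed(1)
mu <- rep(c(2, 5, 10, 20, 50), each = 200)
x  <- mu * exp(rnorm(length(mu), sd = 0.3))   # sd(x) roughly proportional to mu

tapply(x, mu, sd) / tapply(x, mu, mean)   # roughly constant ratio sd/mean
tapply(log(x), mu, sd)                    # roughly constant sd after the log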
3,967
What are disadvantages of state-space models and Kalman Filter for time-series modelling?
Here is a preliminary list of disadvantages I was able to extract from your comments. Criticism and additions are very welcome!

Overall: compared to ARIMA, state-space models allow you to model more complex processes, have interpretable structure, and easily handle data irregularities; but for this you pay with increased model complexity, harder calibration, and less community knowledge.

1. ARIMA is a universal approximator: you don't care what the true model behind your data is, and you use universal ARIMA diagnostic and fitting tools to approximate it. It is like polynomial curve fitting: you don't care what the true function is, you can always approximate it with a polynomial of some degree. State-space models naturally require you to write down some reasonable model for your process (which is good: you use your prior knowledge of the process to improve estimates). Of course, if you have no idea about your process, you can always use some universal state-space model too, e.g. represent ARIMA in state-space form; but then ARIMA in its original form has the more parsimonious formulation, without introducing unnecessary hidden states. (A sketch of this ARIMA/state-space correspondence follows below.)

2. Because there is such a great variety of state-space model formulations (much richer than the class of ARIMA models), the behaviour of all these potential models is not well studied, and if the model you formulate is complicated, it is hard to say how it will behave under different circumstances. Of course, if your state-space model is simple or composed of interpretable components, there is no such problem. But ARIMA is always the same well-studied ARIMA, so it should be easier to anticipate its behaviour even if you use it to approximate some complex process.

3. Because state space allows you to model complex/nonlinear processes directly and exactly, for these complex/nonlinear models you may have problems with the stability of filtering/prediction (EKF/UKF divergence, particle filter degeneracy). You may also have problems calibrating a complicated model's parameters: it is a computationally hard optimization problem. ARIMA is simpler and has fewer parameters (one noise source instead of two, no hidden variables), so its calibration is simpler.

4. For state space there is less community knowledge and software in the statistical community than for ARIMA.
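As a rough, hedged illustration of the "ARIMA as reduced form of a state-space model" point (my own sketch, base R only; the series is simulated):

set.seed(1)
y <- ts(cumsum(rnorm(120, sd = 0.5)) + rnorm(120))   # local level plus observation noise

fit.arima <- arima(y, order = c(0, 1, 1))   # ARIMA(0,1,1): the reduced form of a local level model
fit.ss    <- StructTS(y, type = "level")    # the same local level model fitted via the Kalman filter

fit.arima
fit.ss$coef                                 # estimated level and observation variances

Both describe the same process; the state-space version keeps the two noise sources explicit, while ARIMA collapses them into one.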
3,968
What are disadvantages of state-space models and Kalman Filter for time-series modelling?
Thanks @IrishStat for several very good questions in the comments; the answer to your questions is too long to post as a comment, so I post it as an answer (unfortunately, not to the original question of the topic). The questions were: "Does it clearly identify time trend changes and report the points in time where the trend changes? Does it distinguish between parameter changes and error variance changes and report on this? Does it detect and report on specific lead and lag effects around user specified predictors? Can one specify the minimum number of values in a group before a level shift/local time trend is declared? Does it distinguish between the need for power transforms versus deterministic points in time where the error variance changes?"

1. Identify trend changes - yes, most naturally: you can make the trend slope one of the state variables and the KF will continuously estimate the current slope. You can then decide what slope change is big enough for you. Alternatively, if the slope is not time-varying in your state-space model, you can test the residuals during filtering in a standard way to see when there is some break in your model. (See the sketch after this list.)

2. Distinguish between parameter changes and error variance changes - yes; the variance can be one of the parameters (states), and which parameter most likely changed depends on the likelihood of your model and on how exactly the data have changed.

3. Detect lead/lag relations - not sure about this; you can certainly include any lagged variables in a state-space model. For selecting the lags, you can either test the residuals of models with different lags included or, in a simple case, just use a cross-correlogram before formulating the model.

4. Specify a threshold number of observations to decide a trend change - yes, as in 1): because filtering is done recursively, you can threshold not only the slope change that is big enough for you, but also the number of observations required for confidence. Better still, the KF produces not only an estimate of the slope but also confidence bands for this estimate, so you may decide that the slope has changed significantly when its confidence bound passes some threshold.

5. Distinguish between the need for a power transform and the need for a bigger variance - not sure I understand correctly, but I think you can test the residuals during filtering to see whether they are still normal with just a bigger variance, or whether they have acquired some skew so that you need to change your model. Better still, you may make it a binary switching state of your model, and the KF will then estimate it automatically based on the likelihood. In this case the model will be non-linear, so you will need the UKF to do the filtering.
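Here is a hedged base-R sketch of point 1 (the series, model, and any thresholds are made up; StructTS stands in for a hand-rolled Kalman filter):

set.seed(1)
slope <- c(rep(0.1, 60), rep(-0.2, 60))              # true slope changes at t = 60
y     <- ts(cumsum(slope) + rnorm(120, sd = 0.5))

fit    <- StructTS(y, type = "trend")   # local linear trend: states = (level, slope)
states <- tsSmooth(fit)                 # smoothed state estimates
plot(states[, 2], ylab = "estimated slope")   # second smoothed state = slope
abline(h = 0, lty = 2)                        # the estimated slope changes sign around t = 60

One could then declare a trend change whenever the estimated slope (or its confidence band) crosses a chosen threshold.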
3,969
What are disadvantages of state-space models and Kalman Filter for time-series modelling?
The Kalman Filter is the optimal linear quadratic estimator when the state dynamics and measurement errors follow the so-called linear Gaussian assumptions (http://wp.me/p491t5-PS). So, as long as you know your dynamics and measurement models and they follow the linear Gaussian assumptions, there is no better estimator in the class of linear quadratic estimators. However, the most common reasons for "failed" Kalman Filter applications are:

1. Imprecise/incorrect knowledge of the state dynamics and measurement models.

2. Inaccurate initialization of the filter (providing an initial state estimate and covariance that is inconsistent with the true system state). This is easily overcome using a Weighted Least Squares (WLS) initialization procedure.

3. Incorporating measurements that are statistical "outliers" with respect to the system dynamics model. This can cause the Kalman gain to have negative elements, which can lead to a covariance matrix that is not positive semi-definite after the update. This can be avoided using "gating" algorithms, such as ellipsoidal gating, to validate a measurement before updating the Kalman Filter with it (see the sketch below).

These are some of the most common mistakes/issues I've seen working with the Kalman Filter. Otherwise, if the assumptions of your models are valid, the Kalman Filter is an optimal estimator.
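To make the gating idea concrete, here is a hedged one-dimensional sketch (my own illustration; the model, variances, and gate threshold are made up, and "ellipsoidal gating" reduces to a chi-square cut on the squared normalized innovation in 1-D):

set.seed(1)
n <- 100
x <- cumsum(rnorm(n, sd = 0.5))        # true state: a random walk
z <- x + rnorm(n, sd = 1)              # measurements
z[c(30, 60)] <- z[c(30, 60)] + 15      # two gross outliers

Q <- 0.25; R <- 1                      # process / measurement noise variances
gate <- qchisq(0.999, df = 1)          # gate threshold

xhat <- 0; P <- 10; est <- numeric(n)
for (k in 1:n) {
  P <- P + Q                           # predict (state transition is the identity)
  v <- z[k] - xhat                     # innovation
  S <- P + R                           # innovation variance
  if (v^2 / S <= gate) {               # accept the measurement only if it passes the gate
    K    <- P / S
    xhat <- xhat + K * v
    P    <- (1 - K) * P
  }                                    # otherwise keep the prediction
  est[k] <- xhat
}

plot(x, type = "l", ylab = "state"); lines(est, col = 2)

Without the gate, the two outliers would pull the estimate away; with it, they are simply skipped.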
3,970
What are disadvantages of state-space models and Kalman Filter for time-series modelling?
I'd add that if you directly use a State Space function, you're probably going to have to understand the several matrices that make up a model, and how they interact and work. It's much more like defining a program than defining an ARIMA model. If you're working with a dynamic State Space model, it gets even more complicated. If you use a software package that has a really, really nice State Space function, you may be able to avoid some of this, but the vast majority of such functions in R packages require you to jump into the details at some point. In my opinion, it's a lot like Bayesian statistics in general, the machinery of which takes more understanding, care, and feeding to use than more frequentist functions. In both cases, it's well worth the additional details/knowledge, but it could be a barrier to adoption.
3,971
What are disadvantages of state-space models and Kalman Filter for time-series modelling?
You can refer to the excellent book Bayesian Forecasting and Dynamic Models (West and Harrison, 1997). The authors show that almost all traditional time series models are particular cases of the general dynamic model. They also emphasize the advantages. Perhaps one of the major advantages is the ease with which you can combine many state-space components by simply augmenting the state vector. You can, for example, seamlessly integrate regressors, seasonal factors, and an autoregressive component in a single model (a sketch follows below).
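A hedged sketch of that "augment the state vector" idea, assuming the dlm package is installed; the regressor and all settings below are placeholders, not recommendations:

library(dlm)

x   <- rnorm(120)                        # a made-up regressor
mod <- dlmModPoly(2) +                   # local linear trend component
       dlmModSeas(12) +                  # monthly seasonal component
       dlmModReg(x, addInt = FALSE)      # regression component

dim(mod$FF)                              # one observation row, a single augmented state vector

Each `+` block-diagonally stacks another component's states, which is exactly the augmentation described in the book.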
3,972
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
Flip the coin twice. If it lands HH or TT, ignore it and flip it twice again. Now, the coin has equal probability of coming up HT or TH. If it comes up HT, call this H1. If it comes up TH, call this T1. Keep obtaining H1 or T1 until you have three of them. These three results give you a number based on the table below:

H1 H1 H1 -> 1
H1 H1 T1 -> 2
H1 T1 H1 -> 3
H1 T1 T1 -> 4
T1 H1 H1 -> 5
T1 H1 T1 -> 6
T1 T1 H1 -> 7
T1 T1 T1 -> [Throw out all results so far and repeat]

I argue that this would work perfectly fine, although you would have a lot of wasted throws in the process!
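A hedged R sketch of exactly this procedure (the biased coin is simulated with rbinom; the function names are mine, not the answer's):

fair.bit <- function(p) {               # von Neumann trick: keep only HT / TH pairs
  repeat {
    flips <- rbinom(2, 1, p)            # 1 = head, 0 = tail
    if (flips[1] != flips[2]) return(flips[2])   # HT -> 0 ("H1"), TH -> 1 ("T1")
  }
}

draw.1to7 <- function(p) {
  repeat {
    bits <- replicate(3, fair.bit(p))
    code <- sum(bits * c(4, 2, 1))      # codes 0..6 map to 1..7 as in the table above
    if (code < 7) return(code + 1)      # code 7 = "T1 T1 T1": throw away and repeat
  }
}

table(replicate(1e4, draw.1to7(0.3)))   # roughly uniform over 1..7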
3,973
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
Assume that $p \in (0,1)$.

Step 1: Toss the coin 5 times. If the outcome is
$(H, H, H, T, T)$, return $1$ and stop.
$(H, H, T, T, H)$, return $2$ and stop.
$(H, T, T, H, H)$, return $3$ and stop.
$(T, T, H, H, H)$, return $4$ and stop.
$(T, H, H, H, T)$, return $5$ and stop.
$(H, H, T, H, T)$, return $6$ and stop.
$(H, T, H, T, H)$, return $7$ and stop.

Step 2: If the outcome is none of the above, repeat Step 1.

Note that regardless of the value of $p \in (0,1)$, each of the seven outcomes listed above has probability $q = p^3(1-p)^2$, and the expected number of coin tosses is $\displaystyle \frac{5}{7q}$. The tosser doesn't need to know the value of $p$ (except that $p\neq 0$ and $p\neq 1$); it is guaranteed that the seven integers are equally likely to be returned by the experiment when it terminates (and it is guaranteed to end with probability $1$).
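A quick hedged simulation of this scheme (my own code, base R only), checking both the uniformity and the expected toss count $5/(7q)$ for an arbitrary $p$:

p <- 0.3
patterns <- rbind(c(1,1,1,0,0), c(1,1,0,0,1), c(1,0,0,1,1), c(0,0,1,1,1),
                  c(0,1,1,1,0), c(1,1,0,1,0), c(1,0,1,0,1))   # 1 = head; rows are outcomes 1..7

draw <- function(p) {
  tosses <- 0
  repeat {
    flips  <- rbinom(5, 1, p)
    tosses <- tosses + 5
    hit    <- which(apply(patterns, 1, function(r) all(r == flips)))
    if (length(hit) == 1) return(c(value = hit, tosses = tosses))
  }
}

res <- replicate(2e4, draw(p))
prop.table(table(res["value", ]))     # each close to 1/7
mean(res["tosses", ])                 # close to the theoretical value below
5 / (7 * p^3 * (1 - p)^2)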
3,974
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
Generalizing the case described by Dilip Sarwate

Some of the methods described in the other answers use a scheme in which you throw a sequence of $n$ coins in a 'turn' and, depending on the result, you choose a number between 1 and 7 or discard the turn and throw again. The trick is to find, in the expansion of possibilities, a multiple of 7 outcomes with the same probability $p^k(1-p)^{n-k}$ and match those against each other. Because the total number of outcomes is not a multiple of 7, we have a few outcomes that we cannot assign to a number, and some probability that we need to discard the outcomes and start over.

The case of using 7 coin flips per turn

Intuitively, flipping the coin seven times per turn is interesting, since we only need to throw out 2 of the $2^7$ possibilities, namely the all-heads and all-tails outcomes. For all other $2^7-2$ possibilities there is always a multiple of 7 cases with the same number of heads: 7 cases with 1 head, 21 cases with 2 heads, 35 cases with 3 heads, 35 cases with 4 heads, 21 cases with 5 heads, and 7 cases with 6 heads. So if you compute the number (discarding 0 heads and 7 heads) $$X = \sum_{k=1}^{7} (k-1) \cdot C_k $$ with $C_k$ Bernoulli distributed variables (value 0 or 1), then $X$ modulo 7 is a uniform variable with seven possible results.

Comparing different numbers of coin flips per turn

The question remains what the optimal number of flips per turn would be. Flipping more coins per turn costs you more, but you reduce the probability of having to start a new turn. The image below shows manual computations for the first few numbers of coin flips per turn. (Possibly there is an analytical solution, but I believe it is safe to say that a system with 7 coin flips provides the best method regarding the expected number of coin flips needed.)

# plot an empty canvas
plot(-100,-100, xlab="flips per turn", ylab="E(total flips)",
     ylim=c(7,400), xlim=c(0,20), log="y")
title("expectation value for total number of coin flips (number of turns times flips per turn)")

# loop 1: different values of p from fair to very unfair
#         (since this is symmetric, only 0 to 0.5 is necessary)
# loop 2: different values for the number of flips per turn;
#         we can only use a multiple of 7 to assign, so the modulus has to be discarded;
#         from this we can calculate the probability that the turn succeeds;
#         the expected number of flips is the flips per turn
#         divided by the probability for the turn to succeed
for (p in c(0.5,0.2,0.1,0.05)) {
  Ecoins <- rep(0,16)
  for (dr in (5:20)) {
    Pdiscards = 0
    for (i in c(0:dr)) {
      Pdiscards = Pdiscards + p^(i)*(1-p)^(dr-i) * (choose(dr,i) %% 7)
    }
    Ecoins[dr-4] = dr/(1-Pdiscards)
  }
  lines(5:20, Ecoins)
  points(5:20, Ecoins, pch=21, col="black", bg="white", cex=0.5)
  text(5, Ecoins[1], paste0("p = ",p), pos=2)
}

Using an early stopping rule

Note: the calculations below, for the expected number of flips, are for a fair coin $p=0.5$; it would become a mess to do this for different $p$, but the principle remains the same (although different book-keeping of the cases is needed).

We should be able to choose the cases (instead of the formula for $X$) such that we might be able to stop earlier. With 5 coin flips we have, for the six possible different unordered sets of heads and tails, 1+5+10+10+5+1 ordered sets, and we can use the groups with ten cases (that is, the group with 2 heads or the group with 2 tails) to choose (with equal probability) a number. This occurs in 14 out of 2^5=32 cases.
This leaves us with: 1+5+3+3+5+1 ordered sets With an extra (6-th) coin flip we have for the seven possible different unordered sets of heads and tails: 1+6+8+6+8+6+1 ordered sets And we can use the groups with eight cases (that is the group with 3 heads or the group with 3 tails) to choose (with equal probability) a number. This occurs in 14 out of 2*(2^5-14)=36 cases. This leaves us with: 1+6+1+6+1+6+1 ordered sets With another (7-th) extra coin flip we have for the eight possible different unordered sets of heads and tails: 1+7+7+7+7+7+7+1 ordered sets And we can use the groups with seven cases (all except the all tails and all heads cases) to choose (with equal probability) a number. This occurs in 42 out of 44 cases. This leaves us with: 1+0+0+0+0+0+0+1 ordered sets (we could continue this but only in the 49-th step does this give us an advantage) So the probability to select a number at 5 flips is $\frac{14}{32} = \frac{7}{16}$ at 6 flips is $\frac{9}{16}\frac{14}{36} = \frac{7}{32}$ at 7 flips is $\frac{11}{32}\frac{42}{44} = \frac{231}{704}$ not in 7 flips is $1-\frac{7}{16}-\frac{7}{32}-\frac{231}{704} = \frac{2}{2^7}$ This makes the expectation value for the number of flips in one turn, conditional that there is success and p=0.5: $$5 \cdot \frac{7}{16}+ 6 \cdot \frac{7}{32} + 7 \cdot \frac{231}{704} = 5.796875 $$ The expectation value for the total number of flips (until there is a success), conditional that p=0.5, becomes: $$\left(5 \cdot \frac{7}{16}+ 6 \cdot \frac{7}{32} + 7 \cdot \frac{231}{704}\right) \frac{2^7}{2^7-2} = \frac{53}{9} = 5.88889 $$ The answer by NcAdams uses a variation of this stopping-rule strategy (each time come up with two new coin flips) but is not optimally selecting out all the flips. The answer by Clid might be similar as well although there might be an uneven selection rule that each two coin flips a number might be chosen but not necessarily with equal probability (a discrepancy which is being repaired during later coin flips) Comparison with other methods Other methods using a similar principle are the one by NcAdams and AdamO. The principle is: A decision for a number between 1 and 7 is made after a certain number of heads and tails. After an $x$ number of flips, for each decision that leads to a number $i$ there is a similar, equally probable, decision that leads to a number $j$ (the same number of heads and tails but just in a different order). Some series of heads and tails can lead to a decision to start over. For such type of methods the one placed here is the most efficient because it makes the decisions as early as possible (as soon as there is a possibility for 7 equal probability sequences of heads and tails, after the $x$-th flip, we can use them to make a decision on a number and we do not need to flip further if we encounter one of those cases). 
This is demonstrated by the image and simulation below: #### mathematical part ##### set.seed(1) #plotting this method p <- seq(0.001,0.999,0.001) tot <- (5*7*(p^2*(1-p)^3+p^3*(1-p)^2)+ 6*7*(p^2*(1-p)^4+p^4*(1-p)^2)+ 7*7*(p^1*(1-p)^6+p^2*(1-p)^5+p^3*(1-p)^4+p^4*(1-p)^3+p^5*(1-p)^2+p^6*(1-p)^1)+ 7*1*(0+p^7+(1-p)^7) )/ (1-p^7-(1-p)^7) plot(p,tot,type="l",log="y", xlab="p", ylab="expactation value number of flips" ) #plotting method by AdamO tot <- (7*(p^20-20*p^19+189*p^18-1121*p^17+4674*p^16-14536*p^15+34900*p^14-66014*p^13+99426*p^12-119573*p^11+114257*p^10-85514*p^9+48750*p^8-20100*p^7+5400*p^6-720*p^5)+6* (-7*p^21+140*p^20-1323*p^19+7847*p^18-32718*p^17+101752*p^16-244307*p^15+462196*p^14-696612*p^13+839468*p^12-806260*p^11+610617*p^10-357343*p^9+156100*p^8-47950*p^7+9240*p^6-840*p^5)+5* (21*p^22-420*p^21+3969*p^20-23541*p^19+98154*p^18-305277*p^17+733257*p^16-1389066*p^15+2100987*p^14-2552529*p^13+2493624*p^12-1952475*p^11+1215900*p^10-594216*p^9+222600*p^8-61068*p^7+11088*p^6-1008*p^5)+4*(- 35*p^23+700*p^22-6615*p^21+39235*p^20-163625*p^19+509425*p^18-1227345*p^17+2341955*p^16-3595725*p^15+4493195*p^14-4609675*p^13+3907820*p^12-2745610*p^11+1592640*p^10-750855*p^9+278250*p^8-76335*p^7+13860*p^6- 1260*p^5)+3*(35*p^24-700*p^23+6615*p^22-39270*p^21+164325*p^20-515935*p^19+1264725*p^18-2490320*p^17+4027555*p^16-5447470*p^15+6245645*p^14-6113275*p^13+5102720*p^12-3597370*p^11+2105880*p^10-999180*p^9+371000 *p^8-101780*p^7+18480*p^6-1680*p^5)+2*(-21*p^25+420*p^24-3990*p^23+24024*p^22-103362*p^21+340221*p^20-896679*p^19+1954827*p^18-3604755*p^17+5695179*p^16-7742301*p^15+9038379*p^14-9009357*p^13+7608720*p^12- 5390385*p^11+3158820*p^10-1498770*p^9+556500*p^8-152670*p^7+27720*p^6-2520*p^5))/(7*p^27-147*p^26+1505*p^25-10073*p^24+49777*p^23-193781*p^22+616532*p^21-1636082*p^20+3660762*p^19-6946380*p^18+11213888*p^17- 15426950*p^16+18087244*p^15-18037012*p^14+15224160*p^13-10781610*p^12+6317640*p^11-2997540*p^10+1113000*p^9-305340*p^8+55440*p^7-5040*p^6) lines(p,tot,col=2,lty=2) #plotting method by NcAdam lines(p,3*8/7/(p*(1-p)),col=3,lty=2) legend(0.2,500, c("this method calculation","AdamO","NcAdams","this method simulation"), lty=c(1,2,2,0),pch=c(NA,NA,NA,1),col=c(1,2,3,1)) ##### simulation part ###### #creating decision table mat<-matrix(as.numeric(intToBits(c(0:(2^5-1)))),2^5,byrow=1)[,c(1:12)] colnames(mat) <- c("b1","b2","b3","b4","b5","b6","b7","sum5","sum6","sum7","decision","exit") # first 5 rolls mat[,8] <- sapply(c(1:2^5), FUN = function(x) {sum(mat[x,1:5])}) mat[which((mat[,8]==2)&(mat[,11]==0))[1:7],12] = rep(5,7) # we can stop for 7 cases with 2 heads mat[which((mat[,8]==2)&(mat[,11]==0))[1:7],11] = c(1:7) mat[which((mat[,8]==3)&(mat[,11]==0))[1:7],12] = rep(5,7) # we can stop for 7 cases with 3 heads mat[which((mat[,8]==3)&(mat[,11]==0))[1:7],11] = c(1:7) # extra 6th roll mat <- rbind(mat,mat) mat[c(33:64),6] <- rep(1,32) mat[,9] <- sapply(c(1:2^6), FUN = function(x) {sum(mat[x,1:6])}) mat[which((mat[,9]==2)&(mat[,11]==0))[1:7],12] = rep(6,7) # we can stop for 7 cases with 2 heads mat[which((mat[,9]==2)&(mat[,11]==0))[1:7],11] = c(1:7) mat[which((mat[,9]==4)&(mat[,11]==0))[1:7],12] = rep(6,7) # we can stop for 7 cases with 4 heads mat[which((mat[,9]==4)&(mat[,11]==0))[1:7],11] = c(1:7) # extra 7th roll mat <- rbind(mat,mat) mat[c(65:128),7] <- rep(1,64) mat[,10] <- sapply(c(1:2^7), FUN = function(x) {sum(mat[x,1:7])}) for (i in 1:6) { mat[which((mat[,10]==i)&(mat[,11]==0))[1:7],12] = rep(7,7) # we can stop for 7 cases with i heads mat[which((mat[,10]==i)&(mat[,11]==0))[1:7],11] = 
c(1:7) } mat[1,12] = 7 # when we did not have succes we still need to count the 7 coin tosses mat[2^7,12] = 7 draws = rep(0,100) num = rep(0,100) # plotting simulation for (p in seq(0.05,0.95,0.05)) { n <- rep(0,1000) for (i in 1:1000) { coinflips <- rbinom(7,1,p) # draw seven numbers I <- mat[,1:7]-matrix(rep(coinflips,2^7),2^7,byrow=1) == rep(0,7) # compare with the table Imatch = I[,1]*I[,2]*I[,3]*I[,4]*I[,5]*I[,6]*I[,7] # compare with the table draws[i] <- mat[which(Imatch==1),11] # result which number num[i] <- mat[which(Imatch==1),12] # result how long it took } Nturn <- mean(num) #how many flips we made Sturn <- (1000-sum(draws==0))/1000 #how many numbers we got (relatively) points(p,Nturn/Sturn) } another image which is scaled by $p*(1-p)$ for better comparison: zoom in comparing methods described in this post and comments the 'conditional skipping of the 7-th step' is a slight improvement which can be made on the early stopping rule. In this case you select not groups with equal probabilities after the 6-th flips. You have 6 groups with equal probabilities, and 1 groups with a slightly different probability (for this last group you need to flip one more extra time when you have 6 heads or tails and because you discard the 7 heads or 7 tails, you will end up with the same probability after all)
3,975
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
Divide a box into seven equal-area regions, each labeled with an integer. Throw the coin into the box in such a way that it has equal probability of landing in each region. This is only half in jest -- it's essentially the same procedure as estimating $\pi$ using a physical Monte Carlo procedure, like dropping rice grains onto paper with a circle drawn on it. This is one of the only answers that work for the case of $p = 1$ or $p = 0$.
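For the Monte Carlo analogy, here is a tiny sketch (R) of the rice-grain idea; the sample size is arbitrary and the code is mine, not part of the original answer. Points dropped uniformly on a unit square estimate $\pi$ from the fraction landing in the inscribed circle, just as a coin thrown uniformly into the seven-region box lands in each region with probability 1/7.

set.seed(42)
x <- runif(1e5); y <- runif(1e5)                  # uniform "grains" on the unit square
4 * mean((x - 0.5)^2 + (y - 0.5)^2 < 0.25)        # roughly 3.14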
3,976
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
EDIT: based on others' feedback. Here's an interesting thought: take the list {1,2,3,4,5,6,7}. Throw the coin for each element in the list sequentially. If it lands head side up for a particular element, remove that number from the list. If all the numbers in a particular pass over the list are removed, repeat the sampling. Do so until only one number remains.

drop.one <- function(x, p) {
  drop <- runif(length(x)) < p
  if (all(drop)) return(x)
  return(x[!drop])
}

sample.recur <- function(x, p) {
  if (length(x) > 1) return(sample.recur(drop.one(x, p), p))
  return(x)
}

# x <- c(1:7,7:1)
x <- 1:7
p <- 0.01
out <- replicate(1e5, sample.recur(x, p))
round(prop.table(table(out)), 2)

gives me an approximately uniform distribution

> round(prop.table(table(out)), 2)
out
   1    2    3    4    5    6    7
0.14 0.14 0.15 0.14 0.14 0.14 0.14

It's interesting to note (if I haven't made a dire mistake) that this produces a different result than generating $N$ binomial outcomes as the sum of 13 tosses of the coin (counting 0 heads as an outcome) and mapping the {0,1,2,...,12,13} index onto the earlier list of {1,2,3,...,3,2,1}. I don't quite know how to prove that my method works.

Evaluation of the expectation value for the number of coin throws

The expectation value for the number of coin throws can be calculated using the transition matrix below (answering the question: when we start with $x$ non-eliminated numbers, what is the probability of getting to $y$ non-eliminated numbers?), writing $q = 1-p$:

$$M = \begin{bmatrix} q^7 & 0 & 0 & 0 &0 & 0 & 1 & 1\\ 7p^1q^6 & q^6 & 0 & 0 & 0 & 0 & 0 & 0 \\ 21p^2q^5 & 6p^1q^5 & q^5 & 0 & 0 & 0 & 0 & 0\\ 35 p^3q^4 & 15 p^2q^4 & 5p^1q^4 & q^4 & 0 & 0 & 0 & 0\\ 35 p^4q^3 & 20 p^3q^3 & 10 p^2q^3 & 4 p^1q^3 & q^3 & 0 & 0 & 0\\ 21p^5q^2 & 15 p^4q^2 & 10 p^3q^2 & 6 p^2q^2 & 3 p^1q^2 & q^2 & 0 & 0\\ 7p^6q^1 & 6 p^5q^1 & 5 p^4q^1 & 4 p^3q^1 & 3p^2q^1 & 2p^1q^1 & 0 & 0\\ p^7 & p^6 & p^5 & p^4 & p^3 & p^2 & 0 & 0 \end{bmatrix}$$

The eigenvector associated with the eigenvalue 1 (which can be found by solving $(M-I)v=0$) depicts how much time is relatively spent in each state. The 7th state is how often you will be able to draw a number from 1 to 7. The other states tell how many coin flips it costs. Below is the image which compares with the answer from NcAdams, whose expectation value for the number of coin throws is $E(n) = \frac{24}{7\,p(1-p)}$.

Remarkably, the method performs better for roughly $p>2/3$, but the performance is also non-symmetric. A symmetric and better overall performance could be obtained with a probabilistic switching rule that changes the decision rule from tails to heads when heads happens to be improbable.

Solution found with wxMaxima

M: matrix(
 [(1-p)^7, 0, 0,0,0,0,1,1],
 [7* p*(1-p)^6, (1-p)^6, 0,0,0,0,0,0],
 [21*p^2*(1-p)^5, 6*p*(1-p)^5, (1-p)^5,0,0,0,0,0],
 [35*p^3*(1-p)^4, 15*p^2*(1-p)^4, 5*p*(1-p)^4,(1-p)^4,0,0,0,0],
 [35*p^4*(1-p)^3, 20*p^3*(1-p)^3, 10*p^2*(1-p)^3,4*p*(1-p)^3,(1-p)^3,0,0,0],
 [21*p^5*(1-p)^2, 15*p^4*(1-p)^2, 10*p^3*(1-p)^2,6*p^2*(1-p)^2,3*p*(1-p)^2,(1-p)^2,0,0],
 [7* p^6*(1-p)^1, 6*p^5*(1-p), 5*p^4*(1-p),4*p^3*(1-p),3*p^2*(1-p),2*(1-p)*p,0,0],
 [p^7, p^6, p^5,p^4,p^3,p^2,0,0]
);
z: nullspace(M-diagmatrix(8,1));
x : apply (addcol, args (z));
t : [7,6,5,4,3,2,0,0];
plot2d(t.x/x[7],[p,0,1],logy);

Calculations in R

# plotting empty canvas
plot(-100,-100,
     xlab="p", ylab="E(total flips)",
     ylim=c(10,1000), xlim=c(0,1), log="y")

# plotting simulation
for (p in seq(0.1,0.9,0.05)) {
  n <- rep(0,10000)
  for (i in 1:10000) {
    success = 0
    tests = c(1,1,1,1,1,1,1)       # start with seven numbers in the set
    count = 0
    while (success==0) {
      for (j in 1:7) {
        if (tests[j]==1) {
          count = count + 1
          if (rbinom(1,1,p) == 1) {
            tests[j] <- 0          # eliminate number when we draw heads
          }
        }
      }
      if (sum(tests)==1) {
        n[i] = count
        success = 1                # end when 1 is left over
      }
      if (sum(tests)==0) {
        tests = c(1,1,1,1,1,1,1)   # restart when 0 are left over
      }
    }
  }
  points(p,mean(n))
}

# plotting formula
p <- seq(0.001,0.999,0.001)
tot <- (7*(p^20-20*p^19+189*p^18-1121*p^17+4674*p^16-14536*p^15+34900*p^14-66014*p^13+99426*p^12-119573*p^11+114257*p^10-85514*p^9+48750*p^8-20100*p^7+5400*p^6-720*p^5)+6* (-7*p^21+140*p^20-1323*p^19+7847*p^18-32718*p^17+101752*p^16-244307*p^15+462196*p^14-696612*p^13+839468*p^12-806260*p^11+610617*p^10-357343*p^9+156100*p^8-47950*p^7+9240*p^6-840*p^5)+5* (21*p^22-420*p^21+3969*p^20-23541*p^19+98154*p^18-305277*p^17+733257*p^16-1389066*p^15+2100987*p^14-2552529*p^13+2493624*p^12-1952475*p^11+1215900*p^10-594216*p^9+222600*p^8-61068*p^7+11088*p^6-1008*p^5)+4*(- 35*p^23+700*p^22-6615*p^21+39235*p^20-163625*p^19+509425*p^18-1227345*p^17+2341955*p^16-3595725*p^15+4493195*p^14-4609675*p^13+3907820*p^12-2745610*p^11+1592640*p^10-750855*p^9+278250*p^8-76335*p^7+13860*p^6- 1260*p^5)+3*(35*p^24-700*p^23+6615*p^22-39270*p^21+164325*p^20-515935*p^19+1264725*p^18-2490320*p^17+4027555*p^16-5447470*p^15+6245645*p^14-6113275*p^13+5102720*p^12-3597370*p^11+2105880*p^10-999180*p^9+371000 *p^8-101780*p^7+18480*p^6-1680*p^5)+2*(-21*p^25+420*p^24-3990*p^23+24024*p^22-103362*p^21+340221*p^20-896679*p^19+1954827*p^18-3604755*p^17+5695179*p^16-7742301*p^15+9038379*p^14-9009357*p^13+7608720*p^12- 5390385*p^11+3158820*p^10-1498770*p^9+556500*p^8-152670*p^7+27720*p^6-2520*p^5))/(7*p^27-147*p^26+1505*p^25-10073*p^24+49777*p^23-193781*p^22+616532*p^21-1636082*p^20+3660762*p^19-6946380*p^18+11213888*p^17- 15426950*p^16+18087244*p^15-18037012*p^14+15224160*p^13-10781610*p^12+6317640*p^11-2997540*p^10+1113000*p^9-305340*p^8+55440*p^7-5040*p^6)
lines(p,tot)

# plotting comparison with alternative method
lines(p,3*8/7/(p*(1-p)),lty=2)

legend(0.2,500,
       c("simulation","calculation","comparison"),
       lty=c(0,1,2), pch=c(1,NA,NA))
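As a small numeric cross-check of the eigenvector argument above (the post does this symbolically in wxMaxima; the fixed value of p and the use of R's eigen() are my own choices), one can build $M$ at a given p and read off the expected number of flips per generated integer:

p <- 0.3; q <- 1 - p
M <- matrix(0, 8, 8)   # state j has 8 - j numbers left (state 7 = success, state 8 = all eliminated)
M[, 1] <- c(q^7, 7*p*q^6, 21*p^2*q^5, 35*p^3*q^4, 35*p^4*q^3, 21*p^5*q^2, 7*p^6*q, p^7)
M[, 2] <- c(0, q^6, 6*p*q^5, 15*p^2*q^4, 20*p^3*q^3, 15*p^4*q^2, 6*p^5*q, p^6)
M[, 3] <- c(0, 0, q^5, 5*p*q^4, 10*p^2*q^3, 10*p^3*q^2, 5*p^4*q, p^5)
M[, 4] <- c(0, 0, 0, q^4, 4*p*q^3, 6*p^2*q^2, 4*p^3*q, p^4)
M[, 5] <- c(0, 0, 0, 0, q^3, 3*p*q^2, 3*p^2*q, p^3)
M[, 6] <- c(0, 0, 0, 0, 0, q^2, 2*p*q, p^2)
M[, 7] <- c(1, 0, 0, 0, 0, 0, 0, 0)   # a success restarts the scheme with all 7 numbers
M[, 8] <- c(1, 0, 0, 0, 0, 0, 0, 0)   # eliminating all 7 numbers also restarts it
e <- eigen(M)
v <- Re(e$vectors[, which.min(abs(e$values - 1))])  # stationary direction (eigenvalue 1)
flips <- c(7, 6, 5, 4, 3, 2, 0, 0)                  # flips spent per visit to each state
sum(flips * v) / v[7]                               # expected flips per generated integer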
3,977
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
The question is a bit ambiguous: is it asking "generate a random integer equal to or less than 7 with equal probability", or is it asking "generate 7 random integers with equal probability?" - but what is the space of integers?!? I'll assume it's the former, but the same logic I'm applying can be extended to the latter case too, once that problem is cleared up.

With a biased coin, you can produce a fair coin by using the following procedure: https://en.wikipedia.org/wiki/Fair_coin#Fair_results_from_a_biased_coin

A number 7 or less can be written in binary as three {0,1} digits. So all one needs to do is follow the above procedure three times, and convert the binary number produced back to decimal.
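A brief sketch (R) of the referenced von Neumann procedure for getting a fair bit from the biased coin, plus the three-bit conversion. Note that three fair bits give eight equally likely values, so to land on exactly seven integers one of the eight outcomes has to be rejected and re-drawn; that rejection step is my addition, not spelled out in the answer.

fair_bit <- function(p) {
  repeat {
    f <- rbinom(2, 1, p)               # two flips of the biased coin, 1 = heads
    if (f[1] != f[2]) return(f[1])     # HT -> 1, TH -> 0; HH or TT -> flip again
  }
}
draw_1_to_7 <- function(p) {
  repeat {
    value <- sum(c(fair_bit(p), fair_bit(p), fair_bit(p)) * c(4, 2, 1))  # binary to decimal, 0..7
    if (value > 0) return(value)       # reject 0 so that 1..7 stay equally likely (my addition)
  }
}
table(replicate(1e4, draw_1_to_7(0.3)))  # roughly uniform over 1..7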
3,978
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
As mentioned in earlier comments, this puzzle relates to John von Neumann's 1951 paper "Various Techniques Used in Connection With Random Digits", published in the research journal of the National Bureau of Standards. There is a wider literature about such problems that goes under the name of Bernoulli factory problems: given a coin with tail probability $p$, how to simulate a coin with tail probability $f(p)$, when this is feasible at all, since some functions $f$ cannot be simulated, for instance $f(p)=\min\{1,2p\}$. Nacu and Peres (2005) study fast algorithms for solving [solvable] Bernoulli factory problems, where "fast" means exponential decay of the tail of the distribution of the number $N$ of trials.
3,979
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
A solution that never wastes flips, which helps a lot for very-biased coins. The disadvantage of this algorithm (as written, at least) is that it's using arbitrary-precision arithmetic. Practically, you probably want to use this until integer overflow, and only then throw it away and start over. Also, you need to know what the bias is ... which you might not, say, if it is temperature-dependent like most physical phenomena.

Assuming the chance of heads is, say, 30%. Start with the range [1, 8). Flip your coin. If heads, use the left 30%, so your new range is [1, 3.1). Else, use the right 70%, so your new range is [3.1, 8). Repeat until the entire range has the same integer part.

Full code:

#!/usr/bin/env python3
from fractions import Fraction
from collections import Counter
from random import randrange

BIAS = Fraction(3, 10)
STAT_COUNT = 100000

calls = 0

def biased_rand():
    global calls
    calls += 1
    return randrange(BIAS.denominator) < BIAS.numerator

def can_generate_multiple(start, stop):
    if stop.denominator == 1:  # half-open range
        stop = stop.numerator - 1
    else:
        stop = int(stop)
    start = int(start)
    return start != stop

def unbiased_rand(start, stop):
    if start < 0:  # negative numbers round wrong
        return start + unbiased_rand(0, stop - start)
    assert isinstance(start, int) and start >= 0
    assert isinstance(stop, int) and stop >= start
    start = Fraction(start)
    stop = Fraction(stop)
    while can_generate_multiple(start, stop):
        if biased_rand():
            old_diff = stop - start
            diff = old_diff * BIAS
            stop = start + diff
        else:
            old_diff = stop - start
            diff = old_diff * (1 - BIAS)
            start = stop - diff
    return int(start)

def stats(f, *args, **kwargs):
    c = Counter()
    for _ in range(STAT_COUNT):
        c[f(*args, **kwargs)] += 1
    print('stats for %s:' % f.__qualname__)
    for k, v in sorted(c.items()):
        percent = v * 100 / STAT_COUNT
        print('  %s: %f%%' % (k, percent))

def main():
    #stats(biased_rand)
    stats(unbiased_rand, 1, 7+1)
    print('used %f calls at bias %s' % (calls/STAT_COUNT, BIAS))

if __name__ == '__main__':
    main()
3,980
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
This also only works for $p \neq 1$ and $p \neq 0$.

We first turn the (possibly) unfair coin into a fair coin using the process from NcAdams' answer: flip the coin twice. If it lands HH or TT, ignore it and flip it twice again. Now, the coin has equal probability of coming up HT or TH. If it comes up HT, call this H1. If it comes up TH, call this T1.

Now we use the fair coin to generate a real number between $0$ and $1$ in binary. Let H1 $= 1$ and T1 $= 0$. Start with the string 0., flip the coin and append the resulting digit at the end of the string. Repeat with the new string. For example, the sequence H1 H1 T1 would give you the number $0.110$.

$1/7$ has a repeating expansion, and writing the right-hand sides in base 2 we have that:

$1/7 = 0.001 001 001 ...$
$2/7 = 0.010 010 010 ...$
$3/7 = 0.011 011 011 ...$
$4/7 = 0.100 100 100 ...$
$5/7 = 0.101 101 101 ...$
$6/7 = 0.110 110 110 ...$

Keep flipping the fair coin to generate binary digits until your sequence of digits no longer matches the beginning of any of the above sequences, then choose the number $n$ such that your generated number is less than $n/7$ and greater than $(n-1)/7$. Since the generated number is equally likely to fall in each of these seven intervals, we have chosen a number between $1$ and $7$ with equal probability.
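A hedged sketch (R) of this scheme, assuming the fair bit is produced from the biased coin exactly as described (HT gives 1, TH gives 0, anything else re-flips); the integer bookkeeping used to decide when the generated prefix has left all of the expansions above is my own, not part of the original answer.

fair_bit <- function(p) {
  repeat {
    f <- rbinom(2, 1, p)                     # two biased flips, 1 = heads
    if (f[1] != f[2]) return(f[1])           # HT -> 1, TH -> 0, else re-flip
  }
}
draw_1_to_7 <- function(p) {
  a <- 0; m <- 0                             # prefix so far represents the interval [a/2^m, (a+1)/2^m)
  repeat {
    a <- 2 * a + fair_bit(p); m <- m + 1
    k <- 1:6                                 # does any k/7 still lie inside the current interval?
    if (!any(7 * a < k * 2^m & k * 2^m < 7 * (a + 1))) {
      return(floor(7 * a / 2^m) + 1)         # interval sits strictly inside ((n-1)/7, n/7)
    }
  }
}
table(replicate(1e4, draw_1_to_7(0.3)))      # roughly uniform over 1..7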
3,981
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
Inspired by AdamO's answer, here is a Python solution that avoids bias:

import random

def roll(p, n):
    remaining = range(1, n+1)
    flips = 0
    while len(remaining) > 1:
        # keep the numbers whose flip came up 'H' this round
        round_winners = [c for c in remaining
                         if random.choices(['H', 'T'], [p, 1.0-p]) == ['H']]
        flips += len(remaining)
        if len(round_winners) > 0:
            remaining = round_winners
        p = 1.0 - p  # swap the roles of heads and tails every round
    return remaining[0], flips

There are two main changes here: the main one is that if all the numbers are discarded in a round, the round is repeated. Also, I flip the choice of whether heads or tails means discard every round. This reduces the number of flips needed in cases where p is close to 0 or 1 (by ~70% when p = 0.999).
3,982
Brain teaser: How to generate 7 integers with equal probability using a biased coin that has a pr(head) = p?
It appears we are allowed to change the mapping of the outcome of each flip, every time we flip. So, using for convenience the first seven positive integers, we give the following orders:

1st flip: map $H \to 1$
2nd flip: map $H \to 2$
...
7th flip: map $H \to 7$
8th flip: map $H \to 1$
etc.

Repeat, always in batches of 7 flips. Map the $T$ outcomes to nothing.

SOME REMARKS ON EFFICIENCY

Our RNG, index it by $AP$, will generate zero useful outcomes in one 7-flip batch if we get $T$ in all 7 flips. So $$P_{AP}(\text{no integers generated}) = (1-p)^7$$ As we run $N_b$ 7-flip batches, the total number of useless flips will tend to $$\text{Count}_{AP}(\text{useless flips}) \to 7\cdot N_b(1-p)^7$$ Consider now the RNG of @DilipSarwate. There, we use a binomial $B(p,n=5)$ and 5-flip batches. The seven outcomes that generate an integer each have probability $p^3(1-p)^2$ of occurring, so, in a 5-flip batch $$P_{DS}(\text{no integers generated}) = 1-7\cdot p^3(1-p)^2$$ The count of useless flips will here tend to $$\text{Count}_{DS}(\text{useless flips}) \to 5\cdot N_b\cdot [1-7\cdot p^3(1-p)^2]$$ For the $AP$ RNG to tend to produce fewer useless flips, it must be the case that $$ \text{Count}_{AP}(\text{useless flips}) < \text{Count}_{DS}(\text{useless flips})$$ $$\implies 7\cdot N_b(1-p)^7 < 5\cdot N_b\cdot [1-7\cdot p^3(1-p)^2]$$ $$\implies 7\cdot (1-p)^7 < 5\cdot [1-7\cdot p^3(1-p)^2]$$ Numerical examination shows that if $p>0.0467$, then the $AP$ RNG generates fewer useless flips. We also find that the number of useless flips is monotonically decreasing in $p$ for the $AP$ RNG, while for the $DS$ RNG it has a minimum at around $p\approx 0.5967$ and then increases again, while in general it stays high. The ratio $$\frac{\text{Count}_{AP}(\text{useless flips})}{\text{Count}_{DS}(\text{useless flips})}$$ declines pretty quickly. For example it is equal to $0.67$ for $p=0.1$, equal to $0.3$ for $p=0.2$, and equal to $0.127$ for $p=0.4$.
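A quick numeric check (R) of the last inequality above; the grid and the use of R are mine, the answer itself only reports the result.

p  <- seq(0.001, 0.999, by = 0.001)
ap <- 7 * (1 - p)^7                    # expected useless flips per batch, AP scheme
ds <- 5 * (1 - 7 * p^3 * (1 - p)^2)    # expected useless flips per batch, DS scheme
min(p[ap < ds])                        # just under 0.05, in line with the threshold quoted above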
3,983
Does 10 heads in a row increase the chance of the next toss being a tail?
they are trying to assert that [...] if there have been 10 heads, then the next in the sequence will more likely be a tail because statistics says it will balance out in the end

There's only a "balancing out" in a very particular sense.

If it's a fair coin, then it's still 50-50 at every toss. The coin cannot know its past. It cannot know there was an excess of heads. It cannot compensate for its past. Ever. It just goes on randomly being heads or tails with a constant chance of a head. If $n_H$ is the number of heads in $n=n_H+n_T$ tosses ($n_T$ is the number of tails), for a fair coin, $n_H/n_T$ will tend to 1 as $n_H+n_T$ goes to infinity ... but $|n_H-n_T|$ doesn't go to 0. In fact, it also goes to infinity!

That is, nothing acts to make them more even. The counts don't tend toward "balancing out". On average, the imbalance between the count of heads and tails actually grows!

Here's the result of 100 sets of 1000 tosses, with the grey traces showing the number of heads minus the number of tails at every step.

The grey traces (representing $n_H-n_T$) are a Bernoulli random walk. If you think of a particle moving up or down the y-axis by a unit step (randomly with equal probability) at each time-step, then the distribution of the position of the particle will 'diffuse' away from 0 over time. It still has 0 expected value, but its expected distance from 0 grows as the square root of the number of time steps. [Note for anyone thinking "is he talking about expected absolute difference or the RMS difference" -- actually either: for large $n$ the first is $\sqrt{2/\pi}\approx$ 80% of the second.]

The blue curve above is at $\pm \sqrt{n}$ and the green curve is at $\pm 2\sqrt{n}$. As you see, the typical distance between total heads and total tails grows. If there were anything acting to 'restore to equality' - to 'make up for' deviations from equality - the counts wouldn't tend to grow further apart like that. (It's not hard to show this algebraically, but I doubt that would convince your friend. The critical part is that the variance of a sum of independent random variables is the sum of the variances $<$see the end of the linked section$>$ -- every time you add another coin flip, you add a constant amount onto the variance of the sum... so the variance must grow proportionally with $n$. Consequently the standard deviation increases with $\sqrt{n}$. The constant that gets added to the variance at each step in this case happens to be 1, but that's not crucial to the argument.)

Equivalently, $\frac{|n_H-n_T|}{n_H+n_T}$ does go to $0$ as the total number of tosses goes to infinity, but only because $n_H+n_T$ goes to infinity a lot faster than $|n_H-n_T|$ does. That means that if we divide that cumulative difference by $n$ at each step, it curves in -- the typical absolute difference in count is of the order of $\sqrt{n}$, but the typical absolute difference in proportion must then be of the order of $1/\sqrt{n}$.

That's all that's going on. The increasingly-large* random deviations from equality are just "washed out" by the even bigger denominator.

* increasing in typical absolute size

See the little animation in the margin, here.

If your friend is unconvinced, toss some coins. Every time you get, say, three heads in a row, get him or her to nominate a probability for a head on the next toss (that's less than 50%) that he thinks must be fair by his reasoning.
Ask them to give you the corresponding odds (that is, he or she must be willing to pay a bit more than 1:1 if you bet on heads, since they insist that tails is more likely). It's best if it's set up as a lot of bets each for a small amount of money. (Don't be surprised if there's some excuse as to why they can't take up their half of the bet -- but it does at least seem to dramatically reduce the vehemence with which the position is held.)

[However, all this discussion is predicated on the coin being fair. If the coin wasn't fair (50-50), then a different version of the discussion - based around deviations from the expected proportion-difference - would be required. Having 10 heads in 10 tosses might make you suspicious of the assumption of p=0.5. A well tossed coin should be close to fair - weighted or not - but may in fact still exhibit a small but exploitable bias, especially if the person exploiting it is someone like Persi Diaconis. Spun coins, on the other hand, may be quite susceptible to bias due to more weight on one face.]
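A minimal sketch (R) of the kind of simulation described above - 100 sequences of 1000 fair tosses, tracing heads minus tails, with the $\pm\sqrt{n}$ and $\pm 2\sqrt{n}$ reference curves. The plotting details are my own choices and the original figure may differ.

set.seed(1)
n <- 1000
plot(NULL, xlim = c(1, n), ylim = c(-4, 4) * sqrt(n),
     xlab = "toss number", ylab = "heads minus tails")
for (i in 1:100) {
  lines(cumsum(ifelse(rbinom(n, 1, 0.5) == 1, 1, -1)), col = "grey")   # one random walk per sequence
}
curve(sqrt(x), add = TRUE, col = "blue");    curve(-sqrt(x), add = TRUE, col = "blue")
curve(2*sqrt(x), add = TRUE, col = "green"); curve(-2*sqrt(x), add = TRUE, col = "green")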
3,984
Does 10 heads in a row increase the chance of the next toss being a tail?
The confusion is because he is looking at the probability from the start, without looking at what else has already happened. Let's simplify things:

First flip: T

Now the chance of a T was 50%, so 0.5. The chance that the next flip will be T again is 0.5:

TT 0.5
TF 0.5

However, what about the first flip? If we include that, then:

TT 0.25
TF 0.25

The remaining 50% is starting with F, and again has an even split between T and F.

To extend this out to ten tails in a row - the probability that you already got that is 1/1024. The probability that the next one is T or F is 50%. So the chance, from the start, of 11 tails is 1 in 2048. The probability that, having already flipped tails 10 times, the next flip will also be a tail is still 50%.

They are trying to apply the unlikeliness of the 1 in 1024 odds of 10 T to the chance of another T, when in fact that has already happened, so the probability of it happening is no longer important. 11 tails in a row are no more or less likely than 10 tails followed by one head. Getting 11 tails in a row is unlikely, but since the unlikely part has already happened, it doesn't matter any more!
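For what it's worth, a small enumeration check (R) of this reasoning, using the same T/F labels as above; the code itself is mine, not part of the original answer.

seqs <- expand.grid(rep(list(c("T", "F")), 11))            # all 2^11 equally likely sequences
ten_tails <- seqs[rowSums(seqs[, 1:10] == "T") == 10, ]    # those starting with ten T
table(ten_tails[, 11]) / nrow(ten_tails)                   # the 11th flip: T and F each 0.5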
3,985
Does 10 heads in a row increase the chance of the next toss being a tail?
The odds are still 50-50 that the next flip will be tails. Very simple explanation: The odds of flipping 10 heads + 1 tail in that order are very low. But by the time you've flipped 10 heads, you've already beaten most of the odds... you have a 50-50 chance of finishing the sequence with the next coin flip.
3,986
Does 10 heads in a row increase the chance of the next toss being a tail?
You should try convincing them that if the previous results were to impact the upcoming tosses then not only the last 10 tosses should have been taken into consideration, but also every previous toss in the coin life. I think it's a more logical approach.
3,987
Does 10 heads in a row increase the chance of the next toss being a tail?
To add to earlier answers, there are two issues here. First, what happens when the coin is actually fair and each toss is independent of all other tosses. Then we have the "law of large numbers", saying that in the limit of an ever increasing sequence of tosses, the frequency of tails will approach the probability of a tail, that is, $1/2$. If the first ten tosses all were tails, then the limiting frequency will still be one half, without any need for later tosses to "balance out" the first ten tails!

Algebraically, let $x_n$ be the number of tails among the throws $11, 12, \dots, n+10$. Let's assume that actually we get
$$ \lim_{n \rightarrow \infty} x_n/n =1/2 $$
Then when taking into account the first ten tosses, we will still have the limit
$$ \lim_{n \rightarrow \infty} \frac{10+x_n}{n+10}= 1/2 $$
That is, after one million and ten tosses, we have that
$$ \frac{10+500000}{1000010} \approx 0.5 $$
so, in the limit, the first 10 tails don't matter at all; their effect is "washed out" by all the later tosses. So, there is no need of "balancing out" for the limit result to hold. Mathematically, this is just using the fact that the limit (if it exists ...) of any sequence of numbers does not depend at all on any finite, initial segment! So we can arbitrarily assign the results for the first ten (or first hundred) tosses without affecting the limit, at all. I guess this way of explaining it to your gambler friends (maybe with more numbers & examples and less algebra ...) might be the best way.

The other aspect is: after ten tosses giving ten tails, maybe somebody starts to doubt if the coin is a good one, i.e. if it corresponds to the simple, ordinary model of independent, equal-probability tosses. Assuming the "tosser" (the person doing the tossing) hasn't been trained to control the tosses in some way, and is really tossing in an honest way, the probability of a tail must be one half (see this Gelman paper.) So then there must be, in the alternative hypothesis, some dependence among the coin tosses! And, after seeing ten tails in a row, the evidence is that the dependence is a positive one, so that one tail increases the probability that the next coin toss will be a tail. But then, after that analysis, the reasonable conclusion is that the probability of the eleventh toss being a tail is increased, not lowered! So the conclusion, in that case, is the opposite of your gambler friends'. I think you would need a very strange model indeed to justify their conclusions.
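A minimal numeric sketch (R) of the "washing out" argument above; the sample sizes and seed are my own choices.

set.seed(1)
n <- c(1e2, 1e3, 1e4, 1e5, 1e6)
# overall frequency of tails when ten forced tails are followed by n fair tosses
sapply(n, function(m) (10 + sum(rbinom(m, 1, 0.5))) / (m + 10))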
3,988
Does 10 heads in a row increase the chance of the next toss being a tail?
This isn't really an answer - your problem is psychological, not mathematical. But it may help. I often face your "how the hell ..." question. The answers here - mostly correct - are too mathematical for the people you're addressing. One place I start is to try to convince them that flipping one coin 10 times is essentially the same as flipping 10 coins simultaneously. They can grasp the fact that sometimes you'll see 10 heads. In fact that happens about once in a thousand tries (since $2^{10} \approx 10^3$). If 15,000 people try this then about 30 of them will think they have special coins - either all heads or all tails. If they accept this argument then the step to sequential tosses is a little easier.
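A one-line check (R) of that arithmetic, using the numbers quoted above:

15000 * 2 / 2^10   # expected number of all-heads or all-tails results, about 29.3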
3,989
Does 10 heads in a row increase the chance of the next toss being a tail?
Assuming coin flips are independent, this is very easy to prove from one statistician to another. However, your friend seems to not believe that coin flips are independent. Other than throwing around words that are synonymous with independent (for example, the coin doesn't have a "memory") you can't prove to him that coin flips are independent with a mere word argument. I would suggest using simulation to assert your claim but, to be honest, if your friend doesn't believe coin flips are independent I'm not sure he'll believe simulation results.
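Since simulation is suggested here, a minimal sketch (R) of what such a check could look like; the sample size and seed are arbitrary choices of mine.

set.seed(1)
flips <- matrix(rbinom(11 * 1e6, 1, 0.5), ncol = 11)   # one million runs of 11 fair flips, 1 = heads
eleventh <- flips[rowSums(flips[, 1:10]) == 10, 11]    # keep runs whose first ten flips are all heads
length(eleventh)                                       # roughly a thousand such runs
mean(eleventh)                                         # close to 0.5: no drift towards tails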
3,990
Does 10 heads in a row increase the chance of the next toss being a tail?
To restate some of the explanations already given (by @TimB and @James K), once you've flipped a coin 10 times and got 10 heads, the probability of getting 10 heads in a row is exactly 1.0! It's already happened, so the probability of that happening is now fixed. When you multiply that by the probability of getting heads on your next flip (0.5), you get exactly 0.5. Betting on tails with anything other than even odds at that point is a sucker's bet.
3,991
Does 10 heads in a row increase the chance of the next toss being a tail?
Let's say I'm convinced that the coin is fair. If the coin were fair, then the probability of getting 10 heads in a row would be $$p_{10}=\left(\frac{1}{2}\right)^{10}=\frac{1}{1024}<0.1\%$$ So, as a frequentist at the $\alpha=1\%$ significance level, I must reject $H_0$: "the coin is fair" and conclude that $H_a$: "something's fishy" is true. No, I can't insist that the probability of seeing another head is still $\frac{1}{2}$. I'll leave it to you to apply the Bayesian approach and reach a similar conclusion: you'll start with a prior probability of heads $p=\frac{1}{2}$, update it with the observation of 10 heads in a row, and see that the posterior probability of heads is $\pi>\frac{1}{2}$.

UPDATE

@oerkelens' example can be interpreted in two ways.

1. Your friend bet on THHTTHTTHT, then tossed a coin 10 times and got: THHTTHTTHT. In this case you'll be as surprised as with 10 heads in a row, and start doubting the fairness of the coin. You're not sure what to think about the probability of a tail on the next toss, because your friend seems able to get exactly what he wants; this is not random.

2. You tossed a coin 10 times and observed some combination which happened to be THHTTHTTHT. You'll notice there were 6 tails and 4 heads, which has probability $p=\frac{10!}{6!\,4!\,2^{10}}\approx 0.2$, which is unremarkable. Hence the probability of a tail on the next toss is probably $\frac{1}{2}$, since there is no reason to doubt the coin's fairness.

Also, one could argue that although 0.001 is a small probability, if you toss 10 coins 100,000 times you're bound to see a few 10-head combinations. True, but in this case you have 1 million coin tosses in total, and you're looking for at least one 10-head combination in the sequence. The frequentist probability of observing at least one 10-head combination is $$1-\left(1-\frac{1}{2^{10}}\right)^{100{,}000}\approx 1$$ So, the frequentist will conclude, after long months of tossing a coin 1 million times and observing a 10-head combination, that it's no big deal, things happen. He will not make any adjustment to his expectation about the probability of the next head, and will leave it at 0.5.

FOR COMPUTER PEOPLE

If your friends are computer programmers, then I have found that the easiest way to appeal to their intuition is through programming. Ask them to program the coin-toss experiment. They'll think a little, then come up with something like this (MATLAB/Octave):

    for i = 1:11
        if rand() > 0.5
            c = 'H';
        else
            c = 'T';
        end
        fprintf('%s', c)
    end
    disp '.'

with output such as

    THTHTHTHHHT.

You'll ask them: where is your code for handling 10 heads in a row? It appears that in your code, regardless of what happened in the first 10 loops, the 11th toss has probability 0.5 of heads. However, this appeals to a fair coin toss: the code is designed around a fair coin. After 10 heads in a row, though, it is highly unlikely that the coin is fair.
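A sketch of the Bayesian update that this answer leaves as an exercise (my addition; the uniform Beta(1, 1) prior is an arbitrary choice):

    a0 <- 1; b0 <- 1                 # Beta(1, 1) prior on the probability of heads
    heads <- 10; tails <- 0          # the observed run of 10 heads
    a1 <- a0 + heads                 # conjugate update: posterior is Beta(11, 1)
    b1 <- b0 + tails
    a1 / (a1 + b1)                   # posterior mean of P(heads) = 11/12, well above 1/2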
3,992
Does 10 heads in a row increase the chance of the next toss being a tail?
Under ideal circumstances the answer is no. Each throw is independent of what came before. So if this is a truly fair coin, then it does not matter. But if you are unsure whether the coin is faulty (which could happen in real life), a long sequence of tails may lead one to believe that it is unfair.
3,993
Does 10 heads in a row increase the chance of the next toss being a tail?
This answer will work for all questions of this sort, including the Monty Hall problem. Simply ask them what they think the odds are of getting a tail after ten heads. Offer to play them for slightly better (to them) but still under 50-50 odds. With any luck they will agree to let a computer do the flipping in which case you will quickly have a sum of money in your pocket. Otherwise it will take longer but the result is (inevitably) the same.
3,994
Does 10 heads in a row increase the chance of the next toss being a tail?
How would you convince them? One way is to show the distribution of outcomes from the exact problem described.

    # 1,000,000 observations
    numObservations <- 1e+6
    # 11 coin tosses per sample
    numCoinTosses <- 11
    sampledCoinTosses <- matrix(sample(c(-1, 1), numObservations * numCoinTosses, replace = TRUE),
                                nrow = numObservations, ncol = numCoinTosses)
    # Append the sum of the first ten tosses as an extra column
    sampledCoinTosses <- cbind(sampledCoinTosses,
                               apply(sampledCoinTosses[, 1:(numCoinTosses - 1)], 1, sum))
    # Where the sum of the first ten tosses is 10, all ten were heads (coded as +1)
    tenHeadsObservations <- sampledCoinTosses[which(sampledCoinTosses[, numCoinTosses + 1] == 10), ]
    # The summary of the 11th toss shows how close its average value is to 0
    summary(tenHeadsObservations[, numCoinTosses])
3,995
Does 10 heads in a row increase the chance of the next toss being a tail?
Try it like this: assume that we already have $10$ head tosses -- a very, very rare event, with probability $0.5^{10}$ of "being there". Now we prepare for one more toss and think ahead to what might happen next: if tails, the remarkable part of the record is still the extremely rare run we have already seen, with probability $0.5^{10}$; if heads, the probability of the whole series is somewhat smaller, but not that much smaller: $0.5^{11}$. The difference between the two is just one fair coin toss.
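Making the last step of this argument explicit (my addition, not part of the original answer): the conditional probability is just the ratio of the two series probabilities,

$$P(\text{11th toss is heads} \mid \text{first 10 are heads}) = \frac{0.5^{11}}{0.5^{10}} = 0.5,$$

which is exactly the probability of one fair coin toss.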
3,996
How to derive the ridge regression solution?
It suffices to modify the loss function by adding the penalty. In matrix terms, the initial quadratic loss function becomes $$ (Y - X\beta)^{T}(Y-X\beta) + \lambda \beta^T\beta.$$ Differentiating with respect to $\beta$ and setting the result to zero leads to the normal equation $$ X^{T}Y = \left(X^{T}X + \lambda I\right)\beta, $$ which is solved by the Ridge estimator $$ \hat\beta_{\text{ridge}} = \left(X^{T}X + \lambda I\right)^{-1}X^{T}Y. $$
3,997
How to derive the ridge regression solution?
Let's build on what we know, which is that whenever the $n\times p$ model matrix is $X$, the response $n$-vector is $y$, and the parameter $p$-vector is $\beta$, the objective function $$f(\beta) = (y - X\beta)^\prime(y - X\beta)$$ (which is the sum of squares of residuals) is minimized when $\beta$ solves the Normal equations $$(X^\prime X)\beta = X^\prime y.$$ Ridge regression adds another term to the objective function (usually after standardizing all variables in order to put them on a common footing), asking to minimize $$(y - X\beta)^\prime(y - X\beta) + \lambda \beta^\prime \beta$$ for some non-negative constant $\lambda$. It is the sum of squares of the residuals plus a multiple of the sum of squares of the coefficients themselves (making it obvious that it has a global minimum). Because $\lambda\ge 0$, it has a positive square root $\nu^2 = \lambda$. Consider the matrix $X$ augmented with rows corresponding to $\nu$ times the $p\times p$ identity matrix $I$: $$X_{*} = \pmatrix{X \\ \nu I}$$ When the vector $y$ is similarly extended with $p$ zeros at the end to $y_{*}$, the matrix product in the objective function adds $p$ additional terms of the form $(0 - \nu \beta_i)^2 = \lambda \beta_i^2$ to the original objective. Therefore $$(y_{*} - X_{*}\beta)^\prime(y_{*} - X_{*}\beta) = (y - X\beta)^\prime(y - X\beta) + \lambda \beta^\prime \beta.$$ From the form of the left hand expression it is immediate that the Normal equations are $$(X_{*}^\prime X_{*})\beta = X_{*}^\prime y_{*}.$$ Because we adjoined zeros to the end of $y$, the right hand side is the same as $X^\prime y$. On the left hand side $\nu^2 I=\lambda I$ is added to the original $X^\prime X$. Therefore the new Normal equations simplify to $$(X^\prime X + \lambda I)\beta = X^\prime y.$$ Besides being conceptually economical--no new manipulations are needed to derive this result--it also is computationally economical: your software for doing ordinary least squares will also do ridge regression without any change whatsoever. (It nevertheless can be helpful in large problems to use software designed for this purpose, because it will exploit the special structure of $X_{*}$ to obtain results efficiently for a densely spaced interval of $\lambda$, enabling you to explore how the answers vary with $\lambda$.) Another beauty of this way of looking at things is how it can help us understand ridge regression. When we want to really understand regression, it almost always helps to think of it geometrically: the columns of $X$ constitute $p$ vectors in a real vector space of dimension $n$. By adjoining $\nu I$ to $X$, thereby prolonging them from $n$-vectors to $n+p$-vectors, we are embedding $\mathbb{R}^n$ into a larger space $\mathbb{R}^{n+p}$ by including $p$ "imaginary", mutually orthogonal directions. The first column of $X$ is given a small imaginary component of size $\nu$, thereby lengthening it and moving it out of the space generated by the original $p$ columns. The second, third, ..., $p^\text{th}$ columns are similarly lengthened and moved out of the original space by the same amount $\nu$--but all in different new directions. Consequently, any collinearity present in the original columns will immediately be resolved. Moreover, the larger $\nu$ becomes, the more these new vectors approach the individual $p$ imaginary directions: they become more and more orthogonal. Consequently, the solution of the Normal equations will immediately become possible and it will rapidly become numerically stable as $\nu$ increases from $0$. 
This description of the process suggests some novel and creative approaches to addressing the problems Ridge Regression was designed to handle. For instance, using any means whatsoever (such as the variance decomposition described by Belsley, Kuh, and Welsch in their 1980 book on Regression Diagnostics, Chapter 3), you might be able to identify subgroups of nearly collinear columns of $X$, where each subgroup is nearly orthogonal to any other. You only need adjoin as many rows to $X$ (and zeros to $y$) as there are elements in the largest group, dedicating one new "imaginary" dimension for displacing each element of a group away from its siblings: you don't need $p$ imaginary dimensions to do this. A method of performing this was proposed by Rolf Sundberg some 30 years ago. See Continuum Regression and Ridge Regression. Journal of the Royal Statistical Society. Series B (Methodological) Vol. 55, No. 3 (1993), pp. 653-659.
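A small numerical illustration in R of the augmented-data equivalence described in this answer (my own sketch, not part of the original answer; it assumes no intercept, already comparable columns, and arbitrary simulated data with $\lambda = 2$):

    set.seed(17)                                   # arbitrary seed for reproducibility
    n <- 50; p <- 3; lambda <- 2                   # arbitrary sample size, predictors, penalty
    X <- matrix(rnorm(n * p), n, p)
    y <- X %*% c(1, -1, 0.5) + rnorm(n)
    # Ridge solution straight from the Normal equations
    beta_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
    # Ordinary least squares on the augmented data (nu = sqrt(lambda)), with no intercept
    X_star <- rbind(X, sqrt(lambda) * diag(p))
    y_star <- c(y, rep(0, p))
    beta_aug <- coef(lm(y_star ~ X_star - 1))
    cbind(beta_ridge, beta_aug)                    # the two columns coincide

The point of the check is only that an off-the-shelf least-squares routine, fed the augmented $X_{*}$ and $y_{*}$, reproduces the ridge coefficients.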
3,998
How to derive the ridge regression solution?
The derivation involves matrix calculus, which can be quite tedious. We would like to solve the following problem: \begin{equation} \min_\beta \; (Y-X\beta)^T(Y-X\beta)+\lambda \beta^T \beta \end{equation} Now note that \begin{equation} \frac{\partial (Y-X\beta)^T (Y-X\beta)}{\partial \beta}=-2X^T(Y-X\beta) \end{equation} and \begin{equation} \frac{\partial \lambda \beta^T \beta}{\partial \beta}=2\lambda\beta. \end{equation} Setting the derivative of the whole objective to zero, we get the first-order condition \begin{equation} X^TY = X^TX\beta + \lambda\beta. \end{equation} Isolating $\beta$ yields the solution: \begin{equation} \hat\beta = (X^TX+ \lambda I )^{-1}X^T Y. \end{equation}
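As a sanity check on this derivation (my addition, with arbitrary simulated data and $\lambda = 1.5$), one can minimize the penalized criterion numerically and compare the result with the closed form:

    set.seed(1)                                    # arbitrary seed
    n <- 40; p <- 2; lambda <- 1.5                 # arbitrary sizes and penalty
    X <- matrix(rnorm(n * p), n, p)
    y <- X %*% c(2, -1) + rnorm(n)
    ridge_loss <- function(b) sum((y - X %*% b)^2) + lambda * sum(b^2)
    beta_numeric <- optim(rep(0, p), ridge_loss, method = "BFGS")$par   # numerical minimizer
    beta_closed  <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
    cbind(beta_numeric, beta_closed)               # agree up to optimizer tolerance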
3,999
How to derive the ridge regression solution?
I have recently stumbled upon the same question in the context of P-splines, and as the concept is the same I want to give a more detailed answer on the derivation of the ridge estimator. We start with a penalized criterion function that differs from the classic OLS criterion function by the penalization term in the last summand:

$Criterion_{Ridge} = \sum_{i=1}^{n}(y_i-x_i^T\beta)^2 + \lambda \sum_{j=1}^p\beta_j^2$

where

- $p$ = the number of covariates used in the model,
- $x_i^T\beta$ = your standard linear predictor,
- the first summand represents the sum of squared errors (squared divergence of the prediction from the actual value) that we want to minimize as usual,
- the second summand represents the penalization we apply to the coefficients.

Here we are in the Ridge context, which implies a Euclidean distance measure and therefore the degree of 2 in the penalization term. In the case of a Lasso penalization we would apply a degree of 1 and obtain a totally different estimator. We can rewrite this criterion in matrix notation and further break it down:

$Criterion_{Ridge} = (y-X\beta)^T(y-X\beta) + \lambda\beta^T\beta$

$ = y^Ty - \beta^TX^Ty - y^TX\beta + \beta^TX^TX\beta + \lambda\beta^T\beta$

$ = y^Ty - \beta^TX^Ty - \beta^TX^Ty + \beta^TX^TX\beta + \beta^T\lambda I\beta$ with $I$ being the identity matrix

$ = y^Ty - 2\beta^TX^Ty + \beta^T(X^TX + \lambda I)\beta$

Now we search for the $\beta$ that minimizes our criterion. Among other things, we make use of the matrix differentiation rule $\frac{\partial x^TAx}{\partial x} = (A+A^T)x \overset{\text{A symmetric}}{=} 2Ax$, which we can apply here because $(X^TX + \lambda I) \in \mathbb{R}^{p \times p}$ is symmetric:

$\frac{\partial Criterion_{Ridge} }{\partial\beta} = -2X^Ty + 2(X^TX + \lambda I)\beta \overset{!}{=}0$

$(X^TX + \lambda I)\beta = X^Ty$

$\overset{\text{et voilà}}{\Rightarrow} \hat\beta = (X^TX + \lambda I)^{-1} X^Ty$
4,000
How to derive the ridge regression solution?
There are a few important things missing from the answers already given. The solution for $\beta$ is derived from the first-order necessary condition $\frac{\partial f_{ridge}(\beta, \lambda)}{\partial \beta} = 0$, which yields $\beta = (X^TX+ \lambda I )^{-1}X^T Y$. But is this sufficient? That is, the stationary point is guaranteed to be the global minimum if $f_{ridge}(\beta, \lambda)$ is strictly convex, and this can be shown to be true (a short argument is sketched below). Another way to look at the problem is to see the equivalence between $f_{ridge}(\beta, \lambda)$ and $f_{OLS}(\beta) = (Y-X\beta)^T(Y-X\beta)$ constrained to $||\beta||^2_2 \leq t$, where OLS stands for Ordinary Least Squares. From this perspective $f_{ridge}(\beta, \lambda)$ is just the Lagrangian function used to find the global minimum of the convex objective function $f_{OLS}(\beta)$ under the convex constraint $||\beta||^2_2 \leq t$. A good explanation of these points and the derivation of $\beta$ can be found in these fine lecture notes: http://math.bu.edu/people/cgineste/classes/ma575/p/w14_1.pdf
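To make the convexity claim concrete, here is a short argument (my addition, not part of the original answer or the linked notes). The Hessian of the ridge criterion is

$$\nabla^2_\beta f_{ridge}(\beta, \lambda) = 2\left(X^TX + \lambda I\right), \qquad v^T\left(X^TX + \lambda I\right)v = \|Xv\|_2^2 + \lambda\|v\|_2^2 > 0 \quad \text{for all } v \neq 0,\; \lambda > 0,$$

so for $\lambda > 0$ the criterion is strictly convex and the stationary point $\hat\beta = (X^TX + \lambda I)^{-1}X^TY$ is its unique global minimizer.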