53,101 | Riddler puzzle - distance from origin after two random jumps of equal length
That puzzle was asking for the mode, not the expected distance. Here is how I solved it.
Without loss of generality, assume that the cricket starts at (1,0) and jumps to the origin. The question is then about the density of distances from (1,0) to any point on the unit circle.
Let $\theta \sim \operatorname{U}(0, 1)$, so $p(\theta) = 1$. We are asked about the mode of the distribution of
$$l^2 = 2 - 2\cos(\pi \theta) \implies l = \sqrt{2(1-\cos(\pi \theta))} $$
Here, I have used the law of cosines and the fact that each jump is exactly 1 unit long, and I am only considering the top half of the circle (hence $\pi \theta$) so that $l$ is a monotonically increasing function on [0,1]. We know the distribution of $\theta$, so we can use some well-known results from probability theory to derive the distribution of $l$.
$$g(\theta) = 1$$
$$f(l) = g(\theta(l)) \Big\vert \dfrac{d\theta}{dl} \Big\vert $$
$$ \theta(l) = \dfrac{1}{\pi}\arccos \Big( \dfrac{l^2}{2}-1 \Big) $$
$$ \Big\vert \dfrac{d\theta}{dl} \Big\vert = \frac{2 l}{\pi \sqrt{(2l-l^2)(2l+l^2)}} $$
So the density for $l$ is "simply"
$$f(l) = \frac{2 l}{\pi \sqrt{(2l-l^2)(2l+l^2)}}$$
We can plot this density and notice that the majority of the probability mass is concentrated near 2.
So although the density is unbounded at 2, it appears that 2 is the mode. Additionally, this is a proper probability density:
$$ \int_0^2 f(l) \, dl = \theta(0) - \theta(2) = \dfrac{1}{\pi}\pi - \dfrac{1}{\pi}\cdot 0 = 1 $$
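(Not part of the original solution, but a quick way to check the algebra: the short R simulation below draws the jump angle, computes the resulting distance, and overlays the derived density on a histogram. The sample size, seed and plotting choices are arbitrary.)
# Simulation check of the derived density
set.seed(1)
theta <- runif(1e5)                                       # theta ~ U(0,1), angle = pi*theta
l <- sqrt(2*(1 - cos(pi*theta)))                          # distance via the law of cosines
f <- function(l) 2*l/(pi*sqrt((2*l - l^2)*(2*l + l^2)))   # derived density f(l)
hist(l, breaks = 100, freq = FALSE, main = "Simulated jump distance")
curve(f(x), from = 0.01, to = 1.99, add = TRUE, col = "red", lwd = 2)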
Another elegant way to consider this problem is to plot points around the circle and color them by their distance, like so:
If we plot these along the unit interval, we notice that the points are quite literally most dense around 2.
Credit where credit is due: Thank you to Corey Yanofsky for correcting my math on this and reminding me that this change of variables approach only applies when $l(\theta)$ is monotonic. Thank you to Jake Westfall (who is an active member on this forum) for the very clever plots of dots around the unit circle. That was all his idea, not mine.
53,102 | How to determine by what percent the target variable will change if we change a variable by some percent in Linear Regression?
Let's say we have fitted a model such as:
$$ y = x_1 + x_2 + \epsilon$$
and we obtained estimates giving the following equation:
$$ \hat{y} = 2x_1 + 3x_2 $$
Thus for a 1 unit change in $x_1$ we expect a change of 2 units of $y$.
It is not possible to get the expected % change in $y$ from a % change in $x$ unless the variables are log-transformed prior to running the model; in a log-log model the coefficients can be interpreted directly as elasticities (the % change in $y$ per 1% change in $x$).
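As a small illustrative sketch (not from the original answer), here is what the log-log version looks like in R; the data are simulated with a true elasticity of 0.5, and all names and values are made up for illustration:
# In a log-log model the slope is the elasticity
set.seed(1)
x <- runif(200, 1, 10)
y <- exp(0.5*log(x) + rnorm(200, sd = 0.1))   # true elasticity = 0.5
fit <- lm(log(y) ~ log(x))
coef(fit)[["log(x)"]]                         # should be close to 0.5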
53,103 | How to determine by what percent the target variable will change if we change a variable by some percent in Linear Regression?
We can back out the answer with a little auxiliary information about the covariates.
Your linear model is probably something like $$E[y \vert x, z] = \hat y= \hat \alpha + \hat \beta \cdot x +\hat \delta \cdot z.$$
What is the change in the expected value of $y$ associated with a 1 unit change in $x$? We can easily get that from the derivative: $$\frac{\Delta \hat y}{\Delta x} =\hat \beta.$$
We can then turn that into an elasticity:
$$\epsilon =\frac{100\cdot\frac{\Delta \hat y}{\hat y}}{100 \cdot \frac{\Delta x}{x} }= \frac{\% \Delta \hat y}{\% \Delta x}= \hat \beta \cdot \frac{x}{\hat y}=\frac{\partial \hat y} {\partial x} \cdot \frac{x}{\hat y}.$$
In other words, to get an elasticity, we just need to multiply the regression coefficient on $x$ by the value of $x$ itself over the prediction of $y$. Note that this is a function of the covariates and can vary across observations. We need some way to summarize the individual elasticities.
There are several ways to put this into practice, but the most common is to take the average in the sample:
$$\bar \epsilon =\frac{1}{N} \sum_{i=1}^N \hat \beta \cdot \frac{x_i}{\hat y_i}.$$
Some software can do this for us, which makes the calculation of the standard errors much easier. Here's an example in Stata:
. sysuse auto, clear
(1978 automobile data)
. regress price mpg foreign
Source | SS df MS Number of obs = 74
-------------+---------------------------------- F(2, 71) = 14.07
Model | 180261702 2 90130850.8 Prob > F = 0.0000
Residual | 454803695 71 6405685.84 R-squared = 0.2838
-------------+---------------------------------- Adj R-squared = 0.2637
Total | 635065396 73 8699525.97 Root MSE = 2530.9
------------------------------------------------------------------------------
price | Coefficient Std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
mpg | -294.1955 55.69172 -5.28 0.000 -405.2417 -183.1494
foreign | 1767.292 700.158 2.52 0.014 371.2169 3163.368
_cons | 11905.42 1158.634 10.28 0.000 9595.164 14215.67
------------------------------------------------------------------------------
. /* canned */
. margins, eyex(mpg)
Average marginal effects Number of obs = 74
Model VCE: OLS
Expression: Linear prediction, predict()
ey/ex wrt: mpg
------------------------------------------------------------------------------
| Delta-method
| ey/ex std. err. t P>|t| [95% conf. interval]
-------------+----------------------------------------------------------------
mpg | -1.238224 .3721885 -3.33 0.001 -1.980347 -.4961013
------------------------------------------------------------------------------
. /* by hand */
. predict yhat, xb
. generate elasticity = -294.1955 *(mpg/yhat)
. summarize elasticity
Variable | Obs Mean Std. dev. Min Max
-------------+---------------------------------------------------------
elasticity | 74 -1.238224 1.060581 -7.488722 -.4215304
This means that a 1% increase in mpg is associated with a 1.2% decrease in price.
If we don't have the raw data, but have some summary statistics on the covariates and the coefficients, we can plug those into the first formula instead of averaging. The answers won't match exactly but are usually reasonably close.
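If you work in R rather than Stata, the "by hand" step translates directly. The sketch below is purely illustrative: it uses mtcars (since Stata's auto data is not built into R) and arbitrarily chosen variables, and simply averages the observation-level elasticities as in the formula above:
# Average elasticity of mpg with respect to wt, computed "by hand" in R
data(mtcars)
fit <- lm(mpg ~ wt + am, data = mtcars)
b_wt <- coef(fit)[["wt"]]
yhat <- fitted(fit)
elasticity <- b_wt * mtcars$wt / yhat   # observation-level elasticities
mean(elasticity)                        # sample-average elasticity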
53,104 | Autoencoders as dimensionality reduction tools..?
Yes, dimension reduction is one way to use auto-encoders. Consider a feed-forward fully-connected auto-encoder with an input layer, 1 hidden layer with $k$ units, 1 output layer and all linear activation functions. The latent space of this auto-encoder spans the first $k$ principal components of the original data. This can be useful if you want to represent the input in fewer features, but don't necessarily care about the orthogonality constraint in PCA. (More information: What're the differences between PCA and autoencoder?)
But auto-encoders allow a number of variations on this basic theme, giving you more options for how the latent space should be constructed than does PCA.
Using CNN layers instead of FFNs is clearly a different kind of model compared to PCA, so it will encode a different kind of information in the latent space.
Using nonlinear activation functions will also yield a different kind of latent encoding than PCA (because PCA is linear).
Likewise sparse, contractive or variational auto-encoders have different goals than PCA and will give different results, which can be helpful depending on what problem you're trying to solve.
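To make the PCA connection at the top of this answer concrete, here is a minimal sketch (not from the original answer) of a one-hidden-layer linear auto-encoder trained by plain gradient descent in R; the data, dimensions and learning settings are all illustrative. After training, the $k$-dimensional latent codes should span approximately the same subspace as the first $k$ principal components:
set.seed(1)
n <- 500; p <- 10; k <- 2
X <- matrix(rnorm(n*k), n, k) %*% matrix(rnorm(k*p), k, p) + matrix(rnorm(n*p, sd = 0.1), n, p)
X <- scale(X, scale = FALSE)                   # centre the data
W_enc <- matrix(rnorm(p*k, sd = 0.1), p, k)    # encoder weights (p -> k)
W_dec <- matrix(rnorm(k*p, sd = 0.1), k, p)    # decoder weights (k -> p)
lr <- 0.01
for (i in 1:5000) {
  H <- X %*% W_enc                             # latent codes (n x k)
  E <- H %*% W_dec - X                         # reconstruction error
  W_dec <- W_dec - lr * t(H) %*% E / n         # gradient step for the decoder
  W_enc <- W_enc - lr * t(X) %*% (E %*% t(W_dec)) / n   # gradient step for the encoder
}
# canonical correlations near 1 indicate the latent space matches the top-k PC subspace
cancor(X %*% W_enc, prcomp(X)$x[, 1:k])$cor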
53,105 | Autoencoders as dimensionality reduction tools..?
Yes, using some form of autoencoder training/pre-training to create good features has been a successful approach in many areas. E.g. for tabular data, using a denoising autoencoder was the winning approach in a recent Kaggle competition. Autoencoder pre-training (recovering masked features) has been used in the TabNet paper, which gained a decent amount of attention. These first two examples were for tabular data, but similar things have long been done (and are still popular) for vision applications as discussed here. However, as you may notice, a lot of the most successful versions of autoencoders for pretraining representations have been ones that were trained to reconstruct corrupted inputs into the uncorrupted inputs.
Another thing you can use autoencoders for is file compression (e.g. for video calls: what is the minimal information that still produces a video that looks decent to humans but does not take a lot of bandwidth to transmit?).
53,106 | Autoencoders as dimensionality reduction tools..?
As you mentioned, auto-encoders can be used for dimensionality reduction. One of the nice things about them is that they are an unsupervised learning method. If you have a large volume of unlabeled data and a small volume of labeled data, you can train your auto-encoder on the large unlabeled dataset to get a robust representation of your data and then either train something lighter weight on your embeddings or use transfer learning on your encoder.
There are also a number of other applications of auto-encoders, though. For example, auto-encoders can be used for anomaly detection. Since the auto-encoder represents the common patterns in the dataset more faithfully than unusual ones, the reconstruction error of "normal" data tends to be lower than that of "anomalies". The reconstruction error can then be used to identify anomalous data.
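A toy sketch of the reconstruction-error idea (not from the original answer): here a linear reconstruction via PCA stands in for a trained auto-encoder, and all data and settings are made up for illustration. The anomalies, which lie off the subspace occupied by the "normal" data, get clearly larger reconstruction errors:
set.seed(1)
n <- 300; p <- 10
normal  <- matrix(rnorm(n*2), n, 2) %*% matrix(rnorm(2*p), 2, p) + matrix(rnorm(n*p, sd = 0.1), n, p)
anomaly <- matrix(rnorm(5*p, sd = 2), 5, p)    # points off the "normal" subspace
pca <- prcomp(normal)
rot <- pca$rotation[, 1:2]                     # 2-D "latent space"
recon_error <- function(Xnew) {
  Xc   <- scale(Xnew, center = pca$center, scale = FALSE)
  Xhat <- (Xc %*% rot) %*% t(rot)              # encode then decode
  rowSums((Xc - Xhat)^2)
}
summary(recon_error(normal))   # small errors for "normal" data
recon_error(anomaly)           # noticeably larger errors for the anomalies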
Another application for them can be grouping or clustering similar data. Data with similar properties will tend to have similar embeddings, so two inputs with embeddings that are similar are likely to have similar content. You can then apply traditional clustering methods to the pairwise embedding distances to find structure in your dataset or identify similar instances as in recommender systems. The downside is that the "similarity" between the inputs is not necessarily related to features that are important to a user (images of two clothing items may be similar because they are both red rather than because they are both dresses). If particular aspects of similarity are important, you may be better off using triplet loss or a Siamese network. However, both of those methods require some additional label information, whereas, once again, auto-encoders are unsupervised, so they are an option even when no labels are available or "similarity" is hard to define.
I'm sure there are plenty of other applications; these are just the ones I've used them for. That is one of the fun things about unsupervised methods: you can be creative with them without getting stuck waiting for annotations or cleaning bad labels.
53,107 | Embedding data into a larger dimension space
As Forrest mentioned, embedding data into a higher dimension (sometimes called basis expansion) is a common method which allows a linear classifier to observe a non-linear input space. Examples are using the RBF kernel with an SVM or polynomial expansion with linear regression.
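As a small sketch of the second example (not from the original answer), with simulated data and an arbitrarily chosen degree:
# A plain linear fit cannot capture a sine-shaped relationship; a polynomial basis expansion can
set.seed(1)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.2)
fit_linear <- lm(y ~ x)              # original 1-D input
fit_poly   <- lm(y ~ poly(x, 5))     # input embedded into a 5-D polynomial basis
c(linear = summary(fit_linear)$r.squared, poly = summary(fit_poly)$r.squared)   # poly should be far higher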
However, basis expansion isn't always beneficial. For a classification task, if your data is already linearly separable, the only thing basis expansion will do is increase model complexity and training time. For a regression task, basis expansion can lead to overfitting the training data if regularization is not imposed, effectively fitting the model to noise in the training set.
And certain models don't benefit from basis expansion at all. For example deep neural networks will have no noticeable benefit from embedding the inputs into a higher dimension space. This is because the (non-linear) hidden layers of a network can be seen as performing basis expansion where the specific higher dimension embedding is learnt by the network during training.
So as a summary, the merit of embedding data in a higher dimension depends on what model you are using and the properties of your data. Embedding data in a higher dimension prior to using a linear model is commonly used to try to introduce linear separability. Embedding data in a higher dimension is also something that occurs implicitly in some models, such as SVMs using the kernel trick or neural networks with non-linear activations.
53,108 | Embedding data into a larger dimension space
are there possible benefits for embedding into a space of larger or same dimension?
In Vector Symbolic Architectures (also known as Hyperdimensional Computing) this is essential. VSAs use algebraic operations (sum, product, permutation) on vectors to encode information in their directions. VSAs can be used to represent discrete data structures such as graphs as well as continuous properties (like brightness) as attributes of the discrete structure. They rely on the properties of high-dimensional spaces to work. You're unlikely to come across a VSA working in a less than 500-d space. 10,000-d is typical.
Low dimensional input values are projected up to the VSA dimensionality, choosing a method that preserves the input properties of interest. Note that although individual components (like a pixel brightness) are being massively expanded in dimensionality, the dimensionality of the entire structure being encoded (say a 500 x 500 x RGB image) may be being reduced to a single VSA encoding (i.e. one vector represents the entire image).
For an introduction, see http://www.rctn.org/vs265/kanerva09-hyperdimensional.pdf
they have examples that embed a 5 dimensional space into 24 dimensions
In such low dimensional spaces they are very unlikely to be using a VSA. But the point remains, as also stated in other answers: expanding the basis might be useful - it depends on what you are trying to do.
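For a feel of the operations mentioned above, here is a tiny R sketch in the style of the multiply-add-permute family of VSAs (bipolar hypervectors, binding by element-wise multiplication, bundling by addition); the symbols and dimensionality are illustrative only:
d <- 10000
rand_vec <- function() sample(c(-1, 1), d, replace = TRUE)   # random bipolar hypervector
cosine <- function(a, b) sum(a*b)/sqrt(sum(a^2)*sum(b^2))
colour <- rand_vec(); shape <- rand_vec()                    # role vectors
red <- rand_vec(); circle <- rand_vec()                      # filler vectors
record <- sign(colour*red + shape*circle)                    # bind role-filler pairs, then bundle
probe <- record*colour                                       # unbinding gives a noisy copy of the filler
cosine(probe, red)      # markedly higher similarity
cosine(probe, circle)   # near zero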
53,109 | Embedding data into a larger dimension space
I suggest that the datapoints of the 5-dimensional dataset are first classified into 13 types of classes (orbits), where these classes (orbits) have the following cardinalities: 1, 10, 32, 40, 80, 160, 240, 320, 480, 640, 960, 1920, 3840. These 13 types of classes are generated by the intrinsic properties of the automorphisms of Z^5, where Z^5 is the 5-D integer lattice. For other dimensions please have a look at the OEIS sequence A270950. Use the infinity norm of the datapoint to create a 2-D table where the columns are the cardinalities and the rows the infinity norm. Each cell in the table represents the number of classes (orbits) with the chosen cardinality and containing datapoints having the same infinity norm. These tables are mathematical invariants; they only have to be calculated once and can be used for any 5-D dataset.
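If I read the construction correctly, these 13 cardinalities are the possible orbit sizes of a point of Z^5 under coordinate permutations and sign changes (the lattice automorphisms referred to above). The small R helper below is written for this answer rather than taken from it, and computes the orbit size of a given point together with its infinity norm:
orbit_size <- function(v) {            # orbit size under coordinate permutations and sign flips
  a <- abs(v); nz <- a[a > 0]
  k <- length(nz)                      # number of nonzero coordinates
  z <- length(v) - k                   # number of zero coordinates
  mult <- as.vector(table(nz))         # multiplicities of the distinct absolute values
  2^k * factorial(length(v)) / (factorial(z) * prod(factorial(mult)))
}
orbit_size(c(0, 0, 0, 0, 0))   # 1
orbit_size(c(1, 0, 0, 0, 0))   # 10
orbit_size(c(1, 1, 0, 0, 0))   # 40
orbit_size(c(1, 2, 3, 4, 5))   # 3840
max(abs(c(1, -2, 0, 0, 3)))    # infinity norm, used as the row index of the table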
53,110 | Programming inverse-transformation sampling for Pareto distribution
I'm assuming you are referring to the Inverse Transform Sampling method. It's very straightforward. Refer to the Wiki article and this site.
Pareto CDF is given by:
$$
F(x) = 1 -\left(\frac{k}{x}\right)^\gamma; \quad x\ge k>0 \ \text{and} \ \gamma>0
$$
All you do is equate it to a uniform random variable and solve for $x$:
$$
F(x) = u \\
u \sim \operatorname{Uniform}(0,1) \\
1 -\left(\frac{k}{x}\right)^\gamma = u \\
x = k(1-u)^{-1/\gamma}
$$
Now in R:
#N = number of samples, g = shape (gamma), k = scale (minimum)
rpar <- function(N, g, k){
  if (k <= 0 || g <= 0){
    stop("both k and g must be > 0")
  }
  k*(1-runif(N))^(-1/g)
}
rand_pareto <- rpar(1e5,5, 16)
hist(rand_pareto, 100, freq = FALSE)
#verify using the built-in random variate rpareto in package extraDistr
x <- (extraDistr::rpareto(1e5,5, 16))
hist(x, 100, freq = FALSE)
This will give you the random variates for the Pareto distribution. I'm not sure where you are getting gammaroot from?
53,111 | Programming inverse-transformation sampling for Pareto distribution
Setting the quantile level $p=F(x)$ and inverting the CDF equation gives the quantile function:
$$Q(p) = \frac{k}{(1-p)^{1/\gamma}}
\quad \quad \quad \text{for all } 0 \leqslant p \leqslant 1.$$
The corresponding log-quantile function is:
$$\log Q(p) = \log k - \frac{1}{\gamma} \log (1-p)
\quad \quad \quad \text{for all } 0 \leqslant p \leqslant 1.$$
The probability functions for the Pareto distribution are already available in R (see e.g., the EnvStats package). However, it is fairly simple to program this function from scratch if preferred. Here is a vectorised version of the function.
qpareto <- function(p, scale, shape = 1, lower.tail = TRUE, log.p = FALSE) {
  #Check input p
  if (!is.vector(p))  { stop('Error: p must be a numeric vector') }
  if (!is.numeric(p)) { stop('Error: p must be a numeric vector') }
  if (min(p) < 0)     { stop('Error: p must be between zero and one') }
  if (max(p) > 1)     { stop('Error: p must be between zero and one') }

  #Compute log-quantiles
  n <- length(p)
  OUT <- numeric(n)
  for (i in 1:n) {
    P <- ifelse(lower.tail, 1-p[i], p[i])
    OUT[i] <- log(scale) - log(P)/shape }
  if (log.p) OUT else exp(OUT) }
Once you have the quantile function it is simple to generate random variables using inverse-transformation sampling. Again, this is already done in the existing Pareto functions in R, but if you want to program it from scratch, that is quite simple to do.
rpareto <- function(n, scale, shape = 1) {
  #Check input n
  if (!is.vector(n))      { stop('Error: n must be a single positive integer') }
  if (!is.numeric(n))     { stop('Error: n must be a single positive integer') }
  if (length(n) != 1)     { stop('Error: n must be a single positive integer') }
  if (as.integer(n) != n) { stop('Error: n must be a single positive integer') }
  if (n <= 0)             { stop('Error: n must be a single positive integer') }

  qpareto(runif(n), scale = scale, shape = shape) }
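For completeness, a quick sanity check of the two functions above (with arbitrary parameter values), comparing empirical quantiles of the simulated draws against the theoretical quantile function:
set.seed(1)
x <- rpareto(1e5, scale = 16, shape = 5)
quantile(x, c(0.5, 0.9, 0.99))                       # empirical quantiles of the draws
qpareto(c(0.5, 0.9, 0.99), scale = 16, shape = 5)    # theoretical quantiles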
53,112 | How to choose between an overfit model and a non-overfit model?
First of all, you need to choose before the final test. The purpose of the final test is to measure/estimate generalization error for the already chosen model.
If you choose again based on the test set, you either
need to restrict yourself to not claim any generalization error. I.e. you can say that your optimization heuristic yielded model x, but you cannot give an estimate of generalization error for model x (you can only give your test set accuracy as training error since such a selection is part of training)
or you need to get another test set that is independent of the whole training procedure including selecting between your two candidate models, and then measure generalization error for the final chosen model with this third test set.
Secondly, you need to make sure that the more overfit model actually outperforms the less overfit model in the test: Test set results do have random uncertainty, and this is known to be large for figures of merit like accuracy which are proportions of tested cases. This means that substantial numbers of tested cases are required to guide such a decision between two models based on accuracy.
In the example, a difference such as the one in the question can easily need several thousand test cases to be significant (this depends on the actual distribution of correct/wrong predictions for both models, and on whether only those 2 models are compared).
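As a rough illustration (the accuracies here are made up, since the question's actual numbers are not repeated in this answer), base R can give a feel for the required test set size; for paired predictions on the same test set, McNemar's test on the 2x2 table of (model 1 correct/wrong) x (model 2 correct/wrong) would be the more appropriate comparison:
# test cases needed per model (treated as independent proportions) to detect 85% vs 88% accuracy
power.prop.test(p1 = 0.85, p2 = 0.88, sig.level = 0.05, power = 0.8)   # on the order of 2000 per model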
Other figures of merit, in particular proper scoring rules, are much better suited to guide selection decisions. They also often have less random uncertainty than proportions.
If model 2 turns out not to be significantly better*, I'd recommend choosing the less complex/less overfit model 1.
Essentially this is also the heuristic behind the one-standard-deviation rule: when uncertain, choose the less complex model.
* Strictly speaking, significance only tells us the probability of observing at least such a difference if there is really no difference in the performance [or if model 2 is really no better than model 1], while we'd like to decide based on the probability that model 2 is better than model 1 - which we cannot access without further information or assumptions about the pre-test probability of model 2 being better than model 1.
Nevertheless, accounting for this test set size uncertainty via significance is a big step in the right direction.
53,113 | How to choose between an overfit model and a non-overfit model?
This is impossible to answer without more information. Class balance, tolerance for false positive/negative results, etc. are important factors in deciding if a model is fit for production.
I've seen models with a very high accuracy score poorly on something like MCC due to most of the predictions being wrong on the minority class, which in our case was the most important class to get right.
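A toy illustration of that point, with made-up confusion-matrix counts for an imbalanced problem (the numbers are illustrative only):
TP <- 5; FN <- 45; FP <- 5; TN <- 945          # minority class mostly missed
accuracy <- (TP + TN)/(TP + TN + FP + FN)      # 0.95, looks great
mcc <- (TP*TN - FP*FN)/sqrt((TP + FP)*(TP + FN)*(TN + FP)*(TN + FN))   # about 0.21, tells a different story
c(accuracy = accuracy, MCC = mcc)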
In any case, look at the confusion matrix and ask yourself how each model is doing relative to your specific use case and tolerance for error. Maybe that will give you a better intuition.
53,114 | How to choose between an overfit model and a non-overfit model?
Overfit or not, you should pick the one with the highest test accuracy, conditional on the fact that you have truly kept your test data separate. I would be tempted to find more unseen test data to double check that it has truly generalised well to new data.
53,115 | How to check the correlation between categorical and numeric independent variable in R? [duplicate]
There are several ways to determine correlation between a categorical and a continuous variable. However, I found only one way to calculate a 'correlation coefficient', and that only works if your categorical variable is dichotomous.
If your categorical variable is dichotomous (only two values), then you can use the point-biserial correlation. There is a function to do this in the ltm package.
library(ltm)
# weakly correlated example
set.seed(123)
x <- rnorm(100)
y <- factor(sample(c("A", "B"), 100, replace = TRUE))
biserial.cor(x, y)
[1] -0.07914586
# strongly correlated example
biserial.cor(mtcars$mpg, mtcars$am)
[1] -0.5998324
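A side note not in the original answer: the point-biserial correlation is just the Pearson correlation computed with the dichotomous variable coded numerically, so cor() returns the same magnitude (the sign depends on which level is treated as the reference group):
cor(mtcars$mpg, mtcars$am)   # same magnitude as the biserial.cor value above, opposite sign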
You could do a logistic regression and use various evaluations of it (accuracy, etc.) in place of a correlation coefficient. Again, this works best if your categorical variable is dichotomous.
# weakly correlated
set.seed(123)
x <- rnorm(100)
y <- factor(sample(c("A", "B"), 100, replace = TRUE))
logit <- glm(y ~ x, family = "binomial")
# Accuracy
sum(round(predict(logit, type = "response")) == as.numeric(y)) / length(y)
[1] 0.15
# Sensitivity
sum(round(predict(logit, type = "response")) == as.numeric(y) & as.numeric(y) == 1) /
sum(as.numeric(y))
[1] 0.1013514
# Precision
sum(round(predict(logit, type = "response")) == as.numeric(y) & as.numeric(y) == 1) /
sum(round(predict(logit, type = "response") == 1))
[1] Inf
# strongly correlated
mt_logit <- glm(am~mpg, data = mtcars, family = "binomial")
mt_pred <- round(predict(mt_logit, type = "response"))
# Accuracy
sum(mt_pred == mtcars$am) / nrow(mtcars)
[1] 0.75
# Sensitivity
sum(mt_pred == mtcars$am & mtcars$am == 1) /
sum(mtcars$am)
[1] 0.5384615
# Precision
sum(mt_pred == mtcars$am & mtcars$am == 1) /
sum(mt_pred == 1)
[1] 0.7777778
Again, if your categorical variable is dichotomous, you could do the two-sample Wilcoxon rank sum test. The wilcox.test() function is available in base R. This is the non-parametric analogue of the two-sample t-test (the Kruskal-Wallis test further below plays the same role for the ANOVA).
# weakly correlated
set.seed(123)
x <- rnorm(100)
y <- factor(sample(c("A", "B"), 100, replace = TRUE))
df <- data.frame(x = x, y = y)
wilcox.test(df$x[which(df$y == "A")], df$x[which(df$y == "B")])
Wilcoxon rank sum test with continuity correction
data: df$x[which(df$y == "A")] and df$x[which(df$y == "B")]
W = 1243, p-value = 0.9752
alternative hypothesis: true location shift is not equal to 0
# strongly correlated
wilcox.test(mtcars$mpg[which(mtcars$am == 1)],
mtcars$mpg[which(mtcars$am == 0)], exact = FALSE) # exact = FALSE because there are ties
Wilcoxon rank sum test with continuity correction
data: mtcars$mpg[which(mtcars$am == 1)] and mtcars$mpg[which(mtcars$am == 0)]
W = 205, p-value = 0.001871
alternative hypothesis: true location shift is not equal to 0
You could also just do an ANOVA on your logit model from earlier.
# weakly correlated
anova(logit)
Analysis of Deviance Table
Model: binomial, link: logit
Response: y
Terms added sequentially (first to last)
Df Deviance Resid. Df Resid. Dev
NULL 99 138.47
x 1 0.62819 98 137.84
# strongly correlated
anova(mt_logit)
Analysis of Deviance Table
Model: binomial, link: logit
Response: am
Terms added sequentially (first to last)
Df Deviance Resid. Df Resid. Dev
NULL 31 43.230
mpg 1 13.555 30 29.675
If your categorical variable is not dichotomous, you can use the Kruskal-Wallis test.
# weakly correlated
set.seed(123)
x <- rnorm(100)
y <- factor(sample(c("A", "B", "C"), 100, replace = TRUE))
kruskal.test(x~y)
Kruskal-Wallis rank sum test
data: x by y
Kruskal-Wallis chi-squared = 0.62986, df = 2, p-value = 0.7298
# strongly correlated
kruskal.test(mpg ~ cyl, data = mtcars)
Kruskal-Wallis rank sum test
data: mpg by cyl
Kruskal-Wallis chi-squared = 25.746, df = 2, p-value = 2.566e-06
Finally, you can just inspect your data visually using some boxplots. If your data are weakly correlated, there will be a lot of overlap between the boxes.
library(ggplot2)
# weakly correlated
set.seed(123)
y <- rnorm(100)
x <- factor(sample(c("A", "B", "C"), 100, replace = TRUE))
df <- data.frame(x = x, y = y)
ggplot(df) + geom_boxplot(aes(x, y))
# strongly correlated
ggplot(mtcars) + geom_boxplot(aes(x = factor(cyl), y = mpg))
53,116 | Generating uniform points inside an $m$-dimensional ball [duplicate] | A simple and efficient method for this problem uses a variation of the well-known Box-Muller transform, which connects the normal distribution to the uniform distribution on a ball. If we generate a random vector $\mathbf{Z} = (Z_1,...,Z_m)$ composed of IID standard normal random variables and a random variable $U \sim \text{U}(0,1)$ (independent of the first random vector) then we can construct the uniform point of interest as:
$$\mathbf{X} = \mathbf{c} + r \cdot U^{1/m} \cdot \frac{\mathbf{Z}}{||\mathbf{Z}||}.$$
In the code below we create an R function called runifball which implements this method. The function allows the user to generate n random vectors that are points on a ball with arbitrary centre, radius and dimension.
runifball <- function(n, centre = 0, center = centre, radius = 1) {
#Check inputs
if (!missing(centre) && !missing(center)) {
if (sum((centre - center)^2) < 1e-15) {
warning("specify 'centre' or 'center' but not both") } else {
stop("Error: specify 'centre' or 'center' but not both") } }
if (radius < 0) { stop("Error: radius must be non-negative") }
#Create output matrix
m <- length(center)
OUT <- matrix(0, nrow = m, ncol = n)
rownames(OUT) <- sprintf("x[%s]", 1:m)
#Generate uniform radial factors U ~ U(0,1) and Gaussian directions
UU <- runif(n, min = 0, max = 1)
ZZ <- matrix(rnorm(n*m), nrow = m, ncol = n)
for (i in 1:n) {
OUT[, i] <- center + radius*UU[i]^(1/m)*ZZ[, i]/sqrt(sum(ZZ[, i]^2)) }
OUT }
Here is an example using this function to generate random points uniformly over a two-dimensional disk. The plot shows that the points are indeed uniform over the specified ball.
#Generate points uniformly on a disk
set.seed(1)
n <- 10^5
CENTRE <- c(5, 3)
RADIUS <- 3
UNIF <- runifball(n, centre = CENTRE, radius = RADIUS)
#Plot the points
plot(UNIF,
col = rgb(0, 0, 0, 0.05), pch = 16, asp = 1,
main = 'Points distributed uniformly over a circle', xlab = 'x', ylab = 'y')
points(x = CENTRE[1], y = CENTRE[2], col = 'red', pch = 16) | Generating uniform points inside an $m$-dimensional ball [duplicate] | A simple and efficient method for this problem uses a variation of the well-known Box-Muller transform, which connects the normal distribution to the uniform distribution on a ball. If we generate a | Generating uniform points inside an $m$-dimensional ball [duplicate]
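As a quick numerical sanity check (not part of the original answer), one can verify that the points have the correct radial distribution: for a uniform distribution on the $m$-ball, $(\text{distance}/r)^m$ should be Uniform(0,1). Using the UNIF, CENTRE and RADIUS objects from the example above (where $m = 2$):
#Optional check of the radial distribution (not in the original answer)
DIST2 <- colSums((UNIF - CENTRE)^2)   #squared distances of each point from the centre
ks.test(DIST2/RADIUS^2, "punif")      #(distance/RADIUS)^2 should look Uniform(0,1)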
A simple and efficient method for this problem uses a variation of the well-known Box-Muller transform, which connects the normal distribution to the uniform distribution on a ball. If we generate a random vector $\mathbf{Z} = (Z_1,...,Z_m)$ composed of IID standard normal random variables and a random variable $U \sim \text{U}(0,1)$ (independent of the first random vector) then we can construct the uniform point of interest as:
$$\mathbf{X} = \mathbf{c} + r \cdot U^{1/m} \cdot \frac{\mathbf{Z}}{||\mathbf{Z}||}.$$
In the code below we create an R function called runifball which implements this method. The function allows the user to generate n random vectors that are points on a ball with arbitrary centre, radius and dimension.
runifball <- function(n, centre = 0, center = centre, radius = 1) {
#Check inputs
if (!missing(centre) && !missing(center)) {
if (sum((centre - center)^2) < 1e-15) {
warning("specify 'centre' or 'center' but not both") } else {
stop("Error: specify 'centre' or 'center' but not both") } }
if (radius < 0) { stop("Error: radius must be non-negative") }
#Create output matrix
m <- length(center)
OUT <- matrix(0, nrow = m, ncol = n)
rownames(OUT) <- sprintf("x[%s]", 1:m)
#Generate uniform radial factors U ~ U(0,1) and Gaussian directions
UU <- runif(n, min = 0, max = 1)
ZZ <- matrix(rnorm(n*m), nrow = m, ncol = n)
for (i in 1:n) {
OUT[, i] <- center + radius*UU[i]^(1/m)*ZZ[, i]/sqrt(sum(ZZ[, i]^2)) }
OUT }
Here is an example using this function to generate random points uniformly over a two-dimensional disk. The plot shows that the points are indeed uniform over the specified ball.
#Generate points uniformly on a disk
set.seed(1)
n <- 10^5
CENTRE <- c(5, 3)
RADIUS <- 3
UNIF <- runifball(n, centre = CENTRE, radius = RADIUS)
#Plot the points
plot(UNIF,
col = rgb(0, 0, 0, 0.05), pch = 16, asp = 1,
main = 'Points distributed uniformly over a circle', xlab = 'x', ylab = 'y')
points(x = CENTRE[1], y = CENTRE[2], col = 'red', pch = 16) | Generating uniform points inside an $m$-dimensional ball [duplicate]
A simple and efficient method for this problem uses a variation of the well-known Box-Muller transform, which connects the normal distribution to the uniform distribution on a ball. If we generate a
53,117 | Generating uniform points inside an $m$-dimensional ball [duplicate] | The simplest and least error-prone approach - for low dimensions (see below!) - would still be rejection sampling: pick uniformly distributed points from the $m$-dimensional hypercube circumscribing the sphere, then reject all that fall outside the ball.
runifball <- function(n, centre = 0, center = centre, radius = 1) {
#Check inputs
if (!missing(centre) && !missing(center)) {
if (sum((centre - center)^2) < 1e-15) {
warning("specify 'centre' or 'center' but not both") } else {
stop("Error: specify 'centre' or 'center' but not both") } }
if (radius < 0) { stop("Error: radius must be non-negative") }
n_to_generate <- 2^length(center)*gamma(length(center)/2+1)*n/pi^(length(center)/2) # see below
original_sample_around_origin <-
matrix(replicate(length(center),runif(n_to_generate ,-radius,radius)),nrow=n_to_generate )
index_to_keep <- rowSums(original_sample_around_origin^2)<radius^2
original_sample_around_origin[index_to_keep,]+
matrix(center,nrow=sum(index_to_keep),ncol=length(center),byrow=TRUE)
}
Here is an application for the $m=2$-dimensional disk:
#Generate points uniformly on a disk
set.seed(1)
n <- 10^5
CENTRE <- c(5, 3)
RADIUS <- 3
UNIF <- runifball(n, centre = CENTRE, radius = RADIUS)
#Plot the points
plot(UNIF,
col = rgb(0, 0, 0, 0.05), pch = 16, asp = 1,
main = 'Points distributed uniformly over a circle', xlab = 'x', ylab = 'y')
points(x = CENTRE[1], y = CENTRE[2], col = 'red', pch = 16)
Once again, we will need to originally generate more points, because we will reject some. Specifically, we expect to keep $\frac{\pi^\frac{m}{2}}{2^m\Gamma(\frac{m}{2}+1)}$, which is the ratio of the volume of the $m$-dimensional ball to the volume of the $m$-dimensional hypercube circumscribing it. So we can either start by generating $\frac{2^m\Gamma(\frac{m}{2}+1)n}{\pi^\frac{m}{2}}$ and expect to end up with $n$ points (this is the approach the code above takes), or just start generating until we have kept $n$.
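For example (numbers added here for concreteness, not in the original answer), in $m=2$ dimensions the acceptance rate is $\pi/4 \approx 0.785$, so about $1.27$ candidate points are needed per accepted point; by $m=10$ the rate has fallen to roughly $0.0025$, i.e. about $400$ candidates per accepted point.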
In either case, the number of points we originally need to draw in the hypercube in order to (expect to) end up with a single point in the ball rises quickly with increasing dimensionality $m$:
(Note the logarithmic vertical axis!)
m <- 2:20
plot(m,2^m*gamma(m/2+1)/pi^(m/2),type="o",pch=19,log="y",
xlab="Dimension (m)")
This is just a consequence of the fact that for large $m$, most of the volume of the $m$-dimensional hypercube is in the corners, not in the center (where the ball is). So rejection sampling is likely only an option for low dimensions. | Generating uniform points inside an $m$-dimensional ball [duplicate] | The simplest and least error-prone approach - for low dimensions (see below!) - would still be rejection sampling: pick uniformly distributed points from the $m$-dimensional hypercube circumscribing t | Generating uniform points inside an $m$-dimensional ball [duplicate]
The simplest and least error-prone approach - for low dimensions (see below!) - would still be rejection sampling: pick uniformly distributed points from the $m$-dimensional hypercube circumscribing the sphere, then reject all that fall outside the ball.
runifball <- function(n, centre = 0, center = centre, radius = 1) {
#Check inputs
if (!missing(centre) && !missing(center)) {
if (sum((centre - center)^2) < 1e-15) {
warning("specify 'centre' or 'center' but not both") } else {
stop("Error: specify 'centre' or 'center' but not both") } }
if (radius < 0) { stop("Error: radius must be non-negative") }
n_to_generate <- 2^length(center)*gamma(length(center)/2+1)*n/pi^(length(center)/2) # see below
original_sample_around_origin <-
matrix(replicate(length(center),runif(n_to_generate ,-radius,radius)),nrow=n_to_generate )
index_to_keep <- rowSums(original_sample_around_origin^2)<radius^2
original_sample_around_origin[index_to_keep,]+
matrix(center,nrow=sum(index_to_keep),ncol=length(center),byrow=TRUE)
}
Here is an application for the $m=2$-dimensional disk:
#Generate points uniformly on a disk
set.seed(1)
n <- 10^5
CENTRE <- c(5, 3)
RADIUS <- 3
UNIF <- runifball(n, centre = CENTRE, radius = RADIUS)
#Plot the points
plot(UNIF,
col = rgb(0, 0, 0, 0.05), pch = 16, asp = 1,
main = 'Points distributed uniformly over a circle', xlab = 'x', ylab = 'y')
points(x = CENTRE[1], y = CENTRE[2], col = 'red', pch = 16)
Once again, we will need to originally generate more points, because we will reject some. Specifically, we expect to keep $\frac{\pi^\frac{m}{2}}{2^m\Gamma(\frac{m}{2}+1)}$, which is the ratio of the volume of the $m$-dimensional ball to the volume of the $m$-dimensional hypercube circumscribing it. So we can either start by generating $\frac{2^m\Gamma(\frac{m}{2}+1)n}{\pi^\frac{m}{2}}$ and expect to end up with $n$ points (this is the approach the code above takes), or just start generating until we have kept $n$.
In either case, the number of points we originally need to draw in the hypercube in order to (expect to) end up with a single point in the ball rises quickly with increasing dimensionality $m$:
(Note the logarithmic vertical axis!)
m <- 2:20
plot(m,2^m*gamma(m/2+1)/pi^(m/2),type="o",pch=19,log="y",
xlab="Dimension (m)")
This is just a consequence of the fact that for large $m$, most of the volume of the $m$-dimensional hypercube is in the corners, not in the center (where the ball is). So rejection sampling is likely only an option for low dimensions. | Generating uniform points inside an $m$-dimensional ball [duplicate]
The simplest and least error-prone approach - for low dimensions (see below!) - would still be rejection sampling: pick uniformly distributed points from the $m$-dimensional hypercube circumscribing t |
53,118 | Assumption of normality in a sample size of 3 | Honestly, with a sample size of three, there's really not much you can say at all. Those numbers are more likely to come from a normal distribution than, say, $1$, $2$, and $1000$ are, but there are plenty of non-normal distributions that can produce those numbers.
With regards to a standard deviation being relatively small, that entirely depends on the underlying data. A standard deviation of 5-7 for the number of drinks consumed in a night is big, a standard deviation of 5-7 for total dollar earnings in a year is tiny. | Assumption of normality in a sample size of 3 | Honestly, with a sample size of three, there's really not much you can say at all. Those numbers are more likely to come from a normal distribution than, say, $1$, $2$, and $1000$ are, but there are p | Assumption of normality in a sample size of 3
Honestly, with a sample size of three, there's really not much you can say at all. Those numbers are more likely to come from a normal distribution than, say, $1$, $2$, and $1000$ are, but there are plenty of non-normal distributions that can produce those numbers.
With regards to a standard deviation being relatively small, that entirely depends on the underlying data. A standard deviation of 5-7 for the number of drinks consumed in a night is big, a standard deviation of 5-7 for total dollar earnings in a year is tiny. | Assumption of normality in a sample size of 3
Honestly, with a sample size of three, there's really not much you can say at all. Those numbers are more likely to come from a normal distribution than, say, $1$, $2$, and $1000$ are, but there are p |
53,119 | Assumption of normality in a sample size of 3 | You don't say what null hypothesis you're testing.
If you want to know if your observations indicate that the population mean or median is above $20,$ then a one-sample, one-sided Wilcoxon signed-rank test
cannot give a P-value below $1/8 = 0.125.$
x = c(36, 31, 42)
wilcox.test(x, mu=20, alt="g")$p.val
[1] 0.125
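The reason for this floor: with $n=3$ there are only $2^3 = 8$ equally likely sign configurations under the null hypothesis, so even the most extreme outcome has a one-sided p-value of $1/8$.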
By contrast, if you have prior experience with the kind of mechanism
or population that produced these data, enough to be confident that observations are normally distributed with reasonably small variance, then
a one-sided, one-sample t-test of $H_0: \mu=20$ vs. $H_a: \mu > 20$
will give a highly significant result with a P-value of about $0.018 < 0.05.$
x = c(36, 31, 42)
t.test(x, mu=20, alt="greater")
One Sample t-test
data: x
t = 5.1366, df = 2, p-value = 0.01794
alternative hypothesis: true mean is greater than 20
95 percent confidence interval:
27.04837 Inf
sample estimates:
mean of x
36.33333
However, depending on the plausibility of your assumptions of
normality and small or moderate variance, you may have difficulty
persuading anyone else that the population mean truly exceeds $20.$
If it's an important issue for you and you are the one doing the
experimental work, then you may feel further experimentation is
worthwhile---even if others are sceptical. | Assumption of normality in a sample size of 3 | You don't say what null hypothesis you're testing.
If you want to know if your observations indicate that the population mean or median is above $20,$ then a one-sample, one-sided Wilcoxon signed-rank | Assumption of normality in a sample size of 3
You don't say what null hypothesis you're testing.
If you want to know if your observations indicate that the population mean or median is above $20,$ then a one-sample, one-sided Wilcoxon signed-rank test
cannot give a P-value below $1/8 = 0.125.$
x = c(36, 31, 42)
wilcox.test(x, mu=20, alt="g")$p.val
[1] 0.125
By contrast, if you have prior experience with the kind of mechanism
or population that produced these data, enough to be confident that observations are normally distributed with reasonably small variance, then
a one-sided, one-sample t-test of $H_0: \mu=20$ vs. $H_a: \mu > 20$
will give a highly significant result with a P-value of about $0.018 < 0.05.$
x = c(36, 31, 42)
t.test(x, mu=20, alt="greater")
One Sample t-test
data: x
t = 5.1366, df = 2, p-value = 0.01794
alternative hypothesis: true mean is greater than 20
95 percent confidence interval:
27.04837 Inf
sample estimates:
mean of x
36.33333
However, depending on the plausibility of your assumptions of
normality and small or moderate variance, you may have difficulty
persuading anyone else that the population mean truly exceeds $20.$
If it's an important issue for you and you are the one doing the
experimental work, then you may feel further experimentation is
worthwhile---even if others are sceptical. | Assumption of normality in a sample size of 3
You don't say what null hypothesis you're testing.
If you want to know if your observations indicate that the population mean or median is above $20,$ then a one-sample, one-sided Wilcoxon signed-rank |
53,120 | Assumption of normality in a sample size of 3 | Samples of size 3 cannot be easily used to visually assess whether they might be consistent with having come from a normal population.
Tiny samples actually from a normal distribution may be highly asymmetric.
Symmetry doesn't imply normality. In some cases population symmetry would be very helpful, in others it's far from the most important consideration. For example, in a two-sample test, where you're taking differences, if the two sample means have the same distribution then the difference in means is symmetric, so skewness of the original variable may not be at all consequential for the skewness of the numerator. Further, symmetry of the parent distribution would be insufficient for the properties of the denominator and for the dependence between the two.
Very small samples from quite non-normal distributions can look perfectly consistent with a normal. You will have little power.
I contend that with three observations your assessment of skewness is based on the relative length of the longer and shorter intervals between adjacent observations (in that almost any reasonable measure will be a monotonic function of that ratio).
If the two intervals between adjacent values ($x_{(3)}-x_{(2)}$ and $x_{(2)}-x_{(1)}$) are similar in size you'll "see" symmetry and if they're very different in size you'd interpret that as skewness.
As a measure of symmetry, then, we could take the ratio of the shorter interval to the longer, which will range from 0 - very different sized intervals, from which you'd see asymmetry - to 1 - identical intervals, from which you'd see symmetry.
I could have taken the reciprocal of that (the ratio of longer interval to shorter - which would range from 1 upward) as a measure of skewness, but this has a very highly skewed distribution and isn't so convenient to work with. It's easier to see what's going on with its inverse which has support on the unit interval.
For the purpose of illustration, let's look at three parent distributions; one is skewed (its density is monotonic decreasing, so it's unambiguously skewed), one is symmetric but heavy-tailed (has infinite kurtosis), and one is normal.
I drew 100,000 samples of size 3 from each distribution, and calculated the ratio of the shorter interval to the longer for each sample, then plotted histograms of these 100,000 symmetry measures (I have not identified which is from which distribution, merely numbering them):
As we see, the distributions of this measure are extremely hard to distinguish for those chosen distributions. Imagine you had a sample of size three where the longer interval was 5 times as long as the shorter interval (S/L = 0.2) -- pretty skew, right? Which distribution did it come from? I certainly couldn't do much better than guess at random, and two of the parent distributions these symmetry measures came from are symmetric!
Now let's imagine that the longer interval was about 11% longer than the shorter one (S/L = 0.9) -- very nearly symmetric right? But which of those three distributions did it come from? If I hadn't made the plots I could not hope to know with any degree of confidence which was from the skewed parent.
With extremely skew or extremely heavy-tailed distributions it's somewhat easier to tell the distributions of S/L apart from that for a normal, but it's still not much help, because the distributions aren't typically different enough to reliably tell from a single sample of size 3. If you see a long interval ten or twenty times as long as the short one, you still don't have a reliable indicator of whether it came from a normal distribution or not -- there's still a good chance of seeing a very skewed sample with a normal and of seeing a nearly-symmetric sample with something that very much isn't normal.
I'll leave aside my usual advice about not using the sample to choose your specific hypothesis and end with a simple warning:
Do not use a Mann-Whitney test nor signed rank test with n=3
Because of this difficulty of assessing normality, you will no doubt see people recommending a (Wilcoxon-)Mann-Whitney test or signed rank test as fits the circumstances; such advice is very common but simply disastrous with sample sizes of 3. Besides the obvious lack of power you'd experience with any location test, you lack available significance levels. With two samples of size 3, the smallest possible two sided p-value you can observe is $0.1$. That is, if you're trying to reject p values $\leq 0.05$, you'll never see one, no matter how different the samples are. Unless you're prepared to do a test at the 10% level, you're wasting your time.
The situation is even worse with a signed rank test.
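A quick illustration of both floors (a sketch added here, not part of the original answer; the data values are arbitrary):
# Mann-Whitney with two samples of size 3: even complete separation gives p = 0.1
wilcox.test(c(1, 2, 3), c(100, 200, 300))$p.value
# One-sample signed rank test with n = 3: the smallest two-sided p-value is 0.25
wilcox.test(c(5, 6, 7), mu = 0)$p.value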
Spoiler - no doubt some people will wish to know the details of the distributions involved. To that end:
One of the distributions is a folded normal($\mu=1,\sigma=1$) parameterized the same way as the Wikipedia page; this one has a monotonic decreasing pdf. One is standard normal. One is a $t$ distribution with 3 degrees of freedom. You can always simulate them to see, but even when you know which is which, regarded as members of location-scale families in each case (so only shape information matters for distinguishing them), they're very hard to tell apart with just three observations. | Assumption of normality in a sample size of 3 | Samples of size 3 cannot be easily used to visually assess whether they might be consistent with having come from a normal population.
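For instance, a simulation along these lines (a sketch, not the original code) reproduces the comparison using the three parent distributions just named:
set.seed(1)
nsim <- 1e5
ratio <- function(x) { g <- diff(sort(x)); min(g)/max(g) }   # shorter gap / longer gap
r_folded <- replicate(nsim, ratio(abs(rnorm(3, mean = 1))))  # folded normal(1,1)
r_normal <- replicate(nsim, ratio(rnorm(3)))                 # standard normal
r_t3     <- replicate(nsim, ratio(rt(3, df = 3)))            # t with 3 df
par(mfrow = c(1, 3))
hist(r_folded); hist(r_normal); hist(r_t3)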
Tiny samples actually from a normal distribution may be highly a | Assumption of normality in a sample size of 3
Samples of size 3 cannot be easily used to visually assess whether they might be consistent with having come from a normal population.
Tiny samples actually from a normal distribution may be highly asymmetric.
Symmetry doesn't imply normality. In some cases population symmetry would be very helpful, in others it's far from the most important consideration. For example, in a two-sample test, where you're taking differences, if the two sample means have the same distribution then the difference in means is symmetric, so skewness of the original variable may not be at all consequential for the skewness of the numerator. Further, symmetry of the parent distribution would be insufficient for the properties of the denominator and for the dependence between the two.
Very small samples from quite non-normal distributions can look perfectly consistent with a normal. You will have little power.
I contend that with three observations your assessment of skewness is based on the relative length of the longer and shorter intervals between adjacent observations (in that almost any reasonable measure will be a monotonic function of that ratio).
If the two intervals between adjacent values ($x_{(3)}-x_{(2)}$ and $x_{(2)}-x_{(1)}$) are similar in size you'll "see" symmetry and if they're very different in size you'd interpret that as skewness.
As a measure of symmetry, then, we could take the ratio of the shorter interval to the longer, which will range from 0 - very different sized intervals, from which you'd see asymmetry - to 1 - identical intervals, from which you'd see symmetry.
I could have taken the reciprocal of that (the ratio of longer interval to shorter - which would range from 1 upward) as a measure of skewness, but this has a very highly skewed distribution and isn't so convenient to work with. It's easier to see what's going on with its inverse which has support on the unit interval.
For the purpose of illustration, let's look at three parent distributions; one is skewed (its density is monotonic decreasing, so it's unambiguously skewed), one is symmetric but heavy-tailed (has infinite kurtosis), and one is normal.
I drew 100,000 samples of size 3 from each distribution, and calculated the ratio of the shorter interval to the longer for each sample, then plotted histograms of these 100,000 symmetry measures (I have not identified which is from which distribution, merely numbering them):
As we see, the distributions of this measure are extremely hard to distinguish for those chosen distributions. Imagine you had a sample of size three where the longer interval was 5 times as long as the shorter interval (S/L = 0.2) -- pretty skew, right? Which distribution did it come from? I certainly couldn't do much better than guess at random, and two of the parent distributions these symmetry measures came from are symmetric!
Now let's imagine that the longer interval was about 11% longer than the shorter one (S/L = 0.9) -- very nearly symmetric right? But which of those three distributions did it come from? If I hadn't made the plots I could not hope to know with any degree of confidence which was from the skewed parent.
With extremely skew or extremely heavy-tailed distributions it's somewhat easier to tell the distributions of S/L apart from that for a normal, but it's still not much help, because the distributions aren't typically different enough to reliably tell from a single sample of size 3. If you see a long interval ten or twenty times as long as the short one, you still don't have a reliable indicator of whether it came from a normal distribution or not -- there's still a good chance of seeing a very skewed sample with a normal and of seeing a nearly-symmetric sample with something that very much isn't normal.
I'll leave aside my usual advice about not using the sample to choose your specific hypothesis and end with a simple warning:
Do not use a Mann-Whitney test nor signed rank test with n=3
Because of this difficulty of assessing normality, you will no doubt see people recommending a (Wilcoxon-)Mann-Whitney test or signed rank test as fits the circumstances; such advice is very common but simply disastrous with sample sizes of 3. Besides the obvious lack of power you'd experience with any location test, you lack available significance levels. With two samples of size 3, the smallest possible two sided p-value you can observe is $0.1$. That is, if you're trying to reject p values $\leq 0.05$, you'll never see one, no matter how different the samples are. Unless you're prepared to do a test at the 10% level, you're wasting your time.
The situation is even worse with a signed rank test.
Spoiler - no doubt some people will wish to know the details of the distributions involved. To that end:
One of the distributions is a folded normal($\mu=1,\sigma=1$) parameterized the same way as the Wikipedia page; this one has a monotonic decreasing pdf. One is standard normal. One is a $t$ distribution with 3 degrees of freedom. You can always simulate them to see, but even when you know which is which, regarded as members of location-scale families in each case (so only shape information matters for distinguishing them), they're very hard to tell apart with just three observations. | Assumption of normality in a sample size of 3
Samples of size 3 cannot be easily used to visually assess whether they might be consistent with having come from a normal population.
Tiny samples actually from a normal distribution may be highly a |
53,121 | Assumption of normality in a sample size of 3 | As Norvia points out, there you cannot make this assumption with a sample size of 3. I would not assume normal distribution. The mann-whitney U test is the better choice. | Assumption of normality in a sample size of 3 | As Norvia points out, there you cannot make this assumption with a sample size of 3. I would not assume normal distribution. The mann-whitney U test is the better choice. | Assumption of normality in a sample size of 3
As Norvia points out, you cannot make this assumption with a sample size of 3. I would not assume a normal distribution. The Mann-Whitney U test is the better choice. | Assumption of normality in a sample size of 3
As Norvia points out, you cannot make this assumption with a sample size of 3. I would not assume a normal distribution. The Mann-Whitney U test is the better choice.
53,122 | Assumption of normality in a sample size of 3 | With classic statistics, I would still do the usual test and arrive at large confidence intervals, for example, which still may be informative in postulating say a prior distribution for any subsequent low data exercise. [EDIT] The rationale is also that it offers a parametric opinion to contrast to the nonparametric/bootstrap path discussed below.
One could also assume that each of the three numbers is equally likely to recur in the future (whether this is reasonable depends on the nature of the data).
If so, generate all 27 sets of data (3x3x3). Compute statistics of interest across all sets. Tabulate the statistics into an empirical distribution which may again have some probabilistic value for decision making. [EDIT] What I just described is called a nonparametric bootstrap technique, which usually has a problem in that even small data sets can produce an extremely large number of data sets to examine, but not for the current case. | Assumption of normality in a sample size of 3 | With classic statistics, I would still do the usual test and arrive at large confidence intervals, for example, which still may be informative in postulating say a prior distribution for any subsequen | Assumption of normality in a sample size of 3
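A minimal sketch of that enumeration in R (not part of the original answer; it assumes the three observed values 36, 31, 42 used in an earlier answer):
x <- c(36, 31, 42)
resamples <- expand.grid(x, x, x)       # all 3^3 = 27 equally likely data sets
boot_means <- rowMeans(resamples)       # statistic of interest: the sample mean
table(round(boot_means, 2)) / 27        # its exact bootstrap distribution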
With classic statistics, I would still do the usual test and arrive at large confidence intervals, for example, which still may be informative in postulating say a prior distribution for any subsequent low data exercise. [EDIT] The rationale is also that it offers a parametric opinion to contrast to the nonparametric/bootstrap path discussed below.
One could also assume that each of the three numbers is equally likely to recur in the future (whether this is reasonable depends on the nature of the data).
If so, generate all 27 sets of data (3x3x3). Compute statistics of interest across all sets. Tabulate the statistics into an empirical distribution which may again have some probabilistic value for decision making. [EDIT] What I just described is called a nonparametric bootstrap technique, which usually has a problem in that even small data sets can produce an extremely large number of data sets to examine, but not for the current case. | Assumption of normality in a sample size of 3
With classic statistics, I would still do the usual test and arrive at large confidence intervals, for example, which still may be informative in postulating say a prior distribution for any subsequen |
53,123 | Finding UMVUE for a function of a Bernoulli parameter | Except when $k=1$, given a finite sequence of i.i.d. Bernoulli
$\mathcal B(θ)$ random variables $X_1,X_2,\ldots,X_m$, there exists no
unbiased estimator of $(1−θ)^{1/k}$, when $k$ is a positive integer.
The reason for this impossibility is that only polynomials in $\theta$ of degree at most $m$ can be unbiasedly estimated. Indeed, since $Y_m=m\bar{X}_m$ is a sufficient statistic, we can assume wlog that an unbiased estimator is a function of $Y_m\sim\mathcal Bin(m,p)$, $\delta(Y_m)$, with expectation
$$\sum_{i=0}^m \delta(i) {m \choose i} \theta^i(1-\theta)^{m-i}$$
which is therefore a polynomial in $\theta$ of degree at most $m$.
See Halmos (1946) for a general theory of unbiased estimation that points out the rarity of unbiasedly estimable functions.
However, when changing the perspective, there exists an unbiased estimator of $\theta^a$, $a\in(0,1)$, when considering instead an infinite sequence of i.i.d. Bernoulli $\mathcal B(θ)$ random variables $X_1,X_2,\ldots$ This is a consequence of the notion of a Bernoulli factory.
Given a known function $f:S\mapsto (0,1)$, we consider the problem of
using independent tosses of a coin with probability of heads $\theta$
(where $\theta\in S$ is unknown) to simulate a coin with probability of
heads $f(\theta)$. (Nacu & Peres, 2005)
Mendo (2018) and Thomas and Blanchet show that there exists a Bernoulli factory solution for $θ^a$, $a\in (0,1)$, with constructive arguments. The first author uses the power series decomposition of $f(\theta)$
$$f(\theta)=1-\sum_{k=1}^\infty c_k(1-\theta)^k\qquad c_k\ge 0,\,\sum_{k=1}^\infty c_k=1$$
to construct the sequence$$d_k=\dfrac{c_k}{1-\sum_{\kappa=1}^{k-1}c_\kappa}$$ and the associated algorithm
Set i=1.
Take one Bernoulli $\mathcal B(θ)$ input Xi.
Produce Ui Uniform on (0,1). Let Vi = 1 if Ui < di or Vi = 0 otherwise.
If Vi or Xi are 1, output Y = Xi and finish. Else increase i by 1 and go back to step 2.
For instance, when $f(\theta) =\sqrt\theta$ the coefficients $c_k$ are
$$c_k=\frac{1}{2^{2k−1}k}{2k-2 \choose k−1}$$
Here is an R code illustrating the validity of the method:
ck=exp(lchoose(n=2*(k<-1:1e5)-2,k=k-1)-log(k)-{2*k-1}*log(2))
dk=ck/(1-c(0,cumsum(ck[-1e5])))
be <- function(p){
i=1
while((xi<-runif(1)>p)&(runif(1)>dk[i]))
i=i+1
1-xi}
p=.3                        # example value of theta (not specified in the original code)
out=rep(0,1e5)              # store the simulated Bernoulli(sqrt(p)) outcomes
for(t in 1:1e5)out[t]=be(p)
and the empirical verification that the simulated outcomes are indeed Bernoulli $\mathcal B(\sqrt{\theta})$:
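For a minimal numerical check (added here, not in the original answer), the empirical frequency of successes can be compared with the target probability:
mean(out)    # empirical frequency of 1s from the code above
sqrt(p)      # target probability sqrt(theta)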
As an aside estimating $\theta^{1/k}$ or $(1-\theta)^{1/k}$ has a practical appeal when considering Dorfman's group blood testing or pooling where blood samples of $k$ individuals are mixed together to speed up the confirmation they all are free from a disease. | Finding UMVUE for a function of a Bernoulli parameter | Except when $k=1$, given a finite sequence of i.i.d. Bernoulli
$\mathcal B(θ)$ random variables $X_1,X_2,\ldots,X_m$, there exists no
unbiased estimator of $(1−θ)^{1/k}$, when $k$ is a positive in | Finding UMVUE for a function of a Bernoulli parameter
Except when $k=1$, given a finite sequence of i.i.d. Bernoulli
$\mathcal B(θ)$ random variables $X_1,X_2,\ldots,X_m$, there exists no
unbiased estimator of $(1−θ)^{1/k}$, when $k$ is a positive integer.
The reason for this impossibility is that only polynomials in $\theta$ of degree at most $m$ can be unbiasedly estimated. Indeed, since $Y_m=m\bar{X}_m$ is a sufficient statistic, we can assume wlog that an unbiased estimator is a function of $Y_m\sim\mathcal Bin(m,p)$, $\delta(Y_m)$, with expectation
$$\sum_{i=0}^m \delta(i) {m \choose i} \theta^i(1-\theta)^{m-i}$$
which is therefore a polynomial in $\theta$ of degree at most $m$.
See Halmos (1946) for a general theory of unbiased estimation that points out the rarity of unbiasedly estimable functions.
However, when changing the perspective, there exists an unbiased estimator of $\theta^a$, $a\in(0,1)$, when considering instead an infinite sequence of i.i.d. Bernoulli $\mathcal B(θ)$ random variables $X_1,X_2,\ldots$ This is a consequence of the notion of a Bernoulli factory.
Given a known function $f:S\mapsto (0,1)$, we consider the problem of
using independent tosses of a coin with probability of heads $\theta$
(where $\theta\in S$ is unknown) to simulate a coin with probability of
heads $f(\theta)$. (Nacu & Peres, 2005)
Mendo (2018) and Thomas and Blanchet show that there exists a Bernoulli factory solution for $θ^a$, $a\in (0,1)$, with constructive arguments. The first author uses the power series decomposition of $f(\theta)$
$$f(\theta)=1-\sum_{k=1}^\infty c_k(1-\theta)^k\qquad c_k\ge 0,\,\sum_{k=1}^\infty c_k=1$$
to construct the sequence$$d_k=\dfrac{c_k}{1-\sum_{\kappa=1}^{k-1}c_\kappa}$$ and the associated algorithm
Set i=1.
Take one Bernoulli $\mathcal B(θ)$ input Xi.
Produce Ui Uniform on (0,1). Let Vi = 1 if Ui < di or Vi = 0 otherwise.
If Vi or Xi are 1, output Y = Xi and finish. Else increase i by 1 and go back to step 2.
For instance, when $f(\theta) =\sqrt\theta$ the coefficients $c_k$ are
$$c_k=\frac{1}{2^{2k−1}k}{2k-2 \choose k−1}$$
Here is an R code illustrating the validity of the method:
ck=exp(lchoose(n=2*(k<-1:1e5)-2,k=k-1)-log(k)-{2*k-1}*log(2))
dk=ck/(1-c(0,cumsum(ck[-1e5])))
be <- function(p){
i=1
while((xi<-runif(1)>p)&(runif(1)>dk[i]))
i=i+1
1-xi}
p=.3                        # example value of theta (not specified in the original code)
out=rep(0,1e5)              # store the simulated Bernoulli(sqrt(p)) outcomes
for(t in 1:1e5)out[t]=be(p)
and the empirical verification that the simulated outcomes are indeed Bernoulli $\mathcal B(\sqrt{\theta})$:
As an aside estimating $\theta^{1/k}$ or $(1-\theta)^{1/k}$ has a practical appeal when considering Dorfman's group blood testing or pooling where blood samples of $k$ individuals are mixed together to speed up the confirmation they all are free from a disease. | Finding UMVUE for a function of a Bernoulli parameter
Except when $k=1$, given a finite sequence of i.i.d. Bernoulli
$\mathcal B(θ)$ random variables $X_1,X_2,\ldots,X_m$, there exists no
unbiased estimator of $(1−θ)^{1/k}$, when $k$ is a positive in |
53,124 | Finding UMVUE for a function of a Bernoulli parameter | First of all, I'll just point out that it's not enough that $\sum_i X_i$ is sufficient. We need it to be complete-sufficient. Fortunately we know that $\sum_i X_i$ is also a complete statistic by well known properties of the exponential family of distributions.
As you say, we need an estimator $\delta(\cdot)$ based on the complete-sufficient statistic $T(X)$ which is unbiased, i.e. we need
$$\mathbb{E} [\delta (T(X)) ] = (1-\theta)^{1/k}.$$
One approach to solving this problem is to solve for the function $\delta(\cdot)$.
We know that $\sum_i X_i \sim Binomial(m,\theta)$. Thus, we can write out the expected value of $\delta (\sum_i X_i)$ as:
$$\mathbb{E} [\delta (\sum_i X_i) ] = \sum_{j=0}^m \delta (j) {m \choose j} \theta^j (1-\theta)^{m-j}$$
We want the right hand side to equal $(1-\theta)^{1/k}$, so that $\delta (\cdot)$ is unbiased and hence UMVUE. Hence, you need to solve the following for $\delta (\cdot)$:
$$ \sum_{j=0}^m \delta (j) {m \choose j} \theta^j (1-\theta)^{m-j} = (1-\theta)^{1/k}\tag{1}$$
The answer is not immediately obvious to me, but this is one of two standard approaches when deriving UMVUE's. The other approach is to start with any unbiased estimator and condition on a complete sufficient statistic.
For example, suppose you know that there is an estimator $g(\cdot)$ such that $E[g(\vec{X})] = (1-\theta)^{1/k}$, so that it is unbiased but not UMVUE. Then it follows that $\delta (\sum_i X_i) = E[g(\vec{X})|\sum_i X_i]$ is UMVUE. | Finding UMVUE for a function of a Bernoulli parameter | First of all, I'll just point out that it's not enough that $\sum_i X_i$ is sufficient. We need it to be complete-sufficient. Fortunately we know that $\sum_i X_i$ is also a complete statistic by wel | Finding UMVUE for a function of a Bernoulli parameter
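As a quick illustration (not part of the original answer): for the special case $k=1$ the target is $1-\theta$ and equation (1) is solved by $\delta(j) = 1 - j/m$, since $\mathbb{E}[1 - \frac{1}{m}\sum_i X_i] = 1-\theta$; for $k>1$ the right-hand side is not a polynomial in $\theta$, which is why no exact unbiased solution exists (as noted in the previous answer).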
First of all, I'll just point out that it's not enough that $\sum_i X_i$ is sufficient. We need it to be complete-sufficient. Fortunately we know that $\sum_i X_i$ is also a complete statistic by well known properties of the exponential family of distributions.
As you say, we need an estimator $\delta(\cdot)$ based on the complete-sufficient statistic $T(X)$ which is unbiased, i.e. we need
$$\mathbb{E} [\delta (T(X)) ] = (1-\theta)^{1/k}.$$
One approach to solving this problem is to solve for the function $\delta(\cdot)$.
We know that $\sum_i X_i \sim Binomial(m,\theta)$. Thus, we can write out the expected value of $\delta (\sum_i X_i)$ as:
$$\mathbb{E} [\delta (\sum_i X_i) ] = \sum_{j=0}^m \delta (j) {m \choose j} \theta^j (1-\theta)^{m-j}$$
We want the right hand side to equal $(1-\theta)^{1/k}$, so that $\delta (\cdot)$ is unbiased and hence UMVUE. Hence, you need to solve the following for $\delta (\cdot)$:
$$ \sum_{j=0}^m \delta (j) {m \choose j} \theta^j (1-\theta)^{m-j} = (1-\theta)^{1/k}\tag{1}$$
The answer is not immediately obvious to me, but this is one of two standard approaches when deriving UMVUE's. The other approach is to start with any unbiased estimator and condition on a complete sufficient statistic.
For example, suppose you know that there is an estimator $g(\cdot)$ such that $E[g(\vec{X})] = (1-\theta)^{1/k}$, so that it is unbiased but not UMVUE. Then it follows that $\delta (\sum_i X_i) = E[g(\vec{X})|\sum_i X_i]$ is UMVUE. | Finding UMVUE for a function of a Bernoulli parameter
First of all, I'll just point out that it's not enough that $\sum_i X_i$ is sufficient. We need it to be complete-sufficient. Fortunately we know that $\sum_i X_i$ is also a complete statistic by wel |
53,125 | stochastic vs. deterministic trend in time series | Deterministic Trend
$$
y_t = \beta_0 + \beta_1 t + \epsilon_t
$$
where $\{\epsilon_t\}$ is white noise, for simplicity. Same discussion applies to the case where $\{\epsilon_t\}$ is a covariance-stationary process (e.g. ARIMA with $d = 0$).
The process is random fluctuations around a deterministic linear trend $\beta_0 + \beta_1 t$. Hence the terminology "deterministic trend".
Such processes are also called trend-stationary. If you remove the linear trend, you recover the stationary process $\{\epsilon_t\}$.
Stochastic Trend
$$
y_t = \beta_0 + \beta_1 t + \eta_t
$$
where $\{\eta_t\}$ is a random walk, for simplicity. Same discussion applies to the case where $\{\eta_t\}$ is an $I(1)$ process (e.g. ARIMA with $d = 1$).
Equivalently,
$$
y_t = y_0 + \beta_0 + \beta_1 t + \sum_{s = 1}^{t} \epsilon_s
$$
where $\{\epsilon_t\}$ is the white noise driving the random walk $\{\eta_t\}$.
The "stochastic trend" terminology refers to $\eta_t$. The random walk is a highly persistent process, giving its sample path the appearance of a "trend".
Such processes are also called difference-stationary. If you take first-difference, you recover the stationary process $\{\epsilon_t\}$, i.e.
$$
\Delta y_t = \beta_1 + \epsilon_t,
$$
which is the same series (random walk with drift) from your second link.
Visual Similarity
You can observe via simulation that the sample paths from these two models can be visually similar---e.g. choose $\beta_1=1$ and $\epsilon_t \stackrel{i.i.d.}{\sim}(0,1)$.
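For example, a short simulation sketch (not from the original answer):
set.seed(123)
n <- 200
eps <- rnorm(n)
det_trend   <- 1*(1:n) + eps            # deterministic trend: beta1*t + stationary noise
stoch_trend <- 1*(1:n) + cumsum(eps)    # stochastic trend: beta1*t + random walk
matplot(cbind(det_trend, stoch_trend), type = "l", lty = 1,
        xlab = "t", ylab = "y")         # the two sample paths look broadly similar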
This is because the linear trend $\beta_0 + \beta_1 t$ dominates. More precisely, for both models
$$
\frac{y_t}{t} = \beta_1 + o_p(1).
$$
Only the slope term $\beta_1$ is not negligible in the limit. For the deterministic trend case, it is clear that $\frac{\epsilon_t}{t} = o_p(1)$.
For the stochastic trend case, $\frac{\eta_t}{t} = o_p(1)$ because $\frac{\eta_t}{\sqrt{t}}$ converges in distribution to a normal distribution (Central Limit Theorem).
Statistical Testing
The visual similarity of sample paths motivates the problem of statistically distinguishing these two models. This is the purpose of unit root tests---e.g. the (Augmented) Dickey-Fuller test, which is historically the first such test.
For the ADF test, you basically take the detrended series $\tilde{y}_t$ (residuals from regressing $y_t$ on $1$ and $t$), run the regression
$$
\Delta \tilde{y}_t = \alpha \tilde{y}_{t-1} + \tilde{\epsilon}_t,
$$
and consider the $t$-statistic for $\alpha = 0$. If the $t$-statistic is sufficiently negative (below the Dickey-Fuller critical value), you reject the null of stochastic trend.
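A rough numerical sketch of this procedure (not from the original answer; the resulting $t$-statistic must be compared with Dickey-Fuller critical values rather than the usual $t$ tables, and a real application would also add lag augmentation):
set.seed(123)
n <- 200
y <- 0.5 + 1*(1:n) + rnorm(n)            # a trend-stationary example series
y_til <- residuals(lm(y ~ seq_len(n)))   # detrended series
dy <- diff(y_til)
lag_y <- y_til[-n]
summary(lm(dy ~ 0 + lag_y))              # t-statistic on the lagged level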
The empirical reasoning behind the ADF test is simple. Even though the sample paths themselves are similar, the detrended series would look quite different. Under trend-stationarity, the detrended series would appear stationary. On the other hand, if a difference-stationary model is mistakenly detrended, the detrended series would not appear stationary. | stochastic vs. deterministic trend in time series | Deterministic Trend
$$
y_t = \beta_0 + \beta_1 t + \epsilon_t
$$
where $\{\epsilon_t\}$ is white noise, for simplicity. Same discussion applies to the case where $\{\epsilon_t\}$ is a covariance-stati | stochastic vs. deterministic trend in time series
Deterministic Trend
$$
y_t = \beta_0 + \beta_1 t + \epsilon_t
$$
where $\{\epsilon_t\}$ is white noise, for simplicity. Same discussion applies to the case where $\{\epsilon_t\}$ is a covariance-stationary process (e.g. ARIMA with $d = 0$).
The process is random fluctuations around a deterministic linear trend $\beta_0 + \beta_1 t$. Hence the terminology "deterministic trend".
Such processes are also called trend-stationary. If you remove the linear trend, you recover the stationary process $\{\epsilon_t\}$.
Stochastic Trend
$$
y_t = \beta_0 + \beta_1 t + \eta_t
$$
where $\{\eta_t\}$ is a random walk, for simplicity. Same discussion applies to the case where $\{\eta_t\}$ is an $I(1)$ process (e.g. ARIMA with $d = 1$).
Equivalently,
$$
y_t = y_0 + \beta_0 + \beta_1 t + \sum_{s = 1}^{t} \epsilon_s
$$
where $\{\epsilon_t\}$ is the white noise driving the random walk $\{\eta_t\}$.
The "stochastic trend" terminology refers to $\eta_t$. The random walk is a highly persistent process, giving its sample path the appearance of a "trend".
Such processes are also called difference-stationary. If you take first-difference, you recover the stationary process $\{\epsilon_t\}$, i.e.
$$
\Delta y_t = \beta_1 + \epsilon_t,
$$
which is the same series (random walk with drift) from your second link.
Visual Similarity
You can observe via simulation that the sample paths from these two models can be visually similar---e.g. choose $\beta_1=1$ and $\epsilon_t \stackrel{i.i.d.}{\sim}(0,1)$.
This is because the linear trend $\beta_0 + \beta_1 t$ dominates. More precisely, for both models
$$
\frac{y_t}{t} = \beta_1 + o_p(1).
$$
Only the slope term $\beta_1$ is not negligible in the limit. For the deterministic trend case, it is clear that $\frac{\epsilon_t}{t} = o_p(1)$.
For the stochastic trend case, $\frac{\eta_t}{t} = o_p(1)$ because $\frac{\eta_t}{\sqrt{t}}$ converges in distribution to a normal distribution (Central Limit Theorem).
Statistical Testing
The visual similarity of sample paths motivates the problem of statistically distinguishing these two models. This is the purpose of unit root tests---e.g. the (Augmented) Dickey-Fuller test, which is historically the first such test.
For the ADF test, you basically take the detrended series $\tilde{y}_t$ (residuals from regressing $y_t$ on $1$ and $t$), run the regression
$$
\Delta \tilde{y}_t = \alpha \tilde{y}_{t-1} + \tilde{\epsilon}_t,
$$
and consider the $t$-statistic for $\alpha = 0$. If the $t$-statistic is sufficiently negative (below the Dickey-Fuller critical value), you reject the null of stochastic trend.
The empirical reasoning behind the ADF test is simple. Even though the sample paths themselves are similar, the detrended series would look quite different. Under trend-stationarity, the detrended series would appear stationary. On the other hand, if a difference-stationary model is mistakenly detrended, the detrended series would not appear stationary. | stochastic vs. deterministic trend in time series
Deterministic Trend
$$
y_t = \beta_0 + \beta_1 t + \epsilon_t
$$
where $\{\epsilon_t\}$ is white noise, for simplicity. Same discussion applies to the case where $\{\epsilon_t\}$ is a covariance-stati |
53,126 | Sample size of a "continuous" experimental unit/population instead of "discrete" | All sample size calculations are built on top of a proposed inference that will be made from the data. This might be a confidence interval for an unknown parameter, or a hypothesis test for a set of hypotheses, or a Bayesian posterior inference, etc. Whatever the inference being made, there will be some appropriate measure of how "accurate" the inference is, and this accuracy will be a function of the sample size. For example, if you are computing a confidence interval then the accuracy is usually measured by the width of the interval (relative to an unknown standard deviation) at a given confidence level.
If you want to formalise a sample size calculation for a continuous measure of size (e.g., the weight of the sampled soil), you will need to formulate the inference that is being made with the sample, and write out the accuracy of the inference as a function of the continuous size measure. So long as you can write out the accuracy of the proposed inference as a function of the size, you can determine the minimum size required to obtain a stipulated minimum level of accuracy. This can be done regardless of whether the sample size is specified by a discrete unit or a continuous measure.
Example: Suppose you have an experiment where you will sample a weight of $w$ grams of soil and determine the proportion of some mineral in that sample. Suppose you are willing to stipulate that the sample proportion $p$ is related to the true proportion $\theta$ by the sampling distribution:
$$p \sim \text{N} \Bigg( \theta, \frac{\theta (1-\theta)}{w} \Bigg).$$
In this case you might make an inference about the true proportion $\theta$ using the confidence interval:
$$\text{CI}_\theta(1-\alpha) = \Bigg[ p \pm z_{\alpha/2} \sqrt{\frac{p (1-p)}{w}} \Bigg].$$
The length of this confidence interval is:
$$L= 2 z_{\alpha/2} \sqrt{\frac{p (1-p)}{w}}.$$
A higher accuracy for a confidence interval requires a smaller length for the interval (i.e., a narrower interval is a more accurate inference). Thus, to stipulate the minimum required accuracy for our inference, we would stipulate some maximum length $L_*$ that we are willing to accept. For a given value $\alpha$ and a given sample proportion $p$, getting this stipulated length requires us to set the sample weight to:
$$w = 4 z_{\alpha/2}^2 \frac{p (1-p)}{L_*^2}.$$
Note that this formula will generally yield a non-integer value, which is okay in the case where our sample weight is continuous. As you can see, there is nothing fundamentally different in this computation to the case where we have a discrete sample size. (The only difference here is that we do not need to round the required sample size up to the next integer at the end of the computation.) What is needed is for us to be able to write out some measure of the accuracy of the inference as a function of the sample weight, and then find the minimum weight that gives some stipulated minimum accuracy. | Sample size of a "continuous" experimental unit/population instead of "discrete" | All sample size calculations are built on top of a proposed inference that will be made from the data. This might be a confidence interval for an unknown parameter, or a hypothesis test for a set of | Sample size of a "continuous" experimental unit/population instead of "discrete"
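A small numerical illustration of the final formula (not part of the original answer; the input values are arbitrary):
z <- qnorm(0.975)            # alpha = 0.05
p <- 0.2                     # assumed sample proportion of the mineral
L_star <- 0.05               # maximum acceptable CI length
w <- 4*z^2*p*(1-p)/L_star^2
w                            # required sample weight; non-integer values are fine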
All sample size calculations are built on top of a proposed inference that will be made from the data. This might be a confidence interval for an unknown parameter, or a hypothesis test for a set of hypotheses, or a Bayesian posterior inference, etc. Whatever the inference being made, there will be some appropriate measure of how "accurate" the inference is, and this accuracy will be a function of the sample size. For example, if you are computing a confidence interval then the accuracy is usually measured by the width of the interval (relative to an unknown standard deviation) at a given confidence level.
If you want to formalise a sample size calculation for a continuous measure of size (e.g., the weight of the sampled soil), you will need to formulate the inference that is being made with the sample, and write out the accuracy of the inference as a function of the continuous size measure. So long as you can write out the accuracy of the proposed inference as a function of the size, you can determine the minimum size required to obtain a stipulated minimum level of accuracy. This can be done regardless of whether the sample size is specified by a discrete unit or a continuous measure.
Example: Suppose you have an experiment where you will sample a weight of $w$ grams of soil and determine the proportion of some mineral in that sample. Suppose you are willing to stipulate that the sample proportion $p$ is related to the true proportion $\theta$ by the sampling distribution:
$$p \sim \text{N} \Bigg( \theta, \frac{\theta (1-\theta)}{w} \Bigg).$$
In this case you might make an inference about the true proportion $\theta$ using the confidence interval:
$$\text{CI}_\theta(1-\alpha) = \Bigg[ p \pm z_{\alpha/2} \sqrt{\frac{p (1-p)}{w}} \Bigg].$$
The length of this confidence interval is:
$$L= 2 z_{\alpha/2} \sqrt{\frac{p (1-p)}{w}}.$$
A higher accuracy for a confidence interval requires a smaller length for the interval (i.e., a narrower interval is a more accurate inference). Thus, to stipulate the minimum required accuracy for our inference, we would stipulate some maximum length $L_*$ that we are willing to accept. For a given value $\alpha$ and a given sample proportion $p$, getting this stipulated length requires us to set the sample weight to:
$$w = 4 z_{\alpha/2}^2 \frac{p (1-p)}{L_*^2}.$$
Note that this formula will generally yield a non-integer value, which is okay in the case where our sample weight is continuous. As you can see, there is nothing fundamentally different in this computation to the case where we have a discrete sample size. (The only difference here is that we do not need to round the required sample size up to the next integer at the end of the computation.) What is needed is for us to be able to write out some measure of the accuracy of the inference as a function of the sample weight, and then find the minimum weight that gives some stipulated minimum accuracy. | Sample size of a "continuous" experimental unit/population instead of "discrete"
All sample size calculations are built on top of a proposed inference that will be made from the data. This might be a confidence interval for an unknown parameter, or a hypothesis test for a set of |
53,127 | Sample size of a "continuous" experimental unit/population instead of "discrete" | Peter cannot inflate his effective sample size for the purpose of estimating treatment effects just by repeatedly subsampling the same experimental units – this would be the most egregious form of 'pseudo-replication'.
Sample size in the context of a designed experiment is set by the randomization design – since the different samples within the same plot could not possibly be assigned to receive different treatments, they cannot be considered separate experimental units. It doesn't matter, for the purpose of sample size, how much soil Tom or Peter collect from each unit – what matters is that all the soil in that unit received the same treatment – that all of it was required by the experimental design to receive the same treatment.
Tom and Peter can potentially take different measurements of the same experimental unit, and one might get "better" measurements in some sense than the other – maybe measurement error is reduced by using a larger volume of soil, or maybe it's reduced by averaging samples from several points within the plot – but that's an issue of reducing the size of the error variance (assumed the same for each plot) not inflating the sample size.
A more precise/reliable/stable measurement method could thereby still reduce the standard errors of effect estimates, but not through changing the sample size. The sample size is, again, fixed by the design of the randomization scheme – each unit that could be independently assigned to a different treatment is one experimental unit and adds one to the total sample size. | Sample size of a "continuous" experimental unit/population instead of "discrete" | Peter cannot inflate his effective sample size for the purpose of estimating treatment effects just by repeatedly subsampling the same experimental units – this would be the most egregious form of 'ps | Sample size of a "continuous" experimental unit/population instead of "discrete"
Peter cannot inflate his effective sample size for the purpose of estimating treatment effects just by repeatedly subsampling the same experimental units – this would be the most egregious form of 'pseudo-replication'.
Sample size in the context of a designed experiment is set by the randomization design – since the different samples within the same plot could not possibly be assigned to receive different treatments, they cannot be considered separate experimental units. It doesn't matter, for the purpose of sample size, how much soil Tom or Peter collect from each unit – what matters is that all the soil in that unit received the same treatment – that all of it was required by the experimental design to receive the same treatment.
Tom and Peter can potentially take different measurements of the same experimental unit, and one might get "better" measurements in some sense than the other – maybe measurement error is reduced by using a larger volume of soil, or maybe it's reduced by averaging samples from several points within the plot – but that's an issue of reducing the size of the error variance (assumed the same for each plot) not inflating the sample size.
A more precise/reliable/stable measurement method could thereby still reduce the standard errors of effect estimates, but not through changing the sample size. The sample size is, again, fixed by the design of the randomization scheme – each unit that could be independently assigned to a different treatment is one experimental unit and adds one to the total sample size. | Sample size of a "continuous" experimental unit/population instead of "discrete"
Peter cannot inflate his effective sample size for the purpose of estimating treatment effects just by repeatedly subsampling the same experimental units – this would be the most egregious form of 'ps |
53,128 | Asking nature a single question or a carefully thought out questionnaire | Think of a simple example, and since RA Fisher's primary experience with experimental design was at Rothamsted Experimental station, let us use an agricultural example.
Say you are interested in comparing the effectiveness of various fertilizers. Using the philosophy of varying only one circumstance at a time, you design your experiment at a single location, with only one cultivar, with all experimental plots on the same soil type, and with all other production variables, such as plant spacing, held equal. Does this sound like a good plan?
The other, questionnaire philosophy, leads you to an experiment spread across multiple locations, with different cultivars, and with soil types, plant spacing and other production variables varied in a systematic way. This second experiment lets you learn not only which fertilizer is best under some highly specific conditions, but how fertilizer effectiveness changes with varying conditions. It seems more useful.
53,129 | Asking nature a single question or a carefully thought out questionnaire | Just to add to @kjetilbhalvorsen's answer: the insight Fisher had, or what came out of it, is factorial design. This often-cited excerpt comes from The Arrangement of Field Experiments, p. 10 (or 511). Though it may seem like common sense to us now, Fisher notes that most experiments done at that time involved single factors. In Terry Speed's introduction to this work, he notes:
So that the reader can better appreciate Fisher's remark about asking Nature few questions, it is worth quoting from Russell (1926, p.989, our italics): "A committee or an investigator considering a scheme of experiments should first of all ask whether each experiment or question is framed in such a way that a definite answer can be given. The chief requirement is simplicity: only one question should be asked at a time."
Fisher proposed a field experiment design that can assess the effect of early/late addition of fertilizer containing sulphate or muriate, along with other factors, using only 96 plots, as compared with the 216 plots needed if we were to conduct the sulphate/muriate experiments separately.
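To make the factorial idea concrete, here is a minimal R sketch (the factors and levels below are made up for illustration and are not Fisher's actual Rothamsted layout):

# Crossing three two-level factors gives every treatment combination in a
# single experiment; each plot then carries information about all factors
# and their interactions.
trt <- expand.grid(
  fertilizer = c("sulphate", "muriate"),
  timing     = c("early", "late"),
  dung       = c("none", "applied")
)
nrow(trt)  # 8 combinations per replicate block

# With 4 replicate blocks this is 32 plots in total, versus running three
# separate one-factor-at-a-time experiments, which needs more plots and
# tells us nothing about interactions.
design <- trt[rep(seq_len(nrow(trt)), times = 4), ]
nrow(design)  # 32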
Of note is that factorial experiments opened the path to complex experimental designs and to the analysis of confounding effects.
53,130 | Empirical results of Machine Learning/Deep Learning in practice | I don't think it would be possible to answer this question with respect to proprietary models used by private enterprise. But there is a vein of scholarship that focuses on flawed practices, such as in this paper.
Zachary C. Lipton, Jacob Steinhardt "Troubling Trends in Machine Learning Scholarship"
Collectively, machine learning (ML) researchers are engaged in the creation and dissemination of knowledge about data-driven algorithms. In a given paper, researchers might aspire to any subset of the following goals, among others: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build a working system that has high predictive accuracy. While determining which knowledge warrants inquiry may be subjective, once the topic is fixed, papers are most valuable to the community when they act in service of the reader, creating foundational knowledge and communicating as clearly as possible.
Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship: (i) failure to distinguish between explanation and speculation; (ii) failure to identify the sources of empirical gains, e.g., emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning; (iii) mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g., by confusing technical and non-technical concepts; and (iv) misuse of language, e.g., by choosing terms of art with colloquial connotations or by overloading established technical terms.
While the causes behind these patterns are uncertain, possibilities include the rapid expansion of the community, the consequent thinness of the reviewer pool, and the often-misaligned incentives between scholarship and short-term measures of success (e.g., bibliometrics, attention, and entrepreneurial opportunity). While each pattern offers a corresponding remedy (don't do it), we also discuss some speculative suggestions for how the community might combat these trends.
53,131 | Can the difference of random variables be uniform distributed? [duplicate] | Consider the following example:
$$
X\sim\text{Unif}(0, 1) \\
Y = 1-X
$$
$X$ and $Y$ are identically distributed as the standard uniform distribution, and $X-Y = 2X-1$, so $X-Y\sim\text{Unif}(-1, 1)$.
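As a quick sanity check (a simulation sketch, not part of the argument), the claimed distribution can be verified numerically in R:

# With X ~ Unif(0, 1) and Y = 1 - X, the difference X - Y = 2X - 1 should
# behave like a Unif(-1, 1) draw.
set.seed(1)
x <- runif(1e5)
y <- 1 - x
d <- x - y
range(d)                                # close to (-1, 1)
hist(d, breaks = 50, freq = FALSE)      # flat histogram on (-1, 1)
ks.test(d, "punif", min = -1, max = 1)  # no evidence against Unif(-1, 1)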
Note that this example relied on $X$ and $Y$ being dependent, identically distributed random variables. It is impossible for two independent, identically distributed random variables $X$ and $Y$ to have a difference that follows a uniform distribution. Clearly for $X-Y$ to be uniform we would need $X$ and $Y$ to be bounded, continuous random variables (assume bounds $\underline{x}$ and $\overline{x}$). However, since $X$ and $Y$ are continuous, their difference will have density 0 at its bounds $\overline{x}-\underline{x}$ and $\underline{x}-\overline{x}$, leading us to conclude that $X-Y$ does not follow a uniform distribution.
53,132 | Can the difference of random variables be uniform distributed? [duplicate] | Independent and identically distributed. If you ask for IID $X$ and $Y$, as others have noted this is not possible. See the answers to this question.
If you are happy to drop either independence between $X$ and $Y$ or them having the same distribution, then there's hope.
Same distribution. If you allow for dependent but identically distributed variables, then you can build the joint distribution of $X$ and $Y$ so that the marginal distributions $p_Y(y)=\int p_{X,Y}(x,y)\,dx$ and $p_X(x)=\int p_{X,Y}(x,y)\,dy$ are the same and the integral over diagonal lines $p_Z(z)=\int p_{X,Y}(y+z,y)\,dy$ is constant in an interval. One such example is:
$$p_{X,Y}(x,y)=\begin{cases}\frac12 & \text{if $|x|+|y|<1$}\\0 & \text{otherwise}\end{cases}.$$
The marginal distribution of $X$ is $p_X(x) = 1 - |x|$ for $x\in[-1,1]$, which is the same as that of $Y$, while $Z \sim \mathcal{U}(-1,1)$.
A simple way to build such distributions for $X$ and $Y$ is to choose $Z \sim \mathcal{U}(-a,a)$ and $W$ distributed according to a distribution $D'$; then $X=W+Z/2$ and $Y=W-Z/2$ have the chosen properties (their difference is uniform and they are distributed according to a common distribution $D$ whose pdf is the convolution of the pdf of $D'$ and a rect). For example, for Gaussian $W$ the joint density is concentrated in a diagonal band ($|x-y|<a$) with matching marginals; a small simulation of this construction is sketched below.
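A minimal R sketch of this construction (Gaussian $W$ is just one choice of $D'$):

# Z ~ Unif(-a, a), W ~ N(0, 1); X = W + Z/2 and Y = W - Z/2.
# Then X - Y = Z exactly (uniform), while X and Y share the same marginal.
set.seed(42)
a <- 1; n <- 1e5
z <- runif(n, -a, a)
w <- rnorm(n)
x <- w + z / 2
y <- w - z / 2
all.equal(x - y, z)        # TRUE: the difference is uniform by construction
summary(x); summary(y)     # near-identical marginal summaries
qqplot(x, y); abline(0, 1) # points hug the identity line: same marginal distribution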
Independent. If $X$ and $Y$ are independent, then $Z$'s pdf must be the convolution of that of $X$ and $-Y$. As noted by josliber, $X$ and $Y$ cannot both be continuous variables, otherwise the convolution of their pdfs would be continuous and would need to approach $0$ at the boundaries of the support of $Z$. This limit can be overcome if the pdf of either variable is not a function but a distribution (in the mathematical sense). For example, the convolution of a rect (pdf of a uniform distribution) and an appropriately chosen train of Dirac deltas (pdf of a discrete variable) can be a rect. One such example is when $Y \sim \mathcal{U}(-a,a)$ and $X = b + 2\,a\,N$ with $N \sim \mathcal{U}\{c,d\}$ (a discrete uniform distribution).
53,133 | Why decision tree handle unbalanced data well? | Decision trees do not always handle unbalanced data well.
If there is a relatively obvious partition of the sample space that contains a high proportion of minority-class instances, decision trees can probably find it, but that is far from a certainty.
For example, if the minority class is strongly associated with multiple features interacting with each other, it is rather demanding for a tree to recognise the pattern; even if it does, it will probably be a rather deep and unstable tree that will be prone to over-fitting; pruning the tree will not immediately solve the problem because it will directly affect the ability of the tree to utilise those interactions.
Generalisations of decision tree algorithms such as random forests and gradient boosting machines offer a much better alternative in terms of stability without sacrificing performance. Similarly, using a GAM with an interaction spline can provide another potentially viable alternative.
53,134 | Why decision tree handle unbalanced data well? | I would like to add something to the previous answer - though it's already good (+1).
Decision tree implementations normally use the Gini index or entropy for finding splits. These are functions that are maximized when the classes in a node are perfectly balanced - and therefore reward splits that move away from this balance. This means that the splits are always done assuming that the class distribution is $1/K$ (or 50-50 in the binary case), and this becomes particularly problematic when classes are VERY unbalanced. In those cases, since classification is done by majority voting, most of the leaf nodes will contain majority class elements and the performance will be sub-optimal.
There are a number of papers that discuss this issue, but I really suggest reading Using Random Forest to Learn Imbalanced Data, which proposes the use of a weighted Gini (or entropy) criterion to take the class distribution into account, or using a mixture of under- and over-sampling of the classes when bagging decision trees.
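To make the weighting idea concrete, here is a minimal R sketch of a class-weighted Gini impurity (an illustration of the general idea, not the exact criterion from the paper):

# p: class proportions in a node; w: class weights (e.g. inverse class frequencies).
# With w = 1 for every class this reduces to the usual Gini index.
weighted_gini <- function(p, w = rep(1, length(p))) {
  pw <- (w * p) / sum(w * p)   # re-weighted class shares
  1 - sum(pw^2)
}

p <- c(0.95, 0.05)             # a node from a highly unbalanced problem
weighted_gini(p)               # plain Gini: the node already looks nearly pure
weighted_gini(p, w = 1 / p)    # inverse-frequency weights: far from pure, so
                               # splits that separate the minority class are rewarded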
Indeed, standard trees are mathematically not constructed to deal particularly well with unbalanced data, and some adjustments are needed (to the voting, splitting, or sampling) - which is also why many implementations allow the use of class weights.
53,135 | disadvantage of bootstrap (from wiki) | It's a wiki; read all wikis with a grain of salt. You should flag the passage as unclear, opinion-based, or needing a citation, because all of those are (partly) true. The recent influx of people in statistics who feel that broad statements can be made and parroted without formal proof needs to be reined in (I include myself in that statement).
The bootstrap does not require that samples are independent. There are special bootstrapping procedures for dependent data (for example, block bootstraps for time series or cluster bootstraps for grouped data), and some of these are more efficient than an unconditional bootstrap.
The article makes the critical error of conflating the procedure of generating bootstrap replicates of a dataset (which has no assumptions whatsoever) with obtaining bootstrap intervals/p-values for a test statistic. The BCa, quantile, normal percentile, and double bootstrap methods are just a subset of what's out there, and all of them operate on already-bootstrapped replicates of the study data. Basically, there is no one method for getting CIs and p-values, and the weirdness ends up being more a function of the statistic chosen than an attribute of the data themselves.
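The two steps can be separated explicitly; here is a minimal R sketch (the percentile interval is chosen purely for illustration):

# Step 1: generate bootstrap replicates of a statistic - just resampling,
# no distributional assumptions. Step 2: turn the replicates into an interval;
# this is where the method choices (percentile, BCa, ...) actually differ.
set.seed(1)
x <- rexp(50)                                                        # any sample; skewed on purpose
boot_medians <- replicate(2000, median(sample(x, replace = TRUE)))   # step 1
quantile(boot_medians, c(0.025, 0.975))                              # step 2: percentile CI for the median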
53,136 | disadvantage of bootstrap (from wiki) | This may be related to the fact that the bootstrap is sometimes loosely presented as an "assumption free" procedure that can be used to replace other common tests when their required assumptions (e.g. normality) are not met. However, bootstrapping is relevant only in certain situations, and it raises assumptions of its own that also have to be met.
53,137 | How to apply PCA on 3 dimensional image data in python | In general, your approach may work, and it might even give you something that works somewhat well. However, I would strongly advise against it, or only use something like this as a first step to just get a feel for the problem.
Think about it this way: If you just shift one of the images one pixel to the left, how much would the vector representing that image change? How well could a PCA identify that these two images are in fact the same image, except for this 1-pixel shift?
It is better to use an approach that is somewhat shift-invariant (and, if possible, rotation-invariant). Here are some ideas (a small code sketch of the patch idea follows the list):
You could use PCA to reduce the color space. Often the full 3D RGB space is not required. Instead of using the PCA on all pixels of the images, collect all pixels as individual 3D vectors. Then run the PCA on those. The resulting factors tell you which colors are actually representative of your images. However, this reduces the dataset to at best a third of its size: in that case you are reducing to grayscale, but you are retaining as much information as possible.
Use a method similar to one used by convolutional networks. Split each image into small (overlapping) patches of $K\times K$ pixels. Run the PCA on those patches. The resulting factors then represent typical features found in your image and are much more informative than just running a PCA on the complete image. Experiment with the size of the patches and the amount of overlap to see what gives you good result. If you know, for example, how a cancerous region looks, you could look at the resulting factors to see if any of those represent something you might recognize. Or you can drop patches that you recognize to be meaningless (e.g. patches which contain mostly uniform areas etc).
You can test whether the patches work better if you run them on independent colors (separate patches for each color, with a different component structure), or if you combine the colors first.
Mix, combine and stack these methods. If you have found a good size and overlap of the patches, but you have not reduced your data enough, then reduce the data using those patches. Because these patches represent areas of your images, you can still interpret them as 2D (or 3D if you have separate patches for each color) data. Repeat the process and create patches of patches. At this point, you are essentially building some form of convolutional neural network.
Although it might seem counterintuitive, in many cases it is helpful to first blow up your dataset (i.e. generate artificial data based on the data you have). The images you have may be very clean, all from the same angle, centered around the possible cancerous region etc. This may or may not represent the actual situation where you later want to use your data. If it doesn't, then you will not train the SVM (or the PCA) well for the task at hand. Generate additional images by adding noise, shifting them, rotating them a little etc. Then run the PCA and the SVM on increased dataset. This can greatly improve the final classifier.
If you want to get one step further, you should look at more powerful techniques of dimensionality reduction. A PCA always computes a linear reduction. A better method is auto-encoder networks, which can be seen as a non-linear generalization of PCA. There are also convolutional versions of auto-encoder networks, which give you the shift-invariance that you usually need. Also have a look at denoising auto-encoders, because these perform much better than naive auto-encoders in many cases. You can directly feed the (encoded) output from an auto-encoder to an SVM for classification. Or you can use the auto-encoder in combination with a classical neural network, which is essentially a method for building deep neural networks.
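Here is a rough sketch of the patch idea in R (used only for illustration; the question is about Python, but the mechanics are identical, and the image list below is a made-up stand-in):

# Cut each grayscale image into small overlapping patches and run PCA on the
# collection of patches rather than on whole images.
extract_patches <- function(img, k = 8, stride = 4) {
  rows <- seq(1, nrow(img) - k + 1, by = stride)
  cols <- seq(1, ncol(img) - k + 1, by = stride)
  patch_list <- unlist(
    lapply(rows, function(i)
      lapply(cols, function(j) as.vector(img[i:(i + k - 1), j:(j + k - 1)]))),
    recursive = FALSE
  )
  do.call(rbind, patch_list)               # one row per k x k patch
}

set.seed(1)
images <- replicate(10, matrix(runif(64 * 64), 64, 64), simplify = FALSE)  # toy images
patch_mat <- do.call(rbind, lapply(images, extract_patches))
pca <- prcomp(patch_mat, center = TRUE)
dim(pca$rotation)                          # each column reshapes to a k x k "feature" patch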
53,138 | How to apply PCA on 3 dimensional image data in python | If your final goal is using an SVM, the problem is the number of data points rather than the number of dimensions. See the following question.
Can support vector machine be used in large data?
In the real world, an SVM will not work very well if you have ~10K data points or more.
Your problem is a standard image classification problem, so using a convolutional neural network (CNN) may be better. There are many very mature algorithms and packages available for that.
Here is an example.
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
53,139 | P Wasserstein distance in Python | This is implemented in the POT: Python Optimal Transport package, for samples (or, generally, discrete measures): use ot.wasserstein_1d.
If you want to do it for weighted samples (or general discrete distributions with finite support), you can provide the a and b arguments.
53,140 | Controlling variables in causal diagrams | This is more of a comment, in response to comments to Ed Rigdon's answer:
I understand that I shouldn’t control for B because it’s a collider. I want to know how the diagram looks when a variable is controlled for. Then I would be able to see everything explicitly
A good way to do this is by drawing the conditional graph by a process called graphical moralization. The steps are very simple (this is quoted almost verbatim from Greenland and Pearl, 2017) where I have just changed the variable names to match the ones in the question:
If B is a collider, join (marry) all pairs of parents of B by undirected arcs (here, a dashed line will be used).
Similarly, if A is an ancestor of B and a collider, join all pairs of parents of A by undirected arcs. [obviously this is not the case here]
Erase B and all arcs connecting B to other variables.
So we arrive at the following graph:
Note that this is not a DAG because of the presence of the dashed line. To continue using DAG theory we must retain B and use the reasoning in Noah's answer where the backdoor path is shown as $X \leftarrow A \rightarrow \fbox B \leftarrow C \rightarrow Y$
Finally I often find it instructive to do simple simulation so here I simulate data according to the original DAG and show what happens when controlling for the collider:
> set.seed(15)
> N <- 100
> A <- rnorm(N, 10, 2)
> C <- rnorm(N, 5, 1)
> B <- A + C + rnorm(N)
> X <- A + B + rnorm(N)
> Y <- X + C + rnorm(N)
> m0 <- lm(Y ~ X)
> summary(m0)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.28681 0.87685 3.748 0.000301 ***
X 1.06439 0.03411 31.203 < 2e-16 ***
So we obtain good estimates for the effect of X. However:
> m1 <- lm(Y ~ X + B)
> summary(m1)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.82040 0.82263 3.429 0.000892 ***
X 0.68665 0.09811 6.999 3.36e-10 ***
B 0.66931 0.16452 4.068 9.65e-05 ***
Now we have a biased estimate for X.
53,141 | Controlling variables in causal diagrams | In the model, B = A + C, and A and C are orthogonal. If you hold B constant, you induce covariance between A and C. Say you hold B constant at 10. Then, if A is 7, C is 3. If A is 6, then C is 4. This creates a confound for the X -> Y relationship, a backdoor path from X through A through the covariance with C to Y.
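A quick R sketch of this point (holding B "constant" here means restricting attention to a narrow slice of its values):

# A and C are independent, B = A + C. Unconditionally A and C are uncorrelated,
# but within a narrow band of B they become strongly negatively related.
set.seed(7)
A <- rnorm(1e4)
C <- rnorm(1e4)
B <- A + C
cor(A, C)                          # approximately 0
slice <- abs(B) < 0.1              # "hold B constant" near 0
cor(A[slice], C[slice])            # strongly negative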
C is a confound for the X -> Y relationship because X and Y are joint descendants of C. Controlling for C negates that joint dependence. You don't need to control for B because B by itself is not a confound--Y is not a descendant of B, except through X.
One of the tough parts of interpreting diagrams is looking for the paths that are not there. Those omissions can be the most important part of the diagram, but without practice they will not be the focus of our attention.
53,142 | Controlling variables in causal diagrams | There are seven rules of association. In the first four, $R$ and $T$ are associated with each other:
$$R \rightarrow T$$
$$R \rightarrow S \rightarrow T$$
$$R \leftarrow S \rightarrow T$$
$$R \rightarrow \fbox S \leftarrow T$$
In the second three, $R$ and $T$ are not associated with each other through the path:
$$R \rightarrow \fbox S \rightarrow T$$
$$R \leftarrow \fbox S \rightarrow T$$
$$R \rightarrow S \leftarrow T$$
A box around a variable means we are conditioning on the variable.
The path $X \leftarrow A \rightarrow B \leftarrow C \rightarrow Y$, when not conditioning on $B$, is a closed path, meaning that $X$ and $Y$ are not associated with each other through this path. This is because the chain of association breaks when two arrows point to the same variable in a path (here, two arrows point to $B$). In this case, $B$ is called a collider.
Conditioning on $B$ opens the path of association. That is, $X \leftarrow A \rightarrow \fbox B \leftarrow C \rightarrow Y$ leaves open the association between $X$ and $Y$, because conditioning on a collider or a descendant of a collider opens the path of association. Other paths between $X$ and $Y$ may be open or closed, but through this particular pathway they are associated. This association is "noncausal" because it is an association that does not represent the causal effect of $X$ on $Y$.
53,143 | In regression analysis, how does one know which transformation to apply to either the response variables or features? | Consider the case of OLS regression:
$$ Y_i = a + b_1X_{i1} + \cdots +b_kX_{ik} + \epsilon_{i}$$
where $Y$ is your response variable, $a$ is the intercept, the $X$s are your independent variables (i.e. predictors), $b_1$ to $b_k$ are the slope coefficients associated with each independent variable, $\epsilon$ represents your residuals, and $i$ is the index denoting each row of observations.
Usually, after fitting your OLS model, residual plots are produced to visually inspect the spread of residuals. This is typically accomplished by plotting residuals vs fitted values. Visually, residuals should show constant variance (this is called homoskedasticity), rather than forming patterns (called heteroskedasticity). Heteroskedasticity is problematic as it means that parts of the model are predicted with different levels of error. The negative consequences of this are related to the wrong estimation of parameters' standard errors, p-values and confidence intervals. Therefore, your ability to correctly state whether each variable is statistically significant is compromised. Please note that under heteroskedasticity, your $b$ slopes may still be unbiased: it is just you will not be able to say with confidence if they are statistically significant.
Now there are a number of ways of dealing with this issue, including, for example, the use of heteroskedasticity robust standard errors, or transformations. Let us comment on the latter to stay on track. Transforming $Y$ and/or $X$ variables can be a remedy for making the residual plot (more) homoskedastic.
One flexible approach to identifying transformations falls under the theme of the Box-Cox (Box & Cox, 1964) and Box-Tidwell (Box & Tidwell, 1962) techniques. You can easily compute those in R (Fox, 2002; Faraway, 2005). In case you are very interested, manual calculation of both is covered in Hutcheson and Sofroniou (1999). The Box-Cox method suggests transformations to the response variable (DV), and the Box-Tidwell procedure suggests transformations for the predictor variables (IVs).
One strategy is to start with a single type of transformation - for example, a transformation of the response variable (DV). This can be accomplished with the MASS package in R by using the boxcox function. This will produce a plot of log-likelihoods versus $\lambda$ (power transformations). Then, for example, the graph can suggest a power transformation of 3 with a 95% confidence interval between 2.5 and 3.75. As such, we transform the response variable from $Y$ into $Y^3$.
Next, assess the residual plot and check if this resulted in an improvement. For example, your residual plot may have become less heteroskedastic, but there may still be room for improvement. In that case, we may proceed further and try to additionally apply transformation(s) to the independent predictor variable(s). This can be accomplished with the aforementioned Box-Tidwell transformation, by using the boxTidwell function in the car package in R. Here, we may get a suggestion that your IV should be raised to the power of, for example, $.6968$. So then you would add this transformation and end up with some model such as
$Y^3 = a + \beta_{1}X^{.7}$. So then you assess the fit and hopefully, the issue of heteroskedasticity is resolved or significantly improved upon.
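A minimal R sketch of this workflow, with fabricated data and hypothetical variable names (the suggested powers will of course differ for real data):

library(MASS)   # boxcox
library(car)    # boxTidwell

set.seed(1)
dat <- data.frame(x = runif(200, 1, 10))
dat$y <- (2 + 0.5 * dat$x + rnorm(200, sd = 0.2))^(1/3)   # made-up example data

bc <- boxcox(y ~ x, data = dat, lambda = seq(0, 4, 0.1))  # profile log-likelihood over lambda
lambda <- bc$x[which.max(bc$y)]                           # should land near 3 for these data
dat$y_t <- dat$y^lambda                                   # transformed response
fit <- lm(y_t ~ x, data = dat)                            # refit with the transformed response

boxTidwell(y_t ~ x, data = dat)                           # suggested power for the predictor
plot(fit, which = 1)                                      # residuals vs fitted: check the spread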
Caution: from real-life experience, I can assure you that transformations do not always work. They are not a magical cure, and if time is of the essence you may consider resorting to heteroskedasticity-robust standard errors. However, this, in itself, is a different topic.
Edit
Courtesy to @IsabellaGhement who correctly noted that in addition to improving on model heteroskedasticity (non-constant variance), power transformations may also represent a useful solution to violations of linearity and normality.
References
Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological), 26(2), 211-243.
Box, G. E. P., & Tidwell, P. W. (1962). Transformation of the independent variables. Technometrics, 4(4), 531-550.
Fox, J. (2002). An R and S-PLUS Companion to Applied Regression. Sage Publications, Thousand Oaks, CA.
Faraway, J. J. (2005). Extending the Linear Model with R (Texts in Statistical Science).
Hutcheson, G. D., & Sofroniou, N. (1999). The multivariate social scientist: Introductory statistics using generalized linear models. London: Sage.
$$ Y_i = a + b_1X_{i1} + \cdots +b_kX_{ik} + \epsilon_{i}$$
where $Y$ is your response variable, $a$ is the intercept, the $X$s are your independent variables (i.e | In regression analysis, how does one know which transformation to apply to either the response variables or features?
Consider the case of OLS regression:
$$ Y_i = a + b_1X_{i1} + \cdots +b_kX_{ik} + \epsilon_{i}$$
where $Y$ is your response variable, $a$ is the intercept, the $X$s are your independent variables (i.e. predictors), $b_1$ to $b_k$ are the slope coefficients associated with each independent variable, $\epsilon$ represents your residuals, and $i$ is the index denoting each row of observations.
Usually, after fitting your OLS model, residual plots are produced to visually inspect the spread of residuals. This is typically accomplished by plotting residuals vs fitted values. Visually, residuals should show constant variance (this is called homoskedasticity), rather than forming patterns (called heteroskedasticity). Heteroskedasticity is problematic as it means that parts of the model are predicted with different levels of error. The negative consequences of this are related to the wrong estimation of parameters' standard errors, p-values and confidence intervals. Therefore, your ability to correctly state whether each variable is statistically significant is compromised. Please note that under heteroskedasticity, your $b$ slopes may still be unbiased: it is just you will not be able to say with confidence if they are statistically significant.
Now there are a number of ways of dealing with this issue, including, for example, the use of heteroskedasticity robust standard errors, or transformations. Let us comment on the latter to stay on track. Transforming $Y$ and/or $X$ variables can be a remedy for making the residual plot (more) homoskedastic.
One flexible approach to identifying transformations falls under the theme of the Box-Cox (Box & Cox, 1964) and Box-Tidwell (Box & Tidwell, 1962) techniques. You can easily compute those in R (Fox, 2002; Faraway, 2005). In case you are very interested, manual calculation of both is covered in Hutcheson and Sofroniou (1999). The Box-Cox method suggests transformations to the response variable (DV), and the Box-Tidwell procedure suggests transformations for the predictor variables (IVs).
One strategy is to start with a single type of transformation. For example, transformation to the response variable (DV). This can be accomplished with the MASS package in R by using boxcox function. This will produce a plot of log-likelihoods versus $\lambda$ (power transformations). Then, for example, the graph can suggest a power transformation of 3 with 95% confidence intervals between 2.5 and 3.75. As such, we transform the response variable from $Y$ into $Y^3$.
Next, assess the residual plot and check if this resulted in an improvement. For example, your residual plot became less heteroskedastic but there may still be room for improvement. In that case, we may proceed further and try to additionally apply transformation(s) to the independent predictor variable(s). This can be accomplished with the aforementioned Box-Tidwell transformation, by using the boxTidwell function in car package in R. Here, we may get a suggestion that your IV should be raised to the power of, for example, $.6968$. So then you would add this transformation and end up with some model such as
$Y^3 = a + \beta_{1}X^{.7}$. So then you assess the fit and hopefully, the issue of heteroskedasticity is resolved or significantly improved upon.
Caution: from real-life experience, I can assure you that transformations do not always work. So they are not always a magical cure, and if time is of the essence for you, you may consider resorting to using heteroskedasticity-robust standard errors. However, this, in itself, is a different topic.
Edit
Courtesy to @IsabellaGhement who correctly noted that in addition to improving on model heteroskedasticity (non-constant variance), power transformations may also represent a useful solution to violations of linearity and normality.
References
Box, G. E. P., & Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B (Methodological), 26(2), 211-243.
Box, G. E. P., & Tidwell, P. W. (1962). Transformation of the independent variables. Technometrics, 4(4), 531-550.
Fox, J. (2002). An R and S-PLUS Companion to Applied Regression. Sage Publications, Thousand Oaks, CA.
Faraway, J. J. (2005). Extending the linear model with R (Texts in Statistical Science).
Hutcheson, G. D., & Sofroniou, N. (1999). The multivariate social scientist: Introductory statistics using generalized linear models. London: Sage.
53,144 | In regression analysis, how does one know which transformation to apply to either the response variables or features? | If your only tool is OLS regression, then you can use methods such as Box-Cox. In the old days, this was true in practice because computers were (first) unavailable and (later) not that powerful or fast.
Nowadays, though, we have very powerful computers and methods that can implement many methods other than OLS, in particular quantile regression and various kinds of robust regression. These do not make assumptions about the distribution of errors/residuals, which is the problem that most transformations are trying to solve.
Therefore, I'd say that you ought to transform variables when it suits your substantive purpose. E.g. for variables related to money (income, expenses, sales, etc.) it often makes sense to take the log.
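A minimal sketch of those alternatives in R (quantreg and MASS are one possible choice of packages; dat, y and x are placeholders):
library(quantreg)   # quantile regression
library(MASS)       # robust (M-estimation) regression via rlm()
fit_ols <- lm(y ~ x, data = dat)             # ordinary least squares
fit_med <- rq(y ~ x, tau = 0.5, data = dat)  # median regression
fit_rob <- rlm(y ~ x, data = dat)            # downweights outlying residuals
fit_log <- lm(log(y) ~ x, data = dat)        # log response for money-type variables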
53,145 | In regression analysis, how does one know which transformation to apply to either the response variables or features? | Mosteller and Tukey (1977) offered their bulging rule, suggesting transformations (of X or Y) based on eyeballing the shape of the untransformed relationship.
Mosteller and Tukey suggested looking at the plot of untransformed X and Y, matching the curve that you see to one of the four quadrant curves in their diagram, and applying an indicated transformation, taking either X or Y (or both) to either a higher power or a lower power. Here, "higher power" means square or cube, while "lower power" includes log, inverse, inverse square and so forth.
You also can think of this as adjusting the axis for either X or Y, and in so doing making the curved relationship more straight. Higher power transformations stretch the axis, while lower power transformations compress the axis.
As Cohen et al. (2003) pointed out, transforming Y may resolve multiple problems, including heterogeneity in residuals, so that may be preferred. But also think about how you are going to explain the result to your client. A transformation applied to one predictor in a model may take less explaining than a transformation applied to the outcome variable. And then there is convention--if you can resolve the problem either by applying the same transformation that everyone else has used or by doing something different, life may be easier if you follow convention.
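The following small simulated example (not from Mosteller and Tukey) illustrates the idea: a relationship that bulges toward lower powers is straightened by taking logs of x and/or y.
set.seed(1)
x <- runif(200, 1, 10)
y <- exp(0.8 * log(x) + rnorm(200, sd = 0.1))   # roughly y proportional to x^0.8
par(mfrow = c(1, 2))
plot(x, y, main = "Untransformed: curved")
plot(log(x), log(y), main = "Both logged: roughly straight")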
References:
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.).
Mosteller, F., & Tukey, J. W. (1977). Data analysis and regression: A second course in statistics. Addison-Wesley Series in Behavioral Science: Quantitative Methods.
53,146 | Mixed-effect logistic regression with very large dataset | You do not indicate if you have multiple human subjects in your study who are supposed to mention these preposition phrases. If you do, your model would need to reflect this by also including a random effect for subject.
With your model as it is currently formulated, the odds ratio of 1.26 refers just to a "typical" verb (i.e., one for which the random intercept and the random slope of bin_figure_anim are equal to 0). The odds ratios corresponding to other verbs will vary around 1.26, with the extent of variation governed by the variance of the random slopes of bin_figure_anim. In your case, it probably would be informative to describe the extent of this variation. So the first takeaway would be to not overreact when seeing the odds ratio for the "typical" verb - the odds ratios for other verbs could be higher or lower than it.
Focusing on the "typical" verb, you can construct a confidence interval for the true odds ratio associated with it. Some people would choose a higher confidence level (e.g., 99%) to offset the fact that the sample size is quite large. (For the same reason, they would use a significance level alpha = 0.01 for their tests of significance.). The confint() function will help you get this interval on the log odds scale - you can exponentiate its endpoints to get the interval on the odds ratio scale. Let's say this interval comes out to be (1.11, 1.35) on the odds ratio scale. While your best guess from the data is that the true odds ratio for the "typical" verb is 1.26, in actuality this true ratio can be as low as 1.11 and as high as 1.35. This type of statement will more aptly describe the uncertainty involved in estimating the true odds ratio for the "typical" verb. It will also help you focus on describing the "effect size", as recommended in @RobertLong's excellent answer.
The size of the sample should play in your favour in terms of helping you produce a better value for your best guess as to what the true odds ratio is for the "typical" verb. When constructing a confidence interval for this true value, you can guard against the large sample size by choosing a higher confidence level (e.g., 99%). Similarly, you can choose a smaller significance level for your significance tests involving this true value (e.g., alpha = 0.01).
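As a sketch only (assuming the model was fit with lme4's glmer; the data frame and the response name are hypothetical, while bin_figure_anim and verb follow the description above):
library(lme4)
m <- glmer(response ~ bin_figure_anim + (1 + bin_figure_anim | verb),
           data = dat, family = binomial)
ci <- confint(m, parm = "beta_", method = "Wald", level = 0.99)  # log-odds scale
exp(ci)         # endpoints on the odds-ratio scale
exp(fixef(m))   # odds ratio for the "typical" verb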
P.S. It's helpful to define your variables and their values more explicitly when posting here. For example, what does bin_figure_anim mean? Is it a binary variable? When does it take the value 0 and when does it take the value 1? Describing your study design (briefly) is also recommended - this way, people can tell whether your proposed model adequately reflects your study design or not.
53,147 | Mixed-effect logistic regression with very large dataset | As you say, statistical significance is due to the sample size. You should focus on the effect size. 1.26 is not a particularly low odds ratio in many disciplines but in others it is, so it really does depend on what you consider to be low, for your study, in your domain.
53,148 | Forecast accuracy rolling window | Symmetric MAPE (SMAPE) will be useful for you. Pursue https://stats.stackexchange.com/search?q=SMAPE .
53,149 | Forecast accuracy rolling window | @IrishStat's suggestion to use SMAPE is probably the most correct.
A log(x+1) transform could also work.
It would be useful to know your application. What is the proportion of 0's? I am curious if you have an intermittent time series.
Intermittent time series are often modeled with Croston's method or the Poisson distribution, the benchmark forecast being an all-zero forecast. If you are dealing with an intermittent time series, then it is common to use a distributional error metric, such as the continuous ranked probability score (CRPS) or the pinball loss.
53,150 | Forecast accuracy rolling window | Another interesting measure that can deal with zero actuals:
Relative Total Absolute Error (RTAE): the mean of all absolute errors divided by the mean of the absolute actuals.
This measure is essentially a MAPE in which the summation is moved inside the ratio. An advantage is that the RTAE can deal with zero actuals.
That said, SMAPE and the RTAE are useful when there are only a few zeroes in your time series. If there are a lot of zeroes (close to 50% or more), then these measures are not useful for evaluating the forecast: in such a case the best forecast would be an all-zero forecast according to these measures. Here is a reference for error measures for time series with a lot of zeroes: https://www.lancaster.ac.uk/pg/waller/pdfs/Intermittent_Demand_Forecasting.pdf .
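A small sketch of how the two measures might be computed in R (note that SMAPE has several variants; this is one common form):
smape <- function(actual, forecast) {
  mean(2 * abs(forecast - actual) / (abs(actual) + abs(forecast))) * 100
}
rtae <- function(actual, forecast) {
  mean(abs(forecast - actual)) / mean(abs(actual))
}
actual   <- c(0, 3, 0, 5, 2)   # made-up series with some zero actuals
forecast <- c(1, 2, 1, 4, 2)
smape(actual, forecast)
rtae(actual, forecast)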
53,151 | Why doesn't a Conditional Variational AutoEncoder (CVAE) cluster data like the vanilla VAE? | In a conditional VAE, the approximate posterior is already conditioned on the class -- $q(z|X,c)$, so there is no need for the latent space to separate the class of each input.
How does it know how to produce different classes?
The decoder is also conditioned on the class.
How should I interpret what is going on here?
The latent space is probably modeling other types of "style" -- thickness, tilt, size, etc. of the digits.
53,152 | What does "large grant" mean in machine learning? | Grants fund research. The joke is that statistics and machine learning receive dramatically different amounts of funding, so what counts as a "large" amount of funding (a "large grant") is different between the two fields.
53,153 | Does the sum of two independent exponentially distributed random variables with different rate parameters follow a gamma distribution? | If $X\sim \exp(\lambda_1)$, $Y\sim\exp(\lambda_2)$ and $\lambda_1\neq\lambda_2$, the sum $Z=X+Y$ has pdf given by the convolution
\begin{align}
f_Z(z)
&=\int_{-\infty}^\infty f_Y(z-x)f_X(x)dx
\\&=\lambda_1\lambda_2\int_0^z e^{-\lambda_2(z-x)}e^{-\lambda_1x}dx
\\&=\lambda_1\lambda_2 e^{-\lambda_2z}\int_0^z e^{-(\lambda_1-\lambda_2)x}dx
\\&=\frac{\lambda_1\lambda_2}{\lambda_1-\lambda_2} e^{-\lambda_2z}(1- e^{-(\lambda_1-\lambda_2)z})
\\&=\frac{\lambda_1\lambda_2}{\lambda_1-\lambda_2} (e^{-\lambda_2z} - e^{-\lambda_1z})
\end{align}
for $z>0$, which is the two-parameter hypoexponential distribution.
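A quick simulation check of this density in R (rates chosen arbitrarily for illustration):
lambda1 <- 2; lambda2 <- 0.5
z <- rexp(1e5, rate = lambda1) + rexp(1e5, rate = lambda2)
hist(z, breaks = 100, freq = FALSE, main = "Sum of two exponentials")
curve(lambda1 * lambda2 / (lambda1 - lambda2) * (exp(-lambda2 * x) - exp(-lambda1 * x)),
      from = 0, to = max(z), add = TRUE, lwd = 2)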
53,154 | Why do attention models need to choose a maximum sentence length? | A "typical" attention mechanism might assign the weight $w_i$ to one of the source vectors as $w_i \propto \exp(u_i^Tv)$ where $u_i$ is the $i$th "source" vector and $v$ is the query vector. The attention mechanism described in OP from "Pointer Networks" opts for something slightly more involved: $w_i \propto \exp(q^T \tanh(W_1u_i + W_2v))$, but the main ideas are the same -- you can read my answer here for a more comprehensive exploration of different attention mechanisms.
The tutorial mentioned in the question appears to have the peculiar mechanism
$$w_i \propto \exp(a_i^Tv)$$
where $a_i$ is the $i$th row of a learned weight matrix $A$. I say that it is peculiar because the weight on the $i$th input element does not actually depend on any of the $u_i$ at all! In fact we can view this mechanism as attention over word slots -- how much attention to put on the first word, the second word, the third word, and so on -- which does not pay any attention to which words are occupying which slots.
Since $A$, a learned weight matrix, must be fixed in size, the number of word slots must also be fixed, which means the input sequence length must be constant (shorter inputs can be padded). Of course this peculiar attention mechanism doesn't really make sense at all, so I wouldn't read too much into it.
Regarding length limitations in general: the only limitation to attention mechanisms is a soft one: longer sequences require more memory, and memory usage scales quadratically with sequence length (compare this to linear memory usage for vanilla RNNs).
I skimmed the "Effective Approaches to Attention-based Neural Machine Translation" paper mentioned in the question, and from what I can tell they propose a two-stage attention mechanism: in the decoder, the network selects a fixed sized window of the input of the encoder outputs to focus on. Then, attention is applied across only those source vectors within the fixed sized window. This is more efficient than typical "global" attention mechanisms. | Why do attention models need to choose a maximum sentence length? | A "typical" attention mechanism might assign the weight $w_i$ to one of the source vectors as $w_i \propto \exp(u_i^Tv)$ where $u_i$ is the $i$th "source" vector and $v$ is the query vector. The atten | Why do attention models need to choose a maximum sentence length?
A "typical" attention mechanism might assign the weight $w_i$ to one of the source vectors as $w_i \propto \exp(u_i^Tv)$ where $u_i$ is the $i$th "source" vector and $v$ is the query vector. The attention mechanism described in OP from "Pointer Networks" opts for something slightly more involved: $w_i \propto \exp(q^T \tanh(W_1u_i + W_2v))$, but the main ideas are the same -- you can read my answer here for a more comprehensive exploration of different attention mechanisms.
The tutorial mentioned in the question appears to have the peculiar mechanism
$$w_i \propto \exp(a_i^Tv)$$
Where $a_i$ is the $i$th row of a learned weight matrix $A$. I say that it is peculiar because the weight on the $i$th input element does not actually depend on any of the $u_i$ at all! In fact we can view this mechanism as attention over word slots -- how much attention to put to the first word, the second word, third word etc, which does not pay any attention to which words are occupying which slots.
Since $A$, a learned weight matrix, must be fixed in size, then the number of word slots must also be fixed, which means the input sequence length must be constant (shorter inputs can be padded). Of couse this peculiar attention mechanism doesn't really make sense at all, so I wouldn't read too much into it.
Regarding length limitations in general: the only limitation to attention mechanisms is a soft one: longer sequences require more memory, and memory usage scales quadratically with sequence length (compare this to linear memory usage for vanilla RNNs).
I skimmed the "Effective Approaches to Attention-based Neural Machine Translation" paper mentioned in the question, and from what I can tell they propose a two-stage attention mechanism: in the decoder, the network selects a fixed sized window of the input of the encoder outputs to focus on. Then, attention is applied across only those source vectors within the fixed sized window. This is more efficient than typical "global" attention mechanisms. | Why do attention models need to choose a maximum sentence length?
A "typical" attention mechanism might assign the weight $w_i$ to one of the source vectors as $w_i \propto \exp(u_i^Tv)$ where $u_i$ is the $i$th "source" vector and $v$ is the query vector. The atten |
53,155 | Why do attention models need to choose a maximum sentence length? | It is only an efficiency issue. In theory, the attention mechanism can work with arbitrarily long sequences. The reason is that batches must be padded to the same length.
"Sentences of the maximum length will use all the attention weights, while shorter sentences will only use the first few."
By this sentence, they mean that they want to avoid batches like this:
A B C D E F G H I K L M N O
P Q _ _ _ _ _ _ _ _ _ _ _ _
R S T U _ _ _ _ _ _ _ _ _ _
V W _ _ _ _ _ _ _ _ _ _ _ _
Because of one long sequence, most of the memory is wasted on padding and is not used for the weight updates.
A common strategy to avoid this problem (not included in the tutorial) is bucketing, i.e., having batches with an approximately constant number of words, but a different number of sequences in each batch, so the memory is used efficiently.
53,156 | If $F_X(z) > F_Y (z)$ for all $z\in \mathbb{R}$ then $P(X < Y ) > 0$? | Firstly, it is worth noting that the antecedent condition in your conjecture is a slightly stronger version of the condition for strict first-order stochastic dominance (FSD) $X \ll Y$, so it implies this stochastic dominance relationship. This condition is much stronger than what you actually need to get the result in the conjecture, so I will give you a proof for a stronger result (same implication but with a weaker antecedent condition). Your chosen method of proof is a good one, and you are almost there - just one more step to go!
Theorem: If $F_X(z) > F_Y(z)$ for some $z \in \mathbb{R}$ then $\mathbb{P}(X<Y) > 0$.
Proof: We will proceed using a proof-by-contradiction. Contrary to the result in the theorem, suppose that $\mathbb{P}(X<Y)=0$. Then for all $z \in \mathbb{R}$ you have:
$$\begin{equation} \begin{aligned}
F_X(z) = \mathbb{P}(X \leqslant z)
&= \mathbb{P}(X \leqslant z, X < Y) + \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&= 0 + \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&= \mathbb{P}(Y \leqslant X \leqslant z) \\[6pt]
&\leqslant \mathbb{P}(Y \leqslant z) = F_Y(z), \\[6pt]
\end{aligned} \end{equation}$$
which contradicts the antecedent condition for the theorem. This establishes the theorem by contradiction. $\blacksquare$
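A quick numerical illustration in R (an assumed example, not part of the original question): take $X \sim N(0,1)$ and $Y \sim N(1,1)$, so that $F_X(z) > F_Y(z)$ for every $z$; the simulated $\mathbb{P}(X<Y)$ is indeed well above zero.
set.seed(123)
x <- rnorm(1e5, mean = 0, sd = 1)
y <- rnorm(1e5, mean = 1, sd = 1)
mean(x < y)   # close to pnorm(1/sqrt(2)), about 0.76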
53,157 | If $F_X(z) > F_Y (z)$ for all $z\in \mathbb{R}$ then $P(X < Y ) > 0$? | Under the assumption that $X$ and $Y$ are independent and continuous,
\begin{align*}\Bbb P(X<Y)&=\Bbb E[\Bbb I_{X<Y}]=\Bbb E^Y[\Bbb P(X<Y\mid Y)]\\ &=\Bbb E^Y[F_X(Y)]\\&>\Bbb E^Y[F_Y(Y)]\\ &=\int_{\Bbb R} F_Y(y) \, \text{d}F_Y(y) \\&= \frac{1}{2} \int_{\Bbb R} \, \text{d}F_Y^2(y)\\&=\frac{1}{2}F_Y^2(\infty)-\frac{1}{2}F_Y^2(-\infty)\\&=1/2\end{align*}
Further,
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=\int_{\Bbb R} \Bbb P(Y'<y) \,\text{d}F_Y(y)$$
when $Y'\sim F_Y(\cdot)$, or
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=\Bbb P(Y'<Y)$$
when $Y,Y'\stackrel{\text{iid}}{\sim} F_Y(\cdot)$, implying
$$\int_{\Bbb R} F_Y(y) \,\text{d}F_Y(y)=1/2$$
53,158 | Proving $\Gamma\left(\frac{1}{2}\right)=\sqrt\pi$ using the expected value of standard normal variable | Notice first
\begin{eqnarray} \Gamma(1/2) &=& \int_{0}^{\infty}y^{-1/2}e^{-y}dy \\
&=& \int_{0}^{\infty} \sqrt{2}z^{-1} e^{-z^2/2} z \ d z \qquad \text{ (substitute $y=\frac{z^2}{2}$ )} \\
&=& \frac{1}{2} \int_{-\infty}^{\infty} \sqrt{2} e^{-z^2/2} \ d z \qquad \text{ (since even function)} \\
&=& \int_{-\infty}^{\infty} \frac{1}{\sqrt{2}} z^2 e^{-z^2/2} \ d z \qquad \text{ (*using integration by parts)} \\
\end{eqnarray}
The last line (integration by parts) is valid as
\begin{eqnarray}
\int_{-\infty}^{+\infty}z^{2}\left(e^{-z^2/2}\right)dz
&=&\int_{-\infty}^{+\infty}z\left(ze^{-z^2/2}\right)dz \\
&=& \int_{-\infty}^{+\infty}z\left(-e^{-z^2/2}\right)'dz \\
&=& \underbrace{-ze^{-z^2/2}\Bigg|^{+\infty}_{-\infty}}_{=0}+\int_{-\infty}^{+\infty}\left(e^{-z^2/2}\right)dz\\
\end{eqnarray}
Hence, multiplying and dividing by $\sqrt{2\pi}$ to form the standard normal density, we can see
$$\Gamma(1/2) = \sqrt{\pi}\, \mathbb{E}[Z^2], $$
and since $\mathbb{E}[Z^2]=\operatorname{Var}(Z)+(\mathbb{E}[Z])^2=1$ for a standard normal variable $Z$,
$$\Gamma(1/2) = \sqrt{\pi} \text{ □ }$$
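A quick numerical check of the result in R:
gamma(0.5)                                          # 1.772454
sqrt(pi)                                            # 1.772454
integrate(function(y) y^(-1/2) * exp(-y), 0, Inf)   # same value numerically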
Addition:
To be clear, below I am answering exactly the question user @Sarina asked. My answer is based on a result he assumes (that the variance of the standard normal equals 1). The answer would be more complete by proving that this is the case (that involves a simple change from Cartesian to polar coordinates), but I feel this is not what user @Sarina wanted to know.
53,159 | Proving $\Gamma\left(\frac{1}{2}\right)=\sqrt\pi$ using the expected value of standard normal variable | This is a perfect example of a question-begging mathematical "proof": The kind of "proof" you are trying to construct here is really not terribly useful. If you don't know how to prove the integral:
$$\int \limits_0^\infty y^{-1/2} e^{-y} dy = \sqrt{\pi},$$
then you need to ask yourself why you are willing to assume that you already know the integral:
$$\int \limits_{-\infty}^\infty y^2 e^{-\tfrac{1}{2} y^2} dy = \sqrt{2 \pi},$$
which is what you are assuming when you appeal to the fact that a standard normal random variable has a variance of one. In this particular context, the properties of the normal density function first had to be derived by elementary methods that are essentially equivalent to the result you are trying to prove. Indeed, even the fact that the normal density is a density function (i.e., that it integrates to one) is essentially just a restatement of an integral that is a simple variation of the integral you are trying to prove.
Appealing to the properties of the normal density to prove the gamma integral is an example of a "proof" that tries to demonstrate result $A$ by appealing to result $B$, where the latter theorem is a result that is itself proved by the same technique as an elementary proof of $A$. In these cases you need to be very careful because the latter result may even have a "proof" that hinges on the former result, in which case you get a completely circular argument.
Proof by elementary methods: Fortunately, in this particular case, the gamma integral and the resulting properties of the normal distribution are all provable by elementary methods. Rather than trying to "prove" the gamma integral by appealing to what are essentially variations of the same result, you can use Liouville's method (hat tip to whuber for mentioning it in comments), which uses polar coordinates to establish that:
$$\begin{equation} \begin{aligned}
\Bigg( \int \limits_{-\infty}^\infty e^{- \frac{1}{2} y^2} dy \Bigg)^2
&= \Bigg( \int \limits_{-\infty}^\infty e^{- \frac{1}{2} x^2} dx \Bigg) \Bigg( \int \limits_{-\infty}^\infty e^{- \frac{1}{2} y^2} dy \Bigg) \\[6pt]
&= \int \limits_{-\infty}^\infty \int \limits_{-\infty}^\infty e^{- \frac{1}{2} (x^2+y^2)} dx dy \\[6pt]
&= \int \limits_0^{2 \pi} \int \limits_{0}^\infty r e^{- \frac{1}{2} r^2} dr d\theta \\[6pt]
&= \int \limits_0^{2 \pi} \Bigg[ - e^{- \frac{1}{2} r^2} \Bigg]_{r=0}^{ r \rightarrow\infty} d\theta \\[6pt]
&= \int \limits_0^{2 \pi} \Bigg[ 0 - (-1) \Bigg] d\theta \\[6pt]
&= \int \limits_0^{2 \pi} d\theta \\[6pt]
&= 2 \pi. \\[6pt]
\end{aligned} \end{equation}$$
Using the substitution $r = \tfrac{1}{2} y^2$ gives $dr = y dy$, so you then have:
$$\begin{equation} \begin{aligned}
\Gamma(\tfrac{1}{2})
&= \int \limits_0^\infty r^{-1/2} e^{-r} dr \\[6pt]
&= \int \limits_0^\infty \sqrt{\frac{2}{y^2}} e^{- \tfrac{1}{2} y^2} y \ dy \\[6pt]
&= \sqrt{2} \int \limits_0^\infty e^{- \tfrac{1}{2} y^2} \ dy \\[6pt]
&= \frac{1}{\sqrt{2}} \int \limits_{-\infty}^\infty e^{- \tfrac{1}{2} y^2} \ dy \\[6pt]
&= \frac{1}{\sqrt{2}} \cdot \sqrt{2 \pi} = \sqrt{\pi}. \\[6pt]
\end{aligned} \end{equation}$$
This method uses only substitutions and familiarity with polar coordinates, and most importantly, it does not require you to assume any premise that is effectively just a variation of the result.
53,160 | Can Adjusted R squared be equal to 1? | Dan and Michael point out the relevant issues. Just for completeness, the relationship between adjusted $R^2$ and $R^2$ is given by (see, e.g., here)
$$
R^2_{adjusted}=1-(1-R^2)\frac{n-1}{n-K},
$$
(with $K$ the number of regressors, including the constant). This shows that $R^2_{adjusted}=1$ if $R^2=1$, unless (see below) $K=n$.
$R^2=1$ occurs when all residuals $\hat u_i=y_i-\hat y_i$ are zero, as
$$
R^2=1-\frac{\hat{u}'\hat{u}/n}{\tilde{y}'\tilde{y}/n}.
$$
Here, $\hat u$ denotes the vector of residuals and $\tilde y$ the vector of demeaned observations on the dependent variable.
Dan discusses one reason to get an $R^2$ of 1. Another is to have as many regressors as observations, i.e., $K=n$.
Technically, this is because the $n\times K$ regressor matrix $X$ then is square. The OLS estimator $\hat\beta=(X'X)^{-1}X'y$ can then be written as (assuming no exact multicollinearity)
$$
\hat\beta=(X'X)^{-1}X'y=X^{-1}{X'}^{-1}X'y=X^{-1}y
$$
so that the fitted values $\hat y=X\hat\beta$ are just $\hat y=XX^{-1}y=y$, so that all residuals are zero.
Here is an illustration using artificial data (code below), in which regressors are generated totally independently of $y$, and yet we achieve an $R^2$ of 1 once we have as many of them as we have observations.
Code:
n <- 15
regressors <- n-1 # enough, as we'll also fit a constant
y <- rnorm(n)
X <- matrix(rnorm(regressors*n),ncol=regressors)
collectionR2s <- rep(NA,regressors)
for (i in 1:regressors){
collectionR2s[i] <- summary(lm(y~X[,1:i]))$r.squared
}
plot(1:regressors,collectionR2s,col="purple",pch=19,type="b",lwd=2)
abline(h=1, lty=2)
When $K=n$, however, R correctly does not report an adjusted $R^2$:
> summary(lm(y~X))
Call:
lm(formula = y ~ X)
Residuals:
ALL 15 residuals are 0: no residual degrees of freedom!
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.36296 NA NA NA
X1 -1.09003 NA NA NA
X2 0.39177 NA NA NA
X3 0.19273 NA NA NA
X4 0.51528 NA NA NA
X5 -0.04530 NA NA NA
X6 -1.28539 NA NA NA
X7 -0.72770 NA NA NA
X8 -0.14604 NA NA NA
X9 0.34385 NA NA NA
X10 -0.93811 NA NA NA
X11 2.23064 NA NA NA
X12 0.06744 NA NA NA
X13 0.21220 NA NA NA
X14 -2.29134 NA NA NA
Residual standard error: NaN on 0 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: NaN
F-statistic: NaN on 14 and 0 DF, p-value: NA
53,161 | Can Adjusted R squared be equal to 1? | An adjusted R squared equal to one implies perfect prediction and is an indication of a problem in your model. Adjusted R squared is a penalised version of R squared, which is based on the ratio of the residual sum of squares to the total sum of squares - as you approach 1, the implication is that there is no variation/deviation away from your model.
I would suggest you begin by looking at a correlation matrix, or put each predictor into your model individually to see which predictor is causing the issue.
In R, you will get a warning from summary.lm: "essentially perfect fit..."
In the (single predictor) example below you will see that adjusted R square is less than 1 even when the correlation between y and x is greater than 0.99.
# create a data frame with some strongly correlated variables
myData <- data.frame(y = rnorm(n = 1000, mean = 0, sd = 1))
myData$x1 <- myData$y
myData$x2 <- jitter(myData$x1, factor = 10)
myData$x3 <- jitter(myData$x1, factor = 1000)
# fit models
myModel1 <- lm(y ~ x1, data = myData)
myModel2 <- lm(y ~ x2, data = myData)
myModel3 <- lm(y ~ x3, data = myData)
# output
summary(myModel1)
#> Warning in summary.lm(myModel1): essentially perfect fit: summary may be
#> unreliable
#>
#> Call:
#> lm(formula = y ~ x1, data = myData)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -4.551e-15 -1.200e-17 6.000e-18 2.090e-17 3.455e-16
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.404e-17 4.924e-18 2.852e+00 0.00444 **
#> x1 1.000e+00 5.085e-18 1.966e+17 < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.553e-16 on 998 degrees of freedom
#> Multiple R-squared: 1, Adjusted R-squared: 1
#> F-statistic: 3.867e+34 on 1 and 998 DF, p-value: < 2.2e-16
summary(myModel2)
#>
#> Call:
#> lm(formula = y ~ x2, data = myData)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -1.996e-03 -9.643e-04 -1.996e-05 1.009e-03 2.034e-03
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 3.278e-06 3.647e-05 0.09 0.928
#> x2 1.000e+00 3.766e-05 26550.25 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.00115 on 998 degrees of freedom
#> Multiple R-squared: 1, Adjusted R-squared: 1
#> F-statistic: 7.049e+08 on 1 and 998 DF, p-value: < 2.2e-16
summary(myModel3)
#>
#> Call:
#> lm(formula = y ~ x3, data = myData)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -0.214135 -0.097828 -0.003721 0.099000 0.226453
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) -0.001598 0.003602 -0.444 0.657
#> x3 0.983900 0.003685 266.982 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.1136 on 998 degrees of freedom
#> Multiple R-squared: 0.9862, Adjusted R-squared: 0.9862
#> F-statistic: 7.128e+04 on 1 and 998 DF, p-value: < 2.2e-16
cor(myData$x1, myData$x2)
#> [1] 0.9999993
cor(myData$x1, myData$x3)
#> [1] 0.9930721
53,162 | How bad is Cholesky decomposition for OLS? | What you are seeing is exactly what one would expect. The condition number of your matrix $X$ is about $10^6$. Double precision floating point calculations give about 18 figures of accuracy so, in double precision, you would expect to get at most $18-6=12$ significant figures for the estimated coefficients from QR and at most $18-6-6=6$ significant figures from Cholesky, after allowing for accuracy lost due to conditioning.
(Cholesky loses twice the number of significant figures because forming $X^TX$ squares the condition number.)
Hence the coefficients from the two algorithms should start to differ in the 6th significant figure, which is exactly what you see. Your results show that for x2 the relative difference between the two algorithms is about
$$ 0.2 / 148232 = 1.35 \times 10^{-6},$$
which matches the condition number you started out with surprisingly well.
The condition number of $X$ is a reasonable guide to precision when the residuals are small. If we make sure that the exact coefficients are 1 and 1.00001 by
> y <- X %*% c(1,1.00001)
and rerun your calculation we get
> result
chol qr
x1 0.9999833544961814 0.999999999968327
x2 1.0000266455870466 1.000010000031673
confirming that QR is in fact correct to 12 significant figures and Cholesky is correct to 6 significant figures.
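For readers who want to reproduce this kind of comparison, here is a minimal R sketch (my own construction, not the original poster's code) that builds a design with a condition number of roughly $10^6$ and solves the least-squares problem both ways:
set.seed(1)
x <- rnorm(100)
X <- cbind(x1 = x, x2 = x + 1e-6 * rnorm(100))   # two nearly collinear columns
y <- X %*% c(1, 1.00001) + 1e-8 * rnorm(100)      # small residuals
kappa(X, exact = TRUE)                            # condition number, on the order of 1e6 here
# Cholesky route: solve the normal equations X'X b = X'y
R <- chol(crossprod(X))
b_chol <- backsolve(R, forwardsolve(t(R), crossprod(X, y)))
# QR route: factor X itself
b_qr <- qr.coef(qr(X), y)
cbind(chol = b_chol, qr = b_qr)   # expect roughly twice as many correct digits from qr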
The fitted values from Cholesky are generally more precise than the estimated coefficients. If the fitted values are more important to you than the coefficients, and that will often be so in highly collinear cases, then Cholesky is usually fine.
Cholesky is also about as good as QR when the residuals are very large, but this is the hardest case for both algorithms.
My PhD supervisor was Australia's foremost numerical analyst (Mike Osborne) and he used to say that Cholesky was not as bad as it's made out to be.
The full sensitivity analysis for Cholesky vs QR is given in Section 5.3.8 of Golub and Van Loan (1996), and is very much more complex than the simple calculation I used above. The sensitivity of Cholesky is roughly proportional to $\kappa + \rho \kappa^2$ where $\kappa$ is the condition number of $X$ and $\rho$ is a theoretical quantity that is almost impossible to compute in practice.
$\rho$ depends on the size of the residuals-- it is roughly proportional to but much smaller than the average squared residual.
The sensitivity of QR is somewhat better than Cholesky although not always as good as I have suggested above. Golub and Van Loan remark:
At the very minimum, this discussion should convince you how difficult it can be to choose the "right" algorithm!
Reference
Golub, GH, and Van Loan CF (1996). Matrix Computations. Johns Hopkins University Press, Baltimore. | How bad is Cholesky decomposition for OLS? | What you are seeing is exactly what one would expect. The condition number of your matrix $X$ is about $10^6$. Double precision floating point calculations give about 18 figures of accuracy so, in dou | How bad is Cholesky decomposition for OLS?
What you are seeing is exactly what one would expect. The condition number of your matrix $X$ is about $10^6$. Double precision floating point calculations give about 18 figures of accuracy so, in double precision, you would expect to get at most $18-6=12$ significant figures for the estimated coefficients from QR and at most $18-6-6=6$ significant figures from Cholesky, after allowing for accuracy lost due to conditioning.
(Cholesky loses twice the number of significant figures because forming $X^TX$ squares the condition number.)
Hence the coefficients from the two algorithms should start to differ in the 6th significant figure, which is exactly what you see. Your results show that for x2 the relative difference between the two algorithms is about
$$ 0.2 / 148232 = 1.35 \times 10^{-6},$$
which matches the condition number you started out with surprisingly well.
The condition number of $X$ is a reasonable guide to precision when the residuals are small. If we make sure that the exact coefficients are 1 and 1.00001 by
> y <- X %*% c(1,1.00001)
and rerun your calculation we get
> result
chol qr
x1 0.9999833544961814 0.999999999968327
x2 1.0000266455870466 1.000010000031673
confirming that QR is in fact correct to 12 significant figures and Cholesky is correct to 6 significant figures.
The fitted values from Cholesky are generally more precise than the estimated coefficients. If the fitted values are more important to you than the coefficients, and that will often be so in highly collinear cases, then Cholesky is usually fine.
Cholesky is also about as good as QR when the residuals are very large, but this is the hardest case for both algorithms.
My PhD supervisor was Australia's foremost numerical analyst (Mike Osborne) and he used to say that Cholesky was not as bad as it's made out to be.
The full sensitivity analysis for Cholesky vs QR is given in Section 5.3.8 of Golub and Van Loan (1996), and is very much more complex than the simple calculation I used above. The sensitivity of Cholesky is roughly proportional to $\kappa + \rho \kappa^2$ where $\kappa$ is the condition number of $X$ and $\rho$ is a theoretical quantity that is almost impossible to compute in practice.
$\rho$ depends on the size of the residuals-- it is roughly proportional to but much smaller than the average squared residual.
The sensitivity of QR is somewhat better than Cholesky although not always as good as I have suggested above. Golub and Van Loan remark:
At the very minimum, this discussion should convince you how difficult it can be to choose the "right" algorithm!
Reference
Golub, GH, and Van Loan CF (1996). Matrix Computations. Johns Hopkins University Press, Baltimore. | How bad is Cholesky decomposition for OLS?
What you are seeing is exactly what one would expect. The condition number of your matrix $X$ is about $10^6$. Double precision floating point calculations give about 18 figures of accuracy so, in dou |
53,163 | Correlation vs Chi Square | First, in your opening sentence, "affected by" should be "linearly related to". Two variables can be correlated and have not the slightest causal relationship, and correlation does not measure all relationships, just linear ones (either on the quantities themselves (Pearson) or their ranks (Spearman)).
So, correlation is about the linear relationship between two variables. Usually, both are continuous (or nearly so) but there are variations for the case where one is dichotomous.
Chi-square is usually about the independence of two variables. Usually, both are categorical. In your first link, the two variables are smoking and exercise and both are measured ordinally - not in terms of number of cigarettes or minutes of exercise, for example. (Incidentally, I would prefer using a test that captured the ordinal nature of the variables, I don't think this is the best example of chi-square).
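As a small side illustration of the two tools (hypothetical data, not the data from the linked pages): a Pearson correlation for two quantitative variables, and a chi-square test of independence for two categorical variables.
set.seed(1)
heart_rate     <- rnorm(100, mean = 70, sd = 8)
blood_pressure <- 80 + 0.5 * heart_rate + rnorm(100, sd = 5)
cor.test(heart_rate, blood_pressure)        # correlation between two quantitative variables
smoking  <- sample(c("none", "some", "heavy"), 100, replace = TRUE)
exercise <- sample(c("low", "high"), 100, replace = TRUE)
chisq.test(table(smoking, exercise))        # chi-square test of independence for two categorical variables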
Your second link is a fairly specialized use of chi-square - it looks like it's an attempt to find multivariate outliers by comparing Mahalanobis distance to what their distribution should be, in the absence of outliers. I would leave that aside while you learn the basics of chi-square. | Correlation vs Chi Square | First, in your opening sentence, "affected by" should be "linearly related to". Two variables can be correlated and have not the slightest causal relationship and correlation does not measure all rela | Correlation vs Chi Square
First, in your opening sentence, "affected by" should be "linearly related to". Two variables can be correlated and have not the slightest causal relationship, and correlation does not measure all relationships, just linear ones (either on the quantities themselves (Pearson) or their ranks (Spearman)).
So, correlation is about the linear relationship between two variables. Usually, both are continuous (or nearly so) but there are variations for the case where one is dichotomous.
Chi-square is usually about the independence of two variables. Usually, both are categorical. In your first link, the two variables are smoking and exercise and both are measured ordinally - not in terms of number of cigarettes or minutes of exercise, for example. (Incidentally, I would prefer using a test that captured the ordinal nature of the variables, I don't think this is the best example of chi-square).
Your second link is a fairly specialized use of chi-square - it looks like it's an attempt to find multivariate outliers by comparing Mahalanobis distance to what their distribution should be, in the absence of outliers. I would leave that aside while you learn the basics of chi-square. | Correlation vs Chi Square
First, in your opening sentence, "affected by" should be "linearly related to". Two variables can be correlated and have not the slightest causal relationship and correlation does not measure all rela |
53,164 | Correlation vs Chi Square | Generally, chi-square is a non-parametric test that is used to show association between two qualitative variables (like gender and eye color), while correlation (the Pearson coefficient) is used to test the relationship between two quantitative variables (like heart rate and blood pressure) | Correlation vs Chi Square | generally, Chi square is a non-parametric test that is used to show association between two qualitative variables (like gender and eye color) ; while correlation (Pearson coefficient) is used to test | Correlation vs Chi Square
Generally, chi-square is a non-parametric test that is used to show association between two qualitative variables (like gender and eye color), while correlation (the Pearson coefficient) is used to test the relationship between two quantitative variables (like heart rate and blood pressure) | Correlation vs Chi Square
generally, Chi square is a non-parametric test that is used to show association between two qualitative variables (like gender and eye color) ; while correlation (Pearson coefficient) is used to test |
53,165 | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | Consider the set of $x$, call it $S$, where $x<a$. You seek the probability $P(S)$. Any expression that leads to $S$ produces the exact same probability, $b$, irrespective of its declaration. If $f(x)$ is a monotonic (strictly) increasing function, $x<a$ directly implies $f(x)<f(a)$ and vice versa, i.e. if $f(x)<f(a)$, then $x<a$. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | Consider the set of $x$, call it $S$, where $x<a$. You seek for the probability, $P(S)$. Any expression that lead to $S$ produces the exact same probability, $b$, irrespective of its decleration. If $ | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
Consider the set of $x$, call it $S$, where $x<a$. You seek the probability $P(S)$. Any expression that leads to $S$ produces the exact same probability, $b$, irrespective of its declaration. If $f(x)$ is a monotonic (strictly) increasing function, $x<a$ directly implies $f(x)<f(a)$ and vice versa, i.e. if $f(x)<f(a)$, then $x<a$. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
Consider the set of $x$, call it $S$, where $x<a$. You seek for the probability, $P(S)$. Any expression that lead to $S$ produces the exact same probability, $b$, irrespective of its decleration. If $ |
53,166 | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | If $f$ is strictly increasing then you have:
$$\begin{equation} \begin{aligned}
\{ X < a \}
&= \{ \omega \in \Omega | X(\omega) < a \} \\[6pt]
&= \{ \omega \in \Omega | f(X(\omega)) < f(a) \} \\[6pt]
&= \{ \omega \in \Omega | f(X)(\omega) < f(a) \} \\[6pt]
&= \{ f(X) < f(a) \}, \\[6pt]
\end{aligned} \end{equation}$$
which means that $\mathbb{P}(X<a) = \mathbb{P}(f(X)<f(a))$. If $f$ is only non-decreasing then you cannot derive this result but you can derive an analogous result with non-strict inequality. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | If $f$ is strictly increasing then you have:
$$\begin{equation} \begin{aligned}
\{ X < a \}
&= \{ \omega \in \Omega | X(\omega) < a \} \\[6pt]
&= \{ \omega \in \Omega | f(X(\omega)) < f(a) \} \\[6pt] | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
If $f$ is strictly increasing then you have:
$$\begin{equation} \begin{aligned}
\{ X < a \}
&= \{ \omega \in \Omega | X(\omega) < a \} \\[6pt]
&= \{ \omega \in \Omega | f(X(\omega)) < f(a) \} \\[6pt]
&= \{ \omega \in \Omega | f(X)(\omega) < f(a) \} \\[6pt]
&= \{ f(X) < f(a) \}, \\[6pt]
\end{aligned} \end{equation}$$
which means that $\mathbb{P}(X<a) = \mathbb{P}(f(X)<f(a))$. If $f$ is only non-decreasing then you cannot derive this result but you can derive an analogous result with non-strict inequality. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
If $f$ is strictly increasing then you have:
$$\begin{equation} \begin{aligned}
\{ X < a \}
&= \{ \omega \in \Omega | X(\omega) < a \} \\[6pt]
&= \{ \omega \in \Omega | f(X(\omega)) < f(a) \} \\[6pt] |
53,167 | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | No.
Assuming that you call monotonically increasing a function that is non-decreasing (for all $x$ and $y$ such that $x\leq y$, one has $f ( x ) \leq f ( y )$), consider $X$ following a uniform distribution on $[0, 1]$, $f=0$ and $a=1$.
Then, $P(X<a) = P(X<1) = 1 \neq 0 = P(0<0) = P(f(X) < f(a))$.
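A quick simulation sketch of both cases (my own toy example, not part of the original answer):
set.seed(1)
x <- runif(1e5)                      # X ~ U(0, 1)
a <- 0.3
mean(x < a); mean(exp(x) < exp(a))   # strictly increasing f: the two proportions are identical, about 0.3
f0 <- function(z) 0 * z              # non-decreasing but constant, as in the counterexample above
mean(x < 1); mean(f0(x) < f0(1))     # 1 versus 0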
Your assumption is true for strictly increasing functions. If $f$ is strictly increasing, $\{X<a\} = \{f(X)<f(a)\}$. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$? | No.
Assuming that you call monotonically increasing a function that is non-decreasing (for all $x$ and $y$ such that $x\leq y$, one has $f ( x ) \leq f ( y )$), consider $X$ following a uniform distr | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
No.
Assuming that you call monotonically increasing a function that is non-decreasing (for all $x$ and $y$ such that $x\leq y$, one has $f ( x ) \leq f ( y )$), consider $X$ following a uniform distribution on $[0, 1]$, $f=0$ and $a=1$.
Then, $P(X<a) = P(X<1) = 1 \neq 0 = P(0<0) = P(f(X) < f(a))$.
Your assumption is true for strictly increasing functions. If $f$ is strictly increasing, $\{X<a\} = \{f(X)<f(a)\}$. | Does $\mathbb{P}(X < a) = \mathbb{P}(f(X) < f(a))$?
No.
Assuming that you call monotonically increasing a function that is non-decreasing (for all $x$ and $y$ such that $x\leq y$, one has $f ( x ) \leq f ( y )$), consider $X$ following a uniform distr |
53,168 | Binomial Regression "logit" vs "cloglog" | If you fit two models, one with logit and one with cloglog, you should report the results of both, and also carry out some type of model comparison technique and report the results of that.
As for the models, this is a great situation in which to use Bayesian multilevel models (see this paper [PDF] by Gelman). We can pool information among groups to inform the estimates for groups with smaller sample sizes and for groups with seemingly extreme outcomes (as in phth and pltl).
Fitting multilevel models with the brms package is fairly straightforward. The two models here, one with a logit link and one with a cloglog link, would be fit as follows:
fit1 <- brm(
response ~ (1 | treat.group),
family = bernoulli, # logit link is the default
data = df,
iter = 2e4,
warmup = 2e3,
control = list(adapt_delta = 0.999)
)
fit2 <- brm(
response ~ (1 | treat.group),
family = bernoulli(link = "cloglog"),
data = df,
iter = 2e4,
warmup = 2e3,
control = list(adapt_delta = 0.999)
)
These will each generate 72000 samples from the models, which can be accessed with as.data.frame(fit). After applying the inverse logit and inverse cloglog to the samples, we get the following estimates for the desired probabilities:
group prob using logit sd 89% probability interval
--------------------------------------------------------------
pctc 0.405 0.169 0.12 to 0.66
pcth 0.562 0.135 0.34 to 0.78
pctl 0.438 0.170 0.15 to 0.7
phtc 0.604 0.141 0.39 to 0.84
phth 0.729 0.162 0.52 to 1
phtl 0.529 0.142 0.3 to 0.76
pltc 0.532 0.157 0.28 to 0.79
plth 0.539 0.180 0.24 to 0.84
pltl 0.623 0.193 0.39 to 1
group prob using cloglog sd 89% probability interval
----------------------------------------------------------------
pctc 0.395 0.162 0.12 to 0.64
pcth 0.545 0.136 0.32 to 0.76
pctl 0.422 0.164 0.14 to 0.67
phtc 0.589 0.144 0.36 to 0.83
phth 0.760 0.182 0.52 to 1
phtl 0.511 0.142 0.28 to 0.74
pltc 0.512 0.158 0.25 to 0.77
plth 0.517 0.181 0.21 to 0.81
pltl 0.627 0.213 0.37 to 1
Overall, the model using the cloglog link induced less shrinkage in the probabilities. It's worth noting that the 89% probability intervals for phth are the same in the two models.
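For concreteness, the inverse-link step mentioned above can be sketched as follows; here eta is only a stand-in for posterior draws of one group's linear predictor, which in practice would be extracted from as.data.frame(fit1) or as.data.frame(fit2) (the exact draw names depend on the brms model):
inv_logit   <- function(eta) plogis(eta)          # inverse of the logit link
inv_cloglog <- function(eta) 1 - exp(-exp(eta))   # inverse of the cloglog link
eta <- rnorm(1000, mean = 0.5, sd = 0.6)          # illustrative stand-in draws only
c(logit = mean(inv_logit(eta)), cloglog = mean(inv_cloglog(eta)))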
The brms package supports model comparison using PSIS-LOO. Using the command
loo(fit1, fit2, reloo = TRUE)
takes a while but eventually returns the following result:
LOOIC SE
fit1 59.19 2.95
fit2 58.59 2.91
fit1 - fit2 0.60 0.46
Here the model with lower LOOIC is estimated to have better out-of-sample prediction accuracy. The model with the cloglog link has marginally lower LOOIC, and the standard error on the difference, 0.46, is small, so we would conclude that the model using the cloglog link is marginally better than the one using the logit link. | Binomial Regression "logit" vs "cloglog" | If you fit two models, one with logit and one with cloglog, you should report the results of both, and also carry out some type of model comparison technique and report the results of that.
As for the | Binomial Regression "logit" vs "cloglog"
If you fit two models, one with logit and one with cloglog, you should report the results of both, and also carry out some type of model comparison technique and report the results of that.
As for the models, this is a great situation in which to use Bayesian multilevel models (see this paper [PDF] by Gelman). We can pool information among groups to inform the estimates for groups with smaller sample sizes and for groups with seemingly extreme outcomes (as in phth and pltl).
Fitting multilevel models with the brms package is fairly straightforward. The two models here, one with a logit link and one with a cloglog link, would be fit as follows:
fit1 <- brm(
response ~ (1 | treat.group),
family = bernoulli, # logit link is the default
data = df,
iter = 2e4,
warmup = 2e3,
control = list(adapt_delta = 0.999)
)
fit2 <- brm(
response ~ (1 | treat.group),
family = bernoulli(link = "cloglog"),
data = df,
iter = 2e4,
warmup = 2e3,
control = list(adapt_delta = 0.999)
)
These will each generate 72000 samples from the models, which can be accessed with as.data.frame(fit). After applying the inverse logit and inverse cloglog to the samples, we get the following estimates for the desired probabilities:
group prob using logit sd 89% probability interval
--------------------------------------------------------------
pctc 0.405 0.169 0.12 to 0.66
pcth 0.562 0.135 0.34 to 0.78
pctl 0.438 0.170 0.15 to 0.7
phtc 0.604 0.141 0.39 to 0.84
phth 0.729 0.162 0.52 to 1
phtl 0.529 0.142 0.3 to 0.76
pltc 0.532 0.157 0.28 to 0.79
plth 0.539 0.180 0.24 to 0.84
pltl 0.623 0.193 0.39 to 1
group prob using cloglog sd 89% probability interval
----------------------------------------------------------------
pctc 0.395 0.162 0.12 to 0.64
pcth 0.545 0.136 0.32 to 0.76
pctl 0.422 0.164 0.14 to 0.67
phtc 0.589 0.144 0.36 to 0.83
phth 0.760 0.182 0.52 to 1
phtl 0.511 0.142 0.28 to 0.74
pltc 0.512 0.158 0.25 to 0.77
plth 0.517 0.181 0.21 to 0.81
pltl 0.627 0.213 0.37 to 1
Overall, the model using the cloglog link induced less shrinkage in the probabilities. It's worth noting that the 89% probability intervals for phth are the same in the two models.
The brms package supports model comparison using PSIS-LOO. Using the command
loo(fit1, fit2, reloo = TRUE)
takes a while but eventually returns the following result:
LOOIC SE
fit1 59.19 2.95
fit2 58.59 2.91
fit1 - fit2 0.60 0.46
Here the model with lower LOOIC is estimated to have better out-of-sample prediction accuracy. The model with the cloglog link has marginally lower LOOIC, and the standard error on the difference, 0.46, is small, so we would conclude that the model using the cloglog link is marginally better than the one using the logit link. | Binomial Regression "logit" vs "cloglog"
If you fit two models, one with logit and one with cloglog, you should report the results of both, and also carry out some type of model comparison technique and report the results of that.
As for the |
53,169 | Measure of smoothness | If this is an image then in my understanding a row should display a gradual change between the adjacent pixels. In such a case autocorrelation (the correlation of data with itself after a shift by one pixel) should work as a measure of smoothness.
However using your example I only get a slight increase in autocorrelation. Using R:
y1 <- c(118, 117, 118, 120, 80, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 80, 120, 121)
y2 <- c(118, 117, 118, 120, 118, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 119, 120, 121)
acf(y1, plot=FALSE, lag.max=1)
# Autocorrelations of series ‘y1’, by lag
0 1
1.000 -0.095
acf(y2, plot=FALSE, lag.max=1)
# Autocorrelations of series ‘y2’, by lag
0 1
1.000 0.086
This might happen if there is not much going on in the row you selected. i.e. it only has shades of the same color. Or if the drawing on the picture has very thin edges so that the contours of an object are only one pixel in width. In this case shifting the row by one pixel would dislocate the edges. | Measure of smoothness | If this is an image then in my understanding a row should display a gradual change between the adjacent pixels. In such a case autocorrelation (the correlation of data with itself after a shift by one | Measure of smoothness
If this is an image then in my understanding a row should display a gradual change between the adjacent pixels. In such a case autocorrelation (the correlation of data with itself after a shift by one pixel) should work as a measure of smoothness.
However using your example I only get a slight increase in autocorrelation. Using R:
y1 <- c(118, 117, 118, 120, 80, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 80, 120, 121)
y2 <- c(118, 117, 118, 120, 118, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 119, 120, 121)
acf(y1, plot=FALSE, lag.max=1)
# Autocorrelations of series ‘y1’, by lag
0 1
1.000 -0.095
acf(y2, plot=FALSE, lag.max=1)
# Autocorrelations of series ‘y2’, by lag
0 1
1.000 0.086
This might happen if there is not much going on in the row you selected. i.e. it only has shades of the same color. Or if the drawing on the picture has very thin edges so that the contours of an object are only one pixel in width. In this case shifting the row by one pixel would dislocate the edges. | Measure of smoothness
If this is an image then in my understanding a row should display a gradual change between the adjacent pixels. In such a case autocorrelation (the correlation of data with itself after a shift by one |
53,170 | Measure of smoothness | The conventional smoothness measures are based on derivatives, and are sometimes called "roughness." For instance, in smoothing splines there's a roughness penalty, which you minimize to get the smooth curve. In the case of splines you want a continuous first derivative; therefore, the roughness is based on the integral of the square of the second derivative:
$$\int [f''(x)]^2dx $$
In your case, you could use the second difference which corresponds to the second derivative:
$$D2=\sum_i [x_i-2x_{i-1}+x_{i-2}]^2/4$$
Demo
Here's the demo on your data set.
For your two series this measure D2 is:
The levels plot:
Here's the second difference plot:
We see the first row is much rougher.
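A minimal R sketch of the D2 computation behind this demo, assuming the two rows are the y1 and y2 vectors used in the autocorrelation answer above:
D2 <- function(x) sum(diff(x, differences = 2)^2) / 4   # squared second differences
y1 <- c(118, 117, 118, 120, 80, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 80, 120, 121)
y2 <- c(118, 117, 118, 120, 118, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 119, 120, 121)
c(D2(y1), D2(y2))   # the row with the two spikes comes out far rougher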
Why not D1?
Let's see why for smoothness (roughness) the second derivative is generally more appropriate.
Consider these two series:
The sums of squares of first differences are the same: 18
The sums of squares of second differences are: 0 and 72, which represents the intuitive and visible roughness very well.
Here's the plot of first difference:
And here's the plot of the second difference:
Conclusion
You can go for higher differences, e.g. the third difference, but going after the first difference is not helpful for measuring smoothness. The reason is that our intuitive understanding of smoothness is compatible with any first derivative! Any series with a trend will have a nonzero first derivative, and it is not necessarily a rough series. The series are rough when the first derivative starts jumping around, which is when the higher derivatives are high. | Measure of smoothness | The conventional smoothness measures are based on derivatives, and are sometimes called "roughness." For instance, in smoothing splines there's a roughness penalty, which you minimize to get the smoot | Measure of smoothness
The conventional smoothness measures are based on derivatives, and are sometimes called "roughness." For instance, in smoothing splines there's a roughness penalty, which you minimize to get the smooth curve. In the case of splines you want a continuous first derivative; therefore, the roughness is based on the integral of the square of the second derivative:
$$\int [f''(x)]^2dx $$
In your case, you could use the second difference which corresponds to the second derivative:
$$D2=\sum_i [x_i-2x_{i-1}+x_{i-2}]^2/4$$
Demo
Here's the demo on your data set.
For your two series this measure D2 is:
The levels plot:
Here's the second difference plot:
We see the first row is much rougher.
Why not D1?
Let's see why for smoothness (roughness) the second derivative is generally more appropriate.
Consider these two series:
The sums of squares of first differences are the same: 18
The sums of squares of second differences are: 0 and 72, which represents the intuitive and visible roughness very well.
Here's the plot of first difference:
And here's the plot of the second difference:
Conclusion
You can go for higher differences, e.g. the third difference, but going after the first difference is not helpful for measuring smoothness. The reason is that our intuitive understanding of smoothness is compatible with any first derivative! Any series with a trend will have a nonzero first derivative, and it is not necessarily a rough series. The series are rough when the first derivative starts jumping around, which is when the higher derivatives are high. | Measure of smoothness
The conventional smoothness measures are based on derivatives, and are sometimes called "roughness." For instance, in smoothing splines there's a roughness penalty, which you minimize to get the smoot |
53,171 | Measure of smoothness | One way to measure non-smoothness is to first smooth the data, subtract it away, and compute some measure of how large the residuals are (e.g. the sum of squares of all residuals). For instance, you can apply a Laplacian filter and compute the sum of squares of the residuals for both images and compare. | Measure of smoothness | One way to measure non-smoothness is to first smooth the data, subtract it away and compute some measure of how much residuals do you have (i.e. sum squares of all residuals). I.e. you can apply a La | Measure of smoothness
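A rough sketch of this idea for a single row of pixel values (my own construction, not the answerer's code), using a discrete 1-D Laplacian kernel and, as an assumption, the same two example rows as in the earlier answers:
lap_roughness <- function(x) {
  lap <- stats::filter(x, c(1, -2, 1), sides = 2)   # x[i-1] - 2*x[i] + x[i+1]
  sum(lap^2, na.rm = TRUE)                          # the two end points are NA and dropped
}
y1 <- c(118, 117, 118, 120, 80, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 80, 120, 121)
y2 <- c(118, 117, 118, 120, 118, 117, 118, 120, 118, 119, 121, 119, 121, 118, 121, 120, 118, 119, 120, 121)
c(lap_roughness(y1), lap_roughness(y2))             # the spiky row leaves much larger residuals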
One way to measure non-smoothness is to first smooth the data, subtract it away, and compute some measure of how large the residuals are (e.g. the sum of squares of all residuals). For instance, you can apply a Laplacian filter and compute the sum of squares of the residuals for both images and compare. | Measure of smoothness
One way to measure non-smoothness is to first smooth the data, subtract it away and compute some measure of how much residuals do you have (i.e. sum squares of all residuals). I.e. you can apply a La |
53,172 | Measure of smoothness | I totally agree with @Tim's comment about variance, but I felt motivated to go a step further, as is my wont. I took the 20 values and mused as to what AUTOBOX would do with these values, essentially revisiting this post: How to calculate the standard average of a set excluding outliers?
AUTOBOX delivered the following adjustments to cleanse the data, using procedures developed here: http://docplayer.net/12080848-Outliers-level-shifts-and-variance-changes-in-time-series.html . It additionally identified an unusual value at period 4 and a persistent level/step shift starting at period 8.
The plot of the Actual and Cleansed Data is educational as to what the human eye sees and what it doesn't see.
What we miss initially is the subtle but significant anomaly at period 4 and the persistent level shift at period 8 BECAUSE we are focused on the overwhelming pulse impacts at periods 5 and 18.
Going one step further (always dangerous with small samples, but not necessarily so when there is strong signal), the model's residuals suggest a constant/persistent blurring (increased error variance) from period 7 to 20.
The question I really answered is "Is there a better process?" in terms of making the data smoother, i.e. less affected by blurring. Or is it possible to further reduce the variance (non-systematic behavior)? | Measure of smoothness | I totally agree with @Tim's comment about variance but I felt motivated to go a step further, as is my want. I took the 20 values and mused as to what AUTOBOX would do with these values essentially | Measure of smoothness
I totally agree with @Tim's comment about variance, but I felt motivated to go a step further, as is my wont. I took the 20 values and mused as to what AUTOBOX would do with these values, essentially revisiting this post: How to calculate the standard average of a set excluding outliers?
AUTOBOX delivered the following adjustments to cleanse the data, using procedures developed here: http://docplayer.net/12080848-Outliers-level-shifts-and-variance-changes-in-time-series.html . It additionally identified an unusual value at period 4 and a persistent level/step shift starting at period 8.
The plot of the Actual and Cleansed Data is educational as to what the human eye sees and what it doesn't see.
What we miss initially is the subtle but significant anomaly at period 4 and the persistent level shift at period 8 BECAUSE we are focused on the overwhelming pulse impacts at periods 5 and 18.
Going one step further (always dangerous with small samples, but not necessarily so when there is strong signal), the model's residuals suggest a constant/persistent blurring (increased error variance) from period 7 to 20.
The question I really answered is "Is there a better process?" in terms of making the data smoother, i.e. less affected by blurring. Or is it possible to further reduce the variance (non-systematic behavior)? | Measure of smoothness
I totally agree with @Tim's comment about variance but I felt motivated to go a step further, as is my want. I took the 20 values and mused as to what AUTOBOX would do with these values essentially |
53,173 | Is Granger causality still relevant? | If you mean econometric theory, I would not be surprised to learn that most of the relevant and needed theoretical results related to the notion of Granger causality have already been derived. After all, the notion of Granger causality is nearly 50 years old. Hence, I do not expect to see much research on the theoretical aspects of Granger causality any more.
If you mean applied econometrics, it seems to me Granger causality is still relevant.
For example, a popular topic in finance is modelling volatility and market interconnectedness. Granger causality is widely used for examining whether volatility spills over from one market to another (not giving concrete references here, but there are plenty of hits from joint search of "volatility spillover" and "Granger causality", see here; I personally have read a few random papers employing Granger causality for analyzing volatility spillovers).
If the volatility in market $j$ can be predicted using the past information on market $j$ alone, there is little reason to think it is being transmitted from some other market $i$ to market $j$. However, if the volatility in market $j$ is predictable by past information on market $i$, once past information on market $j$ has been accounted for, which means there is Granger causality, it might be thought that volatility is transmitted from market $i$ to market $j$.
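As a sketch of how such a test is typically run (my own simulated example, not from the original post), lmtest::grangertest can be applied to two series in which market $j$'s volatility really does depend on lagged volatility of market $i$:
library(lmtest)
set.seed(42)
n <- 500
vol_i <- as.numeric(arima.sim(list(ar = 0.5), n = n))
vol_j <- numeric(n)
for (t in 2:n) vol_j[t] <- 0.4 * vol_j[t - 1] + 0.3 * vol_i[t - 1] + rnorm(1, sd = 0.5)
grangertest(vol_j ~ vol_i, order = 2)   # H0: lags of vol_i add nothing beyond vol_j's own lags
grangertest(vol_i ~ vol_j, order = 2)   # the reverse direction, expected to be non-significant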
If you add an assumption that this relationship is causal, you have a causal explanation of volatility transmission. As Judea Pearl notes in "Statistics and causal inference: A review" (2003),
Every claim invoking causal concepts must be traced to some premises that invoke such concepts; it cannot be derived or inferred from statistical associations alone
Hence, adding a causal premise to the statistical machinery of Granger causality seems fine to me.
Another example of a popular use of Granger causality is the analysis of lead-lag relationships between markets, see some search results here.
Regarding the birthday cake example, it is false, as are other similar ones I have encountered. The cake has absolutely no predictive power beyond the history of your birthdays: past birthdays contain all possibly relevant information for predicting future birthdays.
Side note: Google Scholar finds 1290 articles from 2018 alone citing Granger's original paper from 1969. This is however not confined to econometrics journals alone. | Is Granger causality still relevant? | If you mean econometric theory, I would not be surprised to learn that most of the relevant and needed theoretical results related to the notion of Granger causality have already been derived. After a | Is Granger causality still relevant?
If you mean econometric theory, I would not be surprised to learn that most of the relevant and needed theoretical results related to the notion of Granger causality have already been derived. After all, the notion of Granger causality is nearly 50 years old. Hence, I do not expect to see much research on the theoretical aspects of Granger causality any more.
If you mean applied econometrics, it seems to me Granger causality is still relevant.
For example, a popular topic in finance is modelling volatility and market interconnectedness. Granger causality is widely used for examining whether volatility spills over from one market to another (not giving concrete references here, but there are plenty of hits from joint search of "volatility spillover" and "Granger causality", see here; I personally have read a few random papers employing Granger causality for analyzing volatility spillovers).
If the volatility in market $j$ can be predicted using the past information on market $j$ alone, there is little reason to think it is being transmitted from some other market $i$ to market $j$. However, if the volatility in market $j$ is predictable by past information on market $i$, once past information on market $j$ has been accounted for, which means there is Granger causality, it might be thought that volatility is transmitted from market $i$ to market $j$.
If you add an assumption that this relationship is causal, you have a causal explanation of volatility transmission. As Judea Pearl notes in "Statistics and causal inference: A review" (2003),
Every claim invoking causal concepts must be traced to some premises that invoke such concepts; it cannot be derived or inferred from statistical associations alone
Hence, adding a causal premise to the statistical machinery of Granger causality seems fine to me.
Another example of a popular use of Granger causality is the analysis of lead-lag relationships between markets, see some search results here.
Regarding the birthday cake example, it is false, as are other similar ones I have encountered. The cake has absolutely no predictive power beyond the history of your birthdays: past birthdays contain all possibly relevant information for predicting future birthdays.
Side note: Google Scholar finds 1290 articles from 2018 alone citing Granger's original paper from 1969. This is however not confined to econometrics journals alone. | Is Granger causality still relevant?
If you mean econometric theory, I would not be surprised to learn that most of the relevant and needed theoretical results related to the notion of Granger causality have already been derived. After a |
53,174 | Is Granger causality still relevant? | "Granger causality" has nothing to do with causality. It is just that the late Sir Clive Granger was a master of marketing his tremendously important work by tremendously catchy names.
We should remember that prof. Granger was heavy into forecasting. And in forecasting, if I can forecast your birthday through your (or my) mom, I am perfectly fine. Meaning, that in forecasting we really don't give a dime or a damn about causality under the proper meaning of the word.
When $X$ "Granger causes" $Y$, it just means that forecasts of $Y$ are better if we incorporate $X$ in the forecast function, than if we don't, usually in some reduced-variance sense. That's all.
Now this is, or should be, very well known in scholarly circles. So anyone mocking the concept, while strictly speaking justified because it indeed abuses the word "causality", proves themselves to be below par by selecting a non-target to mock. It is like seeing an actor pretending to be the Pope, and mocking him because he is not really the Pope (while the important issue is how convincingly he imitates the Pope). | Is Granger causality still relevant? | "Granger causality" has nothing to do with causality. It is just that the late Sir Clive Granger was a master of marketing his tremendously important work by tremendously catchy names.
We should rem | Is Granger causality still relevant?
"Granger causality" has nothing to do with causality. It is just that the late Sir Clive Granger was a master of marketing his tremendously important work by tremendously catchy names.
We should remember that prof. Granger was heavy into forecasting. And in forecasting, if I can forecast your birthday through your (or my) mom, I am perfectly fine. Meaning, that in forecasting we really don't give a dime or a damn about causality under the proper meaning of the word.
When $X$ "Granger causes" $Y$, it just means that forecasts of $Y$ are better if we incorporate $X$ in the forecast function, than if we don't, usually in some reduced-variance sense. That's all.
Now this is, or should be, very well known in scholarly circles. So anyone mocking the concept, while strictly speaking justified because it indeed abuses the word "causality", proves themselves to be below par by selecting a non-target to mock. It is like seeing an actor pretending to be the Pope, and mocking him because he is not really the Pope (while the important issue is how convincingly he imitates the Pope). | Is Granger causality still relevant?
"Granger causality" has nothing to do with causality. It is just that the late Sir Clive Granger was a master of marketing his tremendously important work by tremendously catchy names.
We should rem |
53,175 | L1 and L2 penalty vs L1 and L2 norms | Norm in mathematics is some function that measures "length" or "size" of a vector. Among the popular norms, there are $\ell_1$, $\ell_2$ and $\ell_p$ norms defined as
$$\begin{align}
\|\boldsymbol{x}\|_1 &= \sum_i | x_i | \\
\| \boldsymbol{x}\|_2 &= \sqrt{ \sum_i |x_i|^2 } \\
\| \boldsymbol{x}\|_p &= \left( \sum_i | x_i |^p \right)^{1/p}
\end{align}$$
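For a concrete feel, the same three norms for a small vector in base R:
x <- c(3, -4, 0, 1)
sum(abs(x))              # L1 norm: 8
sqrt(sum(x^2))           # L2 norm: sqrt(26), about 5.10
p <- 3
sum(abs(x)^p)^(1 / p)    # Lp norm for p = 3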
In machine learning, we often want to predict target values $y$ using function $f$ of features $\mathbf{x}$ parametrized by a vector of parameters $\boldsymbol{\theta}$. To achieve this, we minimize the loss function $\mathcal{L}$. We sometimes want to penalize the parameters, by forcing them to have small values. The rationale for using regularization is described, for example here, here, or here. One of the ways of achieving this, is by adding the regularization terms, e.g. $\ell_2$ norm (often used squared, as below) of the vector of weights, and minimizing the whole thing
$$
\underset{\boldsymbol{\theta}}{\operatorname{arg\,min}} \; \mathcal{L}\big(y, \,f(\mathbf{x}; \boldsymbol{\theta}) \big) + \lambda\, \|\boldsymbol{\theta}\|_2^2
$$
where $\lambda\ge0$ is a hyperparameter. So basically, we use the norms here to measure the "size" of the model weights. By adding the size of the weights to the loss function, we force the minimization algorithm to seek a solution that, along with minimizing the loss function, also keeps the "size" of the weights small. The $\lambda$ hyperparameter lets you control how large an effect this should have on the optimization algorithm.
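A toy example of this penalised objective in practice (my own simulated data, not from the question), using glmnet, where alpha = 0 gives the squared $\ell_2$ (ridge) penalty and alpha = 1 gives the $\ell_1$ (lasso) penalty:
library(glmnet)
set.seed(1)
X <- matrix(rnorm(100 * 10), 100, 10)
y <- X[, 1] - 2 * X[, 2] + rnorm(100)
ridge <- glmnet(X, y, alpha = 0, lambda = 0.5)
lasso <- glmnet(X, y, alpha = 1, lambda = 0.5)
cbind(ridge = as.numeric(coef(ridge)), lasso = as.numeric(coef(lasso)))
# the L1 penalty sets some coefficients exactly to zero, while the L2 penalty only shrinks them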
Indeed, using the $\ell_2$ penalty may be seen as equivalent to using Gaussian priors for the parameters, while using the $\ell_1$ norm would be equivalent to using Laplace priors (but in practice, you need much stronger priors; check e.g. the paper Shrinkage priors for Bayesian penalized regression by van Erp et al).
For more details check e.g. Why L1 norm for sparse models, Why does the Lasso provide Variable Selection?, or When should I use lasso vs ridge? threads. | L1 and L2 penalty vs L1 and L2 norms | Norm in mathematics is some function that measures "length" or "size" of a vector. Among the popular norms, there are $\ell_1$, $\ell_2$ and $\ell_p$ norms defined as
$$\begin{align}
\|\boldsymbol{x}\ | L1 and L2 penalty vs L1 and L2 norms
Norm in mathematics is some function that measures "length" or "size" of a vector. Among the popular norms, there are $\ell_1$, $\ell_2$ and $\ell_p$ norms defined as
$$\begin{align}
\|\boldsymbol{x}\|_1 &= \sum_i | x_i | \\
\| \boldsymbol{x}\|_2 &= \sqrt{ \sum_i |x_i|^2 } \\
\| \boldsymbol{x}\|_p &= \left( \sum_i | x_i |^p \right)^{1/p}
\end{align}$$
In machine learning, we often want to predict target values $y$ using function $f$ of features $\mathbf{x}$ parametrized by a vector of parameters $\boldsymbol{\theta}$. To achieve this, we minimize the loss function $\mathcal{L}$. We sometimes want to penalize the parameters, by forcing them to have small values. The rationale for using regularization is described, for example here, here, or here. One of the ways of achieving this, is by adding the regularization terms, e.g. $\ell_2$ norm (often used squared, as below) of the vector of weights, and minimizing the whole thing
$$
\underset{\boldsymbol{\theta}}{\operatorname{arg\,min}} \; \mathcal{L}\big(y, \,f(\mathbf{x}; \boldsymbol{\theta}) \big) + \lambda\, \|\boldsymbol{\theta}\|_2^2
$$
where $\lambda\ge0$ is a hyperparameter. So basically, we use the norms here to measure the "size" of the model weights. By adding the size of the weights to the loss function, we force the minimization algorithm to seek a solution that, along with minimizing the loss function, also keeps the "size" of the weights small. The $\lambda$ hyperparameter lets you control how large an effect this should have on the optimization algorithm.
Indeed, using the $\ell_2$ penalty may be seen as equivalent to using Gaussian priors for the parameters, while using the $\ell_1$ norm would be equivalent to using Laplace priors (but in practice, you need much stronger priors; check e.g. the paper Shrinkage priors for Bayesian penalized regression by van Erp et al).
For more details check e.g. Why L1 norm for sparse models, Why does the Lasso provide Variable Selection?, or When should I use lasso vs ridge? threads. | L1 and L2 penalty vs L1 and L2 norms
Norm in mathematics is some function that measures "length" or "size" of a vector. Among the popular norms, there are $\ell_1$, $\ell_2$ and $\ell_p$ norms defined as
$$\begin{align}
\|\boldsymbol{x}\ |
53,176 | SE for estimated marginal means | The following code illustrates how this computation is done:
grid <- with(data, expand.grid(f1 = levels(f1), f2 = levels(f2)))  # reference grid: all f1 x f2 cells
X <- model.matrix(~ f1 * f2, data = grid)                          # design matrix for those cells
V <- vcov(m)                                                       # covariance matrix of the fixed effects
betas <- fixef(m)                                                  # fixed-effect estimates
grid$emmean <- c(X %*% betas)                                      # estimated marginal means
grid$SE <- sqrt(diag(X %*% V %*% t(X)))                            # their standard errors
grid | SE for estimated marginal means | The following code illustrates how this computation is done:
grid <- with(data, expand.grid(f1 = levels(f1), f2 = levels(f2)))
X <- model.matrix(~ f1 * f2, data = grid)
V <- vcov(m)
betas <- fixef(m) | SE for estimated marginal means
The following code illustrates how this computation is done:
grid <- with(data, expand.grid(f1 = levels(f1), f2 = levels(f2)))
X <- model.matrix(~ f1 * f2, data = grid)
V <- vcov(m)
betas <- fixef(m)
grid$emmean <- c(X %*% betas)
grid$SE <- sqrt(diag(X %*% V %*% t(X)))
grid | SE for estimated marginal means
The following code illustrates how this computation is done:
grid <- with(data, expand.grid(f1 = levels(f1), f2 = levels(f2)))
X <- model.matrix(~ f1 * f2, data = grid)
V <- vcov(m)
betas <- fixef(m) |
53,177 | SE for estimated marginal means | Look at the model summary:
> m
Linear mixed model fit by REML ['lmerMod']
Formula: dep ~ f1 * f2 + (1 | sub)
Data: data
REML criterion at convergence: 371.7578
Random effects:
Groups Name Std.Dev.
sub (Intercept) 1.222
Residual 4.768
Number of obs: 64, groups: sub, 8
Fixed Effects:
(Intercept) f1Male f2day2 f1Male:f2day2
3.9375 3.1250 0.1250 0.0625
This gives estimates of the subject SD (1.222) and the error SD (4.768). This is a balanced experiment, and each mean consists of 16 observations. However, there are only 8 different subject effects. The random component of each cell mean includes the average of all 8 subject effects, and the average of 16 of the residual effects. So its SD is estimated as:
> sqrt(1.222^2 / 8 + 4.768^2 / 16)
[1] 1.267882
This is the same (except for slight error due to roundoff) as the SE displayed in the emmeans results:
> emmeans(m, ~ f1*f2)
f1 f2 emmean SE df lower.CL upper.CL
Female day1 3.9375 1.267923 40.78 1.376463 6.498537
Male day1 7.0625 1.267923 40.78 4.501463 9.623537
Female day2 4.0625 1.267923 40.78 1.501463 6.623537
Male day2 7.2500 1.267923 40.78 4.688963 9.81103 | SE for estimated marginal means | Look at the model summary:
> m
Linear mixed model fit by REML ['lmerMod']
Formula: dep ~ f1 * f2 + (1 | sub)
Data: data
REML criterion at convergence: 371.7578
Random effects:
Groups Name | SE for estimated marginal means
Look at the model summary:
> m
Linear mixed model fit by REML ['lmerMod']
Formula: dep ~ f1 * f2 + (1 | sub)
Data: data
REML criterion at convergence: 371.7578
Random effects:
Groups Name Std.Dev.
sub (Intercept) 1.222
Residual 4.768
Number of obs: 64, groups: sub, 8
Fixed Effects:
(Intercept) f1Male f2day2 f1Male:f2day2
3.9375 3.1250 0.1250 0.0625
This gives estimates of the subject SD (1.222) and the error SD (4.768). This is a balanced experiment, and each mean consists of 16 observations. However, there are only 8 different subject effects. The random component of each cell mean includes the average of all 8 subject effects, and the average of 16 of the residual effects. So its SD is estimated as:
> sqrt(1.222^2 / 8 + 4.768^2 / 16)
[1] 1.267882
This is the same (except for slight error due to roundoff) as the SE displayed in the emmeans results:
> emmeans(m, ~ f1*f2)
f1 f2 emmean SE df lower.CL upper.CL
Female day1 3.9375 1.267923 40.78 1.376463 6.498537
Male day1 7.0625 1.267923 40.78 4.501463 9.623537
Female day2 4.0625 1.267923 40.78 1.501463 6.623537
Male day2 7.2500 1.267923 40.78 4.688963 9.81103 | SE for estimated marginal means
Look at the model summary:
> m
Linear mixed model fit by REML ['lmerMod']
Formula: dep ~ f1 * f2 + (1 | sub)
Data: data
REML criterion at convergence: 371.7578
Random effects:
Groups Name |
53,178 | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$ | To implicitly condition on a variable simply means that we condition on it, but we do not state it as a conditioning variable in the probability statement (i.e., the conditioning is implicit, not explicit). This is usually done for reasons of brevity, particularly if you are making a whole lot of statements where you are always conditioning on $x$, and it would be more succinct just to omit this from your notation entirely, to avoid burdening the reader.
So what Gelman et al are saying is that they will continue to use notation like $p(\theta|y)$ and $p(\tilde{y}|y)$ that does not have $x$ stated as a conditioning variable, but their working can also be interpreted as if $x$ was an implicit conditioning variable in every statement. So when they use these functions in statements, they are really referring to $p(\theta|y,x)$ and $p(\tilde{y}|y,x)$ respectively.
In regard to this issue, it is also worth noting that many theories of probability regard all probability to be conditional on implicit information. This idea is most famously associated with the axiomatic approach of the mathematician Alfréd Rényi (see e.g., Kaminski 1984). Rényi argued that every probability measure must be interpreted as being conditional on some underlying information, and that reference to marginal probabilities was merely a reference to probability where the underlying conditions are implicit. | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$ | To implicitly condition on a variable simply means that we condition on it, but we do not state it as a conditioning variable in the probability statement (i.e., the conditioning is implicit, not expl | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$
To implicitly condition on a variable simply means that we condition on it, but we do not state it as a conditioning variable in the probability statement (i.e., the conditioning is implicit, not explicit). This is usually done for reasons of brevity, particularly if you are making a whole lot of statements where you are always conditioning on $x$, and it would be more succinct just to omit this from your notation entirely, to avoid burdening the reader.
So what Gelman et al are saying is that they will continue to use notation like $p(\theta|y)$ and $p(\tilde{y}|y)$ that does not have $x$ stated as a conditioning variable, but their working can also be interpreted as if $x$ was an implicit conditioning variable in every statement. So when they use these functions in statements, they are really referring to $p(\theta|y,x)$ and $p(\tilde{y}|y,x)$ respectively.
In regard to this issue, it is also worth noting that many theories of probability regard all probability to be conditional on implicit information. This idea is most famously associated with the axiomatic approach of the mathematician Alfréd Rényi (see e.g., Kaminski 1984). Rényi argued that every probability measure must be interpreted as being conditional on some underlying information, and that reference to marginal probabilities was merely a reference to probability where the underlying conditions are implicit. | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$
To implicitly condition on a variable simply means that we condition on it, but we do not state it as a conditioning variable in the probability statement (i.e., the conditioning is implicit, not expl |
53,179 | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$ | In addition to the other excellent answer, here I will try to make a more explicit argument. Making the argument explicit is helpful in understanding its underlying assumptions, so we can judge when to use the argument, and when to avoid it. This will be a bayesian version of the argument made in What is the difference between conditioning on regressors vs. treating them as fixed?, and I will use notation from there.
So assume we are interested in some regression-like model for random vector $(X, Y)$, with joint density $f(y,x \mid \theta,\psi)$ which can be factored as
$$
f_\theta(y\mid x)\cdot f_\psi(x)
$$
where $\theta$ is an unknown parameter in the conditional distribution of $Y$ given $X$ (the regression model), while $\psi$ is an unknown parameter in the marginal distribution of $X$. We assume the focus of interest is in the regression relationship, so $\theta$ is the focus or interest parameter, while $\psi$ is an incidental parameter.
If now the prior distribution factorizes in the same way, that is
$$ \pi(\theta,\psi) = \pi_1(\theta)\cdot \pi_2(\psi) $$ then after some manipulation we find that
$$
\pi(\theta,\psi \mid y,x) = \pi_1(\theta \mid y,x)\cdot \pi_2(\psi\mid x) $$ where
$$
\pi_1(\theta\mid y,x)=\frac{f_\theta(y\mid x) \pi_1(\theta)}{\int f_\theta(y\mid x) \pi_1(\theta)\; d\theta} \\
\pi_2(\psi \mid x) = \frac{f_\psi(x) \pi_2(\psi)}{\int f_\psi(x) \pi_2(\psi)\; d\psi}
$$
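To spell out the "manipulation" step: by Bayes' theorem and the two assumed factorizations, up to a normalizing constant in $(\theta, \psi)$,
$$ \pi(\theta,\psi \mid y,x) \propto f_\theta(y\mid x)\, f_\psi(x)\, \pi_1(\theta)\, \pi_2(\psi) = \big[ f_\theta(y\mid x)\, \pi_1(\theta) \big] \big[ f_\psi(x)\, \pi_2(\psi) \big], $$
and normalizing each bracket separately gives exactly the factorized posterior stated above.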
So, under our assumptions, the posterior distribution factors in the same way as the prior, and so if our only interest is in the regression relationship (thus in $\theta$), we do not need to model $f_\psi(x)$ at all, so can condition on $x$.
This framework also makes it easy to see when such conditioning is problematic; an obvious example is when we include lagged responses as predictors. Another case is omitted variables: in a regression model, omitted variables will implicitly be part of the error term, and so if an omitted variable is correlated with other predictors, that induces correlations between $X$ and the error term in the regression, destroying the factorization. | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$ | In addition to the other excellent answer, here I will try to make a more explicit argument. Making the argument explicit is helpful in understanding its underlying assumptions, so we can judge when t | Bayesian statistical conclusions: we implicitly condition on the known values of any covariates, $x$
53,180 | The Universal Approximation Theorem vs. The No Free Lunch Theorem: What's the caveat? | The UAT says that a suitable neural network can approximate any continuous function arbitrarily well, essentially taking an input, passing it through a network of particular weights, and mapping it to an output.
That says nothing about how the particular weights on that network are determined, which is what NFL deals with. Since the neural net can represent a huge range of functions, we need an algorithm to learn the weights on the network that will get it to do what we want.
Depending on the function we want the neural network to represent, some algorithms will find the appropriate weights quickly and some will not. No algorithm will outperform any other if we average performance over all possible mappings from input to output.
In short, UAT addresses what kinds of input-output mappings we can represent with a neural network. NFL addresses our ability to actually learn the weights that will generate that mapping.
53,181 | The Universal Approximation Theorem vs. The No Free Lunch Theorem: What's the caveat? | Approximating the optimal decision function for the training data is no guarantee of performance on unseen data. So the UAT and the NFLT are not in contradiction, because approximating the training decision surface to arbitrary precision does not mean your algorithm is the best (in fact, it might be worthless).
53,182 | Does multiplying the likelihood by a constant change the Bayesian inference using MCMC? | When you multiply the likelihood by the prior, the resulting function may no longer integrate to $1$; that is why you need to know the normalising constant to solve for the posterior analytically.
MCMC samples randomly with frequency proportional to the posterior density (the height of the distribution at any point) without needing to know that density exactly. That is why you don't need the normalising constant when empirically estimating the posterior by MCMC.
So theoretically, multiplying the likelihood by some constant should not affect MCMC. However, you might run into computational problems or numerical instability.
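Here is a minimal sketch in R of that point (the target density, the constant, and the tuning values are arbitrary illustrative choices; the constant is a power of two so that the scaling is exact in floating point): a random-walk Metropolis sampler run on an unnormalised density and on the same density times a constant produces identical chains, because the constant cancels in the acceptance ratio.
metropolis <- function(target, n.iter = 5000, start = 0, sd.prop = 1) {
  x <- numeric(n.iter); x[1] <- start
  for (i in 2:n.iter) {
    prop <- rnorm(1, x[i - 1], sd.prop)
    a <- target(prop) / target(x[i - 1])   # any constant in target cancels here
    x[i] <- if (runif(1) <= a) prop else x[i - 1]
  }
  x
}
target1 <- function(x) exp(-x^2 / 2)       # unnormalised N(0, 1) density
target2 <- function(x) 2^20 * exp(-x^2 / 2)
set.seed(42); chain1 <- metropolis(target1)
set.seed(42); chain2 <- metropolis(target2)
identical(chain1, chain2)                  # TRUE: the constant changes nothing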
53,183 | Does multiplying the likelihood by a constant change the Bayesian inference using MCMC? | Not only is that allowed, but the multiple you get is still a valid likelihood function. A valid likelihood function is defined by the requirement that $L_\mathbf{x}(\theta) \propto p(\mathbf{x}|\theta)$ (i.e., it is proportional to the sampling density, with respect to the parameter vector). Multiplication of a likelihood function by a positive value that does not depend on the parameter vector leaves this proportionality requirement intact, so the result is another valid likelihood function. While we commonly refer to "the" likelihood function, this is actually a class of functions defined up to a positive multiplicative constant.
MCMC methods that use the likelihood function (as opposed to the sampling density) should be set up properly to allow any instance of the likelihood function, and so multiplication by a constant does not matter in this case. Moreover, multiplication by a constant does not even change the fact that you are still using a valid likelihood function.
53,184 | Is a distribution still considered right-skewed if the majority of responses are zero? | This is certainly possible. The most common definition for a distribution to be right skewed is that the skewness
$$ \gamma_1 := E\bigg(\Big(\frac{X-\mu}{\sigma}\Big)^3\bigg) $$
be positive.
For instance, the Poisson distribution with parameter $\lambda$ has skewness $\frac{1}{\sqrt{\lambda}}>0$, so it is always right skewed. And for sufficiently small $\lambda$, a majority of the mass could be at zero. If $\lambda<\ln 2$, more than half the mass is at zero.
The same can hold for zero inflated distributions.
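A quick numerical check of the Poisson case in R (the choice $\lambda = 0.5$ is purely illustrative):
lambda <- 0.5
dpois(0, lambda)                 # P(X = 0) is about 0.61 > 0.5, since lambda < log(2)
1 / sqrt(lambda)                 # theoretical skewness, about 1.41 > 0
set.seed(1)
x <- rpois(1e6, lambda)
mean(((x - mean(x)) / sd(x))^3)  # sample skewness agrees closely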
Regarding your second question, it is usually not necessary to transform data to be "more" normal (although this is a common misconception), especially not discrete data. You may want to ask a separate question on this topic. If you do so, please explain why you believe your data should be transformed to normality.
53,185 | Is a distribution still considered right-skewed if the majority of responses are zero? | A note from this 11th grade stats class states:
For a right skewed distribution, the mean is typically greater than the median. Also notice that the tail of the distribution on the right hand (positive) side is longer than on the left hand side.
As noted in the comments of this post, the above statement is a generalization, and not a definition (though the nonparametric skew does, by definition, require the mean to be greater than the median to establish positive skewness).
53,186 | Should you use an opinion based variable in modelling? (If it predicts well) | I agree with Noah; it is not really a technical statistical question per se. There are several questions to which you need a clear answer.
Do you have a "consistent" subjective rating?
Let's say your training data come from the opinions of one existing employee: will a new employee's assessments reflect the same opinions? It is really problematic if there are inconsistent opinions and ratings after the implementation phase of your model, because then you can no longer rely on the measured performance of the feature. I think this is probably the most problematic assumption if you decide to use it.
What is your modeling objective?
If the goal is to maximize the predictive capability of the model solely, you have a legitimate reason to use it.
Is there any other business constraint?
Sometimes even if you have a significant predictor, you can't use it because of some business and legal constraint. For example, if you were to build a credit model to predict default on loan in the financial sector, you can't use age and gender (in the U.S.)... etc.
Is it ethical to include the variable?
This question holds your modelling to a higher standard; it depends on the context of your business domain.
Potential solution:
Is it possible to derive an estimate from another variable? For example, do you have the address of the donor? If so, using the address as an intermediate variable to get an estimate of the donor's net worth (e.g., Zillow's Zestimate) may be a good idea.
P.S. There is a well-discussed topic on stepwise regression; you should check out the post here
53,187 | Should you use an opinion based variable in modelling? (If it predicts well) | If you used stepwise regression, it is possible that you are making a type I error and capitalizing on chance, so be careful about interpreting results from it without a cross-validation sample. In addition, if this variable is highly correlated with another variable in your sample (e.g., wealth), the fact that it emerged as important and not the other variable could be due to chance.
That said, whether to include this variable in a model depends on what the model is attempting to do. If it is to be used to optimally predict the outcome in a new data set, then sure, use every variable you have that is helpful for doing so. The meaning of the variable is irrelevant.
If you are trying to make an inference about the relationship between predictors and the outcome in the population, then this variable doesn't do much to explain anything about an individual's characteristics and their decision to donate. Instead, it should hint that you need to collect additional data on the common causes of the perception and the propensity to donate. For example, maybe someone's job influences both the perceptions of their wealth by an onlooker and their decision to donate, independent of their actual wealth. Including this as a predictor would create a model with more explanatory power.
In general, this is a substantive rather than a statistical question and depends on the type of inference you want to make. Is your model meant to be optimally predictive in an external sample? Is it meant to explain variance in the outcome? Is it meant to represent causal relationships between predictors and the outcome? How you model and which variables you should include in your model are determined by the answers to these questions.
53,188 | Why take the minimum in the acceptance ratio in the Metropolis-Hastings algorithm? | One very useful fact about the standard uniform distribution is that for any $r \in [0,1]$, $P(u \leq r)= r$ when $u \sim U(0,1)$. We're doing a stochastic hill-climbing procedure, which means we have two cases to consider: (1) if it's more likely to move $x\to x'$ than $x'\to x$, i.e. the proposed point is "better", we'll always move there; (2) if $x'$ is not as good, we'll move there with some probability that depends on how "good" it is.
We can make the description of this process simpler by forcing $\alpha(x'|x)\in[0,1]$ so that $\alpha(x'|x)$ itself becomes the probability of accepting.
In general $\alpha(x'|x) \geq 0$, but it need not be bounded above by $1$. If the proposed point $x'$ is better, as measured by $\frac{P(x')g(x|x')}{P(x)g(x'|x)}\geq 1$, we want to move there with probability $1$. If the proposed point $x'$ is not better, we still want a chance to move there, and want to do so with probability $\frac{P(x')g(x|x')}{P(x)g(x'|x)}$. A simple way to do this in one line is
$$
\alpha(x'|x) = \begin{cases} 1 & \frac{P(x')g(x|x')}{P(x)g(x'|x)} \geq 1 \\ \frac{P(x')g(x|x')}{P(x)g(x'|x)} & \text{o.w.}\end{cases} = \min\left\{1, \frac{P(x')g(x|x')}{P(x)g(x'|x)}\right\}
$$
which makes it so
$$
P(\text{move to } x' \text{ from } x) = P(u \leq \alpha(x'|x)) = \alpha(x'|x).
$$
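A one-line check in R that the clipping is purely cosmetic (the simulated $u$ and $r$ values are arbitrary; any nonnegative ratios would do): for $u \sim U(0,1)$, the events $u \leq \min\{1, r\}$ and $u \leq r$ coincide.
set.seed(1)
u <- runif(1e5)                        # uniform draws used for the accept/reject step
r <- rexp(1e5)                         # nonnegative "ratios", often exceeding 1
identical(u <= pmin(1, r), u <= r)     # TRUE: clipping never changes a decision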
53,189 | Why take the minimum in the acceptance ratio in the Metropolis-Hastings algorithm? | $\alpha$ is the acceptance probability. You don't have to clip it in your code since as you point out $u \le \alpha_{\textrm{clipped}}$ iff $u \le \alpha_{\textrm{unclipped}}$.
53,190 | When is it better to use Multiple Linear Regression instead of Polynomial Regression? | First of all note that polynomial regression is a special case of multiple linear regression.
Let's consider three models:
Model1
$Y = \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \epsilon$
and
Model2
$Y = \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 x_1^3 + \epsilon$
and
Model3
$Y = \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 x_1^3 + \beta_4 x_2 + \beta_5 x_3 + \epsilon$
Of course, Model3 would explain the most variation in the data. However, if you are worried about overfitting you might go for either Model1 or Model2. Also, if some of the five $\beta$s are not significant you can exclude them. If there is no non-linearity, go for Model1. If only one explanatory variable shows a significant impact, but this variable has a non-linear relationship with the explained variable, go for Model2.
You can use variable selection in order to check which variables are relevant for the model. One famous variable selection algorithm is Boruta, but you can also do variable selection with AIC and BIC.
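A minimal sketch in R of fitting and comparing the three models (the simulated data are purely illustrative):
set.seed(123)
n  <- 200
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n)
y  <- 1 + 2 * x1 - 3 * x1^2 + 0.5 * x2 + rnorm(n)
m1 <- lm(y ~ x1 + x2 + x3)                        # Model1: multiple linear regression
m2 <- lm(y ~ poly(x1, 3, raw = TRUE))             # Model2: cubic polynomial in x1 only
m3 <- lm(y ~ poly(x1, 3, raw = TRUE) + x2 + x3)   # Model3: both combined
AIC(m1, m2, m3)   # lower AIC indicates a better trade-off between fit and complexity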
53,191 | Does the interval/ratio distinction ever matter? | As an example, consider a variety of generalized linear models -- for example, gamma, Poisson or inverse Gaussian regression models. Those models (plus some others) correspond to models for data that could be classified as ratio data (the zero is meaningful, 2 really is twice as much as 1, etc).
Further, these models are not equivariant to adding or subtracting constants to/from the data (so it's plainly not simply interval).
If you're trying to analyze some kinds of times/counts/monetary data -- and numerous other kinds of data, such models may be very useful.
In a related fashion, a log-transform or a power transform isn't generally meaningful for data with an arbitrary zero (where the data mean exactly the same thing with a different zero) even if the data happen to be all-positive. It is often meaningful for ratio data.
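A small illustration of this in R (simulated data with arbitrary parameter values): for a gamma GLM with a log link, rescaling $y$ (a change of units, which ratio data permits) only shifts the intercept, whereas adding a constant to $y$ (a change of origin) alters the fitted relationship itself.
set.seed(1)
x <- runif(200, 1, 5)
y <- rgamma(200, shape = 5, rate = 5 / exp(0.5 + 0.3 * x))   # E[y] = exp(0.5 + 0.3 x)
y.rescaled <- 10 * y    # change of units
y.shifted  <- y + 10    # change of origin
coef(glm(y          ~ x, family = Gamma(link = "log")))
coef(glm(y.rescaled ~ x, family = Gamma(link = "log")))   # slope unchanged, intercept moves by log(10)
coef(glm(y.shifted  ~ x, family = Gamma(link = "log")))   # the fitted relationship itself changes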
One should also be careful not to get too hung up on Stevens' typology in particular; there are other typologies that can be more useful for statistical purposes. Indeed, the notion of using typologies to decide the analysis (rather than the question the data are being used to investigate or answer) may often be misplaced. It can sometimes be helpful, but not infrequently it's the opposite.
53,192 | Does the interval/ratio distinction ever matter? | Computing percentages or ratios seems to be the easiest statistical operation that is valid for ratio scale but invalid for an interval scale.
Temperatures in Celsius or Fahrenheit are examples of interval-scale measurement (there is no true zero). For example, it makes no sense to say that 110°C is 10% hotter than 100°C, because if you transform the temperatures to Fahrenheit you would get $\frac{230-212}{212}\times100\%\approx 8\%$ hotter.
However, length has a true zero and is on a ratio scale. So now you can say that something measuring 110 cm is 10% longer than something measuring 100 cm. If you transform that to inches you still get $\frac{(43.3071 - 39.3701)}{39.3701}\times100\%=10\%$.
53,193 | SMOTE data balance - before or during Cross-Validation | The second method should be preferred for exactly the reason that you gave to justify the first. The first method uses the whole data set to synthesize new samples, whereas cross-validation excludes points from training to give an accurate assessment of the error rate on new data. If you apply SMOTE first, information from the excluded points will leak into the training data and taint the cross-validation testing.
53,194 | SMOTE data balance - before or during Cross-Validation | Method 1 should not be used, as it leaks information from the test partition into the training set in each fold of the cross-validation. This is because a synthetic example may lie between a real training pattern and a real test pattern, or between two real test patterns. Consider a synthetic example that, by chance, is generated very close to a real test pattern and ends up in the training set.
The way to look at it is that cross-validation is a method of evaluating the performance of a procedure for fitting a model, rather than of the model itself. So the whole procedure must be implemented independently, in full, in each fold of the cross-validation. So if SMOTE is part of the model-fitting procedure, it needs to be done separately in each fold.
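A sketch of the fold-wise workflow (method 2) in R; oversample(), fit() and evaluate() are hypothetical stand-ins for whatever SMOTE implementation, model and metric you use. The point is only that the oversampling sees the training fold and nothing else.
cv_with_oversampling <- function(data, k = 5, oversample, fit, evaluate) {
  folds <- sample(rep(1:k, length.out = nrow(data)))
  scores <- numeric(k)
  for (i in 1:k) {
    train <- data[folds != i, ]
    test  <- data[folds == i, ]
    train.bal <- oversample(train)      # synthesize minority examples from the training fold only
    model     <- fit(train.bal)
    scores[i] <- evaluate(model, test)  # the test fold contains only real, untouched cases
  }
  mean(scores)
}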
53,195 | Fitting Weibull distribution in R | Here's the fitted pdf and cdf (Weibull) for each of locations 1 to 3:
Let's break down what we need to do here, keeping in mind that the end goal is to estimate the cumulative proportion of area planted with a certain crop at some value for the random variable time $X$:
The first step is to fit a distribution (e.g. $Weibull\left(a:= \text{shape},b := \text{scale}\right)$ or $Beta\left(\alpha,\beta\right)$).
You made an error in fitting the data on a Weibull distribution because the function fitdist (which uses MLE by default) expects a vector of observations, which would be 'time $X_i$ when some area $i$ was planted with a certain crop'.
Here is some R that turns the cumulative data into a vector of observations, which can then be used to fit the distribution with fitdist:
dat <- dat[order(dat$loc.id,dat$year.id),]
dat$loc.id <- factor(dat$loc.id)
dat$year.id <- factor(dat$year.id)
library(plyr)
#hacking the original data from the cumulative
to.density <- function(cdf) { #given a data frame with a 'cum.per.plant' column, returns the respective densities
y <- c(); cdf <- cdf[,'cum.per.plant']
for (cum in seq(1,NROW(cdf))) {y <- c(y, ifelse(cum==1, cdf[cum], cdf[cum]-cdf[cum-1]))}
return(y)
}
#apply to each location X year in dat, then kbind density data to dat
densities <- dlply(dat, .(loc.id, year.id), to.density)
dat <- cbind(dat,dens=unlist(densities))
#forming a dataset of the variable planting time
#--because the proportions go to the thousandths place, a pseudo-sample of time.id's with
#--n=1000 should allow usage of fitdist
rep.vec <- function(df) {
y <- c()
for (row in rownames(df)) {y <- c(y, rep(df[row,'time.id'], df[row,'dens']*1000))}
return(y)
}
#apply to each location in dat - may want to include year.id in the split (excluded)
time.sample <- dlply(dat, .(loc.id), rep.vec)
Be sure to assess the 'fit' of your estimated parameters (perhaps by calculating the mean square error) on the data.
Your code does not indicate that each location and year will be fitted with its own distribution, although you suggest you want to calculate the cumulative area planted for a given location or year.
Here is some R for fitting each location:
#fitting separate Weibull distributions for each loc.id (may want to include year.id in
#the split)
library(fitdistrplus)
fit.weibull <- function(loc) {
y <- summary(fitdist(loc,'weibull'))[[1]]
return(y)
}
params <- lapply(time.sample, fit.weibull) #apply to each element in time sample
Finally, consider the inclusion of a location parameter, which shifts the graph of the pdf in a negative or positive direction along the x-axis; this should be appropriate because in many locations, no area gets planted until the $X=2^{nd}$ week. The inclusion of a location parameter would likely improve the fit of your distributions.
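As a rough sketch of how such a location (threshold) parameter could be estimated (the times vector is simulated here so the snippet runs on its own; in practice it would be one location's vector of planting times, e.g. time.sample[[1]] from above): shift the observations by a candidate threshold t0, fit a two-parameter Weibull to the shifted sample, and keep the t0 with the highest log-likelihood.
library(fitdistrplus)
set.seed(7)
times <- 2 + rweibull(1000, shape = 2, scale = 3)     # illustrative: nothing happens before week 2
fit.shifted <- function(times, t0) fitdist(times - t0, 'weibull')
t0.grid <- seq(0, min(times) - 0.01, by = 0.05)
loglik  <- sapply(t0.grid, function(t0) fit.shifted(times, t0)$loglik)
t0.hat  <- t0.grid[which.max(loglik)]
fit3    <- fit.shifted(times, t0.hat)
t0.hat; fit3$estimate    # estimated location, plus shape/scale of the shifted fit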
Use the fitted cdf (with the parameters informed by the previous step) to predict the cumulative proportion of area planted on a certain day for a given location.
It is important to understand what you are doing when you want the random variable time $X$ to be expressed in days rather than weeks: you can either multiply the Weibull scale parameter by 7, or divide the day values by 7 before evaluating the weekly fit. This can be seen in the equation for the Weibull cdf itself.
I've demonstrated it through a few lines of R:
#creating predictive model - on a daily rather than weekly domain
predict1.cum.plant <- function(day, loc.id, params) {
#'scaling' 7x the scale parameter
pweibull(day, shape=params[[loc.id]][[1]], scale=params[[loc.id]][[2]]*7)
}
predict2.cum.plant <- function(day, loc.id, params) {
#'scaling' 1/7x the random variable time
pweibull(day/7, shape=params[[loc.id]][[1]], scale=params[[loc.id]][[2]])
}
#example - the models are equal, which is obvious from the cdf equation!
predict1.cum.plant(35, 2, params)
predict2.cum.plant(35, 2, params)
pweibull(5, shape=params[[2]][[1]], scale=params[[2]][[2]]) #by week
Some supplemental code of mine can be found here.
Very similar methods can be used to fit a Beta distribution.
53,196 | If $(X,Y) \sim \mathcal N(0,\Sigma)$, are $Z = Y - \rho\frac{\sigma_Y}{\sigma_X}X$ and $X$ independent? | Let the random vector $\mathbf{X} \sim N_{p}(\mathbf{\mu},\mathbf{\Sigma})$. If we partition $\mathbf{X}$ as $\left(\begin{array}{c}
\mathbf{X^{(1)}}\\
\mathbf{X^{(2)}}
\end{array}\right)$ and take a non-singular linear transformation to the components of
$\mathbf{X}$ as
\begin{eqnarray*}
\mathbf{Y^{(1)}} &=& \mathbf{X^{(1)} + M X^{(2)}}\\
\mathbf{Y^{(2)}} &=& \mathbf{X^{(2)}}
\end{eqnarray*}
where the matrix $\mathbf{M}$ is chosen such that the sub-vectors $\mathbf{Y^{(1)}}$ and $\mathbf{Y^{(2)}}$ are uncorrelated. That is, choose $\mathbf{M}$ such that
\begin{equation*}
\mathbb{E}\mathbf{\left(Y^{(1)}-\mathbb{E}\mathbf{Y^{(1)}}\right)\left(Y^{(2)}-\mathbb{E}\mathbf{Y^{(2)}}\right)^{T}} = \mathbf{O}
\end{equation*}
Substituting $\mathbf{Y^{(1)}}$ and $\mathbf{Y^{(2)}}$ into the above equation and solving for $\mathbf{M},$ we get $\mathbf{M=-\Sigma_{12}\Sigma_{22}^{-1}}.$
In the bivariate case, let
\begin{equation*}
\mathbf{X} = \left(\begin{array}{c}
\mathbf{X_{1}}\\
\mathbf{X_{2}}
\end{array}\right)
\end{equation*}
\begin{equation*}
\mathbf{\Sigma}= \left(\begin{array}{cc}
\Sigma_{11} & \Sigma_{12}\\
\Sigma_{21} & \Sigma_{22} \\
\end{array}\right) = \left(\begin{array}{cc}
\sigma_{1}^{2} & \sigma_{12}\\
\sigma_{21} & \sigma_{2}^{2}\\
\end{array}\right) = \left(\begin{array}{cc}
\sigma_{1}^{2} & \rho\sigma_{1}\sigma_{2}\\
\rho\sigma_{1}\sigma_{2} & \sigma_{2}^{2}\\
\end{array}\right)
\end{equation*}
From this we note that $\Sigma_{12}=\rho\sigma_{1}\sigma_{2}$, $\Sigma_{22}^{-1}=\dfrac{1}{\sigma_{2}^{2}}$ and $-\mathbf{\Sigma_{12}\Sigma_{22}^{-1}} = -\rho\dfrac{\sigma_{1}}{\sigma_{2}}$. Hence, the random variables defined by
\begin{eqnarray*}
Y_{1} &=& X_{1} - \mathbf{\Sigma_{12}\Sigma_{22}^{-1}}X_{2} = X_{1}-\rho\dfrac{\sigma_{1}}{\sigma_{2}}X_{2}\\
Y_{2} &=& X_{2}.
\end{eqnarray*}
are independent. Since,
\begin{equation*}
\mathbf{Y} = \left(\begin{array}{c}
\mathbf{Y^{(1)}}\\
\mathbf{Y^{(2)}}
\end{array}\right) = \left(\begin{array}{ll}
\mathbf{I_{11}} & -\mathbf{\Sigma_{12}\Sigma_{22}^{-1}}\\
\mathbf{O} & \mathbf{I_{22}}
\end{array}\right)\mathbf{X}
\end{equation*}
being a non-singular transformation of $\mathbf{X}$, the random vector $\mathbf{Y}$ has a multivariate normal distribution with the variance-covariance matrix
$\left(\begin{array}{ll}
\mathbf{\Sigma_{11}}-\mathbf{\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}} & \mathbf{O}\\
\mathbf{O} & \mathbf{\Sigma_{22}}
\end{array}\right).$
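A quick Monte Carlo sanity check of this result in R (the particular parameter values are arbitrary):
library(MASS)
set.seed(1)
sx <- 2; sy <- 3; rho <- 0.6
Sigma <- matrix(c(sx^2, rho * sx * sy, rho * sx * sy, sy^2), 2, 2)
xy <- mvrnorm(1e5, mu = c(0, 0), Sigma = Sigma)
x <- xy[, 1]; y <- xy[, 2]
z <- y - rho * (sy / sx) * x   # the combination from the question
cor(z, x)                      # approximately 0; for jointly normal (Z, X) this implies independence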
53,197 | If $(X,Y) \sim \mathcal N(0,\Sigma)$, are $Z = Y - \rho\frac{\sigma_Y}{\sigma_X}X$ and $X$ independent? | Hint:
One can see that $Z$, being a linear combination of jointly normal variables $X$ and $Y$, is itself univariate normal. And two linear combinations (namely, $Z$ and $X$) of jointly normal variables are themselves jointly normal. So one possible way is to find the joint moment generating function of $(Z,X)$ to see whether $X$ and $Z$ are independent or not. The joint MGF of $(Z,X)$ is given by
$$M(t_1,t_2)=E(\exp(t_1Z+t_2X))=E\left[\exp\left(\left(t_2-\rho t_1\frac{\sigma_y}{\sigma_x}\right)X+t_1Y\right)\right]$$
From the expression of the joint MGF of $(X,Y)$, that last expectation gives $$M(t_1,t_2)=\exp\left[\frac{1}{2}\left(\sigma_x^2\left(t_2-t_1\rho\frac{\sigma_y}{\sigma_x}\right)^2+\sigma_y^2t_1^2+2\rho\sigma_x\sigma_y\left(t_2-t_1\rho\frac{\sigma_y}{\sigma_x}\right)t_1\right)\right]$$
Simplify that exponent in terms of a bivariate normal MGF and then try to conclude from the correlation whether $Z$ and $X$ are independent or not. You already know that zero correlation is a necessary and sufficient condition of independence for jointly normal variables.
53,198 | difference between accuracy and Rand index (R) | Rand index is accuracy computed not on the raw labels (which does not work unless your data happen to be labelled so that class 1 is cluster 1, and so on).
Instead, it is the accuracy on pairs of points, which is invariant to renaming clusters.
In binary classification, the common definition of accuracy is (TP+TN)/(TP+FP+FN+TN); that should make the similarity of the equations easy to see.
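A small sketch in R that makes the pair-counting view concrete (the toy labels are made up for illustration):
rand_index <- function(a, b) {
  same_a <- outer(a, a, '==')[lower.tri(diag(length(a)))]   # pairs in the same class
  same_b <- outer(b, b, '==')[lower.tri(diag(length(b)))]   # pairs in the same cluster
  mean(same_a == same_b)                                    # (TP + TN) / number of pairs
}
truth    <- c(1, 1, 1, 2, 2, 3)
clusters <- c(2, 2, 2, 3, 3, 1)   # the same grouping, with the clusters renamed
mean(truth == clusters)           # raw "accuracy": 0, misled by the renaming
rand_index(truth, clusters)       # 1: the two groupings agree on every pair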
53,199 | difference between accuracy and Rand index (R) | The confusion matrix that you use to calculate the RI is different from the one used for accuracy.
Definition of confusion matrix in the Rand Index (RI):
+--------------------------------+--------------------------------------+
| TP: | FN: |
| Same class + same cluster | Same class + different clusters |
+--------------------------------+--------------------------------------+
| FP: | TN: |
| different class + same cluster | different class + different clusters |
+--------------------------------+--------------------------------------+
Another difference between these two is that unlike accuracy, RI is mainly used for clustering (unsupervised learning).
The best reference for learning about the RI is the Introduction to Information Retrieval book:
https://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
Accuracy is sensitive to cluster naming; however, RI is not.
53,200 | difference between accuracy and Rand index (R) | Couple of points to remember:
The Rand Index looks at the similarity between any two clusterings; generally, no 'true' labels are involved. For calculating accuracy, by contrast, you need to compare the true labels with the predicted labels.
Like Has QUIT--Anony-Mousse answered, Rand Index looks for the relationship between two points in the dataset, rather than the relationship of a point and its true label.
Here's some visualization:
[Figures omitted: two clusterings of the same points that form the same partition, only with the two colors swapped.]
If Method 1 and Method 2 are two such clustering methods, then their Rand Index will be equal to 1.
If Method 2 instead signifies the Reference Clustering (where the two colors depict different classes), then accuracy will be 0.
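A quick numeric illustration of that contrast (assuming scikit-learn >= 0.24 is available for rand_score; the label vectors are made up to mimic the swapped-colors picture):

    from sklearn.metrics import accuracy_score, rand_score

    method_1 = [0, 0, 0, 1, 1, 1]   # clustering produced by Method 1
    method_2 = [1, 1, 1, 0, 0, 0]   # same partition, but the two "colors" swapped

    print(rand_score(method_1, method_2))      # 1.0 -- identical partitions
    print(accuracy_score(method_1, method_2))  # 0.0 -- read as labels, every point mismatches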
Hope this helps.