3,001 | Which activation function for output layer? | Sigmoid and tanh should not be used as activation functions for the hidden layers. This is because of the vanishing gradient problem: if the input is large in magnitude (where the sigmoid goes flat), the gradient will be near zero, which causes very slow or no learning during backpropagation because the weights are updated with very small values.
Detailed explanation here: http://cs231n.github.io/neural-networks-1/#actfun
The best function for hidden layers is thus ReLU.
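To make the saturation concrete, here is a minimal R sketch (the inputs are arbitrary) that evaluates the sigmoid's derivative, $\sigma'(x)=\sigma(x)\,(1-\sigma(x))$:
sigmoid <- function(x) 1 / (1 + exp(-x))
sigmoid_grad <- function(x) sigmoid(x) * (1 - sigmoid(x)) # derivative of the sigmoid
round(sigmoid_grad(c(0, 2, 5, 10)), 6)
# 0.250000 0.104994 0.006648 0.000045 (the gradient collapses as the input grows)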
Sigmoid and tanh should not be used as activation function for the hidden layer. This is because of the vanishing gradient problem, i.e., if your input is on a higher side (where sigmoid goes flat) then the gradient will be near zero. This will cause very slow or no learning during backpropagation as weights will be updated with really small values.
Detailed explanation here: http://cs231n.github.io/neural-networks-1/#actfun
The best function for hidden layers is thus ReLu. | Which activation function for output layer?
Sigmoid and tanh should not be used as activation function for the hidden layer. This is because of the vanishing gradient problem, i.e., if your input is on a higher side (where sigmoid goes flat) th |
3,002 | Which activation function for output layer? | Softmax outputs produce a vector that is non-negative and sums to 1. It's useful when you have mutually exclusive categories ("these images only contain cats or dogs, not both"). You can use softmax if you have $2,3,4,5,...$ mutually exclusive labels.
Using $2,3,4,...$ sigmoid outputs produces a vector where each element is a probability. It's useful when you have categories that are not mutually exclusive ("these images can contain cats, dogs, or both cats and dogs together, or neither cats nor dogs"). You use as many sigmoid neurons as you have categories, and your labels should not be mutually exclusive.
A cute trick is that you can also use a single sigmoid unit if you have a mutually-exclusive binary problem; because a single sigmoid unit can be used to estimate $p(y=1)$, the Kolmogorov axioms imply that when $y$ is binary, we have $1-p(y=1)=p(y=0)$.
Using the identity function as an output can be helpful when your outputs are unbounded. For example, some company's profit or loss for a quarter could be unbounded on either side.
ReLU units or similar variants can be helpful when the output is bounded below (or above, if you reverse the sign). If the output is only restricted to be non-negative, it would make sense to use a ReLU activation as the output function.
Likewise, if the outputs are somehow constrained to lie in $[-1,1]$, tanh could make sense.
The nice thing about neural networks is that they're incredibly flexible tools, and flexibility in output activation is one aspect of that flexibility.
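As a minimal sketch of the softmax and per-class sigmoid cases above (the logits and label names are purely illustrative):
softmax <- function(z) exp(z - max(z)) / sum(exp(z - max(z))) # numerically stable softmax
sigmoid <- function(z) 1 / (1 + exp(-z))
z <- c(cat = 2.0, dog = 0.5, neither = -1.0) # hypothetical logits for three labels
round(softmax(z), 3) # non-negative and sums to 1: mutually exclusive classes
round(sigmoid(z), 3) # each element in (0, 1) independently: non-exclusive labels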
3,003 | Which activation function for output layer? | The choice of the activation function for the output layer depends on the constraints of the problem. I will give my answer based on different examples:
Fitting in Supervised Learning: any activation function can be used in this problem. In some cases, the target data would have to be mapped within the image of the activation function.
Binary decisions: sigmoid or softmax.
Examples:
Supervised Learning: classification of images in two classes A/B (cats/dogs, number/letter, art/non-art):
sigmoid: the output could correspond to the confidence $c$ (valued between 0 and 1) that the image belongs to the first class. The value $1-c$ could be interpreted as the confidence that the image belongs to the second class.
softmax: the outputs could be interpreted as the confidences $c_1$ and $c_2$ that the image belongs to each class.
Reinforcement Learning: button actions in policy gradient:
sigmoid: the output could correspond to the probability $p$ of pressing the button; the probability of not pressing it would be $1-p$.
softmax: the outputs would correspond to the probabilities $p_1, p_2$ of pressing/not pressing the button.
Multiple decisions: softmax. Examples:
Supervised Learning: classification of images in multiple classes A/B/C/... (for example, classification of digits 0/1/2/3/4/5/6/7/8/9; see MNIST)
softmax: in these cases the softmax function is usually chosen so that the sum of all the confidences $\{c_i\}_i$ adds up to 1. The most "reliable" class would have an output closer to $1$.
maxout: however, it is also possible to choose the class according to the maximum value of the layer (see this paper).
Reinforcement Learning: joystick-type actions in policy gradient; actions that exclude other actions (see openai gym environments; for example LunarLander-v2 where possible actions are "do nothing, fire left orientation engine, fire main engine OR fire right orientation engine."):
softmax: the outputs of this function would represent the probabilities $\{p_i\}_i$ of choosing each action.
others: they are not usually used since one needs to parameterize the probability of each action. There could be some other way to parameterize the probabilities, but that would probably result in a complicated expression for the computation of the gradient:
$$ \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}}( a_k | s_k ) $$
Continuous actions in policy gradient (reinforcement learning); actions that do not take discrete values (see openai gym environments; for example BipedalWalker-v2 where the actions are the amount of torque applied to each joint of the robot): in these cases a probability distribution is usually defined, and used to choose the actions; each action has an associated probability density. The output layers would parameterize the probability distribution. A couple of examples of distributions would be:
Normal distribution parametrized by the mean $\mu$ and variance $\sigma^2$: in this case an output layer would provide the mean of the distribution, and another one would provide the variance:
if $\mu$ can take values on all $\mathbb{R}$, activation functions like identity, arcsinh, or even lrelu could be used.
if $\mu$ can take values in a range $(a, b)$, activation functions such as sigmoid, tanh, or any other whose range is bounded could be used.
for $\sigma^2$ it is convenient to use activation functions that produce strictly positive values, such as sigmoid or softplus (a plain relu can output exactly zero, so it is a less safe choice here).
Beta distribution parameterized by $a$ and $b$: in this case $a, b > 0$, so any activation function that produces values greater than $0$ could be convenient. However, this distribution usually presents problems when $a, b < 1$, which is why activation functions are often designed to produce values greater than or equal to $1$. A toy example would be softplus+1; a small sketch of these parameterizations follows below.
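A minimal R sketch of those parameterizations (the raw values below are arbitrary stand-ins for a network's final pre-activations):
softplus <- function(x) log(1 + exp(x)) # strictly positive
raw <- c(-2, 0, 3) # hypothetical pre-activation outputs
tanh(raw) # a mean constrained to (-1, 1)
softplus(raw) # a variance parameter > 0
softplus(raw) + 1 # Beta parameters >= 1 ("softplus+1")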
As a side note on the choice of activation functions in the HIDDEN layers: if you are interested in how different activation functions perform, check the following video:
https://www.youtube.com/watch?v=Hb3vIYUQ_I8
3,004 | Which activation function for output layer? | Different activation functions exist because they match different problems. Among others, you mention "linear functions, sigmoid functions and softmax functions":
linear is an obvious choice for regression problems where you are predicting unbounded quantities, e.g. stock log returns.
sigmoid can be used for regression of bounded quantities, such as probabilities between 0 and 1, and also for classification into two categories such as male/female.
softmax is used for classification, especially between multiple categories, e.g. activities such as "walking", "sleeping", "running" in activity trackers. I had a curious case recently, receiving an email from a local association which started with something like "based on your profile answers we think you are white..."; I wonder what their ML algorithm was.
3,005 | Which activation function for output layer? | A lot of people have given very intuitive and thorough answers to this question, so I will give mine in a nutshell.
For hidden layers the best option is ReLU; sigmoid is a reasonable second choice.
For output layers the best option depends on the task: use a linear function for regression-type outputs and softmax for multi-class classification.
I gave just one option for each case to avoid confusion; you can also try other functions to get a better understanding.
3,006 | Why is ANOVA equivalent to linear regression? | ANOVA and linear regression are equivalent when the two models test the same hypotheses and use an identical encoding. The models differ in their basic aim: ANOVA is mostly concerned with presenting differences between category means in the data, while linear regression is mostly concerned with estimating the sample mean response and an associated $\sigma^2$.
Somewhat aphoristically, one can describe ANOVA as a regression with dummy variables. We can easily see that this is the case in a simple regression with categorical variables. A categorical variable is encoded as an indicator matrix (a matrix of 0/1 entries depending on whether a subject is part of a given group or not) and then used directly in the solution of the linear system described by the linear regression.
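As a quick illustration of that encoding, R's model.matrix shows the 0/1 indicator columns that a categorical regressor expands into (a toy grouping variable; the example below carries out the actual comparison):
g <- factor(c("A", "A", "B", "C", "C")) # toy grouping variable with three levels
model.matrix(~ 0 + g) # pure indicator coding: one 0/1 column per group
model.matrix(~ g) # default treatment coding: intercept plus dummies for B and C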
Let's see an example with 5 groups. For the sake of argument I will assume that the mean of group1 equals 1, the mean of group2 equals 2, ..., and the mean of group5 equals 5. (I use MATLAB, but the same thing can be done in R.)
rng(123); % Fix the seed
X = randi(5,100,1); % Generate 100 random integers, uniform on {1,...,5}
Y = X + randn(100,1); % Generate my response sample
Xcat = categorical(X); % Treat the integers as categories
% One-way ANOVA
[anovaPval,anovatab,stats] = anova1(Y,Xcat);
% Linear regression
fitObj = fitlm(Xcat,Y);
% Get the group means from the ANOVA
ANOVAgroupMeans = stats.means
% ANOVAgroupMeans =
% 1.0953 1.8421 2.7350 4.2321 5.0517
% Get the beta coefficients from the linear regression
LRbetas = [fitObj.Coefficients.Estimate']
% LRbetas =
% 1.0953 0.7468 1.6398 3.1368 3.9565
% Shift the betas by the intercept to recover the group means
scaledLRbetas = [LRbetas(1) LRbetas(1)+LRbetas(2:5)]
% scaledLRbetas =
% 1.0953 1.8421 2.7350 4.2321 5.0517
% Check if the two results are numerically equivalent
abs(max( scaledLRbetas - ANOVAgroupMeans))
% ans =
% 2.6645e-15
As can be seen, in this scenario the results are essentially the same. The minute numerical difference is due to the design not being perfectly balanced, as well as to the underlying estimation procedure; the ANOVA accumulates numerical errors a bit more aggressively. Note that we fit an intercept, LRbetas(1); we could fit an intercept-free model, but that would not be a "standard" linear regression. (The results would be even closer to the ANOVA in that case, though.)
The $F$-statistic (the ratio of the between-group to the within-group mean squares) will also be the same for the ANOVA and the linear regression in the above example:
abs( fitObj.anova.F(1) - anovatab{2,5} )
% ans =
% 2.9132e-13
This is because both procedures test the same hypothesis, but with different wordings: ANOVA will qualitatively check whether "the ratio is high enough to suggest that no grouping is implausible", while linear regression will qualitatively check whether "the ratio is high enough to suggest that an intercept-only model is possibly inadequate".
(This is a somewhat free interpretation of the "possibility to see a value equal or greater than the one observed under the null hypothesis" and it is not meant to be a text-book definition.)
Coming back to the final part of your question about "ANOVA tell(ing) you nothing about the coefficients of the linear model (assuming the means are not equal)", I hope you can now see that the ANOVA, in the case that your design is simple/balanced enough, tells you everything that a linear model would. The confidence intervals for the group means will be the same as the ones you have for your $\beta$'s, etc. Clearly, when one starts adding multiple covariates to the regression model, a simple one-way ANOVA no longer has a direct equivalence. In that case one augments the information used to calculate the linear regression's mean response with information that is not directly available to a one-way ANOVA. I believe one can re-express things in ANOVA terms once more, but it is mostly an academic exercise.
An interesting paper on the matter is Gelman's 2005 paper titled "Analysis of Variance - Why It Is More Important than Ever". It raises some important points; I am not fully supportive of the paper (I think I personally align much more with McCullagh's view), but it can be a constructive read.
As a final note: The plot thickens when you have mixed effects models. There you have different concepts about what can be considered a nuisance or actual information regarding the grouping of your data. These issues are outside the scope of this question but I think they are worthy of a nod.
3,007 | Why is ANOVA equivalent to linear regression? | Let me put some color into the idea that OLS with categorical (dummy-coded) regressors is equivalent to the factors in ANOVA. In both cases there are levels (or groups in the case of ANOVA).
In OLS regression it is most common to also have continuous variables among the regressors. These logically modify the relationship in the fitted model between the categorical variables and the dependent variable (DV), but not to the point of making the parallel unrecognizable.
Based on the mtcars data set we can first visualize the model lm(mpg ~ wt + as.factor(cyl), data = mtcars) as the slope determined by the continuous variable wt (weight), and the different intercepts projecting the effect of the categorical variable cylinder (four, six or eight cylinders). It is this last part that forms a parallel with a one-way ANOVA.
Let's see it graphically on the sub-plot to the right (the three sub-plots to the left are included for side-to-side comparison with the ANOVA model discussed immediately afterwards):
Each cylinder engine is color coded, and the distance between the fitted lines with different intercepts and the data cloud is the equivalent of the within-group variation in an ANOVA. Notice that the intercepts in the OLS model with a continuous variable (weight) are not mathematically the same as the values of the different within-group means in ANOVA, due to the effect of weight and the different model matrices (see below): the mean mpg for 4-cylinder cars, for example, is mean(mtcars$mpg[mtcars$cyl==4]) #[1] 26.66364, whereas the OLS "baseline" intercept (reflecting by convention cyl==4 (lowest-to-highest ordering of the numerals in R)) is markedly different: summary(fit)$coef[1] #[1] 33.99079. The slope of the lines is the coefficient for the continuous variable weight.
If you try to suppress the effect of weight by mentally straightening these lines and returning them to the horizontal line, you'll end up with the ANOVA plot of the model aov(mtcars$mpg ~ as.factor(mtcars$cyl)) on the three sub-plots to the left. The weight regressor is now out, but the relationship from the points to the different intercepts is roughly preserved - we are simply rotating counter-clockwise and spreading out the previously overlapping plots for each different level (again, only as a visual device to "see" the connection; not as a mathematical equality, since we are comparing two different models!).
Each level in the factor cylinder is separate, and the vertical lines represent the residuals or within-group error: the distance from each point in the cloud and the mean for each level (color-coded horizontal line). The color gradient gives us an indication of how significant the levels are in validating the model: the more clustered the data points are around their group means, the more likely the ANOVA model will be statistically significant. The horizontal black line around $\small 20$ in all the plots is the mean for all the factors. The numbers in the $x$-axis are simply the placeholder number/identifier for each point within each level, and don't have any further purpose than to separate points along the horizontal line to allow a plotting display different to boxplots.
And it is through the sum of these vertical segments that we can manually calculate the residuals:
mu_mpg <- mean(mtcars$mpg) # Mean mpg in dataset
TSS <- sum((mtcars$mpg - mu_mpg)^2) # Total sum of squares
SumSq = sum((mtcars[mtcars$cyl==4,"mpg"] - mean(mtcars[mtcars$cyl==4,"mpg"]))^2) + # within-group SS, 4 cyl
sum((mtcars[mtcars$cyl==6,"mpg"] - mean(mtcars[mtcars$cyl==6,"mpg"]))^2) + # within-group SS, 6 cyl
sum((mtcars[mtcars$cyl==8,"mpg"] - mean(mtcars[mtcars$cyl==8,"mpg"]))^2) # within-group SS, 8 cyl
The result: SumSq = 301.2626 and TSS - SumSq = 824.7846. Compare to:
Call:
aov(formula = mtcars$mpg ~ as.factor(mtcars$cyl))
Terms:
as.factor(mtcars$cyl) Residuals
Sum of Squares 824.7846 301.2626
Deg. of Freedom 2 29
Exactly the same result as testing with an ANOVA the linear model with only the categorical cylinder as regressor:
fit <- lm(mpg ~ as.factor(cyl), data = mtcars)
summary(fit)
anova(fit)
Analysis of Variance Table
Response: mpg
Df Sum Sq Mean Sq F value Pr(>F)
as.factor(cyl) 2 824.78 412.39 39.697 4.979e-09 ***
Residuals 29 301.26 10.39
What we see, then, is that the residuals - the part of the total variance not explained by the model - as well as the explained variance are the same whether you fit an OLS of the type lm(DV ~ factors) or an ANOVA (aov(DV ~ factors)): when we strip the model of continuous variables we end up with an identical system. Similarly, when we evaluate the models globally or as an omnibus ANOVA (not level by level), we naturally get the same F-statistic and p-value: F-statistic: 39.7 on 2 and 29 DF, p-value: 4.979e-09.
This is not to imply that the testing of individual levels is going to yield identical p-values. In the case of OLS, we can invoke summary(fit) and get:
lm(formula = mpg ~ as.factor(cyl), data = mtcars)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 26.6636 0.9718 27.437 < 2e-16 ***
as.factor(cyl)6 -6.9208 1.5583 -4.441 0.000119 ***
as.factor(cyl)8 -11.5636 1.2986 -8.905 8.57e-10 ***
This is not possible in ANOVA, which is more of an omnibus test. To get these types of $p$-value assessments we need to run a Tukey Honest Significant Difference test, which will try to reduce the possibility of a type I error as a result of performing multiple pairwise comparisons (hence, "p adjusted"), resulting in a completely different output:
Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = mtcars$mpg ~ as.factor(mtcars$cyl))
$`as.factor(mtcars$cyl)`
diff lwr upr p adj
6-4 -6.920779 -10.769350 -3.0722086 0.0003424
8-4 -11.563636 -14.770779 -8.3564942 0.0000000
8-6 -4.642857 -8.327583 -0.9581313 0.0112287
Ultimately, nothing is more reassuring than taking a peek at the engine under the hood, which is none other than the model matrices, and the projections in the column space. These are actually quite simple in the case of an ANOVA:
$$\small\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\\vdots\\\vdots\\.\\y_n
\end{bmatrix} =
\begin{bmatrix} \color{magenta} 1 & 0 & 0 \\ \color{magenta}1 & 0 & 0
\\ \vdots & \vdots & \vdots \\ \color{magenta}
0 & 1 & 0 \\ \color{magenta}0 & 1 & 0
\\ \vdots & \vdots & \vdots \\
.&.&.\\\color{magenta}
0 & 0 & 1 \\ \color{magenta}0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
\mu_1\\
\mu_2\\
\mu_3
\end{bmatrix}
+\begin{bmatrix}
\varepsilon_1 \\
\varepsilon_2\\
\varepsilon_3\\
\vdots\\
\vdots\\
\vdots\\
.\\
\varepsilon_n
\end{bmatrix}\tag 1$$
This would be the one-way ANOVA model matrix with three levels (e.g. cyl 4, cyl 6, cyl 8), summarized as $\small y_{ij} = \mu_i + \epsilon_{ij}$, where $\mu_i$ is the mean at each level or group: when the error or residual for the observation $j$ of the group or level $i$ is added, we obtain the actual DV $y_{ij}$ observation.
On the other hand, the model matrix for an OLS regression is:
$$\small\begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4 \\ \vdots \\ y_n \end{bmatrix} =
\begin{bmatrix} 1 & x_{12} & x_{13}\\ 1 & x_{22} & x_{23} \\ 1 & x_{32} & x_{33} \\ 1 & x_{42} & x_{43} \\ \vdots & \vdots & \vdots \\1 & x_{n2} & x_{n3} \end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix}
+
\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \vdots \\ \varepsilon_n \end{bmatrix}$$
This is of the form $ \small y_i = \beta_0 + \beta_1\, x_{i1} + \beta_2\, x_{i2} + \epsilon_i $ with a single intercept $\beta_0$ and two slopes ($\beta_1$ and $\beta_2$) each for a different continuous variables, say weight and displacement.
The trick now is to see how we can create different intercepts, as in the initial example, lm(mpg ~ wt + as.factor(cyl), data = mtcars) - so let's get rid of the second slope and stick to the original single continuous variable weight (in other words, one single column besides the column of ones in the model matrix; the intercept $\beta_0$ and the slope for weight, $\beta_1$). The column of $\color{brown}1$'s will by default correspond to the cyl 4 intercept. Again, its value is not identical to the ANOVA within-group mean for cyl 4, an observation that shouldn't be surprising when comparing the column of $\color{brown}1$'s in the OLS model matrix (below) to the first column of $\color{magenta}1$'s in the ANOVA model matrix $(1),$ which only selects examples with 4 cylinders. The intercept will be shifted via dummy coding to explain the effect of cyl 6 and cyl 8 as follows:
$$\small\begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ y_4\\ y_5 \\ \vdots \\ y_n\end{bmatrix} =
\begin{bmatrix} \color{brown}1 & x_1 \\ \color{brown}1 & x_2 \\\color{brown} 1 & x_3 \\ \color{brown}1 & x_4 \\ \color{brown}1 & x_5 \\ \vdots & \vdots \\\color{brown}1 & x_n \end{bmatrix}
\begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}+
\begin{bmatrix}\color{red}1&0\\\color{red}1&0\\\color{red}1&0\\0&\color{blue}1\\0&\color{blue}1\\ \vdots & \vdots\\0&\color{blue}1\end{bmatrix}
\begin{bmatrix} \tilde\mu_2 \\ \tilde\mu_3 \end{bmatrix}
+
\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \varepsilon_4 \\ \varepsilon_5\\ \vdots \\ \varepsilon_n \end{bmatrix}$$
Now when the third column is $\color{red}1$ we'll be systematically shifting the intercept by $\tilde\mu_2.$ The $\tilde\cdot$ indicates that, just as the "baseline" intercept in the OLS model is not identical to the group mean of the 4-cylinder cars (although it reflects it), the differences between levels in the OLS model are not mathematically identical to the between-group differences in means:
fit <- lm(mpg ~ wt + as.factor(cyl), data = mtcars)
summary(fit)$coef[3] #[1] -4.255582 (difference between intercepts cyl==4 and cyl==6 in OLS)
fit <- lm(mpg ~ as.factor(cyl), data = mtcars)
summary(fit)$coef[2] #[1] -6.920779 (difference between group mean cyl==4 and cyl==6)
Likewise, when the fourth column is $\color{blue}1$, a fixed value $\tilde\mu_3$ will be added to the intercept. The matrix equation, hence, will be $\small y_i = \beta_0 + \beta_1\, x_i + \tilde\mu_i + \epsilon_i $. Therefore, going from this model to the ANOVA model is just a matter of getting rid of the continuous variables, and understanding that the default intercept in OLS reflects the first level in ANOVA.
3,008 | Why is ANOVA equivalent to linear regression? | Antoni Parellada and usεr11852 gave very good answers. I will address your question from a coding perspective, using R.
ANOVA tells you nothing about the coefficients of the linear model. So how is linear regression the same as ANOVA?
In fact, the aov function in R can be used in the same way as lm. Here are some examples.
> lm_fit=lm(mpg~as.factor(cyl),mtcars)
> aov_fit=aov(mpg~as.factor(cyl),mtcars)
> coef(lm_fit)
(Intercept) as.factor(cyl)6 as.factor(cyl)8
26.663636 -6.920779 -11.563636
> coef(aov_fit)
(Intercept) as.factor(cyl)6 as.factor(cyl)8
26.663636 -6.920779 -11.563636
> all(predict(lm_fit,mtcars)==predict(aov_fit,mtcars))
[1] TRUE
As you can see, not only can we get the coefficients from the ANOVA model, but we can also use it for prediction, just like the linear model.
If we check the help file for the aov function, it says:
This provides a wrapper to lm for fitting linear models to balanced or unbalanced experimental designs. The main difference from lm is in the way print, summary and so on handle the fit: this is expressed in the traditional language of the analysis of variance rather than that of linear models.
3,009 | Why is ANOVA equivalent to linear regression? | ANOVA is not a model; it is a method within a model
The analysis of variance (ANOVA) is a method that occurs within regression models. The technique gives a particular set of outputs that analyse the estimated variance of different parts of the model and use this to make inferences about whether or not there are relationships between the explanatory variables and the response variable. Comparison of linear regression to ANOVA is an "apples and oranges" comparison, since the former is a model and the latter is a method of analysis that occurs within a model. Sometimes you will see references to ANOVA as if it were a model, in which case there must be some underlying model to which the method is applied.
The ANOVA method is fairly simple to understand when you look at it holistically in the context of a general regression model. The technique is based on the law of iterated variance. Suppose you are working in the context of some regression model:
$$Y_i = f(\mathbf{x}_i, \theta) + \varepsilon_i
\quad \quad \quad
\varepsilon_1,...,\varepsilon_n \sim \text{IID Dist}(0, \sigma^2).$$
Using the law of iterated variance we can write the marginal variance of $Y_i$ as:
$$\begin{equation} \begin{aligned}
\mathbb{V}(Y_i)
&= \mathbb{V}(\mathbb{E}(Y_i|\mathbf{X}_i)) + \mathbb{E}(\mathbb{V}(Y_i|\mathbf{X}_i)) \\[6pt]
&= \mathbb{V}(f(\mathbf{X}_i, \theta)) + \mathbb{E}(\mathbb{V}(\varepsilon_i)) \\[6pt]
&= \mathbb{V}(f(\mathbf{X}_i, \theta)) + \mathbb{E}(\sigma^2) \\[6pt]
&= \mathbb{V}(f(\mathbf{X}_i, \theta)) + \sigma^2. \\[6pt]
\end{aligned} \end{equation}$$
Now, if the explanatory vector does not have a relationship with the response variable then the regression function does not depend on the explanatory vector, and so $\mathbb{V}(f(\mathbf{X}_i, \theta)) = 0$, which implies $\mathbb{V}(Y_i) = \sigma^2$. On the other hand, if the explanatory vector does have a relationship with the response variable, then we will generally have $\mathbb{V}(f(\mathbf{X}_i, \theta)) > 0$, which implies $\mathbb{V}(Y_i) > \sigma^2$. Thus, generally speaking, a larger gap between the estimated variance of the response variable, and the estimated variance of the error term, constitutes evidence in favour of the hypothesis that there is a relationship between the explanatory vector and the response variable.
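This decomposition is easy to check with a short simulation; the following R sketch (with an arbitrary regression function and $\sigma^2 = 1$) is purely illustrative:
set.seed(1)
n <- 1e5
x <- rnorm(n)
f <- 2 * x # a regression function with a genuine relationship, so V(f(X)) = 4
y <- f + rnorm(n, sd = 1) # error variance sigma^2 = 1
var(y) # roughly V(f(X)) + sigma^2 = 5
var(f) + 1 # roughly the same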
ANOVA can be taken further than this by breaking down the variance contributions of each of the explanatory variables in the model (or groups of these variables, etc.) to allow you to further test whether there is a plausible relationship between the response variable and an individual explanatory variable or group of explanatory variables. The lovely answer by Antoni Parellada shows you a colourful illustration of estimating the variance in three groups from a categorical explanatory variable.
The above decomposition from the law of iterated variance is the basic insight that underlies ANOVA. It is used to construct formal ANOVA tests to determine whether or not there is evidence of a relationship between the explanatory vector and the response variable. By conditioning on parts of the explanatory vector, this basic method can also be used to test for a relationship between particular subsets of explanatory variables and the response variable. In summary, ANOVA is a particular method that is used within the context of regression analysis to test for relationships between variables.
When is ANOVA "equivalent" to linear regression: The technique of ANOVA gives you a breakdown of the estimated variance of the components in the data, and this is often augmented with formal F-tests that use those estimated variance components. This is "equivalent" to performing the F-tests in a regression model, since that is what you are doing in ANOVA. (In simple linear regression there is only one explanatory variable so the F-test gives the same p-value as the T-test for this coefficient. In this case the ANOVA test is also equivalent to the individual T-test for the sole explanatory variable.)
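A minimal R sketch of that equivalence in simple linear regression (simulated data; the regression F-test and the coefficient t-test return the same p-value, with $F = t^2$):
set.seed(2)
x <- rnorm(50)
y <- 1 + 0.3 * x + rnorm(50)
fit <- lm(y ~ x)
anova(fit)["x", "Pr(>F)"] # F-test p-value
summary(fit)$coefficients["x", "Pr(>|t|)"] # t-test p-value (identical)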
You are correct that there are many aspects of the regression model that fall outside the scope of ANOVA (e.g., estimation of the coefficients for the individual explanatory variables). Again, that is because ANOVA is a method that occurs within the context of a regression model, not a model in its own right. When you see a whole model referred to as an "ANOVA model", it is more accurate to think of it as an ANOVA method applied to an underlying regression model.
ANOVA is not a model; it is a method within a model
The analysis of variance (ANOVA) is a method that occurs within regression models. The technique gives a particular set of outputs for the model th |
3,010 | Why is ANOVA equivalent to linear regression? | If we take all data entries and arrange them into one single column Y, with the rest of the columns being indicator variables 1{ith data is element of the jth column in the original ANOVA arrangement}, then by taking a simple linear regression of Y on any of the other columns (say column B), you should obtain the same DF, SS, MS and F test statistic as in your ANOVA problem.
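Here is a rough R sketch of this construction (simulated data with three made-up groups); the regression on the group indicators and the one-way ANOVA produce the same DF, SS, MS and F statistic:
set.seed(2)
g <- factor(rep(c("A", "B", "C"), each = 10))
y <- rnorm(30, mean = c(0, 1, 2)[g])
anova(lm(y ~ g))     # regression on the indicator (dummy) variables
summary(aov(y ~ g))  # classical one-way ANOVA: identical table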
Thus ANOVA can be 'treated as' Linear Regression by writing the data with binary variables. Also note that the coefficient of regression for, say, a regression of Y on B should be the same as the avg. of the column B, computed with the original data. | Why is ANOVA equivalent to linear regression? | If we take all data entries and arrange them into one single column Y, with the rest of the columns being indicator variables 1{ith data is element of the jth column in the original anova arrangement} | Why is ANOVA equivalent to linear regression?
If we take all data entries and arrange them into one single column Y, with the rest of the columns being indicator variables 1{ith data is element of the jth column in the original ANOVA arrangement}, then by taking a simple linear regression of Y on any of the other columns (say column B), you should obtain the same DF, SS, MS and F test statistic as in your ANOVA problem.
Thus ANOVA can be 'treated as' Linear Regression by writing the data with binary variables. Also note that the coefficient of regression for, say, a regression of Y on B should be the same as the avg. of the column B, computed with the original data. | Why is ANOVA equivalent to linear regression?
If we take all data entries and arrange them into one single column Y, with the rest of the columns being indicator variables 1{ith data is element of the jth column in the original anova arrangement} |
3,011 | Standard error for the mean of a sample of binomial random variables | It seems like you're using $n$ twice in two different ways - both as the sample size and as the number of bernoulli trials that comprise the Binomial random variable; to eliminate any ambiguity, I'm going to use $k$ to refer to the latter.
If you have $n$ independent samples from a ${\rm Binomial}(k,p)$ distribution, the variance of their sample mean is
$$ {\rm var} \left( \frac{1}{n} \sum_{i=1}^{n} X_{i} \right) = \frac{1}{n^2} \sum_{i=1}^{n} {\rm var}( X_{i} ) = \frac{ n {\rm var}(X_{i}) }{ n^2 } = \frac{ {\rm var}(X_{i})}{n} = \frac{ k pq }{n} $$
where $q=1-p$ and $\overline{X}$ is the sample mean. This follows since
(1) ${\rm var}(cX) = c^2 {\rm var}(X)$, for any random variable, $X$, and any constant $c$.
(2) the variance of a sum of independent random variables equals the sum of the variances.
The standard error of $\overline{X}$ is the square root of the variance: $\sqrt{\frac{ k pq }{n}}$. Therefore,
When $k = n$, you get the formula you pointed out: $\sqrt{pq}$
When $k = 1$, and the Binomial variables are just bernoulli trials, you get the formula you've seen elsewhere: $\sqrt{\frac{pq }{n}}$ | Standard error for the mean of a sample of binomial random variables | It seems like you're using $n$ twice in two different ways - both as the sample size and as the number of bernoulli trials that comprise the Binomial random variable; to eliminate any ambiguity, I'm g | Standard error for the mean of a sample of binomial random variables
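A small simulation sketch of the result above (the values of n, k and p are arbitrary and only for illustration):
set.seed(3)
n <- 100; k <- 10; p <- 0.3
xbar <- replicate(10000, mean(rbinom(n, size = k, prob = p)))
sd(xbar)                   # empirical standard error of the sample mean
sqrt(k * p * (1 - p) / n)  # theoretical value sqrt(kpq/n)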
It seems like you're using $n$ twice in two different ways - both as the sample size and as the number of bernoulli trials that comprise the Binomial random variable; to eliminate any ambiguity, I'm going to use $k$ to refer to the latter.
If you have $n$ independent samples from a ${\rm Binomial}(k,p)$ distribution, the variance of their sample mean is
$$ {\rm var} \left( \frac{1}{n} \sum_{i=1}^{n} X_{i} \right) = \frac{1}{n^2} \sum_{i=1}^{n} {\rm var}( X_{i} ) = \frac{ n {\rm var}(X_{i}) }{ n^2 } = \frac{ {\rm var}(X_{i})}{n} = \frac{ k pq }{n} $$
where $q=1-p$ and $\overline{X}$ is the sample mean. This follows since
(1) ${\rm var}(cX) = c^2 {\rm var}(X)$, for any random variable, $X$, and any constant $c$.
(2) the variance of a sum of independent random variables equals the sum of the variances.
The standard error of $\overline{X}$ is the square root of the variance: $\sqrt{\frac{ k pq }{n}}$. Therefore,
When $k = n$, you get the formula you pointed out: $\sqrt{pq}$
When $k = 1$, and the Binomial variables are just bernoulli trials, you get the formula you've seen elsewhere: $\sqrt{\frac{pq }{n}}$ | Standard error for the mean of a sample of binomial random variables
It seems like you're using $n$ twice in two different ways - both as the sample size and as the number of bernoulli trials that comprise the Binomial random variable; to eliminate any ambiguity, I'm g |
3,012 | Standard error for the mean of a sample of binomial random variables | It's easy to get two binomial distributions confused:
distribution of number of successes
distribution of the proportion of successes
The number of successes has variance npq, while the proportion of successes has variance npq/n^2 = pq/n. This results in different standard error formulas. | Standard error for the mean of a sample of binomial random variables | It's easy to get two binomial distributions confused:
distribution of number of successes
distribution of the proportion of successes
npq is the number of successes, while npq/n = pq is the ratio of | Standard error for the mean of a sample of binomial random variables
It's easy to get two binomial distributions confused:
distribution of number of successes
distribution of the proportion of successes
The number of successes has variance npq, while the proportion of successes has variance npq/n^2 = pq/n. This results in different standard error formulas. | Standard error for the mean of a sample of binomial random variables
It's easy to get two binomial distributions confused:
distribution of number of successes
distribution of the proportion of successes
npq is the number of successes, while npq/n = pq is the ratio of |
3,013 | Standard error for the mean of a sample of binomial random variables | We can look at this in the following way:
Suppose we are doing an experiment where we need to toss an unbiased coin $n$ times. The overall outcome of the experiment is $Y$ which is the summation of individual tosses (say, head as 1 and tail as 0). So, for this experiment, $Y = \sum_{i=1}^n X_i$, where $X_i$ are outcomes of individual tosses.
Here, the outcome of each toss, $X_i$, follows a Bernoulli distribution and the overall outcome $Y$ follows a binomial distribution.
The complete experiment can be thought of as a single sample. Thus, if we repeat the experiment, we can get another value of $Y$, which will form another sample. All possible values of $Y$ will constitute the complete population.
Coming back to the single coin toss, which follows a Bernoulli distribution, the variance is given by $pq$, where $p$ is the probability of head (success) and $q = 1 – p$.
Now, if we look at Variance of $Y$, $V(Y) = V(\sum X_i) = \sum V(X_i)$. But, for all individual Bernoulli experiments, $V(X_i) = pq$. Since there are $n$ tosses or Bernoulli trials in the experiment, $V(Y) = \sum V(X_i) = npq$. This implies that $Y$ has variance $npq$.
Now, the sample proportion is given by $\hat p = \frac Y n$, which gives the 'proportion of success or heads'. Here, $n$ is a constant as we plan to take the same number of coin tosses for all the experiments in the population.
So, $V(\frac Y n) = (\frac {1}{n^2})V(Y) = (\frac {1}{n^2})(npq) = pq/n$.
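As an illustrative check of this variance (arbitrary n and p, simulation only):
set.seed(4)
n <- 200; p <- 0.4
phat <- replicate(10000, mean(rbinom(n, size = 1, prob = p)))
var(phat)        # empirical variance of the sample proportion
p * (1 - p) / n  # theoretical pq/n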
So, standard error for $\hat p$ (a sample statistic) is $\sqrt{pq/n}$ | Standard error for the mean of a sample of binomial random variables | We can look at this in the following way:
Suppose we are doing an experiment where we need to toss an unbiased coin $n$ times. The overall outcome of the experiment is $Y$ which is the summation of in | Standard error for the mean of a sample of binomial random variables
We can look at this in the following way:
Suppose we are doing an experiment where we need to toss an unbiased coin $n$ times. The overall outcome of the experiment is $Y$ which is the summation of individual tosses (say, head as 1 and tail as 0). So, for this experiment, $Y = \sum_{i=1}^n X_i$, where $X_i$ are outcomes of individual tosses.
Here, the outcome of each toss, $X_i$, follows a Bernoulli distribution and the overall outcome $Y$ follows a binomial distribution.
The complete experiment can be thought of as a single sample. Thus, if we repeat the experiment, we can get another value of $Y$, which will form another sample. All possible values of $Y$ will constitute the complete population.
Coming back to the single coin toss, which follows a Bernoulli distribution, the variance is given by $pq$, where $p$ is the probability of head (success) and $q = 1 – p$.
Now, if we look at Variance of $Y$, $V(Y) = V(\sum X_i) = \sum V(X_i)$. But, for all individual Bernoulli experiments, $V(X_i) = pq$. Since there are $n$ tosses or Bernoulli trials in the experiment, $V(Y) = \sum V(X_i) = npq$. This implies that $Y$ has variance $npq$.
Now, the sample proportion is given by $\hat p = \frac Y n$, which gives the 'proportion of success or heads'. Here, $n$ is a constant as we plan to take the same number of coin tosses for all the experiments in the population.
So, $V(\frac Y n) = (\frac {1}{n^2})V(Y) = (\frac {1}{n^2})(npq) = pq/n$.
So, standard error for $\hat p$ (a sample statistic) is $\sqrt{pq/n}$ | Standard error for the mean of a sample of binomial random variables
We can look at this in the following way:
Suppose we are doing an experiment where we need to toss an unbiased coin $n$ times. The overall outcome of the experiment is $Y$ which is the summation of in |
3,014 | Standard error for the mean of a sample of binomial random variables | I think there is also some confusion in the initial post between standard error and standard deviation. Standard deviation is the sqrt of the variance of a distribution; standard error is the standard deviation of the estimated mean of a sample from that distribution, i.e., the spread of the means you would observe if you did that sample infinitely many times. The former is an intrinsic property of the distribution; the latter is a measure of the quality of your estimate of a property (the mean) of the distribution. When you do an experiment of N Bernouilli trials to estimate the unknown probability of success, the uncertainty of your estimated p=k/N after seeing k successes is a standard error of the estimated proportion, sqrt(pq/N) where q=1-p. The true distribution is characterized by a parameter P, the true probability of success. If you did an infinite number of experiments with N trials each and looked at the distribution of successes, it would have mean K=P*N, variance NPQ and standard deviation sqrt(NPQ). | Standard error for the mean of a sample of binomial random variables | I think there is also some confusion in the initial post between standard error and standard deviation. Standard deviation is the sqrt of the variance of a distribution; standard error is the standard | Standard error for the mean of a sample of binomial random variables
I think there is also some confusion in the initial post between standard error and standard deviation. Standard deviation is the sqrt of the variance of a distribution; standard error is the standard deviation of the estimated mean of a sample from that distribution, i.e., the spread of the means you would observe if you did that sample infinitely many times. The former is an intrinsic property of the distribution; the latter is a measure of the quality of your estimate of a property (the mean) of the distribution. When you do an experiment of N Bernoulli trials to estimate the unknown probability of success, the uncertainty of your estimated p=k/N after seeing k successes is a standard error of the estimated proportion, sqrt(pq/N) where q=1-p. The true distribution is characterized by a parameter P, the true probability of success. If you did an infinite number of experiments with N trials each and looked at the distribution of successes, it would have mean K=P*N, variance NPQ and standard deviation sqrt(NPQ). | Standard error for the mean of a sample of binomial random variables
I think there is also some confusion in the initial post between standard error and standard deviation. Standard deviation is the sqrt of the variance of a distribution; standard error is the standard |
3,015 | Do all interactions terms need their individual terms in regression model? | Most of the time this is a bad idea - the main reason is that it no longer makes the model invariant to location shifts. For example, suppose you have a single outcome $y_i$ and two predictors $x_i$ and $z_i$ and specify the model:
$$ y_i = \beta_0 + \beta_1 x_{i} z_i + \varepsilon_i $$
If you were to center the predictors by their means, $x_i z_i$ becomes
$$ (x_i - \overline{x})(z_i - \overline{z}) = x_i z_i - x_{i} \overline{z} - z_{i} \overline{x} + \overline{x} \overline{z}$$
So, you can see that the main effects have been reintroduced into the model.
I've given a heuristic argument here, but this does present a practical issue. As noted in Faraway(2005) on page 114, an additive change in scale changes the model inference when the main effects are left out of the model, whereas this does not happen when the lower order terms are included. It is normally undesirable to have arbitrary things like a location shift cause a fundamental change in the statistical inference (and therefore the conclusions of your inquiry), as can happen when you include polynomial terms or interactions in a model without the lower order effects.
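A small R sketch of this issue (simulated data, so the particular numbers are only illustrative): shifting one predictor changes the inference in the interaction-only model, but not the interaction test in the model that keeps the main effects:
set.seed(5)
x <- rnorm(100); z <- rnorm(100)
y <- 1 + x + z + 0.5 * x * z + rnorm(100)
coef(summary(lm(y ~ I(x * z))))            # interaction-only model
coef(summary(lm(y ~ I((x + 10) * z))))     # same model after shifting x: estimates and p-values change
coef(summary(lm(y ~ x * z)))["x:z", ]      # full model
coef(summary(lm(y ~ I(x + 10) * z)))[4, ]  # full model after the shift: interaction row is unchanged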
Note: There may be special circumstances where you would only want to include the interaction, if the $x_i z_i$ has some particular substantive meaning or if you only observe the product and not the individual variables $x_i, z_i$. But, in that case, one may as well think of the predictor $a_i = x_i z_i$ and proceed with the model
$$ y_i = \alpha_0 + \alpha_1 a_i + \varepsilon_i $$
rather than thinking of $a_i$ as an interaction term. | Do all interactions terms need their individual terms in regression model? | Most of the time this is a bad idea - the main reason is that it no longer makes the model invariant to location shifts. For example, suppose you have a single outcome $y_i$ and two predictors $x_i$ a | Do all interactions terms need their individual terms in regression model?
Most of the time this is a bad idea - the main reason is that it no longer makes the model invariant to location shifts. For example, suppose you have a single outcome $y_i$ and two predictors $x_i$ and $z_i$ and specify the model:
$$ y_i = \beta_0 + \beta_1 x_{i} z_i + \varepsilon_i $$
If you were to center the predictors by their means, $x_i z_i$ becomes
$$ (x_i - \overline{x})(z_i - \overline{z}) = x_i z_i - x_{i} \overline{z} - z_{i} \overline{x} + \overline{x} \overline{z}$$
So, you can see that the main effects have been reintroduced into the model.
I've given a heuristic argument here, but this does present a practical issue. As noted in Faraway(2005) on page 114, an additive change in scale changes the model inference when the main effects are left out of the model, whereas this does not happen when the lower order terms are included. It is normally undesirable to have arbitrary things like a location shift cause a fundamental change in the statistical inference (and therefore the conclusions of your inquiry), as can happen when you include polynomial terms or interactions in a model without the lower order effects.
Note: There may be special circumstances where you would only want to include the interaction, if the $x_i z_i$ has some particular substantive meaning or if you only observe the product and not the individual variables $x_i, z_i$. But, in that case, one may as well think of the predictor $a_i = x_i z_i$ and proceed with the model
$$ y_i = \alpha_0 + \alpha_1 a_i + \varepsilon_i $$
rather than thinking of $a_i$ as an interaction term. | Do all interactions terms need their individual terms in regression model?
Most of the time this is a bad idea - the main reason is that it no longer makes the model invariant to location shifts. For example, suppose you have a single outcome $y_i$ and two predictors $x_i$ a |
3,016 | Do all interactions terms need their individual terms in regression model? | All the answers so far seem to miss a very basic point: the functional form you choose should be flexible enough to capture the features that are scientifically relevant. Models 2-5 impose zero coefficients on some terms without scientific justification. And even if scientifically justified, Model 1 remains appealing because you might as well test for the zero coefficients rather than impose them.
The key is understanding what the restrictions mean. The typical admonition to avoid Models 3-5 is because in most applications the assumptions they impose are scientifically implausible. Model 3 assumes X2 only influences the slope dY/dX1 but not the level. Model 4 assumes X1 only influences the slope dY/dX2 but not the level. And Model 5 assumes neither X1 nor X2 affects the level, but only dY/dX1 or dY/dX2. In most applications these assumptions don't seem reasonable. Model 2 also imposes a zero coefficient but still has some merit. It gives the best linear approximation to the data, which in many cases satisfies the scientific goal. | Do all interactions terms need their individual terms in regression model? | All the answers so far seem to miss a very basic point: the functional form you choose should be flexible enough to capture the features that are scientifically relevant. Models 2-5 impose zero coeff | Do all interactions terms need their individual terms in regression model?
All the answers so far seem to miss a very basic point: the functional form you choose should be flexible enough to capture the features that are scientifically relevant. Models 2-5 impose zero coefficients on some terms without scientific justification. And even if scientifically justified, Model 1 remains appealing because you might as well test for the zero coefficients rather than impose them.
The key is understanding what the restrictions mean. The typical admonition to avoid Models 3-5 is because in most applications the assumptions they impose are scientifically implausible. Model 3 assumes X2 only influences the slope dY/dX1 but not the level. Model 4 assumes X1 only influences the slope dY/dX2 but not the level. And Model 5 assumes neither X1 nor X2 affects the level, but only dY/dX1 or dY/dX2. In most applications these assumptions don't seem reasonable. Model 2 also imposes a zero coefficient but still has some merit. It gives the best linear approximation to the data, which in many cases satisfies the scientific goal. | Do all interactions terms need their individual terms in regression model?
All the answers so far seem to miss a very basic point: the functional form you choose should be flexible enough to capture the features that are scientifically relevant. Models 2-5 impose zero coeff |
3,017 | Do all interactions terms need their individual terms in regression model? | +1 to @Macro. Let me bring out what I think is a similar point that concerns when you have categorical predictors. A lot can depend on how they are coded. For example, reference cell (aka, 'dummy') coding uses 0 & 1, whereas effect coding uses -1, 0 & 1. Consider a simple case with two factors with two levels each, then $x_1x_2$ could be [0, 0, 0, 1] or [1, -1, -1, 1], depending on the coding scheme used. I believe that it is possible to have a situation where only the interaction is 'significant' with one coding scheme, but all terms are 'significant' using the other scheme. This implies that meaningful interpretive decisions would be made based on an arbitrary coding decision that, in fact, your software may have made for you without your knowledge. I recognize that this is a small point, but it's just one more reason that it's typically not a good idea to retain only the interaction (and also not to select a subset of predictors based on p-values, of course). | Do all interactions terms need their individual terms in regression model? | +1 to @Macro. Let me bring out what I think is a similar point that concerns when you have categorical predictors. A lot can depend on how they are coded. For example, reference cell (aka, 'dummy') | Do all interactions terms need their individual terms in regression model?
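A bare-bones numeric sketch of the coding point (simulated data, with the two factors coded by hand so the product columns can be compared directly):
set.seed(6)
f1 <- rep(c(0, 1), each = 40)   # dummy (0/1) coding of factor 1
f2 <- rep(c(0, 1), times = 40)  # dummy (0/1) coding of factor 2
y  <- rnorm(80, mean = f1 * f2)
e1 <- 2 * f1 - 1                # effect (-1/+1) coding of factor 1
e2 <- 2 * f2 - 1                # effect (-1/+1) coding of factor 2
summary(lm(y ~ I(f1 * f2)))     # interaction-only fit with the 0/1 product column
summary(lm(y ~ I(e1 * e2)))     # same data with the -1/+1 product column: different fit and p-values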
+1 to @Macro. Let me bring out what I think is a similar point that concerns when you have categorical predictors. A lot can depend on how they are coded. For example, reference cell (aka, 'dummy') coding uses 0 & 1, whereas effect coding uses -1, 0 & 1. Consider a simple case with two factors with two levels each, then $x_1x_2$ could be [0, 0, 0, 1] or [1, -1, -1, 1], depending on the coding scheme used. I believe that it is possible to have a situation where only the interaction is 'significant' with one coding scheme, but all terms are 'significant' using the other scheme. This implies that meaningful interpretive decisions would be made based on an arbitrary coding decision that, in fact, your software may have made for you without your knowledge. I recognize that this is a small point, but it's just one more reason that it's typically not a good idea to retain only the interaction (and also not to select a subset of predictors based on p-values, of course). | Do all interactions terms need their individual terms in regression model?
+1 to @Macro. Let me bring out what I think is a similar point that concerns when you have categorical predictors. A lot can depend on how they are coded. For example, reference cell (aka, 'dummy') |
3,018 | Do all interactions terms need their individual terms in regression model? | Since you are reviewing a paper you might suggest that the authors discuss the issue of model hierarchy and justify their departure from it.
Here are some references:
Nelder JA. The selection of terms in response-surface models—how strong is the weak-heredity principle? The American Statistician. 1998;52:315–8. http://www.jstor.org/pss/2685433. Accessed 10 June 2010.
Peixoto JL. Hierarchical variable selection in polynomial regression models. The American Statistician. 1987;41:311–3. http://www.jstor.org/pss/2684752. Accessed 10 June 2010.
Peixoto JL. A property of well-formulated polynomial regression models. The American Statistician. 1990;44:26–30. http://www.jstor.org/pss/2684952. Accessed 10 June 2010.
I usually follow hierarchy but depart from it in some situations. For example, if you are testing tire wear versus mileage at several different speeds, your model might look like:
tread depth = intercept + mileage + mileage*speed
but it would not make physical sense to include a main effect of speed because the tire does not know what the speed will be at zero miles.
(On the other hand, you might still want to test for a speed effect because it might indicate that "break-in" effects differ at different speeds. On the other other hand, an even better way to handle break-in would be to get data at zero and at very low mileage and then test for non-linearity. Note that removing the intercept term can be thought of as a special case of violating hierarchy.)
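In R formula notation the two fits would look roughly like this (tires, tread.depth, mileage and speed are hypothetical names, not real data):
# Mileage-by-speed slope without a main effect of speed:
fit.nohier <- lm(tread.depth ~ mileage + mileage:speed, data = tires)
# The usual hierarchical model, for comparison:
fit.hier   <- lm(tread.depth ~ mileage * speed, data = tires)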
I'll also reiterate what someone said above because it's very important: The authors need to make sure they know whether their software is centering the data. The tire model above becomes physically nonsensical if the software replaces mileage with (mileage - mean of mileage).
The same sorts of things are relevant in pharmaceutical stability studies (mentioned tangentially in "Stability Models for Sequential Storage", Emil M. Friedman and Sam C. Shum, AAPS PharmSciTech, Vol. 12, No. 1, March 2011, DOI: 10.1208/s12249-010-9558-x). | Do all interactions terms need their individual terms in regression model? | Since you are reviewing a paper you might suggest that the authors discuss the issue of model hierarchy and justify their departure from it.
Here are some references:
Nelder JA. The selection of te | Do all interactions terms need their individual terms in regression model?
Since you are reviewing a paper you might suggest that the authors discuss the issue of model hierarchy and justify their departure from it.
Here are some references:
Nelder JA. The selection of terms in response-surface models—how strong is the weak-heredity principle? The American Statistician. 1998;52:315–8. http://www.jstor.org/pss/2685433. Accessed 10 June 2010.
Peixoto JL. Hierarchical variable selection in polynomial regression models. The American Statistician. 1987;41:311–3. http://www.jstor.org/pss/2684752. Accessed 10 June 2010.
Peixoto JL. A property of well-formulated polynomial regression models. The American Statistician. 1990;44:26–30. http://www.jstor.org/pss/2684952. Accessed 10 June 2010.
I usually follow hierarchy but depart from it in some situations. For example, if you are testing tire wear versus mileage at several different speeds, your model might look like:
tread depth = intercept + mileage + mileage*speed
but it would not make physical sense to include a main effect of speed because the tire does not know what the speed will be at zero miles.
(On the other hand, you might still want to test for a speed effect because it might indicate that "break-in" effects differ at different speeds. On the other other hand, an even better way to handle break-in would be to get data at zero and at very low mileage and then test for non-linearity. Note that removing the intercept term can be thought of as a special case of violating hierarchy.)
I'll also reiterate what someone said above because it's very important: The authors need to make sure they know whether their software is centering the data. The tire model above becomes physically nonsensical if the software replaces mileage with (mileage - mean of mileage).
The same sorts of things are relevant in pharmaceutical stability studies (mentioned tangentially in "Stability Models for Sequential Storage", Emil M. Friedman and Sam C. Shum, AAPS PharmSciTech, Vol. 12, No. 1, March 2011, DOI: 10.1208/s12249-010-9558-x). | Do all interactions terms need their individual terms in regression model?
Since you are reviewing a paper you might suggest that the authors discuss the issue of model hierarchy and justify their departure from it.
Here are some references:
Nelder JA. The selection of te |
3,019 | Do all interactions terms need their individual terms in regression model? | I have had a real case that illustrates this. In the data, one of the variables represented group with 0-control and 1-treatment. The other predictor represented time period with 0-before treatment and 1-after treatment. The interaction was the main parameter of interest measuring the effect of the treatment, the difference after the treatment in the treatment group above any effect of time measured in the control group. The main effect from group measured the difference in the 2 groups before any treatment, so it could easily be 0 (in a randomized experiment it should be 0, this one was not). The 2nd main effect measures the difference between the before and after time periods in the control group where there was no treatment, so this also makes sense that it could be 0 while the interaction term is non-zero. Of course this depends on how things were coded and a different coding would change the meanings and whether or not the interaction makes sense without the main effects. So it only makes sense to fit the interaction without the main effects in specific cases. | Do all interactions terms need their individual terms in regression model? | I have had a real case that illustrates this. In the data, one of the variables represented group with 0-control and 1-treatment. The other predictor represented time period with 0-before treatment | Do all interactions terms need their individual terms in regression model?
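A sketch of that parameterisation in R (d, group and period are hypothetical 0/1 variables, not the actual data from the study):
fit <- lm(y ~ group + period + group:period, data = d)
# group        : difference between the groups before treatment (plausibly 0)
# period       : before/after difference in the control group (also plausibly 0)
# group:period : the treatment effect of interest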
I have had a real case that illustrates this. In the data, one of the variables represented group with 0-control and 1-treatment. The other predictor represented time period with 0-before treatment and 1-after treatment. The interaction was the main parameter of interest measuring the effect of the treatment, the difference after the treatment in the treatment group above any effect of time measured in the control group. The main effect from group measured the difference in the 2 groups before any treatment, so it could easily be 0 (in a randomized experiment it should be 0, this one was not). The 2nd main effect measures the difference between the before and after time periods in the control group where there was no treatment, so this also makes sense that it could be 0 while the interaction term is non-zero. Of course this depends on how things were coded and a different coding would change the meanings and whether or not the interaction makes sense without the main effects. So it only makes sense to fit the interaction without the main effects in specific cases. | Do all interactions terms need their individual terms in regression model?
I have had a real case that illustrates this. In the data, one of the variables represented group with 0-control and 1-treatment. The other predictor represented time period with 0-before treatment |
3,020 | Do all interactions terms need their individual terms in regression model? | I agree with Peter. I think the rule is folklore. Why could we conceive of a situation where two variables would affect the model only because of an interaction. An analogy in chemistry is that two chemicals are totally inert by themselves but cause an explosion when mixed together. Mathematical/statistical niceties like invariance have nothing to do with a real problem with real data. I just think that when there are a lot of variables to consider there is an awful lot of testing to do if you are going to look at all main effects and most if not all first order interactions. We also almost never look at second order interactions even in small experiments with only a handful of variables. The thinking is that the higher the order of interaction the less likely it is that there is a real effect. So don't look at first or second order interactions if the main effect isn't there. A good rule perhaps but to follow it religiously means overlooking the exceptions and your problem may be an exception. | Do all interactions terms need their individual terms in regression model? | I agree with Peter. I think the rule is folklore. Why could we conceive of a situation where two variables would affect the model only because of an interaction. An analogy in chemistry is that two | Do all interactions terms need their individual terms in regression model?
I agree with Peter. I think the rule is folklore. Why could we conceive of a situation where two variables would affect the model only because of an interaction. An analogy in chemistry is that two chemicals are totally inert by themselves but cause an explosion when mixed together. Mathematical/statistical niceties like invariance have nothing to do with a real problem with real data. I just think that when there are a lot of variables to consider there is an awful lot of testing to do if you are going to look at all main effects and most if not all first order interactions. We also almost never look at second order interactions even in small experiments with only a handful of variables. The thinking is that the higher the order of interaction the less likely it is that there is a real effect. So don't look at first or second order interactions if the main effect isn't there. A good rule perhaps but to follow it religiously means overlooking the exceptions and your problem may be an exception. | Do all interactions terms need their individual terms in regression model?
I agree with Peter. I think the rule is folklore. Why could we conceive of a situation where two variables would affect the model only because of an interaction. An analogy in chemistry is that two |
3,021 | Do all interactions terms need their individual terms in regression model? | [trying to answer a part of the original question which seems left uncovered in most answers: "should AIC, as a model selection criterion be trusted?"]
AIC should be used more as a guideline than a rule that should be taken as gospel.
The effectiveness of AIC (or BIC or any similar 'simple' criterion for model selection) highly depends on the learning algorithm, and the problem.
Think of it this way: the goal of the complexity (number of factors) term in the AIC formula is simple: to avoid selecting models which over-fit. But the simplicity of AIC very often fails to capture the real complexity of the problem itself. This is why there are other practical techniques to avoid over-fitting: for example, cross-validation or adding a regularization term.
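For reference, the criterion being discussed has the familiar form $\mathrm{AIC} = 2k - 2\ln(\hat{L})$, where $k$ is the number of estimated parameters and $\hat{L}$ is the maximised likelihood; the $2k$ term is the complexity penalty referred to here.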
When I use online SGD (stochastic gradient descent) to do linear regression on a data-set with a very large number of inputs, I find AIC to be a terrible predictor of model quality because it excessively penalizes complex models with large number of terms. There are many real life situations where each term has a tiny effect, but together, a large number of them gives strong statistical evidence of an outcome. AIC and BIC model-selection criteria would reject these models and prefer the simpler ones, even though the more complex ones are superior.
In the end, it is the generalization error (roughly: out of sample performance) that counts. AIC can give you some hint of model quality in some relatively simple situations. Just be careful and remember that real life is more often than not, more complex than a simple formula. | Do all interactions terms need their individual terms in regression model? | [trying to answer a part of the original question which seems left uncovered in most answers: "should AIC, as a model selection criterion be trusted?"]
AIC should be used more as a guideline, than a r | Do all interactions terms need their individual terms in regression model?
[trying to answer a part of the original question which seems left uncovered in most answers: "should AIC, as a model selection criterion be trusted?"]
AIC should be used more as a guideline than a rule that should be taken as gospel.
The effectiveness of AIC (or BIC or any similar 'simple' criterion for model selection) highly depends on the learning algorithm, and the problem.
Think of it this way: the goal of the complexity (number of factors) term in the AIC formula is simple: to avoid selecting models which over-fit. But the simplicity of AIC very often fails to capture the real complexity of the problem itself. This is why there are other practical techniques to avoid over-fitting: for example, cross-validation or adding a regularization term.
When I use online SGD (stochastic gradient descent) to do linear regression on a data-set with a very large number of inputs, I find AIC to be a terrible predictor of model quality because it excessively penalizes complex models with large number of terms. There are many real life situations where each term has a tiny effect, but together, a large number of them gives strong statistical evidence of an outcome. AIC and BIC model-selection criteria would reject these models and prefer the simpler ones, even though the more complex ones are superior.
In the end, it is the generalization error (roughly: out of sample performance) that counts. AIC can give you some hint of model quality in some relatively simple situations. Just be careful and remember that real life is more often than not, more complex than a simple formula. | Do all interactions terms need their individual terms in regression model?
[trying to answer a part of the original question which seems left uncovered in most answers: "should AIC, as a model selection criterion be trusted?"]
AIC should be used more as a guideline, than a r |
3,022 | How to actually plot a sample tree from randomForest::getTree()? [closed] | First (and easiest) solution: If you are not keen to stick with classical RF, as implemented in Andy Liaw's randomForest, you can try the party package which provides a different implementation of the original RF algorithm (use of conditional trees and aggregation scheme based on units weight average). Then, as reported on this R-help post, you can plot a single member of the list of trees. It seems to run smoothly, as far as I can tell. Below is a plot of one tree generated by cforest(Species ~ ., data=iris, controls=cforest_control(mtry=2, mincriterion=0)).
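For reference, the workaround from that R-help post looks roughly like the sketch below; the prettytree call and the BinaryTree slots are internal details of party, so treat the exact incantation as an unverified assumption:
library(party)
cf <- cforest(Species ~ ., data = iris,
              controls = cforest_control(mtry = 2, mincriterion = 0))
# Wrap the first tree of the ensemble so that plot() can display it
pt <- party:::prettytree(cf@ensemble[[1]], names(cf@data@get("input")))
nt <- new("BinaryTree")
nt@tree      <- pt
nt@data      <- cf@data
nt@responses <- cf@responses
plot(nt)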
Second (almost as easy) solution: Most tree-based techniques in R (tree, rpart, TWIX, etc.) offer a tree-like structure for printing/plotting a single tree. The idea would be to convert the output of randomForest::getTree to such an R object, even if it is nonsensical from a statistical point of view. Basically, it is easy to access the tree structure from a tree object, as shown below. Please note that it will slightly differ depending on the type of task--regression vs. classification--where in the latter case it will add class-specific probabilities as the last column of the obj$frame (which is a data.frame).
> library(tree)
> tr <- tree(Species ~ ., data=iris)
> tr
node), split, n, deviance, yval, (yprob)
* denotes terminal node
1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
2) Petal.Length < 2.45 50 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
3) Petal.Length > 2.45 100 138.600 versicolor ( 0.00000 0.50000 0.50000 )
6) Petal.Width < 1.75 54 33.320 versicolor ( 0.00000 0.90741 0.09259 )
12) Petal.Length < 4.95 48 9.721 versicolor ( 0.00000 0.97917 0.02083 )
24) Sepal.Length < 5.15 5 5.004 versicolor ( 0.00000 0.80000 0.20000 ) *
25) Sepal.Length > 5.15 43 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
13) Petal.Length > 4.95 6 7.638 virginica ( 0.00000 0.33333 0.66667 ) *
7) Petal.Width > 1.75 46 9.635 virginica ( 0.00000 0.02174 0.97826 )
14) Petal.Length < 4.95 6 5.407 virginica ( 0.00000 0.16667 0.83333 ) *
15) Petal.Length > 4.95 40 0.000 virginica ( 0.00000 0.00000 1.00000 ) *
> tr$frame
var n dev yval splits.cutleft splits.cutright yprob.setosa yprob.versicolor yprob.virginica
1 Petal.Length 150 329.583687 setosa <2.45 >2.45 0.33333333 0.33333333 0.33333333
2 <leaf> 50 0.000000 setosa 1.00000000 0.00000000 0.00000000
3 Petal.Width 100 138.629436 versicolor <1.75 >1.75 0.00000000 0.50000000 0.50000000
6 Petal.Length 54 33.317509 versicolor <4.95 >4.95 0.00000000 0.90740741 0.09259259
12 Sepal.Length 48 9.721422 versicolor <5.15 >5.15 0.00000000 0.97916667 0.02083333
24 <leaf> 5 5.004024 versicolor 0.00000000 0.80000000 0.20000000
25 <leaf> 43 0.000000 versicolor 0.00000000 1.00000000 0.00000000
13 <leaf> 6 7.638170 virginica 0.00000000 0.33333333 0.66666667
7 Petal.Length 46 9.635384 virginica <4.95 >4.95 0.00000000 0.02173913 0.97826087
14 <leaf> 6 5.406735 virginica 0.00000000 0.16666667 0.83333333
15 <leaf> 40 0.000000 virginica 0.00000000 0.00000000 1.00000000
Then, there are methods for pretty printing and plotting those objects. The key functions are a generic tree:::plot.tree method (I put a triple : which allows you to view the code in R directly) relying on tree:::treepl (graphical display) and tree:::treeco (compute nodes coordinates). These functions expect the obj$frame representation of the tree. Other subtle issues: (1) the argument type = c("proportional", "uniform") in the default plotting method, tree:::plot.tree, help to manage vertical distance between nodes (proportional means it is proportional to deviance, uniform mean it is fixed); (2) you need to complement plot(tr) by a call to text(tr) to add text labels to nodes and splits, which in this case means that you will also have to take a look at tree:::text.tree.
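For instance, continuing with the tr object above, a minimal display is:
plot(tr, type = "uniform")  # "uniform" gives fixed vertical spacing between nodes
text(tr)                    # add the split labels to the plot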
The getTree method from randomForest returns a different structure, which is documented in the online help. A typical output is shown below, with terminal nodes indicated by status code (-1). (Again, output will differ depending on the type of task, but only on the status and prediction columns.)
> library(randomForest)
> rf <- randomForest(Species ~ ., data=iris)
> getTree(rf, 1, labelVar=TRUE)
left daughter right daughter split var split point status prediction
1 2 3 Petal.Length 4.75 1 <NA>
2 4 5 Sepal.Length 5.45 1 <NA>
3 6 7 Sepal.Width 3.15 1 <NA>
4 8 9 Petal.Width 0.80 1 <NA>
5 10 11 Sepal.Width 3.60 1 <NA>
6 0 0 <NA> 0.00 -1 virginica
7 12 13 Petal.Width 1.90 1 <NA>
8 0 0 <NA> 0.00 -1 setosa
9 14 15 Petal.Width 1.55 1 <NA>
10 0 0 <NA> 0.00 -1 versicolor
11 0 0 <NA> 0.00 -1 setosa
12 16 17 Petal.Length 5.40 1 <NA>
13 0 0 <NA> 0.00 -1 virginica
14 0 0 <NA> 0.00 -1 versicolor
15 0 0 <NA> 0.00 -1 virginica
16 0 0 <NA> 0.00 -1 versicolor
17 0 0 <NA> 0.00 -1 virginica
If you can manage to convert the above table to the one generated by tree, you will probably be able to customize tree:::treepl, tree:::treeco and tree:::text.tree to suit your needs, though I do not have an example of this approach. In particular, you probably want to get rid of the use of deviance, class probabilities, etc. which are not meaningful in RF. All you want is to set up nodes coordinates and split values. You could use fixInNamespace() for that, but, to be honest, I'm not sure this is the right way to go.
Third (and certainly clever) solution: Write a true as.tree helper
function which will alleviates all of the above "patches". You could then use R's plotting methods or, probably better, Klimt (directly from R) to display individual trees. | How to actually plot a sample tree from randomForest::getTree()? [closed] | First (and easiest) solution: If you are not keen to stick with classical RF, as implemented in Andy Liaw's randomForest, you can try the party package which provides a different implementation of the | How to actually plot a sample tree from randomForest::getTree()? [closed]
First (and easiest) solution: If you are not keen to stick with classical RF, as implemented in Andy Liaw's randomForest, you can try the party package which provides a different implementation of the original RF algorithm (use of conditional trees and aggregation scheme based on units weight average). Then, as reported on this R-help post, you can plot a single member of the list of trees. It seems to run smoothly, as far as I can tell. Below is a plot of one tree generated by cforest(Species ~ ., data=iris, controls=cforest_control(mtry=2, mincriterion=0)).
Second (almost as easy) solution: Most tree-based techniques in R (tree, rpart, TWIX, etc.) offer a tree-like structure for printing/plotting a single tree. The idea would be to convert the output of randomForest::getTree to such an R object, even if it is nonsensical from a statistical point of view. Basically, it is easy to access the tree structure from a tree object, as shown below. Please note that it will slightly differ depending on the type of task--regression vs. classification--where in the latter case it will add class-specific probabilities as the last column of the obj$frame (which is a data.frame).
> library(tree)
> tr <- tree(Species ~ ., data=iris)
> tr
node), split, n, deviance, yval, (yprob)
* denotes terminal node
1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
2) Petal.Length < 2.45 50 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
3) Petal.Length > 2.45 100 138.600 versicolor ( 0.00000 0.50000 0.50000 )
6) Petal.Width < 1.75 54 33.320 versicolor ( 0.00000 0.90741 0.09259 )
12) Petal.Length < 4.95 48 9.721 versicolor ( 0.00000 0.97917 0.02083 )
24) Sepal.Length < 5.15 5 5.004 versicolor ( 0.00000 0.80000 0.20000 ) *
25) Sepal.Length > 5.15 43 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
13) Petal.Length > 4.95 6 7.638 virginica ( 0.00000 0.33333 0.66667 ) *
7) Petal.Width > 1.75 46 9.635 virginica ( 0.00000 0.02174 0.97826 )
14) Petal.Length < 4.95 6 5.407 virginica ( 0.00000 0.16667 0.83333 ) *
15) Petal.Length > 4.95 40 0.000 virginica ( 0.00000 0.00000 1.00000 ) *
> tr$frame
var n dev yval splits.cutleft splits.cutright yprob.setosa yprob.versicolor yprob.virginica
1 Petal.Length 150 329.583687 setosa <2.45 >2.45 0.33333333 0.33333333 0.33333333
2 <leaf> 50 0.000000 setosa 1.00000000 0.00000000 0.00000000
3 Petal.Width 100 138.629436 versicolor <1.75 >1.75 0.00000000 0.50000000 0.50000000
6 Petal.Length 54 33.317509 versicolor <4.95 >4.95 0.00000000 0.90740741 0.09259259
12 Sepal.Length 48 9.721422 versicolor <5.15 >5.15 0.00000000 0.97916667 0.02083333
24 <leaf> 5 5.004024 versicolor 0.00000000 0.80000000 0.20000000
25 <leaf> 43 0.000000 versicolor 0.00000000 1.00000000 0.00000000
13 <leaf> 6 7.638170 virginica 0.00000000 0.33333333 0.66666667
7 Petal.Length 46 9.635384 virginica <4.95 >4.95 0.00000000 0.02173913 0.97826087
14 <leaf> 6 5.406735 virginica 0.00000000 0.16666667 0.83333333
15 <leaf> 40 0.000000 virginica 0.00000000 0.00000000 1.00000000
Then, there are methods for pretty printing and plotting those objects. The key functions are a generic tree:::plot.tree method (I put a triple : which allows you to view the code in R directly) relying on tree:::treepl (graphical display) and tree:::treeco (compute nodes coordinates). These functions expect the obj$frame representation of the tree. Other subtle issues: (1) the argument type = c("proportional", "uniform") in the default plotting method, tree:::plot.tree, help to manage vertical distance between nodes (proportional means it is proportional to deviance, uniform mean it is fixed); (2) you need to complement plot(tr) by a call to text(tr) to add text labels to nodes and splits, which in this case means that you will also have to take a look at tree:::text.tree.
The getTree method from randomForest returns a different structure, which is documented in the online help. A typical output is shown below, with terminal nodes indicated by status code (-1). (Again, output will differ depending on the type of task, but only on the status and prediction columns.)
> library(randomForest)
> rf <- randomForest(Species ~ ., data=iris)
> getTree(rf, 1, labelVar=TRUE)
left daughter right daughter split var split point status prediction
1 2 3 Petal.Length 4.75 1 <NA>
2 4 5 Sepal.Length 5.45 1 <NA>
3 6 7 Sepal.Width 3.15 1 <NA>
4 8 9 Petal.Width 0.80 1 <NA>
5 10 11 Sepal.Width 3.60 1 <NA>
6 0 0 <NA> 0.00 -1 virginica
7 12 13 Petal.Width 1.90 1 <NA>
8 0 0 <NA> 0.00 -1 setosa
9 14 15 Petal.Width 1.55 1 <NA>
10 0 0 <NA> 0.00 -1 versicolor
11 0 0 <NA> 0.00 -1 setosa
12 16 17 Petal.Length 5.40 1 <NA>
13 0 0 <NA> 0.00 -1 virginica
14 0 0 <NA> 0.00 -1 versicolor
15 0 0 <NA> 0.00 -1 virginica
16 0 0 <NA> 0.00 -1 versicolor
17 0 0 <NA> 0.00 -1 virginica
If you can manage to convert the above table to the one generated by tree, you will probably be able to customize tree:::treepl, tree:::treeco and tree:::text.tree to suit your needs, though I do not have an example of this approach. In particular, you probably want to get rid of the use of deviance, class probabilities, etc. which are not meaningful in RF. All you want is to set up nodes coordinates and split values. You could use fixInNamespace() for that, but, to be honest, I'm not sure this is the right way to go.
Third (and certainly clever) solution: Write a true as.tree helper
function which will alleviate all of the above "patches". You could then use R's plotting methods or, probably better, Klimt (directly from R) to display individual trees. | How to actually plot a sample tree from randomForest::getTree()? [closed]
First (and easiest) solution: If you are not keen to stick with classical RF, as implemented in Andy Liaw's randomForest, you can try the party package which provides a different implementation of the |
3,023 | How to actually plot a sample tree from randomForest::getTree()? [closed] | I'm four years late, but if you really want to stick to the randomForest package (and there are some good reasons to do so), and want to actually visualize the tree, you can use the reprtree package.
The package is not super well documented (you can find the docs here), but everything is pretty straightforward. To install the package refer to initialize.R in the repo, so simply run the following:
options(repos='http://cran.rstudio.org')
have.packages <- installed.packages()
cran.packages <- c('devtools','plotrix','randomForest','tree')
to.install <- setdiff(cran.packages, have.packages[,1])
if(length(to.install)>0) install.packages(to.install)
library(devtools)
if(!('reprtree' %in% installed.packages())){
install_github('munoztd0/reprtree')
}
for(p in c(cran.packages, 'reprtree')) eval(substitute(library(pkg), list(pkg=p)))
Then go ahead and make your model and tree:
library(randomForest)
library(reprtree)
model <- randomForest(Species ~ ., data=iris, importance=TRUE, ntree=500, mtry = 2, do.trace=100)
reprtree:::plot.getTree(model)
And there you go! Beautiful and simple.
You can check the github repo to learn about the other methods in the package. In fact, if you check plot.getTree.R, you'll notice that the author uses his own implementation of as.tree() which chl♦ suggested you could build yourself in his answer. This means that you could do this:
tree <- getTree(model, k=1, labelVar=TRUE)
realtree <- reprtree:::as.tree(tree, model)
And then potentially use realtree with other tree plotting packages such as tree. | How to actually plot a sample tree from randomForest::getTree()? [closed] | I'm four years late, but if you really want to stick to the randomForest package (and there are some good reasons to do so), and want to actually visualize the tree, you can use the reprtree package.
| How to actually plot a sample tree from randomForest::getTree()? [closed]
I'm four years late, but if you really want to stick to the randomForest package (and there are some good reasons to do so), and want to actually visualize the tree, you can use the reprtree package.
The package is not super well documented (you can find the docs here), but everything is pretty straightforward. To install the package refer to initialize.R in the repo, so simply run the following:
options(repos='http://cran.rstudio.org')
have.packages <- installed.packages()
cran.packages <- c('devtools','plotrix','randomForest','tree')
to.install <- setdiff(cran.packages, have.packages[,1])
if(length(to.install)>0) install.packages(to.install)
library(devtools)
if(!('reprtree' %in% installed.packages())){
install_github('munoztd0/reprtree')
}
for(p in c(cran.packages, 'reprtree')) eval(substitute(library(pkg), list(pkg=p)))
Then go ahead and make your model and tree:
library(randomForest)
library(reprtree)
model <- randomForest(Species ~ ., data=iris, importance=TRUE, ntree=500, mtry = 2, do.trace=100)
reprtree:::plot.getTree(model)
And there you go! Beautiful and simple.
You can check the github repo to learn about the other methods in the package. In fact, if you check plot.getTree.R, you'll notice that the author uses his own implementation of as.tree() which chl♦ suggested you could build yourself in his answer. This means that you could do this:
tree <- getTree(model, k=1, labelVar=TRUE)
realtree <- reprtree:::as.tree(tree, model)
And then potentially use realtree with other tree plotting packages such as tree. | How to actually plot a sample tree from randomForest::getTree()? [closed]
I'm four years late, but if you really want to stick to the randomForest package (and there are some good reasons to do so), and want to actually visualize the tree, you can use the reprtree package.
|
3,024 | How to actually plot a sample tree from randomForest::getTree()? [closed] | I've created some functions to extract the rules of a tree.
#***********************
#return the rules of a tree
#***********************
getConds<-function(tree){
#store all conditions into a list
conds<-list()
#start by the terminal nodes and find previous conditions
id.leafs<-which(tree$status==-1)
j<-0
for(i in id.leafs){
j<-j+1
prevConds<-prevCond(tree,i)
conds[[j]]<-prevConds$cond
while(prevConds$id>1){
prevConds<-prevCond(tree,prevConds$id)
conds[[j]]<-paste(conds[[j]]," & ",prevConds$cond)
}
if(prevConds$id==1){
conds[[j]]<-paste(conds[[j]]," => ",tree$prediction[i])
}
}
return(conds)
}
#**************************
#find the previous conditions in the tree
#**************************
prevCond<-function(tree,i){
if(i %in% tree$right_daughter){
id<-which(tree$right_daughter==i)
cond<-paste(tree$split_var[id],">",tree$split_point[id])
}
if(i %in% tree$left_daughter){
id<-which(tree$left_daughter==i)
cond<-paste(tree$split_var[id],"<",tree$split_point[id])
}
return(list(cond=cond,id=id))
}
#remove spaces in a word
collapse<-function(x){
x<-sub(" ","_",x)
return(x)
}
data(iris)
require(randomForest)
mod.rf <- randomForest(Species ~ ., data=iris)
tree<-getTree(mod.rf, k=1, labelVar=TRUE)
#rename the name of the column
colnames(tree)<-sapply(colnames(tree),collapse)
rules<-getConds(tree)
print(rules)
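The rules come back as a list with one element per terminal node; if that is awkward downstream, a small base-R sketch (object names are illustrative) flattens and exports them:
rules.vec <- unlist(rules)                                   # one rule string per leaf
head(rules.vec, 3)                                           # quick inspection
write.csv(data.frame(rule = rules.vec), "tree_rules.csv", row.names = FALSE)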
3,025 | What's the difference between momentum based gradient descent and Nesterov's accelerated gradient descent? | Arech's answer about Nesterov momentum is correct, but the code essentially does the same thing. So in this regard the Nesterov method does give more weight to the $lr \cdot g$ term, and less weight to the $v$ term.
To illustrate why Keras' implementation is correct, I'll borrow Geoffrey Hinton's example.
The Nesterov method takes the "gamble -> correction" approach.
$v' = m \cdot v - lr \cdot \nabla(w+m \cdot v)$
$w' = w + v'$
The brown vector is $m \cdot v$ (gamble/jump), the red vector is $-lr \cdot \nabla(w+m \cdot v)$ (correction), and the green vector is $m \cdot v-lr \cdot \nabla(w+m \cdot v)$ (where we should actually move to). $\nabla(\cdot)$ is the gradient function.
The code looks different because it moves by the brown vector instead of the green vector, as the Nesterov method only requires evaluating $\nabla(w+m \cdot v) =: g$ instead of $\nabla(w)$. Therefore in each step we want to
move back to where we were $(1 \rightarrow 0)$
follow the green vector to where we should be $(0 \rightarrow 2)$
make another gamble $(2 \rightarrow 3)$
Keras' code written for short is $p' = p + m \cdot (m \cdot v - lr \cdot g) - lr \cdot g$, and we do some maths
$\begin{align}
p' &= p - m \cdot v + m \cdot v + m \cdot (m \cdot v - lr \cdot g) - lr \cdot g\\
&= p - m \cdot v + m \cdot v - lr \cdot g + m \cdot (m \cdot v - lr \cdot g)\\
&= p - m \cdot v + (m \cdot v-lr \cdot g) + m \cdot (m \cdot v-lr \cdot g)
\end{align}$
and that's exactly $1 \rightarrow 0 \rightarrow 2 \rightarrow 3$. Actually the original code takes a shorter path $1 \rightarrow 2 \rightarrow 3$.
The actual estimated value (green vector) should be $p - m \cdot v$, which should be close to $p$ when learning converges.
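The equivalence argued above is easy to check numerically. The sketch below (written in R, the language used elsewhere in this document, with a made-up objective and step sizes) runs the textbook gamble-then-correct update on $w$ and the Keras-style bookkeeping on $p$, and confirms that $p - m \cdot v$ tracks $w$:
grad <- function(x) x                  # gradient of f(x) = x^2 / 2
m <- 0.9; lr <- 0.1
w <- 5; v_w <- 0                       # textbook NAG state
p <- 5; v_p <- 0                       # Keras-style state: p plays the role of w + m*v
for (t in 1:20) {
  g   <- grad(w + m * v_w)             # textbook NAG evaluates the gradient at the lookahead point
  v_w <- m * v_w - lr * g
  w   <- w + v_w
  g2  <- grad(p)                       # the Keras-style update evaluates the gradient at p directly
  v_p <- m * v_p - lr * g2
  p   <- p + m * v_p - lr * g2
}
all.equal(w, p - m * v_p)              # TRUE: p - m*v recovers the "real" parameter value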
3,026 | What's the difference between momentum based gradient descent and Nesterov's accelerated gradient descent? | It seems to me that the OP's question was already answered, but I would try to give another (hopefully intuitive) explanation about momentum and the difference between Classical Momentum (CM) and Nesterov's Accelerated Gradient (NAG).
tl;dr
Just skip to the image at the end.
NAG_ball's reasoning is another important part, but I am not sure it would be easy to understand without all of the rest.
CM and NAG are both methods for choosing the next vector $\theta$ in parameter space, in order to find a minimum of a function $f(\theta)$.
In other news, lately these two wild sentient balls appeared:
It turns out (according to the observed behavior of the balls, and according to the paper On the importance of initialization and momentum in deep learning, that describes both CM and NAG in section 2) that each ball behaves exactly like one of these methods, and so we would call them "CM_ball" and "NAG_ball":
(NAG_ball is smiling, because he recently watched the end of Lecture 6c - The momentum method, by Geoffrey Hinton with Nitish Srivastava and Kevin Swersky, and thus believes more than ever that his behavior leads to finding a minimum faster.)
Here is how the balls behave:
Instead of rolling like normal balls, they jump between points in parameter space.
Let $\theta_t$ be a ball's $t$-th location in parameter space, and let $v_t$ be the ball's $t$-th jump. Then jumping between points in parameter space can be described by $\theta_t=\theta_{t-1}+v_t$.
Not only do they jump instead of roll, but also their jumps are special: Each jump $v_t$ is actually a Double Jump, which is the composition of two jumps:
Momentum Jump - a jump that uses the momentum from $v_{t-1}$, the last Double Jump.
A small fraction of the momentum of $v_{t-1}$ is lost due to friction with the air. Let $\mu$ be the fraction of the momentum that is left (the balls are quite aerodynamic, so usually $0.9 \le \mu <1$). Then the Momentum Jump is equal to $\mu v_{t-1}$.
(In both CM and NAG, $\mu$ is a hyperparameter called "momentum coefficient".)
Slope Jump - a jump that reminds me of the result of putting a normal ball on a surface - the ball starts rolling in the direction of the steepest slope downward, while the steeper the slope, the larger the acceleration.
Similarly, the Slope Jump is in the direction of the steepest slope downward (the direction opposite to the gradient), and the larger the gradient, the further the jump.
The Slope Jump also depends on $\epsilon$, the level of eagerness of the ball (naturally, $\epsilon>0$): The more eager the ball, the further the Slope Jump would take it.
(In both CM and NAG, $\epsilon$ is a hyperparameter called "learning rate".)
Let $g$ be the gradient in the starting location of the Slope Jump. Then the Slope Jump is equal to $-\epsilon g$.
So for both balls the Double Jump is equal to:$$v_t=\mu v_{t-1} -\epsilon g$$
The only difference between the balls is the order of the two jumps in the Double Jump.
CM_ball didn't think it mattered, so he decided to always start with the Slope Jump.
Thus, CM_ball's Double Jump is:
$$v_{t}=\mu v_{t-1}-\epsilon\nabla f\left(\theta_{t-1}\right)$$
In contrast, NAG_ball thought about it for some time, and then decided to always start with the Momentum Jump.
Therefore, NAG_ball's Double Jump is:
$$v_{t}=\mu v_{t-1}-\epsilon\nabla f\left(\theta_{t-1}+\mu v_{t-1}\right)$$
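Both Double Jumps are easy to try out directly. Here is a toy 1-D sketch in R (the objective, the eagerness $\epsilon$ and the momentum coefficient $\mu$ are arbitrary choices for illustration):
f_grad <- function(theta) 2 * theta            # gradient of f(theta) = theta^2
mu <- 0.9; eps <- 0.1
theta_cm <- theta_nag <- 10                    # both balls start at the same point
v_cm <- v_nag <- 0
for (t in 1:50) {
  # CM_ball: the Slope Jump uses the gradient at the current position
  v_cm <- mu * v_cm - eps * f_grad(theta_cm)
  theta_cm <- theta_cm + v_cm
  # NAG_ball: the Slope Jump uses the gradient after the Momentum Jump
  v_nag <- mu * v_nag - eps * f_grad(theta_nag + mu * v_nag)
  theta_nag <- theta_nag + v_nag
}
c(CM = theta_cm, NAG = theta_nag)              # both approach the minimum at 0; NAG oscillates less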
NAG_ball's reasoning
Whatever jump comes first, my Momentum Jump would be the same.
So I should consider the situation as if I have already made my Momentum Jump, and I am about to make my Slope Jump.
Now, my Slope Jump is conceptually going to start from here, but I can choose whether to calculate what my Slope Jump would be as if it started before the Momentum Jump, or as if it started here.
Thinking about it this way makes it quite clear that the latter is better, as generally, the gradient at some point $\theta$ roughly points you in the direction from $\theta$ to a minimum (with the relatively right magnitude), while the gradient at some other point is less likely to point you in the direction from $\theta$ to a minimum (with the relatively right magnitude).
Finally, yesterday I was fortunate enough to observe each of the balls jumping around in a 1-dimensional parameter space.
I think that looking at their changing positions in the parameter space wouldn't help much with gaining intuition, as this parameter space is a line.
So instead, for each ball I sketched a 2-dimensional graph in which the horizontal axis is $\theta$.
Then I drew $f(\theta)$ using a black brush, and also drew each ball in his first $7$ positions, along with numbers to show the chronological order of the positions.
Lastly, I drew green arrows to show the distance in parameter space (i.e. the horizontal distance in the graph) of each Momentum Jump and Slope Jump.
Appendix 1 - A demonstration of NAG_ball's reasoning
In this mesmerizing gif by Alec Radford, you can see NAG performing arguably better than CM ("Momentum" in the gif).
(The minimum is where the star is, and the curves are contour lines. For an explanation about contour lines and why they are perpendicular to the gradient, see videos 1 and 2 by the legendary 3Blue1Brown.)
An analysis of a specific moment demonstrates NAG_ball's reasoning:
The (long) purple arrow is the momentum sub-step.
The transparent red arrow is the gradient sub-step if it starts before the momentum sub-step.
The black arrow is the gradient sub-step if it starts after the momentum sub-step.
CM would end up in the target of the dark red arrow.
NAG would end up in the target of the black arrow.
Appendix 2 - things/terms I made up (for intuition's sake)
CM_ball
NAG_ball
Double Jump
Momentum Jump
Momentum lost due to friction with the air
Slope Jump
Eagerness of a ball
Me observing the balls yesterday
Appendix 3 - terms I didn't make up
The way CM and NAG behave:
I mostly depended on section 2 in the paper On the importance of initialization and momentum in deep learning.
In addition, An overview of gradient descent optimization algorithms (a blog post by Sebastian Ruder) really helped me understand CM and NAG (and much more).
Momentum coefficient - a term used at least by the paper
Learning rate
3,027 | What's the difference between momentum based gradient descent and Nesterov's accelerated gradient descent? | I don't think so.
There's a good description of Nesterov Momentum (aka Nesterov Accelerated Gradient) properties in, for example, Sutskever, Martens et al., "On the importance of initialization and momentum in deep learning" (2013).
The main difference is that in classical momentum you first correct your velocity and then make a big step according to that velocity (and then repeat), but in Nesterov momentum you first make a step in the velocity direction and then correct the velocity vector based on the new location (then repeat).
i.e. Classical momentum:
vW(t+1) = momentum .* vW(t) - scaling .* gradient_F( W(t) )
W(t+1) = W(t) + vW(t+1)
While Nesterov momentum is this:
vW(t+1) = momentum .* vW(t) - scaling .* gradient_F( W(t) + momentum .* vW(t) )
W(t+1) = W(t) + vW(t+1)
Actually, this makes a huge difference in practice...
3,028 | What's the difference between momentum based gradient descent and Nesterov's accelerated gradient descent? | Added: a Stanford course on neural networks,
cs231n,
gives yet another form of the steps:
v = mu * v_prev - learning_rate * gradient(x) # GD + momentum
v_nesterov = v + mu * (v - v_prev) # keep going, extrapolate
x += v_nesterov
Here v is velocity aka step aka state,
and mu is a momentum factor, typically 0.9 or so.
(v, x and learning_rate can be very long vectors;
with numpy, the code is the same.)
v in the first line is gradient descent with momentum;
v_nesterov extrapolates, keeps going.
For example, with mu = 0.9,
v_prev v --> v_nesterov
---------------
0 10 --> 19
10 0 --> -9
10 10 --> 10
10 20 --> 29
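That little table can be reproduced in one vectorized step (a throwaway R sketch with mu = 0.9, as in the example above):
mu <- 0.9
v_prev <- c(0, 10, 10, 10)
v      <- c(10, 0, 10, 20)
v_nesterov <- v + mu * (v - v_prev)     # keep going, extrapolate
cbind(v_prev, v, v_nesterov)            # last column: 19, -9, 10, 29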
The following description has 3 terms:
term 1 alone is plain gradient descent (GD),
1 + 2 give GD + momentum,
1 + 2 + 3 give Nesterov GD.
Nesterov GD is usually described as alternating
momentum steps $x_t \to y_t$ and gradient steps $y_t \to x_{t+1}$:
$\qquad y_t = x_t + m (x_t - x_{t-1}) \quad $ -- momentum, predictor
$\qquad x_{t+1} = y_t + h\ g(y_t) \qquad $ -- gradient
where $g_t \equiv - \nabla f(y_t)$ is the negative gradient,
and $h$ is stepsize aka learning rate.
Combine these two equations to one in $y_t$ only,
the points at which the gradients are evaluated,
by plugging the second equation into the first, and rearrange terms:
$\qquad y_{t+1} = y_t$
$\qquad \qquad + \ h \ g_t \qquad \qquad \quad $ -- gradient
$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum
$\qquad \qquad + \ m \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum
The last term is the difference between GD with plain momentum,
and GD with Nesterov momentum.
One could use separate momentum terms, say $m$ and $m_{grad}$:
$\qquad \qquad + \ m \ (y_t - y_{t-1}) \qquad $ -- step momentum
$\qquad \qquad + \ m_{grad} \ h \ (g_t - g_{t-1}) \quad $ -- gradient momentum
Then $m_{grad} = 0$ gives plain momentum, $m_{grad} = m$ Nesterov.
$m_{grad} > 0 $ amplifies noise (gradients can be very noisy),
$m_{grad} \sim -.1$ is an IIR smoothing filter.
By the way, momentum and stepsize can vary with time, $m_t$ and $h_t$,
or per component (ada* coordinate descent), or both -- more methods than test cases.
A plot comparing plain momentum with Nesterov momentum on a simple 2d test case,
$(x / [cond, 1] - 100) + ripple \times sin( \pi x )$ :
3,029 | What's the difference between momentum based gradient descent and Nesterov's accelerated gradient descent? | You say :
So to me it seems Nesterov's accelerated gradient descent just gives more weight to the ηg term over the previous weight change term m (compared to plain old momentum).
The answer is : NO.
As you noticed : $p_{new} = p + \beta^2 m - (1+\beta)\eta g$. You might think it is equivalent to an update like this :
Update $m_{t+1}= \beta^2 m_t - (1+\beta) \cdot \eta \cdot g_t$ (where $g_t$ is the gradient of the objective function at $p_t$).
Then update $p_{t+1} = p_t + m_{t+1}$ (like momemtum method).
It is NOT. In fact Nesterov is equivalent to the following update :
$v_{t+1} = \beta v_{t} - \eta( g_t + \beta (g_t - g_{t-1}))$.
$p_{t+1} = p_t + v_{t+1}$
So the difference with momentum is that Nesterov involves $g_t - g_{t-1}$ in the update.
Proof : Nesterov update is :
$m_{t+1} = \beta \cdot m_t - \eta \cdot g_t$.
$v_{t+1} = \beta \cdot m_{t+1} - \eta \cdot g_t$
$p_{t+1} = p_t + v_{t+1}$.
You can write the update rule for $v_{t+1}$: since
$$v_{t} = \beta \cdot m_{t} - \eta \cdot g_{t-1},$$
you can write
$$\beta \cdot m_{t} = v_{t} + \eta \cdot g_{t-1}.$$
Hence
\begin{eqnarray*}
v_{t+1}
&=& \beta \cdot m_{t+1} - \eta \cdot g_t \\
&=& \beta (\beta \cdot m_{t} - \eta g_t) - \eta \cdot g_t \\
&=& \beta (v_{t} + \eta \cdot g_{t-1} - \eta \cdot g_t) - \eta \cdot g_t \quad \square \\
\end{eqnarray*}
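The algebra above is also easy to sanity-check numerically; here is a tiny R sketch with arbitrary values chosen for $\beta$, $\eta$, $m_t$, $g_{t-1}$ and $g_t$:
beta <- 0.9; eta <- 0.1
m_t <- 0.5; g_prev <- 0.3; g <- 0.7
v_t <- beta * m_t - eta * g_prev                             # v_t from the previous step
m_t1 <- beta * m_t - eta * g                                 # Nesterov's update of m
v_direct <- beta * m_t1 - eta * g                            # v_{t+1} from the definition
v_rewritten <- beta * v_t - eta * (g + beta * (g - g_prev))  # the rewritten form
all.equal(v_direct, v_rewritten)                             # TRUE (both give 0.272 here)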
3,030 | What is the difference between discrete data and continuous data? | Discrete data can only take particular values. There may potentially be an infinite number of those values, but each is distinct and there's no grey area in between. Discrete data can be numeric -- like numbers of apples -- but it can also be categorical -- like red or blue, or male or female, or good or bad.
Continuous data are not restricted to defined separate values, but can occupy any value over a continuous range. Between any two continuous data values, there may be an infinite number of others. Continuous data are always essentially numeric.
It sometimes makes sense to treat discrete data as continuous and the other way around:
For example, something like height is continuous, but often we don't really care too much about tiny differences and instead group heights into a number of discrete bins, e.g. only measuring to the nearest centimetre.
Conversely, if we're counting large amounts of some discrete entity, e.g. grains of rice, or termites, or pennies in the economy, we may choose not to think of 2,000,006 and 2,000,008 as crucially different values but instead as nearby points on an approximate continuum.
It can also sometimes be useful to treat numeric data as categorical, eg: underweight, normal, obese. This is usually just another kind of binning.
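That kind of binning is a one-liner in R; for instance (the heights and cut points below are made up purely for illustration):
height_cm <- c(151.2, 163.8, 170.0, 182.5, 176.4)                 # continuous measurements
height_bin <- cut(height_cm, breaks = seq(150, 190, by = 10),
                  labels = c("150-160", "160-170", "170-180", "180-190"))
table(height_bin)                                                 # now a discrete (categorical) variable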
It seldom makes sense to consider categorical data as continuous.
3,031 | What is the difference between discrete data and continuous data? | Data is always discrete. Given a sample of n values on a variable, the maximum number of distinct values the variable can take is equal to n. See this quote
All actual sample spaces are discrete, and all observable random
variables have discrete distributions. The continuous distribution is
a mathematical construction, suitable for mathematical treatment,
but not practically observable. E.J.G. Pitman (1979, p. 1).
Data on a variable are typically assumed to be drawn from a random variable.
The random variable is continuous over a range if there is an infinite number of possible values that the variable can take between any two different points in the range.
For example, height, weight, and time are typically assumed to be continuous.
Of course, any measurement of these variables will be finitely accurate and in some
sense discrete.
It is useful to distinguish between ordered (i.e., ordinal), unordered (i.e., nominal),
and binary discrete variables.
Some introductory textbooks confuse a continuous variable with a numeric variable.
For example, a score on a computer game is discrete even though it is numeric.
Some introductory textbooks confuse a ratio variable with continuous variables. A count variable is a ratio variable, but it is not continuous.
In actual practice, a variable is often treated as continuous when it can take on a sufficiently large number of different values.
References
Pitman, E. J. G. 1979. Some basic theory for statistical inference. London: Chapman and Hall. Note: I found the quote in the introduction of Chapter 2 of Murray Aitkin's book Statistical Inference: An Integrated Bayesian/Likelihood Approach
3,032 | What is the difference between discrete data and continuous data? | Temperatures are continuous. It can be 23 degrees, 23.1 degrees, 23.100004 degrees.
Sex is discrete. You can only be male or female (in classical thinking, anyway); something you could represent with a whole number like 1, 2, etc.
The difference is important as many statistical and data mining algorithms can handle one type but not the other. For example in regular regression, the Y must be continuous. In logistic regression the Y is discrete.
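In R that distinction shows up directly in the modelling function you reach for (a small illustrative sketch using two datasets that ship with base R):
fit_cont <- lm(weight ~ height, data = women)                # continuous response: ordinary regression
mtcars$am <- factor(mtcars$am)                               # 0/1 transmission type, treated as discrete
fit_disc <- glm(am ~ mpg, data = mtcars, family = binomial)  # discrete (binary) response: logistic regression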
3,033 | What is the difference between discrete data and continuous data? | Discrete Data can only take certain values.
Example: the number of students in a class (you can't have half a student).
Continuous Data is data that can take any value (within a range)
Examples:
A person's height: could be any value (within the range of human heights), not just certain fixed heights,
Time in a race: you could even measure it to fractions of a second,
A dog's weight,
The length of a leaf,
The weight of a person,
3,034 | What is the difference between discrete data and continuous data? | In a database we always store data in discrete form, even when the quantity being measured is continuous by nature. Why emphasize the nature of the data? Because the data's distribution is what guides the analysis: if the quantity is continuous by nature, I suggest analyzing it with continuous methods.
Take MP3 as an example: even though "sound" is analog, it is stored in a digital format, yet we should still analyze it as an analog signal.
3,035 | What is the difference between discrete data and continuous data? | On the one hand, from a practical point of view I do agree with Jeromy Anglim's answer. In the end we are most of the time dealing with discrete variables – although from a theoretical point of view they are continuous – and that has a real impact for instance for classification. Recall Strobl's paper indicating that Random Forests is biased towards variables with multiple cutting points (higher accuracy but potentially similar nature). From my personal experience probabilistic neural networks may present also a bias when variables present different accuracy unless they are of the same type (i.e., continuous). On the other hand, from a theoretical point of view the classical classification (e.g., continuous, discrete, nominal etc.) is, IMHO, right. In accordance I think that the source name of Quinlan’s paper describing the M5 algorithm, which is a ‘regressor’, is a great choice. So the definition and the implications of continuous vs. discrete are relevant depending on the ‘environment’.
Refs:
Quinlan J.R. (1992). Learning with continuous classes. In: The 5th Australian Joint Conference on AI. Sydney (Australia), 343–348.
Strobl C., Boulesteix A.-L., Zeileis A., & Hothorn T. (2007). Bias in random forest variable importance measures: illustrations, sources and a solution. BMC Bioinformatics, 8, 25. doi: 10.1186/1471-2105-8-25
3,036 | What is the difference between discrete data and continuous data? | Discrete data take particular values, while continuous data are not restricted to separate values.
Discrete data are distinct, with no grey area in between, while continuous data can occupy any value over a continuous range.
3,037 | What is the difference between discrete data and continuous data? | Discrete data
They can take only particular values, and they are numeric.
3,038 | What is the difference between discrete data and continuous data? | Discrete data can take on only integer values whereas continuous data can take on any value. For instance the number of cancer patients treated by a hospital each year is discrete but your weight is continuous. Some data are continuous but measured in a discrete way e.g. your age. It is common to report your age as say, 31.
3,039 | What is the difference between discrete data and continuous data? | Discrete data particularly concern finite (countable) values, while continuous data concern infinitely many possible values.
3,040 | Complete substantive examples of reproducible research using R | Frank Harrell has been beating the drum on reproducible research and reports for many, many years. You could start
at this wiki page which lists plenty of other resources, including published research and also covers Charles Geyer's page.
3,041 | Complete substantive examples of reproducible research using R | The journal Biostatistics has an Associate Editor for Reproducibility, and all its articles are marked:
Reproducible Research
Our reproducible research policy is for papers in the journal to be
kite-marked D if the data on which they are based are freely
available, C if the authors’ code is freely available, and R if both
data and code are available, and our Associate Editor for
Reproducibility is able to use these to reproduce the results in the
paper. Data and code are published electronically on the journal’s
website as Supplementary Materials.
http://biostatistics.oxfordjournals.org/
How good an idea is that?
http://biostatistics.oxfordjournals.org/content/12/1/18.abstract comes with an R package in the supplementaries that does the analysis - haven't tried it myself yet. Also, can't find out where the openness rating is specified. Am emailing the associate editor with some questions...
[edit]
Roger Peng the associate editor tells me there probably is no way of finding the reproducible papers without getting the PDF. He pointed me at this one which has a nice big 'R' on it (which does not mean R-rated like movies) for reproducibility:
http://biostatistics.oxfordjournals.org/content/10/3/409.abstract
Of course the journal itself isn't free... #fail
Barry
3,042 | Complete substantive examples of reproducible research using R | Irreproducibility of NCI60 Predictors of Chemotherapy
This is a reproducible analysis showing the lack of reproducibility of a paper that has been in the news. A clinical trial based on the false conclusions of the irreproducible paper was suspended, re-instated, suspended again, ... It's a good example of reproducible analysis in the news.
3,043 | Complete substantive examples of reproducible research using R | I have a few such examples on my research papers page. (I am not allowed to post more than one hyperlink as a new member. So I'll just describe the papers on that site.)
(1) "Making Effects Manifest in Randomized Experiments" uses R's vignette system.
(2) "Attributing Effects to a Cluster Randomized Get-Out-The-Vote Campaign" was a more complex paper involving some time consuming simulations. We used a Makefile based system and posted it to the Dataverse
(3) "EDA for HLM" was my earliest attempt. Here I just put the data and associated Sweave files in a tarball.
One problem we discovered when creating our JASA archive was that versions and defaults of CRAN packages changed. So, in that archive, we also include the versions of the packages that we used. The vignette based system will probably break as folks change their packages (not sure how to include extra packages within the package that is the Compendium).
Finally, I wonder about what to do when R itself changes. Are there ways to produce, say, a virtual machine that reproduces the entire computational environment used for a paper such that the virtual machine is not enormous?
Anyway, I hope that these examples help. At least they show some of my own experiments in this area.
(Here are some plain text hyperlinks.)
[2]: http://jakebowers.org/manifesteffects-compendium-howto.txt
[3]: http://hdl.handle.net/1902.1/12174
[4]: http://hdl.handle.net/1902.1/13376 | Complete substantive examples of reproducible research using R | I have a few such examples on my research papers page. (I am not allowed to post more than one hyperlink as a new member. So I'll just describe the papers on that site.)
(1) "Making Effects Manifest i | Complete substantive examples of reproducible research using R
I have a few such examples on my research papers page. (I am not allowed to post more than one hyperlink as a new member. So I'll just describe the papers on that site.)
(1) "Making Effects Manifest in Randomized Experiments" uses R's vignette system.
(2) "Attributing Effects to a Cluster Randomized Get-Out-The-Vote Campaign" was a more complex paper involving some time consuming simulations. We used a Makefile based system and posted it to the Dataverse
(3) "EDA for HLM" was my earliest attempt. Here I just put the data and associated Sweave files in a tarball.
One problem we discovered when creating our JASA archive was that versions and defaults of CRAN packages changed. So, in that archive, we also include the versions of the packages that we used. The vignette based system will probably break as folks change their packages (not sure how to include extra packages within the package that is the Compendium).
Finally, I wonder about what to do when R itself changes. Are there ways to produce, say, a virtual machine that reproduces the entire computational environment used for a paper such that the virtual machine is not enormous?
Anyway, I hope that these examples help. At least they show some of my own experiments in this area.
(Here are some plain text hyperlinks.)
[2]: http://jakebowers.org/manifesteffects-compendium-howto.txt
[3]: http://hdl.handle.net/1902.1/12174
[4]: http://hdl.handle.net/1902.1/13376 | Complete substantive examples of reproducible research using R
I have a few such examples on my research papers page. (I am not allowed to post more than one hyperlink as a new member. So I'll just describe the papers on that site.)
(1) "Making Effects Manifest i |
3,044 | Complete substantive examples of reproducible research using R | Koenker and Zeileis provide a webpage with a relatively complete example.
They share:
Rnw (Sweave code)
R analysis code
Final PDF
Discussion of version control issues | Complete substantive examples of reproducible research using R | Koenker and Zeileis provide a webpage with a relatively complete example.
They share:
Rnw (Sweave code)
R analysis code
Final PDF
Discussion of version control issues | Complete substantive examples of reproducible research using R
Koenker and Zeileis provide a webpage with a relatively complete example.
They share:
Rnw (Sweave code)
R analysis code
Final PDF
Discussion of version control issues | Complete substantive examples of reproducible research using R
Koenker and Zeileis provide a webpage with a relatively complete example.
They share:
Rnw (Sweave code)
R analysis code
Final PDF
Discussion of version control issues |
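Since Sweave files come up repeatedly in these answers but none is shown inline, here is a minimal, self-contained .Rnw sketch (not taken from the Koenker and Zeileis materials); it uses only the built-in cars dataset so it can be compiled anywhere:

```latex
% example.Rnw -- compile with:  R CMD Sweave example.Rnw  and then  pdflatex example.tex
\documentclass{article}
\begin{document}

\section*{A tiny reproducible analysis}

<<model, echo=TRUE>>=
fit <- lm(dist ~ speed, data = cars)  # 'cars' ships with R, so no external data needed
coef(fit)
@

The estimated slope is \Sexpr{round(coef(fit)["speed"], 2)} ft per mph.

<<scatter, fig=TRUE, echo=FALSE>>=
plot(dist ~ speed, data = cars)
abline(fit)
@

\end{document}
```

Every number and figure in the compiled PDF is regenerated from the data each time the document is built, which is the core of the approach the answers in this thread describe.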
3,045 | Complete substantive examples of reproducible research using R | We wrote a paper explaining how to use R/Bioconductor when analysing microarray data. The paper was written in Sweave and all the code used to generate the graphs is included as supplementary material.
Gillespie, C. S., Lei, G., Boys, R. J., Greenall, A. J., Wilkinson, D. J., 2010. Analysing yeast time course microarray data using BioConductor: a case study using yeast2 Affymetrix arrays BMC Research Notes, 3:81. | Complete substantive examples of reproducible research using R | We wrote a paper explaining how to use R/Bioconductor when analysing microarray data. The paper was written in Sweave and all the code used to generate the graphs is included as supplementary material | Complete substantive examples of reproducible research using R
We wrote a paper explaining how to use R/Bioconductor when analysing microarray data. The paper was written in Sweave and all the code used to generate the graphs is included as supplementary material.
Gillespie, C. S., Lei, G., Boys, R. J., Greenall, A. J., Wilkinson, D. J., 2010. Analysing yeast time course microarray data using BioConductor: a case study using yeast2 Affymetrix arrays BMC Research Notes, 3:81. | Complete substantive examples of reproducible research using R
We wrote a paper explaining how to use R/Bioconductor when analysing microarray data. The paper was written in Sweave and all the code used to generate the graphs is included as supplementary material |
3,046 | Complete substantive examples of reproducible research using R | Charles Geyer's page on Sweave has an example from a thesis, which meets some of your requirements (the raw data is simply from an R package, but the R/sweave code and final PDF are available):
A paper on the theory in Yun Ju Sung's thesis, Monte Carlo Likelihood Inference for Missing Data Models (preprint) contained computing examples. Every number in the paper and every plot was taken (by cut-and-paste, I must admit) from a "supplementary materials" document done in Sweave.
(The source file is linked under the "Supplementary Materials for a Paper" section.)
I know I've come across at least one R example browsing the ReproducibleResearch.net material page before, but unfortunately didn't bookmark it. | Complete substantive examples of reproducible research using R | Charles Geyer's page on Sweave has an example from a thesis, which meets some of your requirements (the raw data is simply from an R package, but the R/sweave code and final PDF are available):
A pap | Complete substantive examples of reproducible research using R
Charles Geyer's page on Sweave has an example from a thesis, which meets some of your requirements (the raw data is simply from an R package, but the R/sweave code and final PDF are available):
A paper on the theory in Yun Ju Sung's thesis, Monte Carlo Likelihood Inference for Missing Data Models (preprint) contained computing examples. Every number in the paper and every plot was taken (by cut-and-paste, I must admit) from a "supplementary materials" document done in Sweave.
(The source file is linked under the "Supplementary Materials for a Paper" section.)
I know I've come across at least one R example browsing the ReproducibleResearch.net material page before, but unfortunately didn't bookmark it. | Complete substantive examples of reproducible research using R
Charles Geyer's page on Sweave has an example from a thesis, which meets some of your requirements (the raw data is simply from an R package, but the R/sweave code and final PDF are available):
A pap |
3,047 | Complete substantive examples of reproducible research using R | Simon Jackman has a particularly useful example of analysing the results of a survey: "Americans and Australians 10 years after 9/11". It has multiple examples of integrating tables and figures.
He has made the Sweave document and PDF report available via this blog post.
While the raw data is not supplied (as far as I can tell), so it's not possible to run the actual Sweave examples, I think a fair bit can be learned from studying the Sweave code. | Complete substantive examples of reproducible research using R | Simon Jackman has a particularly useful example of analysing the results of a survey: "Americans and Australians 10 years after 9/11". It has multiple examples of integrating tables and figures.
He h | Complete substantive examples of reproducible research using R
Simon Jackman has a particularly useful example of analysing the results of a survey: "Americans and Australians 10 years after 9/11". It has multiple examples of integrating tables and figures.
He has made the Sweave document and PDF report available via this blog post.
While the raw data is not supplied (as far as I can tell), so it's not possible to run the actual Sweave examples, I think a fair bit can be learned from studying the Sweave code. | Complete substantive examples of reproducible research using R
Simon Jackman has a particularly useful example of analysing the results of a survey: "Americans and Australians 10 years after 9/11". It has multiple examples of integrating tables and figures.
He h |
3,048 | Complete substantive examples of reproducible research using R | Neil Saunders analysed online interactions associated with a conference.
Several properties which make it a useful Sweave example include:
Rnw file is provided
Graphs are generated using ggplot
Good size and easily comprehensible domain
The materials are available here:
The blog post
The github repository
Rnw file on github | Complete substantive examples of reproducible research using R | Neil Saunders analysed online interactions associated with a conference.
Several properties which make it a useful Sweave example include:
Rnw file is provided
Graphs are generated using ggplot
Good | Complete substantive examples of reproducible research using R
Neil Saunders analysed online interactions associated with a conference.
Several properties which make it a useful Sweave example include:
Rnw file is provided
Graphs are generated using ggplot
Good size and easily comprehensible domain
The materials are available here:
The blog post
The github repository
Rnw file on github | Complete substantive examples of reproducible research using R
Neil Saunders analysed online interactions associated with a conference.
Several properties which make it a useful Sweave example include:
Rnw file is provided
Graphs are generated using ggplot
Good |
3,049 | Complete substantive examples of reproducible research using R | Also look at Journal Of Statistical Software; they encourage making papers in Sweave. | Complete substantive examples of reproducible research using R | Also look at Journal Of Statistical Software; they encourage making papers in Sweave. | Complete substantive examples of reproducible research using R
Also look at Journal Of Statistical Software; they encourage making papers in Sweave. | Complete substantive examples of reproducible research using R
Also look at Journal Of Statistical Software; they encourage making papers in Sweave. |
3,050 | Complete substantive examples of reproducible research using R | I have found good ones in the past and will post once I dig them up, but some quick general suggestions:
You may be able to find some interesting examples by searching google with keywords and ext:rnw (which will search for files with the sweave extension). Here's an example search. This is the third result from my search: http://www.ne.su.se/paper/araietal_source.Rnw. Here's another example from my search: http://www.stat.umn.edu/geyer/gdor/.
Many R packages have interesting vignettes which essentially amount to the same thing. An example: https://r-forge.r-project.org/scm/viewvc.php/paper/maxLik.Rnw | Complete substantive examples of reproducible research using R | I have found good ones in the past and will post once I dig them up, but some quick general suggestions:
You may be able to find some interesting examples by searching google with keywords and ext:rn | Complete substantive examples of reproducible research using R
I have found good ones in the past and will post once I dig them up, but some quick general suggestions:
You may be able to find some interesting examples by searching google with keywords and ext:rnw (which will search for files with the sweave extension). Here's an example search. This is the third result from my search: http://www.ne.su.se/paper/araietal_source.Rnw. Here's another example from my search: http://www.stat.umn.edu/geyer/gdor/.
Many R packages have interesting vignettes which essentially amount to the same thing. An example: https://r-forge.r-project.org/scm/viewvc.php/paper/maxLik.Rnw | Complete substantive examples of reproducible research using R
I have found good ones in the past and will post once I dig them up, but some quick general suggestions:
You may be able to find some interesting examples by searching google with keywords and ext:rn |
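The vignette suggestion in the answer above can also be explored from within R itself, without hunting for .Rnw files on the web; a small sketch (package and vignette names are only placeholders):

```r
browseVignettes()              # all vignettes of installed packages, in a browser
vignette(package = "grid")     # list the vignettes shipped with one package
# vignette("some-vignette-name", package = "somePackage")  # then open one by name
```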
3,051 | Complete substantive examples of reproducible research using R | Robert Gentleman wrote a paper called "Reproducible Research: A Bioinformatics Case Study"
It implements a short set of analyses as an R Package and uses Sweave.
It also discusses the use of Sweave more generally.
See the "Related Files" section of the article page for an archive file of all files and folders used.
Reference:
Gentleman, Robert (2005) "Reproducible Research: A Bioinformatics Case Study," Statistical Applications in Genetics and Molecular Biology: Vol. 4 : Iss. 1, Article 2.
DOI: 10.2202/1544-6115.1034
Available at: http://www.bepress.com/sagmb/vol4/iss1/art2 | Complete substantive examples of reproducible research using R | Robert Gentleman wrote a paper called "Reproducible Research: A Bioinformatics Case Study"
It implements a short set of analyses as an R Package and uses Sweave.
It also discusses the use of Sweave mo | Complete substantive examples of reproducible research using R
Robert Gentleman wrote a paper called "Reproducible Research: A Bioinformatics Case Study"
It implements a short set of analyses as an R Package and uses Sweave.
It also discusses the use of Sweave more generally.
See the "Related Files" section of the article page for an archive file of all files and folders used.
Reference:
Gentleman, Robert (2005) "Reproducible Research: A Bioinformatics Case Study," Statistical Applications in Genetics and Molecular Biology: Vol. 4 : Iss. 1, Article 2.
DOI: 10.2202/1544-6115.1034
Available at: http://www.bepress.com/sagmb/vol4/iss1/art2 | Complete substantive examples of reproducible research using R
Robert Gentleman wrote a paper called "Reproducible Research: A Bioinformatics Case Study"
It implements a short set of analyses as an R Package and uses Sweave.
It also discusses the use of Sweave mo |
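A rough sketch of the "analysis as an R package" (compendium) idea from Gentleman's paper, using only base-R tooling; the object and package names are made up for illustration:

```r
# Bundle the data and the analysis code into a package skeleton, so the whole
# analysis can be built, checked, and distributed like ordinary software.
analysis_data <- data.frame(x = rnorm(20), y = rnorm(20))
run_analysis  <- function(d) coef(lm(y ~ x, data = d))

utils::package.skeleton(
  name = "myCompendium",
  list = c("analysis_data", "run_analysis")
)
# Put the Sweave document describing the analysis under myCompendium/vignettes/
# and build the bundle with "R CMD build myCompendium".
```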
3,052 | Complete substantive examples of reproducible research using R | http://genome.cshlp.org/content/early/2011/06/09/gr.117523.110/suppl/DC1
A nice paper, by a lab mate of mine. Our PI was pretty pleased when something resembling fan mail came in for this. Now all publications from the group have the supplemental methods laid out in LaTeX/Sweave. Some of the papers, too (can't decide whether to keep mine in LyX/Sweave or fold and just do the supplementals in Sweave). | Complete substantive examples of reproducible research using R | http://genome.cshlp.org/content/early/2011/06/09/gr.117523.110/suppl/DC1
A nice paper, by a lab mate of mine. Our PI was pretty pleased when something resembling fan mail came in for this. Now all p | Complete substantive examples of reproducible research using R
http://genome.cshlp.org/content/early/2011/06/09/gr.117523.110/suppl/DC1
A nice paper, by a lab mate of mine. Our PI was pretty pleased when something resembling fan mail came in for this. Now all publications from the group have the supplemental methods laid out in LaTeX/Sweave. Some of the papers, too (can't decide whether to keep mine in LyX/Sweave or fold and just do the supplementals in Sweave). | Complete substantive examples of reproducible research using R
http://genome.cshlp.org/content/early/2011/06/09/gr.117523.110/suppl/DC1
A nice paper, by a lab mate of mine. Our PI was pretty pleased when something resembling fan mail came in for this. Now all p |
3,053 | Complete substantive examples of reproducible research using R | Looking for examples and practices is a good way to learn, but I just wanted to mention that reproducibility is not only about being able to re-run a script; it also involves code style and structure, minimization of side effects in core functions, etc.
I personally found that Chambers' book Software for Data Analysis gives a deeper understanding of techniques that help avoid reliability and reproducibility problems at the level of the R code itself. | Complete substantive examples of reproducible research using R | Looking for examples and practices is a good way to learn, but I just wanted to mention that reproducibility is not only about being able to re-run a script; it also involves code style and structure, minim | Complete substantive examples of reproducible research using R
Looking for examples and practices is a good way to learn, but I just wanted to mention that reproducibility is not only about being able to re-run a script; it also involves code style and structure, minimization of side effects in core functions, etc.
I personally found that Chambers' book Software for Data Analysis gives a deeper understanding of techniques that help avoid reliability and reproducibility problems at the level of the R code itself. | Complete substantive examples of reproducible research using R
Looking for examples and practices is a good way to learn, but I just wanted to mention that reproducibility is not only about being able to re-run a script; it also involves code style and structure, minim
3,054 | Complete substantive examples of reproducible research using R | If you still need a great example of a fully REPRODUCIBLE analysis plus a PAPER, use this repo.
@jscamac did a great job of making his analysis reproducible, and I personally validated it.
You can learn how to use R-specific tools, such as the remake package, to ensure reproducibility.
Watch out: the calculations take about one hour to complete.
It's all scripted and produces a LaTeX paper with figures at the end. | Complete substantive examples of reproducible research using R | If you still need a great example of a fully REPRODUCIBLE analysis plus a PAPER, use this repo.
@jscamac did a great job of making his analysis reproducible, and I personally validated it.
You can | Complete substantive examples of reproducible research using R
If you still need a great example of a fully REPRODUCIBLE analysis plus a PAPER, use this repo.
@jscamac did a great job of making his analysis reproducible, and I personally validated it.
You can learn how to use R-specific tools, such as the remake package, to ensure reproducibility.
Watch out: the calculations take about one hour to complete.
It's all scripted and produces a LaTeX paper with figures at the end. | Complete substantive examples of reproducible research using R
If you still need a great example of a fully REPRODUCIBLE analysis plus a PAPER, use this repo.
@jscamac did a great job of making his analysis reproducible, and I personally validated it.
You can
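For readers who have not seen remake before, the typical entry point looks roughly like this (a sketch only: remake is a GitHub-only package, the target names below are hypothetical, and the real targets live in the repository's remake.yml, so check its README for details):

```r
# install.packages("remotes")
# remotes::install_github("richfitz/remake")   # remake is not on CRAN

remake::make()           # build all targets declared in remake.yml
remake::make("figures")  # "figures" is a hypothetical target name; use one from the repo
```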
3,055 | What are the differences between 'epoch', 'batch', and 'minibatch'? | Epoch means one pass over the full training set
Batch means that you use all your data to compute the gradient during one iteration.
Mini-batch means you only take a subset of all your data during one iteration. | What are the differences between 'epoch', 'batch', and 'minibatch'? | Epoch means one pass over the full training set
Batch means that you use all your data to compute the gradient during one iteration.
Mini-batch means you only take a subset of all your data during one | What are the differences between 'epoch', 'batch', and 'minibatch'?
Epoch means one pass over the full training set
Batch means that you use all your data to compute the gradient during one iteration.
Mini-batch means you only take a subset of all your data during one iteration. | What are the differences between 'epoch', 'batch', and 'minibatch'?
Epoch means one pass over the full training set
Batch means that you use all your data to compute the gradient during one iteration.
Mini-batch means you only take a subset of all your data during one |
3,056 | What are the differences between 'epoch', 'batch', and 'minibatch'? | One epoch typically means your algorithm sees every training instance once. Now assuming you have $n$ training instances:
If you run batch update, every parameter update requires your algorithm see each of the $n$ training instances exactly once, i.e., every epoch your parameters are updated once.
If you run mini-batch update with batch size = $b$, every parameter update requires your algorithm see $b$ of $n$ training instances, i.e., every epoch your parameters are updated about $n/b$ times.
If you run SGD update, every parameter update requires your algorithm see 1 of $n$ training instances, i.e., every epoch your parameters are updated about $n$ times. | What are the differences between 'epoch', 'batch', and 'minibatch'? | One epoch typically means your algorithm sees every training instance once. Now assuming you have $n$ training instances:
If you run batch update, every parameter update requires your algorithm see e | What are the differences between 'epoch', 'batch', and 'minibatch'?
One epoch typically means your algorithm sees every training instance once. Now assuming you have $n$ training instances:
If you run batch update, every parameter update requires your algorithm see each of the $n$ training instances exactly once, i.e., every epoch your parameters are updated once.
If you run mini-batch update with batch size = $b$, every parameter update requires your algorithm see $b$ of $n$ training instances, i.e., every epoch your parameters are updated about $n/b$ times.
If you run SGD update, every parameter update requires your algorithm see 1 of $n$ training instances, i.e., every epoch your parameters are updated about $n$ times. | What are the differences between 'epoch', 'batch', and 'minibatch'?
One epoch typically means your algorithm sees every training instance once. Now assuming you have $n$ training instances:
If you run batch update, every parameter update requires your algorithm see e |
3,057 | What are the differences between 'epoch', 'batch', and 'minibatch'? | An epoch is typically one loop over the entire dataset.
A batch or minibatch refers to equally sized subsets of the dataset over which the gradient is calculated and weights updated.
i.e. for a dataset of size $n$:
Batch Gradient Descent: the gradient is calculated on the entire dataset, giving $1$ weight update per epoch.
Minibatch Gradient Descent: the gradient is calculated on consecutive subsets of the dataset, giving $\lceil\frac{n}{\text{size of minibatch}}\rceil$ weight updates per epoch.
Stochastic Gradient Descent*: the gradient is calculated on each sample of the dataset, giving $n$ weight updates per epoch.
The term batch itself is ambiguous however and can refer to either batch gradient descent or the size of a minibatch.
* Equivalent to minibatch with a batch-size of 1.
Why use minibatches?
It may be infeasible (due to memory/computational constraints) to calculate the gradient over the entire dataset, so smaller minibatches (as opposed to a single batch) may be used instead. At its extreme one can recalculate the gradient over each individual sample in the dataset.
If you perform this iteratively (i.e. re-calculate over the entire dataset multiple times), each iteration is referred to as an epoch. | What are the differences between 'epoch', 'batch', and 'minibatch'? | An epoch is typically one loop over the entire dataset.
A batch or minibatch refers to equally sized subsets of the dataset over which the gradient is calculated and weights updated.
i.e. for a datase | What are the differences between 'epoch', 'batch', and 'minibatch'?
An epoch is typically one loop over the entire dataset.
A batch or minibatch refers to equally sized subsets of the dataset over which the gradient is calculated and weights updated.
i.e. for a dataset of size $n$:
Batch Gradient Descent: the gradient is calculated on the entire dataset, giving $1$ weight update per epoch.
Minibatch Gradient Descent: the gradient is calculated on consecutive subsets of the dataset, giving $\lceil\frac{n}{\text{size of minibatch}}\rceil$ weight updates per epoch.
Stochastic Gradient Descent*: the gradient is calculated on each sample of the dataset, giving $n$ weight updates per epoch.
The term batch itself is ambiguous however and can refer to either batch gradient descent or the size of a minibatch.
* Equivalent to minibatch with a batch-size of 1.
Why use minibatches?
It may be infeasible (due to memory/computational constraints) to calculate the gradient over the entire dataset, so smaller minibatches (as opposed to a single batch) may be used instead. At its extreme one can recalculate the gradient over each individual sample in the dataset.
If you perform this iteratively (i.e. re-calculate over the entire dataset multiple times), each iteration is referred to as an epoch. | What are the differences between 'epoch', 'batch', and 'minibatch'?
An epoch is typically one loop over the entire dataset.
A batch or minibatch refers to equally sized subsets of the dataset over which the gradient is calculated and weights updated.
i.e. for a datase |
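To make the comparison above concrete, here is a small self-contained sketch (in R, to match the rest of this collection) of gradient descent on a least-squares problem, where the batch size alone switches between batch, minibatch, and stochastic updates:

```r
# batch_size = n      -> batch gradient descent (1 update per epoch)
# 1 < batch_size < n  -> minibatch gradient descent (ceiling(n / batch_size) updates)
# batch_size = 1      -> stochastic gradient descent (n updates per epoch)
gd_lm <- function(x, y, batch_size, epochs = 200, lr = 0.5) {
  n <- length(y)
  w <- c(0, 0)                                   # (intercept, slope)
  for (epoch in seq_len(epochs)) {
    idx <- sample(n)                             # reshuffle once per epoch
    for (start in seq(1, n, by = batch_size)) {
      b <- idx[start:min(start + batch_size - 1, n)]
      e <- (w[1] + w[2] * x[b]) - y[b]           # residuals on this (mini)batch
      w <- w - lr * c(mean(e), mean(e * x[b]))   # gradient step on the batch MSE
    }
  }
  w
}

set.seed(1)
x <- runif(200)
y <- 2 + 3 * x + rnorm(200, sd = 0.1)
gd_lm(x, y, batch_size = 200)  # batch
gd_lm(x, y, batch_size = 32)   # minibatch
gd_lm(x, y, batch_size = 1)    # stochastic
```

All three variants head toward the same least-squares solution; they differ only in how many samples enter each gradient calculation and therefore how many updates happen per epoch.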
3,058 | What are the differences between 'epoch', 'batch', and 'minibatch'? | "Epoch" usually means exposing a learning algorithm to the entire set of training data. This doesn't always make sense, as we sometimes generate data.
"Batch" and "Minibatch" can be confusing.
Training examples sometimes need to be "batched" because not all data can necessarily be exposed to the algorithm at once (due to memory constraints usually).
In the context of SGD, "Minibatch" means that the gradient is calculated across the entire batch before updating weights. If you are not using a "minibatch", every training example in a "batch" updates the learning algorithm's parameters independently. | What are the differences between 'epoch', 'batch', and 'minibatch'? | "Epoch" is usually means exposing a learning algorithm to the entire set of training data. This doesn't always make sense as we sometimes generate data.
"Batch" and "Minibatch" can be confusing.
Train | What are the differences between 'epoch', 'batch', and 'minibatch'?
"Epoch" is usually means exposing a learning algorithm to the entire set of training data. This doesn't always make sense as we sometimes generate data.
"Batch" and "Minibatch" can be confusing.
Training examples sometimes need to be "batched" because not all data can necessarily be exposed to the algorithm at once (due to memory constraints usually).
In the context of SGD, "Minibatch" means that the gradient is calculated across the entire batch before updating weights. If you are not using a "minibatch", every training example in a "batch" updates the learning algorithm's parameters independently. | What are the differences between 'epoch', 'batch', and 'minibatch'?
"Epoch" is usually means exposing a learning algorithm to the entire set of training data. This doesn't always make sense as we sometimes generate data.
"Batch" and "Minibatch" can be confusing.
Train |
3,059 | How small a quantity should be added to x to avoid taking the log of zero? | As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate
When you made the 2nd edit, I was just typing that concentrations are exactly the case that comes to my mind where the log does (frequently) make sense and 0 may occur. As you say, for measured concentrations the 0 just means "I couldn't measure such low concentrations".
Side note: do you mean LOQ instead of LOD?
Whether setting the 0 to $\frac{1}{2}$LOQ is a good idea or not depends:
from the point of view that $\frac{1}{2}\mathrm{LOQ}$ is your "guess" expressing that c is anywhere between 0 and LOQ, it does make sense.
But consider the corresponding calibration function:
On the left, the calibration function yields c = 0 below the LOQ. On the right, $\frac{1}{2}\mathrm{LOQ}$ is used instead of 0.
However, if the original measured value is available, that may provide a better guess. After all, LOQ usually just means that the relative error is 10%. Below that the measurement still carries information, but the relative error becomes huge.
(blue: LOD, red: LOQ)
An alternative would be to exclude these measurements. That can be reasonable, too
e.g. think of a calibration curve. In practice you often observe a sigmoid shape: for low c, signal ≈ constant, intermediate linear behaviour, then detector saturation.
In that situation you may want to restrict yourself to statements about concentrations that are clearly in the linear range as both below and above other processes heavily influence the result.
Make sure you explain that the data was selected that way and why.
edit: What is sensible or acceptable, depends of course on the problem. Hopefully, we're talking here about a small part of the data that does not influence the analyis.
Maybe a quick and dirty check is: run your data analysis with and without excluding the data (or whatever treatment you propose) and see whether anything changes substantially.
If you see changes, then of course you're in trouble. However, from the analytical chemistry point of view, I'd say your trouble does not primarily lie in which method you use to deal with the data, but the underlying problem is that the analytical method (or its working range) was not appropriate for the problem at hand. There is of course a zone where the better statistical approach can save your day, but in the end the approximation "garbage in, garbage out" usually holds also for the more fancy methods.
Quotations for the topic:
A statistician once told me:
The problem with you (chemists/spectroscopists) is that your problems are either so hard that they cannot be solved or so easy that there is no fun in solving them.
Fisher about the statistical post-mortem of experiments | How small a quantity should be added to x to avoid taking the log of zero? | As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate
I was just typing that the thing that comes to my mind where log | How small a quantity should be added to x to avoid taking the log of zero?
As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate
When you made the 2nd edit, I was just typing that concentrations are exactly the case that comes to my mind where the log does (frequently) make sense and 0 may occur. As you say, for measured concentrations the 0 just means "I couldn't measure such low concentrations".
Side note: do you mean LOQ instead of LOD?
Whether setting the 0 to $\frac{1}{2}$LOQ is a good idea or not depends:
from the point of view that $\frac{1}{2}\mathrm{LOQ}$ is your "guess" expressing that c is anywhere between 0 and LOQ, it does make sense.
But consider the corresponding calibration function:
On the left, the calibration function yields c = 0 below the LOQ. On the right, $\frac{1}{2}\mathrm{LOQ}$ is used instead of 0.
However, if the original measured value is available, that may provide a better guess. After all, LOQ usually just means that the relative error is 10%. Below that the measurement still carries information, but the relative error becomes huge.
(blue: LOD, red: LOQ)
An alternative would be to exclude these measurements. That can be reasonable, too
e.g. think of a calibration curve. In practice you often observe a sigmoid shape: for low c, signal ≈ constant, intermediate linear behaviour, then detector saturation.
In that situation you may want to restrict yourself to statements about concentrations that are clearly in the linear range as both below and above other processes heavily influence the result.
Make sure you explain that the data was selected that way and why.
edit: What is sensible or acceptable, depends of course on the problem. Hopefully, we're talking here about a small part of the data that does not influence the analyis.
Maybe a quick and dirty check is: run your data analysis with and without excluding the data (or whatever treatment you propose) and see whether anything changes substantially.
If you see changes, then of course you're in trouble. However, from the analytical chemistry point of view, I'd say your trouble does not primarily lie in which method you use to deal with the data, but the underlying problem is that the analytical method (or its working range) was not appropriate for the problem at hand. There is of course a zone where the better statistical approach can save your day, but in the end the approximation "garbage in, garbage out" usually holds also for the more fancy methods.
Quotations for the topic:
A statistician once told me:
The problem with you (chemists/spectroscopists) is that your problems are either so hard that they cannot be solved or so easy that there is no fun in solving them.
Fisher about the statistical post-mortem of experiments | How small a quantity should be added to x to avoid taking the log of zero?
As the zeros merely indicate concentrations below the detection limit, maybe setting them to (detection limit)/2 would be appropriate
I was just typing that the thing that comes to my mind where log |
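A minimal sketch of the "run it both ways" check suggested in the answer above; conc and loq are hypothetical measured concentrations and the limit of quantitation of the method:

```r
loq  <- 0.05
conc <- c(0, 0.12, 0.30, 0, 0.08, 0.65)   # zeros mean "below LOQ", not true zeros

log_sub  <- log(ifelse(conc == 0, loq / 2, conc))  # replace 0 by LOQ/2, then log
log_excl <- log(conc[conc > 0])                    # or simply exclude the non-detects

# Quick-and-dirty comparison: do downstream conclusions change substantially?
summary(log_sub)
summary(log_excl)
```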
3,060 | How small a quantity should be added to x to avoid taking the log of zero? | Chemical concentration data often have zeros, but these do not represent zero values: they are codes that variously (and confusingly) represent both nondetects (the measurement indicated, with a high degree of likelihood, that the analyte was not present) and "unquantified" values (the measurement detected the analyte but could not produce a reliable numeric value). Let's just vaguely call these "NDs" here.
Typically, there is a limit associated with an ND variously known as a "detection limit," "quantitation limit," or (much more honestly) a "reporting limit," because the laboratory chooses not to provide a numerical value (often for legal reasons). About all we really know of an ND is that the true value is likely less than the associated limit: it's almost (but not quite) a form of left censoring. (Well, that's not really true either: it's a convenient fiction. These limits are determined via calibrations which, in most cases, have poor to terrible statistical properties. They may be grossly over- or under-estimated. This is important to know when you're looking at a set of concentration data which appear to have a lognormal right tail which is cut off (say) at $1.33$, plus a "spike" at $0$ representing all the NDs. That would strongly suggest the reporting limit is just a little less than $1.33$, but the lab data might try to tell you it is $0.5$ or $0.1$ or something like that.)
Extensive research has been done over the last 30 years or so concerning how best to summarize and evaluate such datasets. Dennis Helsel published a book on this, Nondetects and Data Analysis (Wiley, 2005), teaches a course, and released an R package based on some of the techniques he favors. His website is comprehensive.
This field is fraught with error and misconception. Helsel is frank about this: on the first page of chapter 1 of his book he writes,
...the most commonly used method in environmental studies today, substitution of one-half the detection limit, is NOT a reasonable method for interpreting censored data.
So, what to do? Options include ignoring this good advice, applying some of the methods in Helsel's book, and using some alternative methods. That's right, the book is not comprehensive and valid alternatives do exist. Adding a constant to all values in the dataset ("starting" them) is one. But consider:
Adding $1$ is not a good place to start, because this recipe depends on the units of measurement. Adding $1$ microgram per deciliter will not have the same result as adding $1$ millimole per liter.
After starting all the values, you will still have a spike at the smallest value, representing that collection of NDs. Your hope is that this spike is consistent with the quantified data in the sense that its total mass is approximately equal to the mass of a lognormal distribution between $0$ and the start value.
An excellent tool for determining the start value is a lognormal probability plot: apart from the NDs, the data should be approximately linear.
The collection of NDs can also be described with a so-called "delta lognormal" distribution. This is a mixture of a point mass and a lognormal.
As is evident in the following histograms of simulated values, the censored and delta distributions are not the same. The delta approach is most useful for explanatory variables in regression: you can create a "dummy" variable to indicate the NDs, take logarithms of the detected values (or otherwise transform them as needed), and not worry about the replacement values for the NDs.
In these histograms, approximately 20% of the lowest values have been replaced by zeros. For comparability, they are all based on the same 1000 simulated underlying lognormal values (upper left). The delta distribution was created by replacing 200 of the values by zeros at random. The censored distribution was created by replacing the 200 smallest values by zeros. The "realistic" distribution conforms to my experience, which is that the reporting limits actually vary in practice (even when that is not indicated by the laboratory!): I made them vary randomly (by just a little bit, rarely more than 30 in either direction) and replaced all simulated values less than their reporting limits by zeros.
To show the utility of the probability plot and to explain its interpretation, the next figure displays normal probability plots related to the logarithms of the preceding data.
The upper left shows all the data (before any censoring or replacement). It's a good fit to the ideal diagonal line (we expect some deviations in the extreme tails). This is what we are aiming to achieve in all the subsequent plots (but, due to the NDs, we will inevitably fall short of this ideal.) The upper right is a probability plot for the censored dataset, using a start value of 1. It's a terrible fit, because all the NDs (plotted at 0, because $\log(1+0)=0$) are plotted much too low. The lower left is a probability plot for the censored dataset with a start value of 120, which is close to a typical reporting limit. The fit in the bottom left is now decent--we only hope that all these values come somewhere near to, but to the right of, the fitted line--but the curvature in the upper tail shows that adding 120 is starting to alter the shape of the distribution. The bottom right shows what happens to the delta-lognormal data: there's a good fit to the upper tail, but some pronounced curvature near the reporting limit (at the middle of the plot).
Finally, let's explore some of the more realistic scenarios:
The upper left shows the censored dataset with the zeros set to one-half the reporting limit. It's a pretty good fit. On the upper right is the more realistic dataset (with randomly varying reporting limits). A start value of 1 does not help, but--on the lower left--for a start value of 120 (near the upper range of the reporting limits) the fit is quite good. Interestingly, the curvature near the middle as the points rise up from the NDs to the quantified values is reminiscent of the delta lognormal distribution (even though these data were not generated from such a mixture). On the lower right is the probability plot you get when the realistic data have their NDs replaced by one-half the (typical) reporting limit. This is the best fit, even though it shows some delta-lognormal-like behavior in the middle.
What you ought to do, then, is to use probability plots to explore the distributions as various constants are used in place of the NDs. Start the search with one-half the nominal, average, reporting limit, then vary it up and down from there. Choose a plot that looks like the bottom right: roughly a diagonal straight line for the quantified values, a quick drop-off to a low plateau, and a plateau of values that (just barely) meets the extension of the diagonal. However, following Helsel's advice (which is strongly supported in the literature), for actual statistical summaries, avoid any method that replaces the NDs by any constant. For regression, consider adding in a dummy variable to indicate the NDs. For some graphical displays, the constant replacement of NDs by the value found with the probability plot exercise will work well. For other graphical displays it may be important to depict the actual reporting limits, so replace the NDs by their reporting limits instead. You need to be flexible! | How small a quantity should be added to x to avoid taking the log of zero? | Chemical concentration data often have zeros, but these do not represent zero values: they are codes that variously (and confusingly) represent both nondetects (the measurement indicated, with a high | How small a quantity should be added to x to avoid taking the log of zero?
Chemical concentration data often have zeros, but these do not represent zero values: they are codes that variously (and confusingly) represent both nondetects (the measurement indicated, with a high degree of likelihood, that the analyte was not present) and "unquantified" values (the measurement detected the analyte but could not produce a reliable numeric value). Let's just vaguely call these "NDs" here.
Typically, there is a limit associated with an ND variously known as a "detection limit," "quantitation limit," or (much more honestly) a "reporting limit," because the laboratory chooses not to provide a numerical value (often for legal reasons). About all we really know of an ND is that the true value is likely less than the associated limit: it's almost (but not quite) a form of left censoring. (Well, that's not really true either: it's a convenient fiction. These limits are determined via calibrations which, in most cases, have poor to terrible statistical properties. They may be grossly over- or under-estimated. This is important to know when you're looking at a set of concentration data which appear to have a lognormal right tail which is cut off (say) at $1.33$, plus a "spike" at $0$ representing all the NDs. That would strongly suggest the reporting limit is just a little less than $1.33$, but the lab data might try to tell you it is $0.5$ or $0.1$ or something like that.)
Extensive research has been done over the last 30 years or so concerning how best to summarize and evaluate such datasets. Dennis Helsel published a book on this, Nondetects and Data Analysis (Wiley, 2005), teaches a course, and released an R package based on some of the techniques he favors. His website is comprehensive.
This field is fraught with error and misconception. Helsel is frank about this: on the first page of chapter 1 of his book he writes,
...the most commonly used method in environmental studies today, substitution of one-half the detection limit, is NOT a reasonable method for interpreting censored data.
So, what to do? Options include ignoring this good advice, applying some of the methods in Helsel's book, and using some alternative methods. That's right, the book is not comprehensive and valid alternatives do exist. Adding a constant to all values in the dataset ("starting" them) is one. But consider:
Adding $1$ is not a good place to start, because this recipe depends on the units of measurement. Adding $1$ microgram per deciliter will not have the same result as adding $1$ millimole per liter.
After starting all the values, you will still have a spike at the smallest value, representing that collection of NDs. Your hope is that this spike is consistent with the quantified data in the sense that its total mass is approximately equal to the mass of a lognormal distribution between $0$ and the start value.
An excellent tool for determining the start value is a lognormal probability plot: apart from the NDs, the data should be approximately linear.
The collection of NDs can also be described with a so-called "delta lognormal" distribution. This is a mixture of a point mass and a lognormal.
As is evident in the following histograms of simulated values, the censored and delta distributions are not the same. The delta approach is most useful for explanatory variables in regression: you can create a "dummy" variable to indicate the NDs, take logarithms of the detected values (or otherwise transform them as needed), and not worry about the replacement values for the NDs.
In these histograms, approximately 20% of the lowest values have been replaced by zeros. For comparability, they are all based on the same 1000 simulated underlying lognormal values (upper left). The delta distribution was created by replacing 200 of the values by zeros at random. The censored distribution was created by replacing the 200 smallest values by zeros. The "realistic" distribution conforms to my experience, which is that the reporting limits actually vary in practice (even when that is not indicated by the laboratory!): I made them vary randomly (by just a little bit, rarely more than 30 in either direction) and replaced all simulated values less than their reporting limits by zeros.
To show the utility of the probability plot and to explain its interpretation, the next figure displays normal probability plots related to the logarithms of the preceding data.
The upper left shows all the data (before any censoring or replacement). It's a good fit to the ideal diagonal line (we expect some deviations in the extreme tails). This is what we are aiming to achieve in all the subsequent plots (but, due to the NDs, we will inevitably fall short of this ideal.) The upper right is a probability plot for the censored dataset, using a start value of 1. It's a terrible fit, because all the NDs (plotted at 0, because $\log(1+0)=0$) are plotted much too low. The lower left is a probability plot for the censored dataset with a start value of 120, which is close to a typical reporting limit. The fit in the bottom left is now decent--we only hope that all these values come somewhere near to, but to the right of, the fitted line--but the curvature in the upper tail shows that adding 120 is starting to alter the shape of the distribution. The bottom right shows what happens to the delta-lognormal data: there's a good fit to the upper tail, but some pronounced curvature near the reporting limit (at the middle of the plot).
Finally, let's explore some of the more realistic scenarios:
The upper left shows the censored dataset with the zeros set to one-half the reporting limit. It's a pretty good fit. On the upper right is the more realistic dataset (with randomly varying reporting limits). A start value of 1 does not help, but--on the lower left--for a start value of 120 (near the upper range of the reporting limits) the fit is quite good. Interestingly, the curvature near the middle as the points rise up from the NDs to the quantified values is reminiscent of the delta lognormal distribution (even though these data were not generated from such a mixture). On the lower right is the probability plot you get when the realistic data have their NDs replaced by one-half the (typical) reporting limit. This is the best fit, even though it shows some delta-lognormal-like behavior in the middle.
What you ought to do, then, is to use probability plots to explore the distributions as various constants are used in place of the NDs. Start the search with one-half the nominal, average, reporting limit, then vary it up and down from there. Choose a plot that looks like the bottom right: roughly a diagonal straight line for the quantified values, a quick drop-off to a low plateau, and a plateau of values that (just barely) meets the extension of the diagonal. However, following Helsel's advice (which is strongly supported in the literature), for actual statistical summaries, avoid any method that replaces the NDs by any constant. For regression, consider adding in a dummy variable to indicate the NDs. For some graphical displays, the constant replacement of NDs by the value found with the probability plot exercise will work well. For other graphical displays it may be important to depict the actual reporting limits, so replace the NDs by their reporting limits instead. You need to be flexible! | How small a quantity should be added to x to avoid taking the log of zero?
Chemical concentration data often have zeros, but these do not represent zero values: they are codes that variously (and confusingly) represent both nondetects (the measurement indicated, with a high |
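A sketch of the probability-plot exploration described above, with simulated data; the censoring mechanism and the candidate start values are made up for illustration:

```r
set.seed(1)
x  <- rlnorm(1000, meanlog = 5, sdlog = 1)   # underlying lognormal concentrations
rl <- 120                                    # a nominal reporting limit
x[x < rl] <- 0                               # code values below it as "0" (NDs)

start_plot <- function(x, start) {
  qqnorm(log(x + start), main = paste("start =", start))  # normal probability plot of logs
  qqline(log(x + start))
}

op <- par(mfrow = c(1, 3))
for (s in c(1, rl / 2, rl)) start_plot(x, s)  # compare candidate start values
par(op)
```

The start value that produces a roughly straight line for the quantified values, with the ND plateau just meeting its extension, is the one the answer above recommends for graphical displays.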
3,061 | How small a quantity should be added to x to avoid taking the log of zero? | @miura
I came across this article by Bill Gould on the Stata blog (I think he actually founded Stata) which I think could provide help with your analysis. Near the end of the article he cautions against the use of arbitrary numbers that are close to zero, such as 0.01, 0.0001, 0.0000001, and 0, since in logs they are -4.61, -9.21, -16.12, and $-\infty$. In this situation they are not arbitrary at all. He advises the use of a Poisson regression since it recognizes that the above numbers are actually close together. | How small a quantity should be added to x to avoid taking the log of zero? | @miura
I came across this article by Bill Gould on the Stata blog (I think he actually founded Stata) which I think could provide help with your analysis. Near the end of the article he cautions again | How small a quantity should be added to x to avoid taking the log of zero?
@miura
I came across this article by Bill Gould on the Stata blog (I think he actually founded Stata) which I think could provide help with your analysis. Near the end of the article he cautions against the use of arbitrary numbers that are close to zero, such as 0.01, 0.0001, 0.0000001, and 0, since in logs they are -4.61, -9.21, -16.12, and $-\infty$. In this situation they are not arbitrary at all. He advises the use of a Poisson regression since it recognizes that the above numbers are actually close together. | How small a quantity should be added to x to avoid taking the log of zero?
@miura
I came across this article by Bill Gould on the Stata blog (I think he actually founded Stata) which I think could provide help with your analysis. Near the end of the article he cautions again |
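The same advice carried over to R (a sketch with simulated data): rather than lm() on log(y + c) with an arbitrary c, model the mean on the log scale directly, which is well defined when y contains exact zeros:

```r
set.seed(2)
x <- runif(100)
y <- rpois(100, lambda = exp(0.5 + 1.5 * x))   # outcome with genuine zeros

fit <- glm(y ~ x, family = quasipoisson(link = "log"))  # quasi- allows overdispersion
summary(fit)$coefficients
```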
3,062 | How small a quantity should be added to x to avoid taking the log of zero? | To clarify how to deal with the log of zero in regression models, we have written a pedagogical paper explaining the best solution and the common mistakes people make in practice. We also came out with a new solution to tackle this issue.
You can find the paper by clicking here: https://ssrn.com/abstract=3444996
First, we think that one should ask why a log transformation is being used at all. In regression models, a log-log relationship leads to the identification of an elasticity. Indeed, if $\log(y) = \beta \log(x) + \varepsilon$, then $\beta$ corresponds to the elasticity of $y$ to $x$. The log can also linearize a theoretical model. It can also be used to reduce heteroskedasticity. However, in practice, it often occurs that the variable taken in log contains non-positive values.
A solution that is often proposed consists in adding a positive constant $c$ to all observations $Y$ so that $Y + c > 0$. However, contrary to linear regressions, log-linear regressions are not robust to linear transformations of the dependent variable. This is due to the non-linear nature of the log function: the log transformation expands low values and squeezes high values. Therefore, adding a constant will distort the (linear) relationship between zeros and other observations in the data. The magnitude of the bias generated by the constant actually depends on the range of observations in the data. For that reason, adding the smallest possible constant is not necessarily the best solution.
In our article, we provide an example where adding very small constants actually produces the highest bias. We also derive an expression for the bias.
Actually, Poisson Pseudo Maximum Likelihood (PPML) can be considered as a good solution to this issue. One has to consider the following process:
$y_i = a_i \exp(\alpha + x_i' \beta)$ with $E(a_i | x_i) = 1$
This process is motivated by several features. First, it provides the same interpretation to $\beta$ as a semi-log model. Second, this data generating process provides a logical rationalization of zero values in the dependent variable. This situation can arise when the multiplicative error term, $a_i$, is equal to zero. Third, estimating this model with PPML does not encounter the computational difficulty when $y_i = 0$. Under the assumption that $E(a_i|x_i) = 1$, we have $E( y_i - \exp(\alpha + x_i' \beta) | x_i) = 0$. We want to minimize the quadratic error of this moment, leading to the following first-order conditions:
$\sum_{i=1}^N ( y_i - \exp(\alpha + x_i' \beta) )x_i' = 0$
These conditions are defined even when $y_i = 0$. These first-order conditions are numerically equivalent to those of a Poisson model, so it can be estimated with any standard statistical software.
Finally, we propose a new solution that is also easy to implement and that provides an unbiased estimator of $\beta$. One simply needs to estimate:
$\log( y_i + \exp (\alpha + x_i' \beta)) = x_i' \beta + \eta_i $
We show that this estimator is unbiased and that it can simply be estimated with GMM with any standard statistical software. For instance, it can be estimated by executing just one line of code with Stata.
We hope that this article can help and we'd love to get feedback from you.
Christophe Bellégo and Louis-Daniel Pape,
CREST - Ecole Polytechnique - ENSAE | How small a quantity should be added to x to avoid taking the log of zero? | To clarify how to deal with the log of zero in regression models, we have written a pedagogical paper explaining the best solution and the common mistakes people make in practice. We also came out wit | How small a quantity should be added to x to avoid taking the log of zero?
To clarify how to deal with the log of zero in regression models, we have written a pedagogical paper explaining the best solution and the common mistakes people make in practice. We also came out with a new solution to tackle this issue.
You can find the paper by clicking here: https://ssrn.com/abstract=3444996
First, we think that one should ask why a log transformation is being used at all. In regression models, a log-log relationship leads to the identification of an elasticity. Indeed, if $\log(y) = \beta \log(x) + \varepsilon$, then $\beta$ corresponds to the elasticity of $y$ to $x$. The log can also linearize a theoretical model. It can also be used to reduce heteroskedasticity. However, in practice, it often occurs that the variable taken in log contains non-positive values.
A solution that is often proposed consists in adding a positive constant $c$ to all observations $Y$ so that $Y + c > 0$. However, contrary to linear regressions, log-linear regressions are not robust to linear transformations of the dependent variable. This is due to the non-linear nature of the log function: the log transformation expands low values and squeezes high values. Therefore, adding a constant will distort the (linear) relationship between zeros and other observations in the data. The magnitude of the bias generated by the constant actually depends on the range of observations in the data. For that reason, adding the smallest possible constant is not necessarily the best solution.
In our article, we provide an example where adding very small constants actually produces the highest bias. We also derive an expression for the bias.
Actually, Poisson Pseudo Maximum Likelihood (PPML) can be considered as a good solution to this issue. One has to consider the following process:
$y_i = a_i \exp(\alpha + x_i' \beta)$ with $E(a_i | x_i) = 1$
This process is motivated by several features. First, it provides the same interpretation to $\beta$ as a semi-log model. Second, this data generating process provides a logical rationalization of zero values in the dependent variable. This situation can arise when the multiplicative error term, $a_i$, is equal to zero. Third, estimating this model with PPML does not encounter the computational difficulty when $y_i = 0$. Under the assumption that $E(a_i|x_i) = 1$, we have $E( y_i - \exp(\alpha + x_i' \beta) | x_i) = 0$. We want to minimize the quadratic error of this moment, leading to the following first-order conditions:
$\sum_{i=1}^N ( y_i - \exp(\alpha + x_i' \beta) )x_i' = 0$
These conditions are defined even when $y_i = 0$. These first-order conditions are numerically equivalent to those of a Poisson model, so it can be estimated with any standard statistical software.
Finally, we propose a new solution that is also easy to implement and that provides an unbiased estimator of $\beta$. One simply needs to estimate:
$\log( y_i + \exp (\alpha + x_i' \beta)) = x_i' \beta + \eta_i $
We show that this estimator is unbiased and that it can simply be estimated with GMM with any standard statistical software. For instance, it can be estimated by executing just one line of code with Stata.
We hope that this article can help and we'd love to get feedback from you.
Christophe Bellégo and Louis-Daniel Pape,
CREST - Ecole Polytechnique - ENSAE | How small a quantity should be added to x to avoid taking the log of zero?
To clarify how to deal with the log of zero in regression models, we have written a pedagogical paper explaining the best solution and the common mistakes people make in practice. We also came out wit |
3,063 | How small a quantity should be added to x to avoid taking the log of zero? | You can set the zeros of the $i^{th}$ variable to the ${\rm mean}(x_i) - n\times{\rm stddev}(x_i)$ where $n$ is large enough to distinguish these cases from the rest (e.g., 6 or 10).
Note that any such artificial setup will affect your analyses so you should be careful with your interpretation and in some cases discard these cases to avoid artifacts.
Using the detection limit is also a reasonable idea. | How small a quantity should be added to x to avoid taking the log of zero? | You can set the zeros of the $i^{th}$ variable to the ${\rm mean}(x_i) - n\times{\rm stddev}(x_i)$ where $n$ is large enough to distinguish these cases from the rest (e.g., 6 or 10).
Note that any su | How small a quantity should be added to x to avoid taking the log of zero?
You can set the zeros of the $i^{th}$ variable to the ${\rm mean}(x_i) - n\times{\rm stddev}(x_i)$ where $n$ is large enough to distinguish these cases from the rest (e.g., 6 or 10).
Note that any such artificial setup will affect your analyses so you should be careful with your interpretation and in some cases discard these cases to avoid artifacts.
Using the detection limit is also a reasonable idea. | How small a quantity should be added to x to avoid taking the log of zero?
You can set the zeros of the $i^{th}$ variable to the ${\rm mean}(x_i) - n\times{\rm stddev}(x_i)$ where $n$ is large enough to distinguish these cases from the rest (e.g., 6 or 10).
Note that any su |
3,064 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook Applied Logistic Regression (2nd Ed.), explain why:
In general, [$R^2$ measures] are based on various comparisons of the predicted values from the fitted model to those from [the base model], the no data or intercept only model and, as a result, do not assess goodness-of-fit. We think that a true measure of fit is one based strictly on a comparison of observed to predicted values from the fitted model.
[At p. 164.]
Concerning various ML versions of $R^2$, the "pseudo $R^2$" stat, they mention that it is not "recommended for routine use, as it is not as intuitively easy to explain," but they feel obliged to describe it because various software packages report it.
They conclude this discussion by writing,
...low $R^2$ values in logistic regression are the norm and this presents a problem when reporting their values to an audience accustomed to seeing linear regression values. ... Thus [arguing by reference to running examples in the text] we do not recommend routine publishing of $R^2$ values with results from fitted logistic models. However, they may be helpful in the model building stage as a statistic to evaluate competing models.
[At p. 167.]
My experience with some large logistic models (100k to 300k records, 100 - 300 explanatory variables) has been exactly as H & L describe. I could achieve relatively high $R^2$ with my data, up to about 0.40. These corresponded to classification error rates between 3% and 15% (false negatives and false positives, balanced, as confirmed using 50% hold-out datasets). As H & L hinted, I had to spend a lot of time disabusing the client (a sophisticated consultant himself, who was familiar with $R^2$) concerning $R^2$ and getting him to focus on what mattered in the analysis (the classification error rates). I can warmly recommend describing the results of your analysis without reference to $R^2$, which is more likely to mislead than not. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook Applied Logistic Regression (2nd Ed.), explain why:
In general, [$R^2$ measures] are based on various comparisons of t | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook Applied Logistic Regression (2nd Ed.), explain why:
In general, [$R^2$ measures] are based on various comparisons of the predicted values from the fitted model to those from [the base model], the no data or intercept only model and, as a result, do not assess goodness-of-fit. We think that a true measure of fit is one based strictly on a comparison of observed to predicted values from the fitted model.
[At p. 164.]
Concerning various ML versions of $R^2$, the "pseudo $R^2$" stat, they mention that it is not "recommended for routine use, as it is not as intuitively easy to explain," but they feel obliged to describe it because various software packages report it.
They conclude this discussion by writing,
...low $R^2$ values in logistic regression are the norm and this presents a problem when reporting their values to an audience accustomed to seeing linear regression values. ... Thus [arguing by reference to running examples in the text] we do not recommend routine publishing of $R^2$ values with results from fitted logistic models. However, they may be helpful in the model building stage as a statistic to evaluate competing models.
[At p. 167.]
My experience with some large logistic models (100k to 300k records, 100 - 300 explanatory variables) has been exactly as H & L describe. I could achieve relatively high $R^2$ with my data, up to about 0.40. These corresponded to classification error rates between 3% and 15% (false negatives and false positives, balanced, as confirmed using 50% hold-out datasets). As H & L hinted, I had to spend a lot of time disabusing the client (a sophisticated consultant himself, who was familiar with $R^2$) concerning $R^2$ and getting him to focus on what mattered in the analysis (the classification error rates). I can warmly recommend describing the results of your analysis without reference to $R^2$, which is more likely to mislead than not. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook Applied Logistic Regression (2nd Ed.), explain why:
In general, [$R^2$ measures] are based on various comparisons of t |
3,065 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Both indices are measures of strength of association (i.e. whether any predictor is associated with the outcome, as for an LR test), and can be used to quantify predictive ability or model performance. A single predictor may have a significant effect on the outcome but it might not necessarily be so useful for predicting individual response, hence the need to assess model performance as a whole (wrt. the null model). The Nagelkerke $R^2$ is useful because it has a maximum value of 1.0, as Srikant said. This is just a normalized version of the $R^2$ computed from the likelihood ratio, $R^2_{\text{LR}}=1-\exp(-\text{LR}/n)$, which has connection with the Wald statistic for overall association, as originally proposed by Cox and Snell.
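For concreteness, both quantities can be computed from the log-likelihoods of the fitted and intercept-only models. A minimal R sketch on simulated data (not from the original answer):

# Cox & Snell and Nagelkerke pseudo-R^2 from the log-likelihoods of a logistic fit.
set.seed(42)
d <- data.frame(x = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.8 * d$x))

fit  <- glm(y ~ x, data = d, family = binomial)
null <- glm(y ~ 1, data = d, family = binomial)
n    <- nobs(fit)

ll1 <- as.numeric(logLik(fit))    # log-likelihood of the fitted model
ll0 <- as.numeric(logLik(null))   # log-likelihood of the intercept-only model

r2_cs  <- 1 - exp(-2 * (ll1 - ll0) / n)   # Cox & Snell: 1 - (L0/L1)^(2/n)
r2_max <- 1 - exp( 2 * ll0 / n)           # maximum attainable Cox & Snell value
r2_nag <- r2_cs / r2_max                  # Nagelkerke: rescaled so the maximum is 1
c(CoxSnell = r2_cs, Nagelkerke = r2_nag)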
Other indices of predictive ability are Brier score, the C index (concordance probability or ROC area), or Somers' D, the latter two providing a better measure of predictive discrimination.
The only assumptions made in logistic regression are that of linearity and additivity (+ independence). Although many global goodness-of-fit tests (like the Hosmer & Lemeshow $\chi^2$ test, but see my comment to @onestop) have been proposed, they generally lack power. For assessing model fit, it is better to rely on visual criteria (stratified estimates, nonparametric smoothing) that help to spot local or global departure between predicted and observed outcomes (e.g. non-linearity or interaction), and this is largely detailed in Harrell's RMS handout. On a related subject (calibration tests), Steyerberg (Clinical Prediction Models, 2009) points to the same approach for assessing the agreement between observed outcomes and predicted probabilities:
Calibration is related to goodness-of-fit, which relates to the ability of a model to fit a given set of data. Typically, there is no single goodness-of-fit test that has good power against all kinds of lack of fit of a prediction model. Examples of lack of fit are missed non-linearities, interactions, or an inappropriate link function between the linear predictor and the outcome. Goodness-of-fit can be tested with a $\chi^2$ statistic. (p. 274)
He also suggests relying on the absolute difference between smoothed observed outcomes and predicted probabilities, either visually or with the so-called Harrell's E statistic.
More details can be found in Harrell's book, Regression Modeling Strategies (pp. 203-205, 230-244, 247-249). For a more recent discussion, see also
Steyerberg, EW, Vickers, AJ, Cook, NR, Gerds, T, Gonen, M, Obuchowski, N, Pencina, MJ, and Kattan, MW (2010). Assessing the Performance of Prediction Models, A Framework for Traditional and Novel Measures. Epidemiology, 21(1), 128-138. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Both indices are measures of strength of association (i.e. whether any predictor is associated with the outcome, as for an LR test), and can be used to quantify predictive ability or model performance | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Both indices are measures of strength of association (i.e. whether any predictor is associated with the outcome, as for an LR test), and can be used to quantify predictive ability or model performance. A single predictor may have a significant effect on the outcome but it might not necessarily be so useful for predicting individual response, hence the need to assess model performance as a whole (wrt. the null model). The Nagelkerke $R^2$ is useful because it has a maximum value of 1.0, as Srikant said. This is just a normalized version of the $R^2$ computed from the likelihood ratio, $R^2_{\text{LR}}=1-\exp(-\text{LR}/n)$, which has connection with the Wald statistic for overall association, as originally proposed by Cox and Snell.
Other indices of predictive ability are Brier score, the C index (concordance probability or ROC area), or Somers' D, the latter two providing a better measure of predictive discrimination.
The only assumptions made in logistic regression are that of linearity and additivity (+ independence). Although many global goodness-of-fit tests (like the Hosmer & Lemeshow $\chi^2$ test, but see my comment to @onestop) have been proposed, they generally lack power. For assessing model fit, it is better to rely on visual criteria (stratified estimates, nonparametric smoothing) that help to spot local or global departure between predicted and observed outcomes (e.g. non-linearity or interaction), and this is largely detailed in Harrell's RMS handout. On a related subject (calibration tests), Steyerberg (Clinical Prediction Models, 2009) points to the same approach for assessing the agreement between observed outcomes and predicted probabilities:
Calibration is related to goodness-of-fit, which relates to the ability of a model to fit a given set of data. Typically, there is no single goodness-of-fit test that has good power against all kinds of lack of fit of a prediction model. Examples of lack of fit are missed non-linearities, interactions, or an inappropriate link function between the linear predictor and the outcome. Goodness-of-fit can be tested with a $\chi^2$ statistic. (p. 274)
He also suggests relying on the absolute difference between smoothed observed outcomes and predicted probabilities, either visually or with the so-called Harrell's E statistic.
More details can be found in Harrell's book, Regression Modeling Strategies (pp. 203-205, 230-244, 247-249). For a more recent discussion, see also
Steyerberg, EW, Vickers, AJ, Cook, NR, Gerds, T, Gonen, M, Obuchowski, N, Pencina, MJ, and Kattan, MW (2010). Assessing the Performance of Prediction Models, A Framework for Traditional and Novel Measures. Epidemiology, 21(1), 128-138. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Both indices are measures of strength of association (i.e. whether any predictor is associated with the outcome, as for an LR test), and can be used to quantify predictive ability or model performance |
3,066 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would have thought the main problem with any kind of $R^2$ measure for logistic regression is that you are dealing with a model which has a known noise value. This is unlike standard linear regression, where the noise level is usually treated as unknown. For we can write a glm probability density function as:
$$f(y_i|\mu_i,\phi)=\exp\left(\frac{y_ib(\mu_i)-c(\mu_i)}{\phi}+d(y_i,\phi)\right)$$
Where $b(.),\ c(.),\ d(.;.)$ are known functions, and $\mu_i=g^{-1}(x_i^T\beta)$ for inverse link function $g^{-1}(.)$. If we define the usual GLM deviance residuals as
\begin{align}
d_i^2 &= 2\phi\left(\log[f(y_i|\mu_i=y_i,\phi)]-\log[f(y_i|\mu_i=\hat{\mu}_i,\phi)]\right) \\
&= 2\phi \left[y_ib(y_i)-y_ib(\hat{\mu}_i)-c(y_i)+c(\hat{\mu}_i)\right]
\end{align}
Then we have (via the likelihood ratio chi-square, $\chi^2=\frac{1}{\phi}\sum_{i=1}^{N}d_i^2$)
$$E\left(\sum_{i=1}^{N}d_i^2\right)=E(\phi\chi^2)\approx (N-p)\phi$$
Where $p$ is the dimension of $\beta$. For logistic regression we have $\phi=1$, which is known. So we can use this to decide on a definite level of residual that is "acceptable" or "reasonable". This usually cannot be done for OLS regression (unless you have prior information about the noise). Namely, we expect each deviance residual to be about $1$. Too many $d_i^2\gg1$ and it is likely that important effects are missing from the model (under-fitting); too many $d_i^2\ll1$ and it is likely that there are redundant or spurious effects in the model (over-fitting). (These could also indicate model misspecification.)
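A quick base-R illustration of the quantities involved, on simulated data (a sketch, not part of the original answer):

# Deviance residuals of a logistic fit: their squares sum to the residual
# deviance, which the answer compares with the residual degrees of freedom N - p.
set.seed(7)
d <- data.frame(x = rnorm(500))
d$y <- rbinom(500, 1, plogis(d$x))
fit <- glm(y ~ x, data = d, family = binomial)

dev_res <- residuals(fit, type = "deviance")
sum(dev_res^2)     # identical to deviance(fit)
deviance(fit)
df.residual(fit)   # N - p, the benchmark suggested above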
Now this means that the problem for the pseudo-$R^2$ is that it fails to take into account that the level of binomial variation is predictable (provided the binomial error structure isn't being questioned). Thus even though Nagelkerke ranges from $0$ to $1$, it is still not scaled properly. Additionally, I can't see why these are called pseudo $R^2$ if they aren't equal to the usual $R^2$ when you fit a "GLM" with an identity link and normal error. For example, the equivalent cox-snell R-squared for normal error (using REML estimate of variance) is given by:
$$R^2_{CS}=1-\exp\left(-\frac{N-p}{N}\cdot \frac{R^2_{OLS}}{1-R^2_{OLS}}\right)$$
Which certainly looks strange.
I think the better "Goodness of Fit" measure is the sum of the deviance residuals, $\chi^2$. This is mainly because we have a target to aim for. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would have thought the main problem with any kind of $R^2$ measure for logistic regression is that you are dealing with a model which has a known noise value. This is unlike standard linear regress | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would have thought the main problem with any kind of $R^2$ measure for logistic regression is that you are dealing with a model which has a known noise value. This is unlike standard linear regression, where the noise level is usually treated as unknown. For we can write a glm probability density function as:
$$f(y_i|\mu_i,\phi)=\exp\left(\frac{y_ib(\mu_i)-c(\mu_i)}{\phi}+d(y_i,\phi)\right)$$
Where $b(.),\ c(.),\ d(.;.)$ are known functions, and $\mu_i=g^{-1}(x_i^T\beta)$ for inverse link function $g^{-1}(.)$. If we define the usual GLM deviance residuals as
\begin{align}
d_i^2 &= 2\phi\left(\log[f(y_i|\mu_i=y_i,\phi)]-\log[f(y_i|\mu_i=\hat{\mu}_i,\phi)]\right) \\
&= 2\phi \left[y_ib(y_i)-y_ib(\hat{\mu}_i)-c(y_i)+c(\hat{\mu}_i)\right]
\end{align}
Then we have (via the likelihood ratio chi-square, $\chi^2=\frac{1}{\phi}\sum_{i=1}^{N}d_i^2$)
$$E\left(\sum_{i=1}^{N}d_i^2\right)=E(\phi\chi^2)\approx (N-p)\phi$$
Where $p$ is the dimension of $\beta$. For logistic regression we have $\phi=1$, which is known. So we can use this to decide on a definite level of residual that is "acceptable" or "reasonable". This usually cannot be done for OLS regression (unless you have prior information about the noise). Namely, we expect each deviance residual to be about $1$. Too many $d_i^2\gg1$ and it is likely that important effects are missing from the model (under-fitting); too many $d_i^2\ll1$ and it is likely that there are redundant or spurious effects in the model (over-fitting). (These could also indicate model misspecification.)
Now this means that the problem for the pseudo-$R^2$ is that it fails to take into account that the level of binomial variation is predictable (provided the binomial error structure isn't being questioned). Thus even though Nagelkerke ranges from $0$ to $1$, it is still not scaled properly. Additionally, I can't see why these are called pseudo $R^2$ if they aren't equal to the usual $R^2$ when you fit a "GLM" with an identity link and normal error. For example, the equivalent cox-snell R-squared for normal error (using REML estimate of variance) is given by:
$$R^2_{CS}=1-\exp\left(-\frac{N-p}{N}\cdot \frac{R^2_{OLS}}{1-R^2_{OLS}}\right)$$
Which certainly looks strange.
I think the better "Goodness of Fit" measure is the sum of the deviance residuals, $\chi^2$. This is mainly because we have a target to aim for. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would have thought the main problem with any kind of $R^2$ measure for logistic regression is that you are dealing with a model which has a known noise value. This is unlike standard linear regress |
3,067 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I found Tue Tjur's short paper "Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination" (2009,
The American Statistician) on various proposals for a coefficient of determination in logistic models quite enlightening. He does a good job highlighting pros and cons - and of course offers a new definition. Very much recommended (though I have no favorite myself). | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I found Tue Tjur's short paper "Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination" (2009,
The American Statistician) on various proposals | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I found Tue Tjur's short paper "Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination" (2009,
The American Statistician) on various proposals for a coefficient of determination in logistic models quite enlightening. He does a good job highlighting pros and cons - and of course offers a new definition. Very much recommended (though I have no favorite myself). | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I found Tue Tjur's short paper "Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination" (2009,
The American Statistician) on various proposals |
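Tjur's coefficient of discrimination mentioned above is simply the difference between the average fitted probability among events and among non-events. A minimal R sketch on simulated data (not from the paper):

# Tjur's coefficient of discrimination for a logistic fit.
set.seed(3)
d <- data.frame(x = rnorm(300))
d$y <- rbinom(300, 1, plogis(d$x))
fit <- glm(y ~ x, data = d, family = binomial)

p <- fitted(fit)
tjur_d <- mean(p[d$y == 1]) - mean(p[d$y == 0])
tjur_d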
3,068 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I was also going to say 'neither of them', so I've upvoted whuber's answer.
As well as criticising R^2, Hosmer & Lemeshow did propose an alternative measure of goodness-of-fit for logistic regression that is sometimes useful. This is based on dividing the data into (say) 10 groups of equal size (or as near as possible) by ordering on the predicted probability (or equivalently, the linear predictor) then comparing the observed to expected number of positive responses in each group and performing a chi-squared test. This 'Hosmer-Lemeshow goodness-of-fit test' is implemented in most statistical software packages. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I was also going to say 'neither of them', so i've upvoted whuber's answer.
As well as criticising R^2, Hosmer & Lemeshow did propose an alternative measure of goodness-of-fit for logistic regression | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I was also going to say 'neither of them', so I've upvoted whuber's answer.
As well as criticising R^2, Hosmer & Lemeshow did propose an alternative measure of goodness-of-fit for logistic regression that is sometimes useful. This is based on dividing the data into (say) 10 groups of equal size (or as near as possible) by ordering on the predicted probability (or equivalently, the linear predictor) then comparing the observed to expected number of positive responses in each group and performing a chi-squared test. This 'Hosmer-Lemeshow goodness-of-fit test' is implemented in most statistical software packages. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I was also going to say 'neither of them', so I've upvoted whuber's answer.
As well as criticising R^2, Hosmer & Lemeshow did propose an alternative measure of goodness-of-fit for logistic regression |
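A rough base-R sketch of the grouping procedure described in this answer, on simulated data; this is only an illustration of the idea, and most statistical packages provide a ready-made implementation:

# Hand-rolled Hosmer-Lemeshow grouping: split observations into 10 groups by
# predicted probability and compare observed vs expected numbers of events.
set.seed(11)
d <- data.frame(x = rnorm(1000))
d$y <- rbinom(1000, 1, plogis(d$x))
fit <- glm(y ~ x, data = d, family = binomial)

p <- fitted(fit)
g <- cut(p, breaks = quantile(p, probs = seq(0, 1, 0.1)), include.lowest = TRUE)
obs_g <- tapply(d$y, g, sum)      # observed positives per group
exp_g <- tapply(p,   g, sum)      # expected positives per group
n_g   <- tapply(p,   g, length)
HL <- sum((obs_g - exp_g)^2 / (exp_g * (1 - exp_g / n_g)))   # H-L chi-squared statistic
pchisq(HL, df = 10 - 2, lower.tail = FALSE)                  # approximate p-value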
3,069 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Despite the arguments against using pseudo-r-squareds, some people will for various reasons want to continue using them at least at certain times. What I have internalized from my readings (and I'm sorry I cannot provide citations at the moment) is that
if both C&S and Nag. are below .5, C&S will be a better gauge;
if they're both above .5, Nag. will; and
if they straddle .5, punt.
Also, a formula whose results often fall between these two, mentioned by Scott Menard in Applied Logistic Regression Analysis (Sage), is
$[-2LL_0 - (-2LL_1)] / (-2LL_0)$.
This is denoted as "L" in the chart below. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Despite the arguments against using pseudo-r-squareds, some people will for various reasons want to continue using them at least at certain times. What I have internalized from my readings (and I'm s | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Despite the arguments against using pseudo-r-squareds, some people will for various reasons want to continue using them at least at certain times. What I have internalized from my readings (and I'm sorry I cannot provide citations at the moment) is that
if both C&S and Nag. are below .5, C&S will be a better gauge;
if they're both above .5, Nag. will; and
if they straddle .5, punt.
Also, a formula whose results often fall between these two, mentioned by Scott Menard in Applied Logistic Regression Analysis (Sage), is
$[-2LL_0 - (-2LL_1)] / (-2LL_0)$.
This is denoted as "L" in the chart below. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Despite the arguments against using pseudo-r-squareds, some people will for various reasons want to continue using them at least at certain times. What I have internalized from my readings (and I'm s |
3,070 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would agree in general that just using R2 is not good. But also see the point in @rolando2's comments that focusing on classification metrics alone may not be enough when comparing models.
I guess my contribution to the discussion is that I think that several measures are to be reported to assess different model qualities.
For example, one may not only want to know what happens at the optimum threshold which separates cases from non-cases and the respective error rates (false positives etc., with the c-statistic as an integrated measure of this), but also how close the model output, in terms of probabilities of being a case or non-case, is to reality (i.e., whether it corresponds to the actual rate of cases in the pool of observations with the given covariates) across ALL ranges of output. That is, if the model says an observation is 20% likely to have an output of 1, then around 20% of observations with similar risk factors should turn out to be cases. This is what is referred to as calibration quality, as opposed to discrimination (e.g. in Steyerberg et al, Assessing the performance of prediction models: a framework for traditional and novel measures). And this is exactly what a calibration plot visually assesses, while Hosmer & Lemeshow quantified it in their goodness-of-fit statistic. The criticism of H-L is that it depends on how you group observations, and, playing with it, I have seen the value change quite a bit on different data. Calibration slope and intercept may be a good alternative if one also does cross-validation or evaluates the model on hold-out data, as in predictive modelling.
Finally, some kind of R2, preferably the Brier score or the scaled Brier score, can be used to assess the overall fit. This measure is analogous to R2 for linear regression, with the error defined as the difference between the predicted probability and the binary output; the scaled version also takes into account the known variance of the binary output and normalises by q*(1-q).
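A minimal R sketch of the Brier score and its scaled version on simulated data (not part of the original answer):

# Brier score and scaled Brier score for a logistic fit.
set.seed(5)
d <- data.frame(x = rnorm(400))
d$y <- rbinom(400, 1, plogis(d$x))
fit <- glm(y ~ x, data = d, family = binomial)

p <- fitted(fit)
brier     <- mean((p - d$y)^2)
q         <- mean(d$y)                  # prevalence
brier_max <- q * (1 - q)                # Brier score of always predicting the prevalence
brier_scaled <- 1 - brier / brier_max   # analogous to an R^2; 1 means perfect predictions
c(Brier = brier, Scaled = brier_scaled)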
Also, for a narrower discussion on checking of adding a new predictor makes a better model, IDI - integrated discrimination improvement, which is similar to the difference in scaled Brier scores, could be a very good addition to, say, the change in c-statistics or how good reclassification was - as it checks how re-classification got better across all thresholds. (Pencina MJ, D’Agostino RB, D’Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond) | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would agree in general that just using R2 is not good. But also see that point of @rolando2 comments that focusing on classification metrics could be not enough while comparing the models.
I guess m | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would agree in general that just using R2 is not good. But also see the point in @rolando2's comments that focusing on classification metrics alone may not be enough when comparing models.
I guess my contribution to the discussion is that I think that several measures are to be reported to assess different model qualities.
For example, one may not only want to know what happens at the optimum threshold which separates cases from non-cases and the respective error rates (false positives etc., with the c-statistic as an integrated measure of this), but also how close the model output, in terms of probabilities of being a case or non-case, is to reality (i.e., whether it corresponds to the actual rate of cases in the pool of observations with the given covariates) across ALL ranges of output. That is, if the model says an observation is 20% likely to have an output of 1, then around 20% of observations with similar risk factors should turn out to be cases. This is what is referred to as calibration quality, as opposed to discrimination (e.g. in Steyerberg et al, Assessing the performance of prediction models: a framework for traditional and novel measures). And this is exactly what a calibration plot visually assesses, while Hosmer & Lemeshow quantified it in their goodness-of-fit statistic. The criticism of H-L is that it depends on how you group observations, and, playing with it, I have seen the value change quite a bit on different data. Calibration slope and intercept may be a good alternative if one also does cross-validation or evaluates the model on hold-out data, as in predictive modelling.
Finally, some kind of R2, preferably the Brier score or the scaled Brier score, can be used to assess the overall fit. This measure is analogous to R2 for linear regression, with the error defined as the difference between the predicted probability and the binary output; the scaled version also takes into account the known variance of the binary output and normalises by q*(1-q).
Also, for a narrower discussion on checking of adding a new predictor makes a better model, IDI - integrated discrimination improvement, which is similar to the difference in scaled Brier scores, could be a very good addition to, say, the change in c-statistics or how good reclassification was - as it checks how re-classification got better across all thresholds. (Pencina MJ, D’Agostino RB, D’Agostino RB, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond) | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would agree in general that just using R2 is not good. But also see the point in @rolando2's comments that focusing on classification metrics alone may not be enough when comparing models.
I guess m |
3,071 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would prefer the Nagelkerke as this model fit attains 1 when the model fits perfectly giving the reader a sense of how far your model is from perfect fit. The Cox & Shell does not attain 1 for perfect model fit and hence interpreting a value of 0.09 is a bit harder. See this url for further info on Pseudo RSquared for an explanation of various types of fits. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | I would prefer the Nagelkerke as this model fit attains 1 when the model fits perfectly giving the reader a sense of how far your model is from perfect fit. The Cox & Shell does not attain 1 for perfe | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would prefer the Nagelkerke as this model fit attains 1 when the model fits perfectly giving the reader a sense of how far your model is from perfect fit. The Cox & Shell does not attain 1 for perfect model fit and hence interpreting a value of 0.09 is a bit harder. See this url for further info on Pseudo RSquared for an explanation of various types of fits. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
I would prefer the Nagelkerke as this model fit attains 1 when the model fits perfectly giving the reader a sense of how far your model is from perfect fit. The Cox & Shell does not attain 1 for perfe |
3,072 | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Instead of the Nagelkerke way of scaling $R^2$ to allow a 1.0 to be attained, I prefer to substitute the effective sample size for $N$ in the $R^2$ formula. This will not reach 1.0 for perfect binary predictions but this approach translates to other settings such as survival analysis where often the effective $N$ is the number of events, and to ordinal regression. See https://hbiostat.org/bib/r2.html.
Of those my favorite is the modified Maddala-Cox-Snell $R^{2}_{m,p}$ which uses effective sample size $m$ and penalizes for $p$ covariates. In the normal linear model this is almost exactly the traditional $R^{2}_{\mathrm{adj}}$. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? | Instead of the Nagelkerke way of scaling $R^2$ to allow a 1.0 to be attained, I prefer to substitute the effective sample size for $N$ in the $R^2$ formula. This will not reach 1.0 for perfect binary | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Instead of the Nagelkerke way of scaling $R^2$ to allow a 1.0 to be attained, I prefer to substitute the effective sample size for $N$ in the $R^2$ formula. This will not reach 1.0 for perfect binary predictions but this approach translates to other settings such as survival analysis where often the effective $N$ is the number of events, and to ordinal regression. See https://hbiostat.org/bib/r2.html.
Of those my favorite is the modified Maddala-Cox-Snell $R^{2}_{m,p}$ which uses effective sample size $m$ and penalizes for $p$ covariates. In the normal linear model this is almost exactly the traditional $R^{2}_{\mathrm{adj}}$. | Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)?
Instead of the Nagelkerke way of scaling $R^2$ to allow a 1.0 to be attained, I prefer to substitute the effective sample size for $N$ in the $R^2$ formula. This will not reach 1.0 for perfect binary |
3,073 | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Bernoulli$^*$ cross-entropy loss is a special case of categorical cross-entropy loss for $m=2$.
$$
\begin{align}
\mathcal{L}(\theta)
&= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij}\log(p_{ij}) \\
&= -\frac{1}{n}\sum_{i=1}^n \left[y_i \log(p_i) + (1-y_i) \log(1-p_i)\right]
\end{align}
$$
Where $i$ indexes samples/observations and $j$ indexes classes, $y$ is the sample label (a binary scalar on the LHS, a one-hot vector on the RHS), and $p_{ij}\in(0,1):\sum_{j} p_{ij} =1\ \forall i$ is the prediction for a sample.
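As a quick check, a base-R sketch (hypothetical labels and predictions, not from the answer) showing that the categorical form with $m=2$ and the Bernoulli form give the same value:

# Categorical cross-entropy with m = 2 classes equals Bernoulli ("binary")
# cross-entropy computed on the positive-class probability.
y  <- c(1, 0, 1, 1, 0)              # binary labels
p1 <- c(0.9, 0.2, 0.7, 0.6, 0.1)    # predicted P(y = 1)

# Bernoulli form
bce <- -mean(y * log(p1) + (1 - y) * log(1 - p1))

# Categorical form with one-hot labels and a two-column probability matrix
Y <- cbind(1 - y, y)                # one-hot: columns are class 0 and class 1
P <- cbind(1 - p1, p1)
cce <- -mean(rowSums(Y * log(P)))

all.equal(bce, cce)                 # TRUE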
I write "Bernoulli cross-entropy" because this loss arises from a Bernoulli probability model. There is not a "binary distribution." A "binary cross-entropy" doesn't tell us if the thing that is binary is the one-hot vector of $k \ge 2$ labels, or if the author is using binary encoding for each trial (success or failure). This isn't a general convention, but it makes clear that these formulae arise from particular probability models. Conventional jargon is not clear in that way. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Bernoulli$^*$ cross-entropy loss is a special case of categorical cross-entropy loss for $m=2$.
$$
\begin{align}
\mathcal{L}(\theta)
&= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij}\log(p_{ij}) \\
&= -\ | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Bernoulli$^*$ cross-entropy loss is a special case of categorical cross-entropy loss for $m=2$.
$$
\begin{align}
\mathcal{L}(\theta)
&= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij}\log(p_{ij}) \\
&= -\frac{1}{n}\sum_{i=1}^n \left[y_i \log(p_i) + (1-y_i) \log(1-p_i)\right]
\end{align}
$$
Where $i$ indexes samples/observations and $j$ indexes classes, $y$ is the sample label (a binary scalar on the LHS, a one-hot vector on the RHS), and $p_{ij}\in(0,1):\sum_{j} p_{ij} =1\ \forall i$ is the prediction for a sample.
I write "Bernoulli cross-entropy" because this loss arises from a Bernoulli probability model. There is not a "binary distribution." A "binary cross-entropy" doesn't tell us if the thing that is binary is the one-hot vector of $k \ge 2$ labels, or if the author is using binary encoding for each trial (success or failure). This isn't a general convention, but it makes clear that these formulae arise from particular probability models. Conventional jargon is not clear in that way. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Bernoulli$^*$ cross-entropy loss is a special case of categorical cross-entropy loss for $m=2$.
$$
\begin{align}
\mathcal{L}(\theta)
&= -\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^m y_{ij}\log(p_{ij}) \\
&= -\ |
3,074 | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | There are three kinds of classification tasks:
Binary classification: two exclusive classes
Multi-class classification: more than two exclusive classes
Multi-label classification: non-exclusive classes (an example may belong to several at once)
Here, we can say
In the case of (1), you need to use binary cross entropy.
In the case of (2), you need to use categorical cross entropy.
In the case of (3), you need to use binary cross entropy.
You can just consider the multi-label classifier as a combination of multiple independent binary classifiers. If you have 10 classes here, you have 10 separate binary classifiers, and each binary classifier is trained independently. Thus, we can produce multiple labels for each sample, as in the sketch below. If you want to make sure that at least one label is assigned, you can select the class with the lowest classification loss, or use other metrics.
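A small R sketch of this view of multi-label classification, using hypothetical labels and predicted probabilities for three non-exclusive classes:

# Multi-label loss as a sum of independent per-class Bernoulli cross-entropies.
Y <- rbind(c(1, 0, 1),          # each row: one sample, three class indicators
           c(0, 1, 1))
P <- rbind(c(0.8, 0.3, 0.6),    # predicted probability for each class
           c(0.2, 0.7, 0.9))

per_class <- -(Y * log(P) + (1 - Y) * log(1 - P))
mean(rowSums(per_class))        # total loss per sample, averaged over samples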
I want to emphasize that multi-class classification is not similar to multi-label classification! Rather, multi-label classifier borrows an idea from the binary classifier! | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | There are three kinds of classification tasks:
Binary classification: two exclusive classes
Multi-class classification: more than two exclusive classes
Multi-label classification: just non-excl | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
There are three kinds of classification tasks:
Binary classification: two exclusive classes
Multi-class classification: more than two exclusive classes
Multi-label classification: non-exclusive classes (an example may belong to several at once)
Here, we can say
In the case of (1), you need to use binary cross entropy.
In the case of (2), you need to use categorical cross entropy.
In the case of (3), you need to use binary cross entropy.
You can just consider the multi-label classifier as a combination of multiple independent binary classifiers. If you have 10 classes here, you have 10 separate binary classifiers, and each binary classifier is trained independently. Thus, we can produce multiple labels for each sample. If you want to make sure that at least one label is assigned, you can select the class with the lowest classification loss, or use other metrics.
I want to emphasize that multi-class classification is not similar to multi-label classification! Rather, multi-label classifier borrows an idea from the binary classifier! | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
There are three kinds of classification tasks:
Binary classification: two exclusive classes
Multi-class classification: more than two exclusive classes
Multi-label classification: just non-excl |
3,075 | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
Binary cross-entropy is for multi-label classifications, whereas categorical cross entropy is for multi-class classification where each example belongs to a single class. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
| Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
Binary cross-entropy is for multi-label classifications, whereas categorical cross entropy is for multi-class classification where each example belongs to a single class. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Want to improve this post? Add citations from reputable sources by editing the post. Posts with unsourced content may be edited or deleted.
|
3,076 | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Binary Cross Entropy is a special case of Categorical Cross Entropy with 2 classes (class=1 and class=0). If we formulate Binary Cross Entropy this way, then we can use the general Cross-Entropy loss formula here: -Sum(y*log p) over the classes, where y is the label and p the predicted probability for that class. Notice how this reduces to binary cross entropy.
For multi-label classification, the idea is the same. But instead of say 3 labels to indicate 3 classes, we have 6 labels to indicate presence or absence of each class (class1=1, class1=0, class2=1, class2=0, class3=1, and class3=0). The loss then is the sum of cross-entropy loss for each of these 6 classes. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions? | Binary Cross Entropy is a special case of Categorical Cross Entropy with 2 classes (class=1, and class=0). If we formulate Binary Cross Entropy this way, then we can use the general Cross-Entropy los | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Binary Cross Entropy is a special case of Categorical Cross Entropy with 2 classes (class=1 and class=0). If we formulate Binary Cross Entropy this way, then we can use the general Cross-Entropy loss formula here: -Sum(y*log p) over the classes, where y is the label and p the predicted probability for that class. Notice how this reduces to binary cross entropy.
For multi-label classification, the idea is the same. But instead of say 3 labels to indicate 3 classes, we have 6 labels to indicate presence or absence of each class (class1=1, class1=0, class2=1, class2=0, class3=1, and class3=0). The loss then is the sum of cross-entropy loss for each of these 6 classes. | Should I use a categorical cross-entropy or binary cross-entropy loss for binary predictions?
Binary Cross Entropy is a special case of Categorical Cross Entropy with 2 classes (class=1, and class=0). If we formulate Binary Cross Entropy this way, then we can use the general Cross-Entropy los |
3,077 | How to tune hyperparameters of xgboost trees? | Since the interface to xgboost in caret has recently changed, here is a script that provides a fully commented walkthrough of using caret to tune xgboost hyper-parameters.
For this, I will be using the training data from the Kaggle competition "Give Me Some Credit".
1. Fitting an xgboost model
In this section, we:
fit an xgboost model with arbitrary hyperparameters
evaluate the loss (AUC-ROC) using cross-validation (xgb.cv)
plot the training versus testing evaluation metric
Here is some code to do this.
library(caret)
library(xgboost)
library(readr)
library(dplyr)
library(tidyr)
# load in the training data
df_train = read_csv("04-GiveMeSomeCredit/Data/cs-training.csv") %>%
na.omit() %>% # listwise deletion
select(-`[EMPTY]`) %>%
mutate(SeriousDlqin2yrs = factor(SeriousDlqin2yrs, # factor variable for classification
labels = c("Failure", "Success")))
# xgboost fitting with arbitrary parameters
xgb_params_1 = list(
objective = "binary:logistic", # binary classification
eta = 0.01, # learning rate
max.depth = 3, # max tree depth
eval_metric = "auc" # evaluation/loss metric
)
# fit the model with the arbitrary parameters specified above
xgb_1 = xgboost(data = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
label = df_train$SeriousDlqin2yrs,
params = xgb_params_1,
nrounds = 100, # max number of trees to build
verbose = TRUE,
print.every.n = 1,
early.stop.round = 10 # stop if no improvement within 10 trees
)
# cross-validate xgboost to get the accurate measure of error
xgb_cv_1 = xgb.cv(params = xgb_params_1,
data = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
label = df_train$SeriousDlqin2yrs,
nrounds = 100,
nfold = 5, # number of folds in K-fold
prediction = TRUE, # return the prediction using the final model
showsd = TRUE, # standard deviation of loss across folds
stratified = TRUE, # sample is unbalanced; use stratified sampling
verbose = TRUE,
print.every.n = 1,
early.stop.round = 10
)
# plot the AUC for the training and testing samples
xgb_cv_1$dt %>%
select(-contains("std")) %>%
mutate(IterationNum = 1:n()) %>%
gather(TestOrTrain, AUC, -IterationNum) %>%
ggplot(aes(x = IterationNum, y = AUC, group = TestOrTrain, color = TestOrTrain)) +
geom_line() +
theme_bw()
Here is what the testing versus training AUC looks like:
2. Hyperparameter search using train
For the hyperparameter search, we perform the following steps:
create a data.frame with unique combinations of parameters that we want trained models for.
Specify the control parameters that apply to each model's training, including the cross-validation parameters, and specify that the probabilities be computed so that the AUC can be computed
cross-validate & train the models for each parameter combination, saving the AUC for each model.
Here is some code that shows how to do this.
# set up the cross-validated hyper-parameter search
xgb_grid_1 = expand.grid(
nrounds = 1000,
eta = c(0.01, 0.001, 0.0001),
max_depth = c(2, 4, 6, 8, 10),
gamma = 1
)
# pack the training control parameters
xgb_trcontrol_1 = trainControl(
method = "cv",
number = 5,
verboseIter = TRUE,
returnData = FALSE,
returnResamp = "all", # save losses across all models
classProbs = TRUE, # set to TRUE for AUC to be computed
summaryFunction = twoClassSummary,
allowParallel = TRUE
)
# train the model for each parameter combination in the grid,
# using CV to evaluate
xgb_train_1 = train(
x = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
y = as.factor(df_train$SeriousDlqin2yrs),
trControl = xgb_trcontrol_1,
tuneGrid = xgb_grid_1,
method = "xgbTree"
)
# scatter plot of the AUC against max_depth and eta
ggplot(xgb_train_1$results, aes(x = as.factor(eta), y = max_depth, size = ROC, color = ROC)) +
geom_point() +
theme_bw() +
scale_size_continuous(guide = "none")
Lastly, you can create a bubbleplot for the AUC over the variations of eta and max_depth: | How to tune hyperparameters of xgboost trees? | Since the interface to xgboost in caret has recently changed, here is a script that provides a fully commented walkthrough of using caret to tune xgboost hyper-parameters.
For this, I will be using t | How to tune hyperparameters of xgboost trees?
Since the interface to xgboost in caret has recently changed, here is a script that provides a fully commented walkthrough of using caret to tune xgboost hyper-parameters.
For this, I will be using the training data from the Kaggle competition "Give Me Some Credit".
1. Fitting an xgboost model
In this section, we:
fit an xgboost model with arbitrary hyperparameters
evaluate the loss (AUC-ROC) using cross-validation (xgb.cv)
plot the training versus testing evaluation metric
Here is some code to do this.
library(caret)
library(xgboost)
library(readr)
library(dplyr)
library(tidyr)
# load in the training data
df_train = read_csv("04-GiveMeSomeCredit/Data/cs-training.csv") %>%
na.omit() %>% # listwise deletion
select(-`[EMPTY]`) %>%
mutate(SeriousDlqin2yrs = factor(SeriousDlqin2yrs, # factor variable for classification
labels = c("Failure", "Success")))
# xgboost fitting with arbitrary parameters
xgb_params_1 = list(
objective = "binary:logistic", # binary classification
eta = 0.01, # learning rate
max.depth = 3, # max tree depth
eval_metric = "auc" # evaluation/loss metric
)
# fit the model with the arbitrary parameters specified above
xgb_1 = xgboost(data = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
label = df_train$SeriousDlqin2yrs,
params = xgb_params_1,
nrounds = 100, # max number of trees to build
verbose = TRUE,
print.every.n = 1,
early.stop.round = 10 # stop if no improvement within 10 trees
)
# cross-validate xgboost to get the accurate measure of error
xgb_cv_1 = xgb.cv(params = xgb_params_1,
data = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
label = df_train$SeriousDlqin2yrs,
nrounds = 100,
nfold = 5, # number of folds in K-fold
prediction = TRUE, # return the prediction using the final model
showsd = TRUE, # standard deviation of loss across folds
stratified = TRUE, # sample is unbalanced; use stratified sampling
verbose = TRUE,
print.every.n = 1,
early.stop.round = 10
)
# plot the AUC for the training and testing samples
xgb_cv_1$dt %>%
select(-contains("std")) %>%
mutate(IterationNum = 1:n()) %>%
gather(TestOrTrain, AUC, -IterationNum) %>%
ggplot(aes(x = IterationNum, y = AUC, group = TestOrTrain, color = TestOrTrain)) +
geom_line() +
theme_bw()
Here is what the testing versus training AUC looks like:
2. Hyperparameter search using train
For the hyperparameter search, we perform the following steps:
create a data.frame with unique combinations of parameters that we want trained models for.
Specify the control parameters that apply to each model's training, including the cross-validation parameters, and specify that the probabilities be computed so that the AUC can be computed
cross-validate & train the models for each parameter combination, saving the AUC for each model.
Here is some code that shows how to do this.
# set up the cross-validated hyper-parameter search
xgb_grid_1 = expand.grid(
nrounds = 1000,
eta = c(0.01, 0.001, 0.0001),
max_depth = c(2, 4, 6, 8, 10),
gamma = 1
)
# pack the training control parameters
xgb_trcontrol_1 = trainControl(
method = "cv",
number = 5,
verboseIter = TRUE,
returnData = FALSE,
returnResamp = "all", # save losses across all models
classProbs = TRUE, # set to TRUE for AUC to be computed
summaryFunction = twoClassSummary,
allowParallel = TRUE
)
# train the model for each parameter combination in the grid,
# using CV to evaluate
xgb_train_1 = train(
x = as.matrix(df_train %>%
select(-SeriousDlqin2yrs)),
y = as.factor(df_train$SeriousDlqin2yrs),
trControl = xgb_trcontrol_1,
tuneGrid = xgb_grid_1,
method = "xgbTree"
)
# scatter plot of the AUC against max_depth and eta
ggplot(xgb_train_1$results, aes(x = as.factor(eta), y = max_depth, size = ROC, color = ROC)) +
geom_point() +
theme_bw() +
scale_size_continuous(guide = "none")
Lastly, you can create a bubbleplot for the AUC over the variations of eta and max_depth: | How to tune hyperparameters of xgboost trees?
Since the interface to xgboost in caret has recently changed, here is a script that provides a fully commented walkthrough of using caret to tune xgboost hyper-parameters.
For this, I will be using t |
3,078 | How to tune hyperparameters of xgboost trees? | The caret package has incorporated xgboost.
library(caret)    # trainControl, train (assumed to be loaded for the snippet below)
library(xgboost)  # backend used by method = "xgbTree"
cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1,number = 3,
#summaryFunction = twoClassSummary,
classProbs = TRUE,
allowParallel=T)
xgb.grid <- expand.grid(nrounds = 1000,
eta = c(0.01,0.05,0.1),
max_depth = c(2,4,6,8,10,14)
)
set.seed(45)
xgb_tune <-train(formula,
data=train,
method="xgbTree",
trControl=cv.ctrl,
tuneGrid=xgb.grid,
verbose=T,
metric="Kappa",
nthread =3
)
Sample output
eXtreme Gradient Boosting
32218 samples
41 predictor
2 classes: 'N', 'Y'
No pre-processing
Resampling: Cross-Validated (3 fold, repeated 1 times)
Summary of sample sizes: 21479, 21479, 21478
Resampling results
Accuracy Kappa Accuracy SD Kappa SD
0.9324911 0.1094426 0.0009742774 0.008972911
One drawback I see is that other parameters of xgboost, like subsample etc., are not supported by caret currently.
Edit
Gamma, colsample_bytree, min_child_weight and subsample etc can now (June 2017) be tuned directly using Caret. Just add them in the grid portion of the above code to make it work. Thanks usεr11852 for highliting it in the comment. | How to tune hyperparameters of xgboost trees? | Caret package have incorporated xgboost.
cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1,number = 3,
#summaryFunction = twoClassSummary,
cla | How to tune hyperparameters of xgboost trees?
Caret package have incorporated xgboost.
cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1,number = 3,
#summaryFunction = twoClassSummary,
classProbs = TRUE,
allowParallel=T)
xgb.grid <- expand.grid(nrounds = 1000,
eta = c(0.01,0.05,0.1),
max_depth = c(2,4,6,8,10,14)
)
set.seed(45)
xgb_tune <-train(formula,
data=train,
method="xgbTree",
trControl=cv.ctrl,
tuneGrid=xgb.grid,
verbose=T,
metric="Kappa",
nthread =3
)
Sample output
eXtreme Gradient Boosting
32218 samples
41 predictor
2 classes: 'N', 'Y'
No pre-processing
Resampling: Cross-Validated (3 fold, repeated 1 times)
Summary of sample sizes: 21479, 21479, 21478
Resampling results
Accuracy Kappa Accuracy SD Kappa SD
0.9324911 0.1094426 0.0009742774 0.008972911
One drawback i see is that other parameters of xgboost like subsample etc are not supported by caret currently.
Edit
Gamma, colsample_bytree, min_child_weight and subsample etc can now (June 2017) be tuned directly using Caret. Just add them in the grid portion of the above code to make it work. Thanks usεr11852 for highliting it in the comment. | How to tune hyperparameters of xgboost trees?
Caret package have incorporated xgboost.
cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1,number = 3,
#summaryFunction = twoClassSummary,
cla |
3,079 | How to tune hyperparameters of xgboost trees? | I know this is an old question, but I use a different method from the ones above. I use the BayesianOptimization function from the Bayesian Optimization package to find optimal parameters. To do this, you first create cross validation folds, then create a function xgb.cv.bayes that has as parameters the boosting hyper parameters you want to change. In this example I am tuning max.depth, min_child_weight, subsample, colsample_bytree, gamma. You then call xgb.cv in that function with the hyper parameters set to in the input parameters of xgb.cv.bayes. Then you call BayesianOptimization with the xgb.cv.bayes and the desired ranges of the boosting hyper parameters. init_points is the number of initial models with hyper parameters taken randomly from the specified ranges, and n_iter is the number of rounds of models after the initial points. The function outputs all boosting parameters and the test AUC.
library(rBayesianOptimization)  # provides KFold() and BayesianOptimization() (assumed package)
library(xgboost)                # xgb.cv
cv_folds <- KFold(as.matrix(df.train[,target.var]), nfolds = 5,
stratified = TRUE, seed = 50)
xgb.cv.bayes <- function(max.depth, min_child_weight, subsample, colsample_bytree, gamma){
  cv <- xgb.cv(params = list(booster = 'gbtree', eta = 0.05,
max_depth = max.depth,
min_child_weight = min_child_weight,
subsample = subsample,
colsample_bytree = colsample_bytree,
gamma = gamma,
lambda = 1, alpha = 0,
objective = 'binary:logistic',
eval_metric = 'auc'),
data = data.matrix(df.train[,-target.var]),
label = as.matrix(df.train[, target.var]),
nround = 500, folds = cv_folds, prediction = TRUE,
showsd = TRUE, early.stop.round = 5, maximize = TRUE,
verbose = 0
)
list(Score = cv$dt[, max(test.auc.mean)],
Pred = cv$pred)
}
xgb.bayes.model <- BayesianOptimization(
xgb.cv.bayes,
bounds = list(max.depth = c(2L, 12L),
min_child_weight = c(1L, 10L),
subsample = c(0.5, 1),
colsample_bytree = c(0.1, 0.4),
gamma = c(0, 10)
),
init_grid_dt = NULL,
init_points = 10, # number of random points to start search
n_iter = 20, # number of iterations after initial random points are set
acq = 'ucb', kappa = 2.576, eps = 0.0, verbose = TRUE
) | How to tune hyperparameters of xgboost trees? | I know this is an old question, but I use a different method from the ones above. I use the BayesianOptimization function from the Bayesian Optimization package to find optimal parameters. To do thi | How to tune hyperparameters of xgboost trees?
3,080 | How to tune hyperparameters of xgboost trees? | This is an older question, but I thought I would share how I tune xgboost parameters. I originally thought I would use caret for this, but recently found an issue handling all of the parameters as well as missing values. I also considered writing a loop iterating through different combinations of parameters, but I wanted it to run in parallel, and a plain loop would have required too much time. Using gridSearch from the NMOF package provided the best of both worlds (all parameters as well as parallel processing). Here is example code for binary classification (works on Windows and Linux):
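(A note on dependencies, inferred from the functions used below rather than stated in the original: this sketch relies on xgboost for xgb.cv, NMOF for gridSearch, parallel for makeCluster/detectCores/clusterExport/clusterEvalQ, and plyr for ldply.)
# library(xgboost); library(NMOF); library(parallel); library(plyr)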
# xgboost task parameters
nrounds <- 1000
folds <- 10
obj <- 'binary:logistic'
eval <- 'logloss'
# Parameter grid to search
params <- list(
eval_metric = eval,
objective = obj,
eta = c(0.1,0.01),
max_depth = c(4,6,8,10),
max_delta_step = c(0,1),
subsample = 1,
scale_pos_weight = 1
)
# Table to track performance from each worker node
res <- data.frame()
# Simple cross validated xgboost training function (returning minimum error for grid search)
xgbCV <- function (params) {
fit <- xgb.cv(
data = data.matrix(train),
label = trainLabel,
param =params,
missing = NA,
nfold = folds,
prediction = FALSE,
early.stop.round = 50,
maximize = FALSE,
nrounds = nrounds
)
rounds <- nrow(fit)
metric = paste('test.',eval,'.mean',sep='')
idx <- which.min(fit[,fit[[metric]]])
val <- fit[idx,][[metric]]
res <<- rbind(res,c(idx,val,rounds))
colnames(res) <<- c('idx','val','rounds')
return(val)
}
# Find minimal testing error in parallel
cl <- makeCluster(round(detectCores()/2))
clusterExport(cl, c("xgb.cv",'train','trainLabel','nrounds','res','eval','folds'))
sol <- gridSearch(
fun = xgbCV,
levels = params,
method = 'snow',
cl = cl,
keepNames = TRUE,
asList = TRUE
)
# Combine all model results
comb=clusterEvalQ(cl,res)
results <- ldply(comb,data.frame)
stopCluster(cl)
# Train model given solution above
params <- c(sol$minlevels,objective = obj, eval_metric = eval)
xgbModel <- xgboost(
data = xgb.DMatrix(data.matrix(train),missing=NaN, label = trainLabel),
param = params,
nrounds = results[which.min(results[,2]),1]
)
print(params)
print(results)
3,081 | Why do we minimize the negative likelihood if it is equivalent to maximization of the likelihood? | This is an alternative answer: optimizers in statistical packages usually work by minimizing the result of a function. If your function returns the likelihood value, it is more convenient to take its logarithm, which keeps the returned values in a manageable range. Then, since the log-likelihood and the likelihood have the same increasing or decreasing trend, you can minimize the negative log-likelihood in order to actually perform maximum likelihood estimation of the model you are testing. See for example the nlminb function in R here.
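A minimal sketch of this convention in R (a hypothetical example: estimating a normal mean and standard deviation by handing the negative log-likelihood to the minimiser nlminb):
set.seed(1)
x <- rnorm(50, mean = 2, sd = 3)
negloglik <- function(par) -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
fit <- nlminb(start = c(0, 1), objective = negloglik, lower = c(-Inf, 1e-8))
fit$par  # close to the maximum likelihood estimates of the mean and sd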
3,082 | Why do we minimize the negative likelihood if it is equivalent to maximization of the likelihood? | Optimisers typically minimize a function, so we use negative log-likelihood as minimising that is equivalent to maximising the log-likelihood or the likelihood itself.
Just for completeness, I would mention that the logarithm is a monotonic function, so optimising a function is the same as optimising the logarithm of it. Doing the log transform of the likelihood function makes it easier to handle (multiplication becomes sums) and this is also numerically more stable. This is because the magnitude of the likelihoods can be very small. Doing a log transform converts these small numbers to larger negative values which a finite precision machine can handle better.
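In symbols, for independent observations the likelihood factorises and the log turns the product into a sum, while the location of the optimum is unchanged because the log is strictly increasing:
$$\log L(\theta) = \log \prod_{i=1}^n f(x_i \mid \theta) = \sum_{i=1}^n \log f(x_i \mid \theta), \qquad \arg\max_\theta L(\theta) = \arg\max_\theta \log L(\theta) = \arg\min_\theta \big[-\log L(\theta)\big].$$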
3,083 | Why do we minimize the negative likelihood if it is equivalent to maximization of the likelihood? | Here minimizing means driving the distance between two distributions down to its lowest value: the target Bernoulli distribution and the distribution of the generated results. We measure the distance between two distributions using the Kullback-Leibler divergence (also called relative entropy), and, by the law of large numbers, minimizing the KL divergence amounts to minimizing the cross entropy (either multiclass cross entropy, see here, or binary classification, see here and here).
Thus
maximizing log likelihood is equivalent to minimizing "negative log likelihood"
can be translated to
Maximizing the log likelihood is equivalent to minimizing the distance between the two distributions, and thus to minimizing the KL divergence, and hence the cross entropy.
I think it has become quite intuitive.
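For reference, the identity behind this reading, with $\hat p$ the empirical distribution of the data and $q_\theta$ the model, is
$$\mathrm{KL}(\hat p \,\|\, q_\theta) = H(\hat p, q_\theta) - H(\hat p),$$
and $H(\hat p)$ does not depend on $\theta$, so minimizing the cross entropy $H(\hat p, q_\theta) = -\tfrac{1}{n}\sum_{i=1}^n \log q_\theta(x_i)$ is the same as minimizing the KL divergence, which is the same as maximizing the log likelihood.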
3,084 | Why do we minimize the negative likelihood if it is equivalent to maximization of the likelihood? | The answer is simpler than you might think. It is the convention that we call the optimization objective function a "cost function" or "loss function", and therefore we want to minimize it rather than maximize it; hence the negative log likelihood is formed, rather than the positive likelihood in your wording. Technically both are correct though. By the way, if we do want to maximize something, we usually call it a "utility function", and then the goal is to maximize it.
3,085 | Why do we minimize the negative likelihood if it is equivalent to maximization of the likelihood? | The main reason for using the log is to handle very small likelihoods. A 32-bit float can only go down to about 2^-126 (the smallest normal value) before it underflows to 0. It's not just because optimizers are built to minimize functions, since you could just as easily minimize -likelihood. If you have a large model computing the likelihood of a sequence with hundreds of factors, it's easy for the likelihood to go below floating-point limits. Taking the log turns 2^-126 into -126 (in base 2; roughly -87 in natural log), making it much more resistant to underflow.
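A quick numerical illustration in R (a hypothetical sample of 1000 standard-normal observations, in double precision):
set.seed(1)
x <- rnorm(1000)
prod(dnorm(x))             # the raw likelihood underflows to 0
sum(dnorm(x, log = TRUE))  # the log-likelihood is an ordinary number around -1400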
3,086 | Why is the Jeffreys prior useful? | Let me complete Zen's answer. I don't much like the notion of "representing ignorance". The important thing is not the Jeffreys prior but the Jeffreys posterior. This posterior aims to reflect as best as possible the information about the parameters brought by the data. The invariance property is naturally required for the following two points. Consider for instance the binomial model with unknown proportion parameter $\theta$ and odds parameter $\psi=\frac{\theta}{1-\theta}$.
The Jeffreys posterior on $\theta$ reflects as best as possible the information about $\theta$ brought by the data. There is a one-to-one correspondence between $\theta$ and $\psi$. Then, transforming the Jeffreys posterior on $\theta$ into a posterior on $\psi$ (via the usual change-of-variables formula) should yield a distribution reflecting as best as possible the information about $\psi$. Thus this distribution should be the Jeffreys posterior about $\psi$. This is the invariance property.
An important point when drawing conclusions from a statistical analysis is scientific communication. Imagine you give the Jeffreys posterior on $\theta$ to a scientific colleague. But he/she is interested in $\psi$ rather than $\theta$. Then this is not a problem with the invariance property: he/she just has to apply the change-of-variables formula.
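A small sketch of the binomial example (hypothetical numbers; with the Jeffreys prior Beta(1/2, 1/2), the posterior for $\theta$ after y successes in n trials is Beta(y + 1/2, n - y + 1/2)):
n <- 20; y <- 7
theta <- rbeta(1e5, y + 0.5, n - y + 0.5)  # draws from the Jeffreys posterior for theta
psi   <- theta / (1 - theta)               # the same draws re-expressed on the odds scale
quantile(psi, c(0.025, 0.5, 0.975))        # a posterior summary for psi, with no new derivation needed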
3,087 | Why is the Jeffreys prior useful? | Suppose that you and a friend are analyzing the same set of data using a normal model. You adopt the usual parameterization of the normal model using the mean and the variance as parameters, but your friend prefers to parameterize the normal model with the coefficient of variation and the precision as parameters (which is perfectly "legal"). If both of you use Jeffreys' priors, your posterior distribution will be your friend's posterior distribution properly transformed (don't forget that Jacobian) from his parameterization to yours. It is in this sense that the Jeffreys' prior is "invariant"
(By the way, "invariant" is a horrible word; what we really mean is that it is "covariant" in the same sense of tensor calculus/differential geometry, but, of course, this term already has a well established probabilistic meaning, so we can't use it.)
Why is this consistency property desired? Because, if Jeffreys' prior has any chance of representing ignorance about the value of the parameters in an absolute sense (actually, it doesn't, but for other reasons not related to "invariance"), and not ignorance relatively to a particular parameterization of the model, it must be the case that, no matter which parameterizations we arbitrarily choose to start with, our posteriors should "match" after transformation.
Jeffreys himself violated this "invariance" property routinely when constructing his priors.
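For reference, the transformation step mentioned above ("don't forget that Jacobian") is, for a one-to-one reparameterization $\psi = h(\theta)$,
$$p_\Psi(\psi) = p_\Theta\!\left(h^{-1}(\psi)\right)\left|\frac{d\theta}{d\psi}\right|,$$
applied here to the posterior densities.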
3,088 | Why is the Jeffreys prior useful? | To add some quotations to Zen's great answer: According to Jaynes, the Jeffreys prior is an example of the principle of transformation groups, which results from the principle of indifference:
The essence of the principle is just: (1) we recognize that a
probability assignment is a means of describing a certain state of
knowledge. (2) If the available evidence gives us no reason to
consider proposition $A_1$ either more or less likely than $A_2$, then
the only honest way we can describe that state of knowledge is to
assign them equal probabilities: $p_1=p_2$. Any other procedure would
be inconsistent in the sense that, by a mere interchange of the
labels $(1, 2)$ we could then generate a new problem in which our
state of knowledge is the same but in which we are assigning different
probabilities…
Now, to answer your question: “Why wouldn't you want the prior to change under a change of variables?”
According to Jaynes, the parametrization is another kind of arbitrary label, and one should not be able to “by a mere interchange of the labels generate a new problem in which our state of knowledge is the same but in which we are assigning different probabilities.”
3,089 | Why is the Jeffreys prior useful? | While often of interest, if only for setting a reference prior against which to gauge other priors, Jeffreys priors may be completely useless, for instance when they lead to improper posteriors: this is the case with the simple two-component Gaussian mixture
$$p\mathcal{N}(\mu_0,\sigma_0^2)+(1-p)\mathcal{N}(\mu_1,\sigma_1^2)$$
with all parameters unknown. In this case, the posterior of the Jeffreys prior does not exist, no matter how many observations are available. (The proof is available in a recent paper I wrote with Clara Grazian.)
3,090 | Why is the Jeffreys prior useful? | Jeffreys prior is useless. This is because:
It just specifies the form of the distribution; it does not tell you what its parameters should be.
You are never completely ignorant - there is always something about the parameter that you know (e.g. often it cannot be infinity). Use it for your inference by defining a prior distribution. Don't lie to yourself by saying that you don't know anything.
"Invariance under transformation" is not a desirable property. Your likelihood changes under transformation (e.g. by the Jacobian). This does not create "new problems," pace Jaynes. Why shouldn't the prior be treated the same?
Just don't use it.
3,091 | Criticism of Pearl's theory of causality | Some authors dislike Pearl's focus on the directed acyclic graph (DAG) as the way in which to view causality. Pearl essentially argues that any causal system can be considered as a non-parametric structural equation model (NPSEM), in which the value of each node is taken as a function of its parents and some individual error term; the error terms between different nodes may in general be correlated, to represent common causes.
Cartwright's book Hunting Causes and Using Them, for example, gives an example involving a car engine, which she claims cannot be modelled in the NPSEM framework. Pearl disputes this in his review of Cartwright's book.
Others caution that the use of DAGs can be misleading, in that the arrows lend an apparent authority to a chosen model as having causal implications, when this may not be the case at all. See Dawid's Beware of the DAG. For example, the three DAGs $A \rightarrow B \rightarrow C$, $A \leftarrow B \rightarrow C$ and $A \leftarrow B \leftarrow C$ all induce the same probabilistic model under Pearl's d-separation criterion, which is that A is independent of C given B. They are therefore indistinguishable based upon observational data.
However, they have quite different causal interpretations, so if we wish to learn about the causal relationships here we would need more than simply observational data, whether that be the results of interventional experiments, prior information about the system, or something else.
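A small simulation sketch (hypothetical standardized linear relationships) of why such DAGs are observationally indistinguishable: the chain and the fork below imply essentially the same correlation matrix, and in both cases the association between $A$ and $C$ vanishes once $B$ is conditioned on.
set.seed(1)
n <- 1e5
# Chain A -> B -> C
a1 <- rnorm(n); b1 <- 0.7*a1 + rnorm(n, sd = sqrt(1 - 0.49)); c1 <- 0.7*b1 + rnorm(n, sd = sqrt(1 - 0.49))
# Fork  A <- B -> C
b2 <- rnorm(n); a2 <- 0.7*b2 + rnorm(n, sd = sqrt(1 - 0.49)); c2 <- 0.7*b2 + rnorm(n, sd = sqrt(1 - 0.49))
round(cor(cbind(a1, b1, c1)), 2); round(cor(cbind(a2, b2, c2)), 2)  # essentially identical
coef(lm(c1 ~ a1 + b1))["a1"]; coef(lm(c2 ~ a2 + b2))["a2"]          # both near zero given B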
3,092 | Criticism of Pearl's theory of causality | I think this framework has a lot of trouble with general equilibrium effects or Stable Unit Treatment Value Assumption violations. In that case, the "untreated" observations no longer provide the desired counterfactual in a meaningful way. Massive job training programs that shift the entire wage distribution are one example. The counterfactual may not even be well-defined in some cases. In Morgan and Winship's Counterfactuals and Causal Inference, they give an example of the claim that the 2000 election would have gone in favor of Al Gore if felons and ex-felons had been allowed to vote. They point out that the counterfactual world would have very different candidates and issues, so that you cannot characterize the alternative causal state. The ceteris paribus effect would not be the policy relevant parameter here.
3,093 | Criticism of Pearl's theory of causality | Counterfactual formal causal reasoning of the form motivated by Pearl is ill-suited to the analysis of complex dynamic causal systems (i.e. networks in which every variable is either directly or indirectly the cause of every variable at some future time).1
Disclaimer 1: I am a fan of Pearl's framework, and had the privilege of being taught counterfactual formal causal inference by two of his early exponents (Hernán and Robins).
Disclaimer 2: Levins was my mentor when I was a doctoral student, and I have published using his methods.
The counterfactual theory of causality, and the counterfactual formal causal inference/reasoning built atop of it, are profoundly useful for reasoning through both the strengths and weaknesses of causal inference based on specific combinations of study design and analyses. However, to my mind, the counterfactual theory is a theory of terminal causal narratives: $A$, and $L$, and $U$, (and maybe $V$ or $E$) happened, and then they caused $\pmb{Y}$ to happen (or not). However, the counterfactual theory of causality does not appear to describe or infer the behavior of complex causal systems, and is thus not a theory of cyclic causal narratives.
I would raise as a counterexample of a formal causal reasoning system Levins' qualitative loop analysis, which, like Pearl's work with DAGs also hearkens back to Wright's path analysis, but employs signed digraphs in a different causal formalism (in fact, one obvious distinction is that qualitative loop analysis employs signed digraphs which are cyclic, not acyclic), to describe the behavior of such causal systems under different kinds of perturbation.
The questions posed and answered by Levins' method (and subsequent elaborations on it) include:
How does the level of each variable in a complex system respond to press perturbation at one or more variables in the system?
How does the life expectancy/turnover of each variable in a complex system respond to press perturbation at one or more variables in the system?
Does variance induced by system perturbation (at specific variables) tend to diffuse across the system, or sink into very few variables in the system?
Where do (Lyapunov) stability and instability emerge in the system?
Where does system behavior depend on either ontological or epistemic uncertainty regarding the existence or magnitude of specific direct causal relationships comprising the system?
What are the signs of expected bivariate correlations (or correspondences) between any variable pairs given a press perturbation at one or more variables in the system?
(Because most of Levins' loop analysis is a purely deductive method—although, see Dambacher's extension—only the bolded question is directly statistical.)
These questions are different questions than the ones posed and answered in the counterfactual formal causal inference championed by Pearl. I have even had difficulty finding examples of counterfactual formal causal inference applied in the context of stochastic processes and autoregressive models (e.g., dynamic models including $Y_{t}$ as a function of $Y_{<t}$), although this may be more due to my lack of familiarity with the intersections of Bayesian probabilistic causal graphs and Pearl's work, than due to a specific deficiency in the latter.
Aside: Sugihara's empirical dynamic modeling (see the tutorial by Chang et al.), an elaboration on state space reconstructions, likewise provides an alternative perspective to counterfactual formal causal reasoning, also from the world of complex causal systems.
Video of Sugihara and friends blowing your mind and mine by recovering the topology of a 3D complex system from a 1D time series.
1 A point similar to something Spirtes pointed out quite a while ago.
References
Chang, C.-W., Ushio, M., & Hsieh, C. (2017). Empirical Dynamic Modeling for Beginners. Ecological Research, 32(6), 785–796.
Dambacher, J. M., Li, H. W., & Rossignol, P. A. (2003). Qualitative predictions in model ecosystems. Ecological Modelling, 161, 79–93.
Dambacher, J. M., Levins, R., & Rossignol, P. A. (2005). Life expectancy change in perturbed communities: Derivation and qualitative analysis. Mathematical Biosciences, 197, 1–14.
Levins, R. (1974). The Qualitative Analysis of Partially Specified Systems. Annals of the New York Academy of Sciences, 231, 123–138.
Puccia, C. J., & Levins, R. (1986). Qualitative Modeling of Complex Systems: An Introduction to Loop Analysis and Time Averaging. Harvard University Press.
Spirtes, P. (1995). Directed Cyclic Graphical Representations of Feedback. In P. Besnard & S. Hanks (Eds.), Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers, Inc.
Sugihara, G., May, R., Ye, H., Hsieh, C., Deyle, E., Fogarty, M., & Munch, S. (2012). Detecting Causality in Complex Ecosystems. Science, 338, 496–500.
Wright, S. (1934). The Method of Path Coefficients. The Annals of Mathematical Statistics, 5(3), 161–215.
3,094 | Criticism of Pearl's theory of causality | The most important criticism of Pearl's system is, from my perspective, that it has not yielded any practical, empirical advances anywhere it has been used. Given how long it has been around, there's no reason to think it will ever be a practical tool. This indicates that it can be used for some theoretical and perhaps didactic purposes, but a practical researcher will gain little from studying it.
3,095 | Criticism of Pearl's theory of causality | Reading answers and comments I feel the opportunity to add something.
The accepted answer, by rje42, is focused on DAGs and non-parametric systems; strongly related concepts. Now, capabilities and limitations of these tools can be argued; however, we have to say that linear SEMs are part of the Theory presented in Pearl's manual (2000 or 2009). Pearl underscores the limitations of linear systems, but they are part of the Theory presented.
The comment of Paul seems crucial to me: “The whole point of Pearl's work is that different causal models can generate the same probabilistic model and hence the same-looking data. So it's not enough to have a probabilistic model, you have to base your analysis and causal interpretation on the full DAG to get reliable results.” Let me say that the last phrase can be rewritten as: so it's not enough to have a probabilistic model, you need a structural causal equation/model (SCM). Pearl asked us to keep in mind a demarcation line between statistical and structural/causal concepts. The former alone can never be enough for proper causal inference; we need the latter too, and here lies the root of most problems. In my opinion this clear distinction, and his defense of it, represent the most important merit of Pearl.
Moreover Pearl suggests some tools such as: DAG, d-separation, do-operator, backdoor and front door criterion, among others.
All of them are important, and express his theory, but all come from the demarcation line mentioned above, and all help us to work according with it. Put it differently, is not so tremendously relevant to argue pro or cons of one specific tool, it is rather about the necessity of the demarcation line. If the demarcation line disappears, all of Pearl's theory goes down, or, at best, adds just a bit of language to what we already have. However, this seems to me an unsustainable position. If some authors today still seriously argue so, please give me some reference about it.
I'm not yet expert enough to challenge the capability of all these tools, but they seem clear to me and, so far, they seem to me to work. I come from the econometric side, and about the tools therein I think the opposite. I can say that econometrics is very widespread, very useful, very practical, very empirical, very challenging and very highly regarded; and causality is one of its most interesting topics. In econometrics some causal issues can be fruitfully addressed with RCT tools. However, unfortunately, we can show that the econometrics literature has too often addressed causal problems improperly. In short, this happened due to theoretical flaws. The dimensions of this problem emerge in their full width in:
Regression and Causation: A Critical Examination of Six Econometrics Textbooks - Chen and Pearl (2013) and
Trygve Haavelmo and the Emergence of causal calculus - Pearl; Econometric Theory (2015)
In these related discussions some points are addressed:
Under which assumptions a regression can be interpreted causally?
Difference Between Simultaneous Equation Model and Structural Equation Model
I don’t know if "equilibrium problems" invoked by Dimitriy V. Masterov cannot be addressed properly with Pearl SCMs, but from here:
Eight Myths About Causality and Structural Equation Models - Handbook of Causal Analysis for Social Research, Springer (2013)
it emerges that some frequently invoked limitations are false.
Finally, the argument of Matt seems to me not relevant, but not for "citations evidence" as argued by Neil G. In two words, Matt's point is
“Pearl's theory can be good for itself but not for the purpose of practice”.
This seems to me a logically wrong argument, definitely. The matter of fact is that Pearl presented a theory. So, it suffices to mention the motto “nothing can be more practical and useful than a good theory”. It is obvious that the examples in the book are as simple as possible; good didactic practice demands this. Making things more complicated is always possible and not difficult; on the other hand, proper simplifications are usually hard to make. The possibility of facing simple problems, or of rendering them simpler, seems to me strong evidence in favor of Pearl's Theory.
That said, the fact that no real issues are solved by Pearl's Theory (if it is true) is neither his responsibility nor the responsibility of his theory. He himself complains that professors and researchers haven't spent enough time on his theory and tools (and on causal inference in general). This fact could be justified only in the face of a clear theoretical flaw in Pearl's theory and the clear superiority of another one. It is curious to see that probably the opposite is true; note that Pearl argued that RCTs boil down to SCMs. The problem that Matt underscores comes from professors' and researchers' responsibility.
I think that in the future Pearl's Theory will become common in econometrics too.
The accepted answer, by rje42, is focused on DAG’s and non-parametric systems; strongly related concepts. Now, capabilities and li | Criticism of Pearl's theory of causality
Reading answers and comments I feel the opportunity to add something.
The accepted answer, by rje42, is focused on DAG’s and non-parametric systems; strongly related concepts. Now, capabilities and limitations of these tools can be argued, however we have to say that linear SEMs are part of the Theory presented in Pearl manual (2000 or 2009). Pearl underscores limitations of linear systems but they are part of the Theory presented.
The comment of Paul seems crucial to me: “The whole point of Pearl's work is that different causal models can generate the same probabilistic model and hence the same-looking data. So it's not enough to have a probabilistic model, you have to base your analysis and causal interpretation on the full DAG to get reliable results.” Let me says that the last phrase can be rewritten as: so it's not enough to have a probabilistic model, you need structural causal equation/model (SCM). Pearl asked us to keep in mind a demarcation line between statistical and structural/causal concepts. The former alone can be never enough for proper causal inference, we need the latter too; here stays the root of most problems. In my opinion this clear distinction and his defense, represent the most important merit of Pearl.
Moreover Pearl suggests some tools such as: DAG, d-separation, do-operator, backdoor and front door criterion, among others.
All of them are important, and express his theory, but all come from the demarcation line mentioned above, and all help us to work according with it. Put it differently, is not so tremendously relevant to argue pro or cons of one specific tool, it is rather about the necessity of the demarcation line. If the demarcation line disappears, all of Pearl's theory goes down, or, at best, adds just a bit of language to what we already have. However, this seems to me an unsustainable position. If some authors today still seriously argue so, please give me some reference about it.
I'm not yet expert enough to challenge the capability of all these tools, but they seem clear to me, and, until now, it seems to me that they work. I come from the econometric side and, about the tools therein, that I think the opposite. I can say that econometrics is: very widespread, very useful, very practical, very empirical, very challenging, very considered matters; and it has one of his most interesting topics in causality. In econometrics some causal issues can be fruitfully addressed with RCT tools. However, unfortunately, we can show that the econometrics literature, too often, addressed causal problems not properly. Shortly, this happened due to theoretical flaws. The dimensions of this problem emerge in their full width in:
Regression and Causation: A Critical Examination of Six Econometrics Textbooks - Chen and Pearl (2013) and
Trygve Haavelmo and the Emergence of causal calculus - Pearl; Econometric Theory (2015)
In these related discussions some point are addressed:
Under which assumptions a regression can be interpreted causally?
Difference Between Simultaneous Equation Model and Structural Equation Model
I don’t know if "equilibrium problems" invoked by Dimitriy V. Masterov cannot be addressed properly with Pearl SCMs, but from here:
Eight Myths About Causality and Structural Equation Models - Handbook of Causal Analysis for Social Research, Springer (2013)
it emerges that some frequently invoked limitations are false.
Finally, the argument of Matt seems to me not relevant, but not for "citations evidence" as argued by Neil G. In two words, Matt's point is
“Pearl's theory can be good for itself but not for the purpose of practice”.
This seems to me a logically wrong argument, definitely. Matter of fact is that Pearl presented a theory. So, it suffices to mention the motto “nothing can be more practical and useful than a good theory”. It is obvious that the examples in the book are as simple as possible, good didactic practice demands this. Making things more complicated is always possible and not difficult; on the other hand, proper simplifications are usually hard to make. The possibility to face simple problems or to rend them more simple seems to me strong evidence in favor of Pearl's Theory.
That said, the fact that no real issues are solved by Pearl's Theory (if it is true) is neither his responsibility not the responsibility of his theory. He himself complains that professors and researchers haven't spent time enough on his theory and tools (and on causal inference in general). This fact could be justified only in face of a clear theoretical flaw of Pearl's theory and clear superiority of another one. It is curious to see that probably the opposite is true; note that Pearl argued that RCT boil down to SCM. The problem that Matt underscores comes from professors' and researchers' responsibility.
I think that in the future Pearl's theory will become common in econometrics too.
3,096 | Criticism of Pearl's theory of causality | The synthesis behind causal reasoning is access to different aspects of reality (or of a model) using inferences from data. Paradoxical features in a model are avoided through formalisation in mathematical language. Informally, these counterintuitive features arise when observing a model, as opposed to intervening in it through the do-calculus.
The question of the openness of algorithms in the foundations of mathematics and logic is debated. One of the central points is that formalisation lets us harness infinite power in practice but turns these consistent procedures into closed ones. In principle, when you act (do), you close the system; observation, by contrast, has an active role in quantum mechanics, unlike in Pearl's causation hierarchy. Quantum contextuality asserts fundamentally acausal relations between events, which manifest as bugs! A bug could be a way into the depths of an iceberg! Technically, DAGs are insufficient to reveal the deeper structure of nature, where unrelated events could be related in a non-trivial way via topology.
Pearl's vision is significant in the sense that he formally constructed a theory of causation, and this integrity is elegant!
3,097 | Assumptions regarding bootstrap estimates of uncertainty | There are several ways that one can conceivably apply the bootstrap. The two most basic approaches are what are deemed the "nonparametric" and "parametric" bootstrap. The second one assumes that the model you're using is (essentially) correct.
Let's focus on the first one. We'll assume that you have a random sample $X_1, X_2, \ldots, X_n$ distributed according to the distribution function $F$. (Assuming otherwise requires modified approaches.) Let $\hat{F}_n(x) = n^{-1} \sum_{i=1}^n \mathbf{1}(X_i \leq x)$ be the empirical cumulative distribution function. Much of the motivation for the bootstrap comes from a couple of facts.
Dvoretzky–Kiefer–Wolfowitz inequality
$$
\renewcommand{\Pr}{\mathbb{P}}
\Pr\big( \textstyle\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| > \varepsilon \big) \leq 2 e^{-2n \varepsilon^2} \> .
$$
What this shows is that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability. Indeed, this inequality coupled with the Borel–Cantelli lemma shows immediately that $\sup_{x \in \mathbb{R}} \,|\hat{F}_n(x) - F(x)| \to 0$ almost surely.
There are no additional conditions on the form of $F$ in order to guarantee this convergence.
Heuristically, then, if we are interested in some functional $T(F)$ of the distribution function that is smooth, then we expect $T(\hat{F}_n)$ to be close to $T(F)$.
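To make this tangible, here is a small numpy sketch of my own (not part of the original answer): it assumes an Exp(1) population purely for illustration, draws samples of increasing size, and reports the sup-distance $\sup_x |\hat{F}_n(x) - F(x)|$ (approximated on a grid) together with the plug-in estimate $T(\hat{F}_n)$ when $T$ is the mean, whose true value is $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf(sample):
    """Return a function x -> F_n(x), the empirical CDF of `sample`."""
    xs = np.sort(sample)
    return lambda x: np.searchsorted(xs, x, side="right") / xs.size

true_cdf = lambda x: 1.0 - np.exp(-x)    # F for an Exp(1) population (illustrative choice)
grid = np.linspace(0.0, 10.0, 2001)      # grid on which the sup-distance is approximated

for n in (50, 500, 5000, 50000):
    sample = rng.exponential(scale=1.0, size=n)
    Fn = ecdf(sample)
    sup_dist = np.max(np.abs(Fn(grid) - true_cdf(grid)))  # ~ sup_x |F_n(x) - F(x)|
    plug_in_mean = sample.mean()                           # T(F_n) when T(F) is the mean (true value 1)
    print(f"n={n:6d}  sup|Fn - F| ~ {sup_dist:.4f}  plug-in mean = {plug_in_mean:.4f}")
```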
(Pointwise) Unbiasedness of $\hat{F}_n(x)$
By simple linearity of expectation and the definition of $\hat{F}_n(x)$, for each $x \in \mathbb{R}$,
$$
\newcommand{\e}{\mathbb{E}}
\e_F \hat{F}_n(x) = F(x) \>.
$$
Suppose we are interested in the mean $\mu = T(F)$. Then the unbiasedness of the empirical measure extends to the unbiasedness of linear functionals of the empirical measure. So,
$$
\e_F T(\hat{F}_n) = \e_F \bar{X}_n = \mu = T(F) \> .
$$
So $T(\hat{F}_n)$ is correct on average, and since $\hat{F}_n$ rapidly approaches $F$, then (heuristically) $T(\hat{F}_n)$ rapidly approaches $T(F)$.
To construct a confidence interval (which is, essentially, what the bootstrap is all about), we can use the central limit theorem, the consistency of empirical quantiles and the delta method as tools to move from simple linear functionals to more complicated statistics of interest.
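For concreteness, here is a hedged sketch of the nonparametric (percentile) bootstrap interval that this paragraph alludes to; the statistic (the median), the lognormal population, the number of resamples and the confidence level are all my own illustrative choices rather than anything prescribed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_bootstrap_ci(sample, statistic, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI: resample the data with replacement, recompute the
    statistic for each resample, and take empirical quantiles of the replicates."""
    n = sample.size
    replicates = np.empty(n_boot)
    for b in range(n_boot):
        resample = sample[rng.integers(0, n, size=n)]  # n draws from the empirical distribution F_n
        replicates[b] = statistic(resample)
    return np.quantile(replicates, [alpha / 2, 1 - alpha / 2])

data = rng.lognormal(mean=0.0, sigma=1.0, size=200)    # one observed sample (illustrative)
low, high = percentile_bootstrap_ci(data, np.median)
print(f"point estimate (median): {np.median(data):.3f}")
print(f"95% percentile bootstrap CI: ({low:.3f}, {high:.3f})")
```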
Good references are
B. Efron, Bootstrap methods: Another look at the jackknife, Ann. Stat., vol. 7, no. 1, 1–26.
B. Efron and R. Tibshirani, An Introduction to the Bootstrap, Chapman–Hall, 1994.
G. A. Young and R. L. Smith, Essentials of Statistical Inference, Cambridge University Press, 2005, Chapter 11.
A. W. van der Vaart, Asymptotic Statistics, Cambridge University Press, 1998, Chapter 23.
P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap. Ann. Stat., vol. 9, no. 6 (1981), 1196–1217.
3,098 | Assumptions regarding bootstrap estimates of uncertainty | Here is a different approach to thinking about it:
Start with the theory: when we know the true distribution, we can discover properties of sample statistics by simulating from that true distribution. This is how Gosset developed the t-distribution and t-test, by sampling from known normals and computing the statistic. This is actually a form of the parametric bootstrap. Note that we are simulating to discover the behavior of the statistics (sometimes relative to the parameters).
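That "simulate from a known population to learn how a statistic behaves" idea can be sketched as follows. This is my own illustrative code, not Gosset's procedure: it assumes a standard normal population and a sample size of 10, and uses scipy only to compare the simulated quantiles of the t-statistic with the theoretical $t_{n-1}$ quantiles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, n_sim = 10, 100_000

# Draw many samples from a *known* population (standard normal) and compute the
# one-sample t-statistic for each; this learns the sampling distribution by simulation.
samples = rng.normal(loc=0.0, scale=1.0, size=(n_sim, n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Compare simulated quantiles with the theoretical t distribution on n - 1 degrees of freedom.
for q in (0.90, 0.95, 0.975):
    print(f"q={q}:  simulated {np.quantile(t_stats, q):.3f}   theoretical {stats.t.ppf(q, df=n - 1):.3f}")
```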
Now, what if we do not know the population distribution? We have an estimate of it in the empirical distribution, and we can sample from that. By sampling from the empirical distribution (which is known) we can see the relationship between the bootstrap samples and the empirical distribution (the population for the bootstrap samples). We then infer that the relationship of the bootstrap samples to the empirical distribution is the same as that of the sample to the unknown population. Of course, how well this relationship translates will depend on how representative the sample is of the population.
Remember that we are not using the means of the bootstrap samples to estimate the population mean; we use the sample mean for that (or whatever the statistic of interest is). Rather, we use the bootstrap samples to estimate properties (spread, bias) of the sampling process. And using sampling from a known population (which we hope is representative of the population of interest) to learn the effects of sampling makes sense and is much less circular.
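A minimal sketch of that last point, under assumptions of my own (a gamma population and the sample standard deviation as the statistic): the bootstrap replicates are used only to estimate the spread and bias of the sampling process, while the point estimate still comes from the original sample.

```python
import numpy as np

rng = np.random.default_rng(3)

def bootstrap_bias_and_se(sample, statistic, n_boot=4000):
    """Estimate the bias and standard error of `statistic` by resampling from the
    empirical distribution of `sample` (the nonparametric bootstrap)."""
    n = sample.size
    reps = np.array([statistic(sample[rng.integers(0, n, size=n)]) for _ in range(n_boot)])
    theta_hat = statistic(sample)          # the actual point estimate, from the original sample
    return theta_hat, reps.mean() - theta_hat, reps.std(ddof=1)

data = rng.gamma(shape=2.0, scale=1.5, size=80)        # one observed sample (illustrative)
theta_hat, bias, se = bootstrap_bias_and_se(data, lambda x: x.std(ddof=1))
print(f"estimate = {theta_hat:.3f}, bootstrap bias ~ {bias:.3f}, bootstrap SE ~ {se:.3f}")
```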
3,099 | Assumptions regarding bootstrap estimates of uncertainty | The main trick (and sting) of bootstrapping is that it is an asymptotic theory: if you have an infinite sample to start with, the empirical distribution is going to be so close to the actual distribution that the difference is negligible.
Unfortunately, bootstrapping is often applied with small sample sizes. The common feeling is that bootstrapping has shown itself to work in some very non-asymptotic situations, but be careful nonetheless. If your sample size is too small, you are in fact working conditionally on your sample being a 'good representation' of the true distribution, which leads very easily to reasoning in circles :-)
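One way to see this caution in action is a small coverage experiment; the sketch below is my own, with an exponential population, the mean as the target and percentile intervals, all chosen purely for illustration. Exact numbers vary from run to run, but the small-sample interval will often fall noticeably short of its nominal level.

```python
import numpy as np

rng = np.random.default_rng(4)
true_mean = 1.0   # mean of the assumed Exp(1) population

def percentile_ci_for_mean(sample, n_boot=1000, alpha=0.05):
    """Percentile bootstrap interval for the mean of `sample`."""
    n = sample.size
    reps = np.array([sample[rng.integers(0, n, size=n)].mean() for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

for n in (8, 100):
    n_rep, covered = 1000, 0
    for _ in range(n_rep):
        sample = rng.exponential(scale=1.0, size=n)
        low, high = percentile_ci_for_mean(sample)
        covered += (low <= true_mean <= high)
    print(f"n={n:3d}: empirical coverage of the nominal 95% interval ~ {covered / n_rep:.3f}")
```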
3,100 | Assumptions regarding bootstrap estimates of uncertainty | I would argue not from the perspective of "asymptotically, the empirical distribution will be close to the actual distribution" (which, of course, is very true), but from a "long run perspective". In other words, in any particular case, the empirical distribution derived by bootstrapping will be off (sometimes shifted too far this way, sometimes shifted too far that way, sometimes too skewed this way, sometimes too skewed that way), but on average it will be a good approximation to the actual distribution. Similarly, your uncertainty estimates derived from the bootstrap distribution will be off in any particular case, but again, on average, they will be (approximately) right.
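If a concrete check helps, here is a tiny simulation of my own (uniform population, sample mean, arbitrary sizes): each individual bootstrap standard error misses the true sampling standard deviation somewhat, but their average across many repeated samples comes out close to it.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_datasets, n_boot = 30, 2000, 800    # illustrative sizes

def bootstrap_se_of_mean(sample):
    """Bootstrap standard error of the sample mean for one data set."""
    reps = np.array([sample[rng.integers(0, sample.size, size=sample.size)].mean()
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

# True sampling SD of the mean of n draws from Uniform(0, 1).
true_se = np.sqrt(1.0 / 12.0) / np.sqrt(n)

boot_ses = np.array([bootstrap_se_of_mean(rng.uniform(size=n)) for _ in range(n_datasets)])
print(f"true SE of the mean     : {true_se:.4f}")
print(f"average bootstrap SE    : {boot_ses.mean():.4f}")
print(f"spread of bootstrap SEs : {boot_ses.std(ddof=1):.4f}")
```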