Dataset columns (with observed value / string-length ranges):
idx: int64 (1 to 56k)
question: string (length 15 to 155)
answer: string (length 2 to 29.2k)
question_cut: string (length 15 to 100)
answer_cut: string (length 2 to 200)
conversation: string (length 47 to 29.3k)
conversation_cut: string (length 47 to 301)
51,501
PCA on train and test datasets: should I run one PCA on train+test or two separate on train and on test? [duplicate]
In the context of this problem, (2) makes more sense, because otherwise you may not even have the same features you are trying to classify (i.e., the reduced dimensions may mean very different things). See here for a more detailed discussion: https://stackoverflow.com/questions/10818718/principal-component-analysis
51,502
PCA on train and test datasets: should I run one PCA on train+test or two separate on train and on test? [duplicate]
(1) is incorrect, because if you run PCA on the two sets separately, you will end up with two different spaces. You cannot train a classifier in one space, and apply it to a different space. (2) is cheating. When you train a classifier, you cannot use any information from the test set. The correct way would be to run PCA on the training set, save the principal components that you use, and then use them to transform the points in your test set. This way the points in both sets end up in the same space, and you are not using any knowledge about your test set during training. Alternatively, you can use an entirely separate data set, just for computing the principal components. Then project both your training set and your test set into the space defined by those.
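A minimal scikit-learn sketch of this recommended workflow (the data and the number of components are placeholders): fit the PCA on the training set only, then project both sets with the same components.

```python
# Sketch: learn the principal components from the training data only, then
# apply the same transformation to the test data (synthetic placeholder data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))          # placeholder feature matrix
y = rng.integers(0, 2, size=500)        # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=10)
Z_train = pca.fit_transform(X_train)    # components estimated from the training set only
Z_test = pca.transform(X_test)          # test points projected into the same space
# A classifier would now be trained on Z_train and evaluated on Z_test.
```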
51,503
Probability of each of the three Christmas puddings having exactly 2 coins
Call the number of pieces in each section $A$, $B$, and $C$. Because $A+B+C=6$, you are interested in $Pr(A=2, B=2) = Pr(B=2|A=2)Pr(A=2)$. $Pr(A=2)$ is a simple binomial calculation: $A\sim Binom(6, 1/3)$, so $Pr(A=2) = {6\choose 2}(1/3)^2(2/3)^4 = 80/243$. Conditioned on $A$ having two pieces, $B\sim Binom(4, 1/2)$, so $Pr(B=2|A=2) = {4\choose 2}(1/2)^4 = 3/8$. Multiplying these together, we conclude that $Pr(A=2, B=2) = 10/81$.
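A quick numerical check of this result, using the same binomial decomposition plus a small Monte Carlo simulation (6 coins, 3 equally likely puddings, as in the answer):

```python
# Check Pr(A=2, B=2, C=2) for 6 coins dropped independently into 3 puddings.
from math import comb
import numpy as np

p_exact = comb(6, 2) * (1/3)**2 * (2/3)**4 * comb(4, 2) * (1/2)**4
print(p_exact, 10/81)                                    # both ~0.12346

rng = np.random.default_rng(1)
draws = rng.integers(0, 3, size=(200_000, 6))            # pudding index for each coin
counts = np.stack([(draws == k).sum(axis=1) for k in range(3)], axis=1)
print(((counts == 2).all(axis=1)).mean())                # Monte Carlo estimate, ~0.123
```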
51,504
Probability of each of the three Christmas puddings having exactly 2 coins
You shouldn't use the binomial distribution here, as it is a multinomial distribution problem (a generalization of the binomial). So let's gather what we have: $n = 6$ (total number of coins), $n_1 = 2$ in part 1 (pudding #1), $n_2 = 2$ in part 2 (pudding #2), $n_3 = 2$ in part 3 (pudding #3), and $p_1 = p_2 = p_3 = 1/3$ (the probability that any given coin ends up in a particular pudding). The formula is $p = \frac{n!}{n_1!\,n_2!\,n_3!}\, p_1^{n_1} p_2^{n_2} p_3^{n_3}$, so let's put the numbers in motion: $p = \frac{6!}{2!\,2!\,2!} \left(\frac{1}{3}\right)^2 \left(\frac{1}{3}\right)^2 \left(\frac{1}{3}\right)^2 \approx 0.1235$, and we get $10/81$.
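The same number can be obtained directly from SciPy's multinomial distribution (assuming SciPy is available):

```python
# Multinomial pmf for exactly (2, 2, 2) coins across the three puddings.
from scipy.stats import multinomial

p = multinomial.pmf([2, 2, 2], n=6, p=[1/3, 1/3, 1/3])
print(p)            # ~0.12346, i.e. 10/81
```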
51,505
Statistical fallacy when not controlling for variables?
You could call it omitted variable bias (although that doesn't have "fallacy" in the name). It is a form of endogeneity; closely related to the omitted variable bias / another form of endogeneity is the ecological fallacy, which does have "fallacy" in the name. For what it's worth, I'm not sure the statement as you present it ("You never see a Ferrari rust like a Honda") is legitimately a fallacy. It is simply a statement of an empirical observation (and is presumably correct). If someone concluded that Ferraris can't rust like Hondas, that would be a fallacy.
51,506
Statistical fallacy when not controlling for variables?
There isn't a "fallacy" named after confounding, as far as I know. But if someone mistakenly suggested a causal relationship (like car brand and rusting), we would call that a "spurious relationship."
51,507
Statistical fallacy when not controlling for variables?
This is not a statistical (associational) fallacy; it is a logical fallacy of a causal claim. Let's take the statement: "you never see a Ferrari rust like a Honda". Statistically this means that the observed rusting in the "population" of Ferraris is somehow different from the observed rusting in the "population" of Hondas. This might be true, and it would not be a statistical fallacy at all. The fallacy comes into play when someone uses that to infer that this observed association is caused by a specific mechanism, such as the intrinsic qualities of Ferraris or Hondas.

So, when you claim "the logical flaw is that a Honda is typically used as a daily driver through severe winters, while a Ferrari is a 2nd or 3rd car limited to sunny weekend use", what you are doing is explaining a possible causal mechanism that also leads to such an association, so the observed association cannot rule out two different causal models. Therefore, even if the association is legitimate in the population, what might not be legitimate is the causal explanation for that association.

This logical fallacy of inferring a specific causal mechanism from association is usually called "false cause". But this fallacy is just the old simple "affirming the consequent" fallacy --- the causal model that Ferraris are better than Hondas would generate the observed association, but it's a fallacy to conclude that, because the association is true, this specific causal model is true. There are several competing models that could generate the same observed association, such as your alternative explanation of how Ferraris and Hondas will have different usage patterns.

This "spurious" association can come up for several reasons, not only failing to "control" for a variable. When it's due to failure to control for a common cause, we usually call this "confounding bias". But you can actually create a non-causal association by controlling for the wrong variables: as shown in another answer, "controlling" for the type of car owned by the patient can bias the effect estimate. This is usually called "collider bias" or "selection bias". You can also have biases due to controlling for mediators, due to measurement error, and so on.
51,508
Statistical fallacy when not controlling for variables?
You may call it confounding or mediation, depending on the exact relationship between the control variables and the variables of interest.
51,509
Statistical fallacy when not controlling for variables?
"Correlation does not imply Causation". Clearly CarMake has a very strong CORRELATION with rust: CarMake=Honda often has rust, CarMake=Ferrari never has rust. But that does not mean that CarMake CAUSES rust. Instead, ConsumerDesireForLuxuryCar causes CarMake=Ferrari, and ConsumerDesireForLuxuryCar also cause TakingCareOfCar which causes NoRust. I wouldn't disagree with the other answers, but "Correlation does not imply Causation" is a very commonly used phrase in statistics and it succintly captures this situation (and many others too).
51,510
Must we do feature selection in cross validation?
One of the most challenging elements of this problem is knowing when it's OK to put unsupervised learning steps outside of the CV loop and when they should be fully penalized for by including them inside the loop. Generally speaking, unsupervised learning procedures such as principal components analysis can be unstable, i.e., the loadings of the first principal component will change when computed on a new sample. And unsupervised learning steps to exclude features, such as redundancy analysis and variable clustering, can also be unstable. But their instabilities can either hurt you or help you, i.e., may raise or lower your final $R^2$. So they don't consistently work in your favor. Overfitting in a final predictive discrimination measure such as $R^2$ or pseudo $R^2$ comes from doing things that consistently "work" in your favor, such as doing feature selection using supervised learning, whether manual or automated. So it's generally OK to keep completely unsupervised learning steps outside the CV loop as pre-processing steps, but sometimes bring them into the loop to make sure your final performance measure doesn't suffer from instability in unsupervised learning.
51,511
Must we do feature selection in cross validation?
Cross-validation is a means of estimating the performance of a method for fitting a model, rather than of the model itself, so all steps in fitting the model (including feature selection and optimising the hyper-parameters) need to be performed independently in each fold of the cross-validation procedure. If you don't do this, then you will end up with an optimistically biased performance estimate. See my (with Mrs Marsupial) paper on this topic:

GC Cawley and NLC Talbot, "On over-fitting in model selection and subsequent selection bias in performance evaluation", The Journal of Machine Learning Research 11, 2079-2107 (pdf)

I tend to use nested cross-validation to get an unbiased performance estimate, but if you don't need an unbiased performance estimate and just want to choose between competing methods (that don't have too many degrees of freedom, i.e. not feature selection!), then that often isn't necessary in practice; see Wainer and Cawley:

J Wainer and G Cawley, "Nested cross-validation when selecting classifiers is overzealous for most practical applications", Expert Systems with Applications 182, 115222 (doi:10.1016/j.eswa.2021.115222)

Once you have that performance estimate, retrain the model on the whole dataset, repeating the feature and model selection procedures once more. Also, I would advise against feature selection if the aim is to improve performance (rather than identifying the relevant features itself being the goal); using a regularised model will often perform better.
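A minimal scikit-learn sketch of what "feature selection inside the fold" looks like in practice: the selector is a pipeline step, so cross_val_score refits it on each training fold. The data here is synthetic noise, so a properly nested estimate should hover around chance.

```python
# Sketch: feature selection as a pipeline step, refit inside every CV fold
# (placeholder synthetic data; k=20 kept features is arbitrary).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))                  # many pure-noise features
y = rng.integers(0, 2, size=200)                 # labels unrelated to X

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipe, X, y, cv=5).mean())  # honest estimate, roughly 0.5 here

# Selecting features on ALL of X first and then cross-validating only the
# classifier would give an optimistically biased score on this null data.
```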
51,512
Must we do feature selection in cross validation?
The thread you mentioned already discusses this in great detail, so I'll skip the parts that were already mentioned there. Answering your question: it depends on what you mean by "one shot feature selection before CV if we do not take the response variable into account". For example, if you look at the data and discover that some of the features are very low quality (the data doesn't make sense, it is obviously wrong, or the feature is constant), then yes, you can do this outside of cross-validation. On the other hand, doing in-depth exploratory data analysis to pick the features by hand has the same effect as doing similar things algorithmically, and there is no reason why doing it "by hand" won't produce an overfitted model.
51,513
Must we do feature selection in cross validation?
In principle, if you want your CV validation scores to reflect what applying an algorithm trained in the manner you did will do on new data, then everything should be part of the cross-validation (or bootstrapping, or whatever else you do along those lines). This is of course most concerning when we need to make decisions on this basis (e.g. which of several approaches that might be affected differently by deviations from the ideal should I pick?). This ideal is not entirely achievable in practice (except when you reserve a single separate validation and/or test partition, which is however inefficient for anything but huge datasets), and exploratory data analysis is an obvious example of that.

Some violations of this ideal are worse than others. E.g. you may have to do some EDA to define what your cross-validation scheme should look like (for example, you may discover that you have multiple records per patient, and maybe for that reason you should use group K-fold instead of basic K-fold). Similarly, screening predictors for whether they have zero (or near-zero) variance seems pretty harmless. Creating features solely based on human understanding of the task (e.g. grouping together different mis-spellings of a category name) is usually also totally unproblematic.

Where you are definitely crossing the line into extreme danger is when you do target encoding (representing categories by their mean outcome); that must always be done within the cross-validation loop (or even within an additional CV loop within that). Really, most things that use the prediction target should be considered too dangerous to be done outside the CV loop. Many other things fall somewhere between these extremes. E.g. transforming predictors (e.g. standardization or PCA) or imputation of missing predictors ideally belongs in the CV loop, too, but it is less immediately obvious how much (clearly at least a little bit) this would undermine the validity of the CV evaluation. The more you deviate, though, the more important a final external validation on new data becomes (it may of course also be very important for other reasons, such as a mismatch between where the training data comes from and where the model would be used).
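As a rough illustration of fold-wise target encoding (the DataFrame and its column names are hypothetical), the category means are computed on each training fold only and then applied to the held-out fold:

```python
# Sketch: fold-wise target encoding of a categorical column "city" against a
# target "y" (made-up column names; df is assumed to be a pandas DataFrame).
import pandas as pd
from sklearn.model_selection import KFold

def target_encode_cv(df, cat_col="city", target="y", n_splits=5):
    encoded = pd.Series(index=df.index, dtype=float)
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, val_idx in kf.split(df):
        train, val = df.iloc[train_idx], df.iloc[val_idx]
        means = train.groupby(cat_col)[target].mean()    # computed on the training fold only
        fallback = train[target].mean()                  # for categories unseen in this fold
        encoded.iloc[val_idx] = val[cat_col].map(means).fillna(fallback).to_numpy()
    return encoded
```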
51,514
Exponential decay of ACF of AR(p) process
$f(h) =\phi^h$ is an exponential function.
51,515
Exponential decay of ACF of AR(p) process
The magnitude of the ACF is an exponential function in $h$: $$|\phi|^h = \exp( \log (|\phi|^h)) = \exp( h \log |\phi|).$$
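A quick numerical illustration (a simulated AR(1) with $\phi = 0.8$, chosen arbitrarily) comparing the sample ACF with $\phi^h$:

```python
# Simulate an AR(1) with phi = 0.8 and compare its sample ACF to phi**h.
import numpy as np

phi, n = 0.8, 100_000
rng = np.random.default_rng(0)
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

x = x - x.mean()
acf = np.array([np.dot(x[:n - h], x[h:]) / np.dot(x, x) for h in range(10)])
print(np.round(acf, 3))
print(np.round(phi ** np.arange(10), 3))   # theoretical exponential decay
```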
51,516
Exponential decay of ACF of AR(p) process
No: as a function of $h$, this is not a power law. It would be if it were something like $h^\phi$. That is why the autocorrelation $\phi^h$ is referred to as exponential.
51,517
'Size' of intercept at linear regression
The coefficients of each predictor are almost always going to change when you add more predictors. This is an example of the answer changing when you ask a different question.

Your software should let you fit a regression with no predictor at all. For example, if I try to predict people's weights with a regression with no predictors, then I will get the mean weight as a prediction. That will be shown as the intercept or constant. If I then add height as a predictor, the intercept $b_0$ in predicted weight $= b_0 + b_1$ height is the prediction for a hypothetical someone with zero height. (Imagine plugging in height $= 0$; then the term with coefficient $b_1$ vanishes.) The intercept reported for $b_0$ in this case is going to be way outside the data and may even be returned as a negative number. If I add an indicator, say 1 if male and 0 if female, so that the model now is predicted weight $= b_0 + b_1$ height $+\ b_2$ male, the intercept is now the prediction for a hypothetical someone with zero height and who is female (for whom male $= 0$). That will be different again, but not so much.

In general, in $\hat y = b_0 + b_1 x_1 + \cdots + b_J x_J = b_0 + \sum_{j=1}^J b_j x_j$, the intercept $b_0$ is what is predicted when all the $x_j$ (so all of $x_1$ to $x_J$) are zero. The intercept may be, in practice, an implausible or impossible value, but that makes no difference to the principle. So, as the set of $x_j$s changes, so also will the intercept.
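A small numpy illustration of these three models, using invented height/weight/sex data; note how the reported intercept changes as predictors are added:

```python
# Synthetic weight ~ height (+ sex) example; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200
male = rng.integers(0, 2, size=n)
height = rng.normal(170 + 10 * male, 7)                  # cm
weight = -60 + 0.8 * height + 5 * male + rng.normal(0, 8, n)

def fit(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])           # prepend an intercept column
    return np.linalg.lstsq(X1, y, rcond=None)[0]

print(fit(np.empty((n, 0)), weight))                 # intercept only: the mean weight
print(fit(height[:, None], weight))                  # intercept = prediction at height 0
print(fit(np.column_stack([height, male]), weight))  # at height 0 and female (male = 0)
```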
51,518
'Size' of intercept at linear regression
Nick Cox provided an excellent response and I wanted to add a more intuitive answer.

Model 1

Model 1 investigates the relationship between IQ and Brain size among subjects represented by the ones in the study, regardless of those subjects' Gender, Height and Weight. In other words, if you imagine the target population of subjects from which the subjects in the study were selected, that population includes a mixture of subjects - some may be female, some may be male, some may have a height of 5 foot 9 inches, some may have a height of 5 foot 5 inches, etc., some may have a weight of 160 lbs, some may have a weight of 120 lbs, etc. Model 1 takes all of these subjects and studies the relationship between their IQ and Brain size ignoring (or not accounting for) their Gender, Height and Weight. In other words, Model 1 mixes all of these subjects together and then studies the relationship of interest for the mixed subjects.

Model 2

Model 2 investigates the relationship between IQ and Brain size among subjects represented by the ones in the study who have the same Gender, the same Height and the same Weight. For example, Model 2 investigates the relationship between IQ and Brain size among males with a height of 5 foot 9 inches and a weight of 160 pounds; among females with a height of 5 foot 5 inches and a weight of 120 pounds; etc. Model 2 makes the assumption that the relationship between IQ and Brain size is the same for all of these population subsets defined by combinations of values of Gender, Height and Weight supported by the ones present in your data. This relationship is called an adjusted relationship, since it is adjusted for Gender, Height and Weight. In contrast, the relationship between IQ and Brain size investigated via Model 1 is an unadjusted relationship. Model 2 is selective about which subjects it considers - rather than mixing all subjects together, it focuses on subsets of subjects in the target population sharing the same Gender, the same Height and the same Weight.

Intercept Interpretation in Model 1

For Model 1, the (true) intercept represents the average value of IQ in the target population for those subjects for whom Brain size is equal to 0. Clearly, such subjects do not exist - if they did, they would be brainless.

Intercept Interpretation in Model 2

For Model 2, the (true) intercept represents the average value of IQ in the target population for those subjects for whom Brain size is equal to 0, Gender is equal to male, Height is equal to 0 inches and Weight is equal to 0 lbs. Again, such subjects do not exist.

Neither of the two intercepts has a realistic interpretation. If you mean-center the variable Brain size in Model 1 and the variables Brain size, Height and Weight in Model 2, you will get intercepts with a more realistic interpretation from the refitted models. Note, however, that the slope coefficients in the two regression models you have here are interpretable even if the intercept has no meaningful interpretation in practice.

Intercept Interpretation in Model 1 after Mean-Centering Brain size

For the revised Model 1, the (true) intercept represents the average value of IQ in the target population for those subjects with an average Brain size.

Intercept Interpretation in Model 2 after Mean-Centering Brain size, Height and Weight

For the revised Model 2, the (true) intercept represents the average value of IQ in the target population for male subjects with an average Brain size, average Height and average Weight.
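A tiny numerical sketch of the mean-centering point, with made-up IQ and brain-size numbers: after centering the predictor, the intercept is approximately the mean IQ.

```python
# Mean-centering illustration with invented IQ / brain-size data: after
# centering, the intercept is the fitted mean IQ at average predictor values.
import numpy as np

rng = np.random.default_rng(1)
n = 150
brain = rng.normal(90, 8, n)                       # "brain size", arbitrary units
iq = 60 + 0.5 * brain + rng.normal(0, 10, n)

def intercept(x, y):
    X = np.column_stack([np.ones(n), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

print(intercept(brain, iq))                 # extrapolated to brain size 0: unrealistic
print(intercept(brain - brain.mean(), iq))  # after centering: roughly mean(iq)
print(iq.mean())
```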
51,519
Can any dataset be clustered or does there need to be some sort of pattern in the data?
It seems to me there are two different primary goals one might have in clustering a dataset: (1) identifying latent groupings, and (2) data reduction.

Your question implies you have #1 in mind. As other answers have pointed out, determining whether the clustering represents 'real' latent groups is a very difficult task. There are a large number of different metrics that have been developed (see: How to validate a cluster solution?, and the section on evaluating clusterings in Wikipedia's clustering entry). None of the methods is perfect, however. It is generally accepted that the assessment of a clustering is subjective and based on expert judgment. Furthermore, it is worth considering that there may be no 'right answer' in reality. Consider the set {whale, monkey, banana}: whales and monkeys are both mammals whereas bananas are fruits, but monkeys and bananas are colocated geographically and monkeys eat bananas. Thus, either grouping could be 'right' depending on what you want to do with the clustering you've found.

But let me focus on #2. There may be no actual groupings, and you may not care. A traditionally common use of clustering in computer science is data reduction. A classic example is color quantization for image compression: the linked Python documentation demonstrates compressing "96,615 unique colors to 64, while preserving the overall appearance quality". Another classic application of clustering in computer science is to enhance the efficiency of searching a database and retrieving information.

The idea of reducing data is very counter-intuitive in a scientific context, though, because usually we want more data and richer information about what we're trying to study. But pure data reduction can occur in scientific contexts as well. Simply partitioning a homogeneous dataset (i.e., one with no actual clusters) can be used in several contexts. One example might be blocking for experimental design. Another might be identifying a small number of study units (e.g., patients) that are representative of the whole set in that they span the data space. In this way, you can get a subsample that could be studied in much greater detail (say, with structured interviews), which wouldn't be logistically possible with the full sample. The same idea can be applied to make it possible to visualize large, complex, and high-dimensional datasets. For instance, when trying to plot longitudinal data on many patients with many measurement occasions, you will typically end up with what's called a 'spaghetti plot' (due to the resulting inability to see anything of value), but it may be possible to plot a smaller number of representative patients, yielding lines that can be individually discerned but that collectively represent the data reasonably well.

Other examples are possible, but the point is that a clustering can be successful without there being any actual cluster structure at all. You simply partition the space and find a smaller and more manageable dataset that can represent the total dataset by effectively spanning the space of the full data.
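A rough scikit-learn sketch of the color-quantization idea mentioned above, using scikit-learn's bundled sample image and K-means with 64 clusters (the image choice and cluster count are just illustrative):

```python
# Color quantization: cluster the pixel colors of a sample image into 64
# representative colors and rebuild the image from the cluster centers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_sample_image

img = load_sample_image("china.jpg") / 255.0        # (h, w, 3) floats in [0, 1]
h, w, _ = img.shape
pixels = img.reshape(-1, 3)

# Fit on a random subsample of pixels for speed, then label every pixel.
subsample = pixels[np.random.default_rng(0).choice(len(pixels), 10_000, replace=False)]
kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(subsample)
labels = kmeans.predict(pixels)
compressed = kmeans.cluster_centers_[labels].reshape(h, w, 3)
# `compressed` uses only 64 distinct colors but looks very similar to `img`.
```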
51,520
Can any dataset be clustered or does there need to be some sort of pattern in the data?
Or is any set of data "clusterable"?

Yes, all data is clusterable -- even meaningless random data.

... how can we distinguish meaningful vs. non-meaningful clustering?

It depends on what you mean by "meaningful". Sometimes the clusters are useful; often they are not. You have to decide on a case-by-case basis. Successful clustering does not imply meaningful clusters. Also, even if the data has your kind of "meaningful" clusters, there is no guarantee that the algorithm will find those clusters.
51,521
Can any dataset be clustered or does there need to be some sort of pattern in the data?
Both @Ray and @Anony-Mousse have captured the ambiguity in the question by highlighting that any data set can be fed into a clustering algorithm, and that this does not imply that useful clusters will be found. To address your question from a practical point of view, you can assess the clustering tendency of a given data set to judge whether or not meaningful clusters are likely to be found.

One way to assess the clustering tendency of a data set is the Hopkins statistic, $H$, introduced by Hopkins and Skellam, recommended by Han et al. [1] and used by Banerjee and Dave [2]. A formulation of the Hopkins statistic is given in Wikipedia:

There are multiple formulations of the Hopkins statistic. A typical one is as follows. Let $X$ be the set of $n$ data points in $d$-dimensional space. Consider a random sample (without replacement) of $m \ll n$ data points with members $x_i$. Also generate a set $Y$ of $m$ uniformly randomly distributed data points. Now define two distance measures: $u_i$, the distance of $y_i \in Y$ from its nearest neighbor in $X$, and $w_i$, the distance of $x_i \in X$ from its nearest neighbor in $X$. We then define the Hopkins statistic as $$H = \frac{\sum_{i=1}^{m}u_i^d}{\sum_{i=1}^{m}u_i^d + \sum_{i=1}^{m}w_i^d}.$$

With this definition, uniform random data should tend to have values near 0.5, and clustered data should tend to have values nearer to 1. One note for interpretation: an appropriate value of $H$ should be considered "necessary but not sufficient" for meaningful clusters to exist. A non-uniform data set (e.g. one drawn from a single non-uniform distribution) does not necessarily have meaningful clusters, but it is true that a uniform data set does not have meaningful clusters.

[1] J. Han, J. Pei, and M. Kamber, Data Mining: Concepts and Techniques. Elsevier, 2011, pp. 484-486.
[2] Banerjee, Amit, and Rajesh N. Dave. "Validating clusters using the Hopkins statistic." In Fuzzy Systems, 2004. Proceedings of the 2004 IEEE International Conference on, vol. 1, pp. 149-153. IEEE, 2004.
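A sketch of the Hopkins statistic following the formulation quoted above, using scikit-learn's nearest-neighbour search; the uniform reference points are drawn from the bounding box of the data (one common convention):

```python
# Hopkins statistic as defined above: H near 0.5 for uniform data, nearer 1
# for clustered data (uniform reference sample from the data's bounding box).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(X, m=None, seed=None):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)                                # m << n
    sample = X[rng.choice(n, m, replace=False)]
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))

    nn = NearestNeighbors(n_neighbors=2).fit(X)
    u = nn.kneighbors(uniform, n_neighbors=1)[0].ravel()    # distance to nearest real point
    w = nn.kneighbors(sample, n_neighbors=2)[0][:, 1]       # skip the point itself
    return (u**d).sum() / ((u**d).sum() + (w**d).sum())

rng = np.random.default_rng(0)
print(hopkins(rng.uniform(size=(500, 2)), seed=1))                   # ~0.5
print(hopkins(np.vstack([rng.normal(0, 0.05, (250, 2)),
                         rng.normal(1, 0.05, (250, 2))]), seed=1))   # closer to 1
```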
51,522
Can any dataset be clustered or does there need to be some sort of pattern in the data?
No. Not every data set is truly clustered. Sometimes you're lucky enough to be dealing with homogeneous data, which is by definition not clustered. On the other hand, you can almost always find clusters in the data even if they're not there; think of fortune telling on tea leaves.
Can any dataset be clustered or does there need to be some sort of pattern in the data?
No. Not every data set is truly clustered. Sometimes you're lucky to deal with homogenous data, which is by definition not clustered. On the other hand, you can almost always find clusters in the data
Can any dataset be clustered or does there need to be some sort of pattern in the data? No. Not every data set is truly clustered. Sometimes you're lucky to deal with homogenous data, which is by definition not clustered. On the other hand, you can almost always find clusters in the data even if they're not there, think of fortune telling on tea leaves.
Can any dataset be clustered or does there need to be some sort of pattern in the data? No. Not every data set is truly clustered. Sometimes you're lucky to deal with homogenous data, which is by definition not clustered. On the other hand, you can almost always find clusters in the data
51,523
Can any dataset be clustered or does there need to be some sort of pattern in the data?
Yes, any dataset can be clustered: label some points 0, some points 1 at random. Voila, it is clustered (useless, but clustered). Even the "optimum" clustering of an algorithm frequently is useless. You cannot identify "usefulness" with some mathematical statistic on the input data. Instead, have a human user analyze the results.
Can any dataset be clustered or does there need to be some sort of pattern in the data?
Yes, any dataset can be clustered: label some points 0, some points 1 at random. Voila, it is clustered (useless, but clustered). Even the "optimum" clustering of an algorithm frequently is useless. Y
Can any dataset be clustered or does there need to be some sort of pattern in the data? Yes, any dataset can be clustered: label some points 0, some points 1 at random. Voila, it is clustered (useless, but clustered). Even the "optimum" clustering of an algorithm frequently is useless. You cannot identify "usefulness" with some mathematical statistic on the input data. Instead, have a human user analyze the results.
Can any dataset be clustered or does there need to be some sort of pattern in the data? Yes, any dataset can be clustered: label some points 0, some points 1 at random. Voila, it is clustered (useless, but clustered). Even the "optimum" clustering of an algorithm frequently is useless. Y
51,524
Why do we use separate priors or joint priors?
I think the correct way to phrase this is whether the priors are independent or not. The priors can always be described as (for example in your Normal example) $p(\mu, \sigma^2)$, but the question is does that joint prior factorize as $p(\mu, \sigma^2) = p(\mu)p(\sigma^2)$ or not. Once we have that phrasing in place I think it becomes a little easier to think about it. Are the parameters related in some way? Do changes in one impact another? Then you should consider a prior that includes covariance between the parameters. If not you can consider independent priors. More often than not, independent priors are considered for computational reasons.
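To make the factorization question concrete, here is a small illustrative R sketch of my own (not from the answer): draws from independent priors on $(\mu, \sigma^2)$ versus draws from a dependent joint prior of Normal-Inverse-Gamma form, where $\mu \mid \sigma^2 \sim N(0, \sigma^2/\kappa)$. The particular shape, rate, and kappa values are arbitrary choices for illustration.
set.seed(1)
n <- 1e4
# independent priors: p(mu, sigma^2) = p(mu) p(sigma^2)
sigma2_ind <- 1 / rgamma(n, shape = 2, rate = 2)   # inverse-gamma prior on sigma^2
mu_ind     <- rnorm(n, mean = 0, sd = 3)
# dependent joint prior: sigma^2 ~ inverse-gamma, mu | sigma^2 ~ N(0, sigma^2 / kappa)
kappa      <- 1
sigma2_dep <- 1 / rgamma(n, shape = 2, rate = 2)
mu_dep     <- rnorm(n, mean = 0, sd = sqrt(sigma2_dep / kappa))
cor(mu_ind, sigma2_ind)        # approximately 0: no prior dependence
cor(abs(mu_dep), sigma2_dep)   # clearly positive: large sigma^2 allows large |mu| a priori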
Why do we use separate priors or joint priors?
I think the correct way to phrase this is whether the priors are independent or not. The priors can always be described as (for example in your Normal example) $p(\mu, \sigma^2)$, but the question is
Why do we use separate priors or joint priors? I think the correct way to phrase this is whether the priors are independent or not. The priors can always be described as (for example in your Normal example) $p(\mu, \sigma^2)$, but the question is does that joint prior factorize as $p(\mu, \sigma^2) = p(\mu)p(\sigma^2)$ or not. Once we have that phrasing in place I think it becomes a little easier to think about it. Are the parameters related in some way? Do changes in one impact another? Then you should consider a prior that includes covariance between the parameters. If not you can consider independent priors. More often than not, independent priors are considered for computational reasons.
Why do we use separate priors or joint priors? I think the correct way to phrase this is whether the priors are independent or not. The priors can always be described as (for example in your Normal example) $p(\mu, \sigma^2)$, but the question is
51,525
Why do we use separate priors or joint priors?
All the priors you mention are "joint" priors in that they define a joint distribution on the parameter vector $\mathbf{\theta}=(\theta_1,\ldots,\theta_p)$. When the prior can be written as $$\prod_{i=1}^p \pi_i(\theta_i)$$ each component $\pi_i(\theta_i)$ can also be interpreted as a (marginal) prior on the component $\theta_i$ [provided all components are proper] and the components are independent a priori. Since all priors are acceptable within the Bayesian paradigm, there is no foundational reason to favour independent priors over dependent priors.
Why do we use separate priors or joint priors?
All the priors you mention are "joint" priors in that they define a joint distribution on the parameter vector $\mathbf{\theta}=(\theta_1,\ldots,\theta_p)$. When the prior writes down as $$\prod_{i=1}
Why do we use separate priors or joint priors? All the priors you mention are "joint" priors in that they define a joint distribution on the parameter vector $\mathbf{\theta}=(\theta_1,\ldots,\theta_p)$. When the prior writes down as $$\prod_{i=1}^p \pi_i(\theta_i)$$ each component $\pi_i(\theta_i)$ can also be interpreted as a (marginal) prior on the component $\theta_i$ [provided all components are proper] and the components are independent a priori. Since all priors are acceptable within the Bayesian paradigm, there is no foundational reason to favour independent priors over dependent priors.
Why do we use separate priors or joint priors? All the priors you mention are "joint" priors in that they define a joint distribution on the parameter vector $\mathbf{\theta}=(\theta_1,\ldots,\theta_p)$. When the prior writes down as $$\prod_{i=1}
51,526
Example of distribution whose support is strictly positive
You may be interested in this gallery of distributions. In addition to the gamma distribution, the lognormal distribution, the $\chi^2$ distribution, and the truncated normal distribution that have already been brought up, you could check the F distribution, the exponential distribution, the Weibull distribution, and the power lognormal distribution. All of these have defined means and variances. Pick the one you like best.
Example of distribution whose support is strictly positive
You may be interested in this gallery of distributions. In addition to the gamma distribution the lognormal distribution the $\chi^2$ distribution and the truncated normal distribution that have alr
Example of distribution whose support is strictly positive You may be interested in this gallery of distributions. In addition to the gamma distribution the lognormal distribution the $\chi^2$ distribution and the truncated normal distribution that have already been brought up, you could check the F distribution the exponential distribution the Weibull distribution the power lognormal distribution All of these have defined means and variances. Pick the one you like best.
Example of distribution whose support is strictly positive You may be interested in this gallery of distributions. In addition to the gamma distribution the lognormal distribution the $\chi^2$ distribution and the truncated normal distribution that have alr
51,527
Example of distribution whose support is strictly positive
What about the truncated normal distribution? Try for example library(truncnorm); x <- seq(0,10,by=.01); plot(x, dtruncnorm(x, a=3, b=Inf, mean = 5, sd = 1), type="l") This gives a bell-like density supported on $[3,\infty)$. By making the mean of the underlying normal distribution $\mu$ (mean in the command) larger you can make it look "almost" normal, while keeping the probability of nonpositive values at zero.
Example of distribution whose support is strictly positive
What about the truncated normal distribution? Try for example library(truncnorm) x <- seq(0,10,by=.01) plot(x,dtruncnorm(x, a=3, b=Inf, mean = 5, sd = 1),type="l") This gives By taking the mean of t
Example of distribution whose support is strictly positive What about the truncated normal distribution? Try for example library(truncnorm) x <- seq(0,10,by=.01) plot(x,dtruncnorm(x, a=3, b=Inf, mean = 5, sd = 1),type="l") This gives By taking the mean of the underlying normal distribution $\mu$ (mean in the command) larger you can make it look "almost" normal, without a nonzero probability of nonpositive values.
Example of distribution whose support is strictly positive What about the truncated normal distribution? Try for example library(truncnorm) x <- seq(0,10,by=.01) plot(x,dtruncnorm(x, a=3, b=Inf, mean = 5, sd = 1),type="l") This gives By taking the mean of t
51,528
Example of distribution whose support is strictly positive
There are infinitely many such distributions ... Consider the family of uniform distributions from $0$ to $N$ (non-inclusive of $0$), where $N$ is an arbitrary integer. Now choose any one of these, say $X_1 \sim U(0,3)$. Then the sum $X_1 + \dots + X_{10}$, where $X_1, \dots, X_{10} \overset{\text{iid}}{\sim} U(0,3)$, will be approximately normal in shape. This uniform sum distribution is, up to a scale factor, the Irwin-Hall distribution (defined as the sum of iid standard uniform variables).
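A quick illustrative R check (my own sketch, with arbitrary simulation sizes): the sum of ten $U(0,3)$ draws has mean $10 \times 1.5 = 15$ and variance $10 \times 3^2/12 = 7.5$, and its histogram sits very close to the matching normal density.
set.seed(42)
sums <- replicate(1e4, sum(runif(10, min = 0, max = 3)))   # 10,000 sums of 10 iid U(0,3) draws
hist(sums, breaks = 50, freq = FALSE, main = "Sum of 10 iid U(0,3) variables")
curve(dnorm(x, mean = 15, sd = sqrt(7.5)), add = TRUE, lwd = 2)  # normal with matching moments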
Example of distribution whose support is strictly positive
There are infinitely many such distributions ... Consider the family of uniform distributions from $0$ to $N$ (non-inclusive of $0$), where $N$ is an arbitrary integer. Now choose any one of these, s
Example of distribution whose support is strictly positive There are infinitely many such distributions ... Consider the family of uniform distributions from $0$ to $N$ (non-inclusive of $0$), where $N$ is an arbitrary integer. Now choose any one of these, say $X_1 \sim U(0,3)$. Then the sum of $X_1 + \dots + X_{10}$, where $X_1, \dots, X_{10} \overset{\text{iid}} \sim U(0,3)$ will be approximately normal in shape. This uniform sum distribution is also known as the Irwin-Hall distribution.
Example of distribution whose support is strictly positive There are infinitely many such distributions ... Consider the family of uniform distributions from $0$ to $N$ (non-inclusive of $0$), where $N$ is an arbitrary integer. Now choose any one of these, s
51,529
Example of distribution whose support is strictly positive
Two suggestions for you: (1) The non-central $\chi^2_1$ distribution, which can be obtained by squaring a $N(\mu,1)$ random variable. This easily satisfies your relation to the normal, as it always shares the parameters of some underlying normal. However, it always skews right, so while it may look sort of normal, it will never be exactly normal. (2) The distribution of the number of heads in a series of $n$ coin flips (also known as the $\text{Binomial}(n,p)$ distribution) looks more and more like the normal distribution as $n$ increases. The drawback here is that the support is always finite and includes 0; the latter can easily be remedied by just adding 1.
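An illustrative R sketch of both suggestions (sample sizes and parameter values are my own choices, not from the answer): squaring $N(5,1)$ draws gives a strictly positive, roughly bell-shaped but right-skewed sample, while a shifted Binomial(100, 0.5) gives a near-normal sample supported on $\{1,\dots,101\}$.
set.seed(1)
x1 <- rnorm(1e5, mean = 5, sd = 1)^2            # non-central chi^2_1 with ncp = 25
x2 <- rbinom(1e5, size = 100, prob = 0.5) + 1   # shifted binomial, strictly positive support
op <- par(mfrow = c(1, 2))
hist(x1, breaks = 60, main = "squared N(5, 1) draws")
hist(x2, breaks = 30, main = "Binomial(100, 0.5) + 1")
par(op)   # restore the plotting layout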
Example of distribution whose support is strictly positive
Two suggestions for you: The Non-central $\chi^2_1$ distribution which is (can be) obtained by squaring the normal distribution $N(\mu,1)$? This easily satisfies your relation to normal as it alw
Example of distribution whose support is strictly positive Two suggestions for you: The Non-central $\chi^2_1$ distribution which is (can be) obtained by squaring the normal distribution $N(\mu,1)$? This easily satisfies your relation to normal as it always shares the parameters of some underlying normal. However, it always skews right, so while it may look sort of normal, it will never be. The distribution of number of heads from a series $n$ of coin flips (also known as the $Binomial(n,p)$ distribution) looks more and more like the normal distribution as $n$ increases. The drawback here is that the support is always finite and includes 0; this latter can easily be remedied by just adding 1.
Example of distribution whose support is strictly positive Two suggestions for you: The Non-central $\chi^2_1$ distribution which is (can be) obtained by squaring the normal distribution $N(\mu,1)$? This easily satisfies your relation to normal as it alw
51,530
Is the key assumption for instrumental variables not testable?
The assumption is not correct as you stated it. The correct version is: the instrument I is independent of the outcome Y given the covariates X. This is called the exclusion restriction. If you ignore the covariates, then there should be a dependence of Y on I (otherwise either the link I -> X or the link X -> Y are missing). [Removed the rest of this answer - jabberwocky is correct]
Is the key assumption for instrumental variables not testable?
The assumption is not correct as you stated it. The correct version is: the instrument I is independent of the outcome Y given the covariates X. This is called the exclusion restriction. If you ignore
Is the key assumption for instrumental variables not testable? The assumption is not correct as you stated it. The correct version is: the instrument I is independent of the outcome Y given the covariates X. This is called the exclusion restriction. If you ignore the covariates, then there should be a dependence of Y on I (otherwise either the link I -> X or the link X -> Y are missing). [Removed the rest of this answer - jabberwocky is correct]
Is the key assumption for instrumental variables not testable? The assumption is not correct as you stated it. The correct version is: the instrument I is independent of the outcome Y given the covariates X. This is called the exclusion restriction. If you ignore
51,531
Is the key assumption for instrumental variables not testable?
In a regression like $$Y_i = \alpha + \beta X_i + \eta_i$$ where $X_i$ is the endogenous variable such that $Cov(X_i,\eta_i)\neq 0$, a "good" instrument must satisfy two conditions: (1) $Cov(X_i,Z_i)\neq 0$, meaning that the instrument must be correlated with the endogenous variable, i.e. a first stage exists; and (2) $Cov(Z_i,\eta_i)=0$, meaning that the instrument is not correlated with the outcome or any other unobserved determinant of it, so that the effect of $Z_i$ on $Y_i$ only goes through the endogenous variable $X_i$. The stronger assumption of independence between the instrument and the structural error is invoked in a particular type of model, namely the linear constant effects model. Independence also implies that the instrument and the error are uncorrelated, though the converse is not true. You can visualize the idea of the exclusion restriction in the graph below: $$\begin{matrix} Z & \rightarrow & X & \rightarrow & Y \\ & & \uparrow & \nearrow & \\ & & \eta & & \end{matrix}$$ The fundamental problem with testing the exclusion restriction is that it involves the structural error $\eta$, which is never observable. This is why you cannot formally test this restriction, neither with one nor with a thousand instruments. To motivate the exclusion restriction we therefore often need to rely on good theoretical foundations of the relationship under investigation. Having said that, you might not want to use 1000 instruments because what matters is the quality of the instruments and not the quantity. There are two distinct problems, one which relates to the inconsistency of instrumental variables methods under many instruments and, often related, the problem of having weak instruments. See for example this lecture on the topic.
Is the key assumption for instrumental variables not testable?
In a regression like $$Y_i = \alpha + \beta X_i + \eta_i$$ where $X_i$ is the endogenous variable such that $Cov(X_i,\eta_i)\neq 0$, a "good" instrument must satisfy two conditions, which are $Cov(X_
Is the key assumption for instrumental variables not testable? In a regression like $$Y_i = \alpha + \beta X_i + \eta_i$$ where $X_i$ is the endogenous variable such that $Cov(X_i,\eta_i)\neq 0$, a "good" instrument must satisfy two conditions, which are $Cov(X_i,Z_i)\neq 0$, meaning that the instrument must be correlated with the endogenous variable, i.e. a first stage exists $Cov(Z_i,\eta_i)=0$, meaning that the instrument is not correlated with the outcome or any other unobserved determinant of it such that the effect of $Z_i$ on $Y_i$ only goes through the endogenous variable $X_i$ The stronger assumption of independence between the instrument and the structural error is invoked in a particular type of model, namely the linear constant effects model. Independence also implies that the instrument and the error are uncorrelated, the converse is not true though. You can visualize the idea of the exclusion restriction in the graph below: $$\begin{matrix} Z & \rightarrow & X & \rightarrow & Y \newline & & \uparrow & \nearrow & \newline & & \eta & \end{matrix}$$ The fundamental problem with testing the exclusion restriction is that it involves the structural error $\eta$ which is never observable. This is why you cannot formally test this restriction, neither with one nor with thousand instruments. To motivate the exclusion restriction we therefore often need to rely on good theoretical foundations of the relationship under investigation. Having said that, you might not want to use 1000 instruments because what matters is the quality of the instruments and not the quantity. There are two distinct problems, one which relates to the inconsistency of instrumental variables methods under many instruments and, often related, the problem of having weak instruments. See for example this lecture on the topic.
Is the key assumption for instrumental variables not testable? In a regression like $$Y_i = \alpha + \beta X_i + \eta_i$$ where $X_i$ is the endogenous variable such that $Cov(X_i,\eta_i)\neq 0$, a "good" instrument must satisfy two conditions, which are $Cov(X_
51,532
Is the key assumption for instrumental variables not testable?
First, as others have said, the assumption as you have stated it is not correct. The standard IV model is given by the graph $Z \rightarrow X \rightarrow Y$, with an unobserved common cause of $X$ and $Y$ (a bidirected edge $X \leftrightarrow Y$). The key assumptions here are that $Z$ has no effect on $Y$ except through $X$ (exclusion restriction) and that there are no common causes between $Z$ and $Y$ (independence restriction, or unconfoundedness of $Z$). Since we are saying that $Z$ does not affect $Y$ other than through $X$, it would seem reasonable to think that we could check whether $Z$ is independent of $Y$ conditional on $X$ to test that assumption. However, conditioning on $X$ opens the colliding path $Z \rightarrow X \leftrightarrow Y$, which creates a spurious association between $Z$ and $Y$. That is, even though there is no direct effect of $Z$ on $Y$, we would still see that $Z$ is associated with $Y$ conditional on $X$. That being said, it is not completely true that the exclusion restriction assumption is not testable. Although there are no conditional independences implied by the model, if the variables are discrete, the IV model does have testable implications, in the form of inequalities. These are usually called "instrumental inequalities". To learn more, I suggest the original paper by Pearl and a recent review by Swanson and colleagues.
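To illustrate why regressing $Y$ on $X$ and $Z$ does not test the exclusion restriction, here is a small simulation sketch of my own (the coefficient values and confounder structure are arbitrary choices): $Z$ is a valid instrument by construction, yet conditioning on the collider $X$ makes $Z$ appear associated with $Y$.
set.seed(123)
n <- 1e5
u <- rnorm(n)                  # unobserved confounder of X and Y
z <- rnorm(n)                  # valid instrument: affects Y only through X
x <- 0.8 * z + u + rnorm(n)    # endogenous regressor
y <- 1.0 * x + u + rnorm(n)    # true causal effect of X on Y is 1
coef(summary(lm(y ~ x + z)))   # naive "test": coefficient on z is clearly nonzero (about -0.4)
                               # and the coefficient on x is biased upward, despite exclusion holding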
Is the key assumption for instrumental variables not testable?
First, as others have said, the assumption as you have stated is not correct. The standard IV model is given by, And the key assumptions here are that $Z$ has no effect on $Y$ except through $X$ (ex
Is the key assumption for instrumental variables not testable? First, as others have said, the assumption as you have stated is not correct. The standard IV model is given by, And the key assumptions here are that $Z$ has no effect on $Y$ except through $X$ (exclusion restriction) and that there are no common causes between $Z$ and $Y$ (independence restriction, or unconfoundedness of $Z$). Since we are saying that $Z$ does not affect $Y$ other than through $X$, it would seem reasonable to think that we could check whether $Z$ is independent of $Y$ conditional on $X$ to test that assumption. However, conditioning on $X$ opens the colliding path $Z \rightarrow X \leftrightarrow Y$, which creates a spurious association between $Z$ and $Y$. That is, even though there is no direct effect of $Z$ on $Y$, we would still see that $Z$ is associated with $Y$ conditional on $X$. That being said, it is not completely true that the exclusion restriction assumption is not testable. Although there are no conditional independences implied by the model, if the variables are discrete, the IV model does have testable implications, in the form of inequalities. These are usually called "instrumental inequalities". To learn more, I suggest the original paper by Pearl and a recent review by Swanson and colleagues.
Is the key assumption for instrumental variables not testable? First, as others have said, the assumption as you have stated is not correct. The standard IV model is given by, And the key assumptions here are that $Z$ has no effect on $Y$ except through $X$ (ex
51,533
Is the key assumption for instrumental variables not testable?
[I second Rob's clarification about revising the independence statement, but I disagree with his statements about testing the exclusion restriction.] The exclusion restriction cannot be tested. Some tests are possible if the researcher imposes additional assumptions, but as a general rule the exclusion restriction cannot be tested. The statements below are intended to be general statements. The first-stage of your IV regression is testable, sometimes called the inclusion restriction. Does your instrument (I) affect your treatment (T)? That's testable with an F-test. This is used to test whether you have a strong or a weak instrument. But you cannot test the exclusion restriction, that is, you cannot test whether the only path from I to Y runs through T (I->T->Y and not I->Y and not I->e->Y, where e is your error term). You cannot test the exclusion restriction for the same reason you are looking for an instrument in the first place: the relationship between T and Y is confounded by some error or unobservable factors. Therefore, any test of conditional independence between I and Y controlling for T would be confounded by the same error or unobservable factors. So how do you make the argument for an instrument? Arguing that there is a plausible causal pathway from your instrument (I) to your outcome (Y) requires what David Freedman calls "Shoe leather": intimate knowledge of the subject matter to develop meticulous research designs and eliminate rival explanations. That is, by using IV regression you are proposing a natural experiment. The natural experiment doesn't rely on statistical tests but rather on the assertion that you've found some as-if random process that eliminates confounding. Reference: “Statistical Models and Shoe Leather,” David Freedman, 1991.
Is the key assumption for instrumental variables not testable?
[I second Rob's clarification about revising the independence statement, but I disagree with his statements about testing the exclusion restriction.] The exclusion restriction cannot be tested. Some t
Is the key assumption for instrumental variables not testable? [I second Rob's clarification about revising the independence statement, but I disagree with his statements about testing the exclusion restriction.] The exclusion restriction cannot be tested. Some tests are possible if the researcher imposes additional assumptions, but as a general rule the exclusion restriction cannot be tested. The statements below are intended to be general statements. The first-stage of your IV regression is testable, sometimes called the inclusion restriction. Does your instrument (I) affect your treatment (T)? That's testable with an F-test. This is used to test whether you have a strong or a weak instrument. But you cannot test the exclusion restriction, that is, you cannot test whether the only path from I to Y runs through T (I->T->Y and not I->Y and not I->e->Y, where e is your error term). You cannot test the exclusion restriction for the same reason you are looking for an instrument in the first place: the relationship between T and Y is confounded by some error or unobservable factors. Therefore, any test of conditional independence between I and Y controlling for T would be confounded by the same error or unobservable factors. So how do you make the argument for an instrument? Arguing that there is plausible causal pathway from your instrument (I) to your outcome (Y) requires what David Freedman calls "Shoe leather": intimate knowledge of the subject matter to develop meticulous research designs and eliminate rival explanations. That is, by using IV regression you are proposing a natural experiment. The natural experiment doesn't rely on statistical tests but rather on the assertion that you've found some as-if random process that eliminates confounding. Reference: “Statistical Models and Shoe Leather,” David Freedman, 1991.
Is the key assumption for instrumental variables not testable? [I second Rob's clarification about revising the independence statement, but I disagree with his statements about testing the exclusion restriction.] The exclusion restriction cannot be tested. Some t
51,534
Is the key assumption for instrumental variables not testable?
Several other answers have already done a good job explaining the underlying causal assumptions of the method (I especially like Carlos's answer). As has been pointed out, there are some observable implications to the satisfaction or non-satisfaction of the underlying instrumental assumption, and thus, it is not entirely true that it cannot be tested. Moreover, by asserting the absence of other causal pathways, one can at least falsify the IV assumption empirically by conducting separate research that establishes evidence for a "back-door" causal pathway to be present. In any case, one important point to make about causal analysis hinging on instrumental variables is that the analysis is extremely sensitive to the IV assumption. The assumption itself is often dubious, and loss of that assumption is usually catastrophic for the analysis. My experience of seeing researchers use this method makes me extremely sceptical of its utility. Perhaps others have had different experiences, but in many years of seeing this technique used by economic researchers: Every application I have seen has involved possible (even plausible) "back-door" causal pathways that would falsify the IV assumption; Arguments for the IV assumption have almost always been based on theoretical argument of why there should not be any causal pathway that would falsify it (usually dubious temporal arguments), and those arguments have often been flimsy; and I have not seen any instance in which research has shown satisfying empirical evidence for the absence of "back-door" pathways that would falsify the assumption; and Most research does not examine the sensitivity of the causal conclusion to loss of the IV assumption (and it is highly sensitive to loss of this assumption). Causal analysis using instrumental variables is a clever idea in theory, but the method hinges heavily on strong causal assumptions that are usually false, and are hard or impossible to test in most applications. For all the above reasons, I tend to regard causal research hinging on the use of instrumental variables as being highly dubious. Controlled randomised experimentation remains by far the best method of causal research.
Is the key assumption for instrumental variables not testable?
Several other answers have already done a good job explaining the underlying causal assumptions of the method (I especially like Carlos's answer). As has been pointed out, there are some observable i
Is the key assumption for instrumental variables not testable? Several other answers have already done a good job explaining the underlying causal assumptions of the method (I especially like Carlos's answer). As has been pointed out, there are some observable implications to the satisfaction or non-satisfaction of the underlying instrumental assumption, and thus, it is not entirely true that it cannot be tested. Moreover, by asserting the absence of other causal pathways, one can at least falsify the IV assumption empirically by conducting separate research that establishes evidence for a "back-door" causal pathway to be present. In any case, one important point to make about causal analysis hinging on instrumental variables is that the analysis is extremely sensitive to the IV assumption. The assumption itself is often dubious, and loss of that assumption is usually catastrophic for the analysis. My experience of seeing researchers use this method makes me extremely sceptical of its utility. Perhaps others have had different experiences, but in many years of seeing this technique used by economic researchers: Every application I have seen has involved possible (even plausible) "back-door" causal pathways that would falsify the IV assumption; Arguments for the IV assumption have almost always been based on theoretical argument of why there should not be any causal pathway that would falsify it (usually dubious temporal arguments), and those arguments have often been flimsy; and I have not seen any instance in which research has shown satisfying empirical evidence for the absence of "back-door" pathways that would falsify the assumption; and Most research does not examine the sensitivity of the causal conclusion to loss of the IV assumption (and it is highly sensitive to loss of this assumption). Causal analysis using instrumental variables is a clever idea in theory, but the method hinges heavily on strong causal assumptions that are usually false, and are hard or impossible to test in most applications. For all the above reasons, I tend to regard causal research hinging on the use of instrumental variables as being highly dubious. Controlled randomised experimentation remains by far the best method of causal research.
Is the key assumption for instrumental variables not testable? Several other answers have already done a good job explaining the underlying causal assumptions of the method (I especially like Carlos's answer). As has been pointed out, there are some observable i
51,535
Is the key assumption for instrumental variables not testable?
I don’t have the reputation to add a comment to Rob’s answer - the currently accepted answer. However, describing the exclusion restriction as “Z is independent of Y given X” is not correct. Controlling for X induces spurious correlation between Z and Y as described in Carlos’s answer. In the simple one variable toy example, the exclusion restriction says Z affects Y only through X, which is not analogous to saying Z is independent of Y when conditioning on X. imo Rob’s answer should not be the accepted answer. I’ll also add another explanation: Commonly, students want to run Y = bX + aZ to test exclusion, hoping a=0. However because X is correlated with the error term, the estimate for b will be biased. Further, since X and Z are correlated (because the instrument has relevance), the estimated coefficient for a will also be biased.
Is the key assumption for instrumental variables not testable?
I don’t have the reputation to add a comment to Rob’s answer - the currently accepted answer. However, describing the exclusion restriction as “Z is independent of Y given X” is not correct. Controll
Is the key assumption for instrumental variables not testable? I don’t have the reputation to add a comment to Rob’s answer - the currently accepted answer. However, describing the exclusion restriction as “Z is independent of Y given X” is not correct. Controlling for X induces spurious correlation between Z and Y as described in Carlos’s answer. In the simple one variable toy example, the exclusion restriction says Z affects Y only through X, which is not analogous to saying Z is independent of Y when conditioning on X. imo Rob’s answer should not be the accepted answer. I’ll also add another explanation: Commonly, students want to run Y = bX + aZ to test exclusion, hoping a=0. However because X is correlated with the error term, the estimate for b will be biased. Further, since X and Z are correlated (because the instrument has relevance), the estimated coefficient for a will also be biased.
Is the key assumption for instrumental variables not testable? I don’t have the reputation to add a comment to Rob’s answer - the currently accepted answer. However, describing the exclusion restriction as “Z is independent of Y given X” is not correct. Controll
51,536
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
How about $$ x_1 = 1, P(x_1)=\frac{1}{2}, \qquad x_2 = -1, P(x_2)=\frac{1}{2} $$
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
How about $$ x_1 = 1, P(x_1)=\frac{1}{2}, \qquad x_2 = -1, P(x_2)=\frac{1}{2} $$
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? How about $$ x_1 = 1, P(x_1)=\frac{1}{2}, \qquad x_2 = -1, P(x_2)=\frac{1}{2} $$
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? How about $$ x_1 = 1, P(x_1)=\frac{1}{2}, \qquad x_2 = -1, P(x_2)=\frac{1}{2} $$
51,537
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
At least one of the $x_i$ in the support of $X$ must be negative for the mean to be zero, so long as there is non-zero probability of a positive $x_i$ occurring. Otherwise each $x_i P(X=x_i) \geq 0$, and there is at least one $i$ for which $x_i P(X=x_i) > 0$, so $\mathbb{E}(X) = \sum_{i=1}^{n} x_i P(X=x_i) > 0$. By analogy to mechanics, the mean is the "first moment" of the distribution. Imagine a light see-saw with weights corresponding to the probability masses, $P(X=x_i)$, each placed at a position $x_i$ from the zero point on the line. A negative $x_i$ would be on the left of the zero point, a positive $x_i$ on the right. If the mean is zero, that is saying the centre of mass of all these probability masses is at the zero point — so the see-saw would be perfectly balanced if we placed the pivot there. As a first example, I put: $\Pr(X=x) = \begin{cases} 2/10 & \text{if } x = -3 \\ 3/10 &\text{if } x = 0 \\ 4/10 & \text{if } x = 1 \\ 1/10 & \text{if } x = 2 \\ 0 & \text{otherwise} \end{cases}$ Can you see, physically, why the see-saw will not turn about the pivot, because its clockwise and anticlockwise moments about zero are balanced? Can you use $\mathbb{E}(X)=\sum_i x_i P(X=x_i)$ to show why $\mathbb{E}(X)=0$? Of course, there are very many different distributions I could have chosen. Any distribution which is symmetric about zero, such as the Rademacher distribution in Stephan Kolassa's answer, or the famous standard normal distribution, as an example of a continuous variable, would have worked too. But this example shows some asymmetric distributions also work. If you only allowed a probability mass at zero, and at least one probability mass on the right hand side (corresponding to a positive $x_i$), but disallowed any masses on the left hand side (negative $x_i$) then there would be a moment about zero turning it clockwise. So we could not balance the see-saw with a pivot at zero, hence zero cannot be the mean. Intuitively this is why, for a zero mean, we need to balance out the positive $x_i$ on the right with at least one negative $x_i$ on the left which can give us the counterbalancing anti-clockwise moment. Consider the random variable with the following probability mass function. It would not balance about zero (it would tip clockwise if you pivoted it there), but would balance about a pivot at $\mathbb{E}(X)=2$. Note that taking moments about zero finds the first (raw) moment, which by definition is equal to the mean. You can verify that, taking clockwise as positive, the total moment about zero here is indeed $2$. On the other hand, taking moments about the mean finds the first central moment. Now, as the distribution is balanced about the mean, the total moment is zero: this is a general result, and the first central moment, if it exists, is always zero. (The second central moment, where we square the distance from the mean before summing, is rather more interesting: it is the variance of the distribution.) \begin{array} {|c|c|c|c|} \hline x & 0 & 1 & 3 \\ \hline \Pr(X=x) & \frac{1}{7} & \frac{2}{7} & \frac{4}{7}\\ \hline \end{array} We could transform this distribution to have zero mean by moving the probability masses around. One simple way is to follow Glen_b's suggestion and define $Y=X - \mu_X$. The new mean is $\mathbb{E}(Y)=\mathbb{E}(X - \mu_X) = \mathbb{E}(X) - \mathbb{E}(\mu_X) = \mu_X - \mu_X = 0$ (using linearity of expectations, and that the expectation of a constant is the constant itself).
In our case $Y=X-2$: we translate the masses on the see-saw left by two units. After the translation, we have probability masses in both positive and negative positions, so the clockwise and anticlockwise moments about zero can cancel each other out. Note how the centre of mass can be at zero, even with no mass placed there! \begin{array} {|c|c|c|c|} \hline y & -2 & -1 & 1 \\ \hline \Pr(Y=y) & \frac{1}{7} & \frac{2}{7} & \frac{4}{7}\\ \hline \end{array} R code for see-saw plots (pivots added in MS Paint) seesawPlot <- function(values, masses, xlimits=c(-max(abs(values)), max(abs(values)))) { x <- rep(values, times=masses) y <- unlist(lapply(masses, function(i){1:i})) plot(x, y, ylim=c(0,50), xlim=xlimits, pch=19, col="blue", yaxt="n", yaxs="i", frame=F, xlab="", ylab="") } values <- c(-3, 0, 1, 2); masses <- c(2, 3, 4, 1) seesawPlot(values, masses) values <- c(0, 1, 3); masses <- c(1, 2, 4) seesawPlot(values, masses) values <- values - 2 seesawPlot(values, masses, xlimits=c(-3,3))
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
At least one of the $x_i$ in the support of $X$ must be negative for the mean to be zero, so long as there is non-zero probability of a positive $x_i$ occurring. Otherwise each $x_i P(X=x_i) \geq 0$,
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? At least one of the $x_i$ in the support of $X$ must be negative for the mean to be zero, so long as there is non-zero probability of a positive $x_i$ occurring. Otherwise each $x_i P(X=x_i) \geq 0$, and there is at least one $i$ for which $x_i P(X=x_i) > 0$, so $\mathbb{E}(X) = \sum_{i=1}^{n} x_i P(X=x_i) > 0$. By analogy to mechanics, the mean is the "first moment" of the distribution. Imagine a light see-saw with weights corresponding to the probability masses, $P(X=x_i)$, each placed at a position $x_i$ from the zero point on the line. A negative $x_i$ would be on the left of the zero point, a positive $x_i$ on the right. If the mean is zero, that is saying the centre of mass of all these probability masses is at the zero point — so the see-saw would be perfectly balanced if we placed the pivot there. In the example above, I put: $\Pr(X=x) = \begin{cases} 2/10 & \text{if } x = -3 \\ 3/10 &\text{if } x = 0 \\ 4/10 & \text{if } x = 1 \\ 1/10 & \text{if } x = 2 \\ 0 & \text{otherwise} \end{cases}$ Can you see, physically, why the see-saw will not turn about the pivot, because its clockwise and anticlockwise moments about zero are balanced? Can you use $\mathbb{E}(X)=\sum_i x_i P(X=x_i)$ to show why $\mathbb{E}(X)=0$? Of course, there are very many different distributions I could have chosen. Any distribution which is symmetric about zero, such as the Rademacher distribution in Stephan Kolassa's answer, or the famous standard normal distribution, as an example of a continuous variable, would have worked too. But this example shows some asymmetric distributions also work. If you only allowed a probability mass at zero, and at least one probability mass on the right hand side (corresponding to a positive $x_i$), but disallowed any masses on the left hand side (negative $x_i$) then there would be a moment about zero turning it clockwise. So we could not balance the see-saw with a pivot at zero, hence zero cannot be the mean. Intuitively this is why, for a zero mean, we need to balance out the positive $x_i$ on the right with at least one negative $x_i$ on the left which can give us the counterbalancing anti-clockwise moment. Consider the random variable with the following probability mass function. It would not balance about zero (it would tip clockwise if you pivoted it there), but would balance about a pivot at $\mathbb{E}(X)=2$. Note that taking moments about zero finds the first (raw) moment, which by definition is equal to the mean. You can verify that, taking clockwise as positive, the total moment about zero here is indeed $2$. On the other hand, taking moments about the mean finds the first central moment. Now as the distribution balanced about the mean, the total moment is zero: this is a general result, and the first central moment, if it exists, is always zero. (The second central moment, where we square the distance from the mean before summing, is rather more interesting: it is the variance of the distribution.) \begin{array} {|c|c|c|c|} \hline x & 0 & 1 & 3 \\ \hline \Pr(X=x) & \frac{1}{7} & \frac{2}{7} & \frac{4}{7}\\ \hline \end{array} We could transform this distribution to have zero mean by moving the probability masses around. One simple way is to follow Glen_b's suggestion and define $Y=X - \mu_X$. 
The new mean is $\mathbb{E}(Y)=\mathbb{E}(X - \mu_X) = \mathbb{E}(X) - \mathbb{E}(\mu_X) = \mu_X - \mu_X = 0$ (using linearity of expectations, and that the expectation of a constant is the constant itself). In our case $Y=X-2$: we translate the masses on the see-saw left by two units. After the translation, we have probability masses in both positive and negative positions, so the clockwise and anticlockwise moments about zero can cancel each other out. Note how the centre of mass can be at zero, even with no mass placed there! \begin{array} {|c|c|c|c|} \hline y & -2 & -1 & 1 \\ \hline \Pr(Y=y) & \frac{1}{7} & \frac{2}{7} & \frac{4}{7}\\ \hline \end{array} R code for see-saw plots (pivots added in MS Paint) seesawPlot <- function(values, masses, xlimits=c(-max(abs(values)), max(abs(values)))) { x <- rep(values, times=masses) y <- unlist(lapply(masses, function(i){1:i})) plot(x, y, ylim=c(0,50), xlim=xlimits, pch=19, col="blue", yaxt="n", yaxs="i", frame=F, xlab="", ylab="") } values <- c(-3, 0, 1, 2); masses <- c(2, 3, 4, 1) seesawPlot(values, masses) values <- c(0, 1, 3); masses <- c(1, 2, 4) seesawPlot(values, masses) values <- values - 2 seesawPlot(values, masses, xlimits=c(-3,3))
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? At least one of the $x_i$ in the support of $X$ must be negative for the mean to be zero, so long as there is non-zero probability of a positive $x_i$ occurring. Otherwise each $x_i P(X=x_i) \geq 0$,
51,538
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
But I do not understand in which case $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) You can construct any number of such distributions. e.g. let $Y\sim F_Y$ for some distribution function $F_Y$ with mean $\mu$, in which not all values lie at the mean -- then some will lie above it and some will lie below it. Let $X=Y-\mu$. Then $E(X)=0$. ($\text{Var}(X)$ and indeed all central moments will be unaffected by the shift.) The mass that was below the mean will now lie below 0, and the mass above it will lie above zero. After pondering the question more closely, I wonder if the question here: $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) is actually "how can the expectation be a value that's not observable"? The answer to that follows directly from the definition of expectation. It's analogous to asking "how can the center of mass of the Pluto-Charon system lie outside either body?" (The same is true of the earth-moon system.) The center of mass can be in a place where there's no mass at all. It is no more astonishing for probability mass (or probability density) than it is for physical mass. Handwaving aside, the answer eventually falls back to "because it's defined that way". If I take a fair six-sided die, the average outcome is 3.5 even though you can never observe it in a single outcome (and most unfair (loaded) six-sided dice also have unobservable means). There's nothing magical about it - the long term average from many tosses can be a value you can't see in a single toss. Indeed, it's acceptable for it to be a value you can't observe in any finite number of tosses (the expectation can be irrational).
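A one-line R illustration of the long-run-average point made above (the simulation size is an arbitrary choice of mine):
set.seed(7)
mean(sample(1:6, 1e6, replace = TRUE))   # close to 3.5, a value no single toss can ever show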
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)?
But I do not understand in which case $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) You can construct any number of such distributions. e.g. let $Y\sim F_Y$ for some distribution f
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? But I do not understand in which case $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) You can construct any number of such distributions. e.g. let $Y\sim F_Y$ for some distribution function $F_Y$ with mean $\mu$, in which not all values lie at the mean -- then some will lie above it and some will lie below it. Let $X=Y-\mu$. Then $E(X)=0$. ($\text{Var}(X)$ and indeed all central moments will be unaffected by the shift.) The mass that was below the mean will now lie below 0, and the mass above it will lie above zero. After pondering the question more closely, I wonder if the question here: $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) is actually "how can the expectation be a value that's not observable"? The answer to that follows directly from the definition of expectation. It's analogous to asking "how can the center of mass of the Pluto-Charon system lie outside either body?" (The same is true of the earth-moon system.) The center of mass can be in a place where there's no mass at all. It is no more astonishing for probability mass (or probability density) than it is for physical mass. Handwaving aside, the answer eventually falls back to "because it's defined that way". If I take a fair six-sided die, the average outcome is 3.5 even though you can never observe it in a single outcome (and most unfair (loaded) six-sided dice also have unobservable means). There's nothing magical about it - the long term average from many tosses can be a value you can't see in a single toss. Indeed, it's acceptable for it to be a value you can't observe in any finite number of tosses (the expectation can be irrational).
In which case $\mathbb E[X]=\sum _ix_i P[x_i]$ can be $0$ when all $x$'s are not zero ($0$)? But I do not understand in which case $E[X]=∑_ix_iP[x_i]$ can be $0$ when all $x$'s are not zero ($0$) You can construct any number of such distributions. e.g. let $Y\sim F_Y$ for some distribution f
51,539
When is the median more affected by sampling error than the mean?
Imagine that a variable takes values 0 and 1, each with probability 0.5. Sample from that distribution and most of the medians will be 0 or 1 and very few exactly 0.5. The means will vary far less. The mean is much more stable in this circumstance. Here is a sample graph of results. The plots are quantile plots, i.e. ordered values versus plotting position, a modified cumulative probability. The results are for 10,000 bootstrap samples from 1000 values, 500 each 0 and 1. The means range fortuitously but nicely from 0.436 to 0.564 with standard error 0.016. The medians are as said, with standard error 0.493. (Closed-form results are no doubt possible here too, but a graph makes the point vivid for all.) But that is exceptional. It illustrates the least favourable case for medians, a symmetric bimodal distribution such that the median is likely to flip between different halves of the data. Symmetric bimodal distributions are not especially common, but watch out for so-called U-shaped distributions in which the extremes are most common and intermediate values uncommon. Distributions that are unimodal, or in which the number of modes has only a small effect on median or mean, are more common. As advised by every treatment of robust statistics, a very common situation is that your data come with tails heavier than Gaussian and/or with outliers, and in those circumstances the median will almost always be more robust. The point is that that is not a universal general result. All that said, what relevance is a general result? You can at a minimum establish by bootstrapping the relative variability of mean and median for your own data. That's what you care about.
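A sketch in R of the bootstrap comparison described above (my own code, written to match the stated setup of 1000 values, 500 each of 0 and 1, with 10,000 bootstrap samples):
set.seed(2024)
x <- rep(c(0, 1), each = 500)                 # 1000 values, half 0 and half 1
boot <- replicate(1e4, {
  s <- sample(x, replace = TRUE)              # one bootstrap resample
  c(mean = mean(s), median = median(s))
})
apply(boot, 1, sd)   # bootstrap SEs: roughly 0.016 for the mean, roughly 0.5 for the median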
When is the median more affected by sampling error than the mean?
Imagine that a variable takes values 0 and 1 with probability both 0.5. Sample from that distribution and most of the medians will be 0 or 1 and a very few exactly 0.5. The means will vary far less. T
When is the median more affected by sampling error than the mean? Imagine that a variable takes values 0 and 1 with probability both 0.5. Sample from that distribution and most of the medians will be 0 or 1 and a very few exactly 0.5. The means will vary far less. The mean is much more stable in this circumstance. Here is a sample graph of results. The plots are quantile plots, i.e. ordered values versus plotting position, a modified cumulative probability. The results are for 10,000 bootstrap samples from 1000 values, 500 each 0 and 1. The means range fortuitously but nicely from 0.436 to 0.564 with standard error 0.016. The medians are as said, with standard error 0.493. (Closed-form results are no doubt possible here too, but a graph makes the point vivid for all.) But that is exceptional. It illustrates the least favourable case for medians, a symmetric bimodal distribution such that the median is likely to flip between different halves of the data. However, symmetric bimodal distributions are not especially common, but watch out for so-called U-shaped distributions in which the extremes are most common and intermediate values uncommon. Distributions that are unimodal, or in which the number of modes has only a small effect on median or mean, are more common. As advised by every treatment of robust statistics, a very common situation is that your data come with tails heavier than Gaussian and/or with outliers, and in those circumstances median will almost always be more robust. The point is that that is not a universal general result. All that said, what relevance is a general result? You can at a minimum establish by bootstrapping the relative variability of mean and median for your own data. That's what you care about.
When is the median more affected by sampling error than the mean? Imagine that a variable takes values 0 and 1 with probability both 0.5. Sample from that distribution and most of the medians will be 0 or 1 and a very few exactly 0.5. The means will vary far less. T
51,540
When is the median more affected by sampling error than the mean?
Where did you hear this? The usual reason for preferring the median is that it is less affected by extreme values than the mean. However, it is in general less sensitive to changes in the data. I ran a tiny example in R set.seed(1234) true <- rnorm(1000) smallerror <- true + rnorm(1000,0,.1) largeerror <- true + rnorm(1000, 0, 1) bias <- true + rnorm(1000,1, .5) mean(true) - mean(smallerror) quantile(true, .5) - quantile(smallerror, .5) mean(true) - mean(largeerror) quantile(true, .5) - quantile(largeerror, .5) In this particular case, the mean was more affected than the median.
When is the median more affected by sampling error than the mean?
Where did you hear this? The usual reason for preferring the median is that it is less affected by extreme values than the mean. However, it is in general less sensitive to changes in the data. I ran
When is the median more affected by sampling error than the mean? Where did you hear this? The usual reason for preferring the median is that it is less affected by extreme values than the mean. However, it is in general less sensitive to changes in the data. I ran a tiny example in R set.seed(1234) true <- rnorm(1000) smallerror <- true + rnorm(1000,0,.1) largeerror <- true + rnorm(1000, 0, 1) bias <- true + rnorm(1000,1, .5) mean(true) - mean(smallerror) quantile(true, .5) - quantile(smallerror, .5) mean(true) - mean(largeerror) quantile(true, .5) - quantile(largeerror, .5) In this particular case, the mean was more affected than the median.
When is the median more affected by sampling error than the mean? Where did you hear this? The usual reason for preferring the median is that it is less affected by extreme values than the mean. However, it is in general less sensitive to changes in the data. I ran
51,541
Wolfram Mathematica, MATLAB or something else?
Wolfram Mathematica is a very capable software for doing statistics, and unlike Matlab, its statistical functionality is included in the core Mathematica. Unlike R or Matlab it provides symbolic support for probability computations. You can peruse Probability & Statistics guide page to get an idea about the functionality. Mathematica is often faster than R for statistical data crunching. Mathematica has superior statistical visualization functionality. See Statistical Visualization guide page on the web.
Wolfram Mathematica, MATLAB or something else?
Wolfram Mathematica is a very capable software for doing statistics, and unlike Matlab, its statistical functionality is included in the core Mathematica. Unlike R or Matlab it provides symbolic sup
Wolfram Mathematica, MATLAB or something else? Wolfram Mathematica is a very capable software for doing statistics, and unlike Matlab, its statistical functionality is included in the core Mathematica. Unlike R or Matlab it provides symbolic support for probability computations. You can peruse Probability & Statistics guide page to get an idea about the functionality. Mathematica is often faster than R for statistical data crunching. Mathematica has superior statistical visualization functionality. See Statistical Visualization guide page on the web.
Wolfram Mathematica, MATLAB or something else? Wolfram Mathematica is a very capable software for doing statistics, and unlike Matlab, its statistical functionality is included in the core Mathematica. Unlike R or Matlab it provides symbolic sup
51,542
Wolfram Mathematica, MATLAB or something else?
For statistics I would recommend R, a software environment built for statistical computing. Here you can find many tutorials for R.
Wolfram Mathematica, MATLAB or something else?
For statistics I would recommend R - computation environment concerning statistics. Here you can find many tutorials for R.
Wolfram Mathematica, MATLAB or something else? For statistics I would recommend R - computation environment concerning statistics. Here you can find many tutorials for R.
Wolfram Mathematica, MATLAB or something else? For statistics I would recommend R - computation environment concerning statistics. Here you can find many tutorials for R.
51,543
Wolfram Mathematica, MATLAB or something else?
There are specific tools out there for statistical analysis. For example, R, SPSS, and Minitab. Nevertheless, both Mathematica and MATLAB are capable of doing the required computations; which is "better" is a matter of taste and application requirements, most of the time. MATLAB doesn't ship by default with many statistical subroutines. However, it does have a Statistics toolbox that contains much of the functionality found by default in some other packages. It all depends on what exactly you're trying to do, statistics-wise, and what your comfort level with programming is. Maybe you can elaborate more on what you're attempting. (Of course, the man-hours spent re-programming an algorithm for, say, ANOVA are often costlier than just buying the license to the toolbox that has a stock routine outright). Finally, to answer your question, I personally prefer MATLAB. I find that it is easier to program in and allows for easier rapid development and testing of algorithms, etc. Syntactically, I prefer MATLAB. MATLAB likes to keep things close to pseudo-code. Mathematica tries to keep things closer to mathematical notation. You can find MATLAB tutorials all over the internet. Just google "MATLAB tutorial" and you'll have a slew of results.
51,544
Wolfram Mathematica, MATLAB or something else?
Some people would say that Matlab is for numerical matrix calculations and simulations, and Mathematica for symbolic calculation, but nowadays each can do both. And we should add another contender, Maple. In my opinion Matlab is too big and needs too many resources; you should only choose it if your company or school forces you to, or if you need some rare simulation tool. If not, you should choose Mathematica or Maple. They are also very good with graphics and numeric matrices, and both have simulation tools. But they have something much better than Matlab: symbolic and abstract mathematics. If you can't afford commercial software or you like free software, you can try R, for statistics and a little bit more.
51,545
Logistic regression with LBFGS solver
Here is an example of logistic regression estimation using the limited memory BFGS [L-BFGS] optimization algorithm. I will be using the optimx function from the optimx library in R, and SciPy's scipy.optimize.fmin_l_bfgs_b in Python.

Python

The example that I am using is from Sheather (2009, pg. 264). The following Python code shows estimation of the logistic regression using the BFGS algorithm:

# load required libraries
import numpy as np
import scipy as sp
import scipy.optimize
import pandas as pd
import os

# hyperlink to data location
urlSheatherData = "http://www.stat.tamu.edu/~sheather/book/docs/datasets/MichelinNY.csv"
# read in the data to a NumPy array
arrSheatherData = np.asarray(pd.read_csv(urlSheatherData))

# slice the data to get the dependent variable
vY = arrSheatherData[:, 0].astype('float64')
# slice the data to get the matrix of predictor variables
mX = np.asarray(arrSheatherData[:, 2:]).astype('float64')

# add an intercept to the predictor variables
intercept = np.ones(mX.shape[0]).reshape(mX.shape[0], 1)
mX = np.concatenate((intercept, mX), axis = 1)

# the number of variables and observations
iK = mX.shape[1]
iN = mX.shape[0]

# logistic transformation
def logit(mX, vBeta):
    return (np.exp(np.dot(mX, vBeta)) / (1.0 + np.exp(np.dot(mX, vBeta))))

# stable parametrisation of the cost function
def logLikelihoodLogitStable(vBeta, mX, vY):
    return (-(np.sum(vY * (np.dot(mX, vBeta) -
                           np.log(1.0 + np.exp(np.dot(mX, vBeta)))) +
                     (1 - vY) * (-np.log(1.0 + np.exp(np.dot(mX, vBeta)))))))

# score function
def likelihoodScore(vBeta, mX, vY):
    return (np.dot(mX.T, (logit(mX, vBeta) - vY)))

#====================================================================
# optimize to get the MLE using the BFGS optimizer (numerical derivatives)
#====================================================================
optimLogitBFGS = sp.optimize.minimize(logLikelihoodLogitStable,
                                      x0 = np.array([10, 0.5, 0.1, -0.3, 0.1]),
                                      args = (mX, vY), method = 'BFGS',
                                      options = {'gtol': 1e-3, 'disp': True})
print(optimLogitBFGS)   # print the results of the optimisation

And this can easily be adapted to the scipy.optimize.fmin_l_bfgs_b function:

#====================================================================
# optimize to get the MLE using the L-BFGS optimizer (analytical derivatives)
#====================================================================
optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(logLikelihoodLogitStable,
                                            x0 = np.array([10, 0.5, 0.1, -0.3, 0.1]),
                                            args = (mX, vY),
                                            fprime = likelihoodScore,
                                            pgtol = 1e-3, disp = True)
print(optimLogitLBFGS)  # print the results of the optimisation

R

Using the L-BFGS-B optimizer in R is just as simple. First the version with the BFGS algorithm:

library(optimx)

# read in the data
urlSheatherData = "http://www.stat.tamu.edu/~sheather/book/docs/datasets/MichelinNY.csv"
dfSheatherData = as.data.frame(read.csv(urlSheatherData, header = T))

# create the design matrices
vY = as.matrix(dfSheatherData['InMichelin'])
mX = as.matrix(dfSheatherData[c('Service', 'Decor', 'Food', 'Price')])

# add an intercept to the predictor variables
mX = cbind(rep(1, nrow(mX)), mX)

# the number of variables and observations
iK = ncol(mX)
iN = nrow(mX)

# define the logistic transformation
logit = function(mX, vBeta) {
  return(exp(mX %*% vBeta) / (1 + exp(mX %*% vBeta)))
}

# stable parametrisation of the log-likelihood function
# Note: the negative of the log-likelihood is returned, since we will be
# /minimising/ the function.
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  return(-sum(vY * (mX %*% vBeta - log(1 + exp(mX %*% vBeta))) +
              (1 - vY) * (-log(1 + exp(mX %*% vBeta)))))
}

# score function
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY))
}

# initial set of parameters (arbitrary starting values)
vBeta0 = c(10, -0.1, -0.3, 0.001, 0.01)

#====================================================================
# optimize to get the MLE using the BFGS optimizer (numerical derivatives)
#====================================================================
optimLogitBFGS = optim(vBeta0, logLikelihoodLogitStable,
                       mX = mX, vY = vY, method = 'BFGS', hessian = TRUE)
optimLogitBFGS   # the results of the optimisation

and then the version with L-BFGS-B from the optimx package:

#====================================================================
# optimize to get the MLE using the L-BFGS optimizer (analytical derivatives)
#====================================================================
optimLogitLBFGS = optimx(vBeta0, logLikelihoodLogitStable,
                         method = 'L-BFGS-B', gr = likelihoodScore,
                         mX = mX, vY = vY, hessian = TRUE)
summary(optimLogitLBFGS)
51,546
Logistic regression with LBFGS solver
If you're worrying about memory I guess you're either working with embedded hardware or expecting to have a big model. I'm going to guess that it's the latter and that you have a high-dimensional text or bioinformatics classification problem of some sort. If so, you should consider Mallet's Java implementation, since that plugs into their relevant logistic regression (a.k.a. maxent) models most easily. L-BFGS as a standalone algorithm is available in Java, Python, C and Fortran implementations, handily linked from the L-BFGS Wikipedia page. The Python (SciPy) version will presumably be of most interest to you. Applying this to a logistic regression model is relatively straightforward, except perhaps for the part where you choose a regulariser. Full disclosure: I do not use SciPy. In logistic regression applications, fancy regularisation and a limited-memory optimisation process, while conceptually separate, are often needed together due to the nature of the problem. Hence there's some reason to choose a library that bundles the two together in a sensible manner.
51,547
Logistic regression with LBFGS solver
The Apache Spark compute engine is open source and has great performance on very large datasets. As of version 1.2 (I think) from 2014, Spark MLlib supports LogisticRegressionWithLBFGS. The API has bindings for Python, Scala and Java. It uses feature scaling and L2 regularization by default, unlike the glm method in R. There is an explanation with example code at Linear Methods - MLlib - Spark Documentation. The documentation license is CC BY-SA 3.0 US, so here is a snippet (note that this particular snippet from the docs demonstrates LinearRegressionWithSGD rather than the logistic model, but the training pattern on an RDD of LabeledPoint is the same):

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD
from numpy import array

# Load and parse the data
def parsePoint(line):
    values = [float(x) for x in line.replace(',', ' ').split(' ')]
    return LabeledPoint(values[0], values[1:])

data = sc.textFile("data/mllib/ridge-data/lpsa.data")
parsedData = data.map(parsePoint)

# Build the model
model = LinearRegressionWithSGD.train(parsedData)

# Evaluate the model on training data
valuesAndPreds = parsedData.map(lambda p: (p.label, model.predict(p.features)))
MSE = valuesAndPreds.map(lambda vp: (vp[0] - vp[1])**2).reduce(lambda x, y: x + y) / valuesAndPreds.count()
print("Mean Squared Error = " + str(MSE))
51,548
Logistic regression with LBFGS solver
Sk-learn has an excellent logistic regression implementation. It's just a wrapper around LIBLINEAR, but LIBLINEAR is state-of-the-art, and although it doesn't use LBFGS it uses something else called dual coordinate descent, which according to this paper is even better in many situations. An alternative, supposedly Python-friendly, implementation that does include LBFGS is Le Zhang's Maximum Entropy Toolkit, although I haven't used it yet.
51,549
Logistic regression with LBFGS solver
From here, http://www.kazanovaforanalytics.com/download.html, you can download a .jar with an implementation of logistic regression via the Newton-Raphson method that minimizes the -2 log-likelihood. A comprehensive example can be found here: http://www.kazanovaforanalytics.com/example_classes.txt. It is provided under the Apache licence 2.0, so you can include it in commercial applications.
51,550
How can I integrate R with PHP? [closed]
Here is the easiest way to do it that I found. This implementation of PHP and R consists of only two files: one written in PHP, and the other an R script. The PHP returns a form which uses the GET method to send a variable N to the server. When the form is submitted, the PHP will then execute an R script from the shell using a combination of the PHP command exec() and the Rscript shell command. This command will pass the variable N to the R script. The R script will then execute and save a histogram plot of N normally distributed values to the filesystem. Finally, when the R script is complete, the PHP will return the HTML tag containing the saved image's path.

First, the PHP file:

<?php
// poorman.php
echo "<form action='poorman.php' method='get'>";
echo "Number of values to generate: <input type='text' name='N' />";
echo "<input type='submit' />";
echo "</form>";

if (isset($_GET['N'])) {
    $N = $_GET['N'];
    // execute R script from shell
    // this will save a plot at temp.png to the filesystem
    exec("Rscript my_rscript.R $N");
    // return image tag
    $nocache = rand();
    echo("<img src='temp.png?$nocache' />");
}
?>

and the R script:

# my_rscript.R
args <- commandArgs(TRUE)
N <- args[1]
x <- rnorm(N, 0, 1)
png(filename = "temp.png", width = 500, height = 500)
hist(x, col = "lightblue")
dev.off()

Here are some more you are welcome to try:
http://danpolant.com/r-integration-with-php/
http://steve-chen.net/document/r/r_php
51,551
How can I integrate R with PHP? [closed]
If you ever think of switching to Linux, the best way would be to use RApache, which is an Apache module that embeds an R interpreter (mod_R) in the web server.
51,552
How can I integrate R with PHP? [closed]
If you are looking for a way of executing chunks of R code from PHP, here is a library that might help: https://github.com/kachkaev/php-r

use Kachkaev\PHPR\RCore;
use Kachkaev\PHPR\Engine\CommandLineREngine;

$r = new RCore(new CommandLineREngine('/usr/bin/R'));
$result = $r->run('1 + 1');
echo $result;

This will output:

> 1 + 1
[1] 2
51,553
Is Spearman's correlation coefficient usable to compare distributions?
For measuring the bin frequencies of two distributions, a pretty good test is the chi-square test. It is exactly what it is designed for, and it is even nonparametric: the distributions don't even have to be normal or symmetric. It is much better than the Kolmogorov-Smirnov test, which is known to be weak in fitting the tails of the distribution, where the fitting or diagnosing is often the most important. Spearman's correlation won't be so precise in terms of capturing the similarities of your actual bin frequencies; it will just tell you that your overall rankings of observations for the two distributions are similar. Instead, when calculating the chi-square test (long hand, so to speak) you will be able to observe readily which bin-frequency differences are most responsible for driving down the overall p-value of the chi-square test. Another pretty good test is the Anderson-Darling test. It is one of the best tests to diagnose the fit between two distributions. However, in terms of giving information about the specific bin frequencies, I suspect that the chi-square test gives you more information.
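As a minimal sketch of this in R (the bin counts below are hypothetical; the second distribution is treated as the reference whose relative frequencies supply the expected proportions, and both histograms are assumed to use the same bins):

# hypothetical bin counts for the two distributions (same bins, same order)
countsA <- c(120, 300, 450, 280, 100)
countsB <- c(150, 280, 400, 300, 120)

# treat distribution B as the reference: its relative frequencies become
# the expected proportions for the counts observed in distribution A
testAB <- chisq.test(x = countsA, p = countsB / sum(countsB))

testAB$p.value    # overall p-value of the goodness-of-fit comparison
testAB$residuals  # Pearson residuals show which bins drive the discrepancy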
51,554
Is Spearman's correlation coefficient usable to compare distributions?
Rather use the Kolmogorov–Smirnov test, which is exactly what you need. The R function ks.test implements it. Also check this question.
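A minimal illustration with simulated data (the two vectors here stand in for the raw observations behind the two histograms):

set.seed(1)
x <- rnorm(500)              # sample from the first distribution
y <- rnorm(500, mean = 0.2)  # sample from the second distribution
ks.test(x, y)                # two-sample Kolmogorov-Smirnov test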
51,555
Is Spearman's correlation coefficient usable to compare distributions?
The Baumgartner-Weiss-Schindler statistic is a modern alternative to the K-S test, and appears to be more powerful in certain situations. A few links:

A Nonparametric Test for the General Two-Sample Problem (the original B.W.S. paper)
M. Neuhauser, 'Exact Tests Based on the Baumgartner-Weiss-Schindler Statistic--A Survey', Statistical Papers, Vol 46 (2005), pp. 1-30. (perhaps not relevant to your large sample case...)
H. Murakami, 'K-Sample Rank Test Based on Modified Baumgartner Statistic and its Power Comparison', J. Jpn. Comp. Statist., Vol 19 (2006), pp. 1-13.
M. Neuhauser, 'One-Sided Two-Sample and Trend Tests Based on a Modified Baumgartner-Weiss-Schindler Statistic', J. Nonparametric Statistics, Vol 13 (2001), pp. 729-739.

Edit: in the years since I posted this answer, I have implemented the BWS test in R in the BWStest package. Use is as simple as:

require(BWStest)
set.seed(12345)
# under the null:
x <- rnorm(200)
y <- rnorm(200)
hval <- bws_test(x, y)
51,556
How does an ideal prior distribution needs a probability mass on zero to reduce variance, and have fat tails to reduce bias?
The MAP estimator can have non-zero probability mass at a point (even if the posterior distribution is always continuous)

The linked article is actually a bit misleading on this point, since even under the stipulated model all the relevant distributions are still continuous, so there is still zero probability mass at the point $\beta=0$. This is typical in penalised regression models, so it is misleading to describe things in the way the author has done. The issue here is really about the difference between properties of a posterior distribution, versus properties of a point estimator formed by taking the posterior mode (called the MAP estimator).

In regard to your first query, you should note that probability mass refers to actual probability ---not probability density--- so if a random variable has any continuous distribution then it has zero probability mass at any single point. This is true of the normal distribution, just as with other continuous distributions. The stipulated model in the linked post also uses a continuous prior distribution for the coefficient parameter in the regression.

The real issue here (which is obscured by the misleading language of the linked post) is that the estimator $\hat{\beta}$ used in penalised regression analysis can have a non-zero probability of being zero even when you use a continuous prior distribution for the true coefficient parameter. To see this, we first note that the estimator is obtained by maximising the penalised log-likelihood (the log-likelihood minus the penalty), which is equivalent to a MAP estimator (see this related answer for how these two approaches link to each other). Under certain specifications of the penalty function (equivalently, the prior in Bayesian analysis) the MAP estimator has a non-zero probability of being equal to zero. In other words, it is possible to have:

$$\mathbb{P}(\beta=0) = 0 \quad \quad \quad \quad \quad \mathbb{P}(\hat{\beta} = 0) > 0.$$

This possibility may seem a bit subtle and it requires some explanation. A continuous prior leads to a continuous posterior in the regression analysis, so there is zero probability mass a posteriori at any given point in the parameter space. However, although every parameter value has zero probability mass a posteriori, the mode of the posterior (which is used as the point estimator) may be the same under a wide enough set of sampling outcomes that it has a non-zero probability of falling at a given point. This is a common occurrence in penalised regression analysis.
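For a concrete instance of this (a sketch in the simplest possible setting: a single observation $z \sim N(\beta, 1)$ with a Laplace prior, i.e. the lasso penalty $\lambda|\beta|$), the MAP estimator is the soft-thresholding rule

$$\hat{\beta}(z) = \operatorname{sign}(z)\,\max(|z| - \lambda,\, 0),$$

so $\hat{\beta} = 0$ whenever $|z| \leq \lambda$, an event with positive probability, even though the prior and posterior distributions of $\beta$ are both continuous.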
51,557
How does an ideal prior distribution needs a probability mass on zero to reduce variance, and have fat tails to reduce bias?
The idea is that you want your regularisation procedure to set small parameter estimates to zero and leave large estimates unchanged. Now, lasso does zero out small estimates (ridge doesn't even do that), but both lasso and ridge shrink large estimates towards zero, which is a significant source of bias in the two procedures. For some intuition about why fat-tailed priors tend to leave large values relatively untouched, consider the ultimate fat-tailed prior, which is the improper flat prior $\pi(\beta) \propto 1$. In this case, the regression estimates are the usual least-squares estimates and so are completely unbiased. As for the probability mass question, both the normal and Laplace/double-exponential distributions have zero probability mass at zero in the sense that $\Pr_\pi(\beta = 0) = 0$. The advantage of having non-zero prior mass at zero, $\Pr_\pi(\beta = 0) = p > 0$, is that this allows the posterior distribution of $\beta$ to have a positive probability of being zero, and so the posterior estimates of the regression coefficients are likely to have zeroed-out components. This reduces variance, since very small coefficient estimates are likely to just be fitted to noise. Again, for intuition, we consider the limiting case, where $p = 1$ and so $\pi$ puts all probability mass at $0$. Now, the posterior estimate is always $0$, and so has zero variance.
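To make the contrast concrete, here is a sketch for the orthonormal-design case, where the penalised estimators have closed forms in terms of the least-squares estimate $\hat{\beta}^{OLS}_j$:

$$\hat{\beta}^{\text{ridge}}_j = \frac{\hat{\beta}^{OLS}_j}{1+\lambda}, \qquad \hat{\beta}^{\text{lasso}}_j = \operatorname{sign}\left(\hat{\beta}^{OLS}_j\right)\left(\left|\hat{\beta}^{OLS}_j\right| - \lambda\right)_+ .$$

Ridge shrinks every coefficient proportionally, so large coefficients suffer a large absolute bias and nothing is set exactly to zero; the lasso zeroes out small coefficients but still shifts large ones by the constant $\lambda$. A prior with a point mass at zero and fat tails aims to keep the zeroing behaviour while removing that shift for clearly large coefficients.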
51,558
How does an ideal prior distribution needs a probability mass on zero to reduce variance, and have fat tails to reduce bias?
Probability mass at zero

How does the normal distribution have a zero probability mass at zero? The normal distribution has a non-zero density at zero, but the probability (mass) there is zero: $P[X=0] = 0$. By placing a probability mass at zero, the prior expresses more strongly the belief that a parameter is probably zero. That is helpful in a setting where many regressors are included, of which we believe most are not truly in the model.

Fatter tails

By having fatter tails we allow a few of the parameters to more easily take large values. In ridge and lasso the penalty does not only keep out the 'unwanted' overfitting of noise due to too many parameters, it also shrinks the 'correct' model parameters: the estimated parameter values with penalization are smaller than the unbiased ordinary least squares estimates.

What's the difference and which is better?

So you can see this prior as a more extreme variant of lasso in comparison to ridge, placing even more focus on parameter selection and less on regularising by shrinking parameters. Note that the one is not necessarily better than the other. Shrinkage is not always unwanted and regularisation is not all about parameter selection. They just place a different focus.

The horseshoe

The name "horseshoe" came from the shape of the distribution of the shrinkage weight $\kappa$ obtained when the prior is re-parametrised in terms of it. What is this 'shrinkage weight'? It relates to the use of the following prior model:

$$\begin{array}{} \beta_i &\sim& N(0,\tau \lambda_i) \\ \tau &\sim& \text{Half-Cauchy}(0,\tau_0) \\ \lambda_i &\sim& \text{Half-Cauchy}(0,1) \end{array}$$

or in reparameterized form

$$\begin{array}{} \beta_i &\sim& N\left(0,\tau\sqrt{\kappa_i^{-1}-1}\right) \\ \tau &\sim& \text{Half-Cauchy}(0,\tau_0) \\ \kappa_i &\sim& \text{Beta}\left(\frac{1}{2},\frac{1}{2}\right) \end{array}$$

The relationship between the beta distribution and the half-Cauchy distribution (whose square is F-distributed) can be seen when we rewrite the reparameterization as $\lambda^2 = \frac{1-\kappa}{\kappa}$, which resembles the transformation between the F-distribution and the beta distribution written in several places (e.g. Wikipedia here). This $\kappa_i$ controls the scale of the prior $N\left(0,\tau\sqrt{\kappa_i^{-1}-1}\right)$ and makes it either a point mass when $\kappa = 1$ or a heavy-tailed distribution when $\kappa = 0$.
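A small simulation sketch (using the common convention $\kappa_i = 1/(1+\lambda_i^2)$ for the shrinkage weight, which is consistent with $\lambda^2 = (1-\kappa)/\kappa$ above) shows where the name comes from: the implied distribution of $\kappa$ is Beta(1/2, 1/2), whose U-shaped, horseshoe-like density puts most of its mass near 0 (essentially no shrinkage) and near 1 (essentially complete shrinkage):

set.seed(42)
lambda <- abs(rcauchy(1e5))    # local scales lambda_i ~ Half-Cauchy(0, 1)
kappa  <- 1 / (1 + lambda^2)   # implied shrinkage weights
hist(kappa, breaks = 50, freq = FALSE,
     main = "Implied distribution of the shrinkage weight",
     xlab = expression(kappa))
curve(dbeta(x, 0.5, 0.5), add = TRUE, lwd = 2)  # Beta(1/2, 1/2) density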
51,559
Should the t-statistic (not data) be normally distributed for using the t-test?
Quoting (the complete sentence) from Wikipedia: It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known (typically, the scaling term is unknown and therefore a nuisance parameter). This sentence is true. Indeed, as stated in Dave's answer, the $t$-statistic follows Student's $t$ when the population variance is unknown. However, when the latter is known and you use it in place of the sample variance, the implied statistic is actually a $z$-statistic, i.e. it has distribution $N(0,1)$. On the other hand, if the data do not come from the normal distribution and the true distribution is not too weird, the $t$-test is still useful; see this answer by Stephan Kolassa.
51,560
Should the t-statistic (not data) be normally distributed for using the t-test?
The $t$-statistic should be $t$-distributed. This is guaranteed by math when the data are $iid$ normal, which is the legendary normality assumption. However, the t-stats often have close to the correct distribution, especially in large sample sizes, even when the data violate the normality assumption. That is, the usual t-test is fairly robust to violations of the normality assumption, and it is not so ridiculous to t-test data that clearly did not come from normal distributions (though there may be even better approaches).
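A quick simulation sketch of that robustness claim, using exponential data (clearly non-normal) and testing the true mean so that the null hypothesis holds:

set.seed(123)
n <- 50
# p-values of one-sample t-tests on skewed (exponential) data under the null
pval <- replicate(10000, t.test(rexp(n, rate = 1), mu = 1)$p.value)
mean(pval < 0.05)  # rejection rate; close to the nominal 0.05 despite non-normality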
51,561
Should the t-statistic (not data) be normally distributed for using the t-test?
The full statement from Wikipedia is necessary to establish the proper context (emphasis mine): A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known (typically, the scaling term is unknown and therefore a nuisance parameter). The first sentence tells us that the test statistic follows a particular distribution, which is not actually normal. The second sentence, which is the part you quoted, tells us that this "Student's t-distribution" the statistic follows, obeys a specific property; namely, that if the scale parameter were known (rather than estimated), the corresponding statistic would be normally distributed. Indeed, the purpose of the second sentence is to introduce the reader to the motivation behind the development of this test, since it is a natural question to ask how to perform statistical inference for a population mean when the variability in that population is unknown. As it is intuitive to estimate that variability from the observed data, the question of how the usual $z$-statistic $$Z \mid H_0 = \frac{\bar x - \mu_0}{\sigma/\sqrt{n}}$$ is distributed when $\sigma$ is replaced by the sample standard deviation $s$, follows readily. So in a sense, the Student's $t$-test has the aforementioned property by construction: the test is what it is because it arose from considering what happens to a $z$-test when $\sigma$ is unknown. It is important to understand that because of this relationship, the assumptions that underlie the $t$-test are inherited from those from the $z$-test. For instance, the observations are assumed independent and identically distributed realizations from a normal distribution; the mean of this distribution is fixed but unknown; and the variance is fixed and known (in the case of the $z$-test). When this distributional assumption is satisfied, the $z$-statistic is exactly normal; consequently, the $t$-statistic is exactly $t$-distributed under the same assumptions. What many students misunderstand (and I have pointed this out previously), and what has been nicely addressed in other answers to your question, is the robustness of these statistics to deviations from the normality assumption in relation to the sample size. Such deviations do not necessarily invalidate the test because when the sample size is sufficiently large, the Central Limit Theorem implies the sample mean will be approximately normal. But robustness is not a statement about the actual distribution the statistic follows, and it is also not immediately pertinent to the above quote from Wikipedia.
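A small simulation sketch of exactly this construction, contrasting the statistic that plugs in the known $\sigma$ with the one that estimates it by $s$:

set.seed(1)
n <- 10; mu0 <- 0; sigma <- 1
zstat <- replicate(20000, {x <- rnorm(n, mu0, sigma); (mean(x) - mu0) / (sigma / sqrt(n))})
tstat <- replicate(20000, {x <- rnorm(n, mu0, sigma); (mean(x) - mu0) / (sd(x) / sqrt(n))})
# tail probabilities beyond 2: the t-statistic has the heavier tails
c(mean(abs(zstat) > 2), 2 * pnorm(2, lower.tail = FALSE))          # matches the normal tail
c(mean(abs(tstat) > 2), 2 * pt(2, df = n - 1, lower.tail = FALSE)) # matches the t tail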
51,562
Chi squared test with reasonable sample size results in R warning
This warning is due to an error by Pearson, the inventor of the test, who wrongly estimated that P-values would not be accurate if an expected cell frequency were less than 5. See this.
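A quick sketch of what triggers the warning, and one way to sidestep the approximation question, using made-up counts and null probabilities: chisq.test() warns whenever an expected count falls below 5, and its simulate.p.value argument produces a Monte Carlo p-value that does not rely on the chi-squared approximation at all.

    x <- c(8, 1, 1)                      # hypothetical counts: some expected cells fall below 5
    p <- c(0.50, 0.25, 0.25)             # hypothetical null probabilities
    chisq.test(x, p = p)                 # emits "Chi-squared approximation may be incorrect"
    chisq.test(x, p = p, simulate.p.value = TRUE, B = 20000)   # Monte Carlo p-value instead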
51,563
Chi squared test with reasonable sample size results in R warning
As a supplement to the answer by @Frank Harrell, here are the observed and expected frequencies and so-called Pearson residuals, (observed $-$ expected) / sqrt(expected). The name of the latter is generous to Pearson, but honours the fact that the chi-square statistic can be regarded as the sum of such residuals squared.

    +--------------------------------------------+
    |      observed     expected     residual    |
    | 1    1527.000     1238.037        8.212    |
    | 2       0.000       93.631       -9.676    |
    | 3       0.000        3.595       -1.896    |
    | 4       0.000       63.091       -7.943    |
    | 5      17.000       85.259       -7.392    |
    | 6       0.000       38.261       -6.186    |
    | 7       0.000       17.375       -4.168    |
    | 8       0.000        0.015       -0.122    |
    | 9     834.000      838.735       -0.163    |
    +--------------------------------------------+

In practical terms the chi-square test rejects the null overwhelmingly, but the warning presumably arises from one extremely low expected frequency and one moderately low expected frequency. The pattern of the residuals should be as or more instructive than the P-value.
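The same numbers can be reproduced in R, since chisq.test() returns the expected counts and the Pearson residuals as components of its result; the null probabilities below are recovered from the expected counts in the table, which is an assumption about how they were originally specified.

    obs <- c(1527, 0, 0, 0, 17, 0, 0, 0, 834)
    expected <- c(1238.037, 93.631, 3.595, 63.091, 85.259,
                  38.261, 17.375, 0.015, 838.735)
    res <- chisq.test(obs, p = expected / sum(expected), rescale.p = TRUE)
    cbind(observed = res$observed,
          expected = res$expected,
          residual = res$residuals)      # residuals = (observed - expected)/sqrt(expected)
    res$p.value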
51,564
How do you learn labels with unsupervised learning?
Normally, you don't (and you shouldn't believe everything someone writes somewhere on the internet). What the writer probably meant (at least that's my interpretation) is that you can use clustering to identify the clusters, declare each cluster to be a class of its own, and use these "classes" to learn class boundaries or other rules for "classifying" new data.

This approach, however, is likely to suffer from severe generalisation issues, if it works at all. If the true classes overlap, clustering won't be able to identify them and the clusters will not correspond to the classes. Even if the clusters/classes are well separated, the lack of true labels will prevent you from tuning hyperparameters and ensuring good generalisation. So, it is a theoretically possible concept, but unlikely to work in practice.

I also stumbled over the preceding sentence in the blog you quoted: An income prediction task can be regression if we output raw numbers, but if we quantize the income into different brackets and predict the bracket, it becomes a classification problem.

Again, it is theoretically possible, but not a recommended approach. By treating income prediction as a classification task we ignore (lose information about) the similarity between different "classes". The bracket [20,000 - 30,000] is closer to the bracket [30,000 - 40,000] than to [150,000 - 200,000]. Classification wouldn't take this into account. See my answer here for more details.
51,565
How do you learn labels with unsupervised learning?
This pops up a lot when labelling your full data set is expensive and time consuming. A simple example would be labelling product reviews into buckets such as:

Price Related
Shipping Related
Quality Related

What we may do is label a small fraction of our dataset, and then we can cluster the word vectors, do kNN with them, or do some analysis to pull out keywords to then label the rest (although technically not unsupervised, this is the easiest to explain). For example, the word 'price' pops up mostly for reviews about price (unsurprisingly). So, if we see that word we can just label the review as price-related and let the machine learn the label, hopefully generalizing better than just mapping keywords to labels (it usually does). Alternatively, with clustering we would hope that reviews with the word 'price' would get lumped in with the other price-labelled reviews. Obviously, this approach will add error over labelling everything, but it can definitely get you closer to your end goal. This type of approach is called 'semi-supervised' learning.
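As a toy illustration of the keyword step (the reviews and keywords are invented), weak labels from a simple keyword match can seed a training set for a proper classifier later:

    reviews <- c("arrived late and the box was damaged in shipping",
                 "great quality for the price",
                 "the price was far too high",
                 "stitching fell apart after a week")
    weak_label <- ifelse(grepl("price", reviews), "price",
                  ifelse(grepl("ship", reviews),  "shipping",
                  ifelse(grepl("quality|fell apart", reviews), "quality", "other")))
    data.frame(reviews, weak_label)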
51,566
How do you learn labels with unsupervised learning?
Unsupervised methods usually assign data points to clusters, which could be considered algorithmically generated labels. We don't "learn" labels in the sense that there is some true target label we want to identify, but rather create labels and assign them to the data. An unsupervised clustering will identify natural groups in the data, and you can interpret those groups to come up with meaningful labels instead of "Cluster 1", "Cluster 2", etc. - perhaps a patient cluster represents some aspect of biology, or some group of transactions represents fraud. The clustering assigns arbitrary categorical "labels" which can be further analyzed to discern whether they represent true, meaningful classes in your data. If you have a useful clustering, you can then use those labels in a supervised manner to train a classifier. Rather than clustering every patient or transaction dataset and hoping to find the same clusters, you can train a classifier to use cluster-discriminative gene signatures or fraud profiles in order to directly assign the "labels" you discovered through unsupervised clustering to new data.
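A minimal sketch of the cluster-then-classify idea in R (using the built-in iris measurements purely as stand-in features; the species column is ignored): k-means supplies the "labels", and a classifier is then trained on them so that new observations can be assigned to the discovered groups without re-clustering.

    library(MASS)                                    # for lda()
    feats <- scale(iris[, 1:4])                      # stand-in feature matrix
    cl <- kmeans(feats, centers = 3, nstart = 25)    # unsupervised step: invent labels
    d <- data.frame(feats, cluster = factor(cl$cluster))
    fit <- lda(cluster ~ ., data = d)                # supervised step on the invented labels
    new_obs <- d[1:5, ]                              # pretend these rows are new data
    predict(fit, newdata = new_obs)$class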
51,567
Uniform posterior on bounded space vs unbounded space
It is not possible to have a flat (uniform) probability distribution on an unbounded space, so in particular it's not possible to have a flat posterior distribution.

If you had a uniform probability density on the entire real line, you would need a function $f(x)$ that integrated to 1 (to be a probability density) but was constant. That's not possible: any constant function integrates to 0 or infinity. Similarly, if you had a uniform distribution on an infinite set of integers, you'd need the probability mass function $p(n)$ to be equal for all $n$ and add to 1. It can't; if $p(n)$ is equal for all $n$ it must add to zero or infinity. Analogous problems occur for more complicated spaces where it's meaningful to talk about a distribution being 'flat'.

On a bounded finite-dimensional space, it is possible to have a constant function that integrates to 1, and so a probability distribution can be flat. The Dirichlet distribution, for example, is defined on an $n$-dimensional triangle whose area $$\mathrm{B}(\boldsymbol{\alpha})=\frac{\prod_{i=1}^{K} \Gamma\left(\alpha_{i}\right)}{\Gamma\left(\sum_{i=1}^{K} \alpha_{i}\right)}$$ (evaluated at $\alpha_1=\dots=\alpha_K=1$) is finite, so any constant function has a finite integral, and the constant function $$f(\boldsymbol{x})=1/\mathrm{B}(1,\ldots,1)$$ integrates to 1. The probability distribution for New Zealand Lotto is over the set of six-number sequences with values from 1 to 40, so there are only finitely many of them, and you can put equal probability on each one ($p(x)=1/3838380$) and have it add up to 1.

So, given that, the real question is how flat prior distributions make sense. It turns out that you can often put a constant function into Bayes' Rule in place of the prior density and get a genuine distribution out as the posterior. It makes sense, then, to think of that posterior as belonging to a 'flat prior' even if there is no such thing. Also, the posterior you get for a 'flat prior', when there is one, is often the same as the limit of the posteriors you'd get for more and more spread out genuine priors [I don't know if this is always true or just often true].

So, for example, if you have $X_i\sim N(\mu,1)$ data and a $\mu\sim N(0,\omega^2)$ prior, the posterior is Normal with mean $$\frac{n\bar X_n}{n+\omega^{-2}}$$ and variance $1/(n+\omega^{-2})$. If you let $\omega$ increase, the prior gets more and more spread out and the posterior gets closer and closer to $N(\bar X_n, 1/n)$, which is also what you'd get with a 'flat prior'. Sometimes, though, using a 'flat prior' doesn't give a genuine probability distribution for the posterior, in which case it doesn't really make sense.
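The limiting behaviour in the last paragraph is easy to check numerically; the sketch below just plugs hypothetical values of $n$ and $\bar X_n$ into the posterior formulas for increasingly diffuse priors.

    n <- 20; xbar <- 1.3                       # hypothetical sample size and sample mean
    for (omega in c(1, 10, 100, 1000)) {
      post_mean <- n * xbar / (n + omega^-2)
      post_var  <- 1 / (n + omega^-2)
      cat(sprintf("omega = %6g   mean = %.6f   var = %.6f\n", omega, post_mean, post_var))
    }
    c(flat_prior_mean = xbar, flat_prior_var = 1 / n)   # the 'flat prior' limit N(xbar, 1/n)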
51,568
Uniform posterior on bounded space vs unbounded space
Strictly speaking, the question is imprecise in that it does not specify the reference measure. If the reference measure is $\text{d}\mu(x)=e^{-x^2}\text{d}\lambda(x)$, where $\lambda$ is the Lebesgue measure, a posterior with a flat density is valid.

Assuming however that using a "flat prior" means having a constant density with respect to the Lebesgue measure, Thomas Lumley's answer clearly explains why Bayesian inference is impossible with such a "posterior". This is not a probability density and hence the posterior is simply not defined. There is no way to compute posterior expectations or even posterior probabilities, since the posterior mass of the entire space is infinite. No inference can be carried out on a parameter space of infinite volume under such a "posterior". More generally, any posterior integrating to infinity is not acceptable for Bayesian inference, for the very same reason that it cannot be turned into a probability density.

As a marginal note, and as discussed in an earlier X validated entry, the maximum entropy prior $$\arg\max_p \left\{-\int p(x) \log p(x)\, \text{d}\lambda(x)\right\}$$ is defined in terms of a dominating measure $\text{d}\lambda$. There is no absolute or unique measure of entropy in continuous spaces.
51,569
Is there any non-Gaussian distribution that has skewness 0 and kurtosis 3?
The discrete distribution with probabilities \begin{align} p(-2)&=1/12\\ p(-1)&=\ 1/6\\ p(0)&=\ 1/2\\ p(1)&= \ 1/6\\ p(2)&=1/12 \end{align} has the same mean, variance, skewness and kurtosis as the Gaussian.
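A quick numerical check of this claim in R, computing the first four standardized moments directly from the probabilities above:

    x <- c(-2, -1, 0, 1, 2)
    p <- c(1, 2, 6, 2, 1) / 12
    m  <- sum(p * x)                         # mean: 0
    v  <- sum(p * (x - m)^2)                 # variance: 1
    sk <- sum(p * (x - m)^3) / v^(3/2)       # skewness: 0
    ku <- sum(p * (x - m)^4) / v^2           # kurtosis: 3
    c(mean = m, variance = v, skewness = sk, kurtosis = ku)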
51,570
Is there any non-Gaussian distribution that has skewness 0 and kurtosis 3?
Note that in terms of cumulants $\kappa_n$, $n\ge 1$, one has $$ {\rm Mean}=\kappa_1 $$ $$ {\rm Variance}=\kappa_2 $$ $$ {\rm Skewness}=\frac{\kappa_3}{\kappa_{2}^{\frac{3}{2}}} $$ $$ {\rm Kurtosis}=3+\frac{\kappa_4}{\kappa_2^2} $$

My understanding of the OP's question is whether there are RVs other than the $N(0,1)$ which satisfy $\kappa_1=0,\kappa_2=1,\kappa_3=0,\kappa_4=0$. This is the truncated Hamburger moment problem for the sequence of moments $$ (m_0,m_1,m_2,m_3,m_4)=(1,0,1,0,3)\ . $$ The corresponding Hankel matrix is $$ H_2=\left( \begin{array}{ccc} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 3 \end{array} \right) $$ which is nonsingular and thus has Hankel rank $3=n+1$ for a sequence of moments $(m_0,\ldots, m_{2n})$. This implies that there are infinitely many atomic measures (with $n+1$ atoms) that have the same moments up to order four as the standard Gaussian. So the answer to the OP's question is yes.

The above is a particular case of a result by Curto and Fialkow in Houston J. Math. 1991. A good account is in chapter 9 of the book "The Moment Problem" by Konrad Schmüdgen.

A perhaps more interesting question is under what extra condition the answer becomes no, i.e., one has uniqueness. Being in a fixed Wiener chaos comes to mind because of the fourth moment theorem. There is a similar result by Newman for RVs that satisfy the Lee-Yang theorem, as in ferromagnetic spin systems (see Theorem 3 in this article). Approaches to triviality of the $\phi^4$ model in dimension $\ge 4$ also use the fourth moment to show the Gaussian property.
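The nonsingularity claim is a one-line check in R, and the five-point distribution in the earlier answer is one explicit atomic measure of the kind this argument guarantees:

    H <- matrix(c(1, 0, 1,
                  0, 1, 0,
                  1, 0, 3), nrow = 3, byrow = TRUE)
    det(H)    # 2, so H_2 is nonsingular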
51,571
Is there any non-Gaussian distribution that has skewness 0 and kurtosis 3?
Yes, there is. The Laplace$(0,\frac{1}{2})$ distribution has the required properties. Its probability density function is given by $f(x)=\exp(-2\lvert x\rvert)$ over the real line.
51,572
Is there any non-Gaussian distribution that has skewness 0 and kurtosis 3?
Skewness and kurtosis, contrary to popular belief, are not uniquely defined concepts. There are more general definitions of skewness and kurtosis (for example, via convex transformations of random variables), and the definitions based on the third and fourth moments are just one of them. In fact, the moment-based definitions are not even defined for all distributions: the Cauchy distribution, a clearly symmetric distribution, has undefined skewness and kurtosis.
51,573
Birthday Problem: How am I wrong? [duplicate]
I think the logic is wrong, since the probability of two persons having different birthdays is not independent of whether they have different birthdays from all the others.

A simple example is the birthday paradox for A, B and C not having birthdays on the same weekday. Each pairwise probability of a match is 1/7 in a vacuum. But suppose A had a birthday on a Monday and neither B nor C shares a birthday with A. Then the probability that B and C share a birthday, given that neither shares one with A, is 1/6 rather than 1/7, so the pairwise events are dependent.

The logic you should apply is the following. Let the persons enter one by one and stop the experiment if two have the same birthday.

Person 1 enters, so can't have the same birthday as anyone else.
Person 2 enters, so there is a 1/365 chance that she has the same birthday as person 1. If so, the experiment stops; otherwise the number of days taken goes up to 2.
Person 3 enters, so there is a 2/365 chance that he has the same birthday as either person 1 or 2.

Now the pattern is clear. The probability of $k$ ($k < 366$) persons all having different birthdays is: $$ P(k)=\prod^{k}_{i=1}\left[1-\frac{i-1}{365}\right]=\prod^{k}_{i=1}\frac{366-i}{365}=\frac{365!}{(365-k)!\,365^k}$$
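The product is simple to evaluate in R; for example, the probability of at least one shared birthday first exceeds 1/2 at $k = 23$:

    p_distinct <- function(k) prod((365 - 0:(k - 1)) / 365)   # P(all k birthdays differ)
    p_shared   <- function(k) 1 - p_distinct(k)
    sapply(21:25, p_shared)     # crosses 0.5 between k = 22 and k = 23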
51,574
Birthday Problem: How am I wrong? [duplicate]
Notice that your answer is never equal to $1$ regardless of how high $n$ is. However, obviously if $n=366$ then there must be two people with the same birthday. So basically, the correct answer captures the fact that, for everyone to have a different birthday, you begin running out of dates the more people there are in the room.
51,575
Birthday Problem: How am I wrong? [duplicate]
If you have three days $x$, $y$, $z$, the events $x\ne y$, $x\ne z$, and $y\ne z$ are not independent, but you treat them as such.
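A small numeric illustration with weekdays instead of days of the year (toy numbers, not from the original question) makes the dependence visible: treating the pairwise "no match" events as independent gives the wrong answer.

    (6 / 7)^3            # approx 0.630: pretending the three pairwise events are independent
    (6 / 7) * (5 / 7)    # 30/49, approx 0.612: correct P(all three weekdays distinct),
                         # since the second person must avoid 1 day and the third must avoid 2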
51,576
Is time of the day (predictor in regression) a categorical or a continuous variable?
It is neither. Actually, it is what you make it to be in your model formula, there are more than two possibilities, and there is not necessarily one correct answer among them!

If you make it categorical, then your model will have a separate, independent coefficient (or more precisely, degree of freedom) for each hour of the day. This could be too many variables to fit with your limited available data, in which case you could divide the day into halves or quarters instead of 24ths, which is what hours do.

If you make the hours variable numeric, your model will have an effect with magnitude proportional to the hour. You might want to think twice about that: it will cause a discontinuity between 11pm and midnight (23 and 0), which is not realistic for most situations (unless you have a process that is accumulating through the day and getting reset every midnight).

Consider instead fitting a periodic formula like $$y \sim A \sin(2\pi h/24) + B \cos(2\pi h/24)$$ where $h$ is the hour (numeric, not categorical) and $A$, $B$ are the fit coefficients. This is just one of many possible periodic functions, all of which will have no discontinuity. If a smooth, periodic function $f(h)$ is desired, one especially appealing option could be to find the best such curve using Generalized Additive Modeling (GAM) and cyclic regression splines. GAM is fully nonparametric for univariate functionals, automatically searching a (potentially) infinite-dimensional space of smooth, periodic functions for the one that best describes your data.

The key takeaway here is that numeric vs categorical is better thought of as a modeling choice, not a property of the data, and there are many modeling choices besides just those two. You have to consider your situation and try to find the most appropriate one.
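For concreteness, a sketch in R with simulated hourly data (the effect size and noise level are invented): the first fit is the sine/cosine formula above, and the second is the cyclic-spline GAM alternative via the mgcv package.

    set.seed(1)
    h <- rep(0:23, times = 10)                                   # hypothetical hourly grid
    y <- 5 + 2 * sin(2 * pi * h / 24) + rnorm(length(h), sd = 0.5)

    fit_trig <- lm(y ~ sin(2 * pi * h / 24) + cos(2 * pi * h / 24))
    coef(fit_trig)                      # A and B estimates; no jump between hour 23 and 0

    library(mgcv)                       # cyclic cubic regression spline: smooth and periodic
    fit_gam <- gam(y ~ s(h, bs = "cc", k = 10), knots = list(h = c(0, 24)))
    summary(fit_gam)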
51,577
Is time of the day (predictor in regression) a categorical or a continuous variable?
Well, if you include it in levels $0, \ldots, 23$, then what would be the interpretation of $\hat{\beta}_{\text{time of day}}$? You would be including ordinal information, yet the coding is essentially arbitrary. You could change the value of 23 (11 PM) to 512 and it would still hold the same meaning. This is unlike (say) height, where 23cm implies something very different from 510cm. Therefore dummies are the way to go; you need some form of coding scheme. Most software programs have a very easy way of dealing with dummies, for instance as.factor in R.
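In R the two codings differ only in a call to as.factor(); a toy data frame (invented numbers) shows that the numeric coding buys you a single slope, while the factor coding buys you one coefficient per hour relative to a baseline.

    df <- data.frame(hour = rep(0:23, times = 5), y = rnorm(120))   # invented data
    fit_num <- lm(y ~ hour, data = df)               # treats hour 23 as "23 times" hour 1
    fit_cat <- lm(y ~ as.factor(hour), data = df)    # one dummy per hour (baseline = hour 0)
    c(numeric = length(coef(fit_num)), categorical = length(coef(fit_cat)))   # 2 vs 24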
51,578
Is time of the day (predictor in regression) a categorical or a continuous variable?
It depends on how you interpret the variable, but I would be inclined to say continuous, since it is ordered and there is a natural, consistent separation between the values that can be assumed (1 hr between consecutive values). A continuous example would be if your response is the location of an object in freefall and your predictor is time and/or the square of time. A somewhat contrived categorical example would be optical character recognition, where each time point really corresponds to the character representation of that time point.
51,579
What is this equation (pictured) called?
On the linked page it introduces the equation as, "[t]he probability Q that a $\chi^2$ value calculated for an experiment with $d$ degrees of freedom... is due to chance". This suggests it is a version of the chi-squared distribution's CDF. Moreover, it looks a lot like the chi-squared distribution's pdf listed on the Wikipedia page, but with the integral added.

Recognize that any distribution's CDF (cumulative distribution function) is the integral of its pdf (probability density function). If you were to draw what people think of as the 'shape' of a distribution, you are typically drawing the pdf. (The Wikipedia page shows several chi-squared pdfs, one curve per value of the degrees of freedom.) Integrating over one of those curves means that you take the height of the line at every point from a lower bound (possibly as low as $0$) to an upper bound (possibly as high as $\infty$) and add them up. In the case of your equation, you have an integral that goes from the observed chi-squared value to infinity.

A defining feature of a pdf is that it must integrate (add up) to $1$. But the expression inside the integral does not necessarily add up to $1$. We can get out of this problem by dividing by the total, as any number divided by itself is $1$. Notice that the bracketed expression is raised to the power of $-1$; thus, you are dividing the integral by the bracketed expression. From this we can deduce that the bracketed expression is the total (or would be, if you integrated over the entire range from $0$ to $\infty$). So this calculation is giving you the proportion of the chi-squared distribution that is to the right of / $\ge$ the observed chi-squared value. Namely, it is giving the $p$-value.

At this point I must state that the quote from the linked page that I pasted in above is incorrect. It actually gives a pernicious misunderstanding / myth about $p$-values. It states that the equation gives you the probability an experimental value is due to chance. This is false. Instead, this calculation gives you the probability a value drawn from this distribution would be that large or larger. You do not know whether your observed value was drawn from this (null) distribution or not, and the $p$-value is definitely not the probability that the null hypothesis is true. To get a clearer understanding of $p$-values, it may help to read this excellent CV thread: What is the meaning of p values and t values in statistical tests?
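You can confirm the reading numerically in R: integrating the normalised integrand from the observed value to infinity gives the same number as the upper tail of the chi-squared CDF (the degrees of freedom and observed value below are arbitrary).

    d <- 3; x_obs <- 7.8                     # arbitrary degrees of freedom and observed value
    integrand <- function(t) t^(d / 2 - 1) * exp(-t / 2) / (2^(d / 2) * gamma(d / 2))
    Q_by_integration <- integrate(integrand, lower = x_obs, upper = Inf)$value
    Q_by_cdf         <- pchisq(x_obs, df = d, lower.tail = FALSE)
    c(Q_by_integration, Q_by_cdf)            # both approx 0.0503: the p-value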
51,580
What is this equation (pictured) called?
The brackets are for grouping; they're just parentheses here. This is $1 - \operatorname{CDF}_{\chi^2}(x; d)$, evaluated at the observed value $x = \chi^2$. Let $\operatorname{PDF}_{\chi^2}(\cdot;d) = f(\cdot;d)$. Then $$ f(t;d) \equiv \left( 2^\frac{d}{2} \operatorname{\Gamma}\left(\frac{d}{2} \right) \right)^{-1} t^{\frac{d}{2}-1} e^{-\frac{t}{2}} $$ so that $$\begin{align} 1 - \operatorname{CDF}_{\chi^2}(x;d) &= 1 - \int_{-\infty}^x f(t)\,dt \\ &= \int_x^{\infty} f(t)\,dt \\ &= \int_x^\infty \left( 2^\frac{d}{2} \operatorname{\Gamma}\left(\frac{d}{2} \right) \right)^{-1} t^{\frac{d}{2}-1} e^{-\frac{t}{2}}\,dt \\ &= \left( 2^\frac{d}{2} \operatorname{\Gamma}\left(\frac{d}{2} \right) \right)^{-1} \int_x^\infty t^{\frac{d}{2}-1} e^{-\frac{t}{2}}\,dt \end{align}$$

As for the statement: It's for calculating probability that a chi-squared value is due to chance.

That's not true. For any random variable $X$, $\operatorname{CDF}(x) \equiv \operatorname{Pr}(X \leq x)$. So $Q_{\chi^2,d}$ is really the probability that a chi-square random variable with $d$ degrees of freedom takes a value greater than $\chi^2$, whatever $\chi^2$ might be. A more correct statement would be: Given some test statistic $T$ and an observed value of that test statistic $t$, and given that $T \sim \chi^2(d)$ under the null hypothesis of that test, it's the probability that a $T$ at least as large as $t$ could arise purely by chance when the null hypothesis is true.
51,581
How does cross validation work for feature selection (using stepwise regression)?
Running a single cross-validation loop yields an estimate of the out-of-sample predictive error associated with your modeling procedure, nothing more. You have 10 different models because stepwise selection is unstable, as @Dave explains. There is no reason to believe that any of your 10 models is 'right', but the mean of the cross-validation prediction error gives you an estimate of how large the prediction error will be in the future. At this point, you would run your procedure over the full dataset and use that as the final model. In general, I would advise against this, but that would be the protocol.

If you want to use cross-validation to determine the $F$-value to use as a cutoff for your modeling procedure, you need to do more. In that case, you would use a nested cross-validation scheme. In the outer loop, you would partition the data into $k$ folds and set one aside. Then you would perform another cross-validation loop on the remaining folds. In the inner loop, you would use some means to search over possible $F$-values; for instance, you could use a grid search over a series of possible $F$ cutoffs. For each possible $F$, there would be an average out-of-sample predictive accuracy score. You would take the cutoff that performed best and use it on the entire (nested) dataset to get a model. That model would be used to make predictions on the top-level fold that had been set aside, and from that you would get an estimate of the out-of-sample performance of a model that is selected in this manner. Then you would set the second fold aside, perform the inner-loop cross-validation and $F$-cutoff selection again, etc. After having done all this $k$ times, you could average those and get an average estimate of the out-of-sample performance of models selected in this manner.

After that, you can repeat the search procedure that you had used in the inner loop on the outer loop alone (i.e., there wouldn't be an inner loop this time). That will give the model slightly more data to work with to select your final cutoff. Finally, you would fit your intended model using that cutoff on the whole dataset to get your final model, and you would have an estimate of how well a model of that type, selected in that manner, will perform out of sample.

In short, the larger protocol is this:

1. Run nested cross-validation, selecting a cutoff on the inner loop and then using it in the outer loop, to get an estimate of out-of-sample performance.
2. Run cross-validation to select the cutoff to be used for the final model.
3. Fit your model to the full dataset using the cutoff selected.

To get more detail, try reading: Nested cross validation for model selection and Training on the full dataset after cross-validation?. Again, I wouldn't recommend you use stepwise selection, even in this case, because the parameters will still be biased (and the constituent hypothesis tests will still be garbage), but the out-of-sample estimate of the predictive performance of a model fitted in this manner should be OK.
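As a rough illustration of the nested scheme, here is a sketch in R on simulated data. Base R's step() tunes an AIC-style penalty k rather than an F-to-enter cutoff, so the penalty stands in for the cutoff here; the grid, fold counts, and data-generating model are all invented.

    set.seed(42)
    n <- 200; p <- 8
    dat <- as.data.frame(matrix(rnorm(n * p), n, p))
    names(dat) <- paste0("x", 1:p)
    dat$y <- 2 * dat$x1 - 1.5 * dat$x3 + rnorm(n)           # only x1 and x3 matter

    upper_scope <- reformulate(paste0("x", 1:p))            # ~ x1 + ... + x8
    penalties <- c(2, 4, 6, 8)                              # tuning grid for step()'s penalty k
    K <- 5
    outer_fold <- sample(rep(1:K, length.out = n))
    outer_mse <- numeric(K)

    for (i in 1:K) {
      train <- dat[outer_fold != i, ]
      test  <- dat[outer_fold == i, ]

      # inner loop: choose the penalty using only the outer-training data
      inner_fold <- sample(rep(1:K, length.out = nrow(train)))
      inner_mse <- sapply(penalties, function(k_pen) {
        mean(sapply(1:K, function(j) {
          tr <- train[inner_fold != j, ]; te <- train[inner_fold == j, ]
          fit <- step(lm(y ~ 1, data = tr),
                      scope = list(lower = ~ 1, upper = upper_scope),
                      direction = "both", k = k_pen, trace = 0)
          mean((te$y - predict(fit, newdata = te))^2)
        }))
      })
      best_k <- penalties[which.min(inner_mse)]

      # refit on all outer-training data with the chosen penalty, score on the held-out fold
      fit <- step(lm(y ~ 1, data = train),
                  scope = list(lower = ~ 1, upper = upper_scope),
                  direction = "both", k = best_k, trace = 0)
      outer_mse[i] <- mean((test$y - predict(fit, newdata = test))^2)
    }
    mean(outer_mse)   # estimate of out-of-sample error for the whole selection procedure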
51,582
How does cross validation works for feature selection (using stepwise regression)?
Welcome to the instability of feature selection. This is totally predictable behavior and one of the reasons why stepwise regression is less of a panacea than it first seems to be. Sure, you select some variables that work well on the training data, and by limiting the variable count to just those that influence the outcome the most, you seem to restrict the opportunity for overfitting, right? Unfortunately, you put yourself at risk of the variable selection overfitting to the training data. As you can see from your cross validation, just because a set of variables works on one sample does not assure it of working on another. That is, the feature selection is unstable, and with the selected features bouncing all over the place as you make changes to the data (which will be the case when you go predict on new data), there is justifiable doubt that the variables selected based on the training data will be the right variables for making predictions on new data. If you want to use your model just to predict, then you might be better off bootstrapping the entire dataset, fitting a stepwise model to the bootstrap sample, applying that model to the entire data set, and seeing by how much the performance (on some metric of interest, say MSE or MAE) differs. This is related to the procedure I discuss here. If that difference is an acceptable amount, you have evidence that the overall stepwise procedure is effective, which can be the case for stepwise regression in pure prediction problems. If you want to use the stepwise regression to select variables on which you do inferences like p-values or confidence intervals, all of these downstream inferences are distorted by the stepwise selection. While this link mentions Stata software, the theory does not care if you use Stata, MATLAB, Python, R, SAS, or any other software, and the previous sentence relates to points 2, 3, 4, and 7. Briefly, by doing the stepwise regression and then calculating statistics as if you had not, you are performing dishonest calculations that fail to account for the variable selection process.
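A sketch of that bootstrap check, assuming for illustration a data frame dat with response y (here just simulated noise) and using step() with its default AIC penalty as a stand-in for whatever stepwise routine is actually being run:
set.seed(1)
dat <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
optimism <- replicate(200, {
  boot <- dat[sample(nrow(dat), replace = TRUE), ]   # bootstrap resample
  fit  <- step(lm(y ~ ., data = boot), trace = 0)    # stepwise fit on the resample
  mse_boot <- mean((boot$y - predict(fit, boot))^2)  # apparent performance
  mse_full <- mean((dat$y  - predict(fit, dat))^2)   # performance on the full data
  mse_full - mse_boot                                # how much the performance degrades
})
mean(optimism)   # a small average degradation suggests the procedure is not badly overfitting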
How does cross validation works for feature selection (using stepwise regression)?
Welcome to the instability of feature selection. This is totally predictable behavior and one of the reasons why stepwise regression is less of a panacea than it first seems to be. Sure, you select so
How does cross validation works for feature selection (using stepwise regression)? Welcome to the instability of feature selection. This is totally predictable behavior and one of the reasons why stepwise regression is less of a panacea than it first seems to be. Sure, you select some variables that work well on the training data, and by limiting the variable count to just those that influence the outcome the most, you seem to restrict the opportunity for overfitting, right? Unfortunately, you put yourself at risk of the variable selection overfitting to the training data. As you can see from your cross validation, just because a set of variables works on one sample does not assure it of working on another. That is, the feature selection in unstable, and with the selected features bouncing all over the place as you make changes to the data (which will be the case when you go predict on new data), there is justifiable doubt that the variables selected based on the training data will be the right variables for making predictions on new data. If you want to use your model just to predict, then you might be better off bootstrapping the entire dataset, fitting a stepwise model to the bootstrap sample, applying that model to the entire data set, and seeing by how much the performance (on some metric of interest, say MSE or MAE) differs. This is related to the procedure I discuss here. If that is an acceptable amount, you have evidence that the overall stepwise procedure is effective, which can be the case for stepwise regression in pure prediction problems. If you want to use the stepwise regression to select variables on which you do inferences like p-values or confidence intervals, all of these downstream inferences are distorted by the stepwise selection. While this link mentions Stats software, the theory does not care if you use Stata, MATLAB, Python, R, SAS, or any other software, and the previous sentence relates to points 2, 3, 4, and 7. Briefly, by doing the stepwise regression and then calculating statistics as if you have not, you are performing dishonest calculations that fail to account for the variable selection process.
How does cross validation works for feature selection (using stepwise regression)? Welcome to the instability of feature selection. This is totally predictable behavior and one of the reasons why stepwise regression is less of a panacea than it first seems to be. Sure, you select so
51,583
How does cross validation works for feature selection (using stepwise regression)?
Training 10 models and picking the best one based on the test set performance metrics is "cheating" - your performance metrics are no longer an unbiased measure of your overall model training procedure, since your model training procedure now uses the test data to select the model! A test set should only be used to evaluate a model, never to train or select it. If you want one single set of features and one model, you can run your model training procedure on the entire dataset. You will not have a direct unbiased measure of performance (since you have no held-out test data), but the cross-validation performance should be a good approximation. You would generally expect a model trained on the full data to perform slightly better than CV suggests, as it is trained on more data.
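In code, the workflow described here looks roughly like the following (simulated data and step() are only placeholders for your own data and stepwise routine); the fold scores are merely averaged, no fold's model is kept, and the final model comes from rerunning the whole procedure on all of the data:
set.seed(1)
dat   <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
folds <- sample(rep(1:10, length.out = nrow(dat)))
cv_mse <- sapply(1:10, function(f) {
  fit <- step(lm(y ~ ., data = dat[folds != f, ]), trace = 0)
  mean((dat$y[folds == f] - predict(fit, dat[folds == f, ]))^2)
})
mean(cv_mse)                                           # performance estimate of the procedure
final_model <- step(lm(y ~ ., data = dat), trace = 0)  # one model, trained on everything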
How does cross validation works for feature selection (using stepwise regression)?
Training 10 models and picking the best one based on the test set performance metrics is "cheating" - your performance metrics are no longer an unbiased measure of your overall model training procedur
How does cross validation works for feature selection (using stepwise regression)? Training 10 models and picking the best one based on the test set performance metrics is "cheating" - your performance metrics are no longer an unbiased measure of your overall model training procedure, since your model training procedure now uses the test data to select the model! A test set should only be used to evaluate a model, never to train or select it. If you want one single set of features and one model, you can run your model training procedure on the entire dataset. You will not have a direct unbiased measure of performance (since you have no held-out test data), but the cross-validation performance should be a good approximation. You would generally expect a model trained on the full data to perform slightly better than CV suggests, as it is trained on more data.
How does cross validation works for feature selection (using stepwise regression)? Training 10 models and picking the best one based on the test set performance metrics is "cheating" - your performance metrics are no longer an unbiased measure of your overall model training procedur
51,584
Antithetic method for monte carlo when bounds of the integral are infinite
Your code does not correspond to your description of the problem, so it is not surprising that you are not getting the results you expected. The integral you want to approximate is $$ \int_0^\infty e^{-x} \, dx $$ but in your code, you sample the a values from a $\mathcal{U}(0, 1)$ distribution, and I'd argue that $1$ is much less than infinity. As you can see below, the integral over $(0, 1)$ is indeed close to what you've got > integrate(\(x) exp(-x), 0, 1) 0.6321206 with absolute error < 7e-15 > pexp(1) [1] 0.6321206 The basic Monte Carlo approximation of the integral samples the $x$ values from the uniform distribution $\mathcal{U}(a, b)$ with the bounds corresponding to the integration bounds, and approximates the integral with $$ \frac{b-a}{N} \sum_{i=1}^N f(x_i) $$ Let's pick $1000$ as a "high number" to approximate the infinity ($e^{-1000}$ would be close enough to zero); in such a case the result is > n <- 1000000 > a <- 0 > b <- 1000 > exp(-b) [1] 0 > (b-a)/n * sum(exp(-runif(n, a, b))) [1] 1.008663 > (b-a)/n * sum(exp(-runif(n, a, b))) [1] 0.989083 You cannot use $\mathcal{U}(0, 1)$ for sampling here and use 1-a as the antithetic variable; if you sample $u$ from $\mathcal{U}(a, b)$, the antithetic counterpart is $a + b - u$.
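For completeness, a sketch of what the antithetic version looks like when sampling from $\mathcal{U}(a, b)$ and pairing each draw $u$ with $a + b - u$; over such a long interval the variance reduction is modest, but the estimator remains unbiased:
set.seed(1)
n <- 1000000; a <- 0; b <- 1000
u <- runif(n, a, b)
(b - a) * mean(exp(-u))                            # plain Monte Carlo, near 1
(b - a) * mean((exp(-u) + exp(-(a + b - u))) / 2)  # antithetic pairing, also near 1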
Antithetic method for monte carlo when bounds of the integral are infinite
Your code does not correspond to your description of the problem, so it is not surprising that you are not getting the results you expected. The integral you want to approximate is $$ \int_0^\infty e^
Antithetic method for monte carlo when bounds of the integral are infinite Your code does not correspond to your description of the problem, so it is not surprising that you are not getting the results you expected. The integral you want to approximate is $$ \int_0^\infty e^{-x} \, dx $$ but in your code, you sample the a values from $\mathcal{U}(0, 1)$ distribution, I'd argue that $1$ is much less than infinity. As you can see below, the integral in $(0, 1)$ is indeed close to what you've got > integrate(\(x) exp(-x), 0, 1) 0.6321206 with absolute error < 7e-15 > pexp(1) [1] 0.6321206 The basic Monte Carlo approximation of the integral samples the $x$ values from the uniform distribution $\mathcal{U}(a, b)$ with the bounds corresponding to the integration bounds, and approximates the integral with $$ \frac{b-a}{N} \sum_{i=1}^N f(x) $$ Let's pick $1000$ as a "high number" to approximate the infinity ($e^{-1000}$ would be close enough to zero), in such a case the result is > n <- 1000000 > a <- 0 > b <- 1000 > exp(-b) [1] 0 > (b-a)/n * sum(exp(-runif(n, a, b))) [1] 1.008663 > (b-a)/n * sum(exp(-runif(n, a, b))) [1] 0.989083 You cannot use $\mathcal{U}(0, 1)$ for sampling here and use 1-a as the antithetic variable.
Antithetic method for monte carlo when bounds of the integral are infinite Your code does not correspond to your description of the problem, so it is not surprising that you are not getting the results you expected. The integral you want to approximate is $$ \int_0^\infty e^
51,585
Antithetic method for monte carlo when bounds of the integral are infinite
The point to the method of antithetic variates is to improve on such direct sampling methods. What can make it work well is to re-express the integral as an expectation of something with respect to a distribution that (a) you can efficiently sample from and (b) samples the largest absolute values of the integrand with relatively high probability. This distribution automatically finesses the problem of an infinite integration interval by assigning low probabilities to high values. Criterion (b) means the density function of your chosen distribution should look a little like the integrand. In mathematical notation, this means finding a distribution with density $f$ and re-expressing the integral as $$\int_0^\infty e^{-x}\,\mathrm{d}x = \int_0^\infty \frac{e^{-x}}{f(x)}\, f(x)\mathrm{d}x.$$ This is the expectation of $e^{-x}/f(x)$ with respect to the new distribution. Finding such a density $f$ takes some creativity but it can really pay off. In the present instance, suppose you didn't know how to integrate exponential functions, so that "$e^{-x}$" is just some kind of black box, perhaps an expensive one to consult at that. But let's suppose you do have some elementary command of integration; say, the ability to integrate powers (the very first thing learned in integral Calculus). We might then try something like $$f(x) \ \propto \ x^r$$ for some power $r.$ Its integral is $$F(x) = \frac{x^{r+1}}{r+1}.$$ For this to remain finite for large $x$ we will need $r+1 \lt 0;$ that is, $r \lt 1.$ But there's a problem at $x=0:$ both $f$ and $F$ will blow up. But why start integrating at $0,$ since that's a problem? Let's start, say, at $1.$ What this amounts to is choosing $$f(x) \ \propto \ \frac{1}{(1+x)^p}$$ for some power $p \gt 1$ (a Pareto distribution). The integral of this $f$ is $$F(x) = 1 - \frac{1}{(1+x)^{p-1}}.$$ Consequently $$\frac{e^{-x}}{f(x)} = \frac{e^{-x} (1 + x)^p}{p-1}\tag{*}$$ and you can also easily solve the equation $F(x) = q$ (for $0\le q \lt 1$) to give $$x = F^{-1}(q) = \left(1-q\right)^{1/(1-p)} - 1\tag{**}.$$ This means you can generate random values $x_i$ from $F$ by generating $q$ from a uniform distribution and applying $(**)$ to it. The antithetic variable is obtained by applying $(**)$ to $1-q,$ as shown in the question. Let's compare the original integrand to this density $f:$ They are qualitatively similar, implying we ought to gain something for our efforts. To sum up this analysis, you will draw uniform independent values $q_1, q_2, \ldots, q_n$ from which you will compute $x_i$ and antithetic values $x_i^\prime$ using $(**)$ and, from those, apply $(*)$ to obtain values $y_i$ and $y_i^\prime,$ averaging them as $(y_i+y^\prime_i)/2$ to obtain one estimate of the integral. By repeating this you can estimate the integral with their average $$\frac{1}{n}\sum_{i=1}^n \frac{y_i + y^\prime_i}{2}.$$ We can do even better: we can use tiny $n$ to estimate the integral and study how variable the resulting estimate is when we alter the parameter $p.$ When I did this, I found that $p\approx 3$ gives estimates with the lowest variance. Here, to illustrate, are the results of 50,000 estimates of the integral based on just $n=10$ draws. On the left is the solution for $p=3$ and on the right is the solution based on drawing $x$ uniformly from the interval $[0, \log 2^{52}]$ (on which $1 + e^{-x}$ is distinguishable from $1$). Although both are centered around the correct value of $1,$ the Pareto solution tends to be much closer to this value than the Uniform solution. 
Indeed, because the ratio of their variances is $235:1,$ the Uniform solution requires $235$ times as many draws as the Pareto distribution in order to estimate the integral with the same precision. Specifically, this example indicates you can reliably estimate the integral to within 20% of its true value with just $2n=20$ evaluations of the exponential function, while using the uniform method would require $235\times 20 = 4700$ evaluations to achieve the same precision. This is what a decent choice of distribution can accomplish. Code to reproduce the example # # The percentage point function for a Pareto distribution. # h <- function(q, pow) (1-q) ^ (1 / (1 - pow)) - 1 xmax <- 52 * log(2) # An effectively infinite (log) upper limit. pow <- 3 # # Study two estimation procedures by applying them to many independent samples. # (Takes less one second for 50,000 iterations.) # # set.seed(17) # Use for reproducibility z <- replicate(5e4, { q <- runif(1e1) # Random uniform values # # The Pareto sampling estimate. # x <- h(q, pow) # A random draw x. <- h(1-q, pow) # Its antithetic variable y <- exp(-x) * (1 + x)^pow / (pow - 1) y. <- exp(-x.) * (1 + x.)^pow / (pow - 1) z1 <- mean(c(y, y.)) # # The Uniform sampling estimate. # y <- exp(-q*xmax) y. <- exp(-(1-q)*xmax) z2 <- xmax * mean(c(y, y.)) c(Pareto=z1, Uniform=z2) }) # # Plot the sampling distributions of the two estimation procedures. # par(mfrow = c(1,2)) hist(z[1,], breaks=50, freq=FALSE, main="Pareto Solution", xlab="Z") hist(z[2,], breaks=25, freq=FALSE, main="Uniform Solution", xlab="Z") par(mfrow = c(1,1)) # # How much more precise is the first procedure compared to the second? # (var(z[2,]) / var(z[1,]))
Antithetic method for monte carlo when bounds of the integral are infinite
The point to the method of antithetic variates is to improve on such direct sampling methods. What can make it work well is to re-express the integral as an expectation of something with respect to a
Antithetic method for monte carlo when bounds of the integral are infinite The point to the method of antithetic variates is to improve on such direct sampling methods. What can make it work well is to re-express the integral as an expectation of something with respect to a distribution that (a) you can efficiently sample from and (b) samples the largest absolute values of the integrand with relatively high probability. This distribution automatically finesses the problem of an infinite integration interval by assigning low probabilities to high values. Criterion (b) means the density function of your chosen distribution should look a little like the integrand. In mathematical notation, this means finding a distribution with density $f$ and re-expressing the integral as $$\int_0^\infty e^{-x}\,\mathrm{d}x = \int_0^\infty \frac{e^{-x}}{f(x)}\, f(x)\mathrm{d}x.$$ This is the expectation of $e^{-x}/f(x)$ with respect to the new distribution. Finding such a density $f$ takes some creativity but it can really pay off. In the present instance, suppose you didn't know how to integrate exponential functions, so that "$e^{-x}$" is just some kind of black box, perhaps an expensive one to consult at that. But let's suppose you do have some elementary command of integration; say, the ability to integrate powers (the very first thing learned in integral Calculus). We might then try something like $$f(x) \ \propto \ x^r$$ for some power $r.$ Its integral is $$F(x) = \frac{x^{r+1}}{r+1}.$$ For this to remain finite for large $x$ we will need $r+1 \lt 0;$ that is, $r \lt 1.$ But there's a problem at $x=0:$ both $f$ and $F$ will blow up. But why start integrating at $0,$ since that's a problem? Let's start, say, at $1.$ What this amounts to is choosing $$f(x) \ \propto \ \frac{1}{(1+x)^p}$$ for some power $p \gt 1$ (a Pareto distribution). The integral of this $f$ is $$F(x) = 1 - \frac{1}{(1+x)^{p-1}}.$$ Consequently $$\frac{e^{-x}}{f(x)} = \frac{e^{-x} (1 + x)^p}{p-1}\tag{*}$$ and you can also easily solve the equation $F(x) = q$ (for $0\le q \lt 1$) to give $$x = F^{-1}(q) = \left(1-q\right)^{1/(1-p)} - 1\tag{**}.$$ This means you can generate random values $x_i$ from $F$ by generating $q$ from a uniform distribution and applying $(**)$ to it. The antithetic variable is obtained by applying $(**)$ to $1-q,$ as shown in the question. Let's compare the original integrand to this density $f:$ They are qualitatively similar, implying we ought to gain something for our efforts. To sum up this analysis, you will draw uniform independent values $q_1, q_2, \ldots, q_n$ from which you will compute $x_i$ and antithetic values $x_i^\prime$ using $(**)$ and, from those, apply $(*)$ to obtain values $y_i$ and $y_i^\prime,$ averaging them as $(y_i+y^\prime_i)/2$ to obtain one estimate of the integral. By repeating this you can estimate the integral with their average $$\frac{1}{n}\sum_{i=1}^n \frac{y_i + y^\prime_i}{2}.$$ We can do even better: we can use tiny $n$ to estimate the integral and study how variable the resulting estimate is when we alter the parameter $p.$ When I did this, I found that $p\approx 3$ gives estimates with the lowest variance. Here, to illustrate, are the results of 50,000 estimates of the integral based on just $n=10$ draws. On the left is the solution for $p=3$ and on the right is the solution based on drawing $x$ uniformly from the interval $[0, \log 2^{52}]$ (on which $1 + e^{-x}$ is distinguishable from $1$). 
Although both are centered around the correct value of $1,$ the Pareto solution tends to be much closer to this value than the Uniform solution. Indeed, because the ratio of their variances is $235:1,$ the Uniform solution requires $235$ times as many draws as the Pareto distribution in order to estimate the integral with the same precision. Specifically, this example indicates you can reliably estimate the integral to within 20% of its true value with just $2n=20$ evaluations of the exponential function, while using the uniform method would require $235\times 20 = 4700$ evaluations to achieve the same precision. This is what a decent choice of distribution can accomplish. Code to reproduce the example # # The percentage point function for a Pareto distribution. # h <- function(q, pow) (1-q) ^ (1 / (1 - pow)) - 1 xmax <- 52 * log(2) # An effectively infinite (log) upper limit. pow <- 3 # # Study two estimation procedures by applying them to many independent samples. # (Takes less one second for 50,000 iterations.) # # set.seed(17) # Use for reproducibility z <- replicate(5e4, { q <- runif(1e1) # Random uniform values # # The Pareto sampling estimate. # x <- h(q, pow) # A random draw x. <- h(1-q, pow) # Its antithetic variable y <- exp(-x) * (1 + x)^pow / (pow - 1) y. <- exp(-x.) * (1 + x.)^pow / (pow - 1) z1 <- mean(c(y, y.)) # # The Uniform sampling estimate. # y <- exp(-q*xmax) y. <- exp(-(1-q)*xmax) z2 <- xmax * mean(c(y, y.)) c(Pareto=z1, Uniform=z2) }) # # Plot the sampling distributions of the two estimation procedures. # par(mfrow = c(1,2)) hist(z[1,], breaks=50, freq=FALSE, main="Pareto Solution", xlab="Z") hist(z[2,], breaks=25, freq=FALSE, main="Uniform Solution", xlab="Z") par(mfrow = c(1,1)) # # How much more precise is the first procedure compared to the second? # (var(z[2,]) / var(z[1,]))
Antithetic method for monte carlo when bounds of the integral are infinite The point to the method of antithetic variates is to improve on such direct sampling methods. What can make it work well is to re-express the integral as an expectation of something with respect to a
51,586
Slope of independent variable is larger when I divide sample into subsets
This is a very common scenario when you split your data into groups that differ systematically. Here's an example: set.seed(4218) N = 100 group <- rep(1:2, each = N%/%2) x <- rnorm(N) y <- sqrt(.2) * x + sqrt(.8) * rnorm(N) x[group==2] <- x[group==2]+5 splitByGroup <- split(cbind.data.frame(x=x,y=y), group) modelAll <- lm(y~x) modelG1 <- lm(y~x, data=splitByGroup[[1]]) modelG2 <- lm(y~x, data=splitByGroup[[2]]) plot(y~x, col = group + 1) abline(coef=modelAll$coef, col = 4, lwd = 2) abline(coef=modelG1$coef, col = 2, lwd = 2) abline(coef=modelG2$coef, col = 3, lwd = 2) legend("topleft", col = 2:4, lwd=2, legend = paste("Slope for", c("group 1", "group 2", "both groups"), "=", round(c(modelG1$coef[2],modelG2$coef[2],modelAll$coef[2]),3))) Here, the overall slope is attenuated because we've failed to account for group. If we include group in our model, however, we can recover a weighted average of the within-group slopes (as you intuited): modelAll2 <- lm(y~x + group) summary(modelAll2)$coef Estimate Std. Error t value Pr(>|t|) (Intercept) 1.3079814 0.45867941 2.851624 5.315169e-03 x 0.3730811 0.08017352 4.653420 1.034285e-05 group -1.5352505 0.42212272 -3.636977 4.440556e-04
Slope of independent variable is larger when I divide sample into subsets
This is a very common scenario when you split your data into groups that differ systematically. Here's an example: set.seed(4218) N = 100 group <- rep(1:2, each = N%/%2) x <- rnorm(N) y <- sqrt(.2) *
Slope of independent variable is larger when I divide sample into subsets This is a very common scenario when you split your data into groups that differ systematically. Here's an example: set.seed(4218) N = 100 group <- rep(1:2, each = N%/%2) x <- rnorm(N) y <- sqrt(.2) * x + sqrt(.8) * rnorm(N) x[group==2] <- x[group==2]+5 splitByGroup <- split(cbind.data.frame(x=x,y=y), group) modelAll <- lm(y~x) modelG1 <- lm(y~x, data=splitByGroup[[1]]) modelG2 <- lm(y~x, data=splitByGroup[[2]]) plot(y~x, col = group + 1) abline(coef=modelAll$coef, col = 4, lwd = 2) abline(coef=modelG1$coef, col = 2, lwd = 2) abline(coef=modelG2$coef, col = 3, lwd = 2) legend("topleft", col = 2:4, lwd=2, legend = paste("Slope for", c("group 1", "group 2", "both groups"), "=", round(c(modelG1$coef[2],modelG2$coef[2],modelAll$coef[2]),3))) Here, the overall slope is attenuated because we've failed to account for group. If we include group in our model, however, we can recover a weighted average of the within-group slopes (as you intuited): modelAll2 <- lm(y~x + group) summary(modelAll2)$coef Estimate Std. Error t value Pr(>|t|) (Intercept) 1.3079814 0.45867941 2.851624 5.315169e-03 x 0.3730811 0.08017352 4.653420 1.034285e-05 group -1.5352505 0.42212272 -3.636977 4.440556e-04
Slope of independent variable is larger when I divide sample into subsets This is a very common scenario when you split your data into groups that differ systematically. Here's an example: set.seed(4218) N = 100 group <- rep(1:2, each = N%/%2) x <- rnorm(N) y <- sqrt(.2) *
51,587
Slope of independent variable is larger when I divide sample into subsets
This can happen when different subsets of the data come from different distributions. It is hard to describe without a graph, but imagine a small cloud of points in the upper-left corner of a square. Let's assume the points closely follow a cigar-shaped cloud that would suggest a positive correlation. Now, imagine another, duplicate cloud of points, but in the lower-right corner of the square. Let's keep the clouds of points far enough apart so that they don't really overlap. If you take the regression for each cloud separately, you will obtain a positive slope. However, if you combine the data sets into one (and ignore the grouping structure), then the aggregate regression will have a negative slope. This example is a little more extreme than what you have described, but the same general phenomenon could explain what you are observing.
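A small simulation of that picture (all numbers invented): each cloud has a positive within-cloud slope, but pooling the clouds gives a negative slope.
set.seed(1)
n  <- 50
x1 <- rnorm(n);     y1 <- 10 + 0.8 * x1 + rnorm(n, sd = 0.5)        # upper-left cloud
x2 <- rnorm(n) + 8; y2 <-  0 + 0.8 * (x2 - 8) + rnorm(n, sd = 0.5)  # lower-right cloud
coef(lm(y1 ~ x1))[2]                # positive slope within cloud 1
coef(lm(y2 ~ x2))[2]                # positive slope within cloud 2
coef(lm(c(y1, y2) ~ c(x1, x2)))[2]  # negative slope for the pooled data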
Slope of independent variable is larger when I divide sample into subsets
This is possible due to the possible nature of different distributions for different subsets of the data. This will be hard to describe without a graph, but imagine a small cloud of points in the uppe
Slope of independent variable is larger when I divide sample into subsets This is possible due to the possible nature of different distributions for different subsets of the data. This will be hard to describe without a graph, but imagine a small cloud of points in the upper-left corner of a square. Let's assume the points follow closely to a cigar-shaped cloud of points that would suggest a positive correlation. Now, imagine another duplicate cloud of points, but in the lower-right corner of the square. Let's keep the clouds of points far enough apart so that they don't really overlap. If you take the regression for each cloud separately, you will obtain a positive slope. However, if you combine the data sets into one (and ignore the grouping structure), then the aggregate regression will have a negative slope. This example is a little more extreme than what you have described, but the same general theory could explain what you are observing.
Slope of independent variable is larger when I divide sample into subsets This is possible due to the possible nature of different distributions for different subsets of the data. This will be hard to describe without a graph, but imagine a small cloud of points in the uppe
51,588
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T}Y$ vs $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$
I don't believe that the first estimator is unbiased. Under the linear model assumption $$ Y = X \beta + \epsilon $$ the expectation of the first estimator is $$ E[X^{T} Y] = E[X^{T}X \beta] + E[X^{T} \epsilon] = \underbrace{X^{T}X \beta + X^{T} E[\epsilon]}_{\text{linearity of expectation}} = X^{T}X \beta$$ From which we conclude that the proposed estimator is unbiased if and only if $X^{T} X \beta = \beta$. This computation also clearly shows why the second estimator is always unbiased.
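A quick numerical check of this algebra (toy design matrix, errors simulated with $X$ held fixed): averaging each estimator over many samples shows that the first one centres on $X^{T}X\beta$ while the usual OLS estimator centres on $\beta$.
set.seed(1)
n    <- 50
X    <- cbind(1, rnorm(n))                         # intercept plus one covariate
beta <- c(2, -1)
sims <- replicate(5000, {
  y <- X %*% beta + rnorm(n)
  cbind(naive = c(t(X) %*% y),                     # the first estimator, X'Y
        ols   = c(solve(t(X) %*% X, t(X) %*% y)))  # the second (OLS) estimator
})
apply(sims, 1:2, mean)     # "ols" column is close to beta; "naive" column is not
drop(t(X) %*% X %*% beta)  # ...it is close to X'X beta instead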
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T
I don't believe that the first estimator is unbiased. Under the linear model assumption $$ Y = X \beta + \epsilon $$ the expectation of the first estimator is $$ E[X^{T} Y] = E[X^{T}X \beta] + E[X^{T}
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T}Y$ vs $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$ I don't believe that the first estimator is unbiased. Under the linear model assumption $$ Y = X \beta + \epsilon $$ the expectation of the first estimator is $$ E[X^{T} Y] = E[X^{T}X \beta] + E[X^{T} \epsilon] = \underbrace{X^{T}X \beta + X^{T} E[\epsilon]}_{\text{linearity of expectation}} = X^{T}X \beta$$ From which we conclude that the proposed estimator is unbiased if and only if $X^{T} X \beta = \beta$. This computation also clearly shows why the second estimator is always unbiased.
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T I don't believe that the first estimator is unbiased. Under the linear model assumption $$ Y = X \beta + \epsilon $$ the expectation of the first estimator is $$ E[X^{T} Y] = E[X^{T}X \beta] + E[X^{T}
51,589
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T}Y$ vs $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$
Your first estimator doesn't work. It's not only biased, but the bias increases with the sample size. Imagine a simple intercept model: $$y_i=c+\varepsilon_i$$ Here your design matrix is a simple vector of ones, $x_i=1$. Your estimator is $$\hat c = X'Y\equiv\sum_{i=1}^n y_i$$ If you add more data your estimator keeps increasing without bound: $$\hat c\to\infty$$ when $n\to\infty$. This doesn't make sense, because $E[y_i]=c$, assuming $E[\varepsilon_i]=0$, which suggests that maybe $\hat c=\bar y$ would be a good estimator. However, here $X'X=n$, and with the second estimator you get what you'd expect from a reasonable-looking estimator: $$\hat c=\frac{1}{n}\sum_{i=1}^n y_i\equiv \bar y$$ UPDATE: Your first estimator will work in one case: if the design matrix is orthonormal and square, i.e. it forms an orthonormal basis. The simplest case is when you have exactly one observation in my example. You have $X=1$, so $X'Y=y_1$, and then you get: $$\hat c=y_1$$
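A tiny illustration of the intercept-only case: $X'Y=\sum y_i$ grows without bound as $n$ grows, while $(X'X)^{-1}X'Y=\bar y$ settles down near $c$ (here $c = 3$, an arbitrary value).
set.seed(1)
c_true <- 3
for (n in c(10, 100, 1000)) {
  y <- c_true + rnorm(n)
  cat(sprintf("n = %4d   X'Y = %8.1f   (X'X)^(-1) X'Y = %.3f\n", n, sum(y), mean(y)))
}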
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T
Your first estimator doesn't work. It's not only biased, but the bias increases with the sample size. Imagine a simple intercept model: $$y_i=c+\varepsilon_i$$ here your design matrix is a simple vect
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T}Y$ vs $\hat{\beta} = (X^{T}X)^{-1}X^{T}Y$ Your first estimator doesn't work. It's not only biased, but the bias increases with the sample size. Imagine a simple intercept model: $$y_i=c+\varepsilon_i$$ here your design matrix is a simple vector of ones, $x_i=1$. Your estimator is $$\hat c = X'Y\equiv\sum_{i=1}^n y_i$$ If you add more data your estimator keeps increasing without bound: $$\hat c\to\infty$$ when $n\to\infty$ This doesn't make a sense, because, $E[y_i]=c$, assuming $E[\varepsilon_i]=0$, which suggests that maybe $\hat c=\bar y_i$ would be a good estimator. However, here $X'X=n$, and with the second estimator you get what you'd expect from a reasonable looking estimator: $$\hat c=\frac{1}{n}\sum_{i=1}^n y_i\equiv \bar y$$ UPDATE: Your second estimator will work in one case: if the design matrix is orthonormal and square, i.e. forms orthonormal basis. The simplest case is when you have exactly one observation in my example. You have $X=1$, so $X'Y=y_1$, then you get: $$\hat c=y_1$$
For least squares estimation, what is the difference between using the estimator $\hat{\beta} = X^{T Your first estimator doesn't work. It's not only biased, but the bias increases with the sample size. Imagine a simple intercept model: $$y_i=c+\varepsilon_i$$ here your design matrix is a simple vect
51,590
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
Search for the "transformation method" or "inverse transform method", which is a way to generate random numbers with an arbitrary distribution. You'll find many lecture notes describing the idea. This is the Wikipedia page: Inverse transform sampling. It has links to more detailed resources at the bottom. The basic result is this recipe: if you need some distribution $D$, find its CDF and invert it; generate uniformly distributed random numbers between 0 and 1; then transform these numbers with the inverse CDF to get numbers distributed according to $D$. The calculation can't be done analytically for every distribution. For your distribution it can. If the domain of the "V" is $[-1,1]$, then the PDF is $|x|$, the CDF is $(1+\operatorname{sign}(x) x^2)/2$, and the inverse CDF, as a function of the uniform draw $u$, will be $$F^{-1}(u) = \operatorname{sign}(2u-1) \sqrt{|1-2u|}$$ For example, the whole recipe takes only a line or two of code (see the sketch below).
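A minimal sketch of this recipe in R (sample size and plotting choices are arbitrary):
set.seed(1)
u <- runif(100000)
x <- sign(2 * u - 1) * sqrt(abs(1 - 2 * u))            # inverse CDF applied to uniforms
hist(x, breaks = 50, freq = FALSE, main = "V-shaped sample")
curve(abs(x), from = -1, to = 1, add = TRUE, lwd = 2)  # target density |x|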
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
Search for the "transformation method" or "inverse transform method", which is a way to generate random numbers with an arbitrary distribution. You'll find many lecture notes describing the idea. This
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? Search for the "transformation method" or "inverse transform method", which is a way to generate random numbers with an arbitrary distribution. You'll find many lecture notes describing the idea. This is the wikipedia page: Inverse transform sampling. It has links to more detailed resources at the bottom. The basic result is this recipe: If you need some distribution $D$, then find its CDF and invert it. generate uniformly distributed random numbers between 0 and 1 transform these numbers with the inverse CDF to get numbers distributed according to $D$ The calculation can't be done analytically for every distribution. For your distribution it can. If the domain of the "V" is $[-1,1]$, then the PDF is $|x|$, the CDF is $(1+\operatorname{sign}(x) x^2)/2$, and the inverse CDF will be $$\operatorname{sign}(2x-1) \sqrt{|1-2x|}$$ For example, in Mathematica
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? Search for the "transformation method" or "inverse transform method", which is a way to generate random numbers with an arbitrary distribution. You'll find many lecture notes describing the idea. This
51,591
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
If $X$ and $Y$ are independent uniformly distributed random variables on $[0,1]$, then $X + Y$ has a pyramid or "inverted V" shaped distribution on $[0,2]$. All we need to do to turn this pyramid into a V is to swap the two halves of the distribution. Thus, given independent $X, Y \sim \mathcal U(0,1)$, let $$Z = \begin{cases} X+Y & \text{if } X+Y < 1 \\ X+Y-2 & \text{otherwise.} \end{cases}$$ The random variable $Z$ will then have the V-shaped distribution you want on $[-1,1]$.
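A short simulation of this construction:
set.seed(1)
s <- runif(100000) + runif(100000)  # triangular ("inverted V") on [0, 2]
z <- ifelse(s < 1, s, s - 2)        # swap the halves: V-shaped on [-1, 1]
hist(z, breaks = 50, freq = FALSE)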
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
If $X$ and $Y$ are independent uniformly distributed random variables on $[0,1]$, then $X + Y$ has a pyramid or "inverted V" shaped distribution on $[0,2]$. All we need to do to turn this pyramid into
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? If $X$ and $Y$ are independent uniformly distributed random variables on $[0,1]$, then $X + Y$ has a pyramid or "inverted V" shaped distribution on $[0,2]$. All we need to do to turn this pyramid into a V is to swap the two halves of the distribution. Thus, given independent $X, Y \sim \mathcal U(0,1)$, let $$Z = \begin{cases} X+Y & \text{if } X+Y < 1 \\ X+Y-2 & \text{otherwise.} \end{cases}$$ The random variable $Z$ will then have the V-shaped the distribution you want on $[-1,1]$.
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? If $X$ and $Y$ are independent uniformly distributed random variables on $[0,1]$, then $X + Y$ has a pyramid or "inverted V" shaped distribution on $[0,2]$. All we need to do to turn this pyramid into
51,592
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
This should really be a comment below the above answer. But since I do not have enough reputation to make this comment I will post it here. In the question you originally asked how to do this in "excel". This should do it, =SIGN(2*RAND()-1)*(ABS(1-2*RAND()))^0.5 An interesting note to the previous answer is that it does not change the distribution of the created variable at all if you are using one uniform random variable (the same for both) to generate the V-shaped distribution or two (for the sign function and the abs function).
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
This should really be a comment below the above answer. But since I do not have enough reputation to make this comment I will post it here. In the question you originally asked how to do this in "ex
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? This should really be a comment below the above answer. But since I do not have enough reputation to make this comment I will post it here. In the question you originally asked how to do this in "excel". This should do it, =SIGN(2*RAND()-1)*(ABS(1-2*RAND()))^0.5 An interesting note to the previous answer is that it does not change the distribution of the created variable at all if you are using one uniform random variable (the same for both) to generate the V-shaped distribution or two (for the sign function and the abs function).
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? This should really be a comment below the above answer. But since I do not have enough reputation to make this comment I will post it here. In the question you originally asked how to do this in "ex
51,593
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
You do want to use $U\sim \text{Uniform}(0,1)$. You are looking for something called the Probability Integral Transform (PIT). If $X \sim F$, then $F(X)\sim \text{Uniform}(0,1)$. Therefore, first you will find the CDF for the distribution you are interested in, then you will transform. For example, if you want $f(x)=|x|$, $x\in (-1,1)$, integrate to get $F(x)=P(X\leq x)=\begin{cases} \frac{-x^2}{2}+\frac{1}{2} & x\leq 0 \\ \frac{x^2}{2}+\frac{1}{2} & x> 0 \end{cases}$. Then, for $U \sim \text{Uniform}(0,1)$, $F^{-1}(U) \sim F$.
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
You do want to use $U\sim \text{Uniform}(0,1)$. You are looking for something called Probability Integral Transform (PIT). If $X \sim F$ then, $F(X)\sim \text{Uniform}(0,1)$. Therefore, first you wil
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? You do want to use $U\sim \text{Uniform}(0,1)$. You are looking for something called Probability Integral Transform (PIT). If $X \sim F$ then, $F(X)\sim \text{Uniform}(0,1)$. Therefore, first you will find the CDF for the distribution you are interested in, then you will transform. For example if you want $f(x)=|x|$, $x\in (-1,1)$, integrate to get $F(x)=P(X\leq x)=\begin{cases} \frac{-x^2}{2}+\frac{1}{2} & x\leq 0 \\ \frac{x^2}{2}+\frac{1}{2} & x> 0 \end{cases}$. Then, $F^{-1}(u) \sim F$
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? You do want to use $U\sim \text{Uniform}(0,1)$. You are looking for something called Probability Integral Transform (PIT). If $X \sim F$ then, $F(X)\sim \text{Uniform}(0,1)$. Therefore, first you wil
51,594
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
There are a variety of approaches that might be suitable. Even with the comments you haven't really pinned it down enough (you give an example of what you want, but not what range of cases you want considered), but here are some examples of approaches: 1) rejection sampling. Generate a uniform on the desired range, and use rejection to obtain the desired distribution. This works quite generally, and can be made reasonably efficient even for fairly general cases. Variants of rejection sampling, such as the ziggurat approach, can be very fast, but may be quite fiddly to set up if you only want a few numbers. 2) For the case of $f(x) =|x|$ on $(-1,1)$. Let $U_1,U_2$ be iid standard uniform. Then $\text{max}(U_1,U_2)$ has a distribution like the positive half of the desired density. Let $Z$ be a random sign - i.e. $\{-1, +1\}$ with equal probability. Then $X=Z\,\text{max}(U_1,U_2)$ has the desired distribution. 3) (following the same set-up as in (2)): Let $V=\sqrt{U_1}$, and attach a random sign to that, $X=Z\,V$. (This uses the probability integral transform to get V of the right form). There are innumerable other approaches with varying mixes of convenience and speed. For example, the approach in (2) can be used to generate two such variables at a time, and if speed is paramount, the bit required for a random sign may be taken from one of the uniforms used (preferably before normalizing to (0,1), and then bitshifting or a smaller scaling factor used to take what's left to still be uniform); this latter approach might be used in (3) for example, or in a modified version of (1).
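As an illustration, a sketch of approach (2), a random sign times the larger of two uniforms:
set.seed(1)
n <- 100000
z <- sample(c(-1, 1), n, replace = TRUE)  # random sign
x <- z * pmax(runif(n), runif(n))         # V-shaped on (-1, 1)
hist(x, breaks = 50, freq = FALSE)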
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers?
There are a variety of approaches that might be suitable. Even with the comments you haven't really pinned it down enough (you give an example of what you want, but not what range of cases you want c
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? There are a variety of approaches that might be suitable. Even with the comments you haven't really pinned it down enough (you give an example of what you want, but not what range of cases you want considered), but here are some examples of approaches: 1) rejection sampling. Generate a uniform on the desired range, and use rejection to obtain the desired distribution. This works quite generally, and can be made reasonably efficient even for fairly general cases. Variants of rejection sampling, such as the ziggurat approach, can be very fast, but may be quite fiddly to set up if you only want a few numbers. 2) For the case of $f(x) =|x|$ on $(-1,1)$. Let $U_1,U_2$ be iid standard uniform. Then $\text{max}(U_1,U_2)$ has a distribution like the positive half of the desired density. Let $Z$ be a random sign - i.e. $\{-1, +1\}$ with equal probability. Then $X=Z\,\text{max}(U_1,U_2)$ has the desired distribution. 3) (following the same set-up as in (2)): Let $V=\sqrt{U_1}$, and attach a random sign to that, $X=Z\,V$. (This uses the probability integral transform to get V of the right form). There are innumerable other approaches with varying mixes of convenience and speed. For example, the approach in (2) can be used to generate two such variables at a time, and if speed is paramount, the bit required for a random sign may be taken from one of the uniforms used (preferably before normalizing to (0,1), and then bitshifting or a smaller scaling factor used to take what's left to still be uniform); this later approach might be used in (3) for example, or in a modified version of (1).
How do I get "V-shaped" distributed random numbers from uniformly distributed numbers? There are a variety of approaches that might be suitable. Even with the comments you haven't really pinned it down enough (you give an example of what you want, but not what range of cases you want c
51,595
Interpreting weird box plot with reversed whiskers
It is impossible to know without knowing more about what your software thinks is the right way to draw a box and whisker plot. It is even more difficult without a numeric scale to anchor the results on. Regardless, there are a number of different guidelines in this regard (in general). However, we can always resort to reading the documentation boxes: the main body of the boxplot showing the quartiles and the median’s confidence intervals if enabled. medians: horizonal lines at the median of each box. whiskers: the vertical lines extending to the most extreme, n-outlier data points. caps: the horizontal lines at the ends of the whiskers. fliers: points representing data that extend beyone (sic) the whiskers (outliers). Given the values of 16.5, 17.14, 13.5, and 16.75, the value of 13.5 is being treated as a 'flier'. The boxes are stretching from Q1 to Q3. The horizontal line is the median (aka Q2). The exact calculation of these values has a number of different approaches, but I'll just grab the handy values from R (quantile defaults) of 15.75 for Q1, 16.625 for Q2, and 16.8475 for Q3. Although the documentation cited above is unclear, it appears that the whiskers and caps extend to the most extreme, n-outlier data points excluding the 'fliers' (more on this later). Therefore, we can expect them to extend from 16.50 to 17.14. That is, they will extend to a value closer to the median than Q1 (at the bottom) and slightly beyond Q3 (at the top)... which is exactly what we see. However, given the circular definition of whiskers and fliers... you have to look further up in the docs to see that whiskers are "a function of the inner quartile range. They extend to the most extreme data point within ( whis*(75%-25%) ) data range" where 'whis' has a default of 1.5. Combining these sources of information, we can see that whiskers would plot points 1.5 times the interquartile range, but they stop at the most extreme data point inside that range. Data points beyond that range are dubbed fliers and plotted as such. So, in response to the second question it is 'valid'...it isn't my preferred way of seeing boxplots drawn, but that doesn't make it invalid. As I mentioned there is no one convention in this regard. So long as you know what the boxplot is drawing, and it draws it in that way - then it is at least reliable. Valid will be a value judgement you have to make for yourself. My descriptions above, plus the docs should help you interpret your boxplot, but just in case: Central Line: Median Edges of Boxes: Q1 and Q3 Limits of Whiskers: The minimum and maximum values inside the inflated inter-quartile range (e.g. whis*(75%-25%) where whis defaults to 1.5) Little plus signs: 'fliers', data-points beyond the limits of the whiskers
Interpreting weird box plot with reversed whiskers
It is impossible to know without knowing more about what your software thinks is the right way to draw a box and whisker plot. It is even more difficult without a numeric scale to anchor the results
Interpreting weird box plot with reversed whiskers It is impossible to know without knowing more about what your software thinks is the right way to draw a box and whisker plot. It is even more difficult without a numeric scale to anchor the results on. Regardless, there are a number of different guidelines in this regard (in general). However, we can always resort to reading the documentation boxes: the main body of the boxplot showing the quartiles and the median’s confidence intervals if enabled. medians: horizonal lines at the median of each box. whiskers: the vertical lines extending to the most extreme, n-outlier data points. caps: the horizontal lines at the ends of the whiskers. fliers: points representing data that extend beyone (sic) the whiskers (outliers). Given the values of 16.5, 17.14, 13.5, and 16.75, the value of 13.5 is being treated as a 'flier'. The boxes are stretching from Q1 to Q3. The horizontal line is the median (aka Q2). The exact calculation of these values has a number of different approaches, but I'll just grab the handy values from R (quantile defaults) of 15.75 for Q1, 16.625 for Q2, and 16.8475 for Q3. Although the documentation cited above is unclear, it appears that the whiskers and caps extend to the most extreme, n-outlier data points excluding the 'fliers' (more on this later). Therefore, we can expect them to extend from 16.50 to 17.14. That is, they will extend to a value closer to the median than Q1 (at the bottom) and slightly beyond Q3 (at the top)... which is exactly what we see. However, given the circular definition of whiskers and fliers... you have to look further up in the docs to see that whiskers are "a function of the inner quartile range. They extend to the most extreme data point within ( whis*(75%-25%) ) data range" where 'whis' has a default of 1.5. Combining these sources of information, we can see that whiskers would plot points 1.5 times the interquartile range, but they stop at the most extreme data point inside that range. Data points beyond that range are dubbed fliers and plotted as such. So, in response to the second question it is 'valid'...it isn't my preferred way of seeing boxplots drawn, but that doesn't make it invalid. As I mentioned there is no one convention in this regard. So long as you know what the boxplot is drawing, and it draws it in that way - then it is at least reliable. Valid will be a value judgement you have to make for yourself. My descriptions above, plus the docs should help you interpret your boxplot, but just in case: Central Line: Median Edges of Boxes: Q1 and Q3 Limits of Whiskers: The minimum and maximum values inside the inflated inter-quartile range (e.g. whis*(75%-25%) where whis defaults to 1.5) Little plus signs: 'fliers', data-points beyond the limits of the whiskers
Interpreting weird box plot with reversed whiskers It is impossible to know without knowing more about what your software thinks is the right way to draw a box and whisker plot. It is even more difficult without a numeric scale to anchor the results
51,596
Interpreting weird box plot with reversed whiskers
In R I've done the boxplot and plotted the individual points so you can see what it's doing: > x<-c(16.5, 17.14, 13.5, 16.75) > boxplot(x,boxwex=.2) > points(x~rep(1,4),pch="x",col=2) As you see, it's not like the one you have. In particular, after I stretch your bitmap out to approximately match the range (assuming the range of the two matches!), the box you have is shorter, as well as the whisker being inside the box. You need to check how they've defined the boxplot (definitions vary - but I think they're not using Tukey's definition of how hinges or whiskers work). I've played about in various ways but I can't work out for sure how they're getting their hinges. They seem to be half the distance from the median that they should be. (Aside from a different definition, it may be that their code simply assumes there's always more than four points somewhere in it, and that maybe has caused a problem.)
Interpreting weird box plot with reversed whiskers
In R I've done the boxplot and plotted the individual points so you can see what it's doing: > x<-c(16.5, 17.14, 13.5, 16.75) > boxplot(x,boxwex=.2) > points(x~rep(1,4),pch="x",col=2) As you see, it
Interpreting weird box plot with reversed whiskers In R I've done the boxplot and plotted the individual points so you can see what it's doing: > x<-c(16.5, 17.14, 13.5, 16.75) > boxplot(x,boxwex=.2) > points(x~rep(1,4),pch="x",col=2) As you see, it's not like the one you have. In particular, after I stretch your bitmap out to approximately match the range (assuming the range of the two matches!), the box you have is shorter, as well as the whisker being inside the box. You need to check how they've defined the boxplot (definitions vary - but I think they're not using Tukey's definition of how hinges or whiskers work). I've played about in various ways but I can't work out for sure how they're getting their hinges. They seem to be half the distance from the median that they should be. (Aside from a different definition, it may be that their code simply assumes there's always more than four points somewhere in it, and that maybe has caused a problem.)
Interpreting weird box plot with reversed whiskers In R I've done the boxplot and plotted the individual points so you can see what it's doing: > x<-c(16.5, 17.14, 13.5, 16.75) > boxplot(x,boxwex=.2) > points(x~rep(1,4),pch="x",col=2) As you see, it
51,597
What is the "root MSE" in Stata?
Calculate the differences between the observed and predicted values of the dependent variable; square them; add them up, which gives you the "Error sum of squares," SS in the Stata output; divide that by the error's degrees of freedom, which gives you the "Mean error sum of squares," MS in the Stata output; take the square root of it, and that is the Root MSE. Done. If you look at the Stata output: . sysuse auto, clear (1978 Automobile Data) . reg mpg weight Source | SS df MS Number of obs = 74 -------------+------------------------------ F( 1, 72) = 134.62 Model | 1591.9902 1 1591.9902 Prob > F = 0.0000 Residual | 851.469256 72 11.8259619 R-squared = 0.6515 -------------+------------------------------ Adj R-squared = 0.6467 Total | 2443.45946 73 33.4720474 Root MSE = 3.4389 ------------------------------------------------------------------------------ mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- weight | -.0060087 .0005179 -11.60 0.000 -.0070411 -.0049763 _cons | 39.44028 1.614003 24.44 0.000 36.22283 42.65774 ------------------------------------------------------------------------------ Dividing the sum of squares of the residual (851.469) by its degrees of freedom (72) yields 11.826. That is the mean sum of squares. If you further take a square root, you'll get the Root MSE (3.4389 in the output). Basically, it's a measurement of accuracy. A more accurate model has less error, leading to a smaller error sum of squares, then a smaller MS, and then a smaller Root MSE. However, you can only apply this comparison to models with the same dependent variable, because MS and Root MSE are not standardized. Depending on the unit of measurement, Root MSE can vary greatly.
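The same arithmetic can be reproduced in R for any fitted regression; mtcars is used below purely for illustration (it is not the auto data from the Stata example):
fit <- lm(mpg ~ wt, data = mtcars)
rss <- sum(residuals(fit)^2)   # the error ("Residual") sum of squares
ms  <- rss / df.residual(fit)  # the residual mean square
sqrt(ms)                       # the Root MSE
summary(fit)$sigma             # R reports the same quantity as the residual standard error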
What is the "root MSE" in Stata?
Calculate the difference between the observed and predicted dependent variables Square them Add them up, this will give you the "Error sum of squares," SS in Stata output Divide it by the error's degr
What is the "root MSE" in Stata? Calculate the difference between the observed and predicted dependent variables Square them Add them up, this will give you the "Error sum of squares," SS in Stata output Divide it by the error's degrees of freedom, this will give you the "Mean error sum of squares," MS in Stata output Take a square root of it, and this is the Root MSE Done If you look at the Stata output: . sysuse auto, clear (1978 Automobile Data) . reg mpg weight Source | SS df MS Number of obs = 74 -------------+------------------------------ F( 1, 72) = 134.62 Model | 1591.9902 1 1591.9902 Prob > F = 0.0000 Residual | 851.469256 72 11.8259619 R-squared = 0.6515 -------------+------------------------------ Adj R-squared = 0.6467 Total | 2443.45946 73 33.4720474 Root MSE = 3.4389 ------------------------------------------------------------------------------ mpg | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- weight | -.0060087 .0005179 -11.60 0.000 -.0070411 -.0049763 _cons | 39.44028 1.614003 24.44 0.000 36.22283 42.65774 ------------------------------------------------------------------------------ Dividing the sum of squares of the residual (851.469) by its degrees of freedom (72) yields 11.826. That is the mean sum of squares. If you further take a square root, you'll get Root MSE (3.4289 in the output). Basically, it's a measurement of accuracy. The more accurate model would have less error, leading to a smaller error sum of squares, then MS, then Root MSE. However, you can only apply this comparison within the same dependent variables, because MS and Root MSE are not standardized. Depending on the unit of measurements, Root MSE can vary greatly.
What is the "root MSE" in Stata? Calculate the difference between the observed and predicted dependent variables Square them Add them up, this will give you the "Error sum of squares," SS in Stata output Divide it by the error's degr
51,598
What is the "root MSE" in Stata?
RMSE is the std dev of the model's error. Wikipedia can tell you this and the formula: http://en.wikipedia.org/wiki/Root-mean-square_deviation With it, you can compare model accuracy
What is the "root MSE" in Stata?
RMSE is the std dev of the model's error. Wikipedia can tell you this and the formula: http://en.wikipedia.org/wiki/Root-mean-square_deviation With it, you can compare model accuracy
What is the "root MSE" in Stata? RMSE is the std dev of the model's error. Wikipedia can tell you this and the formula: http://en.wikipedia.org/wiki/Root-mean-square_deviation With it, you can compare model accuracy
What is the "root MSE" in Stata? RMSE is the std dev of the model's error. Wikipedia can tell you this and the formula: http://en.wikipedia.org/wiki/Root-mean-square_deviation With it, you can compare model accuracy
51,599
Is it appropriate to examine an interaction effect that is almost statistically significant?
Worship not p = 0.05. Explore away. Additionally, in some contexts, relying on p = 0.05 for an interaction threshold is actually a bit flawed, as interaction tests are typically fairly low powered, and you can and should be using a somewhat higher threshold to accept statistical evidence of interaction. Sander Greenland or Miguel Hernan undoubtedly have a paper discussing the problem.
Is it appropriate to examine an interaction effect that is almost statistically significant?
Worship not p = 0.05. Explore away. Additionally, in some contexts, relying on p = 0.05 for an interaction threshold is actually a bit flawed, as interaction tests are typically fairly low powered, an
Is it appropriate to examine an interaction effect that is almost statistically significant? Worship not p = 0.05. Explore away. Additionally, in some contexts, relying on p = 0.05 for an interaction threshold is actually a bit flawed, as interaction tests are typically fairly low powered, and you can and should be using a somewhat higher threshold to accept statistical evidence of interaction. Sander Greenland or Miguel Hernan undoubtedly have a paper discussing the problem.
Is it appropriate to examine an interaction effect that is almost statistically significant? Worship not p = 0.05. Explore away. Additionally, in some contexts, relying on p = 0.05 for an interaction threshold is actually a bit flawed, as interaction tests are typically fairly low powered, an
51,600
Is it appropriate to examine an interaction effect that is almost statistically significant?
"Can't"? Who says you can't? There's nothing magic about p = .05. You can certainly explore the interaction. The question is how you deal with complaints from people who say you can't do this. In addition to works by Greenland or Hernan (see @EpiGrad's response) you can look for papers by Jacob Cohen or Paul Meehl or the book "The Cult of Statistical Significance" by Ziliak or the book "Statistics as Principled Argument" by Abelson
Is it appropriate to examine an interaction effect that is almost statistically significant?
"Can't"? Who says you can't? There's nothing magic about p = .05. You can certainly explore the interaction. The question is how you deal with complaints from people who say you can't do this. In addi
Is it appropriate to examine an interaction effect that is almost statistically significant? "Can't"? Who says you can't? There's nothing magic about p = .05. You can certainly explore the interaction. The question is how you deal with complaints from people who say you can't do this. In addition to works by Greenland or Hernan (see @EpiGrad's response) you can look for papers by Jacob Cohen or Paul Meehl or the book "The Cult of Statistical Significance" by Ziliak or the book "Statistics as Principled Argument" by Abelson
Is it appropriate to examine an interaction effect that is almost statistically significant? "Can't"? Who says you can't? There's nothing magic about p = .05. You can certainly explore the interaction. The question is how you deal with complaints from people who say you can't do this. In addi