idx | question | answer
---|---|---
52,901 | Faster option than glmnet for elastic net regularized regression [closed] | First, you have a good idea that you can get a feel for things with a smaller sample, to work out the kinks in your approach. And it's definitely true, as you've noticed, that things that take a while to run really break up your focus. It's annoying. But...
Second, you have to define "interactively". Do you mean instantaneously, 1 second, 10 seconds, or what?
Third, you need to account for your hardware, the software you're using, how much data you have, and what algorithm you're executing.
For example, doing an lm on 100K rows of data might be "interactive" for you. Obviously glmnet is doing a lot more than that.
In terms of other languages, Python or Java may be faster if you have a lot of code outside of glmnet that you're executing. As one of the comments says, glmnet is coded in C or Fortran and will be as fast as possible. If you're looping or running a lot of code around your glmnet call, it's easier to write something inefficient in R than in general-purpose languages like Python or Java.
It's possible to parallelize some algorithms. I'm not sure if glmnet is one of them. If you find a language that has a parallelized version -- say one that uses multiple cores, and you're running on a machine with multiple cores and enough RAM -- that will speed things up sub-linearly. Four cores won't be four times as fast, but it should be 2-3x faster.
So, the answer is "no, probably not". Your algorithm will usually trump anything else, and perhaps glmnet alone isn't the best option. | Faster option than glmnet for elastic net regularized regression [closed] | First, you have a good idea that you can get a feel for things with a smaller sample, to work out the kinks in your approach. And it's definitely true, as you've noticed, that things that take a while | Faster option than glmnet for elastic net regularized regression [closed]
52,902 | Faster option than glmnet for elastic net regularized regression [closed] | Following @Wayne's comments on parallelization, see Zhou et al. (2014), "A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing", arXiv:1409.1976. They say
The state-of-the-art single-core implementation for solving the
Elastic Net problem is the glmnet package developed by Friedman.
Mostly written in Fortran language, glmnet adopts the coordinate
gradient descent strategy and is highly optimized. As far as we know,
it is the fastest off-the-shelf solver for the Elastic Net. Due to its
inherent sequential nature, the coordinate descent algorithm is
extremely hard to parallelize.
but they come up with a different approach, showing that the elastic net is equivalent to a type of support vector machine (which can be parallelized):
[...] we take inspiration from recent work on machine learning
reductions [15, 19] and we reduce the Elastic Net to the squared
hinge-loss SVM (without a bias term). We show that this reduction is
exact and extremely efficient in practice. The resulting algorithm,
which we refer to as Support Vector Elastic Net (SVEN), naturally
takes advantage of the vast existing work on parallel SVMs,
immediately providing highly efficient Elastic Net and Lasso
implementations on GPUs, multi-core CPUs and distributed systems
Note that hyper-parameter selection by cross-validation can of course be parallelized (and is implemented for the shrinkage hyper-parameter in cv.glmnet—see executing glmnet in parallel in R).
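A minimal sketch of that parallel cross-validation in R (the doParallel backend and the core count are just an example, and the data are simulated placeholders):

    library(glmnet)
    library(doParallel)
    # Placeholder data -- replace with your own design matrix and response.
    set.seed(1)
    x <- matrix(rnorm(2e4 * 100), 2e4, 100)
    y <- rnorm(2e4)
    cl <- makeCluster(4)                   # use however many cores you actually have
    registerDoParallel(cl)
    cvfit <- cv.glmnet(x, y, alpha = 0.5, parallel = TRUE)   # CV folds run in parallel
    stopCluster(cl)
    cvfit$lambda.min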
The Matlab code to fit a Support Vector Elastic Net is available at https://bitbucket.org/mlcircus/sven/. You could give that a try!
52,903 | Post hoc power analysis for a non significant result? | Power analyses exploit an equation with four variables ($\alpha$, power, $N$, and the effect size). When you solve for power by stipulating the others, it is called "post hoc" power analysis. People often use post hoc power analysis to determine the power they had to detect the effect observed in their study after finding a non-significant result, and use the low power to justify why their result was non-significant and their theory might still be right. As @rvl points out in the comments, this involves "circular logic and [is] an empty exercise". However, that is not what you are doing here. Moreover, 'post hoc' power analysis can be a legitimate exercise: for example, I have had cases where a researcher knew they would only be able to get a certain number of patients with a rare disease and wanted to know the power they would be able to achieve to detect a given clinically significant effect. Although that isn't 'post hoc' in the sense of after the fact, it is called "post hoc" power analysis because it solves for power as a function of the other three.
I will go out on a limb and assume your $\alpha$ was $.05$. Clearly, $N = 9$. We can determine the effect size by calculating the pooled SD, and then the standardized mean difference that corresponds to a raw mean difference of $1500$ and the computed pooled SD.
\begin{align}
SD_\text{pooled} &= \sqrt{\frac{(n_1-1)s^2_1 + (n_2-1)s^2_2}{(n_1+n_2)-2}} \\[10pt]
2145.041 &= \sqrt{\frac{(5-1)1930^2 + (4-1)2402^2}{(5+4)-2}} \\[30pt]
ES &= \frac{\text{mean difference}}{SD_\text{pooled}} \\[10pt]
0.70 &= \frac{1500}{2145.041} \\
\end{align}
Having determined the effect size you want to use, you need some software to do the power analysis calculation for you. (It involves numerical approximations that you cannot do by hand.) A free and convenient application is G*Power.
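If you would rather stay in R than use G*Power, the pwr package should give essentially the same answer; this is a sketch reusing the numbers above (pwr.t2n.test handles the unequal group sizes):

    library(pwr)
    sd_pooled <- sqrt(((5 - 1) * 1930^2 + (4 - 1) * 2402^2) / ((5 + 4) - 2))   # 2145.041
    d <- 1500 / sd_pooled                                                      # about 0.70
    pwr.t2n.test(n1 = 5, n2 = 4, d = d, sig.level = 0.05)                      # power comes out near 0.15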
The power to detect a standardized mean difference of $0.70$ with $\alpha = .05$ using a two-tailed $t$-test when $N = 9$ is $\approx 15\%$. I would say your study is underpowered.
52,904 | Post hoc power analysis for a non significant result? | I can't believe people are still asking for post-hoc power analyses!
Please, do not include this in your paper. The post-hoc power analysis is not going to tell you anything, and people reading your paper will think that you do not know what you are doing!
Power analyses can only be performed before you collect your data. They are very useful for, e.g., determining the number of samples you need to collect in order to observe a particular effect size. After the study, a "post hoc" analysis is useless, since both your effect and sample sizes are constants. Some people argue that they can be used to determine the required sample size of a hypothetical future study, but the utility of this is debated (since it only makes sense if the study is actually performed). But the reviewer did not ask for this; he/she asked you to show that your study was underpowered, something that you cannot do using a post-hoc power analysis.
A quick web search gave me the following posts/papers demoting post-hoc power. Please read them and refer your reviewer to them. Clearly, he/she does not understand what he/she is asking you to do.
References:
http://daniellakens.blogspot.se/2014/12/observed-power-and-what-to-do-if-your.html
https://www.researchgate.net/post/Is_it_possible_to_calculate_the_power_of_study_retrospectively
https://dirnagl.com/2014/07/14/why-post-hoc-power-calculation-does-not-help/
http://www.dokeefe.net/pub/okeefe07cmm-posthoc.pdf
http://www.ncbi.nlm.nih.gov/pubmed/11310512
Hoenig & Heisey, "The Abuse of Power - The Pervasive Fallacy of Power Calculations for Data Analysis", The American Statistician, 2012, available at the author's website at http://www.vims.edu/people/hoenig_jm/pubs/hoenig2.pdf. Note that this is a peer-reviewed publication, which can be used to counter a reviewer's insistence that such an analysis should be performed. Providing this reference in the response to a review may work - or not.
Sorry for yelling ;-P
52,905 | How can I determine Gamma distribution parameters from data | There are a number of ways you can estimate the parameters of a gamma distribution. The most popular is maximum likelihood estimation. The resulting estimators are known to have optimal properties for moderately large samples, such as asymptotic normality and minimum variance. The problem with the MLE for a gamma distribution is that a closed-form expression exists only for the scale parameter, and some kind of iterative algorithm has to be employed for the shape parameter.
Alternatively, you may compute the parameters using the method of moments. The estimates in that case are also asymptotically normal but are no longer minimum variance. On the plus side, closed-form expressions exist for the gamma distribution, so estimation is quick and easy.
If you have a large sample, those two will not differ by much, so if I were you I would probably use the MoM estimators, unless of course you have an algorithm at hand that can deal with derivatives of the gamma function.
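A minimal R sketch of both estimators; the sample below is simulated just so the code runs, so use your own data instead:

    # Simulated placeholder sample.
    set.seed(1)
    x <- rgamma(500, shape = 2, rate = 0.5)
    # Method of moments: closed-form estimates from the sample mean and variance.
    m <- mean(x); v <- var(x)
    c(shape = m^2 / v, rate = m / v)
    # Maximum likelihood: iterative, handled here by MASS::fitdistr.
    library(MASS)
    fitdistr(x, densfun = "gamma")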
52,906 | Student looking to practice | I cannot stress this enough, look for real data.
If you are using a particular statistical method provided in an R stats package for example, often the package will contain a sample dataset which the authors will demonstrate their methods on. Similarly, if you look in R's datasets, a whole bunch of classic datasets are included.
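For example, straight from an R prompt (built-in datasets, nothing to download):

    data(package = "datasets")   # list the classic datasets that ship with R
    head(airquality)             # peek at one of them: New York air-quality measurements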
But if you are looking to practice use of statistical methods, this is only the place to start. Datasets included in an R package are not a great representation of real data analysis for two reasons: first, the data is typically organized and clean. Second, the data is typically cherry-picked to make that particular statistical method look great. While it makes sense for an author to do this (why would you demonstrate your method on data where it is unnecessary?), when using statistics to answer real questions, it's not always clear or obvious which method to use.
As such, while these datasets are a good basic tutorial (just as anyone should start programming with "hello world"), to get more real experience I would suggest going to public repositories. You might be surprised how much data is freely available. Interested in climate modeling? Go to NOAA and pick what you'd like. Interested in income modeling? European Social Survey. I think it's really cool to actually be making real insights while learning a new method.
I do caution that getting and cleaning the data is a considerable amount of work. But that's not an inaccurate representation of what real data analysis is like.
52,907 | Student looking to practice | If you want to "get your hands dirty" with statistics, there is an infinite number of ways to mine data, set up hypotheses, experiments, analyze data sets using various styles of analysis (i.e., Bayesian vs. frequentist), etc. However, your question indicates that you're having trouble figuring out where to begin.
Find a topic that interests you, be it sports, politics, science, commerce/economics, behavioral psychology, biology, and so forth. If there are phenomena in said topic that can be quantified, then more likely than not, someone has already dug up massive amounts of data. I highly recommend Nate Silver's blog FiveThirtyEight (in which I hold no vested interest or position), where there's a wide variety of topics with small studies and analyses featured. The topics are well thought out, and the statistics used isn't terribly difficult to understand or grasp for beginners (no offense to Nate Silver). At the very least, you could build upon whatever articles have been featured, and use the many data references in the articles to do your own analyses or run your own specific tests.
After you have figured out what topic you'd like, specify a particular question you have in said topic (for example, I just thought of "in sports, do high-margin wins correlate with championship titles, or with fatigue?") and then find your data. The resources on the internet are nearly endless, but you must remember that not all data is quality data (i.e., be wary of where you find your data, and whether there are any ethical/quality concerns).
Some useful links include DataHub.IO, where you can find (and share!) many free datasets, and Data.gov, a source of all open data that the US GOV shares. If your programming skills are pretty good, I imagine you can also fetch data from the popular social media webpages, i.e. Twitter, Instagram, Facebook, etc.
Don't forget to have some sort of go-to statistical evaluation software. Most (myself included) would recommend the open-software standard, R, but you'd be surprised how far you could go with something like Microsoft Excel, if your data size isn't terribly large and complicated.
Good luck!
If you want to "get your hands dirty" with statistics, there is an infinite number of ways to mine data, set up hypotheses, experiments, analyze data sets using various styles of analysis (i.e., Bayesian vs. frequentist), etc. However, your question indicates that you're having trouble figuring out where to begin.
Find a topic that interests you, be it sports, politics, science, commerce/economics, behavioral psychology, biology, and so forth. If there are phenomena in said topic that can be quantified, then more likely than not, someone has already dug up massive amounts of data. I highly recommend Nate Silver's blog FiveThirtyEight, (of which I hold no vested interest/position in) where there's a wide variety of topics with small studies and analyses featured. The topics are well-thought out, and the statistics used isn't terribly difficult to understand or grasp for beginners (no offense to Nate Silver). At the very least, you could build upon whatever articles that have been featured, and use the many data references in the articles to do your own analyses or run your own specific tests.
After you have figured out what topic you'd like, specify a particular question you have in said topic- for example, I just thought of "in sports, do high-margin wins correlate to championship titles, or fatigue?"- and then find your data. The resources on the internet are nearly endless, but you must remember that not all data is quality data (i.e., be wary where you find your data, and whether there are any ethical/quality concerns).
Some useful links include DataHub.IO, where you can find (and share!) many free datasets, and Data.gov, a source of all open data that the US GOV shares. If your programming skills are pretty good, I imagine you can also fetch data from the popular social media webpages, i.e. Twitter, Instagram, Facebook, etc.
Don't forget to have some sort of go-to statistical evaluation software. Most (myself included) would recommend the open-software standard, R, but you'd be surprised how far you could go with something like Microsoft Excel, if your data size isn't terribly large and complicated.
Good luck! | Student looking to practice
If you want to "get your hands dirty" with statistics, there is an infinite number of ways to mine data, set up hypotheses, experiments, analyze data sets using various styles of analysis (i.e., Bayes |
52,908 | Student looking to practice | The best method, in my opinion, to practice the skills you have is to get some data (this is the least of your problems: you can find datasets on the internet or create your own) and try to figure out whether there are relations or differences between some groups in the data (assuming you want to practice inferential statistics).
For example:
I'm learning regression or machine learning techniques: can I predict the outcome of an event currently evolving (the Oscars? the end of a tournament?)?
For inferential statistics: can I analyze some data and find differences between some groups? Associations between variables?
And so on. The possibilities are endless.
52,909 | Student looking to practice | Several of the free (or at least inexpensive) certifications (Coursera, etc.) have small projects that can get you a lot of practice. You can do them quick and dirty, or you can do them in way more detail than is asked for. The advantage over doing it alone is there is a forum where you can discuss your results and often community experts who can offer advice as well. And of course they are well tested with beginners so there are no insurmountable obstacles which can discourage learning.
The Practical Machine Learning course from Johns Hopkins comes to mind, and the Stanford ML course from Andrew Ng is another possibility; there are many more.
52,910 | Portmanteau test results R | Let me use the term "autocorrelation" as a synonym for "serial correlation".
1) How does one interpret the results of the below demonstration?
Most of the interpretation is already in the comments to the code. First, a VAR(1) model is estimated. It is tested for autocorrelation in errors using a portmanteau test. The null hypothesis of no autocorrelation is rejected since the $p$-value of 0.01996 is lower than the significance level of 0.05. Since autocorrelation is an undesirable feature of the model, the author moves on to look for another model that does not have autocorrelation. He/She estimates a VAR(3) model, tests for autocorrelation, and finds that the null of no autocorrelation cannot be rejected because the $p$-value of 0.1394 is greater than the significance level of 0.05. Since there is not enough evidence of presence of autocorrelation, the author is satisfied and sticks to the VAR(3) model.
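A sketch of that workflow in R, assuming the vars package; the Canada data set shipped with the package is used only so the code runs, and its p-values will of course differ from the 0.01996 and 0.1394 quoted above:

    library(vars)
    data(Canada)                                   # example multivariate series from the package
    var1 <- VAR(Canada, p = 1)                     # estimate a VAR(1)
    serial.test(var1, lags.pt = 16, type = "PT.asymptotic")   # portmanteau test of H0: no residual autocorrelation
    var3 <- VAR(Canada, p = 3)                     # try a higher-order model
    serial.test(var3, lags.pt = 16, type = "PT.asymptotic")   # keep the model for which the test does not reject H0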
2) How does the author come to the conclusion that the VAR3 model is more suitable, and based on what criteria?
Lack of autocorrelation makes VAR(3) more suitable than VAR(1); see also my answer to question (1).
3) Assuming (I don't know if it is the right assumption) that serial autocorrelation is a desired trait in terms of VAR(p) predictions, why does the author move on to var3 after rejecting the null hypothesis for var?
Autocorrelation is not a desired trait. It biases the estimators and makes them less efficient. Meanwhile, the estimators do have their nice properties (unbiasedness, efficiency) when there is no autocorrelation (assuming the other standard assumptions are satisfied, too).
52,911 | Are complete statistics always sufficient? | For completeness you need to consider only the distribution of the statistic $T$ (more correctly, the family of distributions indexed by $\theta$), whereas for sufficiency you need to consider the complete likelihood for $\theta$ as a function of the sample $X$; so it's trivial to come up with examples where $T$ isn't sufficient but is still complete:
1. The constant statistic $T(X)=7$.
2. Throw some of your data away & calculate your (previously complete & sufficient) test statistic on what's left. Suppose you've four observations $X_1,\ldots,X_4$ from a Gaussian with mean $\theta$ (& known standard deviation): $T(X)=\frac{X_1+X_2}{2}$ isn't sufficient for $\theta$ but is complete.
3. Twice the sample mean from a random variable uniformly distributed between $0$ & $\theta$, $T(X)=\frac{2\sum_{i=1}^n X_i}{n}$ (the method-of-moments estimator), is complete, but we know the sufficient statistic for $\theta$ is the sample maximum $X_{(n)}$.
See Complete sufficient statistic for a more interesting example—of a sufficient statistic that isn't complete.
52,912 | Independence of $\min(X,Y)$ and $\max(X,Y)$ for independent $X$, $Y$? | If $X$ and $Y$ are independent continuous random variables, then $\max(X,Y)$ and
$\min(X,Y)$ are independent random variables if and only if one of the
following two conditions holds:
$P(X > Y) = 1$
$P(X < Y) = 1$
Note that the above conditions mean that $P(X=Y) = 0$ but this does
not mean that $(X=Y)$ is the same as the impossible event, that is,
there is no outcome $\omega$ in the sample space for which
$X(\omega) = Y(\omega)$. Those thoroughly confused by this notion
should recall that they might have been told that
for a continuous random variable $V$,
$P(V = a) = 0$ for all real numbers $a$ even though it is manifestly
true that $V$ can take on value $a$ for some particular $a$,
and if they have
swallowed that whopper, then accepting that $P(X=Y)=0$ does not
mean that the event $(X=Y)$ will never occur is just a small
additional stretch of their credulity.
When $X$ and $Y$ are independent discrete random variables, then
the above condition needs to be relaxed slightly, and it is possible
to have $P(X=Y) > 0$. For example, if $(X,Y)$ takes on values
$(1,0), (2,0), (1,1), (2,1)$ with equal probability $\frac 14$, then
$(\min(X,Y), \max(X,Y))$ takes on values $(0,1), (0,2), (1,1), (1,2)$
with equal probability $\frac 14$ and thus $\min(X,Y)$ and
$\max(X,Y))$ are
independent. A little thought will show that $(\min(X,Y), \max(X,Y))$
is the same as $(Y,X)$ in this case. A little further thought will
show that if $P(X=Y)>0$, then it must be that there is a unique
$a$ such that $P(X=a, Y= a) >0$ and that for all other real numbers
$b$, $P(X=b, Y= b) =0$. For independent discrete random variables
$X$ and $Y$, the probability mass function has nonzero values at
all points on a rectangular grid, and this grid must be strictly
below or strictly above the line $x=y$ or must have only one point
(the upper left corner or the lower right corner) on the line $x=y$;
the point $(1,1)$ in the example above.
An interesting follow-up question is:
When $X$ and $Y$ are dependent random variables, is it possible for
$\max(X,Y)$ and $\min(X,Y)$ to be independent random variables?
to which the answer is Yes, it is possible. Consider the case when
$X$ and $Y$ are jointly continuous random variables uniformly
distributed on the set
$$\left\{(x,y)\colon \frac 12 \leq x \leq 1,
0 \leq y \leq x-\frac 12\right\}
\bigcup
\left\{(x,y)\colon 0 \leq x \leq \frac 12,
\frac 12 \leq y < x + \frac 12\right\}$$
The joint density of the minimum and maximum can be worked out
as described here
where it is shown that if $Z = \min(X,Y)$ and $W = \max(X,Y)$,
then
$$f_{Z,W}(z,w) = \begin{cases}
f_{X,Y}(z,w) + f_{X,Y}(w,z), & \text{if}~w > z,\\
\\
0, & \text{if}~w < z.
\end{cases}
$$
Applying this, it can be shown that the joint density of
$Z$ and $W$ is uniform on the interior of the square with vertices
$(0,\frac 12), (\frac 12, \frac 12), (\frac 12, 1),
(0,1)$, and so $Z \sim U[0,\frac 12]$ and $W \sim U[\frac 12,1]$
are independent random variables.
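A quick R simulation of that follow-up example (rejection sampling from the uniform distribution on the two triangles), if you want to check the claim numerically:

    set.seed(1)
    u <- matrix(runif(4e5), ncol = 2)
    x <- u[, 1]; y <- u[, 2]
    # keep only points inside the union of the two triangles defined above
    keep <- (x >= 0.5 & y <= x - 0.5) | (x <= 0.5 & y >= 0.5 & y < x + 0.5)
    x <- x[keep]; y <- y[keep]
    z <- pmin(x, y); w <- pmax(x, y)
    range(z); range(w)                        # roughly [0, 1/2] and [1/2, 1]
    chisq.test(table(cut(z, 4), cut(w, 4)))   # independence check on the binned (min, max)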
52,913 | Why is Logistic Regression mentioned by many sources as useful in predicting stock prices? | Instead of predicting how much the stock gains or loses, the models are predicting the sign of the gain or loss, i.e. a binary outcome.
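For instance, a hedged R sketch on simulated returns (the data and the single lagged predictor are made up purely for illustration):

    # Simulated daily returns -- not real market data.
    set.seed(1)
    ret  <- rnorm(500, sd = 0.01)
    up   <- as.integer(ret[-1] > 0)            # 1 if the next day's return is positive
    lag1 <- ret[-length(ret)]                  # today's return as the predictor
    fit  <- glm(up ~ lag1, family = binomial)  # logistic regression on the direction
    summary(fit)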
52,914 | Why is Logistic Regression mentioned by many sources as useful in predicting stock prices? | The poor phrasing of the abstract* suggests a possible misuse of the term; I've often seen Linear Regression of the logarithm used to predict asset price movements (the idea being that asset prices tend to change by percentages of their current value, rather than by consistent nominal values).
*Full disclosure: I only read the abstract.
52,915 | Reference for xgboost | Source: Tianqi Chen's Quora answer
Both xgboost and gbm follows the principle of gradient boosting.
There are however, the difference in modeling details. Specifically,
xgboost used a more regularized model formalization to control
over-fitting, which gives it better performance.
We have updated a comprehensive tutorial on introduction to the model,
which you might want to take a look at. Introduction to Boosted Trees
The name xgboost, though, actually refers to the engineering goal to
push the limit of computations resources for boosted tree algorithms.
Which is the reason why many people use xgboost. For model, it might
be more suitable to be called as regularized gradient boosting.
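To see where that extra regularization shows up in practice, here is a hedged sketch with the xgboost R package; the data are simulated, and parameter names such as lambda, alpha and gamma follow the package documentation but may shift between versions:

    library(xgboost)
    # Placeholder data -- the point is only where the regularization knobs live.
    set.seed(1)
    x <- matrix(rnorm(1000 * 20), 1000, 20)
    y <- rnorm(1000)
    fit <- xgboost(data = x, label = y, nrounds = 50, verbose = 0,
                   params = list(lambda = 1,    # L2 penalty on leaf weights
                                 alpha  = 0,    # L1 penalty on leaf weights
                                 gamma  = 0))   # minimum loss reduction required to split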
52,916 | Reference for xgboost | There is now an arXiv article XGBoost: A Scalable Tree Boosting System that describes the algorithm. At the time of the original question and answer this had not yet been posted.
52,917 | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | There is a reason why the 'two-tailed chi-squared' is seldom used: if you do a $\chi^2$ test for contingency tables, then the test statistic is (without the continuity correction):
$X^2 = \sum_{i,j}\frac{(o_{ij}-e_{ij})^2}{e_{ij}}$
where $o_{ij}$ are the observed counts in cell $i,j$ and $e_{ij}$ are the expected cell count in cell $i,j$. Under relatively weak assumptions it can be shown that $X^2$ approximately follows a $\chi^2$ distribution with $1$ degree of freedom (this is for a 2x2 table as in your case).
If you assume independence between the row and column variable (which is $H_0$), then the $e_{ij}$ are estimated from the marginal probabilities.
This is just for a short intro to $\chi^2$ for contingency tables. The most important thing is that the numerator of each term in $X^2$ is the squared difference between the 'observed counts' and the 'expected counts'. So whether $o_{ij} < e_{ij}$ or $o_{ij} > e_{ij}$ makes no difference in the result for $X^2$.
So the $\chi^2$ test for contingency table tests whether the observations are either smaller or larger than expected ! So it is a two-sided test even if the critical region is defined in one (the right) tail of the $\chi^2$ distribution.
So the point is that the $\chi^2$-test is a two-sided test (it can reject values $o_{ij}$ that are either too small or too large) but it uses a one-sided critical region (the right tail of the $\chi^2$ distribution).
So how do you have to interpret your result: if $H_0: \text{ 'row variable and column variable are independent' }$ then the probability of observing a value at least as extreme as the computed $X^2$ is 0.059. This is called the p-value of the test.
(Note that, by the above 'independent' includes 'either too high or too low'. )
In order to 'decide' something, you first have to choose a significance level. This is a 'risk that you accept for making type I errors'. A significance level of $5\%$ is commonly used.
You will now reject the null hypothesis when the p-value (0.059) is smaller than the chosen significance level (0.05). This is not so for your table, so you will not reject $H_0$ at a significance level of $5\%$.
As far as your question at the bottom is concerned, you should say (but in your example it is not the case): the p-value is lower than or equal to the chosen significance level of 0.05, so $H_0$ is rejected and we conclude that the row and column variables are dependent. (But, as said, in your example the p-value is higher than the 0.05 significance level.)
Maybe you should also take a look at Misunderstanding a P-value?.
EDIT: to react to questions/remarks in the comments I added this:
@StijnDeVuyst:
An 'extreme' case may make this clear. Assume that we know that the population is normal with an unknown $\mu$ but $\sigma=1$, i.e. $N(\mu,1)$.
We want to test the hypothesis $H_0: \mu = 0$ versus $\mu \ne 0$. If we observe a value $x=2$ from a sample then the p-value of this observation is $0.02275$ (1-pnorm(q=2)) multiplied by two because our critical region is two-tailed.
We would also define the critical region in another way: if $H_0$ is true then the population has a standard normal distribution, so the test statistic $X$ is $X \sim N(0;1)$. Then by definition of a $\chi^2$ with one degree of freedom, we also know that $X^2 \sim \chi^2_{(1)}$.
We had observed $x=2$ thus $x^2=4$ and if we compute the p-value of 4 for a $\chi^2_{(1)}$ we find 1-pchisq(q=4,df=1)=0.0455.
Note that this is exactly equal to 2*(1-pnorm(q=2)).
So we do the same test $H_0: \mu = 0$ versus $\mu \ne 0$ (different or not, so two-sided) with two (equivalent) critical regions one that is one-tailed (the one based on the $\chi^2$ distribution) and another one that is two-tailed (the one based on the normal distribution).
@StepMuc:
I will not try to be precise here, the goal is to give you the 'feeling' of what it is about, because you asked for it in the comment.
For the 'idea behind' hypothesis testing I refer to What follows if we fail to reject the null hypothesis?.
So, in hypothesis testing, if you want to 'find evidence' for something, then you assume the opposite. You want to show that the group you belong to (treatment/control) 'influences' your choice of 'A' or 'B', so you want to show that 'group' (treatment/control) and choice (A/B) are 'dependent'. If you want to show that, then you assume the opposite, so
$H_0: \text{ group and choice are independent }$
and the alternative is then
$H_1: \text{ group and choice are dependent }$.
The next thing you, as a scientist, have to decide is the 'significance level $\alpha$'. This is the probability that the test rejects $H_0$ while in reality it is true. When we reject $H_0$ we conclude (see What follows if we fail to reject the null hypothesis?) that we found statistical evidence for $H_1$, and we have a probability of $\alpha$ of having found 'false evidence'. It is up to you (or your risk appetite) how high you choose this $\alpha$. Common values are 0.001, 0.01, 0.05, 0.1. The higher $\alpha$, the higher the risk that you discover 'false evidence'; false evidence for $H_1$ is called a type I error.
The $\chi^2$ test for contingence tables tests the $H_0: \text{ group and choice are independent }$ versus $H_1: \text{ group and choice are dependent }$.
If $H_0$ is true, then it can be shown that the $X^2$ defined supra comes from a $\chi^2$ distribution. From the table that you have, you can compute $X^2$; this gives you a number.
You have 54 people, 31 in the treatment group, 23 in the control group. Assume now that you let these people randomly choose A/B and you make e.g. 1000000 tables with these random outcomes and for each of these tables you compute $X^2$ as above; then the probability that the computed $X^2$ is larger than or equal to the one for your table is 0.059 (which is the p-value in your question).
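In R this can be done in one call; the counts below are made up (your observed 2x2 table is not shown in the question), only the group sizes 31 and 23 are kept. chisq.test's Monte Carlo option is one concrete way to carry out the simulation idea just described (it conditions on both margins):

    # Hypothetical counts: rows = treatment (n=31) / control (n=23), columns = choice A / B.
    tab <- matrix(c(20, 11,
                     9, 14),
                  nrow = 2, byrow = TRUE)
    chisq.test(tab, correct = FALSE)                   # asymptotic chi-squared p-value
    chisq.test(tab, simulate.p.value = TRUE, B = 1e6)  # p-value from 10^6 simulated tables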
So what we have now is that we assumed that $H_0$ is true, and if that is the case then we find that the probability of obtaining this value or higher is 0.059. If this is a 'low enough' probability, that means that we found that 'if $H_0$ is true, then we find a value that is very improbable', so $H_0$ must be false and $H_1$ is 'statistically proven'.
We still have to define what is meant by 'low enough', and that is defined as 'lower than or equal to the chosen significance level $\alpha$'.
So if you choose a significance level $\alpha=0.05$ then, as the probability of observing this value of $X^2$ or higher was 0.059, you will not reject $H_0$ at the $5\%$ significance level, so you find no evidence for $H_1$ at the $5\%$ significance level. So you retain $H_0$ that group and choice are independent.
If you are ready to accept more type I errors and set $\alpha=0.1$ then, as your p-value is lower than 0.1, you will reject $H_0$ and conclude that 'group and choice' are dependent at the $10\%$ significance level. | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | There is a reason why the 'two-tailed chi-squared' is seldomly used: if you do a $\chi^2$ test for contingency tables, then the test statistic is (without the continuity correction):
$X^2 = \sum_{i, | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
There is a reason why the 'two-tailed chi-squared' is seldom used: if you do a $\chi^2$ test for contingency tables, then the test statistic is (without the continuity correction):
$X^2 = \sum_{i,j}\frac{(o_{ij}-e_{ij})^2}{e_{ij}}$
where $o_{ij}$ are the observed counts in cell $i,j$ and $e_{ij}$ are the expected counts in cell $i,j$. Under relatively weak assumptions it can be shown that $X^2$ approximately follows a $\chi^2$ distribution with $1$ degree of freedom (this is for a 2x2 table as in your case).
If you assume independence between the row and column variables (which is $H_0$), then the $e_{ij}$ are estimated from the marginal probabilities.
This is just a short intro to $\chi^2$ for contingency tables. The most important thing is that the numerator of each term in $X^2$ is the squared difference between the 'observed counts' and the 'expected counts'. So whether $o_{ij} < e_{ij}$ or $o_{ij} > e_{ij}$ makes no difference in the result for $X^2$.
So the $\chi^2$ test for contingency tables tests whether the observations are either smaller or larger than expected! So it is a two-sided test even if the critical region is defined in one (the right) tail of the $\chi^2$ distribution.
So the point is that the $\chi^2$-test is a two-sided test (it can reject values $o_{ij}$ that are either too small or too large) but it uses a one-sided critical region (the right tail of the $\chi^2$).
So how do you have to interpret your result: if $H_0: \text{ 'row variable and column variable are independent' }$ holds, then the probability of observing a value at least as extreme as the computed $X^2$ is 0.059. This is called the p-value of the test.
(Note that, by the above, 'independent' includes 'either too high or too low'.)
In order to 'decide' something, you have to first choose a significance level. This is a 'risk that you accept for making type I errors'. The significance level of $5\%$ is commonly used.
You will now reject the null hypothesis when the p-value (0.059) is smaller than the chosen significance level (0.05). This is not so for your table, so you will not reject $H_0$ at a significance level of $5\%$.
As far as your question at the bottom is concerned, you should say (but in your example it is not the case): the p-value is lower than or equal to the chosen significance level of 0.05, so $H_0$ is rejected and we conclude that the row and column variables are dependent. (But, as said, in your example the p-value is higher than the 0.05 significance level.)
Maybe you should also take a look at Misunderstanding a P-value?.
EDIT: to react to questions/remarks in the comments I added this:
@StijnDeVuyst:
An 'extreme' case may make this clear. Assume that we know that the population is normal with an unknown $\mu$ but $\sigma=1$, i.e. $N(\mu,1)$.
We want to test the hypothesis $H_0: \mu = 0$ versus $\mu \ne 0$. If we observe a value $x=2$ from a sample, then the p-value of this observation is $0.02275$ (1-pnorm(q=2)) multiplied by two, because our critical region is two-tailed.
We could also define the critical region in another way: if $H_0$ is true then the population has a standard normal distribution, so the test statistic $X$ is $X \sim N(0,1)$. Then, by definition of a $\chi^2$ with one degree of freedom, we also know that $X^2 \sim \chi^2_{(1)}$.
We had observed $x=2$, thus $x^2=4$, and if we compute the p-value of 4 for a $\chi^2_{(1)}$ we find 1-pchisq(q=4,df=1)=0.0455.
Note that this is exactly equal to 2*(1-pnorm(q=2)).
So we do the same test $H_0: \mu = 0$ versus $\mu \ne 0$ (different or not, so two-sided) with two (equivalent) critical regions: one that is one-tailed (the one based on the $\chi^2$ distribution) and another one that is two-tailed (the one based on the normal distribution).
@StepMuc:
I will not try to be precise here, the goal is to give you the 'feeling' of what it is about, because you asked for it in the comment.
For the 'idea behind' hypothesis testing I refer to What follows if we fail to reject the null hypothesis?.
So, in hypothesis testing, if you want to 'find evidence' for something, then you assume the opposite. You want to show that the group you belong to (treatment/control) 'influences' your choice for 'A' or 'B', so you want to show that 'group' (treatment/control) and choice (A/B) are 'dependent'. If you want to show that, then you assume the opposite, so
$H_0: \text{ group and choice are independent }$
and the alternative is then
$H_1: \text{ group and choice are dependent }$.
The next thing you, as a scientist, have to decide is the 'significance level $\alpha$'. This is the probability that the test rejects $H_0$ while in reality it is true. When we reject $H_0$ we conclude (see What follows if we fail to reject the null hypothesis?) that we found statistical evidence for $H_1$, and we have a probability of $\alpha$ that we found 'false evidence'. It is up to you (or your risk appetite) how high you choose this $\alpha$. Common values are 0.001, 0.01, 0.05, 0.1. The higher $\alpha$, the higher the risk that you discover 'false evidence'; false evidence for $H_1$ is called a type I error.
The $\chi^2$ test for contingency tables tests $H_0: \text{ group and choice are independent }$ versus $H_1: \text{ group and choice are dependent }$.
If $H_0$ is true, then it can be shown that the $X^2$ defined above comes from a $\chi^2$ distribution. From the table that you have, you can compute $X^2$; this gives you a number.
You have 54 people, 31 in the treatment group and 23 in the control group. Assume now that you let these people randomly choose A/B and you make e.g. 1,000,000 tables with these random outcomes; for each of these tables you compute $X^2$ as above. Then the probability that the computed $X^2$ is larger than or equal to the one for your table is 0.059 (which is the p-value in your question).
So what we have now is that we assumed that $H_0$ is true, and if that is the case then we find that the probability of obtaining this value or a higher one is 0.059. If this is a 'low enough' probability, that means that 'if $H_0$ is true, then we find a value that is very improbable', so $H_0$ must be false and $H_1$ is 'statistically proven'.
We still have to define what is meant by 'low enough', and that is defined as 'lower than or equal to the chosen significance level $\alpha$'.
So if you choose a significance level $\alpha=0.05$ then, as the probability of having this value for $X^2$ or higher was 0.059, you will not reject $H_0$ at the $5\%$ significance level, so you find no evidence for $H_1$ at the $5\%$ significance level. So you do not reject $H_0$ that group and choice are independent.
If you are ready to accept more type I errors and set $\alpha=0.1$ then, as your p-value is lower than 0.1, you will reject $H_0$ and conclude that 'group and choice' are dependent at the $10\%$ significance level. | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
There is a reason why the 'two-tailed chi-squared' is seldomly used: if you do a $\chi^2$ test for contingency tables, then the test statistic is (without the continuity correction):
$X^2 = \sum_{i, |
52,918 | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | You are correct that the one-tailed p-value is for any hypothesis that is formulated as one-sided and that two-tailed p-values are for two-sided hypotheses, in which either group (in this case) can prove to be "better" than the other.
Since you're talking about treatments, I assume that your field is in medicine, psychology or something similar. It is very rare for hypotheses in these fields to be one-sided (in my experience, at least), and if you want to test the effect of a treatment, you must have very convincing arguments for why the treatment effect under no circumstances could be negative.
So in practice, one-tailed p-values are rarely used (I can't recall ever having seen them used in a paper) and I think you should use the two-tailed test as well. | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | You are correct that the one-tailed p-value is for any hypothesis that is formulated as one-sided and that two-tailed p-values are for two-sided hypotheses, in which either group (in this case) can pr | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
You are correct that the one-tailed p-value is for any hypothesis that is formulated as one-sided and that two-tailed p-values are for two-sided hypotheses, in which either group (in this case) can prove to be "better" than the other.
Since you're talking about treatments, I assume that your field is in medicine, psychology or something similar. It is very rare for hypotheses in these fields to be one-sided (in my experience, at least), and if you want to test the effect of a treatment, you must have very convincing arguments for why the treatment effect under no circumstances could be negative.
So in practice, one-tailed p-values are rarely used (I can't recall ever having seen them used in a paper) and I think you should use the two-tailed test as well. | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
You are correct that the one-tailed p-value is for any hypothesis that is formulated as one-sided and that two-tailed p-values are for two-sided hypotheses, in which either group (in this case) can pr |
52,919 | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | As far as I can see, the two-sidedness is not referring to the chi-square test at all, but rather to the corresponding two-sided test of two proportions. The chi-square part is indeed a one-tailed test, which it should be. The same kind of vocabulary is used in, for example, openepi.com. Some more details were covered in another comment, see https://stats.stackexchange.com/a/157005/18276.
If I am correct, then the entire discussion of when a two-sided chi-square test should be used is more or less off topic. Or, at least, it is an answer to a question that was not raised, rather than to the one that was.
It's really odd that SPSS does not have a clear formulation of tests of proportions since that is such a common task. Even if a chisquare test in a 2x2 table is equivalent to a test of proportions, it would be easier to understand the output if it had been based on proportions rather than a substitute. It is also strange that they haven't included tests which are less sensitive to small samples (the Agresti-Coull or mid-P tests for example). | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate] | As far as I can see, the twosidedness is not referring to the chisquare test at all, but rather to the corresponding two-sided test of two proportions. The chisquare part is indeed a onetailed test, w | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
As far as I can see, the two-sidedness is not referring to the chi-square test at all, but rather to the corresponding two-sided test of two proportions. The chi-square part is indeed a one-tailed test, which it should be. The same kind of vocabulary is used in, for example, openepi.com. Some more details were covered in another comment, see https://stats.stackexchange.com/a/157005/18276.
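A small R illustration of this point (with made-up counts): prop.test() is the two-sided test of two proportions, and it reports exactly the same X-squared and p-value as chisq.test() on the corresponding 2x2 table.
x <- c(20, 8)   # hypothetical successes per group
n <- c(31, 23)  # hypothetical group sizes
prop.test(x, n, correct = FALSE)                          # two-sided test of two proportions
chisq.test(rbind(c(20, 11), c(8, 15)), correct = FALSE)   # same statistic and p-value on the 2x2 table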
If I am correct, then the entire discussion of when a two-sided chi-square test should be used is more or less off topic. Or, at least, it is an answer to a question that was not raised, rather than to the one that was.
It's really odd that SPSS does not have a clear formulation of tests of proportions since that is such a common task. Even if a chisquare test in a 2x2 table is equivalent to a test of proportions, it would be easier to understand the output if it had been based on proportions rather than a substitute. It is also strange that they haven't included tests which are less sensitive to small samples (the Agresti-Coull or mid-P tests for example). | Chi-Square-Test: Why is the chi-squared test a one-tailed test? [duplicate]
As far as I can see, the twosidedness is not referring to the chisquare test at all, but rather to the corresponding two-sided test of two proportions. The chisquare part is indeed a onetailed test, w |
52,920 | Are "covariance function" and "kernel function" synonyms? | Yes, "covariance function" and "positive-definite kernel" refer to the same concept. (Authors in the SVM literature sometimes omit the qualification "positive-definite", since it's typically by far the most relevant type of kernel.)
For example, see page 80 of Rasmussen and Williams, Gaussian Processes for Machine Learning, 2006. | Are "covariance function" and "kernel function" synonyms? | Yes, "covariance function" and "positive-definite kernel" refer to the same concept. (Authors in the SVM literature sometimes omit the qualification "positive-definite", since it's typically by far th | Are "covariance function" and "kernel function" synonyms?
Yes, "covariance function" and "positive-definite kernel" refer to the same concept. (Authors in the SVM literature sometimes omit the qualification "positive-definite", since it's typically by far the most relevant type of kernel.)
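A minimal R sketch of the 'same object, two names' point (not from the book): the squared-exponential kernel used in the SVM literature is exactly what the GP literature calls a covariance function, and the matrix it produces is symmetric and positive semi-definite.
k <- function(x, y, ell = 1) exp(-(x - y)^2 / (2 * ell^2))  # squared-exponential (RBF) kernel
x <- seq(0, 5, length.out = 6)
K <- outer(x, x, k)                     # kernel matrix == GP covariance matrix at these inputs
isSymmetric(K)                          # TRUE
min(eigen(K, symmetric = TRUE)$values)  # >= 0 up to rounding error, i.e. positive semi-definite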
For example, see page 80 of Rasmussen and Williams, Gaussian Processes for Machine Learning, 2006. | Are "covariance function" and "kernel function" synonyms?
Yes, "covariance function" and "positive-definite kernel" refer to the same concept. (Authors in the SVM literature sometimes omit the qualification "positive-definite", since it's typically by far th |
52,921 | Are "covariance function" and "kernel function" synonyms? | Danica's answer is correct. Precisely stated, covariance matrices and Mercer kernels are both matrices which are (1) positive semi-definite and (2) symmetric. However, there is some research into matrices other than Mercer kernels, that is, matrices which are not positive semi-definite but which may be useful in machine learning nonetheless. These are occasionally referred to as kernels, as in this paper, but at least in this case the authors are careful to stress that when they are speaking of kernels, they have a definition in mind which does not necessarily satisfy Mercer's conditions.
Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J. Smola. "Learning with Non-Positive Kernels." In Proceedings of the 21st International Conference
on Machine Learning, Banff, Canada, 2004. | Are "covariance function" and "kernel function" synonyms? | Danica's answer is correct. Precisely stated, covariance matrices and Mercer kernels are both matrices which are (1) positive definite and (2) symmetric. However, there is some research into matrices | Are "covariance function" and "kernel function" synonyms?
Danica's answer is correct. Precisely stated, covariance matrices and Mercer kernels are both matrices which are (1) positive semi-definite and (2) symmetric. However, there is some research into matrices other than Mercer kernels, that is, matrices which are not positive semi-definite but which may be useful in machine learning nonetheless. These are occasionally referred to as kernels, as in this paper, but at least in this case the authors are careful to stress that when they are speaking of kernels, they have a definition in mind which does not necessarily satisfy Mercer's conditions.
Cheng Soon Ong, Xavier Mary, Stéphane Canu, and Alexander J. Smola. "Learning with Non-Positive Kernels." In Proceedings of the 21st International Conference
on Machine Learning, Banff, Canada, 2004. | Are "covariance function" and "kernel function" synonyms?
Danica's answer is correct. Precisely stated, covariance matrices and Mercer kernels are both matrices which are (1) positive definite and (2) symmetric. However, there is some research into matrices |
52,922 | A good description of the random forests method | When getting up to speed on a topic, I find it helpful to start at the beginning and work forward chronologically. Breiman's original paper on random forests is where I would recommend starting.
Leo Breiman. "Random Forests." Machine Learning (2001). 45, 5-32. | A good description of the random forests method | When getting up to speed on a topic, I find it helpful to start at the beginning and work forward chronologically. Breiman's original paper on random forests is where I would recommend starting.
Leo B | A good description of the random forests method
When getting up to speed on a topic, I find it helpful to start at the beginning and work forward chronologically. Breiman's original paper on random forests is where I would recommend starting.
Leo Breiman. "Random Forests." Machine Learning (2001). 45, 5-32. | A good description of the random forests method
When getting up to speed on a topic, I find it helpful to start at the beginning and work forward chronologically. Breiman's original paper on random forests is where I would recommend starting.
Leo B |
52,923 | A good description of the random forests method | Did you check out "The Elements of Statistical Learning" http://statweb.stanford.edu/~tibs/ElemStatLearn/ (free online pdf). It is the more advanced version of "An Introduction to Statistical Learning with Applications in R "
If that is not appropriate I would probably just start reading the original journal articles that are listed as references in the relevant chapters of these books. | A good description of the random forests method | Did you check out "The Elements of Statistical Learning" http://statweb.stanford.edu/~tibs/ElemStatLearn/ (free online pdf). It is the more advanced version of "An Introduction to Statistical Learning | A good description of the random forests method
Did you check out "The Elements of Statistical Learning" http://statweb.stanford.edu/~tibs/ElemStatLearn/ (free online pdf). It is the more advanced version of "An Introduction to Statistical Learning with Applications in R "
If that is not appropriate I would probably just start reading the original journal articles that are listed as references in the relevant chapters of these books. | A good description of the random forests method
Did you check out "The Elements of Statistical Learning" http://statweb.stanford.edu/~tibs/ElemStatLearn/ (free online pdf). It is the more advanced version of "An Introduction to Statistical Learning |
52,924 | A good description of the random forests method | There is a PhD thesis from one of the Kaggle guys about Understanding Random Forests. And that's actually the title of his thesis. This is the link and i think its a pretty new PhD:
http://www.montefiore.ulg.ac.be/~glouppe/pdf/phd-thesis.pdf
Hope this helps, it's more specific and starts from basics as well. | A good description of the random forests method | There is a PhD thesis from one of the Kaggle guys about Understanding Random Forests. And that's actually the title of his thesis. This is the link and i think its a pretty new PhD:
http://www.montefi | A good description of the random forests method
There is a PhD thesis from one of the Kaggle guys about Understanding Random Forests. And that's actually the title of his thesis. This is the link and i think its a pretty new PhD:
http://www.montefiore.ulg.ac.be/~glouppe/pdf/phd-thesis.pdf
Hope this helps, it's more specific and starts from basics as well. | A good description of the random forests method
There is a PhD thesis from one of the Kaggle guys about Understanding Random Forests. And that's actually the title of his thesis. This is the link and i think its a pretty new PhD:
http://www.montefi |
52,925 | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance? | First, let me assure you that - as mentioned by @amoeba - you are on the right path to land in "research hell", that is the place where researchers (should) go when they let their p-value decide what to include or not in the analysis.
Reason 1.
You have to decide a priori whether you consider Levene's test a good test for heteroscedasticity or not, and what to do in case the test is significant/not significant. Also, while you and I may disagree, if you use thresholds to make decisions (and this is still the standard in classic statistics, unfortunately), whether something is less or more significant has little importance.
Reasons 2 and 3.
"Both groups are well matched on age and IQ", this is good. One of ANCOVA's most important assumptions is independence of the covariate and the treatment effect. "This association is not significant for IQ and glutamate", I am not completely sure this is a problem; I actually believe it isn't. The second most important assumption for ANCOVA, in fact, is that the covariates have the same relationship with the dependent variable regardless of the independent variables. This is usually called homogeneity of regression slopes.
Question 1.
The problem here is not that you are going to be eaten alive; likely, you won't (I have seen worse analyses being published...). The problem is that you are thinking about publishing the results of an analysis you are not completely sure of, after having violated the most important rule in experimental data analysis: do not make changes based on the final outcome.
Question 2.
In theory, yes: heteroscedasticity could be the sign of something else going on in your data. Or not. We don't (can't) know. However, given your small sample size I would argue against continuing with your analysis without fixing the problem of unequal variance. There are techniques you may want to consider, they may or may not work. But, if they don't work, you are left with only one possibility: do not conduct your analysis (since you have already done that: do not report the results).
Question 3.
Yes, it does. Again, you should decide a priori. Type II is preferable if you are interested in main effects while Type III is preferable when you anticipate interactions. I tend to prefer Type III but a debate over which one is better still exists.
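A minimal R sketch of the difference, on simulated data (not your data; the car package's Anova() is assumed to be available):
library(car)
set.seed(1)
d <- data.frame(group = gl(2, 20, labels = c("control", "patient")),
                IQ    = rnorm(40, 100, 15),
                age   = rnorm(40, 35, 8))
d$glutamate <- 9 + 0.02 * d$IQ + rnorm(40)
options(contrasts = c("contr.sum", "contr.poly"))  # sum-to-zero contrasts, needed for sensible Type III tests
m <- lm(glutamate ~ group * IQ + age, data = d)
Anova(m, type = 2)  # Type II: main effects, assuming the interaction is negligible
Anova(m, type = 3)  # Type III: each term adjusted for all others, including the interaction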
Conclusion.
My suggestion is not to publish these results and to try to collect more data. Some of the assumptions you broke become less important with bigger sample (60+). That said, you should never look at the final results before being sure you have done everything correctly because, in theory, that is a point of no return. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting | First, let me assure you that - as mentioned by @amoeba - you are on the right path to land in "research hell", that is the place where researchers (should) go when they let their p-value to decide wh | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance?
First, let me assure you that - as mentioned by @amoeba - you are on the right path to land in "research hell", that is the place where researchers (should) go when they let their p-value decide what to include or not in the analysis.
Reason 1.
You have to decide a priori whether you consider Levene's test a good test for heteroscedasticity or not, and what to do in case the test is significant/not significant. Also, while you and I may disagree, if you use thresholds to make decisions (and this is still the standard in classic statistics, unfortunately), whether something is less or more significant has little importance.
Reasons 2 and 3.
"Both groups are well matched on age and IQ", this is good. One of ANCOVA's most important assumptions is independence of the covariate and the treatment effect. "This association is not significant for IQ and glutamate", I am not completely sure this is a problem; I actually believe it isn't. The second most important assumption for ANCOVA, in fact, is that the covariates have the same relationship with the dependent variable regardless of the independent variables. This is usually called homogeneity of regression slopes.
Question 1.
The problem here is not that you are going to be eaten alive; likely, you won't (I have seen worse analyses being published...). The problem is that you are thinking about publishing the results of an analysis you are not completely sure of, after having violated the most important rule in experimental data analysis: do not make changes based on the final outcome.
Question 2.
In theory, yes: heteroscedasticity could be the sign of something else going on in your data. Or not. We don't (can't) know. However, given your small sample size I would argue against continuing with your analysis without fixing the problem of unequal variance. There are techniques you may want to consider, they may or may not work. But, if they don't work, you are left with only one possibility: do not conduct your analysis (since you have already done that: do not report the results).
Question 3.
Yes, it does. Again, you should decide a priori. Type II is preferable if you are interested in main effects while Type III is preferable when you anticipate interactions. I tend to prefer Type III but a debate over which one is better still exists.
Conclusion.
My suggestion is not to publish these results and to try to collect more data. Some of the assumptions you broke become less important with bigger sample (60+). That said, you should never look at the final results before being sure you have done everything correctly because, in theory, that is a point of no return. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting
First, let me assure you that - as mentioned by @amoeba - you are on the right path to land in "research hell", that is the place where researchers (should) go when they let their p-value to decide wh |
52,926 | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance? | First, ask yourself why IQ is always included in such models. There is probably some reason. It might be that IQ is a mediator (see below)
Second, from what you say, it sounds like IQ is a type of mediator of the relationship between glutamate concentration and whatever your group variable is. Matching will not deal with mediating relationships. The correct way to establish mediation is not completely agreed on (even the terminology is not completely settled), but my view is that statistical significance has little role in it. The key thing is not whether the relationships are significant or not (with N = 33, significance is hard) but changes in the parameter estimates.
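A hedged R sketch of the 'changes in the parameter estimates' idea, on simulated data (not the study's):
set.seed(2)
d <- data.frame(group = rep(0:1, each = 20))
d$IQ        <- 100 + 5 * d$group + rnorm(40, sd = 10)  # group -> IQ (the hypothesised mediating path)
d$glutamate <- 9 + 0.05 * d$IQ + rnorm(40)             # IQ -> outcome
coef(lm(glutamate ~ group, data = d))["group"]         # total effect of group
coef(lm(glutamate ~ group + IQ, data = d))["group"]    # direct effect; a clear drop suggests mediation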
Third, the fact that "the model becomes messy" is no reason to exclude a variable. Not all relationships are simple. To exclude a mediator can give a very wrong picture of a relationship. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting | First, ask yourself why IQ is always included in such models. There is probably some reason. It might be that IQ is a mediator (see below)
Second, from what you say, it sounds like IQ is a type of me | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance?
First, ask yourself why IQ is always included in such models. There is probably some reason. It might be that IQ is a mediator (see below)
Second, from what you say, it sounds like IQ is a type of mediator of the relationship between glutamate concentration and whatever your group variable is. Matching will not deal with mediating relationships. The correct way to establish mediation is not completely agreed on (even the terminology is not completely settled), but my view is that statistical significance has little role in it. The key thing is not whether the relationships are significant or not (with N = 33, significance is hard) but changes in the parameter estimates.
Third, the fact that "the model becomes messy" is no reason to exclude a variable. Not all relationships are simple. To exclude a mediator can give a very wrong picture of a relationship. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting
First, ask yourself why IQ is always included in such models. There is probably some reason. It might be that IQ is a mediator (see below)
Second, from what you say, it sounds like IQ is a type of me |
52,927 | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance? | I'm a bit worried about the y-axis label on your plot, "C glutamate SD less than 20 - extremes," which has two potentially important implications.
For one, it might be taken to suggest that there has already been some removal of "outlier" determinations, which is tricky business. This should usually only be done if you know that measurements were in error (as opposed to messy). A possible interpretation of your axis label is that there were multiple analytical determinations of glutamate for each individual, each determination involving more than 1 technical replicate, and that some determinations were excluded if the technical replicates disagreed with an SD more than 20. If the "- extremes" means that you further removed the extreme values of the determinations for an individual, then that's an additional issue to consider.
Second, if you have analysis SDs on the order of 20 with mean values on the order of 9, and glutamate concentrations certainly cannot go below 0, then you probably should not be analyzing your glutamate analyses on a linear scale, at least in terms of combining analytical results to obtain a glutamate value for each individual. My guess is that the analytical errors in glutamate determinations are more or less proportional to the values measured, so for the glutamate-analysis part of this work you would be better off working on a log scale so that magnitudes of analytical errors are independent of the measured values, on that scale. On a log scale some of your "outliers" might not be so far off, and your results might be more reliable (and potentially even in support of your hypothesis).
The "weak" relation between IQ and glutamate that you cite (Pearson Correlation = .203, sig=.213, N=33) is not necessarily so weak. Trying to rule out a relation between two variables is different from trying to demonstrate a significant relation between them. That correlation coefficient isn't atypical of many biological relationships, and the lack of "significance" might simply represent the small number of cases, so that's not a reason to exclude IQ.
Part of the problem here is an under-powered experimental design, as you seem to understand. If controlling for age and IQ is typically expected in this type of study, then there needed to be enough cases to accommodate that. Each additional covariate uses up a degree of freedom in your analysis, potentially making it harder to detect significance if the covariate bears only a weak relation to the outcome variable. It is not unusual to find "significance" with a small number of predictors, which then disappears as extra predictors are added.
If I am correct about the nature of your glutamate determinations, you will need to re-evaluate these relationships in any event. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting | I'm a bit worried about the y-axis label on your plot, "C glutamate SD less than 20 - extremes," which has two potentially important implications.
For one, it might be taken to suggest that there has | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting is unequal variance?
I'm a bit worried about the y-axis label on your plot, "C glutamate SD less than 20 - extremes," which has two potentially important implications.
For one, it might be taken to suggest that there has already been some removal of "outlier" determinations, which is tricky business. This should usually only be done if you know that measurements were in error (as opposed to messy). A possible interpretation of your axis label is that there were multiple analytical determinations of glutamate for each individual, each determination involving more than 1 technical replicate, and that some determinations were excluded if the technical replicates disagreed with an SD more than 20. If the "- extremes" means that you further removed the extreme values of the determinations for an individual, then that's an additional issue to consider.
Second, if you have analysis SDs on the order of 20 with mean values on the order of 9, and glutamate concentrations certainly cannot go below 0, then you probably should not be analyzing your glutamate analyses on a linear scale, at least in terms of combining analytical results to obtain a glutamate value for each individual. My guess is that the analytical errors in glutamate determinations are more or less proportional to the values measured, so for the glutamate-analysis part of this work you would be better off working on a log scale so that magnitudes of analytical errors are independent of the measured values, on that scale. On a log scale some of your "outliers" might not be so far off, and your results might be more reliable (and potentially even in support of your hypothesis).
The "weak" relation between IQ and glutamate that you cite (Pearson Correlation = .203, sig=.213, N=33) is not necessarily so weak. Trying to rule out a relation between two variables is different from trying to demonstrate a significant relation between them. That correlation coefficient isn't atypical of many biological relationships, and the lack of "significance" might simply represent the small number of cases, so that's not a reason to exclude IQ.
Part of the problem here is an under-powered experimental design, as you seem to understand. If controlling for age and IQ is typically expected in this type of study, then there needed to be enough cases to accommodate that. Each additional covariate uses up a degree of freedom in your analysis, potentially making it harder to detect significance if the covariate bears only a weak relation to the outcome variable. It is not unusual to find "significance" with a small number of predictors, which then disappears as extra predictors are added.
If I am correct about the nature of your glutamate determinations, you will need to re-evaluate these relationships in any event. | Do I have a justified reason to exclude a non-significant covariate from my ANCOVA? How interesting
I'm a bit worried about the y-axis label on your plot, "C glutamate SD less than 20 - extremes," which has two potentially important implications.
For one, it might be taken to suggest that there has |
52,928 | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplicate] | You can use the function mvrnorm from the MASS package to sample values from a multivariate normal distribution.
Your data:
mu <- c(4.23, 3.01, 2.91)
stddev <- c(1.23, 0.92, 1.32)
corMat <- matrix(c(1, 0.78, 0.23,
0.78, 1, 0.27,
0.23, 0.27, 1),
ncol = 3)
corMat
# [,1] [,2] [,3]
# [1,] 1.00 0.78 0.23
# [2,] 0.78 1.00 0.27
# [3,] 0.23 0.27 1.00
Create the covariance matrix:
covMat <- stddev %*% t(stddev) * corMat
covMat
# [,1] [,2] [,3]
# [1,] 1.512900 0.882648 0.373428
# [2,] 0.882648 0.846400 0.327888
# [3,] 0.373428 0.327888 1.742400
Sample values. If you use empirical = FALSE, the means and covariance values represent the population values. Hence, the sampled data-set most likely does not match these values exactly.
set.seed(1)
library(MASS)
dat1 <- mvrnorm(n = 212, mu = mu, Sigma = covMat, empirical = FALSE)
colMeans(dat1)
# [1] 4.163594 2.995814 2.835397
cor(dat1)
# [,1] [,2] [,3]
# [1,] 1.0000000 0.7348533 0.1514836
# [2,] 0.7348533 1.0000000 0.2654715
# [3,] 0.1514836 0.2654715 1.0000000
If you sample with empirical = TRUE, the properties of the sampled data-set match means and covariances exactly.
dat2 <- mvrnorm(n = 212, mu = mu, Sigma = covMat, empirical = TRUE)
colMeans(dat2)
# [1] 4.23 3.01 2.91
cor(dat2)
# [,1] [,2] [,3]
# [1,] 1.00 0.78 0.23
# [2,] 0.78 1.00 0.27
# [3,] 0.23 0.27 1.00 | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplica | You can use the function mvrnorm from the MASS package to sample values from a multivariate normal distrbution.
Your data:
mu <- c(4.23, 3.01, 2.91)
stddev <- c(1.23, 0.92, 1.32)
corMat <- matrix(c(1 | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplicate]
You can use the function mvrnorm from the MASS package to sample values from a multivariate normal distribution.
Your data:
mu <- c(4.23, 3.01, 2.91)
stddev <- c(1.23, 0.92, 1.32)
corMat <- matrix(c(1, 0.78, 0.23,
0.78, 1, 0.27,
0.23, 0.27, 1),
ncol = 3)
corMat
# [,1] [,2] [,3]
# [1,] 1.00 0.78 0.23
# [2,] 0.78 1.00 0.27
# [3,] 0.23 0.27 1.00
Create the covariance matrix:
covMat <- stddev %*% t(stddev) * corMat
covMat
# [,1] [,2] [,3]
# [1,] 1.512900 0.882648 0.373428
# [2,] 0.882648 0.846400 0.327888
# [3,] 0.373428 0.327888 1.742400
Sample values. If you use empirical = FALSE, the means and covariance values represent the population values. Hence, the sampled data-set most likely does not match these values exactly.
set.seed(1)
library(MASS)
dat1 <- mvrnorm(n = 212, mu = mu, Sigma = covMat, empirical = FALSE)
colMeans(dat1)
# [1] 4.163594 2.995814 2.835397
cor(dat1)
# [,1] [,2] [,3]
# [1,] 1.0000000 0.7348533 0.1514836
# [2,] 0.7348533 1.0000000 0.2654715
# [3,] 0.1514836 0.2654715 1.0000000
If you sample with empirical = TRUE, the properties of the sampled data-set match means and covariances exactly.
dat2 <- mvrnorm(n = 212, mu = mu, Sigma = covMat, empirical = TRUE)
colMeans(dat2)
# [1] 4.23 3.01 2.91
cor(dat2)
# [,1] [,2] [,3]
# [1,] 1.00 0.78 0.23
# [2,] 0.78 1.00 0.27
# [3,] 0.23 0.27 1.00 | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplica
You can use the function mvrnorm from the MASS package to sample values from a multivariate normal distrbution.
Your data:
mu <- c(4.23, 3.01, 2.91)
stddev <- c(1.23, 0.92, 1.32)
corMat <- matrix(c(1 |
52,929 | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplicate] | Assuming normality, you could draw samples from Multivariate Normal distribution. What you need for that is a vector of means $\boldsymbol{\mu} = (\mu_1, ..., \mu_k)$ and a covariance matrix $\boldsymbol{\Sigma}$. If you recall that covariance matrix has variances on the diagonal and values of covariance in the rest of cells, you can re-create if from your data.
Correlation is
$$ \mathrm{corr}(X,Y) = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} $$
and you already have both the correlation coefficients and standard deviations of individual variables, so you can use them to create covariance matrix. Now, you just have to use those values as parameters of some function from statistical package that samples from MVN distribution, e.g. mvtnorm package in R. | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplica | Assuming normality, you could draw samples from Multivariate Normal distribution. What you need for that is a vector of means $\boldsymbol{\mu} = (\mu_1, ..., \mu_k)$ and a covariance matrix $\boldsym | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplicate]
Assuming normality, you could draw samples from Multivariate Normal distribution. What you need for that is a vector of means $\boldsymbol{\mu} = (\mu_1, ..., \mu_k)$ and a covariance matrix $\boldsymbol{\Sigma}$. If you recall that covariance matrix has variances on the diagonal and values of covariance in the rest of cells, you can re-create if from your data.
Correlation is
$$ \mathrm{corr}(X,Y) = \frac{\mathrm{cov}(X,Y)}{\sigma_X \sigma_Y} $$
and you already have both the correlation coefficients and standard deviations of individual variables, so you can use them to create covariance matrix. Now, you just have to use those values as parameters of some function from statistical package that samples from MVN distribution, e.g. mvtnorm package in R. | Generating a simulated dataset from a correlation matrix with means and standard deviations [duplica
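A short sketch with mvtnorm, reusing the example values from the question as quoted in the other answer (the mu/stddev/corMat numbers are taken from there, not newly invented):
library(mvtnorm)
mu     <- c(4.23, 3.01, 2.91)
stddev <- c(1.23, 0.92, 1.32)
corMat <- matrix(c(1,    0.78, 0.23,
                   0.78, 1,    0.27,
                   0.23, 0.27, 1), ncol = 3)
Sigma  <- diag(stddev) %*% corMat %*% diag(stddev)  # covariance = D %*% R %*% D
x <- rmvnorm(n = 212, mean = mu, sigma = Sigma)
colMeans(x); cor(x)  # close to the target means and correlations for a sample of this size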
Assuming normality, you could draw samples from Multivariate Normal distribution. What you need for that is a vector of means $\boldsymbol{\mu} = (\mu_1, ..., \mu_k)$ and a covariance matrix $\boldsym |
52,930 | Why am I getting 100% accuracy for SVM and Decision Tree (scikit) | I was able to reproduce your results:
> clf = svm.SVC()
> scores = cross_validation.cross_val_score(clf, X, Y, cv=10)
I didn't get perfect out of fold classification, but close:
> print(scores)
array([ 1. , 1. , 1. , 0.99152542, 1. ,
1. , 1. , 1. , 1. , 1. ])
It's not very easy to figure out what's going on with a support vector machine, so I fit a decision tree to your data:
> tre = tree.DecisionTreeClassifier()
> tre.fit(X, Y)
The tree is a perfect classifier on the training data:
> sum(abs(tre.predict(X) - Y))
0
Turns out this tree is pretty simple:
It looks like the third column in your data (the one named Z) is a perfect separator. This is easily confirmed with a scatterplot: | Why am I getting 100% accuracy for SVM and Decision Tree (scikit) | I was able to reproduce your results:
> clf = svm.SVC()
> scores = cross_validation.cross_val_score(clf, X, Y, cv=10)
I didn't get perfect out of fold classification, but close:
> print(scores)
array | Why am I getting 100% accuracy for SVM and Decision Tree (scikit)
I was able to reproduce your results:
> clf = svm.SVC()
> scores = cross_validation.cross_val_score(clf, X, Y, cv=10)
I didn't get perfect out of fold classification, but close:
> print(scores)
array([ 1. , 1. , 1. , 0.99152542, 1. ,
1. , 1. , 1. , 1. , 1. ])
It's not very easy to figure out what's going on with a support vector machine, so I fit a decision tree to your data:
> tre = tree.DecisionTreeClassifier()
> tre.fit(X, Y)
The tree is a perfect classifier on the training data:
> sum(abs(tre.predict(X) - Y))
0
Turns out this tree is pretty simple:
It looks like the third column in your data (the one named Z) is a perfect separator. This is easily confirmed with a scatterplot: | Why am I getting 100% accuracy for SVM and Decision Tree (scikit)
I was able to reproduce your results:
> clf = svm.SVC()
> scores = cross_validation.cross_val_score(clf, X, Y, cv=10)
I didn't get perfect out of fold classification, but close:
> print(scores)
array |
52,931 | Why am I getting 100% accuracy for SVM and Decision Tree (scikit) | Oh, my God, I came across the same issue. Maybe my answer is not the best answer for you, but it may help other people.
Here is my code with Scikit-Learn
from sklearn.tree import DecisionTreeClassifier  # import assumed from scikit-learn
clf = DecisionTreeClassifier(criterion='entropy', max_depth=10)
clf.fit(X, y)
And I got 100% accuracy score.
However, when I looked at the feature_importances_ of clf, I found that the tag column was still in X, even though it should have been removed. After removing the tag column from X, the accuracy was 89%.
So, my suggestion is after you built a model, check the parameters in it, for example feature_importances_ etc. Good luck! | Why am I getting 100% accuracy for SVM and Decision Tree (scikit) | Oh, my God, I came across the same issue. Maybe my answer is not the best answer for you, but it may help other people.
Here is my code with Scikit-Learn
clf = DecisionTreeClassifier(criterion='entro | Why am I getting 100% accuracy for SVM and Decision Tree (scikit)
Oh, my God, I came across the same issue. Maybe my answer is not the best answer for you, but it may help other people.
Here is my code with Scikit-Learn
from sklearn.tree import DecisionTreeClassifier  # import assumed from scikit-learn
clf = DecisionTreeClassifier(criterion='entropy', max_depth=10)
clf.fit(X, y)
And I got 100% accuracy score.
However, when I looked at the feature_importances_ of clf, I found that the tag column was still in X, even though it should have been removed. After removing the tag column from X, the accuracy was 89%.
So, my suggestion is after you built a model, check the parameters in it, for example feature_importances_ etc. Good luck! | Why am I getting 100% accuracy for SVM and Decision Tree (scikit)
Oh, my God, I came across the same issue. Maybe my answer is not the best answer for you, but it may help other people.
Here is my code with Scikit-Learn
clf = DecisionTreeClassifier(criterion='entro |
52,932 | Generating and Working with Random Vectors in R | The mvtnorm package in R has the rmvnorm function (analogous to rnorm) that produces arbitrary-dimensional Gaussian random variables. It also provides the option to use three different algorithms. A quick comparison using your exact setup:
library(mvtnorm)
library(microbenchmark)
sigma <- matrix(c(20, 8, 8, 20), 2)
mu <- c(3, -5)
microbenchmark(v1 <- rmvnorm(1e5, mu, sigma, "eigen"),
v2 <- rmvnorm(1e5, mu, sigma, "svd"),
v3 <- rmvnorm(1e5, mu, sigma, "chol"))
# Unit: milliseconds
# expr min lq mean median
# v1 <- rmvnorm(1e+05, mu, sigma, "eigen") 19.95751 21.31730 28.14967 21.57772
# v2 <- rmvnorm(1e+05, mu, sigma, "svd") 19.98124 21.29868 30.23727 21.74448
# v3 <- rmvnorm(1e+05, mu, sigma, "chol") 19.92971 21.31440 32.01633 21.77176
# uq max neval cld
# 22.84293 91.37796 100 a
# 23.23654 89.43729 100 a
# 24.03474 91.22031 100 a
The timings are all about the same, and they're all pretty darn fast. If you need to generate these simulations in significantly less time than that, you'll have to look for a solution in C++ (to interface with Rcpp), C, or Fortran. | Generating and Working with Random Vectors in R | The mvtnorm package in R has the rmvnorm function (analogous to rnorm) that produces arbitrary-dimensional Gaussian random variables. It also provides the option to use three different algorithms. A q | Generating and Working with Random Vectors in R
The mvtnorm package in R has the rmvnorm function (analogous to rnorm) that produces arbitrary-dimensional Gaussian random variables. It also provides the option to use three different algorithms. A quick comparison using your exact setup:
library(mvtnorm)
library(microbenchmark)
sigma <- matrix(c(20, 8, 8, 20), 2)
mu <- c(3, -5)
microbenchmark(v1 <- rmvnorm(1e5, mu, sigma, "eigen"),
v2 <- rmvnorm(1e5, mu, sigma, "svd"),
v3 <- rmvnorm(1e5, mu, sigma, "chol"))
# Unit: milliseconds
# expr min lq mean median
# v1 <- rmvnorm(1e+05, mu, sigma, "eigen") 19.95751 21.31730 28.14967 21.57772
# v2 <- rmvnorm(1e+05, mu, sigma, "svd") 19.98124 21.29868 30.23727 21.74448
# v3 <- rmvnorm(1e+05, mu, sigma, "chol") 19.92971 21.31440 32.01633 21.77176
# uq max neval cld
# 22.84293 91.37796 100 a
# 23.23654 89.43729 100 a
# 24.03474 91.22031 100 a
The timings are all about the same, and they're all pretty darn fast. If you need to generate these simulations in significantly less time than that, you'll have to look for a solution in C++ (to interface with Rcpp), C, or Fortran. | Generating and Working with Random Vectors in R
The mvtnorm package in R has the rmvnorm function (analogous to rnorm) that produces arbitrary-dimensional Gaussian random variables. It also provides the option to use three different algorithms. A q |
52,933 | Generating and Working with Random Vectors in R | In addition to @ssdecontrol's answer, I've been using the MASS package's mvrnorm. Adding to @ssdecontrol's code (with my slower compy):
library(mvtnorm)
library(MASS)
library(microbenchmark)
sigma <- matrix(c(20, 8, 8, 20), 2)
mu <- c(3, -5)
microbenchmark(v1 <- rmvnorm(1e5, mu, sigma, "eigen"),
v2 <- rmvnorm(1e5, mu, sigma, "svd"),
v3 <- rmvnorm(1e5, mu, sigma, "chol"),
v4 <- mvrnorm(1e5, mu, sigma))
# Unit: milliseconds
# expr min lq mean median uq max neval
# v1 <- rmvnorm(1e+05, mu, sigma, "eigen") 37.49799 40.23405 43.08878 42.20849 45.07547 76.43984 100
# v2 <- rmvnorm(1e+05, mu, sigma, "svd") 37.51092 39.18271 44.08090 41.82957 44.20879 206.87745 100
# v3 <- rmvnorm(1e+05, mu, sigma, "chol") 37.40030 39.74741 41.96467 40.84335 43.63740 50.37007 100
# v4 <- mvrnorm(1e+05, mu, sigma) 36.78208 37.73462 40.67353 39.23602 41.96271 89.75172 100
Note that mvrnorm only does the Eigen decomposition. | Generating and Working with Random Vectors in R | In addition to @ssdecontrol's answer, I've been using the MASS package's mvrnorm. Adding to @ssdecontrol's code (with my slower compy):
library(mvtnorm)
library(MASS)
library(microbenchmark)
sigma | Generating and Working with Random Vectors in R
In addition to @ssdecontrol's answer, I've been using the MASS package's mvrnorm. Adding to @ssdecontrol's code (with my slower compy):
library(mvtnorm)
library(MASS)
library(microbenchmark)
sigma <- matrix(c(20, 8, 8, 20), 2)
mu <- c(3, -5)
microbenchmark(v1 <- rmvnorm(1e5, mu, sigma, "eigen"),
v2 <- rmvnorm(1e5, mu, sigma, "svd"),
v3 <- rmvnorm(1e5, mu, sigma, "chol"),
v4 <- mvrnorm(1e5, mu, sigma))
# Unit: milliseconds
# expr min lq mean median uq max neval
# v1 <- rmvnorm(1e+05, mu, sigma, "eigen") 37.49799 40.23405 43.08878 42.20849 45.07547 76.43984 100
# v2 <- rmvnorm(1e+05, mu, sigma, "svd") 37.51092 39.18271 44.08090 41.82957 44.20879 206.87745 100
# v3 <- rmvnorm(1e+05, mu, sigma, "chol") 37.40030 39.74741 41.96467 40.84335 43.63740 50.37007 100
# v4 <- mvrnorm(1e+05, mu, sigma) 36.78208 37.73462 40.67353 39.23602 41.96271 89.75172 100
Note that mvrnorm only does the Eigen decomposition. | Generating and Working with Random Vectors in R
In addition to @ssdecontrol's answer, I've been using the MASS package's mvrnorm. Adding to @ssdecontrol's code (with my slower compy):
library(mvtnorm)
library(MASS)
library(microbenchmark)
sigma |
52,934 | Generating and Working with Random Vectors in R | For completeness sake, here's a follow-up note on how to generate random vectors regardless of the marginal distribution of the individual components. I'm going to stick with the bivariate case:
Generate a bivariate vector from a standard normal random distribution following a predetermined correlation*. I'll stick with the case initially posted, which had used a covariance of 8 as an example. Once the final vector $\textbf{V}=[X_1,X_2]^{T}$ was obtained the correlation between $X_1$ and $X_2$ was found to be cor(X1,X2) [1] 0.4015484 (and the covariance as set up initially, cov(X1,X2) = 8.066535) (no seed was set).
We now set.seed(0), and sticking with a correlation of 0.4, we code a correlation matrix such as: C <- matrix(c(1,0.4,0.4,1), nrow = 2), and we are ready for mvtnorm:
SN <- rmvnorm(n = 1e5, mean = c(0,0), sigma = C) to produce two vectors distributed as ~ $N(0, 1)$ and with a cor(SN[,1],SN[,2]) = 0.3993723 ~ 0.4. Here's the plot with regression line:
Use the Probability Integral Transform here to obtain a bivariate random vector with marginal distributions ~ $U(0, 1)$ and the same correlation:
U <- pnorm(SN) - so we are feeding the SN vector into pnorm, i.e. applying the standard normal CDF $\Phi$ to SN. Here's the cor(U[,1], U[,2]) = 0.3828065 ~ 0.4. And here's the scatterplot with marginal distributions at the edges:
Apply the inverse transform sampling method here to finally obtain the bivector of equally correlated points belonging to whichever distribution family.
We can replicate initial posting and end up with two correlated samples from $N(3, 20)$ and $N(-5, 20)$, respectively:
X1 <- qnorm(U[,1], mean = 3, sd = 4.47) and
X2 <- qnorm(U[,2], mean = -5, sd = 4.47), which will show a cor(X1,X2) = 0.3993723 ~ 0.4.
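Putting the steps above into one runnable sketch (seed, sample size and distributions as in the text; sd = sqrt(20) ~ 4.47):
library(mvtnorm)
set.seed(0)
C  <- matrix(c(1, 0.4, 0.4, 1), nrow = 2)          # target correlation
SN <- rmvnorm(n = 1e5, mean = c(0, 0), sigma = C)  # step 1: correlated standard normals
U  <- pnorm(SN)                                    # step 2: probability integral transform -> U(0,1) margins
X1 <- qnorm(U[, 1], mean = 3,  sd = sqrt(20))      # step 3: inverse transform to the target marginals
X2 <- qnorm(U[, 2], mean = -5, sd = sqrt(20))
cor(X1, X2)                                        # close to the target 0.4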
However, if the distributions chosen are more dissimilar, the correlation may not be as precise. For instance, let's get the first column of $U$ (U[,1]) to follow a Student's $t$ distribution with 3 d.f., and the second an Exponential with a $\lambda$=1:
X1 <- qt(U[,1], df = 3) and
X2 <- qexp(U[,2], rate = 1)
The cor(X1,X2) = 0.333598 < 0.4. Here are the respective histograms: | Generating and Working with Random Vectors in R | For completeness sake, here's a follow-up note on how to generate random vectors regardless of the marginal distribution of the individual components. I'm going to stick with the bivariate case:
Gene | Generating and Working with Random Vectors in R
For completeness sake, here's a follow-up note on how to generate random vectors regardless of the marginal distribution of the individual components. I'm going to stick with the bivariate case:
Generate a bivariate vector from a standard normal random distribution following a predetermined correlation*. I'll stick with the case initially posted, which had used a covariance of 8 as an example. Once the final vector $\textbf{V}=[X_1,X_2]^{T}$ was obtained the correlation between $X_1$ and $X_2$ was found to be cor(X1,X2) [1] 0.4015484 (and the covariance as set up initially, cov(X1,X2) = 8.066535) (no seed was set).
We now set.seed(0), and sticking with a correlation of 0.4, we code a correlation matrix such as: C <- matrix(c(1,0.4,0.4,1), nrow = 2), and we are ready for mvtnorm:
SN <- rmvnorm(n = 1e5, mean = c(0,0), sigma = C) to produce two vectors distributed as ~ $N(0, 1)$ and with a cor(SN[,1],SN[,2]) = 0.3993723 ~ 0.4. Here's the plot with regression line:
Use the Probability Integral Transform here to obtain a bivariate random vector with marginal distributions ~ $U(0, 1)$ and the same correlation:
U <- pnorm(SN) - so we are feeding the SN vector into pnorm, i.e. applying the standard normal CDF $\Phi$ to SN. Here's the cor(U[,1], U[,2]) = 0.3828065 ~ 0.4. And here's the scatterplot with marginal distributions at the edges:
Apply the inverse transform sampling method here to finally obtain the bivector of equally correlated points belonging to whichever distribution family.
We can replicate initial posting and end up with two correlated samples from $N(3, 20)$ and $N(-5, 20)$, respectively:
X1 <- qnorm(U[,1], mean = 3, sd = 4.47) and
X2 <- qnorm(U[,2], mean = -5, sd = 4.47), which will show a cor(X1,X2) = 0.3993723 ~ 0.4.
However, if the distributions chosen are more dissimilar, the correlation may not be as precise. For instance, let's get the first column of $U$ (U[,1]) to follow a Student's $t$ distribution with 3 d.f., and the second an Exponential with a $\lambda$=1:
X1 <- qt(U[,1], df = 3) and
X2 <- qexp(U[,2], rate = 1)
The cor(X1,X2) = 0.333598 < 0.4. Here are the respective histograms: | Generating and Working with Random Vectors in R
For completeness sake, here's a follow-up note on how to generate random vectors regardless of the marginal distribution of the individual components. I'm going to stick with the bivariate case:
Gene |
52,935 | Truncating data reduces correlation? | There's a number of ways to look at it, but this is a pretty straightforward one:
Imagine for a moment we're looking at a regression problem. The squared correlation between the two variables ($r^2$) is $R^2$, the coefficient of determination, which is $1-\frac{s^2_\epsilon}{\text{Var}(y)}$. When you restrict the range of $x$, you also reduce the range of $y$, so $\text{Var}(y)$ goes down with it, while $s^2_\epsilon$ (the noise about the line) should hardly change, since it still has an expected value of $\sigma^2_\epsilon$. Here's an example of that:
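Here is a small R simulation of that idea (a sketch with made-up numbers, not the original figure's code):
set.seed(42)
x <- runif(1e4, 0, 10)
y <- 2 * x + rnorm(1e4, sd = 4)                                  # linear trend plus noise
keep <- x < 5                                                    # restrict the range of x
c(r_full = cor(x, y), r_restricted = cor(x[keep], y[keep]))      # |r| drops
c(var_full = var(y), var_restricted = var(y[keep]))              # Var(y) drops
c(s2e_full       = var(residuals(lm(y ~ x))),
  s2e_restricted = var(residuals(lm(y[keep] ~ x[keep]))))        # residual variance barely changes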
Since the denominator of the fraction decreases while the numerator hardly changes, the fraction gets larger, so $R^2$ gets smaller, so $r^2(x,y)$ and hence $|r|$ will be smaller. So we really should expect that the size of the correlation decreases. | Truncating data reduces correlation? | There's a number of ways to look at it, but this is a pretty straightforward one:
52,936 | Truncating data reduces correlation? | Thinking of a 2D plot of one variable plotted against the other, limiting the range for one variable means only looking at a vertical or horizontal "slice".
So my intuition is that the overall shape of the "cloud" of points will be more vertical or more horizontal, instead of "diagonal". A vertical or horizontal-looking cloud of points has zero correlation. So to me there is indeed an intuition that correlation is likely to decrease.
As a toy example, if your data points are (1,1), (1,20), and (20,20), you have 0.5 correlation, but if you limit the range of the first variable to [0,10] you are left with the two points (1,1) and (1,20), which are aligned vertically, so no linear association remains (strictly, the sample correlation is undefined because the first variable is constant). If you limit the second variable to [10,30] then you get the two points (1,20) and (20,20), aligned horizontally, and again no linear association.
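The toy example is easy to check in R (a quick verification; note that cor() returns NA, with a zero-standard-deviation warning, when one of the variables is constant):
x <- c(1, 1, 20); y <- c(1, 20, 20)
cor(x, y)                      # 0.5 on the three points
cor(x[x <= 10], y[x <= 10])    # x is constant on the remaining points: NA
cor(x[y >= 10], y[y >= 10])    # y is constant on the remaining points: NA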
52,937 | Why does Pearson's chi-squared test detect differences that the GLM model fails to detect? | I don't see a big difference in the results:
d = read.table(text="Group Black Red
A 296 14
B 292 16
C 301 7
D 289 23", header=T)
chisq.test(d[,2:3])
# Pearson's Chi-squared test
#
# data: d[, 2:3]
# X-squared = 8.893, df = 3, p-value = 0.03075
mod = glm(cbind(Black, Red)~Group, data=d, family=binomial)
summary(mod)
# ...
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) 3.0513 0.2735 11.156 <2e-16 ***
# GroupB -0.1471 0.3751 -0.392 0.695
# GroupC 0.7099 0.4701 1.510 0.131
# GroupD -0.5204 0.3489 -1.491 0.136
# ...
#
# Null deviance: 9.3651e+00 on 3 degrees of freedom
# Residual deviance: 1.1902e-13 on 0 degrees of freedom
# AIC: 25.699
1-pchisq((9.3651 - 1.1902e-13), df=(3-0))
# [1] 0.02481063
The GLM is, if anything, slightly more significant. I wonder if this is a confusion about how to interpret statistical output from a model with categorical variables. When you have a categorical variable, most software (including R, above) uses reference cell coding (see here). The first level of the variable becomes the intercept, and the other levels are compared to the intercept. Thus, the output shows that B, C, and D do not significantly differ from A, but that doesn't mean they don't differ from each other (C and D look like they will, e.g.). To test if the entire factor / categorical variable is significant, you need to fit a new model without that variable and perform a nested model test. Since you have only one variable, you can just calculate the significance of the whole model directly using the null and residual deviance (see here).
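To make that last step concrete, here is a minimal sketch of the nested-model (likelihood-ratio) test in R, reusing the d and mod objects defined above (the null-model name mod0 is mine):
mod0 <- glm(cbind(Black, Red) ~ 1, data = d, family = binomial)  # model without Group
anova(mod0, mod, test = "Chisq")    # likelihood-ratio test of the whole Group factor
# drop1(mod, test = "Chisq") gives the same factor-level test in one call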
52,938 | How to draw a random sample from a Generalized Beta distribution of the second kind | If you consider the density$$f(y;a,b)=\frac{|a|y^{ap-1}}{b^{ap}B(p,q)(1+(y/b)^a)^{p+q}},$$b appears as a scale parameter. This means that, if $Z\sim f(z;a,1)$, then $bZ\sim f(z;a,b)$. So we can assume $b=1$ wlog. Now,$$f(y;a,1)=\frac{|a|y^{ap-1}}{B(p,q)(1+y^a)^{p+q}}=\frac{|a|\{y^{a}\}^{(ap-1)/a}}{B(p,q)(1+y^a)^{p+q}}$$involves only $y^a$. This suggests the change of variable $z=y^a$ or $y=z^{1/a}$. The Jacobian of this transform is
$$\frac{\text{d}y}{\text{d}z}=\frac{1}{a}z^{a^{-1}-1}$$ and the density
$$g(z;a)=\frac{|a|z^{(ap-1)/a}}{B(p,q)(1+z)^{p+q}}\,\frac{1}{|a|}z^{a^{-1}-1}=
\frac{z^{p-a^{-1}+a^{-1}-1}}{B(p,q)(1+z)^{p+q}}=\frac{z^{p-1}}{B(p,q)(1+z)^{p+q}}$$This happens to be the density of an unnormalised $F(2p,2q)$ distribution, defined as the distribution of the ratio of two independent $\chi^2$ random variables with degrees of freedom $2p$ and $2q$.
Therefore, to simulate from this $GB_2(a,b,p,q)$ distribution, follow the steps:
simulate $U_1\sim\chi^2_{2p}$, $U_2\sim\chi^2_{2q}$
take $Z=U_1\big/U_2$
take $Y=Z^{a^{-1}}$
take $X=bY$
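A direct R translation of these four steps (a sketch; the function name rgb2_manual and the parameter values in the example call are mine, not from the answer):
rgb2_manual <- function(n, a, b, p, q) {
  U1 <- rchisq(n, df = 2 * p)
  U2 <- rchisq(n, df = 2 * q)
  Z  <- U1 / U2        # unnormalised F ratio, the z above
  Y  <- Z^(1 / a)      # undo the change of variable z = y^a
  b * Y                # reintroduce the scale parameter
}
x <- rgb2_manual(1e4, a = 2, b = 1.5, p = 3, q = 4)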
52,939 | How to draw a random sample from a Generalized Beta distribution of the second kind | While Xi'an's answer of course already addresses the underlying theory, you may also find the GB2 package in R to be convenient.
In particular, rgb2(n, shape1, scale, shape2, shape3) will produce n random draws.
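A usage sketch (the argument order follows my reading of the GB2 package documentation, so treat the exact signature as an assumption and check ?rgb2; the parameter values are placeholders):
# install.packages("GB2")
library(GB2)
x <- rgb2(1e4, shape1 = 2, scale = 1.5, shape2 = 3, shape3 = 4)  # 10,000 GB2 draws
hist(x, breaks = 50)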
52,940 | Is "Confidence Level" just 1 minus P-Value? | Is "Confidence Level" just 1 minus P-Value?
No.
If the results of a Chi-Square test give a P-Value of 0.01, then can we say that the confidence level in there being a difference is (1-0.01) = 99% confidence?
This is misusing the terms. Words like "confidence" and "p-value" have a specific meaning in frequentist statistics.
If you consider their meanings, it becomes clear why the phrasing you want to use would be wrong.
The p-value is the probability of a result at least as extreme as the observed one, given the null hypothesis is true.
1-pvalue would then be "the probability of a result less extreme than the observed one, given the null hypothesis is true".
That is not remotely the same thing as the probability that the alternative is true given the observed result, which is presumably the kind of thing you are trying to talk about.
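A small simulation makes the distinction concrete (a sketch with an arbitrary conversion-rate setup that is not from the question): when the two versions truly do not differ, p-values below 0.01 still occur about 1% of the time, and that is all the p-value promises.
set.seed(42)
pvals <- replicate(10000, {
  a <- rbinom(1, 1000, 0.1)   # conversions for version A under the null
  b <- rbinom(1, 1000, 0.1)   # conversions for version B, same true rate
  prop.test(c(a, b), c(1000, 1000))$p.value
})
mean(pvals < 0.01)            # about 1% (a false-positive rate), not 1 minus a "confidence"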
We are conducting website landing page tests for our clients. We want to be able to say "We are 99% confidence that version B performed better than version A".
You can make a more-or-less similar statement to that from a Bayesian framework, but its size won't be related to p-values in a direct way.
The folks who run this website use this type of terminology: http://getdatadriven.com/ab-significance-test
Don't accept everything you read on the internet uncritically (that would include answers on stats.stackexchange.com, but at least there's some degree of critical peer-review, which should help with feeling confident that mostly it isn't nonsense). However, from what I see on the page you link to, they don't misuse statistical terms quite in the way you suggest, but misuse them in a somewhat different way (using '99% certain' rather than '99% confident', presumably making it more explicitly a probability statement that's intended, and without any suggestion of a relation to confidence intervals).
Statistical terms are misused all the time on the internet. [I'm fairly sure one could find links to support almost every single wrong idea raised in questions here.]
Is it right to say this with the example I gave?
Not by any reasonable interpretation of the word 'right', given the context.
52,941 | Is "Confidence Level" just 1 minus P-Value? | With Bayesian statistics, you could conceivably turn it into a statement about "99% confident that version B performed better than version A." However, neither the p-value nor the 99% confidence interval will let you conclude that. See Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? for a discussion of the meaning of a confidence interval.
With a p<.01, it would be more precise to say that "chance alone would create these differences between version A and B less than 1% of the time." Of course, I realize that's probably not as persuasive in the real world.
52,942 | If Manhattan distance always performs better on a dataset...what does it mean? | Also use the search terms l1 norm, l1 distance, absolute deviance etc all of which refer to the same thing as manhattan distance.
The properties of the l1-norm (Manhattan distance) can largely be deduced from its shape: it is V-shaped instead of U-shaped like the parabola of the l2-norm (Euclidean distance). The l1-norm can be said to be less sensitive to outliers and more sensitive to small-scale behavior than the l2-norm.
That is, it will tend to "drive things to zero", focusing on small-scale behavior, because it doesn't flatten out around zero like a parabola. It will also be less sensitive to large distances because the slope does not increase with distance from the origin. This can result in a model that fits part of the data very well/exactly but ignores a few dimensions or cases that don't fit with the rest of the data.
I suspect that these properties explain its performance on the data set you are seeing. That is, in this classification problem it is better to have an exact/excellent match on a few of the dimensions and miss some of the other dimensions than to do fairly well on all of the dimensions.
These reasons also explain why the l1-norm is often used in robust regression (where it will ignore outliers) or as a penalty in the lasso algorithm (where it will drive some of the coefficients to zero resulting in a simpler model).
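A tiny illustration of the difference in sensitivity (a sketch with made-up points):
a <- c(0, 0, 0, 0)
b <- c(1, 1, 1, 9)                       # one coordinate is far off
sum(abs(a - b))                          # Manhattan / l1 distance: 12
sqrt(sum((a - b)^2))                     # Euclidean / l2 distance: sqrt(84), about 9.2, dominated by the one large coordinate
dist(rbind(a, b), method = "manhattan")  # the same l1 distance via dist()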
52,943 | $x_{1}...x_{n}$ are independent continuous random variables with common distribution function $F(x)$,compute $E(F(x_{(n)})-F(x_{(1)}))$ | Let's do this the roundabout way (the direct way is what @JohnK's answer remarked).
To consider the expected value, we need to treat the variables involved as random variables. To stress this we write
$$E[F(X_{(n)})-F(X_{(1)})]$$
and we set $F(X_{(n)}) \equiv Z$, $F(X_{(1)}) \equiv Y$,
so we want to calculate
$$E[F(X_{(n)})-F(X_{(1)})] = E(Z) - E(Y)$$
The cumulative distribution function of $X_{(n)}$ is
$F_{X_{(n)}}(x_{(n)}) = [F(x_{(n)})]^n $ and $[F(X_{(n)})]^n$, viewed as a random variable, follows a uniform $U(0,1)$ by the Probability Integral Transform.
So
$$[F(X_{(n)})]^n = U \Rightarrow Z^n = U$$
Applying the change-of-variable formula
$$f_Z(z) = \left|\frac{\partial U}{\partial Z}\right|\cdot f_U(u) = nz^{n-1} \cdot 1= nz^{n-1}, z\in [0,1]$$
Therefore
$$E(Z) = \int_0^1nz^{n-1}zdz = \frac {n}{n+1} \tag{1}$$
The cumulative distribution function of $X_{(1)}$ is
$F_{X_{(1)}}(x_{(1)}) =1- [1-F(x_{(1)})]^n $ and $1- [1-F(X_{(1)})]^n$, viewed as a random variable, also follows a uniform $U(0,1)$.
So
$$1-[1-F(X_{(1)})]^n = U \Rightarrow 1-[1-Y]^n = U$$
Applying the change-of-variable formula
$$f_Y(y) = \left|\frac{\partial U}{\partial Y}\right|\cdot f_U(u) = n(1-y)^{n-1} , y\in [0,1]$$
Therefore
$$E(Y) = \int_0^1n(1-y)^{n-1}ydy = nB(2,n) = \frac {1}{n+1} \tag{2}$$
where $B(2,n)$ is the beta function. See also this derivation, since, indeed, as mentioned in another answer, $Y$ is the minimum order statistic of an i.i.d. sample of standard uniform random variables (and $Z$ is the corresponding maximum).
So
$$E[F(X_{(n)})-F(X_{(1)})] = E(Z) - E(Y) = \frac {n}{n+1} - \frac {1}{n+1} = \frac {n-1}{n+1}$$
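A quick Monte Carlo check of this result (a sketch; any continuous $F$ will do, here the standard normal, and $n$ is arbitrary):
set.seed(123)
n <- 5
sims <- replicate(1e5, {
  x <- rnorm(n)
  pnorm(max(x)) - pnorm(min(x))   # F(x_(n)) - F(x_(1)) with F = pnorm
})
mean(sims)                        # close to (n - 1)/(n + 1) = 4/6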
For the direct way, it is a very good suggestion to study the Probability Integral Transform.
52,944 | $x_{1}...x_{n}$ are independent continuous random variables with common distribution function $F(x)$,compute $E(F(x_{(n)})-F(x_{(1)}))$ | You need to consider the Probability Integral Transform. For ease of notation denote the order statistics from smallest to largest by $Y_1, \ldots, Y_n $
You can afterwards see that
$$E \left[ F \left(Y_n \right)-F \left( Y_1 \right) \right] $$
is basically the expected value of the difference between the maximum and the minimum of a uniform $(0,1)$ distribution.
52,945 | $x_{1}...x_{n}$ are independent continuous random variables with common distribution function $F(x)$,compute $E(F(x_{(n)})-F(x_{(1)}))$ | You can distribute the expectation, giving $E[F(x_{(n)})]-E[F(x_{(1)})]$. Then you need to find the CDFs of the max and min order statistics; from there you can find the expectation of each. To find the CDFs of the max and min order statistics, use the CDF substitution method and use the fact that x_1,...,x_n are independent.
52,946 | Comparison of machine learning algorithms | For classification algorithms this would be a good start: Statistical Comparisons of Classifiers over Multiple Data Sets.
To summarize this excellent paper: perform a Friedman test to determine if there is any significant difference between the classifiers, and follow up with an appropriate post-hoc test if there is:
to compare all classifiers: Nemenyi test
to compare one with all others: Bonferroni-Dunn test
Both post-hoc tests can be visualized neatly in so-called critical difference diagrams.
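A base-R sketch of the first (omnibus) step; the accuracy numbers are invented purely for illustration, and the post-hoc tests live in add-on packages rather than base R:
# rows = data sets, columns = classifiers, cells = e.g. accuracy
acc <- matrix(c(0.81, 0.79, 0.84,
                0.72, 0.70, 0.75,
                0.90, 0.88, 0.91,
                0.65, 0.66, 0.70),
              nrow = 4, byrow = TRUE,
              dimnames = list(paste0("data", 1:4), c("clfA", "clfB", "clfC")))
friedman.test(acc)   # omnibus test across classifiers, blocking on data sets
# if significant, follow up with a Nemenyi (all pairs) or Bonferroni-Dunn
# (one vs. all) post-hoc test, e.g. from a package such as PMCMRplus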
52,947 | Comparison of machine learning algorithms | The notion of which Machine Learning algorithm is best is not universal, rather specific to the problem or the dataset you are dealing with.
In the case of a single dataset or problem, apply all the learning algorithms and check their performance on out-of-sample data. Calculate the root mean square error (RMSE) between the predicted and actual values on the out-of-sample data; the algorithm with the lowest RMSE will be the best, but only for that dataset.
52,948 | Does Stationarity for Time Series extend to Independent Variables? | Stationarity should be sought for both, to avoid any spurious correlations (where some or all variables actually just increase/decrease over time, independent of other factors), and there are methods of correction that can be applied to the entire model (e.g., including a time trend as a regressor) or just to specific variables (e.g., taking first differences of non-stationary variables) to address this if it is a problem.
Stationarity is frequently violated by level variables; for example, the number of Internet users in the world or the amount of pollution generally increases continually. So, if you have one of those as either an independent or a dependent variable, it will cause an issue. Non-stationarity both inflates t-statistics and biases betas. Including a trend often helps to control for that (so that you can say "the effect of the number of Internet users on Y, controlling for time, is predicted to be..."). And again, first differences of the specific variable at fault are also an option.
52,949 | Does Stationarity for Time Series extend to Independent Variables? | Assume you have time series on a variable $Y$ and on a variable $X$, say both non-negative, and based on some idea of yours you believe that there is a reasonable argument that they can be linked by a linear relationship
$$y_t = \beta x_t + u_t \tag{1}$$
Now say that your theoretical argument is basically sound, but it captures how $Y$ relates to $X$ around a positive deterministic time trend, denote it $d_t =1,2,...$. In reality therefore, the true relationship is
$$y_t = \beta x_t + \gamma d_t + v_t \tag{2}$$
where $v_t$ has all the nice properties.
But you are impatient, you don't even graph the series to see how they evolve over time, and you go on and estimate model $(1)$ by OLS. You will obtain
$$\hat \beta = \frac {\sum_{t=1}^Tx_ty_t}{\sum_{t=1}^Tx_t^2}$$
and inserting the true equation $(2)$ for $y_t$ into this, you will get
$$\hat \beta = \beta + \frac {\gamma\sum_{t=1}^Tx_td_t +\sum_{t=1}^Tx_tv_t}{\sum_{t=1}^Tx_t^2}$$
Consider the expected value of the estimator conditional on the regressor series
$$E(\hat \beta \mid X) = \beta + \frac {\gamma\sum_{t=1}^Tx_td_t }{\sum_{t=1}^Tx_t^2}$$
which explodes as $T$ increases, since $d_t$ increases, moving away from the true value of the parameter. This happens because the estimator is forced to incorporate the existence of the time trend into the estimate of $\beta$. In practice, if you start with a sample of length $T$ and then gradually increase the sample length, you will see the OLS estimate of $\beta$ increase as well, making the whole estimation useless.
This is the most basic example to show that stationary and non-stationary data "don't mix".
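This is easy to see in a short simulation (a sketch; the data-generating values are arbitrary):
set.seed(7)
T_ <- 200
d  <- 1:T_                          # deterministic time trend
x  <- abs(rnorm(T_))                # stationary, non-negative regressor
y  <- 2 * x + 0.5 * d + rnorm(T_)   # the true model includes the trend
coef(lm(y ~ x - 1))                 # slope far from 2: it absorbs the omitted trend
coef(lm(y ~ x + d - 1))             # including the trend recovers roughly (2, 0.5)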
It would be instructive to work the reverse case. Assume that $y_t$ is stationary, and you try to regress it on a time trend. Say, the true model is
$$y_t = \alpha + v_t$$ but you specify and estimate
$$y_t = \alpha + \gamma d_t + u_t$$
What will happen here?
ADDENDUM
Responding to conversation in the comments, the essence of the answer above is that if some of our data series are stationary, and some non-stationary, putting them together through an estimation algorithm will not provide meaningful results.
But there exist techniques to transform the data series so that they become stationary, and then we can execute the estimation (since now all data series involved in the estimation have become stationary) and obtain meaningful results. Detrending or differencing a non-stationary series are the most usual such techniques.
Moreover, there exists the phenomenon of co-integration, where, say, two stochastic processes are both non-stationary, but there exists a vector of constants that makes their linear combination weighted by this vector a stationary process. In such cases, regressing the one non-stationary series on the other non-stationary series provides meaningful results, and in some cases it is even to be preferred over the alternative approach of, say, differencing both series to make them both stationary.
52,950 | Does Stationarity for Time Series extend to Independent Variables? | Stationarity means, among other things, that:
All the random variables $X_t$ have the same distribution
All pairs of random variables $(X_t, X_s)$ have a joint distribution that depends only on the difference $t-s$ and not on the individual values of $t$ and $s$: the joint distribution of $(X_1,X_3)$ is the same as the joint distribution of $(X_2,X_4)$.
More generally, the $n$-th order distributions of $(X_t, X_s, X_u, \ldots)$ are the same as the $n$-th order distributions of the random variables $(X_{t+\tau}, X_{s+\tau}, X_{u+\tau}, \ldots)$
If the $X_t$ are all independent random variables, then the first of the above conditions is not satisfied unless we also say, claim, assert, or have reasons for believing, that all the $X_t$ have the same distribution. This is often referred to in abbreviated fashion as iid or i.i.d., which stands for independent and identically distributed random variables. In this case, all the other conditions are automatically satisfied.
In other words,
If the random variables constituting the process or time series are iid random variables, then the process is stationary. If the random variables are independent but not necessarily identically distributed, then the process is non-stationary.
52,951 | Expected value of q given y is weighted average of mean q and y | All you need to know is that the regression of $q$ on $y$ is determined by standardizing both variables, and that their correlation coefficient will then be the slope.
(In particular this result owes nothing to the assumptions that distributions are Normal; the independence of $q$ and $u$ is sufficient. Thus it will be most revealing to obtain it without recourse to any properties of Normal distributions.)
Preliminary Calculations
To standardize a variable, you subtract its expectation and divide by its standard deviation. We will therefore need to compute standard deviations, expectations, and a correlation coefficient.
Because $y=q+u$,
$$\mathbb{E}(y) = \mathbb{E}(q+u) = \mathbb{E}(q) + \mathbb{E}(u) = \alpha + 0 = \alpha,$$
taking care of computing the expectations.
Turn now to the standard deviations. Recall that it's simpler to work with their squares: the variances. For brevity, write $\sigma^2$ for the variance of $q$ and $\tau^2$ for the variance of $u$. Then
$$\text{Var}(y) = \text{Var}(q+u) = \text{Var}(q) + \text{Var}(u) + 2\text{Cov}(u,q) = \sigma^2 + \tau^2 + 0 = \sigma^2 + \tau^2.$$
Finally, the correlation is computed from the covariance:
$$\text{Cov}(y, q) = \text{Cov}(q+u, q) = \text{Cov}(q,q) + \text{Cov}(u,q) = \sigma^2.$$
(Both these calculations used the simplification $\text{Cov}(u,q)=0$ arising from the independence of $u$ and $q$.)
Therefore the standardized variables are $$\eta = (y-\alpha)/\sqrt{\sigma^2+\tau^2}$$ and $$\theta=(q-\alpha)/\sigma.$$
Moreover, the correlation is $$\rho=\sigma^2/\left(\sigma\sqrt{\sigma^2+\tau^2}\right) = \sigma / \sqrt{\sigma^2+\tau^2}.$$
Solution
We have computed everything necessary to regress $q$ against $y$:
$$\mathbb{E}(\theta\ |\ \eta) = \rho\, \eta.$$
(This is a fact about geometry, really: see the "Conclusions" section at https://stats.stackexchange.com/a/71303 for the derivation, which--although it is illustrated there for Normal distributions--still does not require Normality to derive.)
Expanding, and once again exploiting linearity of expectation,
$$\frac{\mathbb{E}(q\ |\ y)-\alpha}{\sigma} = \mathbb{E}(\theta\ |\ \eta) = \rho\, \eta = \frac{\sigma}{\sqrt{\sigma^2+\tau^2}}\left(\frac{y-\alpha}{\sqrt{\sigma^2+\tau^2}}\right) = \frac{\sigma(y-\alpha)}{\sigma^2+\tau^2}.$$
It is the task of ordinary algebra to convert this back to an expression for $\mathbb{E}(q\ |\ y)$ in terms of $y$, because (insofar as $\mathbb{E}(q\ |\ y)$ is concerned) all variables now represent numbers:
$$\mathbb{E}(q\ |\ y) = \frac{\tau^2}{\sigma^2+\tau^2} \alpha + \frac{\sigma^2}{\sigma^2+\tau^2} y.$$
That is Equation (2). Casting an eye back over the calculations should relieve any mystery about where these coefficients came from or what they mean.
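A quick simulation check of Equation (2) (a sketch; the values of alpha, sigma and tau are arbitrary): regressing q on y recovers the two weights.
set.seed(99)
alpha <- 2; sigma <- 1.5; tau <- 1
q <- rnorm(1e5, alpha, sigma)
y <- q + rnorm(1e5, 0, tau)
coef(lm(q ~ y))                                 # intercept and slope estimated from the data
c(tau^2 * alpha, sigma^2) / (sigma^2 + tau^2)   # the theoretical intercept and slope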
52,952 | Expected value of q given y is weighted average of mean q and y | The model implies that $y\sim\mathcal{N}(q,\sigma^2_u)$ and $q\sim\mathcal{N}(a,\sigma^2_q)$. By Bayes' rule:
$$p(q\mid y)\propto p(y\mid q,\sigma^2_u)p(q)$$
Ignoring constant factors (see here for a similar development):
$$\begin{align}p(q\mid y) & \propto \exp\left\{-\frac{(y-q)^2}{2\sigma^2_u}-\frac{(q-a)^2}{2\sigma^2_q}\right\}\\
&=\exp\left\{-\frac{1}{2}\left(\frac{y^2-2yq+q^2}{\sigma^2_u}+\frac{q^2-2qa+a^2}{\sigma^2_q}\right)\right\}\end{align}$$
any term that does not include $q$ can be viewed as a proportionality constant:
$$\begin{align}\qquad\qquad\qquad &\propto\exp\left\{-\frac{1}{2}\frac{-2\sigma^2_q yq+\sigma^2_q q^2+\sigma^2_u q^2-2\sigma^2_u qa}{\sigma^2_u\sigma^2_q}\right\}\\
&=\exp\left\{-\frac{1}{2}\frac{(\sigma^2_q+\sigma^2_u)q^2-2(\sigma^2_u a+\sigma^2_q y)q}{\sigma^2_u\sigma^2_q}\right\}\\
&=\exp\left\{-\frac{1}{2}\frac{q^2-2q\frac{\sigma^2_u a+\sigma^2_q y}{\sigma^2_q+\sigma^2_u}}{\frac{\sigma^2_q\sigma^2_u}{\sigma^2_q+\sigma^2_u}}\right\}\propto \exp\left\{-\frac{1}{2}\frac{\left(q-\frac{\sigma^2_u a+\sigma^2_q y}{\sigma^2_q+\sigma^2_u}\right)^2}{\frac{\sigma^2_q\sigma^2_u}{\sigma^2_q+\sigma^2_u}}\right\}\end{align}$$
Therefore:
$$E(q\mid y)=\frac{\sigma^2_u a+\sigma^2_q y}{\sigma^2_q+\sigma^2_u}
=\left(1-\frac{\sigma^2_q}{\sigma^2_q+\sigma^2_u}\right)a+\frac{\sigma^2_q}{\sigma^2_q+\sigma^2_u}y$$
52,953 | Expected value of q given y is weighted average of mean q and y | Another way, the shortest one ;-)
In general, if $X$ and $Y$ have a bivariate normal distribution, then (Anderson, Theorem 2.5.1):
$$E[X\mid Y]=E[X]+\frac{\text{Cov}(X,Y)}{V[Y]}(Y-E[Y])
=\left(1-\frac{\text{Cov}(X,Y)}{V[Y]}\right)E[X]+\frac{\text{Cov}(X,Y)}{V[Y]}Y$$
(the second equality uses $E[X]=E[Y]$, which holds in this model since $E[y]=E[q]=a$)
i.e. "expected value of X given Y is weighted average of mean X and Y" is a well-known result.
In your model $E[q]=a$, $V[y]=\sigma^2_q+\sigma^2_u$ and $\text{Cov}(y,q)=\sigma^2_q$ (see whuber's answer), so:
$$E[q\mid y]=a+\frac{\sigma^2_q}{\sigma^2_q+\sigma^2_u}(y-a)=
\left(1-\frac{\sigma^2_q}{\sigma^2_q+\sigma^2_u}\right)a+\frac{\sigma^2_q}{\sigma^2_q+\sigma^2_u}y$$
52,954 | Expected value of q given y is weighted average of mean q and y | I think the following argument shows why, though unfortunately it's a bit messy. Much more elegant derivations are certainly out there somewhere, as the linear Gaussian case is the best understood statistical model in existence.
Anyway, we have that:
U~N(0, sigma²)
Q~N(alpha, beta^2)
Y=Q+U.
U and Q are independent.
Because a linear function of normal random variables is itself a normal random variable and due to independence of U and Q it follows that Y | Q ~ N(Q+0,sigma²).
We can now write down the probability density function of Q conditional on Y=y. By Bayes theorem that's:
(pdf of Q * pdf of Y|Q) / (pdf of Y).
I won't write this out because it's very messy with all the Gaussian densities.
Q|Y will be a normal random variable, which means that its mode is its mean. Ignoring the denominator (the normalizing constant), we're left with:
1/(2*pi*sigma*beta) * exp(-Something(Q))
We find the mode of the posterior distribution by choosing Q so as to maximise the density. That's going to be the conditional expected value too, because the mode is the mean for a Gaussian. To do that we can ignore everything except Something(Q) because the rest isn't a function of Q.
If you do the algebra, Something(Q) = 1/2 * ( (y-q)^2/sigma^2 + (q-alpha)^2/beta^2 )
If you differentiate wrt q, set to 0 and solve for q, you'll get:
q=beta^2/(beta^2+sigma^2)*y+sigma^2/(beta^2+sigma^2)*alpha... as required!
52,955 | How does the interpretation of main effects in a Two-Way ANOVA change depending on whether the interaction effect is significant? | There is less to this issue than it seems. The real answer isn't that you cannot interpret the main effects at all, but rather that it is very difficult to interpret them correctly. The reason for the warning not to interpret the main effects is because people will inevitably interpret them incorrectly.
If there isn't an interaction term included in the model, the main effects have a straightforward meaning: is there variation amongst the levels of the factor in question? If there is an interaction in the model, the main effects don't mean that. In fact their meaning is hard to convey and it depends on how the model was fit and how it was tested. In the abstract, I cannot tell you exactly what they mean in any given model. However, interpreting them as you would if there weren't an interaction would be incorrect. What is important for this issue is not whether or not the interaction is significant, but whether or not the interaction was included in the model in the first place.
If the interaction is sufficiently non-significant for your purposes, and you want to test and interpret the main effects, the simplest thing to do would be to drop the interaction and re-fit / re-test the model. Note that this procedure, if not a priori, comes with all the usual caveats about fishing and threats to the validity of the hypothesis tests.
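To see concretely how the meaning of a "main effect" coefficient changes once an interaction is in the model, here is a small R sketch on made-up factorial data (the factor names and effect sizes are invented, and treatment contrasts are assumed):
set.seed(2)
d <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"), rep = 1:20)
d$y <- rnorm(nrow(d), mean = (d$A == "a2") * 0.5 + (d$B == "b2") * 0.3 +
                        (d$A == "a2") * (d$B == "b2") * 1.2)
coef(lm(y ~ A + B, data = d))   # additive model: one A effect, assumed the same at every level of B
coef(lm(y ~ A * B, data = d))   # with the interaction: the A coefficient is the A effect at B = b1 only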
52,956 | How does the interpretation of main effects in a Two-Way ANOVA change depending on whether the interaction effect is significant? | This is an interesting question. Since I don't like gross generalization, I am going to disagree with the suggestion that you should "never" interpret the main effects at all if an interaction is present. Never is just too strong (even if some might argue that there are clear situations where the interaction tells you what you need to know). To do this, I will provide a counter example and a link to a paper in the Journal of Consumer Psychology where this precise question was asked & then answered by the editor.
In the paper, the following question is asked: Can I make conclusions based on an interaction being significant without testing the simple or main effects?
Here is the hypothetical situation that is given:
"A researcher predicts a crossover interaction between two variables (one continuous, the other dichotomous) that is tested via a linear contrast regression model containing the
appropriate main effect and interaction terms. The interaction term receives a significant coefficient. The researcher then concludes that the data support the predicted interaction. Is this an appropriate conclusion? More specifically, given that a significant interaction could occur for data patterns that differ from the one predicted (e.g., a non crossover pattern, a crossover pattern in the opposite direction), is it not necessary to undertake the appropriate simple main effects tests to establish whether the data actually support the differences predicted by the crossover interaction?"
The argument / response that is provided by the editor in favor of always reporting that main effect even when the interaction is present (demonstrating a counter example):
"Yes, it is imperative to defend a statement purported to describe data with a statistic. The scenario you have described for regression would be like obtaining a significant F
test for the A×B interaction in ANOVA and then doing no further investigations — that is, both the plot of the cell means and the tests of simple effects to substantiate claims about precisely which means are significantly different from each other. You are absolutely right that a mere significant regression coefficient associated with an interaction term yields no detailed information about the nature of that interaction. If you see someone trying to do this—make a claim about their data without a statistic to support that claim (regardless of whether in regression or ANOVA or anything else for that matter), nail them."
While others may not, I agree with the argument proposed by the author and find the counter-example compelling enough to reject the hardline claim you described that one should not interpret the main effects if the interaction effects are significant.
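For what it is worth, here is one way the follow-up the editor asks for could look in R; the data are simulated, the group labels are invented, and this is only a sketch of "test the simple slopes to substantiate the claimed crossover", not the paper's own analysis.
set.seed(3)
n <- 200
x <- rnorm(n)                                      # continuous predictor
g <- rep(c("g1", "g2"), each = n / 2)              # dichotomous variable
y <- ifelse(g == "g1", 0.8, -0.8) * x + rnorm(n)   # built-in crossover pattern
summary(lm(y ~ x * g))$coefficients                # a significant x:g coefficient flags an interaction
coef(summary(lm(y ~ x, subset = g == "g1")))       # simple slope in g1 ...
coef(summary(lm(y ~ x, subset = g == "g2")))       # ... and in g2, to confirm the crossover direction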
52,957 | How does the interpretation of main effects in a Two-Way ANOVA change depending on whether the interaction effect is significant? | When you have an interaction, the real issue is whether the main effects (significant or not) are “descriptive” or “misleading”.
Here is an example of data with a significant interaction and “descriptive” main effect for Task Presentation (Computer does better than Paper overall, and for Easy Tasks and for Hard tasks)
Here are two examples of data with a significant interaction and “misleading” main effect for Task Presentation (computer does better than paper overall, but not for both Easy and Hard tasks)
and here is an example of a null main effect that is “misleading” because although there is no main effect of Task Presentation, there is a Task Presentation effect for both Easy and Hard tasks.
Kind of a long answer, but it is an important issue.
Here is the PowerPoint file that contains these examples, and the course.
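Since the slides themselves are not reproduced here, the same kinds of patterns can be sketched with guessed cell means in R (the numbers below are invented, not the ones in the PowerPoint); interaction.plot() from base R draws the cell-means plots:
task <- rep(c("Easy", "Hard"), each = 2)
mode <- rep(c("Computer", "Paper"), times = 2)
descriptive <- c(80, 70, 60, 50)   # Computer > Paper overall, and within Easy and within Hard
misleading  <- c(80, 60, 55, 65)   # Computer > Paper overall, but Paper wins on Hard tasks
interaction.plot(task, mode, descriptive, ylab = "Score", trace.label = "Presentation")
interaction.plot(task, mode, misleading,  ylab = "Score", trace.label = "Presentation")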
52,958 | The value of adding the ROC graph if the AUC is given | I usually give the ROC plot but not the AUC: For my applications it is usually clear that either a specific or a sensitive recognition is needed. The ROC is different for classifiers that are specific but not sensitive vs. sensitive but not specific, while the AUC hides this information.
Besides, one can put a whole lot of further information into the plot, e.g. color-coding the thresholds (check whether the classifier is well calibrated if the primary output is posterior probability), model stability (after resampling validation), or confidence regions for a chosen classifier (if one chooses a threshold). Finally, you can even put "extended" measures of sensitivity and specificity which do not require the thresholding @FrankHarrell fights against, e.g. it is possible to extend the concept of sensitivity and specificity and the concept behind Brier's score to yield such measures.
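As an illustration of how little machinery the plot needs, here is a minimal base-R sketch that computes the ROC points directly from a simulated score (the data and the score model are invented), so that threshold colour-coding or confidence regions could be layered on top:
set.seed(4)
truth <- rbinom(200, 1, 0.4)                                   # 1 = positive class
score <- rnorm(200, mean = ifelse(truth == 1, 1, 0))           # invented classifier output
thr <- sort(unique(score), decreasing = TRUE)                  # candidate thresholds
tpr <- sapply(thr, function(t) mean(score[truth == 1] >= t))   # sensitivity at each threshold
fpr <- sapply(thr, function(t) mean(score[truth == 0] >= t))   # 1 - specificity at each threshold
plot(fpr, tpr, type = "l", xlab = "1 - specificity", ylab = "sensitivity")
x <- c(0, fpr); y <- c(0, tpr)
sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)                 # trapezoidal AUC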
52,959 | The value of adding the ROC graph if the AUC is given | I have not seen a single example where the graph changes our actions or the way we think. I think the ink:information ratio in an ROC graph is enormous. But the worst thing about it is that it tempts us to try to select a cutoff for the predicted risk, which is arbitrary, and inconsistent with optimum decision making.
Beware of the word classifier which implies discarding continuous information.
52,960 | The value of adding the ROC graph if the AUC is given | The ROC curve is the specificity/sensitivity plot; the AUC is the Area Under Curve.
To be brief, the ROC curve can be interesting because it allows comparison of the sensitivity/specificity behaviour of the model. More simply:
In symbols: knowing the ROC curve $\{(x,y)\}\subset\mathbb{R}^2$ determines the AUC $z$, but knowing $AUC = z$ does not determine the ROC curve; many different curves share the same area.
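A tiny illustration of that second implication failing: the two idealized curves below (not fitted to any data, just convenient formulas) both have AUC = 2/3 but very different sensitivity/specificity trade-offs.
fpr  <- seq(0, 1, length.out = 200)
roc1 <- sqrt(fpr)             # AUC = 2/3, strong at high specificity
roc2 <- pmin(1, 1.5 * fpr)    # AUC = 2/3 as well, but a very different shape
plot(fpr, roc1, type = "l", xlab = "1 - specificity", ylab = "sensitivity")
lines(fpr, roc2, lty = 2)
legend("bottomright", c("curve 1", "curve 2"), lty = 1:2)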
52,961 | The value of adding the ROC graph if the AUC is given | There is great value in showing the entire ROC, especially when comparing two different classifiers, as it helps us to see whether different curves cross each other. One is not superior to the other, overall, if they cross - see Figure 3 from
Park, S. H., Goo, J. M., & Jo, C.-H. (2004). Receiver Operating Characteristic (ROC) Curve: Practical Review for Radiologists. Korean Journal of Radiology, 5(1), 11–18.
In our own study, we noticed this behaviour for biomarkers that have similar performance, and partial AUCs can be studied to demonstrate their utility for particular applications e.g. in high specificity regions for early detection of Alzheimer's disease.
Citation: Raamana, P. R., Weiner, M. W., Wang, L., & Beg, M. F. (2015). Thickness network features for prognostic applications in dementia. Neurobiology of Aging, 36, S91–S102.
Hope that helps make the case for presenting the ROC always.
52,962 | Is there a better way than side-by-side barplots to compare binned data from different series | I agree with the principle that using more detail, as in looking at the entire distributions or sets of quantiles, would be much better if the data were available. Conversely, converting what you have to quartiles just discards yet more information and is not a good idea here.
You are right that side-by-side or back-to-back bar charts are both popular. In the case of age distribution by sex the latter is often called a population pyramid, but it's a very inefficient design for showing differences (or ratios for that matter) of distribution, as it obliges readers to make comparisons between bars pointing in different directions. Surprisingly few texts make this very simple point about the limitations of pyramids. The impression is that using this kind of graph is a custom or ritual passed on between generations.
For this kind of age-sex data, the context is that rather small differences or ratios are often of interest and importance, as when, say, the number of people in the oldest category is 2% or 3%, so you want to be able to see that easily. For any kind of data, indeed, that's a useful feature.
A competitive alternative is therefore just a (Cleveland) dot chart. For this example I just guessed roughly at your data from your own displays.
Small points of importance:
Symbols such as o and + tolerate overlap well.
A dot chart is compatible with e.g. logarithmic scale when that makes sense in a way that a bar chart isn't.
A variant on this design connects the data points with explicit horizontal line segments or even arrows.
We have here just two series, but the dot chart could show more. Naturally, the chart would get more crowded and be more difficult to interpret, but that is true of any alternative design as well.
You accepted the Excel defaults of "Series 1" and "Series 2" and I copied you. It's not your question, but it's still immensely better practice to reach in and use informative text.
For another example see How to best visualize differences in many proportions across three groups?
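For readers who want to try this, a minimal base-R version of such a dot chart, built by hand with plot() and points() since the bins and percentages below are invented stand-ins for the real data:
bins <- c("0-9", "10-19", "20-29", "30-39", "40-49", "50+")   # invented age bins
s1 <- c(5, 12, 25, 30, 18, 10)                                # invented percentages, series 1
s2 <- c(8, 15, 22, 27, 17, 11)                                # invented percentages, series 2
plot(s1, seq_along(bins), pch = 1, xlim = range(s1, s2), yaxt = "n",
     xlab = "Percent", ylab = "")
points(s2, seq_along(bins), pch = 3)
axis(2, at = seq_along(bins), labels = bins, las = 1)
legend("topright", c("Series 1", "Series 2"), pch = c(1, 3))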
52,963 | Is there a better way than side-by-side barplots to compare binned data from different series | The problem with bars is they don't overlay well. Dots are one alternative and lines are another. If you have the full data there are still others (box plots, violin plots, ...). Nick Cox's answer shows dots, and it's worth highlighting lines in this case since it's so similar to the frequency polygon use.
I don't know why it's called a "polygon" -- it's just the connected tops of histogram bars, which allows overlaying without much obscuring.
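A minimal sketch of the overlay idea in base R, using invented bin midpoints and counts:
mids <- c(5, 15, 25, 35, 45, 55)          # invented bin midpoints
a <- c(4, 11, 23, 31, 20, 11)             # invented counts, series A
b <- c(7, 16, 21, 26, 18, 12)             # invented counts, series B
plot(mids, a, type = "b", pch = 1, ylim = range(a, b), xlab = "Value", ylab = "Count")
lines(mids, b, type = "b", pch = 3)
legend("topright", c("Series A", "Series B"), pch = c(1, 3), lty = 1)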
52,964 | Understanding the role of the chi-squared distribution in the confidence interval for the variance | Variance is not normally distributed, because variance is the average of the squared deviations of each datum from the mean of the distribution. If all data points in your dataset are identical, then the deviations would each be zero, and so would the squared deviations and their average. Thus, $0$ is the lowest variance possible. On the other hand, the normal distribution ranges from $-\infty$ to $\infty$. Therefore, variance cannot be normally distributed.
The chi-squared distribution is related to $z$-scores. A $z$-score is a quantile of the standard normal distribution. That is, it is the value of a data point from a normal distribution with mean $0$ and variance $1$ (e.g., if the distribution was standardized first). The distribution of $z$-scores that have been squared is $\chi^2_\text{df=1}$. To understand this connection more fully, let's examine the formula for the variance:
$$
s^2 = \frac{\sum_{i=1}^N(x_i-\bar x)^2}{N-1}
$$
If you were to multiply both sides by $(N-1)$ (as in the numerator of the middle of your top set of inequalities), then you simply have a sum of squares. The sum of squared deviations is distributed as chi-squared. In other words, the squaring already exists in $s^2(N-1)$, and so you need a distribution that accounts for that. (To answer one of your specific questions at this point, it is not the number of standard deviations of something from your mean.)
Now if you want a two-sided $1-\alpha$ confidence interval for anything (including this as a special case), you find the quantiles that correspond to the $\alpha/2$ percentile and the $1-\alpha/2$ percentile. In this case, you do that for the appropriate chi-squared distribution, which is chi-squared (since these are sums of squared deviations as noted above) with $\text{df} = N-1$. This value is then scaled as described in your second set of inequalities. (As to how we got from the first set to the second set, it is just algebra.)
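Putting those two quantiles into code, assuming a small made-up sample and $\alpha = 0.05$:
x <- c(4.1, 5.3, 3.8, 4.9, 5.6, 4.4, 5.1, 4.7)    # made-up sample
n <- length(x); s2 <- var(x); alpha <- 0.05
lower <- (n - 1) * s2 / qchisq(1 - alpha / 2, df = n - 1)
upper <- (n - 1) * s2 / qchisq(alpha / 2, df = n - 1)
c(lower = lower, s2 = s2, upper = upper)          # 95% CI for the population variance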
52,965 | Understanding the role of the chi-squared distribution in the confidence interval for the variance | The chi squared distribution is indeed related to the normal distribution. Specifically, a chi squared variable with N degrees of freedom is equivalent to the distribution of the sum of N squared independent standard normal random variables, which would be the same as the sum of N squared independent Z scores from a normal population.
Therefore, to answer your first question, no, the sample variance is NOT normally distributed. As for your second question about why it's chi-SQUARED... I have no idea, but if a random standard normal variable is represented by X, then $X^2$ is distributed chi-squared with 1 degree of freedom. For me at least, I always think that the chi-squared symbol looks like $X^2$, so that's my only input for this question.
As far as interpretation: since the chi-squared is the sum of squared independent standard normals, it is essentially the distribution of the sum of squared errors for N predictions/estimates.
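A quick simulation makes that connection visible: sums of N squared standard normals follow the chi-squared density with N degrees of freedom (base R sketch, N chosen arbitrarily):
set.seed(5)
N <- 5                                        # arbitrary degrees of freedom
ss <- replicate(1e4, sum(rnorm(N)^2))         # many sums of N squared standard normals
hist(ss, breaks = 50, freq = FALSE, main = "", xlab = "sum of squared z-scores")
curve(dchisq(x, df = N), add = TRUE)          # chi-squared(N) density on top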
52,966 | How to extract values of the forecasted times series of auto.arima [closed] | You didn't provide your data. So I am guessing what you are looking for: I think you want to get something like this:
> library(forecast)
> fit=Arima(WWWusage,c(3,1,0))
> AA11a<-forecast(fit)
> AA11a$lower
80% 95%
[1,] 215.7393 213.6634
[2,] 209.9265 205.0016
[3,] 203.8380 196.1947
[4,] 198.3212 188.2489
[5,] 193.2807 180.8498
[6,] 188.3324 173.4858
[7,] 183.3651 166.0860
[8,] 178.5027 158.8474
[9,] 173.8431 151.8879
[10,] 169.3780 145.1874
> AA11a$upper
80% 95%
[1,] 223.5823 225.6582
[2,] 228.5332 233.4581
[3,] 232.7151 240.3585
[4,] 236.3756 246.4479
[5,] 240.2458 252.6768
[6,] 244.4246 259.2713
[7,] 248.6473 265.9264
[
[9,] 256.7919 278.7471
[10,] 260.7719 284.9625
> AA11a$upper[1,2]
95%
225.6582
> AA11a$lower[,2]
[1] 213.6634 205.0016 196.1947 188.2489 180.8498 173.4858 166.0860 158.8474
[9] 151.8879 145.1874
>
52,967 | How to extract values of the forecasted times series of auto.arima [closed] | To extract the mean of the prediction interval as a numeric vector use:
> as.numeric(AA11a$mean)
[1] 219.6608 219.2299 218.2766 217.3484 216.7633 216.3785 216.0062 215.6326 215.3175
[10] 215.0749
The mean itself is a ts object
> class(AA11a$mean)
[1] "ts"
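If you want the point forecasts and the 95% limits together, one option (reusing the AA11a object from the previous answer, and the fact that the second column of $lower/$upper holds the 95% bounds) is to bind the pieces into a data frame:
fc <- data.frame(mean = as.numeric(AA11a$mean),
                 lo95 = as.numeric(AA11a$lower[, 2]),
                 hi95 = as.numeric(AA11a$upper[, 2]))
head(fc)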
52,968 | Present numerical results with variable system parameters | Because contour plots--especially 3D contour plots--are usually difficult to interpret and plots of $I$ against $u$ are familiar to physicists, consider a small multiple of such plots where $A$ and $B$ range through selected values.
Experimentation is in order. You might, for instance, overlay multiple graphs for fixed $A$:
In each of these plots $B$ varies through the sequence $(-6, -1, 0, 1, 2)$ with color denoting the value of $B$; a legend would be helpful. (The first value of $B$ is drawn in blue; the next values are drawn in red, gold, green, and so on.)
You could also overlay multiple graphs for fixed $B$:
In each of these plots $A$ varies through the sequence $(-2, 0, 2, 4, 6, 8)$. Again, a legend would help.
Because these tableaux convey the same information in different ways, if space is available you might publish both versions.
Notice how, to assist visual comparison across the cells of each tableau, identical scales and ranges on the axes were used. Sometimes values vary so much this is not feasible, in which case you need to draw the reader's attention to the changes in the scales.
Another thing worth considering is how, if at all, to standardize the plots. It might be more meaningful physically, for instance, to scale them all so that the minimum $I(1/2)$ is equal to a constant value. For other purposes you might standardize them to make their slopes at a distinguished value, such as $I^\prime(1) = 1/\left(\sqrt{e} \sqrt{A-e B+e}\right)$, equal to a constant.
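If it helps, the layout itself is easy to script; the sketch below uses a placeholder function f in place of the real $I(u;A,B)$ (which you would substitute), and a shared ylim so the panels stay comparable:
f <- function(u, A, B) exp(-u) * (1 + A * u + B * u^2)   # placeholder only: substitute the real I(u; A, B)
As <- c(-2, 0, 2, 4, 6, 8); Bs <- c(-6, -1, 0, 1, 2)
u <- seq(0, 2, length.out = 200)
ylim <- range(sapply(As, function(A) sapply(Bs, function(B) f(u, A, B))))   # shared scale across panels
op <- par(mfrow = c(2, 3), mar = c(3, 3, 2, 1))
for (A in As) {
  plot(u, f(u, A, Bs[1]), type = "l", ylim = ylim, xlab = "u", ylab = "I", main = paste("A =", A))
  for (i in seq_along(Bs)[-1]) lines(u, f(u, A, Bs[i]), col = i)
}
par(op)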
52,969 | Present numerical results with variable system parameters | What would be "typical" values? What would be extreme values?
You can think of I as a function of three variables $I(u,A,B)$. Since $u, A, B \in \mathbb{R}$, it's not that easy to represent. You can either make a 1d plot of $I(u;A,B)$ vs $u$ for fixed values of $A$ and $B$, where you then have to take some representative values of $A$ and $B$. You could also make 2d color or contour plots where only $B$ is fixed, or only $A$, so you'd have $I(u,A;B)$.
One thing you can do for a 3d plot where the axes are for $u$, $A$ and $B$ would be a constant-value contour, i.e. make a 3d plot of the surface at which $I(u,A,B) = c$ for some constant $c$.
52,970 | Multicollinearity when adding a confounding variable | First, just because two variables are fairly highly correlated does not mean they are collinear to a problematic degree. Problematic collinearity is best examined with condition indices. See the work of David Belsley or see my dissertation Collinearity Diagnostics in Multiple Regression: A Monte Carlo Study.
Second, if you did find high collinearity, you could solve it with methods such as ridge regression, which are biased but have better variance when there is collinearity.
Third, the effects of collinearity are seen in the variances of the parameter estimates, not in the parameter estimates themselves. So, when you see that the parameter estimate for shark attacks is greatly reduced, that is a sign of confounding.
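For readers who want a quick check in R, here is a rough sketch on simulated data (variable names and effect sizes are invented): a crude condition number and the VIF for shark attacks each take a line or two. This is only an approximation to Belsley's full condition-index diagnostics.
set.seed(6)
n <- 200
temp <- rnorm(n, 25, 5)
sharks <- 0.8 * scale(temp)[, 1] + rnorm(n, sd = 0.6)        # correlated with temperature by construction
sales <- 2 * temp + sharks + rnorm(n, sd = 5)
fit <- lm(sales ~ sharks + temp)
kappa(scale(model.matrix(fit)[, -1]))                        # rough condition number of the scaled predictors
1 / (1 - summary(lm(sharks ~ temp))$r.squared)               # VIF for sharks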
52,971 | Multicollinearity when adding a confounding variable | If you cannot use a linear regression with a controlling variable (or, relatedly, the partial correlation), another option is to stratify on the temperature variable.
To do so, categorize temperature into multiple, say five, categories. Then you run five conditional regression models, i.e. for the units in each category of temperature, one model relating shark attacks and sales. Use the quintiles of temperature as thresholds for categorization.
The net effect of shark attacks is found by taking the weighted average of the regression parameter across the categories of temperature (i.e. weighted by the category frequencies of temperature).
I believe there is an argument by Cochran that says that stratifications of this type can remove 90% of the bias due to confounding.
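A rough R sketch of the whole procedure on simulated data (all numbers invented): cut temperature at its quintiles, fit the model within each stratum, and average the slopes with the stratum frequencies as weights.
set.seed(7)
n <- 500
temp <- runif(n, 15, 35)
sharks <- rpois(n, exp(0.1 * (temp - 25)))            # driven by temperature (the confounder)
sales <- 50 + 3 * temp + rnorm(n, sd = 10)            # no true shark effect here
strata <- cut(temp, quantile(temp, seq(0, 1, 0.2)), include.lowest = TRUE)
slopes <- sapply(split(data.frame(sales, sharks), strata),
                 function(d) coef(lm(sales ~ sharks, data = d))["sharks"])
sum(slopes * table(strata) / n)                       # stratified (weighted-average) slope, near 0
coef(lm(sales ~ sharks))["sharks"]                    # crude slope, inflated by confounding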
52,972 | Why is the average the right way to deal with Gaussian noise? | There are several senses in which the sample mean might be regarded as optimal as an estimator of $\theta$, some of which are optimal more generally than for the normal, and then there are senses in which it's optimal specifically for the normal.
If we take the Gauss-Markov theorem, the sample mean is the best linear unbiased estimator of the population mean. (It's not always the case that a linear estimator is desirable, but linear estimators are intuitive and have some nice properties - and if you want an unbiased linear estimator of the population mean, the sample mean is the best one). This doesn't rely on normality.
Now, desirable properties of estimators might include having small variance, having small (or zero) bias, perhaps approaching $\theta$ as $n$ becomes large (consistency), 'using all the information in the data' (sufficiency).
When bias isn't zero, a useful way to compare estimators might be by mean square error (which turns out to be variance plus the square of the bias). When the bias is zero, you might compare them by variance. These are far from the only possible desirable properties (you might prefer small absolute error over small squared error for example).
When the data are normal, the sample mean is the maximum likelihood estimator.
MLE's are generally (under some conditions) consistent, sufficient, and asymptotically normal (glossing over many details).
The sample mean is unbiased for the population mean (again, omitting some conditions), so one thing we might be interested in for the normal case is how its variance compares to other possible estimators.
Asymptotically, MLEs will achieve the Cramer-Rao lower bound, so in large samples, they're often going to be as good as you can do. In small samples they may be biased (in fact, MLEs are usually biased), and may not be minimum MSE in small samples.
In the case of the normal, the sample mean achieves the Cramer-Rao lower bound at every $n$ (the bound in this case is simply $\sigma^2/n$); if you like minimum variance unbiased estimation, you can't do better.
Many books cover these issues at a fairly elementary level, not requiring much more than some basic calculus and algebra.
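A small simulation illustrates the last two points: for normal data the variance of the sample mean sits essentially at the Cramer-Rao bound $\sigma^2/n$, while the sample median (brought in here purely for comparison; it is another unbiased location estimator under normality) has a noticeably larger variance. Sketch in R, with arbitrary n and sigma:
set.seed(8)
n <- 25; sigma <- 1                                           # arbitrary choices
means <- replicate(5000, mean(rnorm(n, sd = sigma)))
medians <- replicate(5000, median(rnorm(n, sd = sigma)))
c(var_mean = var(means), var_median = var(medians), crlb = sigma^2 / n)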
52,973 | Why is the average the right way to deal with Gaussian noise? | Here's the argument that the average is the maximum-likelihood estimator (MLE) for the parameter $\theta$.
Suppose we are given observations $x_1,\dots,x_n$ of the r.v.s $X_1,\dots,X_n$. Since the $X_i$ are continuous, the likelihood of $\hat{\theta}$, given $x_1,\dots,x_n$, is a product of densities, which up to a constant factor is
$$L(\hat{\theta}) = \prod_i f_{X_i}(x_i\mid\hat{\theta}) = \prod_i f_{Y}(x_i-\hat{\theta}) \propto \prod_i e^{-(x_i-\hat{\theta})^2/2\sigma^2}.$$
Thus, up to an additive constant, the log-likelihood is
$$\log L(\hat{\theta}) = -\frac{1}{2\sigma^2} \sum_i (x_i-\hat{\theta})^2.$$
We want to maximize this value. When maximizing, we can ignore the constant term out front, so our goal is: given $x_1,\dots,x_n$, find $\hat{\theta}$ that minimizes the sum
$$f(\hat{\theta}) = \sum_i (x_i-\hat{\theta})^2.$$
We can find the minimum by taking the first derivative and setting it to zero. Notice that
$$f'(\hat{\theta}) = -2 \sum_i (x_i - \hat{\theta}).$$
Setting $f'(\hat{\theta})$ to zero yields the condition
$$\sum_i (x_i - \hat{\theta}) = 0.$$
Re-arranging terms, we find
$$\hat{\theta} = \frac{1}{n} \sum_i x_i.$$
In other words, the value of $\hat{\theta}$ that maximizes the likelihood is precisely the average of the observed values of $X_1,\dots,X_n$.
52,974 | Why is the average the right way to deal with Gaussian noise? | A quick remark in addition to the excellent answers above: the type of situation you describe is often discussed in classical test theory. In CTT you regard $X$ as independent repeated measures of the true score $\theta$. The major difference between your question and CTT is that $\theta$ is assumed to have a distribution in CTT (i.e. the true score is not constant in the population). A good starting point is: McDonald, R. P. (1999). Test Theory: A Unified Treatment.
Hope this helps. | Why is the average the right way to deal with Gaussian noise? | A quick remark in addition to the excellent answers above: the type of situation you describe is often discussed in classical test theory. In CTT you regard $X$ as independent repeated measures of the | Why is the average the right way to deal with Gaussian noise?
A quick remark in addition to the excellent answers above: the type of situation you describe is often discussed in classical test theory. In CTT you regard $X$ as independent repeated measures of the true score $\theta$. The major difference between your question and CTT is that $\theta$ is assumed to have a distribution in CTT (i.e. the true score is not constant in the population). A good starting point is: McDonald, R. P. (1999). Test Theory: A Unified Treatment.
Hope this helps. | Why is the average the right way to deal with Gaussian noise?
A quick remark in addition to the excellent answers above: the type of situation you describe is often discussed in classical test theory. In CTT you regard $X$ as independent repeated measures of the |
52,975 | Developing a prediction model for bus stops | It seems like simple logistic regression would work well. Hopefully this matches well to the data you have. I've tried to lay off jargon as much as possible.
Let's confine our analysis to a single bus route for simplicity (you can simply repeat this procedure for other routes). The dependent/predicted variable you are trying to measure is a binary variable $y$; that is, $y=0$ if the bus misses the stop, and $y=1$ if the bus makes the stop.
From your GPS data you can extract the value of $y$ for a bunch of previous bus runs. Let's say you have $N$ observations in this data. This is best conceptualized as a vector/list, $y= \{ y_1, y_2, ... ,y_N\} $. For example, $y= \{ 0,1,0,0,1\}$ would correspond to miss, stop, miss, miss, stop, for five observations.
Now you want to develop a series of predictor/independent variables that you can use to predict $y$ for future observations. AccidentalStatistician has mentioned a few possibilities. Here are a few simple ones:
Whether the bus stopped or missed the previous bus stop (call this variable $x_1$). This is also a binary variable. This could potentially be very informative, for example $x_1 = \{ 0,1,0,0,1\}$, would give evidence that $y=x_1$. Of course, there is no reason to just check the previous bus stop. To be complete, you could try using ALL previous bus stops on the line as predictor variables. Each will be a binary vector like $x_1$ above.
The distance between the bus and the bus in front of it (call this $x_2$). In contrast to $x_1$, this variable is continuous, and could look something like $x_2 = \{0.53,0.9,0.72,0.81,0.62 \}$ where each entry corresponds to the distance (e.g., in miles) between the two buses at the time of the stop (or averaged over some period before the stop). It might be more informative to measure this distance in minutes than miles.
Time of day. $x_3 = \{8.5,9.2,10.1,11.2,14.9\}$ in hours.
Day of year... You get the recipe by now hopefully. Feel free to come up with more ideas.
The important step is figuring out what you think might be important in your data and distilling it into some simple form (e.g. zeros and ones).
Once you have your data in this form, you can run a logistic regression to predict the probability that $y=1$ for any observed values of $x_1, x_2, ..., x_p$ (where $p$ is the number of independent variables). If you have just one independent variable, $x$, the result will look something like this (image source)
Here, the black dots are your observed values of $y$ plotted against your observed values of $x$. The red line is the predicted probability of $y=1$ for an arbitrary value of $x$ (here $x$ is a continuous variable).
The following source explains how to fit a logistic model in R: LINK. I recommend the following textbook as an introduction to logistic regression and multiple linear regression (which has pretty similar motivations) LINK. And the following book for advanced understanding of logistic regression and other classification methods: LINK. This last reference will cover a lot of really important methods for variable selection -- it is really easy to come up with too many independent variables and over-fit your data: LINK. Don't do this! | Developing a prediction model for bus stops | It seems like simple logistic regression would work well. Hopefully this matches well to the data you have. I've tried to lay off jargon as much as possible.
Let's confine our analysis to a single bus | Developing a prediction model for bus stops
It seems like simple logistic regression would work well. Hopefully this matches well to the data you have. I've tried to lay off jargon as much as possible.
Let's confine our analysis to a single bus route for simplicity (you can simply repeat this procedure for other routes). The dependent/predicted variable you are trying to measure is a binary variable $y$; that is, $y=0$ if the bus misses the stop, and $y=1$ if the bus makes the stop.
From your GPS data you can extract the value of $y$ for a bunch of previous bus runs. Let's say you have $N$ observations in this data. This is best conceptualized as a vector/list, $y= \{ y_1, y_2, ... ,y_N\} $. For example, $y= \{ 0,1,0,0,1\}$ would correspond to miss, stop, miss, miss, stop, for five observations.
Now you want to develop a series of predictor/independent variables that you can use to predict $y$ for future observations. AccidentalStatistician has mentioned a few possibilities. Here are a few simple ones:
Whether the bus stopped or missed the previous bus stop (call this variable $x_1$). This is also a binary variable. This could potentially be very informative, for example $x_1 = \{ 0,1,0,0,1\}$, would give evidence that $y=x_1$. Of course, there is no reason to just check the previous bus stop. To be complete, you could try using ALL previous bus stops on the line as predictor variables. Each will be a binary vector like $x_1$ above.
The distance between the bus and the bus in front of it (call this $x_2$). In contrast to $x_1$, this variable is continuous, and could look something like $x_2 = \{0.53,0.9,0.72,0.81,0.62 \}$ where each entry corresponds to the distance (e.g., in miles) between the two buses at the time of the stop (or averaged over some period before the stop). It might be more informative to measure this distance in minutes than miles.
Time of day. $x_3 = \{8.5,9.2,10.1,11.2,14.9\}$ in hours.
Day of year... You get the recipe by now hopefully. Feel free to come up with more ideas.
The important step is figuring out what you think might be important in your data and distilling it into some simple form (e.g. zeros and ones).
Once you have your data in this form, you can run a logistic regression to predict the probability that $y=1$ for any observed values of $x_1, x_2, ..., x_p$ (where $p$ is the number of independent variables). If you have just one independent variable, $x$, the result will look something like this (image source)
Here, the black dots are your observed values of $y$ plotted against your observed values of $x$. The red line is the predicted probability of $y=1$ for an arbitrary value of $x$ (here $x$ is a continuous variable).
The following source explains how to fit a logistic model in R: LINK. I recommend the following textbook as an introduction to logistic regression and multiple linear regression (which has pretty similar motivations) LINK. And the following book for advanced understanding of logistic regression and other classification methods: LINK. This last reference will cover a lot of really important methods for variable selection -- it is really easy to come up with too many independent variables and over-fit your data: LINK. Don't do this! | Developing a prediction model for bus stops
It seems like simple logistic regression would work well. Hopefully this matches well to the data you have. I've tried to lay off jargon as much as possible.
Let's confine our analysis to a single bus |
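To make the fitting step above concrete, here is a hedged R sketch with simulated stand-ins for $y$, $x_1$ (previous stop made?), $x_2$ (distance to the bus in front) and $x_3$ (time of day); the coefficients used to generate the data are arbitrary.
set.seed(2)
n  <- 500
x1 <- rbinom(n, 1, 0.6)                                 # did the bus make the previous stop?
x2 <- runif(n, 0.2, 2)                                  # distance to the bus in front (miles)
x3 <- runif(n, 6, 22)                                   # time of day (hours)
y  <- rbinom(n, 1, plogis(-1 + 1.5 * x1 + 0.8 * x2 - 0.05 * x3))
fit <- glm(y ~ x1 + x2 + x3, family = binomial)         # logistic regression
summary(fit)                                            # coefficients are on the log-odds scale
predict(fit, newdata = data.frame(x1 = 1, x2 = 0.5, x3 = 8), type = "response")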
52,976 | Developing a prediction model for bus stops | In order to predict any outcome (bus stop in your case), you need some information other than the outcome you want to predict. These variables are often called predictors / covariates / independent variables.
So the answer to your question depends on what information you have.
1. If the bus GPS signal and door-open signal are the only ones you have, your predictors can include
historical data on the route
bus stop data in the morning (from home to work) to predict after work stop
data on the previous stops (if real time)
In this case, you can probably find a correlation between pick-up stops and drop-off stops, i.e. the people from the same pick-up sites are likely to get off at the same drop-off stops. So if the bus did not stop at a particular pick-up site, then the corresponding drop-off site may be skipped. You can use logistic regression for this purpose. You probably cannot model the pick-up stops in this case.
If real-time, you can also use the previous stop information on the same route. The method for real time modelling is Markov Chain Monte Carlo but you can use regression if that is beyond your knowledge.
2. If you have other information such as
day of the week
time of the day
# of people on the drop-off bus
they can also be used in a regression as predictors.
In short, you need Markov Chain Monte Carlo if you know enough statistics and logistic regression otherwise. Your predictors will be anything you think is relevant. | Developing a prediction model for bus stops | In order to predict any outcome (bus stop in your case), you need some information other than the outcome you want to predict. These variables are often called predictors / covariates / independent va | Developing a prediction model for bus stops
In order to predict any outcome (bus stop in your case), you need some information other than the outcome you want to predict. These variables are often called predictors / covariates / independent variables.
So the answer to your question depends on what information you have.
1. If the bus GPS signal and door-open signal are the only ones you have, your predictors can include
historical data on the route
bus stop data in the morning (from home to work) to predict after work stop
data on the previous stops (if real time)
In this case, you can probably find a correlation between pick-up stops and drop-off stops, i.e. the people from the same pick-up sites are likely to get off at the same drop-off stops. So if the bus did not stop at a particular pick-up site, then the corresponding drop-off site may be skipped. You can use logistic regression for this purpose. You probably cannot model the pick-up stops in this case.
If real-time, you can also use the previous stop information on the same route. The method for real time modelling is Markov Chain Monte Carlo but you can use regression if that is beyond your knowledge.
2. If you have other information such as
day of the week
time of the day
# of people on the drop-off bus
they can also be used in a regression as predictors.
In short, you need Markov Chain Monte Carlo if you know enough statistics and logistic regression otherwise. Your predictors will be anything you think is relevant. | Developing a prediction model for bus stops
In order to predict any outcome (bus stop in your case), you need some information other than the outcome you want to predict. These variables are often called predictors / covariates / independent va |
52,977 | Developing a prediction model for bus stops | I'm assuming, from your description of your data, that you effectively have a perfect record of where the bus opens its doors. So, for predicting what a bus will do at a particular stop, your data is what the bus did on previous shifts (entire routes), and what the bus did for previous stops on the current shift.
For the non-mandatory stops, the simplest model would be to assume the probability of the bus opening its doors at a stop is independent of what happens at other stops. In that case each stop would have a simple likelihood ratio, since it's equivalent to the likelihood for, say, seeing heads when flipping a biased coin.
The next step would be a model where the probability of opening doors at a stop depends on what the bus did at stops earlier along the route. For that you'd want some historical data to look at, so you can look for correlation between stops.
For making the model more complete, it depends on the specifics of your problem.
I've assumed above, for example, that what the bus does on different shifts is independent, but if it does several shifts, at different times of day, that might not be the case. Then you'd have to account for what time of day it is, if you've got measurements for that.
I've also assumed there's only one bus, but if two buses happened to turn up at the same stop soon after each other, that would lower the probability the second bus opens its doors there. So then you'd have to account for what other buses are doing. | Developing a prediction model for bus stops | I'm assuming, from your description of your data, that you effectively have a perfect record of where the bus opens its doors. So, for predicting what a bus will do at a particular stop, your data is | Developing a prediction model for bus stops
I'm assuming, from your description of your data, that you effectively have a perfect record of where the bus opens its doors. So, for predicting what a bus will do at a particular stop, your data is what the bus did on previous shifts (entire routes), and what the bus did for previous stops on the current shift.
For the non-mandatory stops, the simplest model would be to assume the probability of the bus opening its doors at a stop is independent of what happens at other stops. In that case each stop would have a simple likelihood ratio, since it's equivalent to the likelihood for, say, seeing heads when flipping a biased coin.
The next step would be a model where the probability of opening doors at a stop depends on what the bus did at stops earlier along the route. For that you'd want some historical data to look at, so you can look for correlation between stops.
For making the model more complete, it depends on the specifics of your problem.
I've assumed above, for example, that what the bus does on different shifts is independent, but if it does several shifts, at different times of day, that might not be the case. Then you'd have to account for what time of day it is, if you've got measurements for that.
I've also assumed there's only one bus, but if two buses happened to turn up at the same stop soon after each other, that would lower the probability the second bus opens its doors there. So then you'd have to account for what other buses are doing. | Developing a prediction model for bus stops
I'm assuming, from your description of your data, that you effectively have a perfect record of where the bus opens its doors. So, for predicting what a bus will do at a particular stop, your data is |
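A minimal sketch of that "independent stops" baseline in R, with a fabricated door-opening log standing in for the real GPS record: each stop's probability is simply the historical proportion of runs on which the doors opened there.
set.seed(3)
log_data <- data.frame(stop_id    = rep(1:5, each = 100),
                       doors_open = rbinom(500, 1, rep(c(0.9, 0.2, 0.5, 0.7, 0.05), each = 100)))
aggregate(doors_open ~ stop_id, data = log_data, FUN = mean)   # estimated P(doors open) per stop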
52,978 | Developing a prediction model for bus stops | In your question, you need to describe what data you have that are complete, jagged, and missing.
Firstly, how much data do you have for a single bus?
Does a bus always use the same route (for the data you have)? Or does the route change over time?
If you can assemble data that are not jagged, you may be able to develop a model.
However, if you have multiple realizations representing, say, several months of stop data for a single bus route where all the trips are over the same route and the same stops, then you will be in a better position.
What seems missing is your inclination to describe data for multiple measurements of the same experiment (one bus, same route, 100 days of data for the same route, with stop data and characteristics of bus conditions --> #passengers, etc., when each stop is made.)
Overall, you should think about assembling your data to determine what is missing, what is unique, what is repeated, what is asymmetric and jagged (different). Then use a "divide and conquer" method to reduce your large problem into many small problems -- then solve each small problem. | Developing a prediction model for bus stops | In your question, you need to describe what data you have that are complete, jagged, and missing.
Firstly, how much data do you have for a single bus?
Does a bus always use the same route (for the d | Developing a prediction model for bus stops
In your question, you need to describe what data you have that are complete, jagged, and missing.
Firstly, how much data do you have for a single bus?
Does a bus always use the same route (for the data you have)? Or does the route change over time?
If you can assemble data that are not jagged, you may be able to develop a model.
However, if you have multiple realizations representing, say, several months of stop data for a single bus route where all the trips are over the same route and the same stops, then you will be in a better position.
What seems missing is your inclination to describe data for multiple measurements of the same experiment (one bus, same route, 100 days of data for the same route, with stop data and characteristics of bus conditions --> #passengers, etc., when each stop is made.)
Overall, you should think about assembling your data to determine what is missing, what is unique, what is repeated, what is asymmetric and jagged (different). Then use a "divide and conquer" method to reduce your large problem into many small problems -- then solve each small problem. | Developing a prediction model for bus stops
In your question, you need to describe what data you have that are complete, jagged, and missing.
Firstly, how much data do you have for a single bus?
Does a bus always use the same route (for the d |
52,979 | Different regularization parameter per parameter | Yes, it had been tried (including by myself - I tried it with neural nets, with rather mixed success). The Relevance Vector Machine (RVM) does pretty much exactly that, and the regularisation parameters are tuned by maximising the marginal likelihood. The advantage of this is that it leads to a sparse model where uninformative attributes end up with large regularisation parameters and hence small weights. The problem with this approach lies in the tuning of the regularisation parameters, which tends to result in over-fitting the model selection criterion (whether Bayesian or cross-validation based), simply because there are many degrees of freedom introduced by having many hyper-parameters to tune. | Different regularization parameter per parameter | Yes, it had been tried (including by myself - I tried it with neural nets, with rather mixed success). The Relevance Vector Machine (RVM) does pretty much exactly that, and the regularisation paramet | Different regularization parameter per parameter
Yes, it had been tried (including by myself - I tried it with neural nets, with rather mixed success). The Relevance Vector Machine (RVM) does pretty much exactly that, and the regularisation parameters are tuned by maximising the marginal likelihood. The advantage of this is that it leads to a sparse model where uninformative attributes end up with large regularisation parameters and hence small weights. The problem with this approach lies in the tuning of the regularisation parameters, which tends to result in over-fitting the model selection criterion (whether Bayesian or cross-validation based), simply because there are many degrees of freedom introduced by having many hyper-parameters to tune. | Different regularization parameter per parameter
Yes, it had been tried (including by myself - I tried it with neural nets, with rather mixed success). The Relevance Vector Machine (RVM) does pretty much exactly that, and the regularisation paramet |
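To make a separate regularisation parameter per coefficient concrete, here is a small base-R sketch -- not the RVM itself, just ridge regression with a fixed penalty vector, using the closed form $\hat\beta = (X^{T}X + \mathrm{diag}(\lambda))^{-1}X^{T}y$ on simulated data; the heavily penalised noise columns are shrunk towards zero.
set.seed(4)
n <- 100; p <- 4
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(3, 0, -2, 0) + rnorm(n))             # columns 2 and 4 are pure noise
lambda <- c(0.1, 50, 0.1, 50)                          # heavier shrinkage on the noise columns
beta_hat <- solve(crossprod(X) + diag(lambda), crossprod(X, y))
round(t(beta_hat), 3)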
52,980 | Different regularization parameter per parameter | Adaptive Lasso (H.Zou, JASA 2006, Vol. 101, No. 476) achieves consistency in parameter estimates by using individual lambda for each variable. Lambda values are tuned based on OLS solution (which unfortunately is not available in many practical cases where Lasso is used). | Different regularization parameter per parameter | Adaptive Lasso (H.Zou, JASA 2006, Vol. 101, No. 476) achieves consistency in parameter estimates by using individual lambda for each variable. Lambda values are tuned based on OLS solution (which unfo | Different regularization parameter per parameter
Adaptive Lasso (H.Zou, JASA 2006, Vol. 101, No. 476) achieves consistency in parameter estimates by using individual lambda for each variable. Lambda values are tuned based on OLS solution (which unfortunately is not available in many practical cases where Lasso is used). | Different regularization parameter per parameter
Adaptive Lasso (H.Zou, JASA 2006, Vol. 101, No. 476) achieves consistency in parameter estimates by using individual lambda for each variable. Lambda values are tuned based on OLS solution (which unfo |
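A sketch of that recipe in R, assuming the glmnet package (its penalty.factor argument rescales the penalty per coefficient, which is the usual way the per-variable lambdas of the adaptive lasso are implemented); the data are simulated.
library(glmnet)
set.seed(5)
n <- 200; p <- 8
X <- matrix(rnorm(n * p), n, p)
y <- drop(X %*% c(2, -1.5, 0, 0, 1, 0, 0, 0) + rnorm(n))
w <- 1 / abs(coef(lm(y ~ X))[-1])                      # adaptive weights from the OLS fit, as in Zou (2006)
fit <- cv.glmnet(X, y, penalty.factor = w)
coef(fit, s = "lambda.min")                            # truly-zero coefficients are typically set exactly to zero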
52,981 | Different regularization parameter per parameter | If you want to/are able to go nonparametric, this is implemented in the mgcv package, which implements penalized splines. If you use the option select=TRUE, the optimizer that selects smoothness penalties also adds a penalty term to the "main effect" of each smooth term, in addition to the penalty used for smoothness selection. It doesn't however implement this for the parametric part of the model, and it is computationally intensive. | Different regularization parameter per parameter | If you want to/are able to go nonparametric, this is implemented in the mgcv package, which implements penalized splines. If you use the option select=TRUE, the optimizer that selects smoothness pena | Different regularization parameter per parameter
If you want to/are able to go nonparametric, this is implemented in the mgcv package, which implements penalized splines. If you use the option select=TRUE, the optimizer that selects smoothness penalties also adds a penalty term to the "main effect" of each smooth term, in addition to the penalty used for smoothness selection. It doesn't however implement this for the parametric part of the model, and it is computationally intensive. | Different regularization parameter per parameter
If you want to/are able to go nonparametric, this is implemented in the mgcv package, which implements penalized splines. If you use the option select=TRUE, the optimizer that selects smoothness pena |
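A small sketch with simulated data, assuming the mgcv package: with select = TRUE the smooth for the pure-noise input should be shrunk to roughly zero effective degrees of freedom.
library(mgcv)
set.seed(6)
n  <- 300
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n)          # x3 has no effect on y
y  <- sin(2 * pi * x1) + x2^2 + rnorm(n, sd = 0.3)
fit <- gam(y ~ s(x1) + s(x2) + s(x3), select = TRUE)    # adds an extra shrinkage penalty to each smooth
summary(fit)                                            # check the edf reported for s(x3)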
52,982 | Neural network modeling sample size | There are two rules of thumb that I know of:
There should be approximately 30 times more training cases than the number of weights (Neural Network FAQ)
General generalization rule: there should be 10 times more training cases than the VC dimension of the hypothesis set. In NN case the VC dimension is usually assumed to be around the number of weights, so you should have 10 times more training cases than the weights (this rule is presented for example during this course by Dr. Abu-Mostafa. If you need a reference, then you can probably find it in his book). | Neural network modeling sample size | There are two rules of thumb that I know of:
There should be approximately 30 times more training cases than the number of weights (Neural Network FAQ)
General generalization rule: there should be 10 | Neural network modeling sample size
There are two rules of thumb that I know of:
There should be approximately 30 times more training cases than the number of weights (Neural Network FAQ)
General generalization rule: there should be 10 times more training cases than the VC dimension of the hypothesis set. In NN case the VC dimension is usually assumed to be around the number of weights, so you should have 10 times more training cases than the weights (this rule is presented for example during this course by Dr. Abu-Mostafa. If you need a reference, then you can probably find it in his book). | Neural network modeling sample size
There are two rules of thumb that I know of:
There should be approximately 30 times more training cases than the number of weights (Neural Network FAQ)
General generalization rule: there should be 10 |
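As a worked example of both rules (the layer sizes below are made up): a fully connected net with biases has $(n_{in}+1)\,n_{hidden} + (n_{hidden}+1)\,n_{out}$ weights, so the two rules of thumb translate into sample sizes like this.
n_weights <- function(sizes) sum((head(sizes, -1) + 1) * tail(sizes, -1))   # biases included
w <- n_weights(c(10, 5, 1))        # 10 inputs, 5 hidden units, 1 output: (10+1)*5 + (5+1)*1 = 61
c(weights = w, rule_10x = 10 * w, rule_30x = 30 * w)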
52,983 | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance functions | The proof is very similar to how you prove a bm is a gaussian process.
What I did not say at the end of the proof is that, by the stationary independent increment property of a BM, the increments in the brackets I labelled normal are INDEPENDENT normal distributions, so linear combinations of independent normal distributions are also normal, which proves the claim that $\sum a_i X_{t_i}$ is Gaussian for any choice of $a_i$ and $t_i$. | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance | The proof is very similar to how you prove a bm is a gaussian process.
What i did not say at the end of proof is that, notice, by the stationary independent increment property of a BM, the increments | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance functions
The proof is very similar to how you prove a bm is a gaussian process.
What I did not say at the end of the proof is that, by the stationary independent increment property of a BM, the increments in the brackets I labelled normal are INDEPENDENT normal distributions, so linear combinations of independent normal distributions are also normal, which proves the claim that $\sum a_i X_{t_i}$ is Gaussian for any choice of $a_i$ and $t_i$. | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance
The proof is very similar to how you prove a bm is a gaussian process.
What i did not say at the end of proof is that, notice, by the stationary independent increment property of a BM, the increments |
52,984 | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance functions | Maybe I'm missing something, but this question seems to be easier than it, at first, appears to be. It's not a change of variable problem (which would be messy) but simply a change in labeling with a change of scale. $t$ is an index, not a random variable.
A process is Gaussian if every finite collection of variables from the index set has a multivariate Gaussian distribution. Take a collection of variables with indices $t_1,t_2, \ldots, t_k$. They are jointly Brownian (modulo the scale change), and so jointly multivariate normal.
Every random variable from a Brownian process has mean 0, so the process defined here has mean 0 everywhere.
Now, to the covariance function. Let's look at the variance first. Call the new process $X(t)$.
The variance of the Brownian at $t$ is $t$. The variance of $X(t)$ is $e^{-2\alpha t} e^{2 \alpha t}$, which is 1. That's the point of the scaling factor, clearly.
The covariance of the Brownian at $t$ and $s$ is $\text{min}(s,t)$, so the covariance of $X(t)$ and $X(s)$ will be
$e^{-\alpha(s+t)} \text{min}(e^{2\alpha t}, e^{2 \alpha s})$.
So if $s < t$, the covariance will be $e^{- \alpha (t-s)}$. Which is kinda cool. | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance | Maybe I'm missing something, but this question seems to be easier than it, at first, appears to be. It's not a change of variable problem (which would be messy) but simply a change in labeling with a | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance functions
Maybe I'm missing something, but this question seems to be easier than it, at first, appears to be. It's not a change of variable problem (which would be messy) but simply a change in labeling with a change of scale. $t$ is an index, not a random variable.
A process is Gaussian if every finite collection of variables from the index set has a multivariate Gaussian distribution. Take a collection of variables with indices $t_1,t_2, \ldots, t_k$. They are jointly Brownian (modulo the scale change), and so jointly multivariate normal.
Every random variable from a Brownian process has mean 0, so the process defined here has mean 0 everywhere.
Now, to the covariance function. Let's look at the variance first. Call the new process $X(t)$.
The variance of the Brownian at $t$ is $t$. The variance of $X(t)$ is $e^{-2\alpha t} e^{2 \alpha t}$, which is 1. That's the point of the scaling factor, clearly.
The covariance of the Brownian at $t$ and $s$ is $\text{min}(s,t)$, so the covariance of $X(t)$ and $X(s)$ will be
$e^{-\alpha(s+t)} \text{min}(e^{2\alpha t}, e^{2 \alpha s})$.
So if $s < t$, the covariance will be $e^{- \alpha (t-s)}$. Which is kinda cool. | I want to show $e^{-\alpha t}B(e^{2\alpha t})$ is a Gaussian process and I find mean and covariance
Maybe I'm missing something, but this question seems to be easier than it, at first, appears to be. It's not a change of variable problem (which would be messy) but simply a change in labeling with a |
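A quick Monte Carlo check of that covariance in R (alpha, s and t are arbitrary choices): simulate the Brownian motion at the two rescaled times from independent increments, transform, and compare the empirical covariance with $e^{-\alpha(t-s)}$.
set.seed(8)
alpha <- 0.7; s <- 1; t <- 2; n <- 1e5
u1 <- exp(2 * alpha * s); u2 <- exp(2 * alpha * t)      # the rescaled time points, u1 < u2
B1 <- rnorm(n, sd = sqrt(u1))                           # B(u1)
B2 <- B1 + rnorm(n, sd = sqrt(u2 - u1))                 # B(u2) via an independent increment
X1 <- exp(-alpha * s) * B1; X2 <- exp(-alpha * t) * B2
c(empirical = cov(X1, X2), theoretical = exp(-alpha * (t - s)))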
52,985 | Displaying fine variation across multiple orders of magnitude | Let's step back, and think how to represent data instead of how to visualize data. I love data visualization but I'd say that graph is not a suitable solution here.
Let's evaluate your requests:
I'd like to be able to plot the data in such a way that the viewer will be able to tell how the whole sequence increases
[F]or my application, the first few values are especially important, but I'd still like to show the whole range.
The problem with graphs with original or log scale is that it's still practically impossible to name the first few values from those graphs, which is your objective #2.
A graph is not the only tool to show range; you can also tabulate your readings:
0.993
0.999
1.037
1.054
1.195
1.550
2.953
15.369
815.687
26492.118
Most people can immediately notice that very high rate of growth, which should fulfill your objective #1. In the meantime, you get to keep the true data of the first few cases. And if you so wish, you can even plot those first few cases using the original scale. | Displaying fine variation across multiple orders of magnitude | Let's step back, and think how to represent data instead of how to visualize data. I love data visualization but I'd say that graph is not a suitable solution here.
Let's evaluate your requests:
I'd | Displaying fine variation across multiple orders of magnitude
Let's step back, and think how to represent data instead of how to visualize data. I love data visualization but I'd say that graph is not a suitable solution here.
Let's evaluate your requests:
I'd like to be able to plot the data in such a way that the viewer will be able to tell how the whole sequence increases
[F]or my application, the first few values are especially important, but I'd still like to show the whole range.
The problem with graphs with original or log scale is that it's still practically impossible to name the first few values from those graphs, which is your objective #2.
A graph is not the only tool to show range; you can also tabulate your readings:
0.993
0.999
1.037
1.054
1.195
1.550
2.953
15.369
815.687
26492.118
Most people can immediately notice that very high rate of growth, which should fulfill your objective #1. In the meantime, you get to keep the true data of the first few cases. And if you so wish, you can even plot those first few cases using the original scale. | Displaying fine variation across multiple orders of magnitude
Let's step back, and think how to represent data instead of how to visualize data. I love data visualization but I'd say that graph is not a suitable solution here.
Let's evaluate your requests:
I'd |
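If you do want a picture alongside the table, here is a small base-R sketch using the readings above: a log-scale panel for the whole range next to an original-scale panel for the first few values.
y <- c(0.993, 0.999, 1.037, 1.054, 1.195, 1.550, 2.953, 15.369, 815.687, 26492.118)
op <- par(mfrow = c(1, 2))
plot(y, log = "y", type = "b", main = "whole range (log scale)")
plot(head(y, 6), type = "b", main = "first few values (original scale)")
par(op)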
52,986 | Displaying fine variation across multiple orders of magnitude | Your idea of two plots with different scales is better than a broken axis since it gives you a perceptual view of how disparate the values are. Below is a quick mock-up. For presentation, the two graphs should have more separation and some cue or text that one is a zoomed in view of the other.
Log or some other transformation may still be best to show the power of the growth. Here's log.
Another option is to plot some derived value, such as the ratio $y_{k+1}/y_k$. | Displaying fine variation across multiple orders of magnitude | Your idea of two plots with different scales is better than a broken axis since it gives you a perceptual view of how disparate the values are. Below is a quick mock-up. For presentation, the two grap
Your idea of two plots with different scales is better than a broken axis since it gives you a perceptual view of how disparate the values are. Below is a quick mock-up. For presentation, the two graphs should have more separation and some cue or text that one is a zoomed in view of the other.
Log or some other transformation may still be best to show the power of the growth. Here's log.
Another option is to plot some derived value, such as the ratio $y_{k+1}/y_k$. | Displaying fine variation across multiple orders of magnitude
Your idea of two plots with different scales is better than a broken axis since it gives you a perceptual view of how disparate the values are. Below is a quick mock-up. For presentation, the two grap |
52,987 | When to use residual plots? | They are still useful in assessing whether the relationship between the explanatory variables and the dependent variable is linear (or modeled properly given the equation). For an extreme example, I generated some data with a quadratic relationship and fit a linear regression of the form $Y = \alpha + \beta(X) + e$. (Because the parabola is approximately centered on zero $\beta$ is insignificant in the equation).
If you plot $X$ versus the residuals, though, the quadratic relationship is still very clear. (Imagine just detilting the first plot.)
I'm sure you can dream up other scenarios in which regression coefficients are insignificant but examining the residuals will show how the model is inadequate. | When to use residual plots? | They are still useful in assessing whether the relationship between the explanatory variables and the dependent variable is linear (or modeled properly given the equation). For an extreme example, I g | When to use residual plots?
They are still useful in assessing whether the relationship between the explanatory variables and the dependent variable is linear (or modeled properly given the equation). For an extreme example, I generated some data with a quadratic relationship and fit a linear regression of the form $Y = \alpha + \beta(X) + e$. (Because the parabola is approximately centered on zero $\beta$ is insignificant in the equation).
If you plot $X$ versus the residuals, though, the quadratic relationship is still very clear. (Imagine just detilting the first plot.)
I'm sure you can dream up other scenarios in which regression coefficients are insignificant but examining the residuals will show how the model is inadequate. | When to use residual plots?
They are still useful in assessing whether the relationship between the explanatory variables and the dependent variable is linear (or modeled properly given the equation). For an extreme example, I g |
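The example is easy to reproduce in R (the data here are freshly simulated, not the original ones): the slope is nowhere near significant, yet the residual plot gives the game away.
set.seed(10)
x <- seq(-3, 3, length.out = 100)
y <- x^2 + rnorm(100)                          # quadratic signal, roughly centered on zero
fit <- lm(y ~ x)
summary(fit)$coefficients                      # the coefficient on x is near zero
plot(x, resid(fit)); abline(h = 0, lty = 2)    # ...but the quadratic pattern is unmistakable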
52,988 | When to use residual plots? | Assume for simplicity that you have fitted some line $\hat y = b_0 + b_1 x$ given a dependent or response variable $y$ and a predictor or independent variable $x$. This specific assumption can be relaxed, which we will get to in good time.
With one variable on each side, a residual plot (meaning, a plot of residual $y - \hat y =: e$ versus fitted or predicted $\hat y$) in principle shows just the same information as a scatter plot with regression line superimposed. On the latter, the residuals are just the vertical differences between the data points and the line and the fitted are the corresponding values on the line, i.e. for the same value of $x$.
In practice, a residual plot can make structure in the residuals more evident:
The regression line is rotated to the horizontal. Seeing structure in anything is easiest when the reference indicating no structure is a horizontal straight line, here the line $e = 0$.
There is better use of space.
In this easy example, some structure in the residuals is discernible in the scatter plot
but even easier to see in the residual plot:
The recipe here was simple. The data were fabricated as a quadratic plus Gaussian noise, but the quadratic is only roughly captured by the naive linear fit.
But it is still generally true that structure is easier to see on a residual plot. Some caution is needed in not over-interpreting residual plots, especially with very small sample sizes. As usual, what you spot should make scientific or practical sense too.
What if the fitted is more complicated than $b_0 + b_1 x$? There are two cases:
Everything can still be shown on a scatter plot, e.g. the right-hand side is a polynomial or something in trigonometric functions of $x$. Here, if anything, the residual plot is even more valuable in mapping everything so that zero residual is a reference.
The model uses two or more predictors. Here also the residual plot can be invaluable as a kind of health check showing how well you did and what you missed.
The health check analogy is a fair one more generally: Residual plots can help you spot if something is wrong. If nothing is evidently wrong, no news is good news, but there is no absolute guarantee: something important may have been missed.
On whether the predictor had a significant effect, I know of no rule whatever for drawing or not drawing a residual plot. In the concocted example here, significance levels and figures of merit such as $R^2$ are extremely good, but the straight line model still misses a key part of the real structure. Conversely, a residual plot often illuminates why a model failed to work: either the pattern really is all noise, so far as can be seen, or your model misses something really important, such as some nonlinearity.
Footnote: for many statistical people IV means instrumental variable, not independent variable. | When to use residual plots? | Assume for simplicity that you have fitted some line $\hat y = b_0 + b_1 x$ given a dependent or response variable $y$ and a predictor or independent variable $x$. This specific assumption can be rela | When to use residual plots?
Assume for simplicity that you have fitted some line $\hat y = b_0 + b_1 x$ given a dependent or response variable $y$ and a predictor or independent variable $x$. This specific assumption can be relaxed, which we will get to in good time.
With one variable on each side, a residual plot (meaning, a plot of residual $y - \hat y =: e$ versus fitted or predicted $\hat y$) in principle shows just the same information as a scatter plot with regression line superimposed. On the latter, the residuals are just the vertical differences between the data points and the line and the fitted are the corresponding values on the line, i.e. for the same value of $x$.
In practice, a residual plot can make structure in the residuals more evident:
The regression line is rotated to the horizontal. Seeing structure in anything is easiest when the reference indicating no structure is a horizontal straight line, here the line $e = 0$.
There is better use of space.
In this easy example, some structure in the residuals is discernible in the scatter plot
but even easier to see in the residual plot:
The recipe here was simple. The data were fabricated as a quadratic plus Gaussian noise, but the quadratic is only roughly captured by the naive linear fit.
But it is still generally true that structure is easier to see on a residual plot. Some caution is needed in not over-interpreting residual plots, especially with very small sample sizes. As usual, what you spot should make scientific or practical sense too.
What if the fitted is more complicated than $b_0 + b_1 x$? There are two cases:
Everything can still be shown on a scatter plot, e.g. the right-hand side is a polynomial or something in trigonometric functions of $x$. Here, if anything, the residual plot is even more valuable in mapping everything so that zero residual is a reference.
The model uses two or more predictors. Here also the residual plot can be invaluable as a kind of health check showing how well you did and what you missed.
The health check analogy is a fair one more generally: Residual plots can help you spot if something is wrong. If nothing is evidently wrong, no news is good news, but there is no absolute guarantee: something important may have been missed.
On whether the predictor had a significant effect, I know of no rule whatever for drawing or not drawing a residual plot. In the concocted example here, significance levels and figures of merit such as $R^2$ are extremely good, but the straight line model still misses a key part of the real structure. Conversely, a residual plot often illuminates why a model failed to work: either the pattern really is all noise, so far as can be seen, or your model misses something really important, such as some nonlinearity.
Footnote: for many statistical people IV means instrumental variable, not independent variable. | When to use residual plots?
Assume for simplicity that you have fitted some line $\hat y = b_0 + b_1 x$ given a dependent or response variable $y$ and a predictor or independent variable $x$. This specific assumption can be rela |
52,989 | Can I compare AIC values of a linear function with a non-linear function? | Looks like one of them doesn't really fit the data.
As long as you did not transform the response variable -- for example, replacing $y$ by $\log(y)$ -- you can use very different models, for example you can compare $y = b \cdot x$ with $y = e^{b \cdot x}$ or $y = a\cdot x^b$.
However, you are not allowed to use AIC for comparing $y = b\cdot x$ with $\log(y) = b \cdot x$. | Can I compare AIC values of a linear function with a non-linear function? | Looks like one of them doesn't really fit the data.
As long as you did not transform the response variable -- for example, replacing $y$ by $\log(y)$ -- you can use very different models, for example | Can I compare AIC values of a linear function with a non-linear function?
Looks like one of them doesn't really fit the data.
As long as you did not transform the response variable -- for example, replacing $y$ by $\log(y)$ -- you can use very different models, for example you can compare $y = b \cdot x$ with $y = e^{b \cdot x}$ or $y = a\cdot x^b$.
However, you are not allowed to use AIC for comparing $y = b\cdot x$ with $\log(y) = b \cdot x$. | Can I compare AIC values of a linear function with a non-linear function?
Looks like one of them doesn't really fit the data.
As long as you did not transform the response variable -- for example, replacing $y$ by $\log(y)$ -- you can use very different models, for example |
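A minimal sketch in R (simulated data; the nls starting values are guesses and may need adjusting for other data): both fits keep $y$ on its original scale, so their AIC values can be compared directly.
set.seed(11)
x <- seq(1, 10, length.out = 40)
y <- 2 * exp(0.3 * x) + rnorm(40)
lin  <- lm(y ~ x)                                              # straight line
expo <- nls(y ~ a * exp(b * x), start = list(a = 1, b = 0.2))  # exponential curve
AIC(lin, expo)                                                 # the lower AIC marks the better model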
52,990 | Can I compare AIC values of a linear function with a non-linear function? | Well, they seem to be fitted with different algorithms and the likelihoods (or AIC) are calculated with different methods. Please check about it with your software.
Different software usually uses different methods to calculate the likelihood and often adds a constant to the likelihood or AIC for convenience. That means the scales are totally different.
Regarding AIC comparisons across different classes of models, there is some debate about whether they can be compared at all. Prof Ripley (2002, 2004 http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf, and some other posts on forums) said AIC must only be compared for nested models, while Anderson and Burnham (2006 https://sites.warnercnr.colostate.edu/anderson/wp-content/uploads/sites/26/2016/11/AIC-Myths-and-Misunderstandings.pdf) claimed it need not be. Moreover, Ripley also said that AIC can only be used with maximum-likelihood estimation, while the non-linear model is fitted by nonlinear (weighted) least squares rather than MLE.
In my opinion, AIC can be used for some non-nested models like Y~A+B and Y~A+C, but they should at least be of the same class of model. Regarding the $+2p$ term of AIC as the complexity penalty, it seems that linear and non-linear models, and even splines, with the same number of parameters are not equally complex. | Can I compare AIC values of a linear function with a non-linear function? | Well, they seem to be fitted with different algorithms and the likelihoods (or AIC) are calculated with different methods. Please check about it with your software.
Different software usually uses dif | Can I compare AIC values of a linear function with a non-linear function?
Well, they seem to be fitted with different algorithms and the likelihoods (or AIC) are calculated with different methods. Please check about it with your software.
Different software usually uses different methods to calculate the likelihood and often adds a constant to the likelihood or AIC for convenience. That means the scales are totally different.
Regarding AIC comparisons across different classes of models, there is some debate about whether they can be compared at all. Prof Ripley (2002, 2004 http://www.stats.ox.ac.uk/~ripley/Nelder80.pdf, and some other posts on forums) said AIC must only be compared for nested models, while Anderson and Burnham (2006 https://sites.warnercnr.colostate.edu/anderson/wp-content/uploads/sites/26/2016/11/AIC-Myths-and-Misunderstandings.pdf) claimed it need not be. Moreover, Ripley also said that AIC can only be used with maximum-likelihood estimation, while the non-linear model is fitted by nonlinear (weighted) least squares rather than MLE.
In my opinion, AIC can be used for some non-nested models like Y~A+B and Y~A+C, but they should at least be of the same class of model. Regarding the $+2p$ term of AIC as the complexity penalty, it seems that linear and non-linear models, and even splines, with the same number of parameters are not equally complex. | Can I compare AIC values of a linear function with a non-linear function?
Well, they seem to be fitted with different algorithms and the likelihoods (or AIC) are calculated with different methods. Please check about it with your software.
Different software usually uses dif |
52,991 | Can I compare AIC values of a linear function with a non-linear function? | From the description you have given, yes. This is exactly the case where you would want to use AIC and the like: different models fitted to the same data. The model with the higher value is the worse model. And if the $\Delta \mbox{AIC} >10$ there is hardly any evidence for the worse model.
However, to make sure there is no error, I would check the model's fits and see if the one really does fit that bad. | Can I compare AIC values of a linear function with a non-linear function? | From the description you have given, yes. This is exactly the case where you would want to use AIC and the like, differing models to the same data. The model with the higher value is the worse model. | Can I compare AIC values of a linear function with a non-linear function?
From the description you have given, yes. This is exactly the case where you would want to use AIC and the like: different models fitted to the same data. The model with the higher value is the worse model. And if the $\Delta \mbox{AIC} >10$ there is hardly any evidence for the worse model.
However, to make sure there is no error, I would check the model's fits and see if the one really does fit that bad. | Can I compare AIC values of a linear function with a non-linear function?
From the description you have given, yes. This is exactly the case where you would want to use AIC and the like, differing models to the same data. The model with the higher value is the worse model. |
52,992 | Difference between Sobol indices and total Sobol indices? | The reason why total Sobol' indices are interesting is interactions.
Two inputs $x_1$ and $x_2$ are interacting when their joint effect on the output is different from the sum of their individual effects.
Consider for instance the following model
$$ f(\mathbf{x}) = x_1 \cdot x_2 $$
With centred inputs, both first-order indices of this model are zero, yet $x_1$ and $x_2$ together determine all of the output variance: the whole effect is interaction.
It is possible to measure interactions by computing higher order Sobol' indices, that is Sobol' indices for groups of variables. These can be defined in two ways, depending if one counts the interactions of the subgroups or not.
The problem with this approach is that the number of Sobol' indices grows geometrically with the number of inputs, so that computing them quickly becomes intractable.
Total Sobol' indices are a viable alternative: the total index for a given input $x_i$ represents the effect of all the groups of variables that contain $x_i$.
Hence, the difference between the total index and first order index of $x_i$ is the amount of interactions that $x_i$ contributes to.
Note that unlike the first order indices, the sum of total indices can exceed one. The equality is obtained when there are no interactions.
For a deeper understanding, have a look at the papers from Saltelli and coworkers for the interpretation in terms of variance lowering when freezing a variable, and at the seminal papers from Sobol' for the interpretation in terms of the ANOVA decomposition, also named the HDMR, Sobol' or Hoeffding decomposition (the one from 2001 is very clear and concise but might need a few readings if you are not familiar with the domain).
Both approaches have their merits and complement each other for a deep understanding of the meaning of sensitivity indices.
Regarding the estimation of total Sobol' indices, a review of modern variance-based estimators can be found in [1].
-- Reminder --
First order Sobol' indices are
the variance of the conditional expectation of the output given the value of an input, normalised by the total variance.
Total Sobol' indices are
the complement of the variance of the conditional expectation given the values of all but one input, normalised by the total variance.
This is equal, thanks to the total variance theorem, to
The expectation of the conditional variance of the output given the values of all but an input, normalised by the total variance.
[1] Andrea Saltelli, Paola Annoni, Ivano Azzini, Francesca Campolongo, Marco Ratto, Stefano Tarantola, Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index, Computer Physics Communications, Volume 181, Issue 2, February 2010, Pages 259-270, ISSN 0010-4655, http://dx.doi.org/10.1016/j.cpc.2009.09.018.
(http://www.sciencedirect.com/science/article/pii/S0010465509003087) | Difference between Sobol indices and total Sobol indices? | The reason why total Sobol' indices are interesting is interactions.
Two inputs $x_1$ and $x_2$ are interacting when their joint effect on the output is different from the sum of their individual effe | Difference between Sobol indices and total Sobol indices?
The reason why total Sobol' indices are interesting is interactions.
Two inputs $x_1$ and $x_2$ are interacting when their joint effect on the output is different from the sum of their individual effects.
Consider for instance the following model
$$ f(\mathbf{x}) = x_1 \cdot x_2 $$
With centred inputs, both first-order indices of this model are zero, yet $x_1$ and $x_2$ together determine all of the output variance: the whole effect is interaction.
It is possible to measure interactions by computing higher order Sobol' indices, that is Sobol' indices for groups of variables. These can be defined in two ways, depending if one counts the interactions of the subgroups or not.
The problem with this approach is that the number of Sobol' indices grows geometrically with the number of inputs, so that computing them quickly becomes intractable.
Total Sobol' indices are a viable alternative: the total index for a given input $x_i$ represents the effect of all the groups of variables that contain $x_i$.
Hence, the difference between the total index and first order index of $x_i$ is the amount of interactions that $x_i$ contributes to.
Note that unlike the first order indices, the sum of total indices can exceed one. The equality is obtained when there are no interactions.
For a deeper understanding, have a look at the papers from Saltelli and coworkers for the interpretation in terms of variance lowering when freezing a variable, and at the seminal papers from Sobol' for the interpretation in terms of the ANOVA decomposition, also named the HDMR, Sobol' or Hoeffding decomposition (the one from 2001 is very clear and concise but might need a few readings if you are not familiar with the domain).
Both approaches have their merits and complement each other for a deep understanding of the meaning of sensitivity indices.
Regarding the estimation of total Sobol' indices, a review of modern variance-based estimators can be found in [1].
-- Reminder --
First order Sobol' indices are
the variance of the conditional expectation of the output given the value of an input, normalised by the total variance.
Total Sobol' indices are
the complement of the variance of the conditional expectation given the values of all but one input, normalised by the total variance.
This is equal, thanks to the total variance theorem, to
The expectation of the conditional variance of the output given the values of all but an input, normalised by the total variance.
[1] Andrea Saltelli, Paola Annoni, Ivano Azzini, Francesca Campolongo, Marco Ratto, Stefano Tarantola, Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index, Computer Physics Communications, Volume 181, Issue 2, February 2010, Pages 259-270, ISSN 0010-4655, http://dx.doi.org/10.1016/j.cpc.2009.09.018.
(http://www.sciencedirect.com/science/article/pii/S0010465509003087) | Difference between Sobol indices and total Sobol indices?
The reason why total Sobol' indices are interesting is interactions.
Two inputs $x_1$ and $x_2$ are interacting when their joint effect on the output is different from the sum of their individual effe |
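To make the reminder concrete, here is a Monte Carlo sketch in R for the toy model $f(\mathbf{x})=x_1 \cdot x_2$ above, taking $x_1,x_2 \sim U(-1,1)$ (an assumption made here so that the first-order indices are exactly zero); it uses the first-order estimator reviewed in [1] and Jansen's estimator for the total indices.
set.seed(12)
N <- 1e5
f <- function(m) m[, 1] * m[, 2]
A <- matrix(runif(2 * N, -1, 1), ncol = 2)       # two independent input samples
B <- matrix(runif(2 * N, -1, 1), ncol = 2)
yA <- f(A); yB <- f(B); V <- var(yA)
for (i in 1:2) {
  ABi <- A; ABi[, i] <- B[, i]                   # A with column i taken from B
  yABi <- f(ABi)
  Si <- mean(yB * (yABi - yA)) / V               # first-order index, ~0 here
  Ti <- mean((yA - yABi)^2) / (2 * V)            # total index (Jansen), ~1 here
  cat(sprintf("x%d: first-order = %.3f, total = %.3f\n", i, Si, Ti))
}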
52,993 | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$ | $$
(1) \quad M_X(t) = \mathrm{E}\left[e^{tX}\right] = \int_{-\infty}^\infty e^{tx} f_X(x)\,dx \quad
$$
$$
(2) \quad M'_X(t) = \frac{d}{dt} \int_{-\infty}^\infty e^{tx} f_X(x)\,dx = \int_{-\infty}^\infty \frac{d}{dt} e^{tx} f_X(x)\,dx = \int_{-\infty}^\infty x\,e^{tx} f_X(x)\,dx
$$
$$
M'_X(0) = \int_{-\infty}^\infty x\,f_X(x)\,dx = \mathrm{E}[X]
$$
Reasons: The second equality in (1) is due to the theorem known folklorically as the Law of the Unconscious Statistician; in (2) the differentiation under the integral sign is justified by the Dominated Convergence Theorem. | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$ | $$
(1) \quad M_X(t) = \mathrm{E}\left[e^{tX}\right] = \int_{-\infty}^\infty e^{tx} f_X(x)\,dx \quad
$$
$$
(2) \quad M'_X(t) = \frac{d}{dt} \int_{-\infty}^\infty e^{tx} f_X(x)\,dx = \int_{-\infty} | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$
$$
(1) \quad M_X(t) = \mathrm{E}\left[e^{tX}\right] = \int_{-\infty}^\infty e^{tx} f_X(x)\,dx \quad
$$
$$
(2) \quad M'_X(t) = \frac{d}{dt} \int_{-\infty}^\infty e^{tx} f_X(x)\,dx = \int_{-\infty}^\infty \frac{d}{dt} e^{tx} f_X(x)\,dx = \int_{-\infty}^\infty x\,e^{tx} f_X(x)\,dx
$$
$$
M'_X(0) = \int_{-\infty}^\infty x\,f_X(x)\,dx = \mathrm{E}[X]
$$
Reasons: The second equality in (1) is due to the theorem known folklorically as the Law of the Unconscious Statistician; in (2) the differentiation under the integral sign is justified by the Dominated Convergence Theorem. | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$
$$
(1) \quad M_X(t) = \mathrm{E}\left[e^{tX}\right] = \int_{-\infty}^\infty e^{tx} f_X(x)\,dx \quad
$$
$$
(2) \quad M'_X(t) = \frac{d}{dt} \int_{-\infty}^\infty e^{tx} f_X(x)\,dx = \int_{-\infty} |
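A quick numerical illustration in R, using an Exponential(rate = 2) variable as an arbitrary test case (its mean is 1/2): build $M_X(t)$ by numerical integration and take a central difference at $t=0$.
M <- function(t, rate = 2) integrate(function(x) exp(t * x) * dexp(x, rate), 0, Inf)$value
h <- 1e-4
c(numerical_Mprime_at_0 = (M(h) - M(-h)) / (2 * h), true_mean = 1 / 2)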
52,994 | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$ | You have an error:
$$Ee^{pX}=\int_{-\infty}^{\infty}e^{px}f_X(x)dx$$
Note the limits of integration: you had them wrong. This way you can differentiate under the integral sign. Of course there are conditions under which you can do this exactly, but you can assume that they are met.
The fact that you are integrating over an infinite interval is a complication, but still everything depends on the properties of $\exp$ and $f_X$. I.e. as long as the integrand is a differentiable function with some additional smoothness properties, you can differentiate inside the integral. | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$ | You have an error:
$$Ee^{pX}=\int_{-\infty}^{\infty}e^{px}f_X(x)dx$$
Note the limits of integration, you had them wrong. This way you can differentiate under integral. Of course there are conditions, | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$
You have an error:
$$Ee^{pX}=\int_{-\infty}^{\infty}e^{px}f_X(x)dx$$
Note the limits of integration, you had them wrong. This way you can differentiate under integral. Of course there are conditions, when you can do this exactly, but you can assume that they are met.
The fact that you are integrating over infinite interval is a complication, but still everything depends on the properties of $exp$ and $f_X$. I.e. as long as the integral is differentiable function and some additional smoothness properties (probably), you can differentiate inside the integral. | Show that $E(x)=M'_X(0)$, where $M'_X(p)=\frac{dM_X(p)}{dp}$
You have an error:
$$Ee^{pX}=\int_{-\infty}^{\infty}e^{px}f_X(x)dx$$
Note the limits of integration, you had them wrong. This way you can differentiate under integral. Of course there are conditions, |
52,995 | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset | You are talking about two distinct problems here
How do I visualise what k-means is doing in N>2 dimensions
How do I calculate k-means in N>2 dimensions
The second one is much easier than the first to answer.
To calculate the Euclidean distance between two points when you have X, Y and Z coordinates, you sum the squared differences coordinate by coordinate and take the square root. This works for any number of dimensions:
$D=\sqrt{\sum_i (P_i-Q_i)^2}$, where $P$ and $Q$ are the two points (e.g. a data point and a cluster centre) and the sum runs over all coordinates.
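For instance, in code (a minimal sketch; the two example points are arbitrary):

import numpy as np

p = np.array([1.0, 2.0, 3.0, 4.0])   # a data point with four features
q = np.array([0.0, 2.0, 1.0, 4.0])   # e.g. a cluster centre
d = np.sqrt(np.sum((p - q) ** 2))    # equivalent to np.linalg.norm(p - q)
print(d)                             # sqrt(1 + 0 + 4 + 0) = sqrt(5)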
The first part, visualisation, is much harder, but also has no right answer - it is simply a tool for checking that it is doing what you think it is, and for understanding what is going on. If N gets very large, there is no simple way to do this.
For three dimensions, there are a couple of common approaches, with their own pros and cons:
3D chart: You see things as you do in the real world, but you will really need to be able to rotate the image to get a feel for depth
Colour in the points: This is quite a nice approach: use, say, red for the lowest Z value and blue for the highest Z value, with the spectrum in between for intermediate values. The K-means centre will then have the "average" colour of the cluster.
For higher dimensions you have to resort to more approximate techniques:
Slices/projections: Drop one or more dimensions and project or slice onto a lower (i.e. 2D) number of dimensions. This gives you a feel for what's going on, but you'll need a lot of them to check the K-Means are in the centres (and the wrong slices/projections might completely miss the interesting structure)
Dimension reduction: Starting to get really hard work now (much more complex than K-Means itself). You can attempt to use things like PCA, either locally for each cluster, or globally, to find planes that are "interesting" to look at, and just plot those.
Specific to K-means, and particularly useful when K is low (e.g. 2), you can project the points onto the line between a pair of cluster centres and plot the density of points along that projection.
For example, suppose we go back to 2D and have a scatter chart like this:
Where the two big blobs are the KMeans centres, and I have added the line that passes through the two points. If you perpendicularly project each point onto that line, then you can view the distribution of the points around each centre like so:
Where I have marked the location of the means with the thick lines. The second graph can be drawn regardless of how many dimensions you are working in, and is a way of seeing how well separated the clusters are. | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
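Here is a hedged sketch of that projection idea (the synthetic two-blob data, the use of scikit-learn's KMeans and all names below are my own illustrative choices):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)),
               rng.normal(3, 1, (200, 5))])      # two blobs in 5 dimensions

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
c1, c2 = km.cluster_centers_
u = (c2 - c1) / np.linalg.norm(c2 - c1)          # unit vector along the line joining the two centres

t = (X - c1) @ u                                 # projection coordinate of each point on that line
for k in (0, 1):
    # summary of the 1-D distribution of each cluster along the joining line;
    # a histogram or density plot of t[km.labels_ == k] gives the second graph described above
    print(k, t[km.labels_ == k].mean(), t[km.labels_ == k].std())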
How do I visualise what k-means is doing in N>2 dimensions
How do I calculate k-means in N>2 dimensions
The second one is much easier than the first | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
You are talking about two distinct problems here
How do I visualise what k-means is doing in N>2 dimensions
How do I calculate k-means in N>2 dimensions
The second one is much easier than the first to answer.
To calculate the Euclidean distance between two points when you have X, Y and Z coordinates, you sum the squared differences coordinate by coordinate and take the square root. This works for any number of dimensions:
$D=\sqrt{\sum_i (P_i-Q_i)^2}$, where $P$ and $Q$ are the two points (e.g. a data point and a cluster centre) and the sum runs over all coordinates.
The first part, visualisation, is much harder, but also has no right answer - it is simply a tool for checking that it is doing what you think it is, and for understanding what is going on. If N gets very large, there is no simple way to do this.
For three dimensions, there are a couple of common approaches, with their own pros and cons:
3D chart: You see things as you do in the real world, but you will really need to be able to rotate the image to get a feel for depth
Colour in the points: This is quite a nice approach, use red say for the lowest Z value and blue for the highest Z value, and then the spectrum between the two. The K-Mean centre will then have the "average" colour of the cluster.
For higher dimensions you have to resort to more approximate techniques:
Slices/projections: Drop one or more dimensions and project or slice onto a lower (i.e. 2D) number of dimensions. This gives you a feel for what's going on, but you'll need a lot of them to check the K-Means are in the centres (and the wrong slices/projections might completely miss the interesting structure)
Dimension reduction: Starting to get really hard work now (much more complex than K-Means itself). You can attempt to use things like PCA, either locally for each cluster, or globally, to find planes that are "interesting" to look at, and just plot those.
Specific to K-Means, and particularly useful when K is low (e.g. 2), you can plot the density of points at distances on the projection between a pair of clusters.
For example, suppose we go back to 2D and have a scatter chart like this:
Where the two big blobs are the KMeans centres, and I have added the line that passes through the two points. If you perpendicularly project each point onto that line, then you can view the distribution of the points around each centre like so:
Where I have marked on the location of the means with the thick lines. The second graph can be drawn regardless how many dimensions you are working in, and is a way of seeing how well separated the clusters are. | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
You are talking about two distinct problems here
How do I visualise what k-means is doing in N>2 dimensions
How do I calculate k-means in N>2 dimensions
The second one is much easier than the first |
52,996 | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset | Don't compute Euclidean distance.
K-means minimizes the within-cluster variance aka: WCSS.
http://en.wikipedia.org/wiki/K-means_clustering
Then the answer to your question should be obvious: sum the squared deviations over all dimensions.
It is equivalent, but misleading, to think of k-means as "minimizing the squared distances". The problem is that k-means cannot optimize arbitrary distances. The mean is not compatible with arbitrary distances, but it is a least squares estimate (in each single dimension).
So don't use Euclidean distances in the first place; use Within-Cluster-Sum-of-Squares (which will also be faster, as you don't compute the square root) | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset | Don't compute Euclidean distance.
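As a rough illustration of the quantity being minimised (a sketch only; the array names and shapes are assumptions of mine):

import numpy as np

def wcss(X, labels, centers):
    # Sum of squared deviations of each point from its assigned cluster centre,
    # summed over all dimensions and all clusters -- no square root involved.
    return sum(np.sum((X[labels == k] - c) ** 2) for k, c in enumerate(centers))

# Usage: X is an (n, d) data matrix, labels the cluster assignment of each row,
# and centers a (k, d) array of cluster centres (e.g. from a k-means fit).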
K-means minimizes the within-cluster variance aka: WCSS.
http://en.wikipedia.org/wiki/K-means_clustering
Then your question should be obvious. Sum of squared deviatio | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
Don't compute Euclidean distance.
K-means minimizes the within-cluster variance aka: WCSS.
http://en.wikipedia.org/wiki/K-means_clustering
Then your question should be obvious. Sum of squared deviations, sum over all dimensions.
It is equivalent, but misleading, to think of k-means as of "minimizing the squared distances". The problem is that k-means cannot optimize arbitrary distances. The mean is not compatible with arbitrary distances, but it is a least squares estimate (in each single dimension).
So don't use Euclidean distances in the first place; use Within-Cluster-Sum-of-Squares (which will also be faster, as you don't compute the square root) | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
Don't compute Euclidean distance.
K-means minimizes the within-cluster variance aka: WCSS.
http://en.wikipedia.org/wiki/K-means_clustering
Then your question should be obvious. Sum of squared deviatio |
52,997 | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset | Generally speaking, the k-means algorithm works in any number of dimensions; just make sure that when calculating the distance you take all N features into account.
You can still use Euclidean distance as a similarity measure; have a look at the n-dimensional formula at http://en.wikipedia.org/wiki/Euclidean_distance.
In order to visualize the result I would advise using Principal Component Analysis (PCA) for dimensionality reduction. The most accurate results come from performing PCA after clustering, since you do not lose any information. Alternatively, you can preprocess the data with PCA down to a smaller number of dimensions and then cluster; that clustering will take less time to complete, as there are fewer dimensions to process in the distance function, and not much information is lost, since most of it is captured by the first few principal components. There are libraries in R to perform PCA: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/princomp.html | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset | Generally speaking Kmeans algorithm can work for any dimensions just make sure that you when calculating the distance you take into account all the N features.
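The answer points at R's princomp; here is a rough Python equivalent of the "PCA after clustering" idea (scikit-learn and matplotlib; the placeholder data and every parameter below are illustrative assumptions, not part of the original answer):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(300, 6))   # placeholder data: 300 points, 6 features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # cluster in full dimension

Z = PCA(n_components=2).fit_transform(X)             # project onto the first two principal components
plt.scatter(Z[:, 0], Z[:, 1], c=labels)              # colour points by the clusters found above
plt.xlabel("PC1"); plt.ylabel("PC2")
plt.show()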
You can still use euclidean distance as | K-Means Clustering - Calculating Euclidean distances in a multiple variable dataset
Generally speaking, the k-means algorithm works in any number of dimensions; just make sure that when calculating the distance you take all N features into account.
You can still use Euclidean distance as a similarity measure; have a look at the n-dimensional formula at http://en.wikipedia.org/wiki/Euclidean_distance.
In order to visualize the result I would advise using Principal Component Analysis (PCA) for dimensionality reduction. The most accurate results come from performing PCA after clustering, since you do not lose any information. Alternatively, you can preprocess the data with PCA down to a smaller number of dimensions and then cluster; that clustering will take less time to complete, as there are fewer dimensions to process in the distance function, and not much information is lost, since most of it is captured by the first few principal components. There are libraries in R to perform PCA: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/princomp.html
Generally speaking Kmeans algorithm can work for any dimensions just make sure that you when calculating the distance you take into account all the N features.
You can still use euclidean distance as |
52,998 | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations between random factors | (Italics represent corrected text)
You are making a 'mistake' in your model specification given what you say you want.
Random effects:
Groups Name Variance Std.Dev. Corr
Item (Intercept) 273.508 16.5381
Subject Gramgram 0.000 0.0000
Gramungram 3717.213 60.9689 NaN
Number1 59.361 7.7046 NaN -1.000
You see the numbers there under Corr for Subject? That shows that you are estimating correlations between the random slopes of Gramungram and Gramgram, Number1 and Gramgram, and Number1 and Gramungram by subject. If Gram were numeric, you could eliminate the random correlation between Gram and Number1 with a model specified by:
m <- lmer(RT ~ Gram*Number + (1|Subject) + (0+Gram|Subject) + (0+Number|Subject) + (1|Item), data = data)
You'll notice that any random effect specified in the same set of parentheses yields a random effect correlation. At least that is true for models without the / symbol; I'm not really familiar with the notation for lmer when that is in the mix.
However, given what we saw from a model where you estimated this parameter, I'd advise caution. Moreover, you'll probably note that my code above doesn't work for you.
EDIT
For those of you just now joining our program... for these examples I'll refer to primingHeid as OP did in the comments, this dataset can be found in languageR.
library(languageR)
library(lme4)
data(primingHeid)
Why doesn't my code work? It doesn't work because Gram is a factor. Think about it for a little bit... and look at your fixed effects. If a factor has two levels, how many parameters must you estimate to explain its effects? Two. Of course, one of the parameters you estimate is the intercept. The interpretation of the intercept will depend on how your factors are coded. In treatment coding (the default in R), the intercept represents the value for a case when all variables are at level 1 (cf. a regression textbook for details of other contrasts). Regardless of your contrasts, two parameters are estimated for two levels of a factor. I think what is happening is that when you fail to specify an intercept R is protecting you from yourself and going ahead and estimating two parameters anyway. Try summary(lm(RT ~ 0 + Condition, data=primingHeid)) and you'll see that it went right on ahead and estimated two parameters. So, back to the context of lmer... if you have a factor with two levels, R will gladly estimate two parameters and then correlate them all under the hood. Back again to your comments... estimate lmer(RT ~ Condition + (0+Condition|Subject), data=primingHeid) and look at the ranef of that model and you'll see yet again that this is exactly what R has done.
If you wanted to force R to stop that, you'd have to do the factor coding manually by turning Condition into a numeric. The assumptions you'd have to make about the mean value of RT when Condition was at the level you coded as 0 are likely untenable (i.e. that RT is really 0). I won't exclude the possibility that with some careful thought, transformation of the DV (centering on the mean of condition you are setting equal to 0?), and good model specification you might work your way somewhere that made some sense... but, that would be an entirely different question and I can't speak to it at the moment.
\EDIT
I think you probably should step back and think about your model structure a little more (which really is one of the great messages in Barr et al., 2013). Are items crossed with gram and number? How many items occur within a unique arrangement of gram and number per subject?
More general issues now...
I have a huge amount of respect for Barr (no surprises there). However, he is not entirely mainstream on issues related to fitting this type of model. That isn't a bad thing ... but time will tell whether his approach for these models will become the next big thing. I have little doubt that 'keeping it maximal' is great if your data will tolerate it. But sometimes it won't. The backwards selection procedure he published involving the use of non-converged models is a bit unexpected. However, I have to admit now that I've seen his appendices I'm a little less sour on the idea than I was when I first read it. All the same, I'd like to see it be vetted a bit more.
You'll note Barr specifically does not use pvals.fnc() for models that have random correlations. So, only having skimmed the published version of his paper, I'd guess you can only use it under his approach if you can backwards step to a point where you don't have any.
Going now to my training with other stats gurus I feel compelled to say that almost all of this worry is an exercise in p-value fetishism that may be entirely misplaced - especially if you consider that this level of nested decision making yields a test whose properties are difficult to define. | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations betw | (Italics represent corrected text)
You are making a 'mistake' in your model specification given what you say you want.
Random effects:
Groups Name Variance Std.Dev. Corr
Item | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations between random factors
(Italics represent corrected text)
You are making a 'mistake' in your model specification given what you say you want.
Random effects:
Groups Name Variance Std.Dev. Corr
Item (Intercept) 273.508 16.5381
Subject Gramgram 0.000 0.0000
Gramungram 3717.213 60.9689 NaN
Number1 59.361 7.7046 NaN -1.000
You see the numbers there under Corr for Subject? That shows that you are estimating correlations between the random slopes of Gramungram and Gramgram, Number1 and Gramgram, and Number1 and Gramungram by subject. If Gram were numeric, you could eliminate the random correlation between Gram and Number1 with a model specified by:
m <- lmer(RT ~ Gram*Number + (1|Subject) + (0+Gram|Subject) + (0+Number|Subject) + (1|Item), data = data)
You'll notice that any random effect specified in the same set of parentheses yields a random effect correlation. At least that is true for models without the / symbol, I'm not really familiar with the notion for lmer when that is in the mix.
However, given what we saw from a model where you estimated this parameter, I'd advise caution. Moreover, you'll probably note that my code above doesn't work for you.
EDIT
For those of you just now joining our program... for these examples I'll refer to primingHeid as OP did in the comments, this dataset can be found in languageR.
library(languageR)
library(lme4)
data(primingHeid)
Why doesn't my code work? It doesn't work because Gram is a factor. Think about it for a little bit... and look at your fixed effects. If a factor has two levels how many parameters must you estimate to explain its effects? Two. Of course, one of the parameters you estimate is the intercept. The interpretation of the intercept will depend on how your factors are coded. In treatment coding (the default in R), the intercept represents the value for a case when all variables are at level 1 (cf. a regression textbook for details of other contrasts). Regardless of your contrasts two parameters are estimated for two levels of a factor. I think what is happening is that when you fail to specify an intercept R is protecting you from yourself and going ahead and estimating two parameters anyway. Try summary(lm(RT ~ 0 + Condition,data=primingHeid)) and you'll see that it went right on ahead and estimated two parameters. So, back to the context of lmer... if you have a factor with two levels, R will gladly estimate two parameters and then correlate them all under the hood. Back again to your comments... estimate lmer(RT ~ Condition +(0+Condition|Subject),data=primingHeid) and look and the ranef of that model and you'll see yet again that this is exactly what R has done.
If you wanted to force R to stop that, you'd have to do the factor coding manually by turning Condition into a numeric. The assumptions you'd have to make about the mean value of RT when Condition was at the level you coded as 0 are likely untenable (i.e. that RT is really 0). I won't exclude the possibility that with some careful thought, transformation of the DV (centering on the mean of condition you are setting equal to 0?), and good model specification you might work your way somewhere that made some sense... but, that would be an entirely different question and I can't speak to it at the moment.
\EDIT
I think you probably should step back and think about your model structure a little more (which really is one of the great messages in Barr et al., 2013). Are items crossed with gram and number? How many items occur within a unique arrangement of gram and number per subject?
More general issues now...
I have a huge amount of respect for Barr (no surprises there). However, he is not entirely mainstream on issues related to fitting this type of model. That isn't a bad thing ... but time will tell whether his approach for these models will become the next big thing. I have little doubt that 'keeping it maximal' is great if your data will tolerate it. But sometimes it won't. The backwards selection procedure he published involving the use of non-converged models is a bit unexpected. However, I have to admit now that I've seen his appendices I'm a little less sour on the idea than I was when I first read it. All the same, I'd like to see it be vetted a bit more.
You'll note Barr specifically does not use pvals.fnc() for models that have random correlations. So, only having skimmed the published version of his paper, I'd guess you can only use it under his approach if you can backwards step to a point where you don't have any.
Going now to my training with other stats gurus I feel compelled to say that almost all of this worry is an exercise in p-value fetishism that may be entirely misplaced - especially if you consider that this level of nested decision making yields a test that has a definition that is difficult to define. | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations betw
(Italics represent corrected text)
You are making a 'mistake' in your model specification given what you say you want.
Random effects:
Groups Name Variance Std.Dev. Corr
Item |
52,999 | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations between random factors | Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the most optimal (certainly most conservative) test is p values based on Kenward-Roger approximation for degrees of freedom (in lmer). With large samples, p values based on the likelihood ratio (through anova()) are just as good. | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations betw | Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the most optimal (certainly most conservative) test is p values based on Kenward-Roger approximation for degrees of | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations between random factors
Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the most optimal (certainly most conservative) test is p values based on Kenward-Roger approximation for degrees of freedom (in lmer). With large samples, p values based on the likelihood ratio (through anova()) are just as good. | I want to use pvals.fnc() to get p-values for a lmer() model but cannot get rid of correlations betw
Luke (2016) Evaluating significance in linear mixed-effects models in R reports that the most optimal (certainly most conservative) test is p values based on Kenward-Roger approximation for degrees of |
53,000 | Statistical meaning of pearsonr() output in Python | The second number is the p value. It can be interpreted as the probability of observing a correlation that extreme in the sample (i.e. that high if it is positive or that low if it is negative) if the true correlation were 0. A low value therefore corresponds to stronger evidence that the correlation is different from 0, and you can perform a test by checking if the p value is under (not above) the threshold. Note that there are several ways to test if a correlation coefficient is different from 0 (see Can p-values for Pearson's correlation test be computed just from correlation coefficient and sample size? and in particular the reference provided by Nick Cox in the comments).
What that threshold should be is really up to you and could in principle be determined based on how important it is not to commit an error and how much power your experiment has. In many scientific disciplines (psychology, biomedicine and neuroscience, possibly economics), the error level is routinely set to 5% (i.e. a p value under .05) and you would call anything under that threshold “statistically significant”. In physics and engineering, the threshold is sometimes much lower (five or six “sigmas”). See also Examples of studies using p < 0.001, p < 0.0001 or even lower p-values? and Comparing and contrasting, p-values, significance levels and type I error
Yes, for a simple linear regression with one predictor and an intercept, $r * r$ is really an estimate of $R^2$. Of course, your code is not explicitly fitting a model or anything but there is a link between the Pearson product-moment correlation, this simple linear model and different other tests. It becomes more complicated if the model includes several predictors (see Regression $R^2$ and correlations).
Adjusted $R^2$ is adjusted to take the number of parameters into account (a model with more parameters can be expected to better predict the data in the sample even if the additional variables aren't really useful). There is a formula in Wikipedia and several earlier questions on this: How to choose between the different Adjusted $R^2$ formulas?, Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better?. If you read more on that, you will notice that there is in fact quite a lot of discussion on how to adjust $R^2$ and the usefulness of these coefficients in practice.
You can apparently get an adjusted $R^2$ directly in Python/SciPy using the ols.ols() function, see http://wiki.scipy.org/Cookbook/OLS | Statistical meaning of pearsonr() output in Python | The second number is the p value. It can be interpreted as the probability to observe a correlation that extreme in the sample (i.e. that high if it is positive or that low if it is negative) if the t | Statistical meaning of pearsonr() output in Python
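A small worked sketch tying these pieces together (the simulated data, the single-predictor setting and the particular adjusted-$R^2$ formula, $1-(1-R^2)(n-1)/(n-p-1)$, are assumptions on my part):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2 * x + rng.normal(size=100)

r, p_value = pearsonr(x, y)          # the two numbers pearsonr() returns
r2 = r ** 2                          # R^2 of the simple linear regression of y on x
n, p = len(x), 1                     # sample size and number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(r, p_value, r2, adj_r2)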
The second number is the p value. It can be interpreted as the probability to observe a correlation that extreme in the sample (i.e. that high if it is positive or that low if it is negative) if the true correlation was 0. A low value therefore correspond to stronger evidence that the correlation is different from 0 and you can perform a test by checking if the p value is under (not above) the threshold. Note that there are several ways to test if a correlation coefficient is different from 0 (see Can p-values for Pearson's correlation test be computed just from correlation coefficient and sample size? and in particular the reference provided by Nick Cox in the comments).
What that threshold should be is really up to you and could in principle be determined based on how important it is not to commit an error and how much power your experiment has. In many scientific disciplines (psychology, biomedicine and neuroscience, possibly economics), the error level is routinely set to 5% (i.e. a p value under .05) and you would call anything under that threshold “statistically significant”. In physics and engineering, the threshold is sometimes much lower (five or six “sigmas”). See also Examples of studies using p < 0.001, p < 0.0001 or even lower p-values? and Comparing and contrasting, p-values, significance levels and type I error
Yes, for a simple linear regression with one predictor and an intercept, $r * r$ is really an estimate of $R^2$. Of course, your code is not explicitly fitting a model or anything but there is a link between the Pearson product-moment correlation, this simple linear model and different other tests. It becomes more complicated if the model includes several predictors (see Regression $R^2$ and correlations).
Adjusted $R^2$ is adjusted to take the number of parameters into account (a model with more parameters can be expected to better predict the data in the sample even if the additional variables aren't really useful). There is a formula in Wikipedia and several earlier questions on this: How to choose between the different Adjusted $R^2$ formulas?, Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better?. If you read more on that, you will notice that there is in fact quite a lot of discussion on how to adjust $R^2$ and the usefulness of these coefficients in practice.
You can apparently get an adjusted $R^2$ directly in Python/SciPy using the ols.ols() function, see http://wiki.scipy.org/Cookbook/OLS | Statistical meaning of pearsonr() output in Python
The second number is the p value. It can be interpreted as the probability to observe a correlation that extreme in the sample (i.e. that high if it is positive or that low if it is negative) if the t |