idx | question | answer
---|---|---
2,601 | Rules of thumb for minimum sample size for multiple regression | I have found this rather recent paper (2015) suggesting that just 2 observations per variable are enough, as long as our interest is in the accuracy of estimated regression coefficients and standard errors (and in the empirical coverage of the resulting confidence intervals) and we use the adjusted $R^2$:
(pdf)
Of course, as the paper also acknowledges, (relative) unbiasedness does not necessarily imply having enough statistical power. However, power and sample size calculations are typically made by specifying the expected effects; in the case of multiple regression, this means a hypothesis about the values of the regression coefficients, or about the correlation matrix between the regressors and the outcome, must be made. In practice, the required sample size depends on the strength of the correlation of the regressors with the outcome and among themselves (obviously, the stronger the correlation with the outcome the better, while things get worse with multicollinearity). For example, in the extreme case of two perfectly collinear variables, you can't perform the regression at all, regardless of the number of observations, even with only 2 covariates.
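To make the "2 observations per variable" claim concrete, here is a rough simulation sketch (my own illustration, not taken from the paper; the coefficient values and simulation sizes are hypothetical): with $n = 2p$ and well-behaved regressors, the OLS coefficient estimates are unbiased on average, even though any single fit is very noisy.
import numpy as np
rng = np.random.default_rng(0)
p, n, n_sims = 5, 10, 2000                      # 2 observations per candidate variable
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])     # hypothetical true coefficients
estimates = np.empty((n_sims, p))
for i in range(n_sims):
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    estimates[i] = np.linalg.lstsq(X, y, rcond=None)[0]
print(estimates.mean(axis=0))                   # close to beta: (approximately) unbiased
print(estimates.std(axis=0))                    # ...but with a large spread for any single fit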
2,602 | Skills hard to find in machine learners? | I have seen multiple times developers use ML techniques. This is the usual pattern:
download library with fancy name;
spend 10 mins reading how to use it (skipping any statistics, maths, etc);
feed it with data (no preprocessing);
measure performance (e.g. accuracy even if classes are totally imbalanced; see the sketch after this list) and tell everybody how awesome it is with its 99% accuracy;
deploy in production with epic fail results;
find somebody who understands what's going on to help them out because the instruction manual makes no sense at all.
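A minimal sketch of the accuracy pitfall mentioned in the list above (hypothetical numbers, not from any real project): with 1% positives, a model that never predicts the positive class already reports 99% accuracy while finding nothing.
import numpy as np
y_true = np.array([1] * 10 + [0] * 990)   # 1% positive class
y_pred = np.zeros_like(y_true)            # "model" that always predicts the majority class
accuracy = (y_true == y_pred).mean()
recall = y_pred[y_true == 1].mean()       # fraction of true positives actually found
print(accuracy, recall)                   # 0.99 0.0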
The simple answer is that (most) software engineers are very weak in stats and math. This is the advantage of anyone who wants to compete with them. Of course stats people are out of their comfort zone if they need to write production code. The kind of role that has become really rare is that of the data scientist: someone who can write code to access and play with enormous amounts of data and find the value in them.
2,603 | Skills hard to find in machine learners? | What it's about
Just knowing about techniques is akin to knowing the animals in a zoo -- you can name them, describe their properties, perhaps identify them in the wild.
Understanding when to use them, formulating, building, testing, and deploying working mathematical models within an application area while avoiding the pitfalls --- these are the skills that distinguish, in my opinion.
The emphasis should be on the science, applying a systematic, scientific approach to business, industrial, and commercial problems. But this requires skills broader than data mining & machine learning, as Robin Bloor argues persuasively in "A Data Science Rant".
So what can one do?
Application areas: learn about various application areas close to your interest, or that of your employer. The area is often less important than understanding how the model was built and how it was used to add value to that area. Models that are successful in one area can often be transplanted and applied to different areas that work in similar ways.
Competitions: try the data mining competition site Kaggle, preferably joining a team of others. (Kaggle: a platform for predictive modeling competitions. Companies, governments and researchers present datasets and problems and the world’s best data scientists compete to produce the best solutions.)
Fundamentals: There are four: (1) solid grounding in statistics, (2) reasonably good programming skills, (3) understanding how to structure complex data queries, (4) building data models. If any are weak, then that's an important place to start.
A few quotes in this respect:
``I learned very early the difference between knowing the name of something and knowing something. You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird... So let's look at the bird and see what it's doing -- that's what counts.'' -- Richard Feynman, "The Making of a Scientist", p14 in What Do You Care What Other People Think, 1988
Keep in mind:
``The combination of skills required to carry out these business science [data science] projects rarely reside in one person. Someone could indeed have attained extensive knowledge in the triple areas of (i) what the business does, (ii) how to use statistics, and (iii) how to manage data and data flows. If so, he or she could indeed claim to be a business scientist (a.k.a., “data scientist”) in a given sector. But such individuals are almost as rare as hen’s teeth.'' -- Robin Bloor, A Data Science Rant, Aug 2013, Inside Analysis
And finally:
``The Map is Not the Territory.''
-- Alfred Korzybski, 1933, Science & Sanity.
Most real, applied problems are not accessible solely from ``the map''. To do practical things with mathematical modelling one must be willing to get grubby with details, subtleties, and exceptions. Nothing can substitute for knowing the territory first-hand.
2,604 | Skills hard to find in machine learners? | I agree with everything that's been said. What stands out for me are:
How few machine learning "experts" are really interested in the subject matter to which they want to apply ML
How few truly understand predictive accuracy and proper scoring rules
How few understand principles of validation
How few know when to use a black box vs. a traditional regression model
How none of the "experts" seem to have ever studied Bayes optimum decision or loss/utility/cost functions [this lack of understanding is displayed almost any time someone uses classification instead of predicted risk] | Skills hard to find in machine learners? | I agree with everything that's been said. What stands out for me are:
How few machine learning "experts" are really interested in the subject matter to which they want to apply ML
How few truly unde | Skills hard to find in machine learners?
I agree with everything that's been said. What stands out for me are:
How few machine learning "experts" are really interested in the subject matter to which they want to apply ML
How few truly understand predictive accuracy and proper scoring rules
How few understand principles of validation
How few know when to use a black box vs. a traditional regression model
How none of the "experts" seem to have ever studied Bayes optimum decision or loss/utility/cost functions [this lack of understanding is displayed almost any time someone uses classification instead of predicted risk] | Skills hard to find in machine learners?
I agree with everything that's been said. What stands out for me are:
How few machine learning "experts" are really interested in the subject matter to which they want to apply ML
How few truly unde |
2,605 | Skills hard to find in machine learners? | Here are a couple of things to make you stand out from the crowd:
Understand the application domain or domains. That is, the business environment or other context.
Understand the big picture. This is very important! People who study machine learning often get lost in the details. Think about the overall picture that your ML models will fit into. Often the ML part is just a small segment of a much larger system. Understand the whole system.
Study utility and decision theory and Bayesian inference, not just whatever is now considered "the usual" ML models. Bayesian inference is just a way to formalize the notion of bringing all contextual information to bear on a problem. Utility and decision theory is about bringing values into the picture.
The overall message that applies to all three points: Look at the big picture, don't get lost in the details.
2,606 | Skills hard to find in machine learners? | I would put out there the notion of "soft skills".
recognising who the "expert" is for method X, and being able to tap into their knowledge (you shouldn't be able to or expected to know everything about erything). The ability and willingness to collaborate with others.
the ability to translate or represent "the real world" with the mathematics used in ML.
the ability to explain your methods in different ways to different audiences - knowing when to focus on details and when to step back and view the wider context.
systems thinking, being able to see how your role feeds into other areas of the business, and how these areas feed back into your work.
an appreciation and understanding of uncertainty, and having some structured methods to deal with it. Being able to state clearly what your assumptions are.
recognising who the "expert" is for method X, and being able to tap into their knowledge (you shouldn't be able to or expected to know everything ab | Skills hard to find in machine learners?
I would put out there the notion of "soft skills".
recognising who the "expert" is for method X, and being able to tap into their knowledge (you shouldn't be able to or expected to know everything about erything). The ability and willingness to collaborate with others.
the ability to translate or represent "the real world" with the mathematics used in ML.
the ability to explain your methods in different ways to different audiences - knowing when to focus on details and when to step back and view the wider context.
systems thinking, being able to see how your role feeds into other areas of the business, and how these areas feed back into your work.
an appreciation and understanding of uncertainty, and having some structured methods to deal with it. Being able to state clearly what your assumptions are. | Skills hard to find in machine learners?
I would put out there the notion of "soft skills".
recognising who the "expert" is for method X, and being able to tap into their knowledge (you shouldn't be able to or expected to know everything ab |
2,607 | Skills hard to find in machine learners? | Being able to generalize well
This is the essence of a good model. And it is the essence of what makes the best practitioners of the art of machine learning stand out from the crowd.
Understanding that the goal is to maximize performance on unseen data, i.e. minimize generalization error, not to minimize training error. Knowing how to avoid both over-fitting and under-fitting. Coming up with models that are not too complex yet not too simple in describing the problem. Extracting the gist of a training set, rather than the maximum possible.
It is surprising how often even experienced machine learning practitioners fail to follow this principle. One reason is that humans fail to appreciate two vast theory-vs-practice differences in magnitude:
How much larger is the space of all possible examples compared to the training-data at hand, even when the training data is very large.
How much larger is the full "hypothesis space": number of possible models for a problem, compared to the practical "solution space": everything you can think of, and everything your software/tools are capable of representing.
The 2nd magnitude gap is especially hard to grasp. Even for the simplest problem with $N$ binary inputs and a binary outcome, there are $2^N$ possible input examples. And this is dwarfed by the exponentially larger "hypothesis space" of $2^{2^N}$ possible models, one for every way of labeling those examples.
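A small numerical illustration of these two gaps (my own sketch, not part of the original answer): for $N$ binary inputs, count the $2^N$ possible examples and the number of decimal digits of the $2^{2^N}$ possible labelings.
import math
for N in (3, 5, 10, 20):
    n_examples = 2 ** N
    digits_of_n_models = int(n_examples * math.log10(2)) + 1   # number of digits of 2**(2**N)
    print(f"N={N:2d}  examples={n_examples:8d}  models ~ 10^{digits_of_n_models - 1}")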
This is also what most of the above answers said in more specific and concrete ways; "to generalize well" is just the shortest way I could think of to put it.
2,608 | Skills hard to find in machine learners? | The skill that sets one data miner apart from others is the ability to interpret machine learning models. Most build a machine, report the error and then stop. What are the mathematical relationships between the features? Are the effects additive or non-additive or both? Are any of the features irrelevant? Is the machine expected under the null hypothesis that there are only chance patterns in the data? Does the model generalize to independent data? What do these patterns mean for the problem being studied? What are the inferences? What are the insights? Why should a domain expert get excited? Will the machine lead to the domain expert asking new questions and designing new experiments? Can the data miner effectively communicate the model and its implications to the world?
2,609 | Skills hard to find in machine learners? | Having done scientific research in machine learning / statistical pattern recognition for 17 years, I can come up with a few skills that make a sought-after data scientist stand out from others.
Machine learning is about:
Acquiring algorithmic knowledge of the learning algorithms out there, and the skill to apply these learning algorithms successfully to practical ML problems,
Gaining the required level of knowledge in probability theory (beginning with Bayes' rule), parametric statistics and nonparametric statistics so as to assess and compare different types of learnable models, model performance, confidence intervals, sampling theory, and ML-estimation. Don't underestimate the level of statistical knowledge required to become a skilled professional (go through the proof of the central limit theorem, for example, and understand when this theorem is not applicable),
Understand mathematical approximation theory to a level where you can see why feed-forward neural networks (with 2 hidden layers, or more) are universal approximators - an offspring of Kolmogorov's theorem,
Gain practical experience from learning classifiers on many different training sets, and validate their performance on independent test sets,
Understand that optimal feature selection and model selection require knowledge of how algorithmics and statistics intertwine - take as an example the branch-and-bound algorithm for feature selection. Recognize that feature selection and model selection always involve a bias-variance trade-off (between performance variance and optimal model fit),
Go through the derivations by Richard & Lippmann (1991) of why neural network classifiers estimate Bayesian a posteriori probabilities,
Get acquainted with major scientific breakthroughs in the last eight decades with respect to the development of statistical and algorithmic prediction models, beginning with linear discriminant analysis (a statistical classifier),
Embrace the fact that no one machine learning scheme is optimal for almost 'all kinds of problems'. So - neural networks are not better than all other models, and neither are support vector machines or random forests, for that matter - it all depends on the statistics of the underlying domain. Practical machine learning remains an experimental science, but many relevant theoretical results have been published in the literature over the years.
It's hard work to be able to span algorithmics, statistics and mathematical approximation theory. I did my Ph.D. in machine learning and only really became a professional after more than 10 years of work.
A final note is that it is not always necessary to be a programmer to apply machine learning algorithms. ML-suites like Weka or available classifier services like insight classifiers let data scientists apply different ML-algorithms without having to program in for example Python or R.
It is a great discipline - machine learning.
2,610 | Skills hard to find in machine learners? | I see there are two parts to handling machine learning in practice:
Engineering (which covers all the algorithms, learning different packages, programming).
Curiosity/Reasoning (ability to ask better questions to data).
I think 'curiosity/reasoning' is the skill which distinguishes one from others.
For example, if you look at the leaderboards of Kaggle competitions, many people may have used common (similar) algorithms; what makes the difference is how one logically questions the data and formulates the problem.
2,611 | What are the advantages of the Wasserstein metric compared to the Kullback-Leibler divergence? | When considering the advantages of the Wasserstein metric compared to the KL divergence, the most obvious one is that $W$ is a metric whereas the KL divergence is not, since KL is not symmetric (i.e. $D_{KL}(P||Q) \neq D_{KL}(Q||P)$ in general) and does not satisfy the triangle inequality (i.e. $D_{KL}(R||P) \leq D_{KL}(Q||P) + D_{KL}(R||Q)$ does not hold in general).
As for the practical differences, one of the most important is that, unlike KL (and many other measures), the Wasserstein metric takes the underlying metric space into account. What this means in less abstract terms is perhaps best explained by an example (feel free to skip to the figure; the code is just for producing it):
import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
# define samples this way as scipy.stats.wasserstein_distance can't take probability distributions directly
sampP = [1,1,1,1,1,1,2,3,4,5]
sampQ = [1,2,3,4,5,5,5,5,5,5]
# and for scipy.stats.entropy (gives KL divergence here) we want distributions
P = np.unique(sampP, return_counts=True)[1] / len(sampP)
Q = np.unique(sampQ, return_counts=True)[1] / len(sampQ)
# compare to this sample / distribution:
sampQ2 = [1,2,2,2,2,2,2,3,4,5]
Q2 = np.unique(sampQ2, return_counts=True)[1] / len(sampQ2)
fig = plt.figure(figsize=(10,7))
fig.subplots_adjust(wspace=0.5)
plt.subplot(2,2,1)
plt.bar(np.arange(len(P)), P, color='r')
plt.xticks(np.arange(len(P)), np.arange(1,6), fontsize=0)
plt.subplot(2,2,3)
plt.bar(np.arange(len(Q)), Q, color='b')
plt.xticks(np.arange(len(Q)), np.arange(1,6))
plt.title("Wasserstein distance {:.4}\nKL divergence {:.4}".format(
scipy.stats.wasserstein_distance(sampP, sampQ), scipy.stats.entropy(P, Q)), fontsize=10)
plt.subplot(2,2,2)
plt.bar(np.arange(len(P)), P, color='r')
plt.xticks(np.arange(len(P)), np.arange(1,6), fontsize=0)
plt.subplot(2,2,4)
plt.bar(np.arange(len(Q2)), Q2, color='b')
plt.xticks(np.arange(len(Q2)), np.arange(1,6))
plt.title("Wasserstein distance {:.4}\nKL divergence {:.4}".format(
scipy.stats.wasserstein_distance(sampP, sampQ2), scipy.stats.entropy(P, Q2)), fontsize=10)
plt.show()
Here the measures between the red and blue distributions are the same for the KL divergence, whereas the Wasserstein distance measures the work required to transport the probability mass from the red state to the blue state using the x-axis as a "road". This measure obviously grows the further away the probability mass has to be moved (hence the alias earth mover's distance). So which one you want to use depends on your application area and what you want to measure. As a note, instead of the KL divergence there are also other options, like the Jensen-Shannon distance, that are proper metrics.
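As a footnote to that last remark, here is a minimal sketch (my addition, reusing the P, Q and Q2 arrays defined above) of the Jensen-Shannon distance available in SciPy:
from scipy.spatial.distance import jensenshannon
print(jensenshannon(P, Q), jensenshannon(Q, P))   # symmetric, unlike the KL divergence
print(jensenshannon(P, Q2))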
2,612 | What are the advantages of the Wasserstein metric compared to the Kullback-Leibler divergence? | The Wasserstein metric most commonly appears in optimal transport problems where the goal is to move things from a given configuration to a desired configuration at minimum cost or minimum distance. The Kullback-Leibler (KL) divergence is a divergence, not a metric, and shows up very often in statistics, machine learning, and information theory.
Also, the Wasserstein metric does not require the two measures to have the same support, whereas the KL divergence does (it blows up when one measure puts mass where the other has none).
Perhaps the easiest place to see the difference between the Wasserstein distance and the KL divergence is the multivariate Gaussian case, where both have closed-form solutions. Let's assume that these distributions have dimension $k$, means $\mu_i$, and covariance matrices $\Sigma_i$, for $i=0,1$. The two formulae are:
$$
W_{2} (\mathcal{N}_0, \mathcal{N}_1)^2 = \| \mu_0 - \mu_1 \|_2^2 + \mathop{\mathrm{tr}} \bigl( \Sigma_0 + \Sigma_1 - 2 \bigl( \Sigma_1^{1/2} \Sigma_0 \Sigma_1^{1/2} \bigr)^{1/2} \bigr)
$$
and
$$
D_\text{KL} (\mathcal{N}_0, \mathcal{N}_1) = \frac{1}{2}\left( \operatorname{tr} \left(\Sigma_1^{-1}\Sigma_0\right) + (\mu_1 - \mu_0)^\mathsf{T} \Sigma_1^{-1}(\mu_1 - \mu_0) - k + \ln \left(\frac{\det\Sigma_1}{\det\Sigma_0}\right) \right).
$$
To simplify, let's consider $\Sigma_0=\Sigma_1=wI_k$ and $\mu_0\neq\mu_1$.
With these simplifying assumptions, the trace term in the Wasserstein distance is $0$, the trace term in the KL divergence cancels against the $-k$ term (since $\operatorname{tr}(\Sigma_1^{-1}\Sigma_0)=k$), and the log-determinant ratio is also $0$, so the two quantities become:
$$
W_{2} (\mathcal{N}_0, \mathcal{N}_1)^2 = \| \mu_0 - \mu_1 \|_2^2
$$
and
$$
D_\text{KL} (\mathcal{N}_0, \mathcal{N}_1) = \frac{1}{2}(\mu_1 - \mu_0)^\mathsf{T} \Sigma_1^{-1}(\mu_1 - \mu_0) = \frac{1}{2w}\| \mu_1 - \mu_0 \|_2^2.
$$
Notice that the Wasserstein distance does not change if the variance changes (say, take $w$ to be a large quantity in the covariance matrices), whereas the KL divergence does. This is because the Wasserstein distance is a distance function in the joint support space of the two probability measures, while the KL divergence is a divergence that changes based on the information content (signal-to-noise ratio) of the distributions.
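A quick numerical check of the simplified formulas above (my own sketch, assuming isotropic covariances $wI_k$ and arbitrary example means): the squared Wasserstein distance stays fixed while the KL divergence shrinks as $w$ grows.
import numpy as np
mu0 = np.array([0.0, 0.0])
mu1 = np.array([3.0, 4.0])                      # ||mu1 - mu0||^2 = 25
for w in (1.0, 10.0, 100.0):
    w2_squared = np.sum((mu1 - mu0) ** 2)       # independent of w
    kl = np.sum((mu1 - mu0) ** 2) / (2 * w)     # quadratic form with Sigma_1 = w*I, times 1/2
    print(f"w={w:6.1f}  W2^2={w2_squared:5.1f}  KL={kl:8.3f}")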
2,613 | What are the advantages of the Wasserstein metric compared to the Kullback-Leibler divergence? | The Wasserstein metric is useful in the validation of models because its units are those of the response itself. For example, if you are comparing two stochastic representations of the same system (e.g. a reduced-order model), $P$ and $Q$, and the response is in units of displacement, the Wasserstein metric is also in units of displacement. If you reduce each stochastic representation to a deterministic one, each distribution's CDF is a step function, and the Wasserstein metric is the difference between the two values.
I find this property a very natural way to extend the notion of the absolute difference between two random variables.
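A tiny sketch of that point (my addition, with hypothetical values): two deterministic representations are just point masses, and scipy.stats.wasserstein_distance between them is exactly the absolute difference of the two values.
from scipy.stats import wasserstein_distance
print(wasserstein_distance([3.0], [5.0]))   # 2.0, i.e. |3.0 - 5.0|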
2,614 | What are the advantages of the Wasserstein metric compared to the Kullback-Leibler divergence? | As an extension to the answer from antiquity regarding scipy.stats.wasserstein_distance: if you already have binned data with given bin distances, you can use u_weights and v_weights. Assuming your data is binned into equidistant bins (and reusing sampP, sampQ, P and Q from that answer):
import numpy as np
from scipy.stats import wasserstein_distance
wasserstein_distance(sampP, sampQ)
>> 2.0
wasserstein_distance(np.arange(len(P)), np.arange(len(Q)), P, Q)
>> 2.0
See scipy.stats._cdf_distance and scipy.stats.wasserstein_distance
Additional example:
import numpy as np
from scipy.stats import wasserstein_distance
# example samples (not binned)
X1 = np.array([6, 1, 2, 3, 5, 5, 1])
X2 = np.array([1, 4, 3, 1, 6, 6, 4])
# equal distant binning for both samples
bins = np.arange(1, 8)
X1b, _ = np.histogram(X1, bins)
X2b, _ = np.histogram(X2, bins)
# bin "positions"
pos_X1 = np.arange(len(X1b))
pos_X2 = np.arange(len(X2b))
print(wasserstein_distance(X1, X2))
print(wasserstein_distance(pos_X1, pos_X2, X1b, X2b))
>> 0.5714285714285714
>> 0.5714285714285714
When I calculated the Wasserstein distance I worked with already-binned data (histograms). In order to retrieve the same result from scipy.stats.wasserstein_distance using already-binned data, you have to add
u_weights: corresponding to the counts in every bin of the binned data of sample X1
v_weights: corresponding to the counts in every bin of the binned data of sample X2
as well as the "positions" (pos_X1 and pos_X2) of the bins. They describe the distances between the bins. Since the Wasserstein Distance or Earth Mover's Distance tries to minimize work which is proportional to flow times distance, the distance between bins is very important. Of course, this example (sample vs. histograms) only yields the same result if bins as described above are chosen (one bin for every integer between 1 and 6). | What is the advantages of Wasserstein metric compared to Kullback-Leibler divergence? | As an extension for the answer from antiquity regarding scipy.stats.wasserstein_distance: If you have already binned data with given bin-distances, you can use u_weights and v_weights. Assuming your d | What is the advantages of Wasserstein metric compared to Kullback-Leibler divergence?
2,615 | What are the advantages of the Wasserstein metric compared to the Kullback-Leibler divergence? | The Wasserstein metric has one main drawback, related to invariance.
For instance, for homogeneous domains as simple as the Poincaré upper half-plane, the Wasserstein metric is not invariant with respect to the automorphisms of this space. In that setting only the Fisher metric from information geometry remains valid, together with its extension by Jean-Louis Koszul and Jean-Marie Souriau.
2,616 | Reduce Classification Probability Threshold | Frank Harrell has written about this on his blog: Classification vs. Prediction, which I agree with wholeheartedly.
Essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component. And here, you need the probabilistic output of your model - but also considerations like:
What are the consequences of deciding to treat a new observation as class 1 vs. 0? Do I then send out a cheap marketing mail to all 1s? Or do I apply an invasive cancer treatment with big side effects?
What are the consequences of treating a "true" 0 as 1, and vice versa? Will I tick off a customer? Subject someone to unnecessary medical treatment?
Are my "classes" truly discrete? Or is there actually a continuum (e.g., blood pressure), where clinical thresholds are in reality just cognitive shortcuts? If so, how far beyond a threshold is the case I'm "classifying" right now?
Or does a low-but-positive probability to be class 1 actually mean "get more data", "run another test"?
So, to answer your question: talk to the end consumer of your classification, and get answers to the questions above. Or explain your probabilistic output to her or him, and let her or him walk through the next steps.
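To make the decision component concrete, here is a minimal sketch (my own illustration, not from Harrell's post; the cost values are hypothetical) of the standard expected-cost argument: with a calibrated probability $p$ and per-case costs $c_{FP}$ (acting on a true 0) and $c_{FN}$ (failing to act on a true 1), acting is cheaper in expectation exactly when $p$ exceeds $c_{FP}/(c_{FP}+c_{FN})$.
def optimal_threshold(c_fp, c_fn):
    # act (classify as 1) when p * c_fn > (1 - p) * c_fp, i.e. when p > c_fp / (c_fp + c_fn)
    return c_fp / (c_fp + c_fn)

print(optimal_threshold(c_fp=1.0, c_fn=20.0))   # ~0.048: a cheap mailing justifies a very low threshold
print(optimal_threshold(c_fp=5.0, c_fn=5.0))    # 0.5: symmetric costs give the familiar default
With asymmetric costs the "best" threshold can legitimately sit far below 0.5, which is exactly the situation described in the question.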
Here is another way of looking at this. You ask:
what if I find out that the classifier performs better if I classify the class as 1 also when the probabilities are larger than, for instance, 0.2?
The key word in this question is "better". What does it mean that your classifier performs "better"? This of course depends on your evaluation metric, and depending on your metric, a "better" performing classifier may look very different. In a numerical prediction framework, I have written a short paper on this (Kolassa, 2020), but the exact same thing happens for classification.
Importantly, this is the case even if we have perfect probabilistic classifications. That is, they are calibrated: if an instance is predicted to have a probability $\hat{p}$ to belong to the target class, then that is indeed its true probability to be of that class.
As an illustration, suppose you have applied your probabilistic classifier to a new set of instances. Some of them have a high predicted probability to belong to the target class, more not. Perhaps the distribution of these predicted probabilities looks like this:
Now suppose you need to make hard 0-1 classifications. For that, you need to decide on a threshold such that you will classify each instance into the target class if its predicted probability exceeds that threshold. What is the optimal threshold to use?
Based on my paragraph above, it should not come as a surprise that this optimal threshold (where the classifier performs "best") depends on the evaluation measure. In this case, we can simulate: we draw $10^7$ samples for the predicted probability as above, then for each sample $\hat{p}$ assign it to the target class with probability $\hat{p}$, as the ground truth. In parallel, we can compare the probabilities to all possible thresholds $0\leq t\leq 1$ and evaluate common error measures for such thresholded hard classifications:
These plots are unsurprising. Using a threshold of $t=0$ (assigning everything to the target class) yields a perfect recall of $1$. Precision is undefined for high thresholds where there are no instances whose predicted probabilities exceed that threshold, and it is unstable just below that high threshold, depending on whether the highest-scoring instances are in the target class or not. Finally, since we have an unbalanced dataset with more negatives than positives, assigning everything to the non-target class (i.e., using a threshold of $t=1$) maximizes accuracy.
So, these three measures elicit classifications that are probably not very useful. In practice, people often use combinations of precision and recall. One very common such combination is the F1 score, which will indeed elicit an "optimal" threshold that is not $0$ or $1$, but in between. Sounds better, right?
However, note that this again depends on the particular weight between precision and recall we want. The F1 score uses equal weighting, but it is just one member of an entire family of evaluation metrics parameterized by the relative weights of precision and recall. And, again unsurprisingly, the "optimal" threshold depends on which $F_\beta$ score we use, i.e., on which weight we use, and we are back to square one: in order to find the "optimal" classifier, we need to tailor our evaluation metric to the business problem at hand.
R code:
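# Simulate 10^7 calibrated predicted probabilities from a Beta(2,10), plus ground-truth classes drawn with exactly those probabilities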
aa <- 2
bb <- 10
n_sims <- 1e7
set.seed(1)
sim_probs <- rbeta(n_sims,aa,bb)
sim_actuals <- runif(n_sims)<sim_probs
summary(sim_probs)
summary(sim_actuals)
par(mai=c(.5,.5,.5,.1))
xx <- seq(0,1,by=.01)
plot(xx,dbeta(xx,aa,bb),type="l",xlab="",ylab="",
las=1,main="Distribution of predicted probabilities")
thresholds <- seq(0,1,by=0.01)
recall <- sapply(thresholds,function(tt)
sum(sim_probs>=tt & sim_actuals)/sum(sim_actuals))
precision <- sapply(thresholds,function(tt)
sum(sim_probs>=tt & sim_actuals)/sum(sim_probs>=tt))
accuracy <- sapply(thresholds,function(tt)
(sum(sim_probs>=tt & sim_actuals)+sum(sim_probs<tt & !sim_actuals))/n_sims)
opar <- par(mfrow=c(1,3),mai=c(.7,.5,.5,.1))
plot(thresholds,recall,type="l",xlab="Threshold",
ylab="",las=1,main="Recall")
plot(thresholds,precision,type="l",xlab="Threshold",
ylab="",las=1,main="Precision")
plot(thresholds,accuracy,type="l",xlab="Threshold",
ylab="",las=1,main="Accuracy")
betas <- c(0.5,1,2)
FF <- sapply(betas,function(bb)
sapply(thresholds,function(tt)
(1+bb^2)*sum(sim_probs>=tt & sim_actuals)/
((1+bb^2)*sum(sim_probs>=tt & sim_actuals)+
sum(sim_probs>=tt & !sim_actuals)+bb^2*sum(sim_probs<tt & sim_actuals))))
for ( ii in seq_along(betas) ) {
plot(thresholds,FF[,ii],type="l",xlab="Threshold",
ylab="",las=1,main=paste0("F",betas[ii]," score"))
abline(v=thresholds[which.max(FF[,ii])],col="red")
} | Reduce Classification Probability Threshold | Frank Harrell has written about this on his blog: Classification vs. Prediction, which I agree with wholeheartedly.
Essentially, his argument is that the statistical component of your exercise ends wh | Reduce Classification Probability Threshold
Frank Harrell has written about this on his blog: Classification vs. Prediction, which I agree with wholeheartedly.
Essentially, his argument is that the statistical component of your exercise ends when you output a probability for each class of your new sample. Choosing a threshold beyond which you classify a new observation as 1 vs. 0 is not part of the statistics any more. It is part of the decision component. And here, you need the probabilistic output of your model - but also considerations like:
What are the consequences of deciding to treat a new observation as class 1 vs. 0? Do I then send out a cheap marketing mail to all 1s? Or do I apply an invasive cancer treatment with big side effects?
What are the consequences of treating a "true" 0 as 1, and vice versa? Will I tick off a customer? Subject someone to unnecessary medical treatment?
Are my "classes" truly discrete? Or is there actually a continuum (e.g., blood pressure), where clinical thresholds are in reality just cognitive shortcuts? If so, how far beyond a threshold is the case I'm "classifying" right now?
Or does a low-but-positive probability to be class 1 actually mean "get more data", "run another test"?
So, to answer your question: talk to the end consumer of your classification, and get answers to the questions above. Or explain your probabilistic output to her or him, and let her or him walk through the next steps.
Here is another way of looking at this. You ask:
what if I find out that, if I classify the class as 1 also when the probabilities are larger than, for instance, 0.2, the classifier performs better?
The key word in this question is "better". What does it mean that your classifier performs "better"? This of course depends on your evaluation metric, and depending on your metric, a "better" performing classifier may look very different. In a numerical prediction framework, I have written a short paper on this (Kolassa, 2020), but the exact same thing happens for classification.
Importantly, this is the case even if we have perfect probabilistic classifications. That is, they are calibrated: if an instance is predicted to have a probability $\hat{p}$ to belong to the target class, then that is indeed its true probability to be of that class.
As an illustration, suppose you have applied your probabilistic classifier to a new set of instances. Some of them have a high predicted probability of belonging to the target class; most do not. Perhaps the distribution of these predicted probabilities looks like this:
Now suppose you need to make hard 0-1 classifications. For that, you need to decide on a threshold such that you will classify each instance into the target class if its predicted probability exceeds that threshold. What is the optimal threshold to use?
Based on my paragraph above, it should not come as a surprise that this optimal threshold (where the classifier performs "best") depends on the evaluation measure. In this case, we can simulate: we draw $10^7$ samples for the predicted probability as above, then for each sample $\hat{p}$ assign it to the target class with probability $\hat{p}$, as the ground truth. In parallel, we can compare the probabilities to all possible thresholds $0\leq t\leq 1$ and evaluate common error measures for such thresholded hard classifications:
These plots are unsurprising. Using a threshold of $t=0$ (assigning everything to the target class) yields a perfect recall of $1$. Precision is undefined for high thresholds where there are no instances whose predicted probabilities exceed that threshold, and it is unstable just below that high threshold, depending on whether the highest-scoring instances are in the target class or not. Finally, since we have an unbalanced dataset with more negatives than positives, assigning everything to the non-target class (i.e., using a threshold of $t=1$) maximizes accuracy.
So, these three measures elicit classifications that are probably not very useful. In practice, people often use combinations of precision and recall. One very common such combination is the F1 score, which will indeed elicit an "optimal" threshold that is not $0$ or $1$, but in between. Sounds better, right?
However, note that this again depends on the particular weight between precision and recall we want. The F1 score uses equal weighting, but it is just one member of an entire family of evaluation metrics parameterized by the relative weights of precision and recall. And, again unsurprisingly, the "optimal" threshold depends on which F$\beta$ score we use, i.e., on which weight we use, and we are back to square one: in order to find the "optimal" classifier, we need to tailor our evaluation metric to the business problem at hand.
R code:
aa <- 2
bb <- 10
n_sims <- 1e7
set.seed(1)
sim_probs <- rbeta(n_sims,aa,bb)
sim_actuals <- runif(n_sims)<sim_probs
summary(sim_probs)
summary(sim_actuals)
par(mai=c(.5,.5,.5,.1))
xx <- seq(0,1,by=.01)
plot(xx,dbeta(xx,aa,bb),type="l",xlab="",ylab="",
las=1,main="Distribution of predicted probabilities")
thresholds <- seq(0,1,by=0.01)
recall <- sapply(thresholds,function(tt)
sum(sim_probs>=tt & sim_actuals)/sum(sim_actuals))
precision <- sapply(thresholds,function(tt)
sum(sim_probs>=tt & sim_actuals)/sum(sim_probs>=tt))
accuracy <- sapply(thresholds,function(tt)
(sum(sim_probs>=tt & sim_actuals)+sum(sim_probs<tt & !sim_actuals))/n_sims)
opar <- par(mfrow=c(1,3),mai=c(.7,.5,.5,.1))
plot(thresholds,recall,type="l",xlab="Threshold",
ylab="",las=1,main="Recall")
plot(thresholds,precision,type="l",xlab="Threshold",
ylab="",las=1,main="Precision")
plot(thresholds,accuracy,type="l",xlab="Threshold",
ylab="",las=1,main="Accuracy")
betas <- c(0.5,1,2)
FF <- sapply(betas,function(bb)
sapply(thresholds,function(tt)
(1+bb^2)*sum(sim_probs>=tt & sim_actuals)/
((1+bb^2)*sum(sim_probs>=tt & sim_actuals)+
sum(sim_probs>=tt & !sim_actuals)+bb^2*sum(sim_probs<tt & sim_actuals))))
for ( ii in seq_along(betas) ) {
plot(thresholds,FF[,ii],type="l",xlab="Threshold",
ylab="",las=1,main=paste0("F",betas[ii]," score"))
abline(v=thresholds[which.max(FF[,ii])],col="red")
} | Reduce Classification Probability Threshold
Frank Harrell has written about this on his blog: Classification vs. Prediction, which I agree with wholeheartedly.
Essentially, his argument is that the statistical component of your exercise ends wh |
2,617 | Reduce Classification Probability Threshold | Stephan's answer is great. It fundamentally depends on what you want to do with the classifier.
Just adding a few examples.
A way to find the best threshold is to define an objective function. For binary classification, this can be accuracy or F1-score for example. Depending on which you choose, the best threshold will be different. For F1-score, there is an interesting answer here: What is F1 Optimal Threshold? How to calculate it? . But saying "I want to use F1-score" is where you actually make the choice. Whether this choice is good or not depends on the final purpose.
Another way to see it is as facing the trade-off between exploration and exploitation (Stephan's last point). The multi-armed bandit is an example of such a problem: you have to deal with the two conflicting objectives of acquiring information and choosing the best bandit. One Bayesian strategy (Thompson sampling) is to choose each bandit randomly with the probability that it is the best. It's not exactly classification, but it deals with output probabilities in a similar way.
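Here is a minimal R sketch of that strategy (Thompson sampling); the success probabilities are made up for illustration:
# Thompson sampling for Bernoulli bandits (illustrative sketch only)
set.seed(1)
true_p <- c(0.3, 0.5, 0.6)                  # hypothetical success probabilities
wins <- losses <- rep(0, 3)
for (t in 1:1000) {
  draws <- rbeta(3, wins + 1, losses + 1)   # one posterior draw per bandit
  arm <- which.max(draws)                   # plays each arm with P(it is the best)
  reward <- rbinom(1, 1, true_p[arm])
  wins[arm] <- wins[arm] + reward
  losses[arm] <- losses[arm] + 1 - reward
}
wins + losses                               # most pulls end up on the best bandit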
If the classifier is just one brick in decision making algorithm, then the best threshold will depend on the final purpose of the algorithm. It should be evaluated and tuned in regard to the objective function of the whole process. | Reduce Classification Probability Threshold | Stephan's answer is great. It fundamentally depends on what you want to do with the classifier.
Just adding a few examples.
A way to find the best threshold is to define an objective function. For bin | Reduce Classification Probability Threshold
Stephan's answer is great. It fundamentally depends on what you want to do with the classifier.
Just adding a few examples.
A way to find the best threshold is to define an objective function. For binary classification, this can be accuracy or F1-score for example. Depending on which you choose, the best threshold will be different. For F1-score, there is an interesting answer here: What is F1 Optimal Threshold? How to calculate it? . But saying "I want to use F1-score" is where you actually make the choice. Whether this choice is good or not depends on the final purpose.
Another way to see it is as facing the trade-off between exploration and exploitation (Stephan's last point). The multi-armed bandit is an example of such a problem: you have to deal with the two conflicting objectives of acquiring information and choosing the best bandit. One Bayesian strategy (Thompson sampling) is to choose each bandit randomly with the probability that it is the best. It's not exactly classification, but it deals with output probabilities in a similar way.
If the classifier is just one brick in decision making algorithm, then the best threshold will depend on the final purpose of the algorithm. It should be evaluated and tuned in regard to the objective function of the whole process. | Reduce Classification Probability Threshold
Stephan's answer is great. It fundamentally depends on what you want to do with the classifier.
Just adding a few examples.
A way to find the best threshold is to define an objective function. For bin |
2,618 | Reduce Classification Probability Threshold | There is possibly some value in considering how the probability is calculated. These days, Classifiers use a bias vector, which is multiplied by a matrix (linear algebra). As long as there are any non-zero values in the vector, the probability (the product of the vector and the matrix) will never be 0.
This causes confusion in the real world of people who didn't take linear algebra, I guess. They are bothered by the fact that there are probability scores for items that they think should have 0. In other words, they are confusing the statistical input, from the decision based on that input. As humans, we could say that something with a probability of 0.0002234 is the same as 0, in most "practical" use cases. In higher cognitive science discussions, maybe, there is an interesting discussion about why the bias vector does this, or rather, is this valid for cognitive applications. | Reduce Classification Probability Threshold | There is possibly some value in considering how the probability is calculated. These days, Classifiers use a bias vector, which is multiplied by a matrix (linear algebra). As long as there are any no | Reduce Classification Probability Threshold
There is possibly some value in considering how the probability is calculated. These days, Classifiers use a bias vector, which is multiplied by a matrix (linear algebra). As long as there are any non-zero values in the vector, the probability (the product of the vector and the matrix) will never be 0.
This causes confusion in the real world of people who didn't take linear algebra, I guess. They are bothered by the fact that there are probability scores for items that they think should have 0. In other words, they are confusing the statistical input, from the decision based on that input. As humans, we could say that something with a probability of 0.0002234 is the same as 0, in most "practical" use cases. In higher cognitive science discussions, maybe, there is an interesting discussion about why the bias vector does this, or rather, is this valid for cognitive applications. | Reduce Classification Probability Threshold
There is possibly some value in considering how the probability is calculated. These days, Classifiers use a bias vector, which is multiplied by a matrix (linear algebra). As long as there are any no |
2,619 | Reduce Classification Probability Threshold | There is no wrong threshold. The threshold you choose depends on your objective in your prediction, or rather on what you want to favor, for example precision versus recall (try to graph it and measure its associated AUC to compare different classification models of your choosing).
I am giving you this example of precision vs recall, because my own problem case i am working on right now, i choose my threshold depending of the minimal precision (or PPV Positive Predictive Value) i want my model to have when predicting, but i do not care much about negatives. As such i take the threshold that corresponds to the wanted precision once i have trained my model. Precision is my constraint and Recall is the performance of my model, when i compare to other classification models. | Reduce Classification Probability Threshold | There is no wrong threshold. The threshold you choose depends of your objective in your prediction, or rather what you want to favor, for example precision versus recall (try to graph it and measure i | Reduce Classification Probability Threshold
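A small R sketch of this recipe on simulated, calibrated probabilities (the required precision of 0.5 is just an assumed constraint):
# Sketch: choose the smallest threshold whose precision meets the required PPV,
# then report the recall obtained at that threshold
set.seed(1)
p_hat <- rbeta(1e5, 2, 10)              # predicted probabilities
y <- runif(1e5) < p_hat                 # calibrated ground truth
thr <- seq(0, 1, by = 0.01)
prec <- sapply(thr, function(t) sum(p_hat >= t & y) / sum(p_hat >= t))
rec  <- sapply(thr, function(t) sum(p_hat >= t & y) / sum(y))
target <- 0.5                           # assumed minimal precision constraint
t_star <- min(thr[!is.na(prec) & prec >= target])
c(threshold = t_star,
  precision = prec[thr == t_star],
  recall = rec[thr == t_star])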
There is no wrong threshold. The threshold you choose depends on your objective in your prediction, or rather on what you want to favor, for example precision versus recall (try to graph it and measure its associated AUC to compare different classification models of your choosing).
I am giving you this example of precision vs recall because of the problem case I am working on right now: I choose my threshold depending on the minimal precision (or PPV, Positive Predictive Value) I want my model to have when predicting, but I do not care much about negatives. As such, I take the threshold that corresponds to the wanted precision once I have trained my model. Precision is my constraint and recall is the performance of my model when I compare it to other classification models. | Reduce Classification Probability Threshold
There is no wrong threshold. The threshold you choose depends of your objective in your prediction, or rather what you want to favor, for example precision versus recall (try to graph it and measure i |
2,620 | XKCD's modified Bayes theorem: actually kinda reasonable? | Well by distributing the $P(H)$ term, we obtain
$$
P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)],
$$
which we can interpret as the Law of Total Probability applied to the event $C =$ "you are using Bayesian statistics correctly." So if you are using Bayesian statistics correctly, then you recover Bayes' law (the left fraction above) and if you aren't, then you ignore the data and just use your prior on $H$.
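As a quick illustration in R (the numbers are made up; only the mixing behaviour matters):
# Modified Bayes Theorem as a mixture of Bayes' rule and the prior
modified_bayes <- function(p_x_given_h, p_x, p_h, p_c) {
  bayes <- p_x_given_h * p_h / p_x        # ordinary posterior P(H|X)
  bayes * p_c + p_h * (1 - p_c)           # shrinks towards the prior as P(C) drops
}
modified_bayes(0.9, 0.3, 0.1, 1)    # P(C)=1: plain Bayes, 0.30
modified_bayes(0.9, 0.3, 0.1, 0.5)  # halfway between posterior and prior, 0.20
modified_bayes(0.9, 0.3, 0.1, 0)    # P(C)=0: the data is ignored, 0.10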
I suppose this is a rejoinder against the criticism that in principle Bayesians can adjust the prior to support whatever conclusion they want, whereas Bayesians would argue that this is not how Bayesian statistics actually works.
(And yes, you did successfully nerd-snipe me. I'm neither a mathematician nor a physicist though, so I'm not sure how many points I'm worth.) | XKCD's modified Bayes theorem: actually kinda reasonable? | Well by distributing the $P(H)$ term, we obtain
$$
P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)],
$$
which we can interpret as the Law of Total Probability applied to the event $C =$ "you ar | XKCD's modified Bayes theorem: actually kinda reasonable?
Well by distributing the $P(H)$ term, we obtain
$$
P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)],
$$
which we can interpret as the Law of Total Probability applied to the event $C =$ "you are using Bayesian statistics correctly." So if you are using Bayesian statistics correctly, then you recover Bayes' law (the left fraction above) and if you aren't, then you ignore the data and just use your prior on $H$.
I suppose this is a rejoinder against the criticism that in principle Bayesians can adjust the prior to support whatever conclusion they want, whereas Bayesians would argue that this is not how Bayesian statistics actually works.
(And yes, you did successfully nerd-snipe me. I'm neither a mathematician nor a physicist though, so I'm not sure how many points I'm worth.) | XKCD's modified Bayes theorem: actually kinda reasonable?
Well by distributing the $P(H)$ term, we obtain
$$
P(H|X) = \frac{P(X|H)P(H)}{P(X)} P(C) + P(H) [1 - P(C)],
$$
which we can interpret as the Law of Total Probability applied to the event $C =$ "you ar |
2,621 | XKCD's modified Bayes theorem: actually kinda reasonable? | Believe it or not, this type of model does pop up every now and then in very serious statistical models, especially when dealing with data fusion, i.e., trying to combine inference from multiple sensors trying to make inference on a single event.
If a sensor malfunctions, it can greatly bias the inference made when trying to combine the signals from multiple sources. You can make a model more robust to this issue by including a small probability that the sensor is just transmitting random values, independent of the actual event of interest. This has the result that if 90 sensors weakly indicate $A$ is true, but 1 sensor strongly indicates $B$ is true, we should still conclude that $A$ is true (i.e., the posterior probability that this one sensor misfired becomes very high when we realize it contradicts all the other sensors). If the failure distribution is independent of the parameter we want to make inference on, then if the posterior probability that it is a failure is high, the measurements from that sensor have very little effect on the posterior distribution for the parameter of interest; in fact, they are completely independent if the posterior probability of failure is 1.
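A toy R sketch of this robustification (a deliberately simplified model, not taken from any particular system): a handful of sensors measure a common mean, one is broken, and a small failure probability in the likelihood lets the posterior discount it:
# Gaussian readings of a common mean theta, with a small prior probability eps
# that a sensor just returns noise on a wide uniform range
set.seed(1)
x <- c(rnorm(10, mean = 0, sd = 1), 10)   # ten good readings plus one broken sensor
eps <- 0.01
theta_grid <- seq(-2, 11, by = 0.01)
loglik <- sapply(theta_grid, function(th)
  sum(log((1 - eps) * dnorm(x, th, 1) + eps * dunif(x, -20, 20))))
post <- exp(loglik - max(loglik))
post <- post / sum(post)
c(naive_mean = mean(x),                        # dragged towards the outlier
  robust_mode = theta_grid[which.max(post)])   # stays near the true mean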
Is this a general model that should considered when it comes to inference, i.e., should we replace Bayes theorem with Modified Bayes Theorem when doing Bayesian statistics? No. The reason is that "using Bayesian statistics correctly" isn't really just binary (or if it is, it's always false). Any analysis will have degrees of incorrect assumptions. In order for your conclusions to be completely independent from the data (which is implied by the formula), you need to make extremely grave errors. If "using Bayesian statistics incorrectly" at any level meant your analysis was completely independent of the truth, the use of statistics would be entirely worthless. All models are wrong but some are useful and all that. | XKCD's modified Bayes theorem: actually kinda reasonable? | Believe it or not, this type of model does pop up every now and then in very serious statistical models, especially when dealing with data fusion, i.e., trying to combine inference from multiple senso | XKCD's modified Bayes theorem: actually kinda reasonable?
Believe it or not, this type of model does pop up every now and then in very serious statistical models, especially when dealing with data fusion, i.e., trying to combine inference from multiple sensors trying to make inference on a single event.
If a sensor malfunctions, it can greatly bias the inference made when trying to combine the signals from multiple sources. You can make a model more robust to this issue by including a small probability that the sensor is just transmitting random values, independent of the actual event of interest. This has the result that if 90 sensors weakly indicate $A$ is true, but 1 sensor strongly indicates $B$ is true, we should still conclude that $A$ is true (i.e., the posterior probability that this one sensor misfired becomes very high when we realize it contradicts all the other sensors). If the failure distribution is independent of the parameter we want to make inference on, then if the posterior probability that it is a failure is high, the measurements from that sensor have very little effect on the posterior distribution for the parameter of interest; in fact, they are completely independent if the posterior probability of failure is 1.
Is this a general model that should be considered when it comes to inference, i.e., should we replace Bayes theorem with Modified Bayes Theorem when doing Bayesian statistics? No. The reason is that "using Bayesian statistics correctly" isn't really just binary (or if it is, it's always false). Any analysis will have degrees of incorrect assumptions. In order for your conclusions to be completely independent from the data (which is implied by the formula), you need to make extremely grave errors. If "using Bayesian statistics incorrectly" at any level meant your analysis was completely independent of the truth, the use of statistics would be entirely worthless. All models are wrong but some are useful and all that. | XKCD's modified Bayes theorem: actually kinda reasonable?
Believe it or not, this type of model does pop up every now and then in very serious statistical models, especially when dealing with data fusion, i.e., trying to combine inference from multiple senso |
2,622 | How to use Pearson correlation correctly with time series | Pearson correlation is used to look at correlation between series ... but being time series the correlation is looked at across different lags -- the cross-correlation function.
The cross-correlation is impacted by dependence within-series, so in many cases the within-series dependence should be removed first. So to use this correlation, rather than smoothing the series, it's actually more common (because it's meaningful) to look at dependence between residuals - the rough part that's left over after a suitable model is found for the variables.
You probably want to begin with some basic resources on time series models before delving into trying to figure out whether a Pearson correlation across (presumably) nonstationary, smoothed series is interpretable.
In particular, you'll probably want to look into the phenomenon here.
[Edit -- the Wikipedia landscape keeps changing; the above para. should probably be revised to reflect what's there now.]
e.g. see some discussions
http://www.math.ku.dk/~sjo/papers/LisbonPaper.pdf (the opening quote of Yule, in a paper presented in 1925 but published the following year, summarizes the problem quite well)
Christos Agiakloglou and Apostolos Tsimpanos, Spurious Correlations for Stationary AR(1) Processes http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.611.5055&rep=rep1&type=pdf (this shows that you can even get the problem between stationary series; hence the tendency to prewhiten)
The classic reference of Yule, (1926) [1] mentioned above.
You may also find the discussion here useful, as well as the discussion here
--
Using Pearson correlation in a meaningful way between time series is difficult and sometimes surprisingly subtle.
I looked up spurious correlation, but I don't care if my A series is the cause of my B series or vice versa. I only want to know if you can learn something about series A by looking at what series B is doing (or vice versa). In other words - do they have a correlation.
Take note of my previous comment about the narrow use of the term spurious correlation in the Wikipedia article.
The point about spurious correlation is that series can appear correlated, but the correlation itself is not meaningful. Consider two people tossing two distinct coins counting number of heads so far minus number of tails so far as the value of their series.
(So if person 1 tosses $\text{HTHH...}$ they have 3-1 = 2 for the value at the 4th time step, and their series goes $1, 0, 1, 2,...$.)
Obviously there's no connection whatever between the two series. Clearly neither can tell you the first thing about the other!
But look at the sort of correlations you get between pairs of coins:
If I didn't tell you what those were, and you took any pair of those series by themselves, those would be impressive correlations would they not?
But they're all meaningless. Utterly spurious. None of the three pairs are really any more positively or negatively related to each other than any of the others -- it's just cumulated noise. The spuriousness isn't just about prediction; the whole notion of considering association between series without taking account of the within-series dependence is misplaced.
All you have here is within-series dependence. There's no actual cross-series relation whatever.
Once you deal properly with the issue that makes these series auto-dependent - they're all integrated (Bernoulli random walks), so you need to difference them - the "apparent" association disappears (the largest absolute cross-series correlation of the three is 0.048).
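Here is a short R sketch of the whole experiment (my own seed and code, so the numbers will differ from those quoted above):
# Three independent coin-toss random walks: spurious correlation between the
# levels, essentially none between the differenced series
set.seed(42)
walks <- replicate(3, cumsum(sample(c(-1, 1), 1000, replace = TRUE)))
colnames(walks) <- c("coin1", "coin2", "coin3")
round(cor(walks), 3)        # typically far from 0, yet meaningless
round(cor(diff(walks)), 3)  # close to 0 once the within-series dependence is removed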
What that tells you is the truth -- the apparent association is a mere illusion caused by the dependence within-series.
Your question asked "how to use Pearson correlation correctly with time series" -- so please understand: if there's within-series dependence and you don't deal with it first, you won't be using it correctly.
Further, smoothing won't reduce the problem of serial dependence; quite the opposite -- it makes it even worse! Here are the correlations after smoothing (default loess smooth - of series vs index - performed in R):
coin1 coin2
coin2 0.9696378
coin3 -0.8829326 -0.7733559
They all got further from 0. They're all still nothing but meaningless noise, though now it's smoothed, cumulated noise. (By smoothing, we reduce the variability in the series we put into the correlation calculation, so that may be why the correlation goes up.)
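And a sketch of the smoothing step just described, reusing the walks object from the previous snippet:
# Loess-smooth each series against its time index, then correlate the smooths
idx <- seq_len(nrow(walks))
smoothed <- apply(walks, 2, function(s) fitted(loess(s ~ idx)))
round(cor(smoothed), 3)     # the spurious correlations typically move even further from 0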
[1]: Yule, G.U. (1926) "Why do we Sometimes get Nonsense-Correlations between Time-Series?" J.Roy.Stat.Soc., 89, 1, pp. 1-63 | How to use Pearson correlation correctly with time series | Pearson correlation is used to look at correlation between series ... but being time series the correlation is looked at across different lags -- the cross-correlation function.
The cross-correlation | How to use Pearson correlation correctly with time series
Pearson correlation is used to look at correlation between series ... but being time series the correlation is looked at across different lags -- the cross-correlation function.
The cross-correlation is impacted by dependence within-series, so in many cases the within-series dependence should be removed first. So to use this correlation, rather than smoothing the series, it's actually more common (because it's meaningful) to look at dependence between residuals - the rough part that's left over after a suitable model is found for the variables.
You probably want to begin with some basic resources on time series models before delving into trying to figure out whether a Pearson correlation across (presumably) nonstationary, smoothed series is interpretable.
In particular, you'll probably want to look into the phenomenon here.
[Edit -- the Wikipedia landscape keeps changing; the above para. should probably be revised to reflect what's there now.]
e.g. see some discussions
http://www.math.ku.dk/~sjo/papers/LisbonPaper.pdf (the opening quote of Yule, in a paper presented in 1925 but published the following year, summarizes the problem quite well)
Christos Agiakloglou and Apostolos Tsimpanos, Spurious Correlations for Stationary AR(1) Processes http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.611.5055&rep=rep1&type=pdf (this shows that you can even get the problem between stationary series; hence the tendency to prewhiten)
The classic reference of Yule, (1926) [1] mentioned above.
You may also find the discussion here useful, as well as the discussion here
--
Using Pearson correlation in a meaningful way between time series is difficult and sometimes surprisingly subtle.
I looked up spurious correlation, but I don't care if my A series is the cause of my B series or vice versa. I only want to know if you can learn something about series A by looking at what series B is doing (or vice versa). In other words - do they have a correlation.
Take note of my previous comment about the narrow use of the term spurious correlation in the Wikipedia article.
The point about spurious correlation is that series can appear correlated, but the correlation itself is not meaningful. Consider two people tossing two distinct coins counting number of heads so far minus number of tails so far as the value of their series.
(So if person 1 tosses $\text{HTHH...}$ they have 3-1 = 2 for the value at the 4th time step, and their series goes $1, 0, 1, 2,...$.)
Obviously there's no connection whatever between the two series. Clearly neither can tell you the first thing about the other!
But look at the sort of correlations you get between pairs of coins:
If I didn't tell you what those were, and you took any pair of those series by themselves, those would be impressive correlations would they not?
But they're all meaningless. Utterly spurious. None of the three pairs are really any more positively or negatively related to each other than any of the others -- it's just cumulated noise. The spuriousness isn't just about prediction; the whole notion of considering association between series without taking account of the within-series dependence is misplaced.
All you have here is within-series dependence. There's no actual cross-series relation whatever.
Once you deal properly with the issue that makes these series auto-dependent - they're all integrated (Bernoulli random walks), so you need to difference them - the "apparent" association disappears (the largest absolute cross-series correlation of the three is 0.048).
What that tells you is the truth -- the apparent association is a mere illusion caused by the dependence within-series.
Your question asked "how to use Pearson correlation correctly with time series" -- so please understand: if there's within-series dependence and you don't deal with it first, you won't be using it correctly.
Further, smoothing won't reduce the problem of serial dependence; quite the opposite -- it makes it even worse! Here are the correlations after smoothing (default loess smooth - of series vs index - performed in R):
coin1 coin2
coin2 0.9696378
coin3 -0.8829326 -0.7733559
They all got further from 0. They're all still nothing but meaningless noise, though now it's smoothed, cumulated noise. (By smoothing, we reduce the variability in the series we put into the correlation calculation, so that may be why the correlation goes up.)
[1]: Yule, G.U. (1926) "Why do we Sometimes get Nonsense-Correlations between Time-Series?" J.Roy.Stat.Soc., 89, 1, pp. 1-63 | How to use Pearson correlation correctly with time series
Pearson correlation is used to look at correlation between series ... but being time series the correlation is looked at across different lags -- the cross-correlation function.
The cross-correlation |
2,623 | How to use Pearson correlation correctly with time series | To complete the answer of Glen_b and his/her example on random walks, if you really want to use Pearson correlation on this kind of time series $(S_t)_{1 \leq t \leq T}$, you should first difference them, then work out the correlation coefficient on the increments ($X_t = S_t - S_{t-1}$), which are (in the case of random walks) independent and identically distributed. I suggest you use the Spearman correlation or the Kendall one, as they are more robust than the Pearson coefficient. Pearson measures linear dependence, whereas the Spearman and Kendall measures are invariant under monotonic transforms of your variables.
Also, imagine that two time series are strongly dependent, say they move up and down together, but one sometimes undergoes strong variations while the other always has mild variations; your Pearson correlation will then be rather low, unlike the Spearman and Kendall ones (which are better estimates of the dependence between your time series).
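A small R illustration of both points, on made-up increments whose dependence is strong but monotone rather than linear:
# Two integrated series whose increments are monotonically, but not linearly, related
set.seed(1)
x_inc <- rnorm(500)
y_inc <- x_inc^3                    # strong monotone dependence, very unequal variations
s1 <- cumsum(x_inc)
s2 <- cumsum(y_inc)
cor(diff(s1), diff(s2), method = "pearson")   # noticeably below 1
cor(diff(s1), diff(s2), method = "spearman")  # exactly 1: invariant to monotone transforms
cor(diff(s1), diff(s2), method = "kendall")   # exactly 1 as well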
For a thorough treatment on this and a better understand of dependency, you can look at Copula Theory, and for an application to time series. | How to use Pearson correlation correctly with time series | To complete the answer of Glen_b and his/her example on random walks, if you really want to use Pearson correlation on this kind of time series $(S_t)_{1 \leq t \leq T}$, you should first differentiat | How to use Pearson correlation correctly with time series
To complete the answer of Glen_b and his/her example on random walks, if you really want to use Pearson correlation on this kind of time series $(S_t)_{1 \leq t \leq T}$, you should first difference them, then work out the correlation coefficient on the increments ($X_t = S_t - S_{t-1}$), which are (in the case of random walks) independent and identically distributed. I suggest you use the Spearman correlation or the Kendall one, as they are more robust than the Pearson coefficient. Pearson measures linear dependence, whereas the Spearman and Kendall measures are invariant under monotonic transforms of your variables.
Also, imagine that two time series are strongly dependent, say they move up and down together, but one sometimes undergoes strong variations while the other always has mild variations; your Pearson correlation will then be rather low, unlike the Spearman and Kendall ones (which are better estimates of the dependence between your time series).
For a thorough treatment of this and a better understanding of dependency, you can look at Copula Theory, and at its applications to time series. | How to use Pearson correlation correctly with time series
To complete the answer of Glen_b and his/her example on random walks, if you really want to use Pearson correlation on this kind of time series $(S_t)_{1 \leq t \leq T}$, you should first differentiat |
2,624 | How to use Pearson correlation correctly with time series | Time series data is usually dependent on time. Pearson correlation, however, is appropriate for independent data. This problem is similar to the so called spurious regression. The coefficient is likely to be highly significant but this comes only from the time trend of the data that affects both series. I recommend to model the data and then try to see whether the modelling produces similar results for both series. Using Pearson correlation coefficient, however, will most likely give misleading results for the interpretation of the dependence structure. | How to use Pearson correlation correctly with time series | Time series data is usually dependent on time. Pearson correlation, however, is appropriate for independent data. This problem is similar to the so called spurious regression. The coefficient is likel | How to use Pearson correlation correctly with time series
Time series data is usually dependent on time. Pearson correlation, however, is appropriate for independent data. This problem is similar to the so called spurious regression. The coefficient is likely to be highly significant but this comes only from the time trend of the data that affects both series. I recommend to model the data and then try to see whether the modelling produces similar results for both series. Using Pearson correlation coefficient, however, will most likely give misleading results for the interpretation of the dependence structure. | How to use Pearson correlation correctly with time series
Time series data is usually dependent on time. Pearson correlation, however, is appropriate for independent data. This problem is similar to the so called spurious regression. The coefficient is likel |
2,625 | What is the difference between ZCA whitening and PCA whitening? | Let your (centered) data be stored in a $n\times d$ matrix $\mathbf X$ with $d$ features (variables) in columns and $n$ data points in rows. Let the covariance matrix $\mathbf C=\mathbf X^\top \mathbf X/n$ have eigenvectors in columns of $\mathbf E$ and eigenvalues on the diagonal of $\mathbf D$, so that $\mathbf C = \mathbf E \mathbf D \mathbf E^\top$.
Then what you call "normal" PCA whitening transformation is given by $\mathbf W_\mathrm{PCA} = \mathbf D^{-1/2} \mathbf E^\top$, see e.g. my answer in How to whiten the data using principal component analysis?
However, this whitening transformation is not unique. Indeed, whitened data will stay whitened after any rotation, which means that any $\mathbf W = \mathbf R \mathbf W_\mathrm{PCA}$ with orthogonal matrix $\mathbf R$ will also be a whitening transformation. In what is called ZCA whitening, we take $\mathbf E$ (stacked together eigenvectors of the covariance matrix) as this orthogonal matrix, i.e. $$\mathbf W_\mathrm{ZCA} = \mathbf E \mathbf D^{-1/2} \mathbf E^\top = \mathbf C^{-1/2}.$$
One defining property of ZCA transformation (sometimes also called "Mahalanobis transformation") is that it results in whitened data that is as close as possible to the original data (in the least squares sense). In other words, if you want to minimize $\|\mathbf X - \mathbf X \mathbf A^\top\|^2$ subject to $ \mathbf X \mathbf A^\top$ being whitened, then you should take $\mathbf A = \mathbf W_\mathrm{ZCA}$. Here is a 2D illustration:
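A small R sketch in the notation above (toy data; it just checks that both transformations whiten $\mathbf X$):
# Build W_PCA = D^(-1/2) E^T and W_ZCA = E D^(-1/2) E^T from the eigendecomposition
set.seed(1)
n <- 1000
X <- matrix(rnorm(2 * n), n, 2) %*% matrix(c(2, 1, 0, 1), 2, 2)  # correlated toy data
X <- scale(X, center = TRUE, scale = FALSE)
C <- crossprod(X) / n
e <- eigen(C)
W_pca <- diag(1 / sqrt(e$values)) %*% t(e$vectors)
W_zca <- e$vectors %*% W_pca
round(cov(X %*% t(W_pca)), 2)   # ~ identity
round(cov(X %*% t(W_zca)), 2)   # ~ identity as well, but closest to the original X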
Left subplot shows the data and its principal axes. Note the dark shading in the upper-right corner of the distribution: it marks its orientation. Rows of $\mathbf W_\mathrm{PCA}$ are shown on the second subplot: these are the vectors the data is projected on. After whitening (below) the distribution looks round, but notice that it also looks rotated --- dark corner is now on the East side, not on the North-East side. Rows of $\mathbf W_\mathrm{ZCA}$ are shown on the third subplot (note that they are not orthogonal!). After whitening (below) the distribution looks round and it's oriented in the same way as originally. Of course, one can get from PCA whitened data to ZCA whitened data by rotating with $\mathbf E$.
The term "ZCA" seems to have been introduced in Bell and Sejnowski 1996 in the context of independent component analysis, and stands for "zero-phase component analysis". See there for more details. Most probably, you came across this term in the context of image processing. It turns out, that when applied to a bunch of natural images (pixels as features, each image as a data point), principal axes look like Fourier components of increasing frequencies, see first column of their Figure 1 below. So they are very "global". On the other hand, rows of ZCA transformation look very "local", see the second column. This is precisely because ZCA tries to transform the data as little as possible, and so each row should better be close to one the original basis functions (which would be images with only one active pixel). And this is possible to achieve, because correlations in natural images are mostly very local (so de-correlation filters can also be local).
Update
More examples of ZCA filters and of images transformed with ZCA are given in Krizhevsky, 2009, Learning Multiple Layers of Features from Tiny Images, see also examples in @bayerj's answer (+1).
I think these examples give an idea as to when ZCA whitening might be preferable to the PCA one. Namely, ZCA-whitened images still resemble normal images, whereas PCA-whitened ones look nothing like normal images. This is probably important for algorithms like convolutional neural networks (as e.g. used in Krizhevsky's paper), which treat neighbouring pixels together and so greatly rely on the local properties of natural images. For most other machine learning algorithms it should be absolutely irrelevant whether the data is whitened with PCA or ZCA. | What is the difference between ZCA whitening and PCA whitening? | Let your (centered) data be stored in a $n\times d$ matrix $\mathbf X$ with $d$ features (variables) in columns and $n$ data points in rows. Let the covariance matrix $\mathbf C=\mathbf X^\top \mathbf | What is the difference between ZCA whitening and PCA whitening?
Let your (centered) data be stored in a $n\times d$ matrix $\mathbf X$ with $d$ features (variables) in columns and $n$ data points in rows. Let the covariance matrix $\mathbf C=\mathbf X^\top \mathbf X/n$ have eigenvectors in columns of $\mathbf E$ and eigenvalues on the diagonal of $\mathbf D$, so that $\mathbf C = \mathbf E \mathbf D \mathbf E^\top$.
Then what you call "normal" PCA whitening transformation is given by $\mathbf W_\mathrm{PCA} = \mathbf D^{-1/2} \mathbf E^\top$, see e.g. my answer in How to whiten the data using principal component analysis?
However, this whitening transformation is not unique. Indeed, whitened data will stay whitened after any rotation, which means that any $\mathbf W = \mathbf R \mathbf W_\mathrm{PCA}$ with orthogonal matrix $\mathbf R$ will also be a whitening transformation. In what is called ZCA whitening, we take $\mathbf E$ (stacked together eigenvectors of the covariance matrix) as this orthogonal matrix, i.e. $$\mathbf W_\mathrm{ZCA} = \mathbf E \mathbf D^{-1/2} \mathbf E^\top = \mathbf C^{-1/2}.$$
One defining property of ZCA transformation (sometimes also called "Mahalanobis transformation") is that it results in whitened data that is as close as possible to the original data (in the least squares sense). In other words, if you want to minimize $\|\mathbf X - \mathbf X \mathbf A^\top\|^2$ subject to $ \mathbf X \mathbf A^\top$ being whitened, then you should take $\mathbf A = \mathbf W_\mathrm{ZCA}$. Here is a 2D illustration:
Left subplot shows the data and its principal axes. Note the dark shading in the upper-right corner of the distribution: it marks its orientation. Rows of $\mathbf W_\mathrm{PCA}$ are shown on the second subplot: these are the vectors the data is projected on. After whitening (below) the distribution looks round, but notice that it also looks rotated --- dark corner is now on the East side, not on the North-East side. Rows of $\mathbf W_\mathrm{ZCA}$ are shown on the third subplot (note that they are not orthogonal!). After whitening (below) the distribution looks round and it's oriented in the same way as originally. Of course, one can get from PCA whitened data to ZCA whitened data by rotating with $\mathbf E$.
The term "ZCA" seems to have been introduced in Bell and Sejnowski 1996 in the context of independent component analysis, and stands for "zero-phase component analysis". See there for more details. Most probably, you came across this term in the context of image processing. It turns out, that when applied to a bunch of natural images (pixels as features, each image as a data point), principal axes look like Fourier components of increasing frequencies, see first column of their Figure 1 below. So they are very "global". On the other hand, rows of ZCA transformation look very "local", see the second column. This is precisely because ZCA tries to transform the data as little as possible, and so each row should better be close to one the original basis functions (which would be images with only one active pixel). And this is possible to achieve, because correlations in natural images are mostly very local (so de-correlation filters can also be local).
Update
More examples of ZCA filters and of images transformed with ZCA are given in Krizhevsky, 2009, Learning Multiple Layers of Features from Tiny Images, see also examples in @bayerj's answer (+1).
I think these examples give an idea as to when ZCA whitening might be preferable to the PCA one. Namely, ZCA-whitened images still resemble normal images, whereas PCA-whitened ones look nothing like normal images. This is probably important for algorithms like convolutional neural networks (as e.g. used in Krizhevsky's paper), which treat neighbouring pixels together and so greatly rely on the local properties of natural images. For most other machine learning algorithms it should be absolutely irrelevant whether the data is whitened with PCA or ZCA. | What is the difference between ZCA whitening and PCA whitening?
Let your (centered) data be stored in a $n\times d$ matrix $\mathbf X$ with $d$ features (variables) in columns and $n$ data points in rows. Let the covariance matrix $\mathbf C=\mathbf X^\top \mathbf |
2,626 | What is the difference between ZCA whitening and PCA whitening? | Given an Eigendecomposition of a covariance matrix
$$
\bar{X}\bar{X}^T = LDL^T
$$
where $D = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ is the diagonal matrix of Eigenvalues, ordinary whitening resorts to transforming the data into a space where the covariance matrix is diagonal:
$$\sqrt{D^{-1}}L^{-1}\bar{X}\bar{X}^TL^{-T}\sqrt{D^{-1}} = \sqrt{D^{-1}}L^{-1}LDL^TL^{-T}\sqrt{D^{-1}} \\
= \mathbf{I}
$$
(with some abuse of notation.) That means we can diagonalize the covariance by transforming the data according to
$$
\tilde{X} = \sqrt{D^{-1}}L^{-1}X.
$$
This is ordinary whitening with PCA. Now, ZCA does something different--it adds a small epsilon to the Eigenvalues and transforms the data back.
$$
\tilde{X} = L\sqrt{(D + \epsilon)^{-1}}L^{-1}X.
$$
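In R, this regularised map can be sketched as follows (here $L$ is the orthogonal eigenvector matrix, so $L^{-1} = L^\top$ and the resulting transform is symmetric):
# ZCA whitening with an epsilon added to the eigenvalues, as in the formula above
zca_eps <- function(X, eps = 0.1) {
  Xc <- scale(X, center = TRUE, scale = FALSE)
  ed <- eigen(cov(Xc))
  W  <- ed$vectors %*% diag(1 / sqrt(ed$values + eps)) %*% t(ed$vectors)
  Xc %*% W        # W is symmetric, so this equals Xc %*% t(W)
}
dim(zca_eps(matrix(rnorm(200), 100, 2), eps = 0.0001))   # 100 x 2, decorrelated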
Here are some pictures from the CIFAR data set before and after ZCA.
Before ZCA:
After ZCA with $\epsilon = 0.0001$
After ZCA with $\epsilon = 0.1$
For vision data, high frequency data will typically reside in the space spanned by the lower Eigenvalues. Hence ZCA is a way to strengthen these, leading to more visible edges etc. | What is the difference between ZCA whitening and PCA whitening? | Given an Eigendecomposition of a covariance matrix
$$
\bar{X}\bar{X}^T = LDL^T
$$
where $D = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ is the diagonal matrix of Eigenvalues, ordinary whiten | What is the difference between ZCA whitening and PCA whitening?
Given an Eigendecomposition of a covariance matrix
$$
\bar{X}\bar{X}^T = LDL^T
$$
where $D = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ is the diagonal matrix of Eigenvalues, ordinary whitening resorts to transforming the data into a space where the covariance matrix is diagonal:
$$\sqrt{D^{-1}}L^{-1}\bar{X}\bar{X}^TL^{-T}\sqrt{D^{-1}} = \sqrt{D^{-1}}L^{-1}LDL^TL^{-T}\sqrt{D^{-1}} \\
= \mathbf{I}
$$
(with some abuse of notation.) That means we can diagonalize the covariance by transforming the data according to
$$
\tilde{X} = \sqrt{D^{-1}}L^{-1}X.
$$
This is ordinary whitening with PCA. Now, ZCA does something different--it adds a small epsilon to the Eigenvalues and transforms the data back.
$$
\tilde{X} = L\sqrt{(D + \epsilon)^{-1}}L^{-1}X.
$$
Here are some pictures from the CIFAR data set before and after ZCA.
Before ZCA:
After ZCA with $\epsilon = 0.0001$
After ZCA with $\epsilon = 0.1$
For vision data, high frequency data will typically reside in the space spanned by the lower Eigenvalues. Hence ZCA is a way to strengthen these, leading to more visible edges etc. | What is the difference between ZCA whitening and PCA whitening?
Given an Eigendecomposition of a covariance matrix
$$
\bar{X}\bar{X}^T = LDL^T
$$
where $D = \text{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$ is the diagonal matrix of Eigenvalues, ordinary whiten |
2,627 | What is the difference between ZCA whitening and PCA whitening? | I'll add the following plot to illustrate visually the difference between PCA-whitening and ZCA-whitening: the only thing you need to understand is what a rotation matrix is (aka orthonormal matrix), and that PCA-whitening and ZCA-whitening are just one rotation apart.
First plot is the raw data, along 2 arrows on X and Y axis, and a circle to better understand the transformations.
Second plot is PCA-whitened data, and the same transformation is applied to the arrows and circle points
Third plot is ZCA-whitened data
You can then notice that:
Covariance matrix is identity matrix in both PCA and ZCA cases
PCA-whitened and ZCA-whitened data are indeed a rotation apart (rotate the second plot about 60° counterclockwise and you get the third plot)
ZCA-whitened data has an orientation similar to that of the original data (that is, red is basically on the lower left side, and blue on the upper right side). We say that ZCA-whitened data "resembles" the original data the most. On the other hand, PCA could have had any orientation.
Also, I found Optimal whitening and decorrelation, Agnan Kessy, Alex Lewin, Korbinian Strimmer (arXiv:1512.00809) to be a great resource on whitening.
Disclaimer : plot image above taken from PCA-whitening vs ZCA-whitening : a numpy 2d visual | What is the difference between ZCA whitening and PCA whitening? | I'll add the following plot to illustrate visually the difference between PCA-whitening and ZCA-whitening : the only thing you need to understand is what a rotation matrix is (aka orthonormal matrix), | What is the difference between ZCA whitening and PCA whitening?
I'll add the following plot to illustrate visually the difference between PCA-whitening and ZCA-whitening: the only thing you need to understand is what a rotation matrix is (aka orthonormal matrix), and that PCA-whitening and ZCA-whitening are just one rotation apart.
First plot is the raw data, along 2 arrows on X and Y axis, and a circle to better understand the transformations.
Second plot is PCA-whitened data, and the same transformation is applied to the arrows and circle points
Third plot is ZCA-whitened data
You can then notice that:
Covariance matrix is identity matrix in both PCA and ZCA cases
PCA-whitened and ZCA-whitened data are indeed a rotation apart (rotate the second plot about 60° counterclockwise and you get the third plot)
ZCA-whitened data has an orientation similar to that of the original data (that is, red is basically on the lower left side, and blue on the upper right side). We say that ZCA-whitened data "resembles" the original data the most. On the other hand, PCA could have had any orientation.
Also, I found Optimal whitening and decorrelation, Agnan Kessy, Alex Lewin, Korbinian Strimmer (arXiv:1512.00809) to be a great resource on whitening.
Disclaimer : plot image above taken from PCA-whitening vs ZCA-whitening : a numpy 2d visual | What is the difference between ZCA whitening and PCA whitening?
I'll add the following plot to illustrate visually the difference between PCA-whitening and ZCA-whitening : the only thing you need to understand is what a rotation matrix is (aka orthonormal matrix), |
2,628 | What is regularization in plain english? | In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too complex and overfit or too simple and underfit, either way giving poor predictions.
If you least-squares fit a complex model to a small set of training data, you will probably overfit; this is the most common situation. The optimal complexity of the model depends on the sort of process you are modeling and the quality of the data, so there is no a priori correct complexity of a model.
To regularize you need 2 things:
A way of testing how good your models are at prediction, for example using cross-validation or a set of validation data (you can't use the fitting error for this).
A tuning parameter which lets you change the complexity or smoothness of the model, or a selection of models of differing complexity/smoothness.
Basically you adjust the complexity parameter (or change the model) and find the value which gives the best model predictions.
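A bare-bones R sketch of this loop, using the penalty weight of ridge regression as the complexity parameter (all numbers are made up):
# Ridge regression, with the penalty weight chosen on a held-out validation set
set.seed(1)
n <- 100; p <- 30
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))
y <- X %*% beta + rnorm(n, sd = 3)
train <- 1:70; valid <- 71:100
ridge_fit <- function(X, y, lambda)
  solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y))
lambdas <- 10^seq(-2, 3, length.out = 30)
val_err <- sapply(lambdas, function(l) {
  b <- ridge_fit(X[train, ], y[train], l)
  mean((y[valid] - X[valid, ] %*% b)^2)
})
lambdas[which.min(val_err)]   # the complexity level with the best held-out predictions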
Note that the optimized regularization error will not be an accurate estimate of the overall prediction error so after regularization you will finally have to use an additional validation dataset or perform some additional statistical analysis to get an unbiased prediction error.
An alternative to using (cross-)validation testing is to use Bayesian Priors or other methods to penalize complexity or non-smoothness, but these require more statistical sophistication and knowledge of the problem and model features. | What is regularization in plain english? | In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too compl | What is regularization in plain english?
In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too complex and overfit or too simple and underfit, either way giving poor predictions.
If you least-squares fit a complex model to a small set of training data, you will probably overfit; this is the most common situation. The optimal complexity of the model depends on the sort of process you are modeling and the quality of the data, so there is no a priori correct complexity of a model.
To regularize you need 2 things:
A way of testing how good your models are at prediction, for example using cross-validation or a set of validation data (you can't use the fitting error for this).
A tuning parameter which lets you change the complexity or smoothness of the model, or a selection of models of differing complexity/smoothness.
Basically you adjust the complexity parameter (or change the model) and find the value which gives the best model predictions.
Note that the optimized regularization error will not be an accurate estimate of the overall prediction error so after regularization you will finally have to use an additional validation dataset or perform some additional statistical analysis to get an unbiased prediction error.
An alternative to using (cross-)validation testing is to use Bayesian Priors or other methods to penalize complexity or non-smoothness, but these require more statistical sophistication and knowledge of the problem and model features. | What is regularization in plain english?
In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too compl |
2,629 | What is regularization in plain english? | Suppose you perform learning via empirical risk minimization.
More precisely:
you have got your non-negative loss function $L(\text{actual value},\text{ predicted value})$ which characterizes how bad your predictions are
you want to fit your model in such a way that its predictions minimize the mean of the loss function, calculated only on the training data (the only data you have)
Then the aim of the learning process is to find $\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model}))$ (this method is called empirical risk minimization).
But if you haven't got enough data and there is a huge number of variables in your model, it is very probable that you will find a model that explains not only the patterns but also the random noise in your data. This effect is called overfitting, and it leads to degradation of the generalization ability of your model.
In order to avoid overfitting a regularization term is introduced into the target function:
$\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model})) + \lambda R(\text{Model})$
Usually, this term $R(\text{Model})$ imposes a special penalty on complex models. For instance, on models with large coefficients (L2 regularization, $R$ = sum of squares of coefficients) or with a lot of non-zero coefficients (L1 regularization, $R$ = sum of absolute values of coefficients). If we are training a decision tree, $R$ can be its depth.
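For instance, a tiny R sketch of this penalized target function for a linear model (purely illustrative):
# The penalised target: fitting loss plus lambda times an L1 or L2 penalty
penalized_loss <- function(beta, X, y, lambda, type = c("L2", "L1")) {
  type <- match.arg(type)
  fit_loss <- mean((y - X %*% beta)^2)                 # the L(actual, predicted) part
  penalty  <- if (type == "L2") sum(beta^2) else sum(abs(beta))
  fit_loss + lambda * penalty                          # the model minimizes this sum
}
set.seed(1)
X <- matrix(rnorm(50 * 2), 50, 2)
y <- X %*% c(1, -2) + rnorm(50)
penalized_loss(c(1, -2), X, y, lambda = 0.1, type = "L1")
penalized_loss(c(0, 0),  X, y, lambda = 0.1, type = "L2")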
Another point of view is that $R$ introduces our prior knowledge about a form of the best model ("it doesn't have too large coefficients", "it is almost orthogonal to $\bar a$") | What is regularization in plain english? | Suppose you perform learning via empirical risk minimization.
More precisely:
you have got your non-negative loss function $L(\text{actual value},\text{ predicted value})$ which characterize how ba | What is regularization in plain english?
Suppose you perform learning via empirical risk minimization.
More precisely:
you have got your non-negative loss function $L(\text{actual value},\text{ predicted value})$ which characterizes how bad your predictions are
you want to fit your model in such a way that its predictions minimize the mean of the loss function, calculated only on the training data (the only data you have)
Then the aim of the learning process is to find $\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model}))$ (this method is called empirical risk minimization).
But if you haven't got enough data and there is a huge number of variables in your model, it is very probable that you will find a model that explains not only the patterns but also the random noise in your data. This effect is called overfitting, and it leads to degradation of the generalization ability of your model.
In order to avoid overfitting a regularization term is introduced into the target function:
$\text{Model} = \text{argmin} \sum L(\text{actual}, \text{predicted}(\text{Model})) + \lambda R(\text{Model})$
Usually, this term $R(\text{Model})$ imposes a special penalty on complex models. For instance, on models with large coefficients (L2 regularization, $R$=sum of squares of coefficients) or with a lot if non-zero coefficients (L1 regularization, $R$=sum of absolute values of coefficients). If we are training decision tree, $R$ can be its depth.
Another point of view is that $R$ introduces our prior knowledge about a form of the best model ("it doesn't have too large coefficients", "it is almost orthogonal to $\bar a$") | What is regularization in plain english?
Suppose you perform learning via empirical risk minimization.
More precisely:
you have got your non-negative loss function $L(\text{actual value},\text{ predicted value})$ which characterize how ba |
2,630 | What is regularization in plain english? | Put in simple terms, regularization is about favoring the solutions you'd expect to get. As you mention, you can for example favor "simple" solutions, for some definition of simplicity. If your problem has rules, one definition can be fewer rules. But this is problem-dependent.
You're asking the right question, however. For example in Support Vector Machines this "simplicity" comes from breaking ties in the direction of "maximum margin". This margin is something that can be clearly defined in terms of the problem. There is a very good geometric derivation in the SVM article in Wikipedia. It turns out that the regularization term is, arguably at least, the "secret sauce" of SVMs.
How do you do regularization? In general that comes with the method you use: if you use SVMs you're doing L2 regularization; if you're using LASSO you're doing L1 regularization (see what hairybeast is saying). However, if you're developing your own method, you need to know how to tell desirable solutions from non-desirable ones, and have a function that quantifies this. In the end you'll have a cost term and a regularization term, and you want to optimize the sum of both.
2,631 | What is regularization in plain english? | Regularization techniques are applied to machine learning models to make the decision boundary / fitted model smoother. These techniques help to prevent overfitting.
Examples: L1, L2, Dropout, Weight Decay in Neural Networks. Parameter $C$ in SVMs.
2,632 | What is regularization in plain english? | In simple terms, regularization is a technique to avoid over-fitting when training machine learning algorithms.
If you have an algorithm with enough free parameters you can interpolate your sample in great detail, but examples coming from outside the sample might not follow this detailed interpolation, as it just captures noise or random irregularities in the sample instead of the true trend.
Over-fitting is avoided by limiting the absolute value of the parameters in the model. This can be done by adding a term to the cost function that imposes a penalty based on the magnitude of the model parameters.
If the magnitude is measured in L1 norm this is called "L1 regularization"
(and usually results in sparse models), if it is measured in L2 norm this is called "L2 regularization", and so on.
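To make the sparsity contrast concrete, here is a small sketch using the glmnet package (this assumes glmnet is installed; the simulated data and the value of lambda are arbitrary illustrations, not part of the original answer): the L1 penalty sets many coefficients exactly to zero, while the L2 penalty merely shrinks them.
library(glmnet)
set.seed(2)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- drop(x[, 1:3] %*% c(2, -1, 1)) + rnorm(100)   # only 3 of the 20 predictors actually matter
fit_l1 <- glmnet(x, y, alpha = 1)                  # alpha = 1: L1 (lasso) penalty
fit_l2 <- glmnet(x, y, alpha = 0)                  # alpha = 0: L2 (ridge) penalty
sum(coef(fit_l1, s = 0.1) != 0)                    # few non-zero coefficients: a sparse model
sum(coef(fit_l2, s = 0.1) != 0)                    # all coefficients non-zero, just shrunk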
2,633 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | Problem statement
The geometric problem that PCA is trying to optimize is clear to me: PCA tries to find the first principal component by minimizing the reconstruction (projection) error, which simultaneously maximizes the variance of the projected data.
That's right. I explain the connection between these two formulations in my answer here (without math) or here (with math).
Let's take the second formulation: PCA is trying to find the direction such that the projection of the data on it has the highest possible variance. This direction is, by definition, called the first principal direction. We can formalize it as follows: given the covariance matrix $\mathbf C$, we are looking for a vector $\mathbf w$ having unit length, $\|\mathbf w\|=1$, such that $\mathbf w^\top \mathbf{Cw}$ is maximal.
(Just in case this is not clear: if $\mathbf X$ is the centered data matrix, then the projection is given by $\mathbf{Xw}$ and its variance is $\frac{1}{n-1}(\mathbf{Xw})^\top \cdot \mathbf{Xw} = \mathbf w^\top\cdot (\frac{1}{n-1}\mathbf X^\top\mathbf X)\cdot \mathbf w = \mathbf w^\top \mathbf{Cw}$.)
On the other hand, an eigenvector of $\mathbf C$ is, by definition, any vector $\mathbf v$ such that $\mathbf{Cv}=\lambda \mathbf v$.
It turns out that the first principal direction is given by the eigenvector with the largest eigenvalue. This is a nontrivial and surprising statement.
Proofs
If one opens any book or tutorial on PCA, one can find there the following almost one-line proof of the statement above. We want to maximize $\mathbf w^\top \mathbf{Cw}$ under the constraint that $\|\mathbf w\|=\mathbf w^\top \mathbf w=1$; this can be done introducing a Lagrange multiplier and maximizing $\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1)$; differentiating, we obtain $\mathbf{Cw}-\lambda\mathbf w=0$, which is the eigenvector equation. We see that $\lambda$ has in fact to be the largest eigenvalue by substituting this solution into the objective function, which gives $\mathbf w^\top \mathbf{Cw}-\lambda(\mathbf w^\top \mathbf w-1) = \mathbf w^\top \mathbf{Cw} = \lambda\mathbf w^\top \mathbf{w} = \lambda$. By virtue of the fact that this objective function should be maximized, $\lambda$ must be the largest eigenvalue, QED.
This tends to be not very intuitive for most people.
A better proof (see e.g. this neat answer by @cardinal) says that because $\mathbf C$ is a symmetric matrix, it is diagonal in its eigenvector basis. (This is actually called the spectral theorem.) So we can choose an orthogonal basis, namely the one given by the eigenvectors, where $\mathbf C$ is diagonal and has eigenvalues $\lambda_i$ on the diagonal. In that basis, $\mathbf w^\top \mathbf{C w}$ simplifies to $\sum \lambda_i w_i^2$, or in other words the variance is given by the weighted sum of the eigenvalues. It is almost immediate that to maximize this expression one should simply take $\mathbf w = (1,0,0,\ldots, 0)$, i.e. the first eigenvector, yielding variance $\lambda_1$ (indeed, deviating from this solution and "trading" parts of the largest eigenvalue for the parts of smaller ones will only lead to smaller overall variance). Note that the value of $\mathbf w^\top \mathbf{C w}$ does not depend on the basis! Changing to the eigenvector basis amounts to a rotation, so in 2D one can imagine simply rotating a piece of paper with the scatterplot; obviously this cannot change any variances.
I think this is a very intuitive and a very useful argument, but it relies on the spectral theorem. So the real issue here I think is: what is the intuition behind the spectral theorem?
Spectral theorem
Take a symmetric matrix $\mathbf C$. Take its eigenvector $\mathbf w_1$ with the largest eigenvalue $\lambda_1$. Make this eigenvector the first basis vector and choose other basis vectors randomly (such that all of them are orthonormal). How will $\mathbf C$ look in this basis?
It will have $\lambda_1$ in the top-left corner, because $\mathbf w_1=(1,0,0\ldots 0)$ in this basis and $\mathbf {Cw}_1=(C_{11}, C_{21}, \ldots C_{p1})$ has to be equal to $\lambda_1\mathbf w_1 = (\lambda_1,0,0 \ldots 0)$.
By the same argument it will have zeros in the first column under the $\lambda_1$.
But because it is symmetric, it will have zeros in the first row after $\lambda_1$ as well. So it will look like that:
$$\mathbf C=\begin{pmatrix}\lambda_1 & 0 & \ldots & 0 \\ 0 & & & \\ \vdots & & & \\ 0 & & & \end{pmatrix},$$
where empty space means that there is a block of some elements there. Because the matrix is symmetric, this block will be symmetric too. So we can apply exactly the same argument to it, effectively using the second eigenvector as the second basis vector, and getting $\lambda_1$ and $\lambda_2$ on the diagonal. This can continue until $\mathbf C$ is diagonal. That is essentially the spectral theorem. (Note how it works only because $\mathbf C$ is symmetric.)
Here is a more abstract reformulation of exactly the same argument.
We know that $\mathbf{Cw}_1 = \lambda_1 \mathbf w_1$, so the first eigenvector defines a 1-dimensional subspace where $\mathbf C$ acts as a scalar multiplication. Let us now take any vector $\mathbf v$ orthogonal to $\mathbf w_1$. Then it is almost immediate that $\mathbf {Cv}$ is also orthogonal to $\mathbf w_1$. Indeed:
$$ \mathbf w_1^\top \mathbf{Cv} = (\mathbf w_1^\top \mathbf{Cv})^\top = \mathbf v^\top \mathbf C^\top \mathbf w_1 = \mathbf v^\top \mathbf {Cw}_1=\lambda_1 \mathbf v^\top \mathbf w_1 = \lambda_1\cdot 0 = 0.$$
This means that $\mathbf C$ maps the whole remaining subspace orthogonal to $\mathbf w_1$ into itself, i.e. nothing leaks back into the $\mathbf w_1$ direction. This is the crucial property of symmetric matrices. So we can find the largest eigenvector there, $\mathbf w_2$, and proceed in the same manner, eventually constructing an orthonormal basis of eigenvectors.
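A quick numerical sanity check of the main claim, in base R (the simulated data are purely illustrative): the variance of the projection onto the top eigenvector of the covariance matrix equals the largest eigenvalue, and no random unit direction does better.
set.seed(42)
X  <- scale(matrix(rnorm(200 * 2), 200, 2) %*% matrix(c(3, 1, 1, 1), 2), scale = FALSE)
C  <- cov(X)                           # covariance matrix of the centered data
ev <- eigen(C)
w1 <- ev$vectors[, 1]                  # eigenvector with the largest eigenvalue
var(drop(X %*% w1))                    # variance along the first principal direction...
ev$values[1]                           # ...equals the largest eigenvalue
rand_var <- replicate(1000, {          # many random unit-length directions
  w <- rnorm(2); w <- w / sqrt(sum(w^2))
  var(drop(X %*% w))
})
max(rand_var) <= ev$values[1] + 1e-8   # TRUE: none of them beats the eigenvector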
2,634 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | This is my take on the linear algebra behind PCA. In linear algebra, one of the key theorems is the Spectral Theorem. It states that if S is any symmetric n by n matrix with real coefficients, then S has n eigenvectors with all the eigenvalues being real. That means we can write $S = ADA^{-1} $ with D a diagonal matrix with real entries (non-negative when S is a covariance matrix). That is $ D = \mbox{diag} (\lambda_1, \lambda_2, \ldots, \lambda_n)$ and there is no harm in assuming $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$. A is the change of basis matrix. That is, if our original basis was $x_1,x_2, \ldots, x_n$, then with respect to the basis given by $A(x_1), A(x_2), \ldots A(x_n)$, the action of S is diagonal. This also means that the $A(x_i)$ can be considered as an orthogonal basis with $||A(x_i)|| = \lambda_i$. If our covariance matrix was for n observations of n variables, we would be done. The basis provided by the $A(x_i)$ is the PCA basis. This follows from the linear algebra facts. In essence it is true because a PCA basis is a basis of eigenvectors and there are at most n eigenvectors of a square matrix of size n.
Of course most data matrices are not square. If X is a data matrix with n observations of p variables, then X is of size n by p. I will assume that $ n>p$ (more observations than variables) and that $rk(X) = p $ (all the variables are linearly independent). Neither assumption is necessary, but it will help with the intuition. Linear algebra has a generalization of the Spectral theorem called the singular value decomposition. For such an X it states that $ X = U \Sigma V^{t} $ with U, V orthonormal (square) matrices of size n and p and $\Sigma = (s_{ij}) $ a real diagonal matrix with only non-negative entries on the diagonal. Again we may rearrange the basis of V so that $s_{11} \geq s_{22} \geq \ldots \geq s_{pp} > 0$. In matrix terms, this means that $ X(v_i) = s_{ii} u_i$ if $ i \leq p$ and $ s_{ii} = 0 $ if $ i> n$. The $ v_i$ give the PCA decomposition. More precisely $ \Sigma V^{t} $ is the PCA decomposition. Why? Again, linear algebra says that there can only be p eigenvectors. The SVD gives new variables (given by the columns of V) that are orthogonal and have decreasing norm.
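A small illustration in base R of the correspondence described above (the simulated data and variable names are my own): the SVD of the centered data matrix reproduces what prcomp() returns, with the columns of V as the principal directions.
set.seed(3)
X <- scale(matrix(rnorm(50 * 4), 50, 4), scale = FALSE)   # n = 50 observations, p = 4 variables, centered
s   <- svd(X)
pca <- prcomp(X, center = FALSE)                          # X is already centered above
max(abs(abs(s$v) - abs(pca$rotation)))                    # ~0: same directions (up to sign)
max(abs(abs(X %*% s$v) - abs(pca$x)))                     # ~0: same scores (up to sign)
rbind(svd = s$d^2 / (nrow(X) - 1), prcomp = pca$sdev^2)   # squared singular values / (n-1) = the PCA variances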
2,635 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | There is a result from 1936 by Eckart and Young (https://ccrma.stanford.edu/~dattorro/eckart%26young.1936.pdf), which states the following
$\sum_{k=1}^r d_k u_k v_k^\top = \arg\min_{\hat{X} \in M(r)} \|X-\hat{X}\|_F^2$
where $M(r)$ is the set of rank-$r$ matrices. This basically means that the first $r$ components of the SVD of $X$ give the best rank-$r$ approximation of $X$, where "best" is defined in terms of the squared Frobenius norm, i.e. the sum of the squared elements of a matrix.
This is a general result for matrices and at first sight has nothing to do with data sets or dimensionality reduction.
However if you don't think of $X$ as a matrix but rather think of the columns of the matrix $X$ representing vectors of data points then $\hat{X}$ is the approximation with the minimum representation error in terms of squared error differences.
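A numerical illustration in base R (random data; everything here is an assumption for demonstration only): the rank-r SVD truncation has a smaller squared Frobenius error than another candidate rank-r approximation, and that error equals the sum of the discarded squared singular values.
set.seed(11)
X <- matrix(rnorm(20 * 10), 20, 10)
r <- 3
s <- svd(X)
X_r <- s$u[, 1:r] %*% diag(s$d[1:r]) %*% t(s$v[, 1:r])   # rank-r truncation of the SVD
err_svd <- sum((X - X_r)^2)                              # squared Frobenius norm of the residual
Q <- qr.Q(qr(matrix(rnorm(10 * r), 10, r)))              # a random orthonormal p x r basis
X_rand <- X %*% Q %*% t(Q)                               # another rank-r approximation (random projection)
err_rand <- sum((X - X_rand)^2)
c(svd = err_svd, random = err_rand)                      # the SVD error is the smaller one
all.equal(err_svd, sum(s$d[-(1:r)]^2))                   # TRUE: equals the sum of the discarded squared singular values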
2,636 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | " which simultaneously maximizes the variance of the projected data." Have you heard of the Rayleigh quotient? Maybe that's one way of seeing this. Namely, the Rayleigh quotient of the covariance matrix gives you the variance of the projected data (and the wiki page explains why eigenvectors maximise the Rayleigh quotient).
" which simultaneously maximizes the variance of the projected data." Have you hear of Rayleigh quotient? Maybe that's one way of seeing this. Namely the rayleigh quotient of the covariance matrix gives you the variance of the projected data. (and the wiki page explains why eigenvectors maximise the Rayleigh quotient) | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a li
" which simultaneously maximizes the variance of the projected data." Have you hear of Rayleigh quotient? Maybe that's one way of seeing this. Namely the rayleigh quotient of the covariance matrix g |
2,637 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | Lagrange multipliers are fine but you don't actually need that to get a decent intuitive picture of why eigenvectors maximize the variance (the projected lengths).
So we want to find the unit length $w$ such that $\|Aw\|$ is maximal, where $A$ is the centered data matrix and $\frac{A^TA}{n} = C$ is our covariance matrix.
Since squaring is monotonically increasing over non-negative real numbers, maximizing $\|Aw\|$ is equivalent to maximizing $\|Aw\|^2 = (Aw)^TAw = w^TA^TAw = n (w^TCw)$. And we can also ignore that $n$ since we're choosing the $w$ that maximizes that and $n$ is constant, so it won't affect which $w$ maximizes the expression.
But we don't actually need to enforce the unit length constraint with a Lagrange multiplier because we can turn any non-zero vector into a unit vector by dividing by its length. So, for any $w$ of non-zero length, the vector $\frac{w}{\|w\|}$ is always unit length.
So now we just need to maximize
$$
\frac{w^T}{\|w\|}C\frac{w}{\|w\|} = \frac{w^TCw}{\|w\|^2} = \left(\frac{1}{n}\right)\frac{\|Aw\|^2}{\|w\|^2}
$$
That last expression shows that this is equivalent to maximizing the ratio of the squared length of $Aw$ to the squared length of $w$, where we let $w$ be of any length. Instead of forcing $w$ to be unit-length and maximizing the numerator of that ratio (the denominator will be 1 if $w$ is forced to be unit length), we can let $w$ be whatever length it wants and then maximize that ratio. As someone else pointed out, this ratio is called the Rayleigh Quotient.
As with lots of maximization problems, we need to find where the gradient vanishes (where the derivative is equal to zero). Before we do that with our particular multivariate case, let's derive something general about where derivatives equal zero for quotients in one dimension.
Consider the quotient $\frac{f(x)}{g(x)}$. Taking its derivative with respect to $x$, using the product rule and chain rule (or "quotient" rule) from basic calc, we get:
$$
\frac{f'(x)}{g(x)} - \frac{f(x)g'(x)}{g(x)^2}
$$
If we set this equal to zero (to find maxima and minima) and then rearrange a bit, we get
$$
\frac{f'(x)}{g'(x)} = \frac{f(x)}{g(x)}
$$
So when the ratio of the rates of change equals the ratio of the current values, the derivative is zero and you're at a minimum or maximum.
Which actually makes a lot of sense when you think about it. Think informally about small changes in $f$ and $g$ that happen when you take a small step in $x$, then you'll go
$$
\frac{f(x)}{g(x)} \xrightarrow{\text{small step in x}} \frac{f(x) + \Delta f}{g(x) + \Delta g}
$$
Since we're interested in the case where there's no net change, we want to know when
$$
\frac{f(x)}{g(x)} \approx \frac{f(x) + \Delta f}{g(x) + \Delta g}
$$
$\approx$ because this is all informal with finite small changes instead of limits. The above is satisfied when
$$
\frac{\Delta f}{\Delta g} \approx \frac{f(x)}{g(x)}
$$
If you currently have 100 oranges and 20 apples, you have 5 oranges per apple. Now you're going to add some oranges and apples. In what case will the ratio (quotient) of oranges to apples be preserved? It would be preserved when, say, you added 5 oranges and 1 apple because $\frac{100}{20} = \frac{105}{21}$. When you went from (100, 20) to (105, 21), the ratio didn't change because the ratio of the changes in quantity was equal to the ratio of the current quantities.
What we'll use is (after one more rearrangement), now using formal symbols again, the following condition:
$$
f'(x) = \frac{f(x)}{g(x)}g'(x)
$$
"The instantaneous rate of change in the numerator must be equal to the rate of change in the denominator scaled by the ratio of the current values".
In our multivariate case, we want the whole gradient to be zero. That is, we want every partial derivative to be zero. Let's give a name to our numerator:
$$
f(w) = \|Aw\|^2
$$
$f$ is a multivariate function. It's a function from a vector $w$ to a scalar, $\|Aw\|^2$.
Let's make $A$ and $w$ explicit to illustrate.
$$
A = \begin{bmatrix}
a & e & i \\
b & f & j \\
c & g & k \\
d & h & l \\
\end{bmatrix}
$$
and
$$
w = \begin{bmatrix}
x \\
y \\
z \\
\end{bmatrix}
$$
If you write out $\|Aw\|^2$ explicitly and take the partial derivative with respect to $y$ for instance (notated as $f_y$), you will get
$$
\begin{align}
f_y & = \frac{d}{dy}(\|Aw\|^2) \\
& = \frac{d}{dy}((ax + ey + iz)^2 + (bx + fy + jz)^2 + \dots) \\
& = 2e(ax + ey + iz) + 2f(bx + fy + jz) + \dots \\
& = 2\left<\begin{bmatrix}e & f & g & h\end{bmatrix}, Aw\right>
\end{align}
$$
So that's 2 times the inner product of the 2nd column of $A$ (corresponding to $y$ being in the 2nd row of $w$) with the vector $Aw$. This makes sense because, e.g., if the 2nd column is pointing in the same direction as $Aw$'s current position, you'll increase its squared length the most. If it's orthogonal, your rate will be 0 because you'll be (instantaneously) rotating $Aw$ instead of moving forward.
And let's give a name to the denominator in our quotient: $g(w) = \|w\|^2$. It's easier to get
$$
g_y = 2y
$$
And we know what condition we want on each of our partial derivatives simultaneously to have the gradient vector equal to the zero vector. In the case of the partial w.r.t. $y$, that will become
$$
f_y = \frac{f(w)}{g(w)}g_y
$$
Keep in mind every term there is a scalar. Plugging in $f_y$ and $g_y$, we get the condition:
$$
2\left<\begin{bmatrix}e & f & g & h\end{bmatrix}, Aw\right> = \frac{\|Aw\|^2}{\|w\|^2} 2y
$$
If we go ahead and derive partial derivatives $f_x$ and $f_z$ too, and arrange them into a column vector, the gradient, we get
$$
\nabla f =
\begin{bmatrix}
f_x \\
f_y \\
f_z
\end{bmatrix} =
\begin{bmatrix}
2\left<\begin{bmatrix}a & b & c & d\end{bmatrix}, Aw\right> \\
2\left<\begin{bmatrix}e & f & g & h\end{bmatrix}, Aw\right> \\
2\left<\begin{bmatrix}i & j & k & l\end{bmatrix}, Aw\right>
\end{bmatrix} =
2A^TAw
$$
The three partial derivatives of $f$ turn out to be equal to something we can write as a matrix product, $2A^TAw$.
Doing the same for $g$, we get
$$
\nabla g = 2w
$$
Now we just need to simultaneously plug in our quotient derivative condition for all three partial derivatives, producing three simultaneous equations:
$$
2A^TAw = \frac{\|Aw\|^2}{\|w\|^2} 2w
$$
Cancelling the 2's, subbing in $C$ for $A^TA$ and letting the $n$'s cancel, we get
$$
Cw = \left(\frac{w^TCw}{w^Tw}\right)w
$$
So the 3 simultaneous conditions we got from our derivative-of-ratios argument, one for each of the 3 partial derivatives of the expression (one for each component of $w$), produce a condition on the whole of $w$, namely that it's an eigenvector of $C$. We have a fixed ratio (the eigenvalue) scaling each partial derivative of $g$ (each component of an eigenvector) by the same amount, producing the partials of $f$ (the components of the output of the linear transformation done by $C$).
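A quick numerical check of the final condition in base R (the simulated data are only meant to illustrate the algebra above): $Cw$ equals the Rayleigh quotient times $w$ when $w$ is an eigenvector of $C$, and not for a generic direction.
set.seed(5)
A <- scale(matrix(rnorm(100 * 3), 100, 3), scale = FALSE)   # centered data matrix
C <- t(A) %*% A / nrow(A)
rayleigh <- function(w) drop(t(w) %*% C %*% w / (t(w) %*% w))
w_eig  <- eigen(C)$vectors[, 1]
w_rand <- c(1, 2, 3)
max(abs(C %*% w_eig  - rayleigh(w_eig)  * w_eig))   # ~0: the stationarity condition holds
max(abs(C %*% w_rand - rayleigh(w_rand) * w_rand))  # clearly non-zero for a generic w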
2,638 | What is an intuitive explanation for how PCA turns from a geometric problem (with distances) to a linear algebra problem (with eigenvectors)? | @amoeba gives neat formalization and proof of:
We can formalize it as follows: given the covariance matrix $\mathbf C$, we are looking for a vector $\mathbf w$ having unit length, $\|\mathbf w\|=1$, such that $\mathbf w^\top\mathbf{Cw}$ is maximal.
But I think there is one intuitive proof to:
It turns out that the first principal direction is given by the eigenvector with the largest eigenvalue. This is a nontrivial and surprising statement.
We can interpret $\mathbf w^\top \mathbf{Cw}$ as a dot product between the vector $\mathbf w$ and $\mathbf{Cw}$, which is obtained by $\mathbf w$ going through the transformation $\mathbf C$:
$\mathbf w^\top \mathbf{Cw} = \|\mathbf w\| \cdot \|\mathbf{Cw}\| \cdot \cos(\mathbf w, \mathbf{Cw})$
Since $\mathbf w$ has fixed length, to maximize $\mathbf w^\top \mathbf{Cw}$ we need to:
maximize $\|\mathbf{Cw}\|$
maximize $\cos(\mathbf w, \mathbf{Cw})$
It turns out that if we take $\mathbf w$ to be the eigenvector of $\mathbf C$ with the largest eigenvalue, we achieve both simultaneously:
$\|\mathbf{Cw}\|$ is maximal (if $\mathbf w$ deviates from this eigenvector, decompose it along the orthogonal eigenvectors and you should see $\|\mathbf{Cw}\|$ decrease),
$\mathbf w$ and $\mathbf{Cw}$ are in the same direction, so $\cos(\mathbf w, \mathbf{Cw}) = 1$, which is maximal.
Since eigenvectors are orthogonal, together with the other eigenvectors of $\mathbf C$ they form a set of principal components for $\mathbf X$.
Proof of point 1
Decompose $\mathbf w$ along the orthogonal primary and secondary eigenvectors of $\mathbf C$, and suppose the lengths of these two components are $v_1$ and $v_2$ respectively. We want to prove that
$(\lambda_1 \|\mathbf w\|)^2 > (\lambda_1 v_1)^2 + (\lambda_2 v_2)^2$
Since $\lambda_1 > \lambda_2$, we have
$(\lambda_1 v_1)^2 + (\lambda_2 v_2)^2 < (\lambda_1 v_1)^2 + (\lambda_1 v_2)^2 = \lambda_1^2 (v_1^2 + v_2^2) = \lambda_1^2 \|\mathbf w\|^2$
2,639 | How to interpret coefficients in a Poisson regression? | The exponentiated numberofdrugs coefficient is the multiplicative term to use for the goal of calculating the estimated healthvalue when numberofdrugs increases by 1 unit. In the case of categorical (factor) variables, the exponentiated coefficient is the multiplicative term relative to the base (first factor) level for that variable (since R uses treatment contrasts by default). The exp(Intercept) is the baseline rate, and all other estimates would be relative to it.
In your example the estimated healthvalue for someone with 2 drugs, "placebo" and improvement=="none" would be (using addition inside exp as the equivalent of multiplication):
exp( 1.88955 + # that's the baseline contribution
2*-0.02303 + 0 + 0 ) # and estimated value will be somewhat lower
[1] 6.318552
While someone on 4 drugs, "treated", and "some" improvement would have an estimated healthvalue of
exp( 1.88955 + 4*-0.02303 + -0.01271 + -0.13541)
[1] 5.203388
ADDENDUM: This is what it means to be "additive on the log scale". "Additive on the log-odds scale" was the phrase that my teacher, Barbara McKnight, used when emphasizing the need to use all applicable term values times their estimated coefficients when doing any kind of prediction. This was in discussions of interpreting logistic regression coefficients, but Poisson regression is similar if you use an offset of time at risk to get rates. You first add all the coefficients (including the intercept term) times each covariate value and then exponentiate the resulting sum. The way to return coefficients from regression objects in R is generally to use the coef() extractor function (done with a different random realization below):
coef(test)
# (Intercept) numberofdrugs treatmenttreated improvedsome improvedmarked
# 1.18561313 0.03272109 0.05544510 -0.09295549 0.06248684
So the calculation of the estimate for a subject with 4 drugs, "treated", with "some" improvement would be:
exp( sum( coef(test)[ c(1,2,3,4) ]* c(1,4,1,1) ) )
[1] 3.592999
And the linear predictor for that case should be the sum of:
coef(test)[c(1,2,3,4)]*c(1,4,1,1)
# (Intercept) numberofdrugs treatmenttreated improvedsome
# 1.18561313 0.13088438 0.05544510 -0.09295549
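As a hedged cross-check (assuming test is the Poisson glm fit referred to above and that the factor levels are spelled "treated" and "some", as the coefficient labels suggest), predict() should reproduce the hand-computed values:
newcase <- data.frame(numberofdrugs = 4, treatment = "treated", improved = "some")
predict(test, newdata = newcase, type = "link")      # the linear predictor, i.e. the sum above
predict(test, newdata = newcase, type = "response")  # exp(linear predictor) = the estimated healthvalue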
These principles should apply to any stats package that returns a table of coefficients to the user. The method and principles are more general than might appear from my use of R.
I'm copying selected clarifying comments since they 'disappear' in the default display:
Q: So you interpret the coefficients as ratios! Thank you! – MarkDollar
A: The coefficients are the natural_logarithms of the ratios. – DWin
Q2: In that case, in a poisson regression, are the exponentiated coefficients also referred to as "odds ratios"? – oort
A2: No. If it were logistic regression they would be but in Poisson regression, where the LHS is number of events and the implicit denominator is the number at risk, then the exponentiated coefficients are "rate ratios" or "relative risks".
2,640 | Using principal component analysis (PCA) for feature selection | The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings). You may recall that PCA seeks to replace $p$ (more or less correlated) variables by $k<p$ uncorrelated linear combinations (projections) of the original variables. Let us ignore how to choose an optimal $k$ for the problem at hand. Those $k$ principal components are ranked by importance through their explained variance, and each variable contributes to a varying degree to each component. Using the largest-variance criterion would be akin to feature extraction, where principal components are used as new features instead of the original variables. However, we can decide to keep only the first component and select the $j<p$ variables that have the highest absolute coefficient; the number $j$ might be based on the proportion of the number of variables (e.g., keep only the top 10% of the $p$ variables), or a fixed cutoff (e.g., considering a threshold on the normalized coefficients). This approach bears some resemblance to the Lasso operator in penalized regression (or PLS regression). Neither the value of $j$, nor the number of components to retain are obvious choices, though.
The problem with using PCA is that (1) measurements from all of the original variables are used in the projection to the lower dimensional space, (2) only linear relationships are considered, and (3) PCA or SVD-based methods, as well as univariate screening methods (t-test, correlation, etc.), do not take into account the potential multivariate nature of the data structure (e.g., higher order interaction between variables).
About point 1, some more elaborate screening methods have been proposed, for example principal feature analysis or stepwise method, like the one used for 'gene shaving' in gene expression studies. Also, sparse PCA might be used to perform dimension reduction and variable selection based on the resulting variable loadings. About point 2, it is possible to use kernel PCA (using the kernel trick) if one needs to embed nonlinear relationships into a lower dimensional space. Decision trees, or better the random forest algorithm, are probably better able to solve Point 3. The latter allows to derive Gini- or permutation-based measures of variable importance.
A last point: If you intend to perform feature selection before applying a classification or regression model, be sure to cross-validate the whole process (see §7.10.2 of the Elements of Statistical Learning, or Ambroise and McLachlan, 2002).
As you seem to be interested in an R solution, I would recommend taking a look at the caret package, which includes a lot of handy functions for data preprocessing and variable selection in a classification or regression context.
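To make the loadings-based selection concrete, here is a minimal base-R sketch (the simulated data, the "first component only" rule and the cutoff j = 3 are arbitrary illustrations, not a recommendation from the answer above):
set.seed(7)
X <- matrix(rnorm(100 * 8), 100, 8)
colnames(X) <- paste0("x", 1:8)
pca <- prcomp(X, center = TRUE, scale. = TRUE)
loadings_pc1 <- pca$rotation[, 1]                             # coefficients of the first principal component
j <- 3                                                        # keep the top 3 variables (arbitrary cutoff)
keep <- names(sort(abs(loadings_pc1), decreasing = TRUE))[1:j]
keep
X_selected <- X[, keep]                                       # reduced design matrix for a later model
Remember the caveat above: if this selection feeds a classification or regression model, the whole pipeline (selection plus fitting) should sit inside the cross-validation loop.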
The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings). You may recall that PCA seeks to replace $p$ (more or less correlated) variables by $k<p$ uncorrelated linear combinations (projections) of the original variables. Let us ignore how to choose an optimal $k$ for the problem at hand. Those $k$ principal components are ranked by importance through their explained variance, and each variable contributes to varying degrees to each component. Using the largest variance criterion would be akin to feature extraction, where principal components are used as new features, instead of the original variables. However, we can decide to keep only the first component and select the $j<p$ variables that have the highest absolute coefficient; the number $j$ might be based on the proportion of the number of variables (e.g., keep only the top 10% of the $p$ variables), or a fixed cutoff (e.g., considering a threshold on the normalized coefficients). This approach bears some resemblance to the Lasso operator in penalized regression (or PLS regression). Neither the value of $j$, nor the number of components to retain are obvious choices, though.
The problem with using PCA is that (1) measurements from all of the original variables are used in the projection to the lower dimensional space, (2) only linear relationships are considered, and (3) PCA or SVD-based methods, as well as univariate screening methods (t-test, correlation, etc.), do not take into account the potential multivariate nature of the data structure (e.g., higher order interaction between variables).
About point 1, some more elaborate screening methods have been proposed, for example principal feature analysis or stepwise methods, like the one used for 'gene shaving' in gene expression studies. Also, sparse PCA might be used to perform dimension reduction and variable selection based on the resulting variable loadings. About point 2, it is possible to use kernel PCA (using the kernel trick) if one needs to embed nonlinear relationships into a lower dimensional space. Decision trees, or better the random forest algorithm, are probably better able to solve Point 3. The latter allows one to derive Gini- or permutation-based measures of variable importance.
A last point: If you intend to perform feature selection before applying a classification or regression model, be sure to cross-validate the whole process (see §7.10.2 of the Elements of Statistical Learning, or Ambroise and McLachlan, 2002).
As you seem to be interested in an R solution, I would recommend taking a look at the caret package, which includes a lot of handy functions for data preprocessing and variable selection in a classification or regression context.
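To make the loading-based selection concrete, here is a minimal R sketch (added here, not part of the original answer; the data set, the use of PC1 only, and the cutoff j = 3 are arbitrary illustrative choices):
X <- scale(mtcars[, -1])                     # any numeric feature matrix, standardized
pca <- prcomp(X)
load1 <- pca$rotation[, 1]                   # loadings on the first principal component
j <- 3                                       # keep the top-j variables by absolute loading
selected <- names(sort(abs(load1), decreasing = TRUE))[1:j]
selected
The same idea extends to several components, for example by ranking variables on their summed squared loadings over the retained components.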
The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients (loadings). You may |
2,641 | Using principal component analysis (PCA) for feature selection | Given a set of N features a PCA analysis will produce (1) the linear combination of the features with the highest variance (first PCA component), (2) the linear combination with the highest variance in the subspace orthogonal to the first PCA component, and so on (under the constraint that the coefficients of each combination form a vector with unit norm).
Whether the linear combination with maximum variance is a "good" feature really depends on what you are trying to predict. For this reason I would say that being a PCA component and being a "good" feature are (in general) two unrelated notions. | Using principal component analysis (PCA) for feature selection | Given a set of N features a PCA analysis will produce (1) the linear combination of the features with the highest variance (first PCA component), (2) the linear combination with the highest variance i
Given a set of N features a PCA analysis will produce (1) the linear combination of the features with the highest variance (first PCA component), (2) the linear combination with the highest variance in the subspace orthogonal to the first PCA component, and so on (under the constraint that the coefficients of each combination form a vector with unit norm).
Whether the linear combination with maximum variance is a "good" feature really depends on what you are trying to predict. For this reason I would say that being a PCA component and being a "good" feature are (in general) two unrelated notions.
Given a set of N features a PCA analysis will produce (1) the linear combination of the features with the highest variance (first PCA component), (2) the linear combination with the highest variance i |
2,642 | Using principal component analysis (PCA) for feature selection | I skim-read through the comments above and I believe quite a few have pointed out that PCA is not a good approach to feature selection. PCA offers dimensionality reduction but it is often conflated with feature selection (as both tend to reduce the feature space in a sense). I would like to point out the key differences (absolutely open to opinions on this) that I see between the two:
PCA is actually a way of transforming your coordinate system to capture the variation in your data. This does not mean that the directions with the most variation are in any way more important than the others. That may be true in some cases, while in others it may have no significance. PCA will only be relevant in the cases where the features having the most variation actually are the ones most important to your problem statement, and this must be known beforehand. Normalizing the data helps reduce this problem, but PCA is still not a good method to use for feature selection. I will list some of the methods that scikit-learn uses for feature selection, just to give some direction:
Remove highly correlated features (Using Pearson's correlation matrix)
Recursive Feature Elimination (sklearn.feature_selection.RFE)
SelectFromModel (sklearn.feature_selection.SelectFromModel)
(1) above removes features (except one) that are highly correlated amongst themselves. (2) and (3) run different algorithms to identify combinations of features and check which set gives the best accuracy while ranking the importance of features accordingly.
I'm not sure which language you are looking to use but there might be similar libraries to these.
Thanks! | Using principal component analysis (PCA) for feature selection | I skim read through the comments above and I believe quite a few have pointed out that PCA is not a good approach to feature selection. PCA offers dimensionality reduction but it is often misconceived | Using principal component analysis (PCA) for feature selection
I skim-read through the comments above and I believe quite a few have pointed out that PCA is not a good approach to feature selection. PCA offers dimensionality reduction but it is often conflated with feature selection (as both tend to reduce the feature space in a sense). I would like to point out the key differences (absolutely open to opinions on this) that I see between the two:
PCA is actually a way of transforming your coordinate system to capture the variation in your data. This does not mean that the directions with the most variation are in any way more important than the others. That may be true in some cases, while in others it may have no significance. PCA will only be relevant in the cases where the features having the most variation actually are the ones most important to your problem statement, and this must be known beforehand. Normalizing the data helps reduce this problem, but PCA is still not a good method to use for feature selection. I will list some of the methods that scikit-learn uses for feature selection, just to give some direction:
Remove highly correlated features (Using Pearson's correlation matrix)
Recursive Feature Elimination (sklearn.feature_selection.RFE)
SelectFromModel (sklearn.feature_selection.SelectFromModel)
(1) above removes features (except one) that are highly correlated amongst themselves. (2) and (3) run different algorithms to identify combinations of features and check which set gives the best accuracy while ranking the importance of features accordingly.
I'm not sure which language you are looking to use but there might be similar libraries to these.
Thanks! | Using principal component analysis (PCA) for feature selection
I skim read through the comments above and I believe quite a few have pointed out that PCA is not a good approach to feature selection. PCA offers dimensionality reduction but it is often misconceived |
2,644 | Using principal component analysis (PCA) for feature selection | You cannot order features according to their variance, as the variance used in PCA is basically a multidimensional entity. You can only order features by the projection of the variance onto a certain direction you choose (which is normally the first principal component).
So, in other words, whether a feature has more variance than another one depends on how you choose your projection direction. | Using principal component analysis (PCA) for feature selection | You can not order features according to their variance, as the variance used in PCA is basically a multidimensional entity. You can only order features by the projection of the variance to certain dir
You cannot order features according to their variance, as the variance used in PCA is basically a multidimensional entity. You can only order features by the projection of the variance onto a certain direction you choose (which is normally the first principal component).
So, in other words, whether a feature has more variance than another one depends on how you choose your projection direction.
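A tiny R illustration of this point (added here, not part of the original answer; the simulated data are arbitrary): the same variables give very different "variances" depending on the direction you project onto.
set.seed(1)
X <- cbind(x1 = rnorm(200, sd = 1), x2 = rnorm(200, sd = 3))
var(X %*% c(1, 0))                  # variance of the projection onto x1
var(X %*% c(0, 1))                  # much larger along x2
var(X %*% (c(1, 1) / sqrt(2)))      # something in between for a mixed unit direction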
You can not order features according to their variance, as the variance used in PCA is basically a multidimensional entity. You can only order features by the projection of the variance to certain dir |
2,645 | Using principal component analysis (PCA) for feature selection | PCA tells us which features are more important. How?
In short:
We find the first principal component (PC1). Now PC1 is a linear combination of the variables (features). The variable with the highest weight (coefficient, or loading score) in that linear combination is the most important feature.
Don't miss this wonderful video from StatQuest. | Using principal component analysis (PCA) for feature selection | PCA tells us what features are more important, how?
In short:
We find the first principal component one (PC1). Now PC1 is a linear combination of the variables (features). The variable with the highes | Using principal component analysis (PCA) for feature selection
PCA tells us which features are more important. How?
In short:
We find the first principal component (PC1). Now PC1 is a linear combination of the variables (features). The variable with the highest weight (coefficient, or loading score) in that linear combination is the most important feature.
Don't miss this wonderful video from StatQuest. | Using principal component analysis (PCA) for feature selection
PCA tells us what features are more important, how?
In short:
We find the first principal component one (PC1). Now PC1 is a linear combination of the variables (features). The variable with the highes |
2,645 | Using principal component analysis (PCA) for feature selection | From my perspective, I agree with Sidharth Gurbani's answer.
"The pca is not suitable for variable selection."
You can even construct a dataset in which a linear model works well but performs badly with respect to the first principal component:
df <- data.frame(y=c(1,2,2,2,3),
x1=c(-.87, -1.22, 0, 1.22, .87),
x2=c(-.87, 1.22, 0, -1.22, .87))
fit <- lm(y ~ x1+x2, data = df)
print(fit)
summary(fit)
pc <- princomp(df[2:3])
biplot(pc)
pc1 <- pc$scores[,1]
fit2 <- lm(y ~ pc1)
summary(fit2)
Output:
Call:
lm(formula = y ~ x1 + x2, data = df)
Coefficients:
(Intercept) x1 x2
2.0000 0.5747 0.5747
Warning: essentially perfect fit: summary may be unreliable
Call:
lm(formula = y ~ x1 + x2, data = df)
Residuals:
1 2 3 4 5
1.402e-16 -6.769e-17 -1.451e-16 -6.769e-17 1.402e-16
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.000e+00 8.340e-17 2.398e+16 <2e-16 ***
x1 5.747e-01 9.308e-17 6.175e+15 <2e-16 ***
x2 5.747e-01 9.308e-17 6.175e+15 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.865e-16 on 2 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 2.876e+31 on 2 and 2 DF, p-value: < 2.2e-16
Call:
lm(formula = y ~ pc1)
Residuals:
1 2 3 4 5
-1.000e+00 -6.958e-17 -1.406e-17 -1.406e-17 1.000e+00
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.000e+00 3.651e-01 5.477 0.012 *
pc1 2.503e-17 3.171e-01 0.000 1.000
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8165 on 3 degrees of freedom
Multiple R-squared: 2.465e-32, Adjusted R-squared: -0.3333
F-statistic: 7.396e-32 on 1 and 3 DF, p-value: 1
The R-squared of fit is 1, while that of fit2 is 0!
This is because y does not vary along the pc1 direction: the relationship with y lies entirely in the direction orthogonal to the first principal component.
Let me make it clearer:
PCA reduces the dimension while retaining as much of the variation as possible. It works well only if the variation is of interest. It does not guarantee that it won't hurt or break down the linear or other kinds of relationships in the dataset.
So, use PCA only if the variation is of interest. | Using principal component analysis (PCA) for feature selection | From my perspective, I agree with Sidharth Gurbani's answer.
"The pca is not suitable for variable selection."
You can even construct a dataset in which linear model works well but the performs badly | Using principal component analysis (PCA) for feature selection
From my perspective, I agree with Sidharth Gurbani's answer.
"The pca is not suitable for variable selection."
You can even construct a dataset in which a linear model works well but performs badly with respect to the first principal component:
df <- data.frame(y=c(1,2,2,2,3),
x1=c(-.87, -1.22, 0, 1.22, .87),
x2=c(-.87, 1.22, 0, -1.22, .87))
fit <- lm(y ~ x1+x2, data = df)
print(fit)
summary(fit)
pc <- princomp(df[2:3])
biplot(pc)
pc1 <- pc$scores[,1]
fit2 <- lm(y ~ pc1)
summary(fit2)
Output:
Call:
lm(formula = y ~ x1 + x2, data = df)
Coefficients:
(Intercept) x1 x2
2.0000 0.5747 0.5747
Warning: essentially perfect fit: summary may be unreliable
Call:
lm(formula = y ~ x1 + x2, data = df)
Residuals:
1 2 3 4 5
1.402e-16 -6.769e-17 -1.451e-16 -6.769e-17 1.402e-16
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.000e+00 8.340e-17 2.398e+16 <2e-16 ***
x1 5.747e-01 9.308e-17 6.175e+15 <2e-16 ***
x2 5.747e-01 9.308e-17 6.175e+15 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.865e-16 on 2 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 2.876e+31 on 2 and 2 DF, p-value: < 2.2e-16
Call:
lm(formula = y ~ pc1)
Residuals:
1 2 3 4 5
-1.000e+00 -6.958e-17 -1.406e-17 -1.406e-17 1.000e+00
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.000e+00 3.651e-01 5.477 0.012 *
pc1 2.503e-17 3.171e-01 0.000 1.000
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8165 on 3 degrees of freedom
Multiple R-squared: 2.465e-32, Adjusted R-squared: -0.3333
F-statistic: 7.396e-32 on 1 and 3 DF, p-value: 1
The R-squared of fit is 1, while that of fit2 is 0!
This is because y does not vary along the pc1 direction: the relationship with y lies entirely in the direction orthogonal to the first principal component.
Let me make it clearer:
PCA reduces the dimension while retaining as much of the variation as possible. It works well only if the variation is of interest. It does not guarantee that it won't hurt or break down the linear or other kinds of relationships in the dataset.
So, use PCA only if the variation is of interest. | Using principal component analysis (PCA) for feature selection
From my perspective, I agree with Sidharth Gurbani's answer.
"The pca is not suitable for variable selection."
You can even construct a dataset in which linear model works well but the performs badly |
2,647 | Using principal component analysis (PCA) for feature selection | There are some really great thoughts in the answers so far, and they do a good job of explaining that the main job of PCA is to provide a few variables which are linear combinations of our original ones, not to select individual features of our original space. But it is actually possible to do something like that with the CUR decomposition. See CUR matrix decompositions for improved data analysis.
This has as a hyperparameter an integer $K$, which is the number of PCA components to retain as usual, and then you select features to retain randomly based on their total squared contributions to the first $K$ principal components.
In particular, define the normalized statistical leverages to be $\pi_j=\sum_{k=1}^K\frac{v_{j,k}^2}{K}$ for each variable $j$ (here $v_{j,k}$ gives the $j$ element of singular vector $k$). The theory presented in the referenced article relies on keeping each variable with probability proportional to $\pi_j$. Keep this between you and me, but I usually like to just look at the variables with top scores instead of choosing randomly. | Using principal component analysis (PCA) for feature selection | Some really great thoughts in the answers so far, and do a good job of explaining that the main job of PCA is to provide a few variables which are linear combinations of our original ones, not to sele | Using principal component analysis (PCA) for feature selection
There are some really great thoughts in the answers so far, and they do a good job of explaining that the main job of PCA is to provide a few variables which are linear combinations of our original ones, not to select individual features of our original space. But it is actually possible to do something like that with the CUR decomposition. See CUR matrix decompositions for improved data analysis.
This has as a hyperparameter an integer $K$, which is the number of PCA components to retain as usual, and then you select features to retain randomly based on their total squared contributions to the first $K$ principal components.
In particular, define the normalized statistical leverages to be $\pi_j=\sum_{k=1}^K\frac{v_{j,k}^2}{K}$ for each variable $j$ (here $v_{j,k}$ gives the $j$-th element of singular vector $k$). The theory presented in the referenced article relies on keeping each variable with probability proportional to $\pi_j$. Keep this between you and me, but I usually like to just look at the variables with top scores instead of choosing randomly.
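A small R sketch of the leverage-score computation (added for illustration; it is not the authors' code, and the data set and K are arbitrary choices):
X <- scale(mtcars[, -1])                     # numeric feature matrix
K <- 2                                       # number of components to use
V <- svd(X)$v[, 1:K, drop = FALSE]           # top-K right singular vectors (p x K)
pi_j <- rowSums(V^2) / K                     # normalized leverage score per variable
names(pi_j) <- colnames(X)
sort(pi_j, decreasing = TRUE)                # the paper samples features with probability proportional to these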
Some really great thoughts in the answers so far, and do a good job of explaining that the main job of PCA is to provide a few variables which are linear combinations of our original ones, not to sele |
2,647 | What are i.i.d. random variables? | It means "Independent and identically distributed".
A good example is a succession of throws of a fair coin: The coin has no memory, so all the throws are "independent".
And every throw is 50:50 (heads:tails), so the coin is and stays fair - the distribution from which every throw is drawn, so to speak, is and stays the same: "identically distributed".
A good starting point would be the Wikipedia page.
::EDIT::
Follow this link to further explore the concept. | What are i.i.d. random variables? | It means "Independent and identically distributed".
A good example is a succession of throws of a fair coin: The coin has no memory, so all the throws are "independent".
And every throw is 50:50 (head | What are i.i.d. random variables?
It means "Independent and identically distributed".
A good example is a succession of throws of a fair coin: The coin has no memory, so all the throws are "independent".
And every throw is 50:50 (heads:tails), so the coin is and stays fair - the distribution from which every throw is drawn, so to speak, is and stays the same: "identically distributed".
A good starting point would be the Wikipedia page.
::EDIT::
Follow this link to further explore the concept. | What are i.i.d. random variables?
It means "Independent and identically distributed".
A good example is a succession of throws of a fair coin: The coin has no memory, so all the throws are "independent".
And every throw is 50:50 (head |
2,648 | What are i.i.d. random variables? | Nontechnical explanation:
Independence is a very general notion. Two events are said to be independent if the occurrence of one does not give you any information as to whether the other event occurred or not. In particular,
the probability that we ascribe to the second event is not affected
by the knowledge that the first event has occurred.
Example of independent events, possibly identically distributed
Consider tossing two different coins one after the other. Assuming that
your thumb did not get unduly tired when it flipped the first coin,
it is reasonable to assume that knowing that the first coin toss resulted in Heads in no way influences what you think the probability
of Heads on the second toss is. The two events
$$\{\text{first coin toss resulted in Heads}\}~~\text{and}~~\{\text{second coin toss resulted in Heads}\}$$
are said to be independent events.
If we know, or obstinately insist, that the two coins have different
probabilities of resulting in Heads, then the events are not
identically distributed.
If we know or assume
that the two coins have the same probability $p$ of coming
up Heads, then the above events are also identically distributed,
meaning that they both have the same probability $p$ of occurring.
But notice that unless $p = \frac 12$, the probability of Heads
does not equal the probability of Tails. As noted in one of the
Comments, "identical distribution" is not the same as "equally
probable."
Example of identically distributed nonindependent events
Consider an urn with two balls in it, one black and one white.
We reach into it and draw out the two balls one after the other,
choosing the first one at random (and this of course determines
the color of the next ball). Thus, the two equally
likely outcomes of the experiment
are (White, Black) and (Black, White), and we see that the first
ball is equally likely to be Black or White and so is the second
ball also equally likely to be Black or White. In other words,
the events
$$\{\text{first ball drawn is Black}\}~~\text{and}~~\{\text{second ball drawn is Black}\}$$
certainly are identically distributed, but they are definitely
not independent events. Indeed, if we know that the first
event has occurred, we know for sure that the second cannot
occur. Thus, while our initial evaluation of the probability
of the second event is $\frac 12$, once we know that the first
event has occurred, we had best revise our assessment of the
probability of the second drawn will be black from $\frac 12$ to $0$. | What are i.i.d. random variables? | Nontechnical explanation:
Independence is a very general notion. Two events are said to be independent if the occurrence of one does not give you any information as to whether the other event occurred | What are i.i.d. random variables?
Nontechnical explanation:
Independence is a very general notion. Two events are said to be independent if the occurrence of one does not give you any information as to whether the other event occurred or not. In particular,
the probability that we ascribe to the second event is not affected
by the knowledge that the first event has occurred.
Example of independent events, possibly identically distributed
Consider tossing two different coins one after the other. Assuming that
your thumb did not get unduly tired when it flipped the first coin,
it is reasonable to assume that knowing that the first coin toss resulted in Heads in no way influences what you think the probability
of Heads on the second toss is. The two events
$$\{\text{first coin toss resulted in Heads}\}~~\text{and}~~\{\text{second coin toss resulted in Heads}\}$$
are said to be independent events.
If we know, or obstinately insist, that the two coins have different
probabilities of resulting in Heads, then the events are not
identically distributed.
If we know or assume
that the two coins have the same probability $p$ of coming
up Heads, then the above events are also identically distributed,
meaning that they both have the same probability $p$ of occurring.
But notice that unless $p = \frac 12$, the probability of Heads
does not equal the probability of Tails. As noted in one of the
Comments, "identical distribution" is not the same as "equally
probable."
Example of identically distributed nonindependent events
Consider an urn with two balls in it, one black and one white.
We reach into it and draw out the two balls one after the other,
choosing the first one at random (and this of course determines
the color of the next ball). Thus, the two equally
likely outcomes of the experiment
are (White, Black) and (Black, White), and we see that the first
ball is equally likely to be Black or White and so is the second
ball also equally likely to be Black or White. In other words,
the events
$$\{\text{first ball drawn is Black}\}~~\text{and}~~\{\text{second ball drawn is Black}\}$$
certainly are identically distributed, but they are definitely
not independent events. Indeed, if we know that the first
event has occurred, we know for sure that the second cannot
occur. Thus, while our initial evaluation of the probability
of the second event is $\frac 12$, once we know that the first
event has occurred, we had best revise our assessment of the
probability of the second drawn will be black from $\frac 12$ to $0$. | What are i.i.d. random variables?
Nontechnical explanation:
Independence is a very general notion. Two events are said to be independent if the occurrence of one does not give you any information as to whether the other event occurred |
2,649 | What are i.i.d. random variables? | A random variable is a variable which contains the probability of all possible events in a scenario. For example, let's create a random variable which represents the number of heads in 100 coin tosses. The random variable will contain the probability of getting 1 head, 2 heads, 3 heads... all the way to 100 heads. Let's call this random variable X.
If you have two random variables then they are IID (independent identically distributed) if:
If they are independent. As explained above independence means the occurrence of one event does not provide any information about the other event. For example, if I get 100 heads after 100 flips, the probabilities of getting heads or tails in the next flip are the same.
If each random variable shares the same distribution. For example, let's take the random variable from above - X. Let's say X represents Obama about to flip a coin 100 times. Now let's say Y represents a Priest about to flip a coin 100 times. If Obama and the Priest flip coins with the same probability of landing on heads, then X and Y are considered identically distributed. If we sample repeatedly from either the Priest or Obama, then the samples are considered identically distributed.
Side note: Independence also means you can multiply probabilities. Let's say the probability of heads is p; then the probability of getting two heads in a row is p*p, or p^2. | What are i.i.d. random variables? | A random variable is variable which contains the probability of all possible events in a scenario. For example, lets create a random variable which represents the number of heads in 100 coin tosses. T
A random variable is a variable which contains the probability of all possible events in a scenario. For example, let's create a random variable which represents the number of heads in 100 coin tosses. The random variable will contain the probability of getting 1 head, 2 heads, 3 heads... all the way to 100 heads. Let's call this random variable X.
If you have two random variables then they are IID (independent identically distributed) if:
If they are independent. As explained above independence means the occurrence of one event does not provide any information about the other event. For example, if I get 100 heads after 100 flips, the probabilities of getting heads or tails in the next flip are the same.
If each random variable shares the same distribution. For example, let's take the random variable from above - X. Let's say X represents Obama about to flip a coin 100 times. Now let's say Y represents a Priest about to flip a coin 100 times. If Obama and the Priest flip coins with the same probability of landing on heads, then X and Y are considered identically distributed. If we sample repeatedly from either the Priest or Obama, then the samples are considered identically distributed.
Side note: Independence also means you can multiply probabilities. Let's say the probability of heads is p; then the probability of getting two heads in a row is p*p, or p^2.
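A quick simulation (added here, not part of the original answer) confirms the p^2 claim for independent flips:
set.seed(42)
p <- 0.5
flips <- matrix(rbinom(2e5, 1, p), ncol = 2)        # 100,000 pairs of independent flips
mean(flips[, 1] == 1 & flips[, 2] == 1)             # close to p^2 = 0.25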
A random variable is variable which contains the probability of all possible events in a scenario. For example, lets create a random variable which represents the number of heads in 100 coin tosses. T |
2,650 | What are i.i.d. random variables? | That two dependent variables can have the same distribution can be shown with this example:
Assume two successive experiments, each involving 100 tosses of a biased coin, where the total number of Heads is modeled as a random variable X1 for the first experiment and X2 for the second experiment. X1 and X2 are binomial random variables with parameters 100 and p, where p is the bias of the coin.
As such, they are identically distributed. However, they are not independent, since the value of the former is quite informative about the value of the latter. That is, if the result of the first experiment is 100 Heads, this tells us a lot about the bias of the coin and therefore gives us a lot of new information regarding the distribution of X2.
Still X2 and X1 are identically distributed since they are derived from the same coin.
What is also true is that if 2 random variables are dependent then the posterior of X2 given X1 will never be the same as the prior of X2 and vice versa. In contrast, when X1 and X2 are independent, their posteriors are equal to their priors. Therefore, when two variables are dependent, the observation of one of them results in revised estimates regarding the distribution of the second. Still, both may be from the same distribution; it is just that we learn more about the nature of this distribution in the process. So returning to the coin toss experiments, initially in the absence of any information we might assume that X1 and X2 follow a Binomial distribution with parameters 100 and 0.5. But after observing 100 Heads in a row we would certainly revise our estimate about the p parameter to make it quite close to 1. | What are i.i.d. random variables? | That two dependent variables can have the same distribution can be shown with this example:
Assume two successive experiments involving each 100 tosses of a biased coin, where the total number of He | What are i.i.d. random variables?
That two dependent variables can have the same distribution can be shown with this example:
Assume two successive experiments, each involving 100 tosses of a biased coin, where the total number of Heads is modeled as a random variable X1 for the first experiment and X2 for the second experiment. X1 and X2 are binomial random variables with parameters 100 and p, where p is the bias of the coin.
As such, they are identically distributed. However, they are not independent, since the value of the former is quite informative about the value of the latter. That is, if the result of the first experiment is 100 Heads, this tells us a lot about the bias of the coin and therefore gives us a lot of new information regarding the distribution of X2.
Still X2 and X1 are identically distributed since they are derived from the same coin.
What is also true is that if 2 random variables are dependent then the posterior of X2 given X1 will never be the same as the prior of X2 and vice versa. In contrast, when X1 and X2 are independent, their posteriors are equal to their priors. Therefore, when two variables are dependent, the observation of one of them results in revised estimates regarding the distribution of the second. Still, both may be from the same distribution; it is just that we learn more about the nature of this distribution in the process. So returning to the coin toss experiments, initially in the absence of any information we might assume that X1 and X2 follow a Binomial distribution with parameters 100 and 0.5. But after observing 100 Heads in a row we would certainly revise our estimate about the p parameter to make it quite close to 1.
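Here is a short simulation sketch of this situation (added here; the uniform prior on p is an arbitrary choice): each replication draws one unknown bias p and then two Binomial(100, p) counts from the same coin, so X1 and X2 have identical marginal distributions but are clearly dependent.
set.seed(1)
p  <- runif(1e4)                 # unknown bias, redrawn for each replication
x1 <- rbinom(1e4, 100, p)
x2 <- rbinom(1e4, 100, p)
summary(x1); summary(x2)         # same marginal behaviour
cor(x1, x2)                      # strongly positive: observing X1 is informative about X2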
That two dependent variables can have the same distribution can be shown with this example:
Assume two successive experiments involving each 100 tosses of a biased coin, where the total number of He |
2,651 | What are i.i.d. random variables? | An aggregation of several random draws from the same distribution. An example being pulling a marble out of a bag 10,000 times and counting the times you pull the red marble out. | What are i.i.d. random variables? | An aggregation of several random draws from the same distribution. An example being pulling a marble out of a bag 10,000 times and counting the times you pull the red marble out.
An aggregation of several random draws from the same distribution. An example being pulling a marble out of a bag 10,000 times and counting the times you pull the red marble out.
An aggregation of several random draws from the same distribution. An example being pulling a marble out of a bag 10,000 times and counting the times you pull the red marble out.
2,652 | What are i.i.d. random variables? | If a random variable $X$ comes from a population having (say) a normal distribution, that is, its pdf (probability density function) is that of a normal distribution, with a population average $\mu=3$ and population variance $\sigma^2=4$ (the numbers are hypothetical and are just for your understanding and to simplify comparisons), we can describe it as follows: $X \sim N(3 , 4)$.
Now if we have another random variable $Y$ which is also normally distributed and which is $Y \sim N(3, 4)$ then $X$ and $Y$ are identically distributed.
Nevertheless, being identically distributed does not necessarily imply independence. | What are i.i.d. random variables? | If a random variable $X$ comes from a population having (say) a normal distribution, that is its pdf (probability density function) is that of normal distribution, with a population average $\mu=3$ an | What are i.i.d. random variables?
If a random variable $X$ comes from a population having (say) a normal distribution, that is, its pdf (probability density function) is that of a normal distribution, with a population average $\mu=3$ and population variance $\sigma^2=4$ (the numbers are hypothetical and are just for your understanding and to simplify comparisons), we can describe it as follows: $X \sim N(3 , 4)$.
Now if we have another random variable $Y$ which is also normally distributed and which is $Y \sim N(3, 4)$ then $X$ and $Y$ are identically distributed.
Nevertheless, being identically distributed does not necessarily imply independence. | What are i.i.d. random variables?
If a random variable $X$ comes from a population having (say) a normal distribution, that is its pdf (probability density function) is that of normal distribution, with a population average $\mu=3$ an |
2,653 | Why do neural network researchers care about epochs? | In addition to Franck's answer about practicalities, and David's answer about looking at small subgroups – both of which are important points – there are in fact some theoretical reasons to prefer sampling without replacement. The reason is perhaps related to David's point (which is essentially the coupon collector's problem).
In 2009, Léon Bottou compared the convergence performance on a particular text classification problem ($n = 781,265$).
Bottou (2009). Curiously Fast Convergence of some
Stochastic Gradient Descent Algorithms. Proceedings of the
symposium on learning and data science. (author's pdf)
He trained a support vector machine via SGD with three approaches:
Random: draw random samples from the full dataset at each iteration.
Cycle: shuffle the dataset before beginning the learning process, then walk over it sequentially, so that in each epoch you see the examples in the same order.
Shuffle: reshuffle the dataset before each epoch, so that each epoch goes in a different order.
He empirically examined the convergence $\mathbb E[ C(\theta_t) - \min_\theta C(\theta) ]$, where $C$ is the cost function, $\theta_t$ the parameters at step $t$ of optimization, and the expectation is over the shuffling of assigned batches.
For Random, convergence was approximately on the order of $t^{-1}$ (as expected by existing theory at that point).
Cycle obtained convergence on the order of $t^{-\alpha}$ (with $\alpha > 1$ but varying depending on the permutation, for example $\alpha \approx 1.8$ for his Figure 1).
Shuffle was more chaotic, but the best-fit line gave $t^{-2}$, much faster than Random.
This is his Figure 1 illustrating that:
This was later theoretically confirmed by the paper:
Gürbüzbalaban, Ozdaglar, and Parrilo (2015). Why Random Reshuffling Beats Stochastic Gradient Descent. arXiv:1510.08560. (video of invited talk at NIPS 2015)
Their proof only applies to the case where the loss function is strongly convex, i.e. not to neural networks. It's reasonable to expect, though, that similar reasoning might apply to the neural network case (which is much harder to analyze). | Why do neural network researchers care about epochs? | In addition to Franck's answer about practicalities, and David's answer about looking at small subgroups – both of which are important points – there are in fact some theoretical reasons to prefer sam | Why do neural network researchers care about epochs?
In addition to Franck's answer about practicalities, and David's answer about looking at small subgroups – both of which are important points – there are in fact some theoretical reasons to prefer sampling without replacement. The reason is perhaps related to David's point (which is essentially the coupon collector's problem).
In 2009, Léon Bottou compared the convergence performance on a particular text classification problem ($n = 781,265$).
Bottou (2009). Curiously Fast Convergence of some
Stochastic Gradient Descent Algorithms. Proceedings of the
symposium on learning and data science. (author's pdf)
He trained a support vector machine via SGD with three approaches:
Random: draw random samples from the full dataset at each iteration.
Cycle: shuffle the dataset before beginning the learning process, then walk over it sequentially, so that in each epoch you see the examples in the same order.
Shuffle: reshuffle the dataset before each epoch, so that each epoch goes in a different order.
He empirically examined the convergence $\mathbb E[ C(\theta_t) - \min_\theta C(\theta) ]$, where $C$ is the cost function, $\theta_t$ the parameters at step $t$ of optimization, and the expectation is over the shuffling of assigned batches.
For Random, convergence was approximately on the order of $t^{-1}$ (as expected by existing theory at that point).
Cycle obtained convergence on the order of $t^{-\alpha}$ (with $\alpha > 1$ but varying depending on the permutation, for example $\alpha \approx 1.8$ for his Figure 1).
Shuffle was more chaotic, but the best-fit line gave $t^{-2}$, much faster than Random.
This is his Figure 1 illustrating that:
This was later theoretically confirmed by the paper:
Gürbüzbalaban, Ozdaglar, and Parrilo (2015). Why Random Reshuffling Beats Stochastic Gradient Descent. arXiv:1510.08560. (video of invited talk at NIPS 2015)
Their proof only applies to the case where the loss function is strongly convex, i.e. not to neural networks. It's reasonable to expect, though, that similar reasoning might apply to the neural network case (which is much harder to analyze). | Why do neural network researchers care about epochs?
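To see what these sampling schemes look like in code, here is a rough R sketch on a toy least-squares problem (added here; this is not Bottou's code, it uses a constant step size, and it is only meant to show the mechanics of the three orderings; the convergence-rate gap he reports needs properly decayed step sizes and many epochs):
set.seed(0)
n <- 1000
X <- cbind(1, rnorm(n)); theta_true <- c(2, -3)
y <- as.numeric(X %*% theta_true + rnorm(n, sd = 0.1))
sgd <- function(order_fun, epochs = 20, lr = 0.01) {
  theta <- c(0, 0)
  for (e in seq_len(epochs)) {
    for (i in order_fun(n)) {                              # one pass over n indices = one "epoch"
      g <- as.numeric(X[i, ] %*% theta - y[i]) * X[i, ]    # gradient of 0.5 * (x_i' theta - y_i)^2
      theta <- theta - lr * g
    }
  }
  sum((theta - theta_true)^2)                              # squared parameter error
}
sgd(function(n) sample(n, n, replace = TRUE))   # "Random": draws with replacement
sgd(function(n) 1:n)                            # "Cycle": the same order every epoch (Bottou shuffles once up front)
sgd(function(n) sample(n))                      # "Shuffle": a fresh permutation each epoch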
In addition to Franck's answer about practicalities, and David's answer about looking at small subgroups – both of which are important points – there are in fact some theoretical reasons to prefer sam |
2,654 | Why do neural network researchers care about epochs? | It is indeed quite unnecessary from a performance standpoint with a large training set, but using epochs can be convenient, e.g.:
it gives a pretty good metric: "the neural network was trained for 10 epochs" is a clearer statement than "the neural network was trained for 18942 iterations" or "the neural network was trained over 303072 samples".
there are enough random things going on during the training phase: random weight initialization, mini-batch shuffling, dropout, etc.
it is easy to implement
it avoids wondering whether the training set is large enough not to have epochs
[1] gives one more reason, which isn't that relevant given today's computer configurations:
As for any stochastic gradient descent method (including
the mini-batch case), it is important for efficiency of the estimator that each example or minibatch
be sampled approximately independently. Because
random access to memory (or even worse, to
disk) is expensive, a good approximation, called incremental
gradient (Bertsekas, 2010), is to visit the
examples (or mini-batches) in a fixed order corresponding
to their order in memory or disk (repeating
the examples in the same order on a second epoch, if
we are not in the pure online case where each example
is visited only once). In this context, it is safer if
the examples or mini-batches are first put in a random
order (to make sure this is the case, it could
be useful to first shuffle the examples). Faster convergence
has been observed if the order in which the
mini-batches are visited is changed for each epoch,
which can be reasonably efficient if the training set
holds in computer memory.
[1] Bengio, Yoshua. "Practical recommendations for gradient-based training of deep architectures." Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 437-478. | Why do neural network researchers care about epochs? | It is indeed quite unnecessary from a performance standpoint with a large training set, but using epochs can be convenient, e.g.:
it gives a pretty good metric: "the neural network was trained for 10 | Why do neural network researchers care about epochs?
It is indeed quite unnecessary from a performance standpoint with a large training set, but using epochs can be convenient, e.g.:
it gives a pretty good metric: "the neural network was trained for 10 epochs" is a clearer statement than "the neural network was trained for 18942 iterations" or "the neural network was trained over 303072 samples".
there are enough random things going on during the training phase: random weight initialization, mini-batch shuffling, dropout, etc.
it is easy to implement
it avoids wondering whether the training set is large enough not to have epochs
[1] gives one more reason, which isn't that relevant given today's computer configurations:
As for any stochastic gradient descent method (including
the mini-batch case), it is important for efficiency of the estimator that each example or minibatch
be sampled approximately independently. Because
random access to memory (or even worse, to
disk) is expensive, a good approximation, called incremental
gradient (Bertsekas, 2010), is to visit the
examples (or mini-batches) in a fixed order corresponding
to their order in memory or disk (repeating
the examples in the same order on a second epoch, if
we are not in the pure online case where each example
is visited only once). In this context, it is safer if
the examples or mini-batches are first put in a random
order (to make sure this is the case, it could
be useful to first shuffle the examples). Faster convergence
has been observed if the order in which the
mini-batches are visited is changed for each epoch,
which can be reasonably efficient if the training set
holds in computer memory.
[1] Bengio, Yoshua. "Practical recommendations for gradient-based training of deep architectures." Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 437-478. | Why do neural network researchers care about epochs?
It is indeed quite unnecessary from a performance standpoint with a large training set, but using epochs can be convenient, e.g.:
it gives a pretty good metric: "the neural network was trained for 10 |
2,655 | Why do neural network researchers care about epochs? | I disagree somewhat that it clearly won't matter. Let's say there are a million training examples, and we take ten million samples.
In R, we can quickly see what the distribution looks like with
plot(dbinom(0:40, size = 10 * 1E6, prob = 1E-6), type = "h")
Some examples will be visited 20+ times, while 1% of them will be visited 3 or fewer times. If the training set was chosen carefully to represent the expected distribution of examples in real data, this could have a real impact in some areas of the data set---especially once you start slicing up the data into smaller groups.
Consider the recent case where one Illinois voter effectively got oversampled 30x and dramatically shifted the model's estimates for his demographic group (and to a lesser extent, for the whole US population). If we accidentally oversample "Ruffed Grouse" images taken against green backgrounds on cloudy days with a narrow depth of field and undersample the other kinds of grouse images, the model might associate those irrelevant features with the category label. The more ways there are to slice up the data, the more of these subgroups there will be, and the more opportunities for this kind of mistake there will be. | Why do neural network researchers care about epochs? | I disagree somewhat that it clearly won't matter. Let's say there are a million training examples, and we take ten million samples.
In R, we can quickly see what the distribution looks like with
pl | Why do neural network researchers care about epochs?
I disagree somewhat that it clearly won't matter. Let's say there are a million training examples, and we take ten million samples.
In R, we can quickly see what the distribution looks like with
plot(dbinom(0:40, size = 10 * 1E6, prob = 1E-6), type = "h")
Some examples will be visited 20+ times, while 1% of them will be visited 3 or fewer times. If the training set was chosen carefully to represent the expected distribution of examples in real data, this could have a real impact in some areas of the data set---especially once you start slicing up the data into smaller groups.
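Those two figures can be double-checked in R (added note): with ten samples per example on average, the visit counts are approximately Poisson(10).
ppois(3, lambda = 10)         # about 0.01, i.e. roughly 1% of examples are seen 3 times or fewer
1 - ppois(19, lambda = 10)    # a small but real share of examples is seen 20+ times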
Consider the recent case where one Illinois voter effectively got oversampled 30x and dramatically shifted the model's estimates for his demographic group (and to a lesser extent, for the whole US population). If we accidentally oversample "Ruffed Grouse" images taken against green backgrounds on cloudy days with a narrow depth of field and undersample the other kinds of grouse images, the model might associate those irrelevant features with the category label. The more ways there are to slice up the data, the more of these subgroups there will be, and the more opportunities for this kind of mistake there will be. | Why do neural network researchers care about epochs?
I disagree somewhat that it clearly won't matter. Let's say there are a million training examples, and we take ten million samples.
In R, we can quickly see what the distribution looks like with
pl |
2,656 | How does a simple logistic regression model achieve a 92% classification accuracy on MNIST? | tl;dr Even though this is an image classification dataset, it remains a very easy task, for which one can easily find a direct mapping from inputs to predictions.
Answer:
This is a very interesting question and thanks to the simplicity of logistic regression you can actually find out the answer.
What logistic regression does is for each image accept $784$ inputs and multiply them with weights to generate its prediction. The interesting thing is that due to the direct mapping between input and output (i.e. no hidden layer), the value of each weight corresponds to how much each one of the $784$ inputs are taken into account when computing the probability of each class. Now, by taking the weights for each class and reshaping them into $28 \times 28$ (i.e. the image resolution), we can tell what pixels are most important for the computation of each class.
Note, again, that these are the weights.
Now take a look at the above image and focus on the first two digits (i.e. zero and one). Blue weights mean that this pixel's intensity contributes a lot for that class and red values mean that it contributes negatively.
Now imagine, how does a person draw a $0$? He draws a circular shape that's empty in between. That's exactly what the weights picked up on. In fact if someone draws the middle of the image, it counts negatively as a zero. So to recognize zeros you don't need some sophisticated filters and high-level features. You can just look at the drawn pixel locations and judge according to this.
Same thing for the $1$. It always has a straight vertical line in the middle of the image. All else counts negatively.
The rest of the digits are a bit more complicated, but with a little imagination you can see the $2$, the $3$, the $7$ and the $8$. The rest of the numbers are a bit more difficult, which is what actually keeps the logistic regression from reaching the high-90s.
Through this you can see that logistic regression has a very good chance of getting a lot of images right and that's why it scores so high.
The code to reproduce the above figure is a bit dated, but here you go:
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
# Load MNIST:
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create model
x = tf.placeholder(tf.float32, shape=(None, 784))
y = tf.placeholder(tf.float32, shape=(None, 10))
W = tf.Variable(tf.zeros((784,10)))
b = tf.Variable(tf.zeros((10)))
z = tf.matmul(x, W) + b
y_hat = tf.nn.softmax(z)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_hat), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) #
correct_pred = tf.equal(tf.argmax(y_hat, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Train model
batch_size = 64
with tf.Session() as sess:
loss_tr, acc_tr, loss_ts, acc_ts = [], [], [], []
sess.run(tf.global_variables_initializer())
for step in range(1, 1001):
x_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={x: x_batch, y: y_batch})
l_tr, a_tr = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch, y: y_batch})
l_ts, a_ts = sess.run([cross_entropy, accuracy], feed_dict={x: mnist.test.images, y: mnist.test.labels})
loss_tr.append(l_tr)
acc_tr.append(a_tr)
loss_ts.append(l_ts)
acc_ts.append(a_ts)
weights = sess.run(W)
print('Test Accuracy =', sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
# Plotting:
for i in range(10):
plt.subplot(2, 5, i+1)
weight = weights[:,i].reshape([28,28])
plt.title(i)
plt.imshow(weight, cmap='RdBu') # as noted by @Eric Duminil, cmap='gray' makes the numbers stand out more
frame1 = plt.gca()
frame1.axes.get_xaxis().set_visible(False)
frame1.axes.get_yaxis().set_visible(False) | How does a simple logistic regression model achieve a 92% classification accuracy on MNIST? | tl;dr Even though this is an image classification dataset, it remains a very easy task, for which one can easily find a direct mapping from inputs to predictions.
Answer:
This is a very interesting q | How does a simple logistic regression model achieve a 92% classification accuracy on MNIST?
tl;dr Even though this is an image classification dataset, it remains a very easy task, for which one can easily find a direct mapping from inputs to predictions.
Answer:
This is a very interesting question and thanks to the simplicity of logistic regression you can actually find out the answer.
What logistic regression does is for each image accept $784$ inputs and multiply them with weights to generate its prediction. The interesting thing is that due to the direct mapping between input and output (i.e. no hidden layer), the value of each weight corresponds to how much each one of the $784$ inputs are taken into account when computing the probability of each class. Now, by taking the weights for each class and reshaping them into $28 \times 28$ (i.e. the image resolution), we can tell what pixels are most important for the computation of each class.
Note, again, that these are the weights.
Now take a look at the above image and focus on the first two digits (i.e. zero and one). Blue weights mean that this pixel's intensity contributes a lot for that class and red values mean that it contributes negatively.
Now imagine, how does a person draw a $0$? He draws a circular shape that's empty in between. That's exactly what the weights picked up on. In fact if someone draws the middle of the image, it counts negatively as a zero. So to recognize zeros you don't need some sophisticated filters and high-level features. You can just look at the drawn pixel locations and judge according to this.
Same thing for the $1$. It always has a straight vertical line in the middle of the image. All else counts negatively.
The rest of the digits are a bit more complicated, but with a little imagination you can see the $2$, the $3$, the $7$ and the $8$. The rest of the numbers are a bit more difficult, which is what actually keeps the logistic regression from reaching the high-90s.
Through this you can see that logistic regression has a very good chance of getting a lot of images right and that's why it scores so high.
The code to reproduce the above figure is a bit dated, but here you go:
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
# Load MNIST:
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create model
x = tf.placeholder(tf.float32, shape=(None, 784))
y = tf.placeholder(tf.float32, shape=(None, 10))
W = tf.Variable(tf.zeros((784,10)))
b = tf.Variable(tf.zeros((10)))
z = tf.matmul(x, W) + b
y_hat = tf.nn.softmax(z)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_hat), reduction_indices=[1]))
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) #
correct_pred = tf.equal(tf.argmax(y_hat, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Train model
batch_size = 64
with tf.Session() as sess:
loss_tr, acc_tr, loss_ts, acc_ts = [], [], [], []
sess.run(tf.global_variables_initializer())
for step in range(1, 1001):
x_batch, y_batch = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={x: x_batch, y: y_batch})
l_tr, a_tr = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch, y: y_batch})
l_ts, a_ts = sess.run([cross_entropy, accuracy], feed_dict={x: mnist.test.images, y: mnist.test.labels})
loss_tr.append(l_tr)
acc_tr.append(a_tr)
loss_ts.append(l_ts)
acc_ts.append(a_ts)
weights = sess.run(W)
print('Test Accuracy =', sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
# Plotting:
for i in range(10):
plt.subplot(2, 5, i+1)
weight = weights[:,i].reshape([28,28])
plt.title(i)
plt.imshow(weight, cmap='RdBu') # as noted by @Eric Duminil, cmap='gray' makes the numbers stand out more
frame1 = plt.gca()
frame1.axes.get_xaxis().set_visible(False)
frame1.axes.get_yaxis().set_visible(False) | How does a simple logistic regression model achieve a 92% classification accuracy on MNIST?
tl;dr Even though this is an image classification dataset, it remains a very easy task, for which one can easily find a direct mapping from inputs to predictions.
Answer:
This is a very interesting q |
2,657 | What is the difference between R functions prcomp and princomp? | The difference between them is nothing to do with the type of PCA they perform, just the method they use. As the help page for prcomp says:
The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by using eigen on the covariance matrix. This is generally the preferred method for numerical accuracy.
On the other hand, the princomp help page says:
The calculation is done using eigen on the correlation or covariance matrix, as determined by cor. This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use svd on x, as is done in prcomp."
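(As a quick empirical check of the two quotes, here is a minimal sketch on the built-in USArrests data; any numeric data frame would do. The two routes agree up to sign flips and, at most, a small scaling factor.)
data(USArrests)
p1 <- prcomp(USArrests, scale.=TRUE)     # SVD route
p2 <- princomp(USArrests, cor=TRUE)      # eigen route
round(p1$sdev, 6)                        # component standard deviations
round(unname(p2$sdev), 6)
round(p1$rotation[,1], 6)                # first loading vector (sign is arbitrary)
round(unclass(p2$loadings)[,1], 6)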
So, prcomp is preferred, although in practice you are unlikely to see much difference (for example, if you run the examples on the help pages you should get identical results). | What is the difference between R functions prcomp and princomp? | The difference between them is nothing to do with the type of PCA they perform, just the method they use. As the help page for prcomp says:
The calculation is done by a singular value decomposition o | What is the difference between R functions prcomp and princomp?
The difference between them is nothing to do with the type of PCA they perform, just the method they use. As the help page for prcomp says:
The calculation is done by a singular value decomposition of the (centered and possibly scaled) data matrix, not by using eigen on the covariance matrix. This is generally the preferred method for numerical accuracy.
On the other hand, the princomp help page says:
The calculation is done using eigen on the correlation or covariance matrix, as determined by cor. This is done for compatibility with the S-PLUS result. A preferred method of calculation is to use svd on x, as is done in prcomp."
So, prcomp is preferred, although in practice you are unlikely to see much difference (for example, if you run the examples on the help pages you should get identical results). | What is the difference between R functions prcomp and princomp?
The difference between them is nothing to do with the type of PCA they perform, just the method they use. As the help page for prcomp says:
The calculation is done by a singular value decomposition o |
2,658 | What is the difference between R functions prcomp and princomp? | Usually a multivariate analysis (computing correlations, extracting latents, etc.) is done on the data columns, which are the features or questions, while the sample units, the rows, are the respondents. Hence this is called R-way analysis. Sometimes, though, you may want to do a multivariate analysis of the respondents, while the questions are treated as sample units. That would be Q-way analysis.
There is no formal difference between the two, so you can manage both with the same function, only transpose your data. There are differences, however, in issues of standardization and results interpretation.
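(Purely for concreteness, a minimal R sketch of the transposition idea; the choice of prcomp here is arbitrary and nothing hinges on it.)
set.seed(1)
X <- matrix(rnorm(20*5), nrow=20)   # 20 respondents (rows) x 5 questions (columns)
pca_R <- prcomp(X)                  # R-way: latent structure among the questions
pca_Q <- prcomp(t(X))               # Q-way: the respondents are treated as the variables
nrow(pca_R$rotation)                # loadings indexed by question
nrow(pca_Q$rotation)                # loadings indexed by respondent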
This is a general reply: I don't touch specifically the R functions prcomp and princomp because I'm not an R user and am not aware of possible differences between them. | What is the difference between R functions prcomp and princomp? | Usually a multivariate analysis (computing correlations, extracting latents, etc.) is done of data columns which are features or questions, - while sample units, the rows, are respondents. Hence this | What is the difference between R functions prcomp and princomp?
Usually a multivariate analysis (computing correlations, extracting latents, etc.) is done on the data columns, which are the features or questions, while the sample units, the rows, are the respondents. Hence this is called R-way analysis. Sometimes, though, you may want to do a multivariate analysis of the respondents, while the questions are treated as sample units. That would be Q-way analysis.
There is no formal difference between the two, so you can manage both with the same function, only transpose your data. There are differences, however, in issues of standardization and results interpretation.
This is a general reply: I don't touch specifically the R functions prcomp and princomp because I'm not an R user and am not aware of possible differences between them. | What is the difference between R functions prcomp and princomp?
Usually a multivariate analysis (computing correlations, extracting latents, etc.) is done of data columns which are features or questions, - while sample units, the rows, are respondents. Hence this |
2,659 | What is the difference between R functions prcomp and princomp? | A useful and specific documentation from Gregory B. Anderson, titled PRINCIPAL COMPONENT ANALYSIS IN R AN EXAMINATION OF THE DIFFERENT FUNCTIONS AND METHODS TO PERFORM PCA has given more information on this topic. Updated link (7 Jan 2021).
The following two paragraphs were extracted from the introduction:
In R there are two general methods to perform PCA without any missing values: (1) spectral decomposition (R-mode [also known as eigendecomposition]) and (2) singular value decomposition (Q-mode; R Development Core Team 2011). Both of these methods can be performed longhand using the functions eigen (R-mode) and svd (Q-mode), respectively, or can be performed using the many PCA functions found in the stats package and other additional available packages. The spectral decomposition method of analysis examines the covariances and correlations between variables, whereas the singular value decomposition method looks at the covariances and correlations among the samples. While both methods can easily be performed within R, the singular value decomposition method (i.e., Q-mode) is the preferred analysis for numerical accuracy (R Development Core Team 2011).
This document focuses on comparing the different methods to perform PCA in R and provides appropriate visualization techniques to examine normality within the statistical package. More specifically this document compares six different functions either created for or can be used for PCA: eigen, princomp, svd, prcomp, PCA, and pca. Throughout the document the essential R code to perform these functions is embedded within the text using the font Courier New and is color coded using the technique provided in Tinn-R (https://sourceforge.net/projects/tinn-r). Additionally, the results from the functions are compared using simulation procedure to see if the different methods differ in the eigenvalues, eigenvectors, and scores provided from the output. | What is the difference between R functions prcomp and princomp? | A useful and specific documentation from Gregory B. Anderson, titled PRINCIPAL COMPONENT ANALYSIS IN R AN EXAMINATION OF THE DIFFERENT FUNCTIONS AND METHODS TO PERFORM PCA has given more information o | What is the difference between R functions prcomp and princomp?
A useful and specific documentation from Gregory B. Anderson, titled PRINCIPAL COMPONENT ANALYSIS IN R AN EXAMINATION OF THE DIFFERENT FUNCTIONS AND METHODS TO PERFORM PCA has given more information on this topic. Updated link (7 Jan 2021).
The following two paragraphs were extracted from the introduction:
In R there are two general methods to perform PCA without any missing values: (1) spectral decomposition (R-mode [also known as eigendecomposition]) and (2) singular value decomposition (Q-mode; R Development Core Team 2011). Both of these methods can be performed longhand using the functions eigen (R-mode) and svd (Q-mode), respectively, or can be performed using the many PCA functions found in the stats package and other additional available packages. The spectral decomposition method of analysis examines the covariances and correlations between variables, whereas the singular value decomposition method looks at the covariances and correlations among the samples. While both methods can easily be performed within R, the singular value decomposition method (i.e., Q-mode) is the preferred analysis for numerical accuracy (R Development Core Team 2011).
This document focuses on comparing the different methods to perform PCA in R and provides appropriate visualization techniques to examine normality within the statistical package. More specifically this document compares six different functions either created for or can be used for PCA: eigen, princomp, svd, prcomp, PCA, and pca. Throughout the document the essential R code to perform these functions is embedded within the text using the font Courier New and is color coded using the technique provided in Tinn-R (https://sourceforge.net/projects/tinn-r). Additionally, the results from the functions are compared using simulation procedure to see if the different methods differ in the eigenvalues, eigenvectors, and scores provided from the output. | What is the difference between R functions prcomp and princomp?
A useful and specific documentation from Gregory B. Anderson, titled PRINCIPAL COMPONENT ANALYSIS IN R AN EXAMINATION OF THE DIFFERENT FUNCTIONS AND METHODS TO PERFORM PCA has given more information o |
2,660 | What is the difference between R functions prcomp and princomp? | They differ when both use the covariance matrix. When scaling (normalizing) the training data, prcomp uses $n-1$ as the denominator but princomp uses $n$ as its denominator. The difference between these two denominators is explained in this tutorial on principal component analysis.
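(If you don't have job_perf.txt at hand, the same effect can be reproduced on any built-in data set; a minimal sketch, whose ratio sqrt(n/(n-1)) matches the output shown below:)
data(USArrests)
n <- nrow(USArrests)
s_prcomp   <- prcomp(USArrests, scale.=TRUE)$scale    # sd computed with the n-1 divisor
s_princomp <- princomp(USArrests, cor=TRUE)$scale     # sd computed with the n divisor
s_prcomp / s_princomp                                 # constant across variables ...
sqrt(n / (n - 1))                                     # ... and equal to this factor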
Below are my test results:
> job<-read.table("./job_perf.txt", header=TRUE, sep="")
> pc.cr<-prcomp(job, scale=TRUE, cor=TRUE, scores=TRUE)
> pc.cr1<-princomp(job, scale=TRUE, cor=TRUE, scores=TRUE)
> pc.cr$scale
commun probl_solv logical learn physical appearance
5.039841 1.689540 2.000000 4.655398 3.770700 4.526689
> pc.cr1$scale
commun probl_solv logical learn physical appearance
4.805300 1.610913 1.906925 4.438747 3.595222 4.316028
Test data:
commun probl_solv logical learn physical appearance
12 52 20 44 48 16
12 57 25 45 50 16
12 54 21 45 50 16
13 52 21 46 51 17
14 54 24 46 51 17
22 52 25 54 58 26
22 56 26 55 58 27
17 52 21 45 52 17
15 53 24 45 53 18
23 54 23 53 57 24
25 54 23 55 58 25 | What is the difference between R functions prcomp and princomp? | They are different when both using covariance matrix. When scaling (normalizing) the training data, prcomp uses $n-1$ as denominator but princomp uses $n$ as its denominator. Difference of these two d | What is the difference between R functions prcomp and princomp?
They differ when both use the covariance matrix. When scaling (normalizing) the training data, prcomp uses $n-1$ as the denominator but princomp uses $n$ as its denominator. The difference between these two denominators is explained in this tutorial on principal component analysis.
Below are my test results:
> job<-read.table("./job_perf.txt", header=TRUE, sep="")
> pc.cr<-prcomp(job, scale=TRUE, cor=TRUE, scores=TRUE)
> pc.cr1<-princomp(job, scale=TRUE, cor=TRUE, scores=TRUE)
> pc.cr$scale
commun probl_solv logical learn physical appearance
5.039841 1.689540 2.000000 4.655398 3.770700 4.526689
> pc.cr1$scale
commun probl_solv logical learn physical appearance
4.805300 1.610913 1.906925 4.438747 3.595222 4.316028
Test data:
commun probl_solv logical learn physical appearance
12 52 20 44 48 16
12 57 25 45 50 16
12 54 21 45 50 16
13 52 21 46 51 17
14 54 24 46 51 17
22 52 25 54 58 26
22 56 26 55 58 27
17 52 21 45 52 17
15 53 24 45 53 18
23 54 23 53 57 24
25 54 23 55 58 25 | What is the difference between R functions prcomp and princomp?
They are different when both using covariance matrix. When scaling (normalizing) the training data, prcomp uses $n-1$ as denominator but princomp uses $n$ as its denominator. Difference of these two d |
2,661 | Variable selection for predictive modeling really needed in 2016? | There have been rumors for years that Google uses all available features in building its predictive algorithms. To date however, no disclaimers, explanations or white papers have emerged that clarify and/or dispute this rumor. Not even their published patents help in the understanding. As a result, no one external to Google knows what they are doing, to the best of my knowledge.
/* Update in Sept 2019, a Google Tensorflow evangelist went on record in a presentation in stating that Google engineers regularly evaluate over 5 billion parameters for the current version of PageRank. */
As the OP notes, one of the biggest problems in predictive modeling is the conflation between classic hypothesis testing and careful model specification vs pure data mining. The classically trained can get quite dogmatic about the need for "rigor" in model design and development. The fact is that when confronted with massive numbers of candidate predictors and multiple possible targets or dependent variables, the classic framework neither works, holds nor provides useful guidance. Numerous recent papers delineate this dilemma from Chattopadhyay and Lipson's brilliant paper Data Smashing: Uncovering Lurking Order in Data http://rsif.royalsocietypublishing.org/content/royinterface/11/101/20140826.full.pdf
The key bottleneck is that most data comparison algorithms today rely
on a human expert to specify what ‘features’ of the data are relevant
for comparison. Here, we propose a new principle for estimating the
similarity between the sources of arbitrary data streams, using
neither domain knowledge nor learning.
To last year's AER paper on Prediction Policy Problems by Kleinberg, et al.https://www.aeaweb.org/articles?id=10.1257/aer.p20151023 which makes the case for data mining and prediction as useful tools in economic policy making, citing instances where "causal inference is not central, or even necessary."
The fact is that the bigger, $64,000 question is the broad shift in thinking and challenges to the classic hypothesis-testing framework implicit in, e.g., this Edge.org symposium on "obsolete" scientific thinking https://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement as well as this recent article by Eric Beinhocker on the "new economics" which presents some radical proposals for integrating widely different disciplines such as behavioral economics, complexity theory, predictive model development, network and portfolio theory as a platform for policy implementation and adoption https://evonomics.com/the-deep-and-profound-changes-in-economics-thinking/ Needless to say, these issues go far beyond merely economic concerns and suggest that we are undergoing a fundamental shift in scientific paradigms. The shifting views are as fundamental as the distinctions between reductionistic, Occam's Razor like model-building vs Epicurus' expansive Principle of Plenitude or multiple explanations which roughly states that if several findings explain something, retain them all ... https://en.wikipedia.org/wiki/Principle_of_plenitude
Of course, guys like Beinhocker are totally unencumbered with practical, in the trenches concerns regarding applied, statistical solutions to this evolving paradigm. Wrt the nitty-gritty questions of ultra-high dimensional variable selection, the OP is relatively nonspecific regarding viable approaches to model building that might leverage, e.g., Lasso, LAR, stepwise algorithms or "elephant models” that use all of the available information. The reality is that, even with AWS or a supercomputer, you can't use all of the available information at the same time – there simply isn’t enough RAM to load it all in. What does this mean? Workarounds have been proposed, e.g., the NSF's Discovery in Complex or Massive Datasets: Common Statistical Themes to "divide and conquer" algorithms for massive data mining, e.g., Wang, et al's paper, A Survey of Statistical Methods and Computing for Big Data http://arxiv.org/pdf/1502.07989.pdf as well as Leskovec, et al's book Mining of Massive Datasets http://www.amazon.com/Mining-Massive-Datasets-Jure-Leskovec/dp/1107077230/ref=sr_1_1?ie=UTF8&qid=1464528800&sr=8-1&keywords=Mining+of+Massive+Datasets
There are now literally hundreds, if not thousands of papers that deal with various aspects of these challenges, all proposing widely differing analytic engines as their core from the “divide and conquer” algorithms; unsupervised, "deep learning" models; random matrix theory applied to massive covariance construction; Bayesian tensor models to classic, supervised logistic regression, and more. Fifteen years or so ago, the debate largely focused on questions concerning the relative merits of hierarchical Bayesian solutions vs frequentist finite mixture models. In a paper addressing these issues, Ainslie, et al. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.788&rep=rep1&type=pdf came to the conclusion that the differing theoretical approaches, in practice, produced largely equivalent results with the exception of problems involving sparse and/or high dimensional data where HB models had the advantage. Today with the advent of D&C workarounds, any arbitrage HB models may have historically enjoyed are being eliminated.
The basic logic of these D&C workarounds is, by and large, an extension of Breiman's famous random forest technique, which relied on bootstrapped resampling of observations and features. Breiman did his work in the late 90s on a single CPU, when massive data meant a few dozen gigs and a couple of thousand features. On today's massively parallel, multi-core platforms, it is possible to run algorithms analyzing terabytes of data containing tens of millions of features, building millions of "RF" mini-models in a few hours.
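(A toy sketch of that split-and-conquer logic, with arbitrary sizes chosen purely for illustration; real implementations, such as the Chen and Xie approach discussed next, are far more careful about weighting and inference.)
set.seed(7)
n <- 1e5; p <- 10
X <- matrix(rnorm(n*p), n, p)
y <- drop(X %*% rnorm(p)) + rnorm(n)
chunks <- split(seq_len(n), cut(seq_len(n), 20))            # 20 disjoint blocks
fits <- lapply(chunks, function(idx) coef(lm(y[idx] ~ X[idx, ])))
beta_pooled <- Reduce(`+`, fits) / length(fits)             # naive average of the 20 mini-fits
max(abs(beta_pooled - coef(lm(y ~ X))))                     # close to the full-data fit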
There are any number of important questions coming out of all of this. One has to do with a concern over a loss of precision due to the approximating nature of these workarounds. This issue has been addressed by Chen and Xie in their paper, A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf where they conclude that the approximations are practically indistinguishable from the "full information" models.
A second concern which, to the best of my knowledge hasn't been adequately addressed by the literature, has to do with what is done with the results (i.e., the "parameters") from potentially millions of predictive mini-models once the workarounds have been rolled up and summarized. In other words, how does one execute something as simple as "scoring" new data with these results? Are the mini-model coefficients to be saved and stored or does one simply rerun the d&c algorithm on new data?
In his book, Numbers Rule Your World, Kaiser Fung describes the dilemma Netflix faced when presented with an ensemble of only 104 models handed over by the winners of their competition. The winners had, indeed, minimized the MSE vs all other competitors but this translated into only a several decimal place improvement in accuracy on the 5-point, Likert-type rating scale used by their movie recommender system. In addition, the IT maintenance required for this ensemble of models cost much more than any savings seen from the "improvement" in model accuracy.
Then there's the whole question of whether "optimization" is even possible with information of this magnitude. For instance, Emmanuel Derman, the physicist and financial engineer, in his book My Life as a Quant suggests that optimization is an unsustainable myth, at least in financial engineering.
Finally, important questions concerning relative feature importance with massive numbers of features have yet to be addressed.
There are no easy answers wrt questions concerning the need for variable selection and the new challenges opened up by the current, Epicurean workarounds remain to be resolved. The bottom line is that we are all data scientists now.
**** EDIT ***
References
Chattopadhyay I, Lipson H. 2014 Data smashing: uncovering lurking order in data. J. R. Soc. Interface 11: 20140826.
http://dx.doi.org/10.1098/rsif.2014.0826
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan and Ziad Obermeyer. 2015. "Prediction Policy Problems." American Economic Review, 105(5): 491-95.
DOI: 10.1257/aer.p20151023
Edge.org, 2014 Annual Question : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?
https://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement
Eric Beinhocker, How the Profound Changes in Economics Make Left Versus Right Debates Irrelevant, 2016, Evonomics.org.
https://evonomics.com/the-deep-and-profound-changes-in-economics-thinking/
Epicurus principle of multiple explanations: keep all models. Wikipedia
https://www.coursehero.com/file/p6tt7ej/Epicurus-Principle-of-Multiple-Explanations-Keep-all-models-that-are-consistent/
NSF, Discovery in Complex or Massive Datasets: Common Statistical Themes, A Workshop funded by the National Science Foundation, October 16-17, 2007
https://www.nsf.gov/mps/dms/documents/DiscoveryInComplexOrMassiveDatasets.pdf
Statistical Methods and Computing for Big Data, Working Paper by Chun Wang, Ming-Hui Chen, Elizabeth Schifano, Jing Wu, and Jun Yan, October 29, 2015
http://arxiv.org/pdf/1502.07989.pdf
Jure Leskovec, Anand Rajaraman, Jeffrey David Ullman, Mining of Massive Datasets, Cambridge University Press; 2 edition (December 29, 2014)
ISBN: 978-1107077232
Large Sample Covariance Matrices and High-Dimensional Data Analysis (Cambridge Series in Statistical and Probabilistic Mathematics), by Jianfeng Yao, Shurong Zheng, Zhidong Bai, Cambridge University Press; 1 edition (March 30, 2015)
ISBN: 978-1107065178
RICK L. ANDREWS, ANDREW AINSLIE, and IMRAN S. CURRIM, An Empirical Comparison of Logit Choice Models with Discrete Versus Continuous Representations of Heterogeneity, Journal of Marketing Research, 479 Vol. XXXIX (November 2002), 479–487
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.788&rep=rep1&type=pdf
A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data, Xueying Chen and Minge Xie, DIMACS Technical Report 2012-01, January 2012
http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf
Kaiser Fung, Numbers Rule Your World: The Hidden Influence of Probabilities and Statistics on Everything You Do, McGraw-Hill Education; 1 edition (February 15, 2010)
ISBN: 978-0071626538
Emmanuel Derman, My Life as a Quant: Reflections on Physics and Finance, Wiley; 1 edition (January 11, 2016)
ISBN: 978-0470192733
* Update in November 2017 *
Nathan Kutz' 2013 book, Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data is a mathematical and PDE-focused excursion into variable selection as well as dimension reduction methods and tools. An excellent, 1 hour introduction to his thinking can be found in this June 2017 Youtube video Data Driven Discovery of Dynamical Systems and PDEs . In it, he makes references to the latest developments in this field. https://www.youtube.com/watch?feature=youtu.be&v=Oifg9avnsH4&app=desktop | Variable selection for predictive modeling really needed in 2016? | There have been rumors for years that Google uses all available features in building its predictive algorithms. To date however, no disclaimers, explanations or white papers have emerged that clarify | Variable selection for predictive modeling really needed in 2016?
There have been rumors for years that Google uses all available features in building its predictive algorithms. To date however, no disclaimers, explanations or white papers have emerged that clarify and/or dispute this rumor. Not even their published patents help in the understanding. As a result, no one external to Google knows what they are doing, to the best of my knowledge.
/* Update in Sept 2019, a Google Tensorflow evangelist went on record in a presentation in stating that Google engineers regularly evaluate over 5 billion parameters for the current version of PageRank. */
As the OP notes, one of the biggest problems in predictive modeling is the conflation between classic hypothesis testing and careful model specification vs pure data mining. The classically trained can get quite dogmatic about the need for "rigor" in model design and development. The fact is that when confronted with massive numbers of candidate predictors and multiple possible targets or dependent variables, the classic framework neither works, holds nor provides useful guidance. Numerous recent papers delineate this dilemma from Chattopadhyay and Lipson's brilliant paper Data Smashing: Uncovering Lurking Order in Data http://rsif.royalsocietypublishing.org/content/royinterface/11/101/20140826.full.pdf
The key bottleneck is that most data comparison algorithms today rely
on a human expert to specify what ‘features’ of the data are relevant
for comparison. Here, we propose a new principle for estimating the
similarity between the sources of arbitrary data streams, using
neither domain knowledge nor learning.
To last year's AER paper on Prediction Policy Problems by Kleinberg, et al.https://www.aeaweb.org/articles?id=10.1257/aer.p20151023 which makes the case for data mining and prediction as useful tools in economic policy making, citing instances where "causal inference is not central, or even necessary."
The fact is that the bigger, $64,000 question is the broad shift in thinking and challenges to the classic hypothesis-testing framework implicit in, e.g., this Edge.org symposium on "obsolete" scientific thinking https://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement as well as this recent article by Eric Beinhocker on the "new economics" which presents some radical proposals for integrating widely different disciplines such as behavioral economics, complexity theory, predictive model development, network and portfolio theory as a platform for policy implementation and adoption https://evonomics.com/the-deep-and-profound-changes-in-economics-thinking/ Needless to say, these issues go far beyond merely economic concerns and suggest that we are undergoing a fundamental shift in scientific paradigms. The shifting views are as fundamental as the distinctions between reductionistic, Occam's Razor like model-building vs Epicurus' expansive Principle of Plenitude or multiple explanations which roughly states that if several findings explain something, retain them all ... https://en.wikipedia.org/wiki/Principle_of_plenitude
Of course, guys like Beinhocker are totally unencumbered with practical, in the trenches concerns regarding applied, statistical solutions to this evolving paradigm. Wrt the nitty-gritty questions of ultra-high dimensional variable selection, the OP is relatively nonspecific regarding viable approaches to model building that might leverage, e.g., Lasso, LAR, stepwise algorithms or "elephant models” that use all of the available information. The reality is that, even with AWS or a supercomputer, you can't use all of the available information at the same time – there simply isn’t enough RAM to load it all in. What does this mean? Workarounds have been proposed, e.g., the NSF's Discovery in Complex or Massive Datasets: Common Statistical Themes to "divide and conquer" algorithms for massive data mining, e.g., Wang, et al's paper, A Survey of Statistical Methods and Computing for Big Data http://arxiv.org/pdf/1502.07989.pdf as well as Leskovec, et al's book Mining of Massive Datasets http://www.amazon.com/Mining-Massive-Datasets-Jure-Leskovec/dp/1107077230/ref=sr_1_1?ie=UTF8&qid=1464528800&sr=8-1&keywords=Mining+of+Massive+Datasets
There are now literally hundreds, if not thousands of papers that deal with various aspects of these challenges, all proposing widely differing analytic engines as their core from the “divide and conquer” algorithms; unsupervised, "deep learning" models; random matrix theory applied to massive covariance construction; Bayesian tensor models to classic, supervised logistic regression, and more. Fifteen years or so ago, the debate largely focused on questions concerning the relative merits of hierarchical Bayesian solutions vs frequentist finite mixture models. In a paper addressing these issues, Ainslie, et al. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.788&rep=rep1&type=pdf came to the conclusion that the differing theoretical approaches, in practice, produced largely equivalent results with the exception of problems involving sparse and/or high dimensional data where HB models had the advantage. Today with the advent of D&C workarounds, any arbitrage HB models may have historically enjoyed are being eliminated.
The basic logic of these D&C workarounds is, by and large, an extension of Breiman's famous random forest technique, which relied on bootstrapped resampling of observations and features. Breiman did his work in the late 90s on a single CPU, when massive data meant a few dozen gigs and a couple of thousand features. On today's massively parallel, multi-core platforms, it is possible to run algorithms analyzing terabytes of data containing tens of millions of features, building millions of "RF" mini-models in a few hours.
There are any number of important questions coming out of all of this. One has to do with a concern over a loss of precision due to the approximating nature of these workarounds. This issue has been addressed by Chen and Xie in their paper, A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf where they conclude that the approximations are practically indistinguishable from the "full information" models.
A second concern which, to the best of my knowledge hasn't been adequately addressed by the literature, has to do with what is done with the results (i.e., the "parameters") from potentially millions of predictive mini-models once the workarounds have been rolled up and summarized. In other words, how does one execute something as simple as "scoring" new data with these results? Are the mini-model coefficients to be saved and stored or does one simply rerun the d&c algorithm on new data?
In his book, Numbers Rule Your World, Kaiser Fung describes the dilemma Netflix faced when presented with an ensemble of only 104 models handed over by the winners of their competition. The winners had, indeed, minimized the MSE vs all other competitors but this translated into only a several decimal place improvement in accuracy on the 5-point, Likert-type rating scale used by their movie recommender system. In addition, the IT maintenance required for this ensemble of models cost much more than any savings seen from the "improvement" in model accuracy.
Then there's the whole question of whether "optimization" is even possible with information of this magnitude. For instance, Emmanuel Derman, the physicist and financial engineer, in his book My Life as a Quant suggests that optimization is an unsustainable myth, at least in financial engineering.
Finally, important questions concerning relative feature importance with massive numbers of features have yet to be addressed.
There are no easy answers wrt questions concerning the need for variable selection and the new challenges opened up by the current, Epicurean workarounds remain to be resolved. The bottom line is that we are all data scientists now.
**** EDIT ***
References
Chattopadhyay I, Lipson H. 2014 Data smashing: uncovering lurking order in data. J. R. Soc. Interface 11: 20140826.
http://dx.doi.org/10.1098/rsif.2014.0826
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan and Ziad Obermeyer. 2015. "Prediction Policy Problems." American Economic Review, 105(5): 491-95.
DOI: 10.1257/aer.p20151023
Edge.org, 2014 Annual Question : WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?
https://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement
Eric Beinhocker, How the Profound Changes in Economics Make Left Versus Right Debates Irrelevant, 2016, Evonomics.org.
https://evonomics.com/the-deep-and-profound-changes-in-economics-thinking/
Epicurus principle of multiple explanations: keep all models. Wikipedia
https://www.coursehero.com/file/p6tt7ej/Epicurus-Principle-of-Multiple-Explanations-Keep-all-models-that-are-consistent/
NSF, Discovery in Complex or Massive Datasets: Common Statistical Themes, A Workshop funded by the National Science Foundation, October 16-17, 2007
https://www.nsf.gov/mps/dms/documents/DiscoveryInComplexOrMassiveDatasets.pdf
Statistical Methods and Computing for Big Data, Working Paper by Chun Wang, Ming-Hui Chen, Elizabeth Schifano, Jing Wu, and Jun Yan, October 29, 2015
http://arxiv.org/pdf/1502.07989.pdf
Jure Leskovec, Anand Rajaraman, Jeffrey David Ullman, Mining of Massive Datasets, Cambridge University Press; 2 edition (December 29, 2014)
ISBN: 978-1107077232
Large Sample Covariance Matrices and High-Dimensional Data Analysis (Cambridge Series in Statistical and Probabilistic Mathematics), by Jianfeng Yao, Shurong Zheng, Zhidong Bai, Cambridge University Press; 1 edition (March 30, 2015)
ISBN: 978-1107065178
RICK L. ANDREWS, ANDREW AINSLIE, and IMRAN S. CURRIM, An Empirical Comparison of Logit Choice Models with Discrete Versus Continuous Representations of Heterogeneity, Journal of Marketing Research, 479 Vol. XXXIX (November 2002), 479–487
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.197.788&rep=rep1&type=pdf
A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data, Xueying Chen and Minge Xie, DIMACS Technical Report 2012-01, January 2012
http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf
Kaiser Fung, Numbers Rule Your World: The Hidden Influence of Probabilities and Statistics on Everything You Do, McGraw-Hill Education; 1 edition (February 15, 2010)
ISBN: 978-0071626538
Emmanuel Derman, My Life as a Quant: Reflections on Physics and Finance, Wiley; 1 edition (January 11, 2016)
ISBN: 978-0470192733
* Update in November 2017 *
Nathan Kutz' 2013 book, Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data is a mathematical and PDE-focused excursion into variable selection as well as dimension reduction methods and tools. An excellent, 1 hour introduction to his thinking can be found in this June 2017 Youtube video Data Driven Discovery of Dynamical Systems and PDEs . In it, he makes references to the latest developments in this field. https://www.youtube.com/watch?feature=youtu.be&v=Oifg9avnsH4&app=desktop | Variable selection for predictive modeling really needed in 2016?
There have been rumors for years that Google uses all available features in building its predictive algorithms. To date however, no disclaimers, explanations or white papers have emerged that clarify |
2,662 | Variable selection for predictive modeling really needed in 2016? | In terms of prediction, you probably need to think of the question of how quickly the model learns the important features. Even thinking of OLS, this will give you something like model selection given enough data. But we know that it doesn't converge to this solution quickly enough - so we search for something better.
Most methods are making an assumption about the kind of betas/coefficients that are going to be encountered (like a prior distribution in a bayesian model). They work best when these assumptions hold. For example, ridge/lasso regression assumes most betas are on the same scale with most near zero. They won't work as well for the "needles in a haystack" regressions where most betas are zero, and some betas are very large (i.e. scales are very different). Feature selection may work better here - lasso can get stuck in between shrinking noise and leaving signal untouched. Feature selection is more fickle - an effect is either "signal" or "noise".
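(A small simulation sketch of that point, using the glmnet package; the dimensions and signal strengths below are arbitrary illustration choices, and the message is simply that the method whose implicit assumption matches the true coefficient profile wins.)
library(glmnet)
set.seed(42)
n <- 200; p <- 100
X <- matrix(rnorm(n*p), n, p)
beta <- c(rep(5, 3), rep(0, p - 3))      # three big "needles", the rest zero
y <- drop(X %*% beta) + rnorm(n)
ridge <- cv.glmnet(X, y, alpha=0)        # shrinks every coefficient a little
lasso <- cv.glmnet(X, y, alpha=1)        # shrinks and selects
min(ridge$cvm)                           # cross-validated MSE, ridge
min(lasso$cvm)                           # usually far lower here, since the truth is sparse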
In terms of deciding - you need to have some idea of what sort of predictor variables you have. Do you have a few really good ones? Or all variables are weak? This will drive the profile of betas you will have. And which penalty/selection methods you use (horses for courses and all that).
Feature selection is also not bad but some of the older approximations due to computational restrictions are no longer good (stepwise, forward). Model averaging using feature selection (all 1 var models, 2 var models, etc weighted by their performance) will do a pretty good job at prediction. But these are essentially penalising the betas through the weight given to models with that variable excluded - just not directly - and not in a convex optimisation problem sort of way. | Variable selection for predictive modeling really needed in 2016? | In terms of prediction, you probably need to think of the question of how quickly the model learns the important features. Even thinking of OLS, this will give you something like model selection give | Variable selection for predictive modeling really needed in 2016?
In terms of prediction, you probably need to think of the question of how quickly the model learns the important features. Even thinking of OLS, this will give you something like model selection given enough data. But we know that it doesn't converge to this solution quickly enough - so we search for something better.
Most methods are making an assumption about the kind of betas/coefficients that are going to be encountered (like a prior distribution in a bayesian model). They work best when these assumptions hold. For example, ridge/lasso regression assumes most betas are on the same scale with most near zero. They won't work as well for the "needles in a haystack" regressions where most betas are zero, and some betas are very large (i.e. scales are very different). Feature selection may work better here - lasso can get stuck in between shrinking noise and leaving signal untouched. Feature selection is more fickle - an effect is either "signal" or "noise".
In terms of deciding - you need to have some idea of what sort of predictor variables you have. Do you have a few really good ones? Or all variables are weak? This will drive the profile of betas you will have. And which penalty/selection methods you use (horses for courses and all that).
Feature selection is also not bad but some of the older approximations due to computational restrictions are no longer good (stepwise, forward). Model averaging using feature selection (all 1 var models, 2 var models, etc weighted by their performance) will do a pretty good job at prediction. But these are essentially penalising the betas through the weight given to models with that variable excluded - just not directly - and not in a convex optimisation problem sort of way. | Variable selection for predictive modeling really needed in 2016?
In terms of prediction, you probably need to think of the question of how quickly the model learns the important features. Even thinking of OLS, this will give you something like model selection give |
2,663 | Variable selection for predictive modeling really needed in 2016? | I give you the perspective of industry.
Industries don't like to spend money on sensors and monitoring systems which they don't know how much they will benefit from.
For instance (I don't want to name names), imagine a component with 10 sensors gathering data every minute. The asset owner turns to me and asks: how well can you predict the behavior of my component with the data from these 10 sensors? Then they perform a cost-benefit analysis.
Then they have the same component with 20 sensors, and they ask me again: how well can you predict the behavior of my component with the data from these 20 sensors? They perform another cost-benefit analysis.
At each of these cases, they compare the benefit with the investment cost due to sensors installations. (This is not just adding a $10 sensor to a component. A lot of factors play a role). Here is where a variable selection analysis can be useful. | Variable selection for predictive modeling really needed in 2016? | I give you the perspective of industry.
Industries don't like to spend money on sensors and monitoring systems which they don't know how much they will benefit from.
For instance, I don't want to na | Variable selection for predictive modeling really needed in 2016?
I give you the perspective of industry.
Industries don't like to spend money on sensors and monitoring systems which they don't know how much they will benefit from.
For instance (I don't want to name names), imagine a component with 10 sensors gathering data every minute. The asset owner turns to me and asks: how well can you predict the behavior of my component with the data from these 10 sensors? Then they perform a cost-benefit analysis.
Then they have the same component with 20 sensors, and they ask me again: how well can you predict the behavior of my component with the data from these 20 sensors? They perform another cost-benefit analysis.
At each of these cases, they compare the benefit with the investment cost due to sensors installations. (This is not just adding a $10 sensor to a component. A lot of factors play a role). Here is where a variable selection analysis can be useful. | Variable selection for predictive modeling really needed in 2016?
I give you the perspective of industry.
Industries don't like to spend money on sensors and monitoring systems which they don't know how much they will benefit from.
For instance, I don't want to na |
2,664 | Variable selection for predictive modeling really needed in 2016? | As part of an algorithm for learning a purely predictive model, variable selection is not necessarily bad from a performance viewpoint nor is it automatically dangerous. However, there are some issues that one should be aware of.
To make the question a little more concrete, let's consider the linear regression problem with
$$E(Y_i \mid X_i) = X_i^T \beta$$
for $i = 1, \ldots, N$, and $X_i$ and $\beta$ being $p$-dimensional vectors of variables and parameters, respectively. The objective is to find a good approximation of the function
$$x \mapsto E(Y \mid X = x) = X^T \beta,$$
which is the prediction of $Y$ given $X = x$. This can be achieved by estimating $\beta$ using combinations of variable selection and minimisation of a loss function with or without penalisation. Model averaging or Bayesian methods may also be used, but let's focus on single model predictions.
Stepwise selection algorithms like forward and backward variable selection can be seen as approximate attempts to solve a best subset selection problem, which is computationally hard (so hard that improvements in computational power matter little). The interest is in finding for each $k = 1, \ldots, \min(N, p)$ the best (or at least a good) model with $k$ variables. Subsequently, we may optimise over $k$.
The danger with such a variable selection procedure is that many standard distributional results are invalid conditionally on the variable selection. This holds for standard tests and confidence intervals, and is one of the problems that Harrell [2] is warning about. Breiman also warned about model selection based on e.g. Mallows' $C_p$ in The Little Bootstrap .... Mallows' $C_p$, or AIC for that matter, do not account for the model selection, and they will give overly optimistic prediction errors.
However, cross-validation can be used for estimating the prediction error and for selecting $k$, and variable selection can achieve a good balance between bias and variance. This is particularly true if $\beta$ has a few large coordinates with the rest close to zero $-$ as @probabilityislogic mentions.
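(As a concrete, if simplistic, illustration of choosing $k$ by held-out error: a minimal sketch with the leaps package's forward selection on simulated data. A real analysis would use proper cross-validation rather than one split, and every size below is an arbitrary choice.)
library(leaps)
set.seed(1)
n <- 200; p <- 20
X <- matrix(rnorm(n*p), n, p); colnames(X) <- paste0("x", 1:p)
y <- drop(X %*% c(3, 2, 1.5, rep(0, p - 3))) + rnorm(n)
dat <- data.frame(y = y, X)
train <- sample(n, n/2)
fit <- regsubsets(y ~ ., data = dat[train, ], nvmax = p, method = "forward")
Xval <- model.matrix(y ~ ., data = dat[-train, ])
val_err <- sapply(1:p, function(k) {
  cf <- coef(fit, id = k)                               # best k-variable model found
  mean((dat$y[-train] - Xval[, names(cf)] %*% cf)^2)    # held-out squared error
})
which.min(val_err)                                      # chosen k; close to 3 in this setup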
Shrinkage methods such as ridge regression and lasso can achieve a good tradeoff between bias and variance without explicit variable selection. However, as the OP mentions, lasso does implicit variable selection. It's not really the model but rather the method for fitting the model that does variable selection. From that perspective, variable selection (implicit or explicit) is simply part of the method for fitting the model to data, and it should be regarded as such.
Algorithms for computing the lasso estimator can benefit from variable selection (or screening). In Statistical Learning with Sparsity: The Lasso and Generalizations, Section 5.10, it is described how screening, as implemented in glmnet, is useful. It can lead to substantially faster computation of the lasso estimator.
One personal experience is from an example where variable selection made it possible to fit a more complicated model (a generalised additive model) using the selected variables. Cross-validation results indicated that this model was superior to a number of alternatives $-$ though not to a random forest. If gamsel had been around $-$ which integrates generalised additive models with variable selection $-$ I might have considered trying it out as well.
Edit: Since I wrote this answer there is a paper out on the particular application I had in mind. R-code for reproducing the results in the paper is available.
In summary I will say that variable selection (in one form or another) is, and will remain, useful $-$ even for purely predictive purposes $-$ as a way to control the bias-variance tradeoff. If not for other reasons, then at least because more complicated models may not be able to handle very large numbers of variables out-of-the-box. However, as time goes by we will naturally see developments like gamsel that integrate variable selection into the estimation methodology.
It is, of course, always essential that we regard variable selection as part of the estimation method. The danger is to believe that variable selection performs like an oracle and identifies the correct set of variables. If we believe that and proceed as if variables were not selected based on the data, then we are at risk of making errors. | Variable selection for predictive modeling really needed in 2016? | As part of an algorithm for learning a purely predictive model, variable selection is not necessarily bad from a performance viewpoint nor is it automatically dangerous. However, there are some issues | Variable selection for predictive modeling really needed in 2016?
As part of an algorithm for learning a purely predictive model, variable selection is not necessarily bad from a performance viewpoint nor is it automatically dangerous. However, there are some issues that one should be aware of.
To make the question a little more concrete, let's consider the linear regression problem with
$$E(Y_i \mid X_i) = X_i^T \beta$$
for $i = 1, \ldots, N$, and $X_i$ and $\beta$ being $p$-dimensional vectors of variables and parameters, respectively. The objective is to find a good approximation of the function
$$x \mapsto E(Y \mid X = x) = X^T \beta,$$
which is the prediction of $Y$ given $X = x$. This can be achieved by estimating $\beta$ using combinations of variable selection and minimisation of a loss function with or without penalisation. Model averaging or Bayesian methods may also be used, but let's focus on single model predictions.
Stepwise selection algorithms like forward and backward variable selection can be seen as approximate attempts to solve a best subset selection problem, which is computationally hard (so hard that improvements in computational power matter little). The interest is in finding for each $k = 1, \ldots, \min(N, p)$ the best (or at least a good) model with $k$ variables. Subsequently, we may optimise over $k$.
The danger with such a variable selection procedure is that many standard distributional results are invalid conditionally on the variable selection. This holds for standard tests and confidence intervals, and is one of the problems that Harrell [2] is warning about. Breiman also warned about model selection based on e.g. Mallows' $C_p$ in The Little Bootstrap .... Mallows' $C_p$, or AIC for that matter, do not account for the model selection, and they will give overly optimistic prediction errors.
However, cross-validation can be used for estimating the prediction error and for selecting $k$, and variable selection can achieve a good balance between bias and variance. This is particularly true if $\beta$ has a few large coordinates with the rest close to zero $-$ as @probabilityislogic mentions.
Shrinkage methods such as ridge regression and lasso can achieve a good tradeoff between bias and variance without explicit variable selection. However, as the OP mentions, lasso does implicit variable selection. It's not really the model but rather the method for fitting the model that does variable selection. From that perspective, variable selection (implicit or explicit) is simply part of the method for fitting the model to data, and it should be regarded as such.
Algorithms for computing the lasso estimator can benefit from variable selection (or screening). In Statistical Learning with Sparsity: The Lasso and Generalizations, Section 5.10, it is described how screening, as implemented in glmnet, is useful. It can lead to substantially faster computation of the lasso estimator.
One personal experience is from an example where variable selection made it possible to fit a more complicated model (a generalised additive model) using the selected variables. Cross-validation results indicated that this model was superior to a number of alternatives $-$ though not to a random forest. If gamsel had been around $-$ which integrates generalised additive models with variable selection $-$ I might have considered trying it out as well.
Edit: Since I wrote this answer there is a paper out on the particular application I had in mind. R-code for reproducing the results in the paper is available.
In summary I will say that variable selection (in one form or another) is, and will remain, useful $-$ even for purely predictive purposes $-$ as a way to control the bias-variance tradeoff. If not for other reasons, then at least because more complicated models may not be able to handle very large numbers of variables out-of-the-box. However, as time goes by we will naturally see developments like gamsel that integrate variable selection into the estimation methodology.
It is, of course, always essential that we regard variable selection as part of the estimation method. The danger is to believe that variable selection performs like an oracle and identifies the correct set of variables. If we believe that and proceed as if variables were not selected based on the data, then we are at risk of making errors. | Variable selection for predictive modeling really needed in 2016?
As part of an algorithm for learning a purely predictive model, variable selection is not necessarily bad from a performance viewpoint nor is it automatically dangerous. However, there are some issues |
2,665 | Variable selection for predictive modeling really needed in 2016? | Allow me to comment on the statement: “... fitting k parameters to n < k observations is just not going to happen.”
In chemometrics we are often interested in predictive models, and the situation k >> n is frequently encountered (e.g. in spectroscopic data). This problem is typically solved simply by projecting the observations to a lower dimensional subspace a, where a < n, before the regression (e.g. Principal Component Regression). Using Partial Least Squares Regression the projection and regression are performed simultaneously favoring quality of prediction. The methods mentioned find optimal pseudo-inverses to a (singular) covariance or correlation matrix, e.g. by singular value decomposition.
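(For readers who want to try this in R: a minimal sketch with the pls package and its bundled gasoline NIR data, where p = 401 wavelengths far exceeds n = 60 samples.)
library(pls)
data(gasoline)                     # 60 NIR spectra (401 wavelengths) with measured octane numbers
fit_pcr <- pcr(octane ~ NIR, ncomp = 10, data = gasoline, validation = "CV")
fit_pls <- plsr(octane ~ NIR, ncomp = 10, data = gasoline, validation = "CV")
RMSEP(fit_pcr)                     # cross-validated error by number of components
RMSEP(fit_pls)                     # PLS typically needs fewer components for the same error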
Experience shows that predictive performance of multivariate models increases when noisy variables are removed. So even if we - in a meaningful way - are able to estimate k parameters having only n equations (n < k), we strive for parsimonious models. For that purpose, variable selection becomes relevant, and much chemometric literature is devoted to this subject.
While prediction is an important objective, the projection methods at the same time offer valuable insight into e.g. patterns in the data and the relevance of variables. This is facilitated mainly by diverse model plots, e.g. scores, loadings, residuals, etc.
Chemometric technology is used extensively e.g. in the industry where reliable and accurate predictions really count. | Variable selection for predictive modeling really needed in 2016? | Allow me to comment on the statement: “... fitting k parameters to n < k observations is just not going to happen.”
In chemometrics we are often interested in predictive models, and the situation k >> | Variable selection for predictive modeling really needed in 2016?
Allow me to comment on the statement: “... fitting k parameters to n < k observations is just not going to happen.”
In chemometrics we are often interested in predictive models, and the situation k >> n is frequently encountered (e.g. in spectroscopic data). This problem is typically solved simply by projecting the observations to a lower dimensional subspace a, where a < n, before the regression (e.g. Principal Component Regression). Using Partial Least Squares Regression the projection and regression are performed simultaneously favoring quality of prediction. The methods mentioned find optimal pseudo-inverses to a (singular) covariance or correlation matrix, e.g. by singular value decomposition.
Experience shows that predictive performance of multivariate models increases when noisy variables are removed. So even if we - in a meaningful way - are able to estimate k parameters having only n equations (n < k), we strive for parsimonious models. For that purpose, variable selection becomes relevant, and much chemometric literature is devoted to this subject.
While prediction is an important objective, the projection methods at the same time offer valuable insight into e.g. patterns in the data and the relevance of variables. This is facilitated mainly by diverse model plots, e.g. scores, loadings, residuals, etc.
Chemometric technology is used extensively e.g. in the industry where reliable and accurate predictions really count. | Variable selection for predictive modeling really needed in 2016?
Allow me to comment on the statement: “... fitting k parameters to n < k observations is just not going to happen.”
In chemometrics we are often interested in predictive models, and the situation k >> |
2,666 | Variable selection for predictive modeling really needed in 2016? | In several well known cases, yes, variable selection is not necessary. Deep learning has become a bit overhyped for precisely this reason.
For example, when a convolutional neural network (http://cs231n.github.io/convolutional-networks/) tries to predict if a centered image contains a human face, the corners of the image tend to have minimal predictive value. Traditional modeling and variable selection would have the modeler remove the corner pixels as predictors; however, the convolutional neural network is smart enough to essentially discard these predictors automatically. This is true for most deep learning models that try to predict the presence of some object in an image (e.g., self-driving cars "predicting" lane markings, obstacles or other cars in frames of onboard streaming video).
Deep learning is probably overkill for a lot of traditional problems such as where datasets are small or where domain knowledge is abundant, so traditional variable selection will probably remain relevant for a long time, at least in some areas. Nonetheless, deep learning is great when you want to throw together a "pretty good" solution with minimal human intervention. It might take me many hours to handcraft and select predictors to recognize handwritten digits in images, but with a convoluted neural network and zero variable selection, I can have a state-of-the-art model in just under 20 minutes using Google's TensorFlow (https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html). | Variable selection for predictive modeling really needed in 2016? | In several well known cases, yes, variable selection is not necessary. Deep learning has become a bit overhyped for precisely this reason.
For example, when a convoluted neural network (http://cs231n | Variable selection for predictive modeling really needed in 2016?
In several well known cases, yes, variable selection is not necessary. Deep learning has become a bit overhyped for precisely this reason.
For example, when a convolutional neural network (http://cs231n.github.io/convolutional-networks/) tries to predict if a centered image contains a human face, the corners of the image tend to have minimal predictive value. Traditional modeling and variable selection would have the modeler remove the corner pixels as predictors; however, the convolutional neural network is smart enough to essentially discard these predictors automatically. This is true for most deep learning models that try to predict the presence of some object in an image (e.g., self-driving cars "predicting" lane markings, obstacles or other cars in frames of onboard streaming video).
Deep learning is probably overkill for a lot of traditional problems such as where datasets are small or where domain knowledge is abundant, so traditional variable selection will probably remain relevant for a long time, at least in some areas. Nonetheless, deep learning is great when you want to throw together a "pretty good" solution with minimal human intervention. It might take me many hours to handcraft and select predictors to recognize handwritten digits in images, but with a convoluted neural network and zero variable selection, I can have a state-of-the-art model in just under 20 minutes using Google's TensorFlow (https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros/index.html). | Variable selection for predictive modeling really needed in 2016?
In several well known cases, yes, variable selection is not necessary. Deep learning has become a bit overhyped for precisely this reason.
For example, when a convolutional neural network (http://cs231n
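For illustration, here is a minimal R sketch of the same point, with randomForest standing in for a flexible learner (this is not the TensorFlow example from the answer; the simulated data and the choice of 20 noise predictors are arbitrary). A model fit with no variable selection at all is typically barely hurt by a block of pure-noise predictors.
library(randomForest)

set.seed(1)
n <- 500
x_signal <- matrix(rnorm(n * 5), n, 5)                      # 5 informative predictors
x_noise  <- matrix(rnorm(n * 20), n, 20)                    # 20 pure-noise predictors
colnames(x_signal) <- paste0("s", 1:5)
colnames(x_noise)  <- paste0("noise", 1:20)
y <- drop(x_signal %*% c(3, -2, 1.5, 1, -1)) + rnorm(n)

dat_selected <- data.frame(y, x_signal)                     # "hand-selected" predictors only
dat_all      <- data.frame(y, x_signal, x_noise)            # no variable selection at all

rf_selected <- randomForest(y ~ ., data = dat_selected)
rf_all      <- randomForest(y ~ ., data = dat_all)

tail(rf_selected$mse, 1)   # final out-of-bag MSE with hand-selected predictors
tail(rf_all$mse, 1)        # usually only slightly worse with the noise columns included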
2,667 | Best way to present a random forest in a publication? | Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github).
Regarding visualization, Leo Breiman has done some interesting work on this (see his homepage, in particular the section on graphics).
But if you're using R, then the randomForest package has some useful functions:
library(randomForest)
data(mtcars)
mtcars.rf <- randomForest(mpg ~ ., data=mtcars, ntree=1000, keep.forest=FALSE,
                          importance=TRUE)
plot(mtcars.rf, log="y")   # OOB error (MSE) as a function of the number of trees
varImpPlot(mtcars.rf)      # dot chart of the two variable-importance measures
And
set.seed(1)
data(iris)
iris.rf <- randomForest(Species ~ ., iris, proximity=TRUE,
keep.forest=FALSE)
MDSplot(iris.rf, iris$Species)   # MDS plot of the random-forest proximity matrix, coloured by species
I'm not aware of a simple way to actually plot a tree, but you can use the getTree function to retrieve the tree and plot that separately.
getTree(randomForest(iris[,-5], iris[,5], ntree=10), 3, labelVar=TRUE)   # extracts the 3rd tree as a data frame, one row per node
The Strobl/Zeileis presentation on "Why and how to use random forest variable importance measures (and how you shouldn’t)" has examples of trees which must have been produced in this way. This blog post on tree models has some nice examples of CART tree plots which you can use for example.
As @chl commented, a single tree isn't especially meaningful in this context, so short of using it to explain what a random forest is, I wouldn't include this in a paper. | Best way to present a random forest in a publication? | Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github).
Re | Best way to present a random forest in a publication?
Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github).
Regarding visualization, Leo Breiman has done some interesting work on this (see his homepage, in particular the section on graphics).
But if you're using R, then the randomForest package has some useful functions:
data(mtcars)
mtcars.rf <- randomForest(mpg ~ ., data=mtcars, ntree=1000, keep.forest=FALSE,
importance=TRUE)
plot(mtcars.rf, log="y")
varImpPlot(mtcars.rf)
And
set.seed(1)
data(iris)
iris.rf <- randomForest(Species ~ ., iris, proximity=TRUE,
keep.forest=FALSE)
MDSplot(iris.rf, iris$Species)
I'm not aware of a simple way to actually plot a tree, but you can use the getTree function to retrieve the tree and plot that separately.
getTree(randomForest(iris[,-5], iris[,5], ntree=10), 3, labelVar=TRUE)
The Strobl/Zeileis presentation on "Why and how to use random forest variable importance measures (and how you shouldn’t)" has examples of trees which must have been produced in this way. This blog post on tree models has some nice examples of CART tree plots which you can use for example.
As @chl commented, a single tree isn't especially meaningful in this context, so short of using it to explain what a random forest is, I wouldn't include this in a paper. | Best way to present a random forest in a publication?
Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github).
Re |
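If a table is easier to put in the paper than the varImpPlot output, the importance measures can also be pulled out directly. A small sketch along the same lines as the mtcars example above (sketch only; the sorting column %IncMSE is the permutation importance reported for regression forests when importance=TRUE):
library(randomForest)

data(mtcars)
mtcars.rf <- randomForest(mpg ~ ., data = mtcars, ntree = 1000, importance = TRUE)

imp <- importance(mtcars.rf)                       # matrix with %IncMSE and IncNodePurity
round(imp[order(imp[, "%IncMSE"], decreasing = TRUE), ], 2)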
2,668 | Best way to present a random forest in a publication? | As Shane wrote; make it reproducible research + include random seeds, because RF is stochastic.
First of all, plotting the single trees forming an RF is nonsense; it is an ensemble classifier and makes sense only as a whole. But even plotting the whole forest is nonsense -- it is a black-box classifier, so it is not intended to explain the data with its structure, but rather to replicate the original process. Instead, make some of the plots Shane suggested.
In practice, OOB is a very good error approximation; yet this is not a widely accepted fact, so for publication it is better to also make a CV to confirm it. | Best way to present a random forest in a publication? | As Shane wrote; make it reproducible research + include random seeds, because RF is stochastic.
First of all, plotting single trees forming RF is nonsense; this is an ensemble classifier, it makes sen | Best way to present a random forest in a publication?
As Shane wrote; make it reproducible research + include random seeds, because RF is stochastic.
First of all, plotting the single trees forming an RF is nonsense; it is an ensemble classifier and makes sense only as a whole. But even plotting the whole forest is nonsense -- it is a black-box classifier, so it is not intended to explain the data with its structure, but rather to replicate the original process. Instead, make some of the plots Shane suggested.
In practice, OOB is a very good error approximation; yet this is not a widely accepted fact, so for publication it is better to also make a CV to confirm it. | Best way to present a random forest in a publication?
As Shane wrote; make it reproducible research + include random seeds, because RF is stochastic.
First of all, plotting single trees forming RF is nonsense; this is an ensemble classifier, it makes sen |
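A minimal R sketch of the OOB-versus-cross-validation check suggested above (iris is used only as a placeholder dataset, and 10 folds is an arbitrary choice):
library(randomForest)

set.seed(1)
data(iris)

rf <- randomForest(Species ~ ., data = iris)
oob_err <- tail(rf$err.rate[, "OOB"], 1)           # OOB error estimate from a single fit

folds <- sample(rep(1:10, length.out = nrow(iris)))
cv_err <- mean(sapply(1:10, function(k) {
  fit  <- randomForest(Species ~ ., data = iris[folds != k, ])
  pred <- predict(fit, iris[folds == k, ])
  mean(pred != iris$Species[folds == k])           # misclassification rate on the held-out fold
}))

c(OOB = oob_err, CV = cv_err)                      # the two estimates should be close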
2,669 | Best way to present a random forest in a publication? | Keep in mind the caveats in the other answers about the plot necessarily being meaningful. But if you want a plot for illustrative/pedagogical purposes, the following snippet of R might be useful. Not hard to add "split point" to the edge text if you need it.
to.dendrogram <- function(dfrep,rownum=1,height.increment=0.1){
  if(dfrep[rownum,'status'] == -1){
    # terminal node: build a leaf labelled with the tree's prediction
    rval <- list()
    attr(rval,"members") <- 1
    attr(rval,"height") <- 0.0
    attr(rval,"label") <- dfrep[rownum,'prediction']
    attr(rval,"leaf") <- TRUE
  }else{##note the change "to.dendrogram" and not "to.dendogram"
    # internal node: recurse into both daughter nodes and combine them
    left <- to.dendrogram(dfrep,dfrep[rownum,'left daughter'],height.increment)
    right <- to.dendrogram(dfrep,dfrep[rownum,'right daughter'],height.increment)
    rval <- list(left,right)
    attr(rval,"members") <- attr(left,"members") + attr(right,"members")
    attr(rval,"height") <- max(attr(left,"height"),attr(right,"height")) + height.increment
    attr(rval,"leaf") <- FALSE
    attr(rval,"edgetext") <- dfrep[rownum,'split var']
    #To add Split Point in Dendrogram
    #attr(rval,"edgetext") <- paste(dfrep[rownum,'split var'],"\n<",round(dfrep[rownum,'split point'], digits = 2),"=>", sep = " ")
  }
  class(rval) <- "dendrogram"
  return(rval)
}
library(randomForest)
mod <- randomForest(Species ~ ., data=iris)
tree <- getTree(mod,1,labelVar=TRUE)
d <- to.dendrogram(tree)
str(d)
plot(d,center=TRUE,leaflab='none',edgePar=list(t.cex=1,p.col=NA,p.lty=0)) | Best way to present a random forest in a publication? | Keep in mind the caveats in the other answers about the plot necessarily being meaningful. But if you want a plot for illustrative/pedagogical purposes, the following snippet of R might be useful. N | Best way to present a random forest in a publication?
Keep in mind the caveats in the other answers about the plot necessarily being meaningful. But if you want a plot for illustrative/pedagogical purposes, the following snippet of R might be useful. Not hard to add "split point" to the edge text if you need it.
to.dendrogram <- function(dfrep,rownum=1,height.increment=0.1){
if(dfrep[rownum,'status'] == -1){
rval <- list()
attr(rval,"members") <- 1
attr(rval,"height") <- 0.0
attr(rval,"label") <- dfrep[rownum,'prediction']
attr(rval,"leaf") <- TRUE
}else{##note the change "to.dendrogram" and not "to.dendogram"
left <- to.dendrogram(dfrep,dfrep[rownum,'left daughter'],height.increment)
right <- to.dendrogram(dfrep,dfrep[rownum,'right daughter'],height.increment)
rval <- list(left,right)
attr(rval,"members") <- attr(left,"members") + attr(right,"members")
attr(rval,"height") <- max(attr(left,"height"),attr(right,"height")) + height.increment
attr(rval,"leaf") <- FALSE
attr(rval,"edgetext") <- dfrep[rownum,'split var']
#To add Split Point in Dendrogram
#attr(rval,"edgetext") <- paste(dfrep[rownum,'split var'],"\n<",round(dfrep[rownum,'split point'], digits = 2),"=>", sep = " ")
}
class(rval) <- "dendrogram"
return(rval)
}
mod <- randomForest(Species ~ .,data=iris)
tree <- getTree(mod,1,labelVar=TRUE)
d <- to.dendrogram(tree)
str(d)
plot(d,center=TRUE,leaflab='none',edgePar=list(t.cex=1,p.col=NA,p.lty=0)) | Best way to present a random forest in a publication?
Keep in mind the caveats in the other answers about the plot necessarily being meaningful. But if you want a plot for illustrative/pedagogical purposes, the following snippet of R might be useful. N |
2,670 | What algorithm should I use to detect anomalies on time-series? | I think the key is the "unexpected" qualifier in your graph. In order to detect the unexpected, you need to have an idea of what's expected.
I would start with a simple time series model such as AR(p) or ARMA(p,q). Fit it to data, add seasonality as appropriate. For instance, your SAR(1)(24) model could be: $y_{t}=c+\phi y_{t-1}+\Phi_{24}y_{t-24}+\Phi_{25}y_{t-25}+\varepsilon_t$, where $t$ is time in hours. So, you'd be predicting the graph for the next hour. Whenever the prediction error $e_t=y_t-\hat y_t$ is "too big" you throw an alert.
When you estimate the model you'll get the standard deviation $\sigma_\varepsilon$ of the error $\varepsilon_t$. Depending on your distributional assumptions, such as normality, you can set the threshold based on the probability, such as $|e_t|<3\sigma_\varepsilon$ for 99.7% or the one-sided $e_t>3\sigma_\varepsilon$.
The number of visitors is probably quite persistent, but super seasonal. It might work better to try seasonal dummies instead of the multiplicative seasonality, then you'd try ARMAX where X stands for exogenous variables, which could be anything like holiday dummy, hour dummies, weekend dummies etc. | What algorithm should I use to detect anomalies on time-series? | I think the key is "unexpected" qualifier in your graph. In order to detect the unexpected you need to have an idea of what's expected.
I would start with a simple time series model such as AR(p) or A | What algorithm should I use to detect anomalies on time-series?
I think the key is the "unexpected" qualifier in your graph. In order to detect the unexpected, you need to have an idea of what's expected.
I would start with a simple time series model such as AR(p) or ARMA(p,q). Fit it to data, add seasonality as appropriate. For instance, your SAR(1)(24) model could be: $y_{t}=c+\phi y_{t-1}+\Phi_{24}y_{t-24}+\Phi_{25}y_{t-25}+\varepsilon_t$, where $t$ is time in hours. So, you'd be predicting the graph for the next hour. Whenever the prediction error $e_t=y_t-\hat y_t$ is "too big" you throw an alert.
When you estimate the model you'll get the standard deviation $\sigma_\varepsilon$ of the error $\varepsilon_t$. Depending on your distributional assumptions, such as normality, you can set the threshold based on the probability, such as $|e_t|<3\sigma_\varepsilon$ for 99.7% or the one-sided $e_t>3\sigma_\varepsilon$.
The number of visitors is probably quite persistent, but super seasonal. It might work better to try seasonal dummies instead of the multiplicative seasonality, then you'd try ARMAX where X stands for exogenous variables, which could be anything like holiday dummy, hour dummies, weekend dummies etc. | What algorithm should I use to detect anomalies on time-series?
I think the key is "unexpected" qualifier in your graph. In order to detect the unexpected you need to have an idea of what's expected.
I would start with a simple time series model such as AR(p) or A |
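To make the fit-predict-flag recipe concrete, here is a rough R sketch with arima() standing in for the seasonal AR model above (the simulated hourly series, the injected drop at observations 800-803 and the 3-sigma rule are all placeholders following the answer's suggestion):
set.seed(1)
n_hours <- 24 * 60
y <- ts(100 + 30 * sin(2 * pi * seq_len(n_hours) / 24) + rnorm(n_hours, sd = 5),
        frequency = 24)
y[800:803] <- y[800:803] - 60                       # injected drop

fit   <- arima(y, order = c(1, 0, 0), seasonal = list(order = c(1, 0, 0), period = 24))
e     <- residuals(fit)
sigma <- sqrt(fit$sigma2)

which(e < -3 * sigma)                               # one-sided alert: unusually large drops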
2,671 | What algorithm should I use to detect anomalies on time-series? | On the Netflix tech blog there is an article on their Robust Anomaly Detection tool (RAD).
http://techblog.netflix.com/2015/02/rad-outlier-detection-on-big-data.html
It deals with seasonality and very high volume datasets so it may fit your requirements.
The code is open source Java and Apache Pig https://github.com/Netflix/Surus/blob/master/resources/examples/pig/rad.pig
The underlying algorithm is based on robust PCA - see original paper here: http://statweb.stanford.edu/~candes/papers/RobustPCA.pdf | What algorithm should I use to detect anomalies on time-series? | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| What algorithm should I use to detect anomalies on time-series?
On the Netflix tech blog there is an article on their Robust Anomaly Detection tool (RAD).
http://techblog.netflix.com/2015/02/rad-outlier-detection-on-big-data.html
It deals with seasonality and very high volume datasets so it may fit your requirements.
The code is open source Java and Apache Pig https://github.com/Netflix/Surus/blob/master/resources/examples/pig/rad.pig
The underlying algorithm is based on robust PCA - see original paper here: http://statweb.stanford.edu/~candes/papers/RobustPCA.pdf | What algorithm should I use to detect anomalies on time-series?
|
2,672 | What algorithm should I use to detect anomalies on time-series? | Most outlier detection algorithms in open source packages are for low-frequency (daily/weekly/monthly) business time series data. This data appears to be from a specialized area that is captured in minutes, so I'm not sure whether open source outlier detection would be helpful. You could try to adapt these approaches to your data.
Below I outline some approaches available in open source R packages:
tsoutliers: Implements Chen and Liu's outlier detection algorithm within the ARIMA framework; see my earlier question on this site. Fantastic approach, but very slow; not sure if it will be able to handle high frequency data like yours. It has the advantage of detecting all types of outliers, as mentioned in my earlier question/post.
Twitter's Anomaly Detection: Uses Rosner's algorithm to detect anomalies in time series. The algorithm decomposes the time series and then detects anomalies. In my personal opinion, this is not efficient or accurate in detecting outliers in time series.
tsoutliers in the forecast package: Similar to Twitter's algorithm in terms of decomposing the time series and then detecting outliers. It will only detect additive outliers or pulses.
There are commercial packages that have dedicated approaches to try to detect anomalies. Another classic approach is Tsay's time series outlier detection algorithm; similar to Chen and Liu's approach, it detects different types of outliers. I recently also stumbled on a commercial software solution called metafor which might be more suited to your data.
Hope this is helpful | What algorithm should I use to detect anomalies on time-series? | Most outlier detection algorithms in open source package are for business time series data with low frequency, daily/weekly/monthly frequency data. This data appears to be for a specialized area that | What algorithm should I use to detect anomalies on time-series?
Most outlier detection algorithms in open source packages are for low-frequency (daily/weekly/monthly) business time series data. This data appears to be from a specialized area that is captured in minutes, so I'm not sure whether open source outlier detection would be helpful. You could try to adapt these approaches to your data.
Below I outline some approaches available in open source R packages:
tsoutliers: Implements Chen and Liu's outlier detection algorithm within the ARIMA framework; see my earlier question on this site. Fantastic approach, but very slow; not sure if it will be able to handle high frequency data like yours. It has the advantage of detecting all types of outliers, as mentioned in my earlier question/post.
Twitter's Anomaly Detection: Uses Rosner's algorithm to detect anomalies in time series. The algorithm decomposes the time series and then detects anomalies. In my personal opinion, this is not efficient or accurate in detecting outliers in time series.
tsoutliers in the forecast package: Similar to Twitter's algorithm in terms of decomposing the time series and then detecting outliers. It will only detect additive outliers or pulses.
There are commercial packages that have dedicated approaches to try to detect anomalies. Another classic approach is Tsay's time series outlier detection algorithm; similar to Chen and Liu's approach, it detects different types of outliers. I recently also stumbled on a commercial software solution called metafor which might be more suited to your data.
Hope this is helpful | What algorithm should I use to detect anomalies on time-series?
Most outlier detection algorithms in open source package are for business time series data with low frequency, daily/weekly/monthly frequency data. This data appears to be for a specialized area that |
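As a quick illustration of the tsoutliers() function in the forecast package mentioned above (simulated data; the outlier position and size are arbitrary):
library(forecast)

set.seed(1)
y <- ts(100 + 20 * sin(2 * pi * seq_len(240) / 24) + rnorm(240, sd = 3), frequency = 24)
y[120] <- y[120] - 60                               # injected additive outlier

tsoutliers(y)                                       # returns $index of flagged points and suggested $replacements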
2,673 | What algorithm should I use to detect anomalies on time-series? | What the other answers didn't seem to mention is that your problem sounds like changepoint detection. The idea of changepoint detection is that you are looking for segments in your data that differ significantly in their properties (e.g. mean, variance). This can be achieved by using maximum likelihood estimation, where for $m$ changepoints the likelihood function is
$$
L(m, \tau_{1:m}, \theta_{1:(m+1)}) = \prod_{i=1}^{m+1} p(y_{(\tau_{i-1} + 1):\tau_i}\mid \theta_i)
$$
where $y_1,\dots,y_n$ is your data, $1 < \tau_1 <\dots<\tau_m<n$ are the boundary points marking the changes, and the probability distributions $p$ are parametrized by $\theta_i$ for the $i$-th segment. This can be easily generalized to a wide variety of situations. A number of algorithms exist to find the parameters, including the unknown $m$. There is also software available for estimating such models, e.g. the changepoint package for R. If you want to learn more, you can check the following publications and the references they provide:
Rebecca Killick and Idris A. Eckley. (2013) changepoint: An R Package
for Changepoint Analysis. (online paper)
Eckley, I.A., Fearnhead, P. and Killick, R. (2011) Analysis of
changepoint models. [in:] Bayesian Time Series Models, eds. D.
Barber, A.T. Cemgil and S. Chiappa, Cambridge University Press. | What algorithm should I use to detect anomalies on time-series? | What other answers didn't seems to mention is that your problem sounds like a changepoint detection. The idea of changapoint detection is that you are seeking for segments in your data that significan | What algorithm should I use to detect anomalies on time-series?
What the other answers didn't seem to mention is that your problem sounds like changepoint detection. The idea of changepoint detection is that you are looking for segments in your data that differ significantly in their properties (e.g. mean, variance). This can be achieved by using maximum likelihood estimation, where for $m$ changepoints the likelihood function is
$$
L(m, \tau_{1:m}, \theta_{1:(m+1)}) = \prod_{i=1}^{m+1} p(y_{(\tau_{i-1} + 1):\tau_i}\mid \theta_i)
$$
where $y_1,\dots,y_n$ is your data, $1 < \tau_1 <\dots<\tau_m<n$ are the boundary points marking the changes, and the probability distributions $p$ are parametrized by $\theta_i$ for the $i$-th segment. This can be easily generalized to a wide variety of situations. A number of algorithms exist to find the parameters, including the unknown $m$. There is also software available for estimating such models, e.g. the changepoint package for R. If you want to learn more, you can check the following publications and the references they provide:
Rebecca Killick and Idris A. Eckley. (2013) changepoint: An R Package
for Changepoint Analysis. (online paper)
Eckley, I.A., Fearnhead, P. and Killick, R. (2011) Analysis of
changepoint models. [in:] Bayesian Time Series Models, eds. D.
Barber, A.T. Cemgil and S. Chiappa, Cambridge University Press. | What algorithm should I use to detect anomalies on time-series?
What other answers didn't seems to mention is that your problem sounds like a changepoint detection. The idea of changapoint detection is that you are seeking for segments in your data that significan |
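A small sketch of the changepoint package mentioned above, detecting changes in mean and variance on simulated data (the segment boundaries and the choice of the PELT method are arbitrary):
library(changepoint)

set.seed(1)
y <- c(rnorm(300, mean = 10), rnorm(100, mean = 4), rnorm(200, mean = 10))

fit <- cpt.meanvar(y, method = "PELT")              # penalised search over possible changepoints
cpts(fit)                                           # estimated changepoint locations
plot(fit)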
2,674 | What algorithm should I use to detect anomalies on time-series? | Have you tried using Statistical Process Control rules (e.g. Western Electric http://en.wikipedia.org/wiki/Western_Electric_rules)?
I use them for time series data - often with a touch of intuition about the data - to assess whether the data is going somewhere I don't want it to go. Like your example, these rules say if the delta / change is consistent over several data points, it flags that there may be an issue.
Also Statistical Process Control (SPC) can be good for working out if you are getting better or worse than before.
One issue with SPC is that much of it relies on a normal distribution which probably doesn't suit your data which can't go below zero. Others better than I with SPC can suggest options here. I like using it to flag an issue but, like all models, is best used with a grain of knowledge about the data itself (and source). | What algorithm should I use to detect anomalies on time-series? | Have you tried using Statistical Process Control rules (e.g. Western Electric http://en.wikipedia.org/wiki/Western_Electric_rules)?
I use them for time series data - often with a touch of intuition ab | What algorithm should I use to detect anomalies on time-series?
Have you tried using Statistical Process Control rules (e.g. Western Electric http://en.wikipedia.org/wiki/Western_Electric_rules)?
I use them for time series data - often with a touch of intuition about the data - to assess whether the data is going somewhere I don't want it to go. Like your example, these rules say if the delta / change is consistent over several data points, it flags that there may be an issue.
Also Statistical Process Control (SPC) can be good for working out if you are getting better or worse than before.
One issue with SPC is that much of it relies on a normal distribution which probably doesn't suit your data which can't go below zero. Others better than I with SPC can suggest options here. I like using it to flag an issue but, like all models, is best used with a grain of knowledge about the data itself (and source). | What algorithm should I use to detect anomalies on time-series?
Have you tried using Statistical Process Control rules (e.g. Western Electric http://en.wikipedia.org/wiki/Western_Electric_rules)?
I use them for time series data - often with a touch of intuition ab |
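One hedged way to try this in R is the qcc package's individuals chart, which flags points beyond the 3-sigma limits and violating runs in the spirit of the Western Electric rules (simulated data; the shift at observations 81-100 is made up):
library(qcc)

set.seed(1)
x <- c(rnorm(80, mean = 50, sd = 2), rnorm(20, mean = 44, sd = 2))

chart <- qcc(x, type = "xbar.one")                  # Shewhart chart for individual values
chart$violations                                    # out-of-control points and violating runs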
2,675 | What algorithm should I use to detect anomalies on time-series? | Given that the periodicity of the time series should be well understood, a simple but effective algorithm based on differencing can be devised.
A simple one-step differencing will detect a sudden drop from a previous value
$$y_t'= y_t - y_{t-1}$$
but if the series has a strong periodic component you'd expect considerable drops to occur on a regular basis anyway. In this case it would be better to compare each value to its counterpart at the same point in the previous cycle, that is, one period ago.
$$y_t'= y_t - y_{t-n} \quad \text{where } n=\text{length of period}$$
In the case of the posted question it would be natural to expect two significant periodic components, one the length of a day, the other the length of a week. But this isn't much of a complication, as the length of the longer period can be neatly divided by the length of the shorter.
If the sampling is done every hour, $n$ in the above equation should be set to $24*7 = 168$
If the drops are more of a proportional character a simple difference will easily fail to detect a sudden drop when activity is low. In such circumstances the algorithm can be modified to calculate ratios instead.
$$y_t^*= \frac{y_t}{y_{t-n}}$$
I did some tests in R using a simulated dataset. In it data is sampled 6 times a day and there are strong daily and weekly periods, along with other noise and fluctuations. Drops were added in at random places and of durations between 1 and 3.
To isolate the drops, ratios were first calculated at distance 42, then a threshold was set at 0.6, as only negative changes of a certain size are of interest. Then a one-step difference was calculated, and a threshold set at -0.5. In the end one false positive appears to have slipped through (the one at the end of week 16). The graphs at the left and right show the same data, just in different ways. | What algorithm should I use to detect anomalies on time-series? | Given that the periodicity of the time series should be well understood a simple, but effective, algorithm based on differencing can be devised.
A simple one-step differencing will detect a sudden dro | What algorithm should I use to detect anomalies on time-series?
Given that the periodicity of the time series should be well understood, a simple but effective algorithm based on differencing can be devised.
A simple one-step differencing will detect a sudden drop from a previous value
$$y_t'= y_t - y_{t-1}$$
but if the series has a strong periodic component you'd expect considerable drops to occur on a regular basis anyway. In this case it would be better to compare each value to its counterpart at the same point in the previous cycle, that is, one period ago.
$$y_t'= y_t - y_{t-n} \quad \text{where } n=\text{length of period}$$
In the case of the posted question it would be natural to expect two significant periodic components, one the length of a day, the other the length of a week. But this isn't much of a complication, as the length of the longer period can be neatly divided by the length of the shorter.
If the sampling is done every hour, $n$ in the above equation should be set to $24*7 = 168$
If the drops are more of a proportional character a simple difference will easily fail to detect a sudden drop when activity is low. In such circumstances the algorithm can be modified to calculate ratios instead.
$$y_t^*= \frac{y_t}{y_{t-n}}$$
I did some tests in R using a simulated dataset. In it data is sampled 6 times a day and there are strong daily and weekly periods, along with other noise and fluctuations. Drops were added in at random places and of durations between 1 and 3.
To isolate the drops, ratios were first calculated at distance 42, then a threshold was set at 0.6, as only negative changes of a certain size are of interest. Then a one-step difference was calculated, and a threshold set at -0.5. In the end one false positive appears to have slipped through (the one at the end of week 16). The graphs at the left and right show the same data, just in different ways. | What algorithm should I use to detect anomalies on time-series?
Given that the periodicity of the time series should be well understood a simple, but effective, algorithm based on differencing can be devised.
A simple one-step differencing will detect a sudden dro |
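For reference, a stripped-down R version of the ratio detector described above, for hourly data with a weekly period n = 168 (the simulated series, the injected drop and the 0.6 threshold are all placeholders):
set.seed(1)
h <- seq_len(24 * 7 * 20)
y <- 200 + 80 * sin(2 * pi * h / 24) + 40 * sin(2 * pi * h / 168) + rnorm(length(h), sd = 10)
y[2000:2005] <- y[2000:2005] * 0.3                      # injected proportional drop

n <- 168                                                # one week of hourly observations
ratio  <- y[(n + 1):length(y)] / y[1:(length(y) - n)]   # y_t / y_{t-n}
alerts <- which(ratio < 0.6) + n                        # flag only large negative changes
alerts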
2,676 | What algorithm should I use to detect anomalies on time-series? | Would it be more useful to think of the changes in the time series as the beginning of a new trend rather than an anomaly? Taking the difference between adjacent points would help tell when the slope (derivative) is changing and might signal the beginning of a new trend in the data. Also taking the differences of the difference values (the second derivative) might be of use. Doing a Google search on "time series beginning of trend" may give good suggestions for methods. In financial data a lot of attention is paid to new trends (do you buy or sell?), so there are papers on this topic.
A good intro to wavelet is "The world according to wavelets" by Hubbard I believe is the author. | What algorithm should I use to detect anomalies on time-series? | Would it be more useful to think of the changes in the time series as a beginning of a new trend rather than an anomaly? Taking the difference between adjacent points would help tell when the slope ( | What algorithm should I use to detect anomalies on time-series?
Would it be more useful to think of the changes in the time series as the beginning of a new trend rather than an anomaly? Taking the difference between adjacent points would help tell when the slope (derivative) is changing and might signal the beginning of a new trend in the data. Also taking the differences of the difference values (the second derivative) might be of use. Doing a Google search on "time series beginning of trend" may give good suggestions for methods. In financial data a lot of attention is paid to new trends (do you buy or sell?), so there are papers on this topic.
A good intro to wavelet is "The world according to wavelets" by Hubbard I believe is the author. | What algorithm should I use to detect anomalies on time-series?
Would it be more useful to think of the changes in the time series as a beginning of a new trend rather than an anomaly? Taking the difference between adjacent points would help tell when the slope ( |
2,677 | What algorithm should I use to detect anomalies on time-series? | I was able to get some nice results for multiple-seasonality time series (daily, weekly) using two different algorithms:
Seasonal-trend decomposition using loess (or STL) to establish the midpoint series.
Nonlinear regression to establish thresholds around that midpoint, based on the relationship between the variance and the level.
STL does a time domain decomposition of your time series into a trend component, a single seasonal component, and a remainder. The seasonal component is your high frequency seasonality (e.g., daily), whereas the trend includes both the low frequency seasonality (e.g., weekly) and the trend proper. You can separate the two by simply running STL again on the trend. Anyway once you isolate the remainder series from the other components, you can perform your anomaly detection against that series.
I did a more detailed writeup here:
https://techblog.expedia.com/2016/07/28/applying-data-science-to-monitoring/ | What algorithm should I use to detect anomalies on time-series? | I was able to get some nice results for multiple-seasonality time series (daily, weekly) using two different algorithms:
Seasonal-trend decomposition using loess (or STL) to establish the midpoint se | What algorithm should I use to detect anomalies on time-series?
I was able to get some nice results for multiple-seasonality time series (daily, weekly) using two different algorithms:
Seasonal-trend decomposition using loess (or STL) to establish the midpoint series.
Nonlinear regression to establish thresholds around that midpoint, based on the relationship between the variance and the level.
STL does a time domain decomposition of your time series into a trend component, a single seasonal component, and a remainder. The seasonal component is your high frequency seasonality (e.g., daily), whereas the trend includes both the low frequency seasonality (e.g., weekly) and the trend proper. You can separate the two by simply running STL again on the trend. Anyway once you isolate the remainder series from the other components, you can perform your anomaly detection against that series.
I did a more detailed writeup here:
https://techblog.expedia.com/2016/07/28/applying-data-science-to-monitoring/ | What algorithm should I use to detect anomalies on time-series?
I was able to get some nice results for multiple-seasonality time series (daily, weekly) using two different algorithms:
Seasonal-trend decomposition using loess (or STL) to establish the midpoint se |
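A rough R sketch of the STL-plus-threshold idea for a single (daily) seasonality; the write-up linked above handles multiple seasonalities and variance-dependent thresholds, which this toy version does not (the injected incident and the factor of 4 are arbitrary):
set.seed(1)
y <- ts(200 + 80 * sin(2 * pi * seq_len(24 * 60) / 24) + rnorm(24 * 60, sd = 10),
        frequency = 24)
y[700:703] <- y[700:703] - 150                      # injected incident

fit       <- stl(y, s.window = "periodic")
remainder <- fit$time.series[, "remainder"]
which(abs(remainder) > 4 * mad(remainder))          # robust threshold on the remainder series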
2,678 | What algorithm should I use to detect anomalies on time-series? | Inspired by David, have you tried to use FFT? It might be able to spot sudden drops because those are indicating your anomalies. The anomalies might appear in a narrow spectrum. So you can easily capture them. | What algorithm should I use to detect anomalies on time-series? | Inspired by David, have you tried to use FFT? It might be able to spot sudden drops because those are indicating your anomalies. The anomalies might appear in a narrow spectrum. So you can easily cap | What algorithm should I use to detect anomalies on time-series?
Inspired by David, have you tried to use FFT? It might be able to spot sudden drops because those are indicating your anomalies. The anomalies might appear in a narrow spectrum. So you can easily capture them. | What algorithm should I use to detect anomalies on time-series?
Inspired by David, have you tried to use FFT? It might be able to spot sudden drops because those are indicating your anomalies. The anomalies might appear in a narrow spectrum. So you can easily cap |
2,679 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe size. But together it doesn't work out.
Brief simulation example
RSS = 3:10 #Right shoe size
LSS = rnorm(RSS, RSS, 0.1) #Left shoe size - similar to RSS
cor(LSS, RSS) #correlation ~ 0.99
weights = 120 + rnorm(RSS, 10*RSS, 10)
##Fit a joint model
m = lm(weights ~ LSS + RSS)
##The p-value of the overall F-test is very small, but neither LSS nor RSS is individually significant
summary(m)
##Fitting RSS or LSS separately gives a significant result.
summary(lm(weights ~ LSS)) | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe size. But together it doesn't work out.
Brief simulation example
RSS = 3:10 #Right shoe size
LSS = rnorm(RSS, RSS, 0.1) #Left shoe size - similar to RSS
cor(LSS, RSS) #correlation ~ 0.99
weights = 120 + rnorm(RSS, 10*RSS, 10)
##Fit a joint model
m = lm(weights ~ LSS + RSS)
##The p-value of the overall F-test is very small, but neither LSS nor RSS is individually significant
summary(m)
##Fitting RSS or LSS separately gives a significant result.
summary(lm(weights ~ LSS)) | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe |
2,680 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | It takes very little correlation among the independent variables to cause this.
To see why, try the following:
Draw 50 sets of ten vectors $(x_1, x_2, \ldots, x_{10})$ with coefficients iid standard normal.
Compute $y_i = (x_i + x_{i+1})/\sqrt{2}$ for $i = 1, 2, \ldots, 9$. This makes the $y_i$ individually standard normal but with some correlations among them.
Compute $w = x_1 + x_2 + \cdots + x_{10}$. Note that $w = \sqrt{2}(y_1 + y_3 + y_5 + y_7 + y_9)$.
Add some independent normally distributed error to $w$. With a little experimentation I found that $z = w + \varepsilon$ with $\varepsilon \sim N(0, 6)$ works pretty well. Thus, $z$ is the sum of the $x_i$ plus some error. It is also the sum of some of the $y_i$ plus the same error.
We will consider the $y_i$ to be the independent variables and $z$ the dependent variable.
Here's a scatterplot matrix of one such dataset, with $z$ along the top and left and the $y_i$ proceeding in order.
The expected correlations among $y_i$ and $y_j$ are $1/2$ when $|i-j|=1$ and $0$ otherwise. The realized correlations range up to 62%. They show up as tighter scatterplots next to the diagonal.
Look at the regression of $z$ against the $y_i$:
Source | SS df MS Number of obs = 50
-------------+------------------------------ F( 9, 40) = 4.57
Model | 1684.15999 9 187.128887 Prob > F = 0.0003
Residual | 1636.70545 40 40.9176363 R-squared = 0.5071
-------------+------------------------------ Adj R-squared = 0.3963
Total | 3320.86544 49 67.7727641 Root MSE = 6.3967
------------------------------------------------------------------------------
z | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y1 | 2.184007 1.264074 1.73 0.092 -.3707815 4.738795
y2 | 1.537829 1.809436 0.85 0.400 -2.119178 5.194837
y3 | 2.621185 2.140416 1.22 0.228 -1.704757 6.947127
y4 | .6024704 2.176045 0.28 0.783 -3.795481 5.000421
y5 | 1.692758 2.196725 0.77 0.445 -2.746989 6.132506
y6 | .0290429 2.094395 0.01 0.989 -4.203888 4.261974
y7 | .7794273 2.197227 0.35 0.725 -3.661333 5.220188
y8 | -2.485206 2.19327 -1.13 0.264 -6.91797 1.947558
y9 | 1.844671 1.744538 1.06 0.297 -1.681172 5.370514
_cons | .8498024 .9613522 0.88 0.382 -1.093163 2.792768
------------------------------------------------------------------------------
The F statistic is highly significant but none of the independent variables is, even without any adjustment for all 9 of them.
To see what's going on, consider the regression of $z$ against just the odd-numbered $y_i$:
Source | SS df MS Number of obs = 50
-------------+------------------------------ F( 5, 44) = 7.77
Model | 1556.88498 5 311.376997 Prob > F = 0.0000
Residual | 1763.98046 44 40.0904649 R-squared = 0.4688
-------------+------------------------------ Adj R-squared = 0.4085
Total | 3320.86544 49 67.7727641 Root MSE = 6.3317
------------------------------------------------------------------------------
z | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y1 | 2.943948 .8138525 3.62 0.001 1.303736 4.58416
y3 | 3.403871 1.080173 3.15 0.003 1.226925 5.580818
y5 | 2.458887 .955118 2.57 0.013 .533973 4.383801
y7 | -.3859711 .9742503 -0.40 0.694 -2.349443 1.577501
y9 | .1298614 .9795983 0.13 0.895 -1.844389 2.104112
_cons | 1.118512 .9241601 1.21 0.233 -.7440107 2.981034
------------------------------------------------------------------------------
Some of these variables are highly significant, even with a Bonferroni adjustment. (There's much more that can be said by looking at these results, but it would take us away from the main point.)
The intuition behind this is that $z$ depends primarily on a subset of the variables (but not necessarily on a unique subset). The complement of this subset ($y_2, y_4, y_6, y_8$) adds essentially no information about $z$ due to correlations—however slight—with the subset itself.
This sort of situation will arise in time series analysis. We can consider the subscripts to be times. The construction of the $y_i$ has induced a short-range serial correlation among them, much like many time series. Due to this, we lose little information by subsampling the series at regular intervals.
One conclusion we can draw from this is that when too many variables are included in a model they can mask the truly significant ones. The first sign of this is the highly significant overall F statistic accompanied by not-so-significant t-tests for the individual coefficients. (Even when some of the variables are individually significant, this does not automatically mean the others are not. That's one of the basic defects of stepwise regression strategies: they fall victim to this masking problem.) Incidentally, the variance inflation factors in the first regression range from 2.55 to 6.09 with a mean of 4.79: just on the borderline of diagnosing some multicollinearity according to the most conservative rules of thumb; well below the threshold according to other rules (where 10 is an upper cutoff). | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | It takes very little correlation among the independent variables to cause this.
To see why, try the following:
Draw 50 sets of ten vectors $(x_1, x_2, \ldots, x_{10})$ with coefficients iid standard | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
It takes very little correlation among the independent variables to cause this.
To see why, try the following:
Draw 50 sets of ten vectors $(x_1, x_2, \ldots, x_{10})$ with coefficients iid standard normal.
Compute $y_i = (x_i + x_{i+1})/\sqrt{2}$ for $i = 1, 2, \ldots, 9$. This makes the $y_i$ individually standard normal but with some correlations among them.
Compute $w = x_1 + x_2 + \cdots + x_{10}$. Note that $w = \sqrt{2}(y_1 + y_3 + y_5 + y_7 + y_9)$.
Add some independent normally distributed error to $w$. With a little experimentation I found that $z = w + \varepsilon$ with $\varepsilon \sim N(0, 6)$ works pretty well. Thus, $z$ is the sum of the $x_i$ plus some error. It is also the sum of some of the $y_i$ plus the same error.
We will consider the $y_i$ to be the independent variables and $z$ the dependent variable.
Here's a scatterplot matrix of one such dataset, with $z$ along the top and left and the $y_i$ proceeding in order.
The expected correlations among $y_i$ and $y_j$ are $1/2$ when $|i-j|=1$ and $0$ otherwise. The realized correlations range up to 62%. They show up as tighter scatterplots next to the diagonal.
Look at the regression of $z$ against the $y_i$:
Source | SS df MS Number of obs = 50
-------------+------------------------------ F( 9, 40) = 4.57
Model | 1684.15999 9 187.128887 Prob > F = 0.0003
Residual | 1636.70545 40 40.9176363 R-squared = 0.5071
-------------+------------------------------ Adj R-squared = 0.3963
Total | 3320.86544 49 67.7727641 Root MSE = 6.3967
------------------------------------------------------------------------------
z | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y1 | 2.184007 1.264074 1.73 0.092 -.3707815 4.738795
y2 | 1.537829 1.809436 0.85 0.400 -2.119178 5.194837
y3 | 2.621185 2.140416 1.22 0.228 -1.704757 6.947127
y4 | .6024704 2.176045 0.28 0.783 -3.795481 5.000421
y5 | 1.692758 2.196725 0.77 0.445 -2.746989 6.132506
y6 | .0290429 2.094395 0.01 0.989 -4.203888 4.261974
y7 | .7794273 2.197227 0.35 0.725 -3.661333 5.220188
y8 | -2.485206 2.19327 -1.13 0.264 -6.91797 1.947558
y9 | 1.844671 1.744538 1.06 0.297 -1.681172 5.370514
_cons | .8498024 .9613522 0.88 0.382 -1.093163 2.792768
------------------------------------------------------------------------------
The F statistic is highly significant but none of the independent variables is, even without any adjustment for all 9 of them.
To see what's going on, consider the regression of $z$ against just the odd-numbered $y_i$:
Source | SS df MS Number of obs = 50
-------------+------------------------------ F( 5, 44) = 7.77
Model | 1556.88498 5 311.376997 Prob > F = 0.0000
Residual | 1763.98046 44 40.0904649 R-squared = 0.4688
-------------+------------------------------ Adj R-squared = 0.4085
Total | 3320.86544 49 67.7727641 Root MSE = 6.3317
------------------------------------------------------------------------------
z | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
y1 | 2.943948 .8138525 3.62 0.001 1.303736 4.58416
y3 | 3.403871 1.080173 3.15 0.003 1.226925 5.580818
y5 | 2.458887 .955118 2.57 0.013 .533973 4.383801
y7 | -.3859711 .9742503 -0.40 0.694 -2.349443 1.577501
y9 | .1298614 .9795983 0.13 0.895 -1.844389 2.104112
_cons | 1.118512 .9241601 1.21 0.233 -.7440107 2.981034
------------------------------------------------------------------------------
Some of these variables are highly significant, even with a Bonferroni adjustment. (There's much more that can be said by looking at these results, but it would take us away from the main point.)
The intuition behind this is that $z$ depends primarily on a subset of the variables (but not necessarily on a unique subset). The complement of this subset ($y_2, y_4, y_6, y_8$) adds essentially no information about $z$ due to correlations—however slight—with the subset itself.
This sort of situation will arise in time series analysis. We can consider the subscripts to be times. The construction of the $y_i$ has induced a short-range serial correlation among them, much like many time series. Due to this, we lose little information by subsampling the series at regular intervals.
One conclusion we can draw from this is that when too many variables are included in a model they can mask the truly significant ones. The first sign of this is the highly significant overall F statistic accompanied by not-so-significant t-tests for the individual coefficients. (Even when some of the variables are individually significant, this does not automatically mean the others are not. That's one of the basic defects of stepwise regression strategies: they fall victim to this masking problem.) Incidentally, the variance inflation factors in the first regression range from 2.55 to 6.09 with a mean of 4.79: just on the borderline of diagnosing some multicollinearity according to the most conservative rules of thumb; well below the threshold according to other rules (where 10 is an upper cutoff). | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
It takes very little correlation among the independent variables to cause this.
To see why, try the following:
Draw 50 sets of ten vectors $(x_1, x_2, \ldots, x_{10})$ with coefficients iid standard |
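The construction above can be replayed in R in a few lines if Stata is not at hand; this is only a sketch (different seeds give different realized correlations, so the exact output will not match the tables above):
set.seed(17)
n <- 50
x <- matrix(rnorm(n * 10), n, 10)
y <- sapply(1:9, function(i) (x[, i] + x[, i + 1]) / sqrt(2))
colnames(y) <- paste0("y", 1:9)
z <- rowSums(x) + rnorm(n, sd = 6)

summary(lm(z ~ y))                        # overall F-test significant, individual t-tests mostly not
summary(lm(z ~ y[, c(1, 3, 5, 7, 9)]))    # odd-numbered subset: several clearly significant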
2,681 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | Multicollinearity
As you note, and as has been discussed in this previous question, a high level of multicollinearity is one major cause of a statistically significant $R^2$ but statistically non-significant predictors.
Of course, multicollinearity is not just about an absolute threshold. Standard errors on regression coefficients will increase as intercorrelations with the focal predictor increase.
Multiple almost significant predictors
Even if you had no multicollinearity, you can still get non-significant predictors and an overall significant model if two or more individual predictors are close to significant and thus collectively, the overall prediction passes the threshold of statistical significance. For example, using an alpha of .05, if you had two predictors with p-values of .06, and .07, then I wouldn't be surprised if the overall model had a p<.05. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | Multicollinearity
As you note, and as has been discussed in this previous question, high levels of multicollinearity is one major cause of a statistically significant $R^2$ but statically non-signif | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Multicollinearity
As you note, and as has been discussed in this previous question, a high level of multicollinearity is one major cause of a statistically significant $R^2$ but statistically non-significant predictors.
Of course, multicollinearity is not just about an absolute threshold. Standard errors on regression coefficients will increase as intercorrelations with the focal predictor increase.
Multiple almost significant predictors
Even if you had no multicollinearity, you can still get non-significant predictors and an overall significant model if two or more individual predictors are close to significant and thus collectively, the overall prediction passes the threshold of statistical significance. For example, using an alpha of .05, if you had two predictors with p-values of .06, and .07, then I wouldn't be surprised if the overall model had a p<.05. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Multicollinearity
As you note, and as has been discussed in this previous question, high levels of multicollinearity is one major cause of a statistically significant $R^2$ but statically non-signif |
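A toy R illustration of the second point (two weak, uncorrelated predictors that are individually borderline but jointly significant; the effect sizes and seed are arbitrary, so the exact p-values will vary from run to run):
set.seed(4)
n  <- 50
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 0.25 * x1 + 0.25 * x2 + rnorm(n)

summary(lm(y ~ x1 + x2))   # individual p-values can hover around .05-.10,
                           # while the overall F-test p-value is clearly smaller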
2,682 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the response variable. Consequently, the F-test has a low p-value (it is saying that the predictors together are highly significant in explaining the variation in the response variable). But the t-test for each predictor has a high p-value because after allowing for the effect of the other predictor there is not much left to explain. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the resp | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the response variable. Consequently, the F-test has a low p-value (it is saying that the predictors together are highly significant in explaining the variation in the response variable). But the t-test for each predictor has a high p-value because after allowing for the effect of the other predictor there is not much left to explain. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
This happens when the predictors are highly correlated. Imagine a situation where there are only two predictors with very high correlation. Individually, they both also correlate closely with the resp |
2,683 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | Consider the following model: $ X_1 \sim N(0,1)$, $X_2 = a X_1 + \delta$, $Y = bX_1 + cX_2 + \epsilon$, $\delta$, $\epsilon$ and $X_1$ are all mutually independent $N(0,1)$.
Then $${\rm Cov}(X_2,Y) = {\rm E}[(aX_1+\delta)(bX_1+cX_2+\epsilon)]={\rm E}[(aX_1+\delta)(\{b+ac\}X_1+c\delta+\epsilon)]=a(b+ac)+c$$
We can set this to zero with say $a=1$, $b=2$ and $c=-1$. Yet all the relations will obviously be there and easily detectable with regression analysis.
You said that you understand the issue of variables being correlated and regression being insignificant better; it probably means that you have been conditioned by frequent mentioning of multicollinearity, but you would need to boost your understanding of the geometry of least squares. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | Consider the following model: $ X_1 \sim N(0,1)$, $X_2 = a X_1 + \delta$, $Y = bX_1 + cX_2 + \epsilon$, $\delta$, $\epsilon$ and $X_1$ are all mutually independent $N(0,1)$.
Then $${\rm Cov}(X_2,Y) = | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Consider the following model: $ X_1 \sim N(0,1)$, $X_2 = a X_1 + \delta$, $Y = bX_1 + cX_2 + \epsilon$, $\delta$, $\epsilon$ and $X_1$ are all mutually independent $N(0,1)$.
Then $${\rm Cov}(X_2,Y) = {\rm E}[(aX_1+\delta)(bX_1+cX_2+\epsilon)]={\rm E}[(aX_1+\delta)(\{b+ac\}X_1+c\delta+\epsilon)]=a(b+ac)+c$$
We can set this to zero with say $a=1$, $b=2$ and $c=-1$. Yet all the relations will obviously be there and easily detectable with regression analysis.
You said that you understand the issue of variables being correlated and regression being insignificant better; it probably means that you have been conditioned by frequent mentioning of multicollinearity, but you would need to boost your understanding of the geometry of least squares. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
Consider the following model: $ X_1 \sim N(0,1)$, $X_2 = a X_1 + \delta$, $Y = bX_1 + cX_2 + \epsilon$, $\delta$, $\epsilon$ and $X_1$ are all mutually independent $N(0,1)$.
Then $${\rm Cov}(X_2,Y) = |
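A quick numerical check of this construction with $a=1$, $b=2$, $c=-1$ (large n so the sample moments settle down): $X_2$ is marginally uncorrelated with $Y$, yet the regression recovers both coefficients.
set.seed(2)
n     <- 1e5
x1    <- rnorm(n)
delta <- rnorm(n)
eps   <- rnorm(n)
x2    <- x1 + delta                    # a = 1
y     <- 2 * x1 - 1 * x2 + eps         # b = 2, c = -1

cor(x2, y)                             # approximately 0
coef(lm(y ~ x1 + x2))                  # approximately (0, 2, -1)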
2,684 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like Variance Inflation Factors (VIFs) or methods as described in the textbook "Regression Diagnostics: Identifying Influential Data and Sources of Collinearity" by Belsley, Kuh and Welsch. VIFs are much easier to understand, but they can't deal with collinearity involving the intercept (i.e., predictors that are almost constant by themselves or in a linear combination) - conversely, the BKW diagnostics are far less intuitive but can deal with collinearity involving the intercept. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like Variance Inflation Factors (VIFs) or methods as described inthe textbook "Regression | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like Variance Inflation Factors (VIFs) or methods as described in the textbook "Regression Diagnostics: Identifying Influential Data and Sources of Collinearity" by Belsley, Kuh and Welsch. VIFs are much easier to understand, but they can't deal with collinearity involving the intercept (i.e., predictors that are almost constant by themselves or in a linear combination) - conversely, the BKW diagnostics are far less intuitive but can deal with collinearity involving the intercept. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
A keyword to search for would be "collinearity" or "multicollinearity". This can be detected using diagnostics like Variance Inflation Factors (VIFs) or methods as described inthe textbook "Regression |
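For the VIF route, the vif() function in the car package is one convenient option (sketch only; it assumes the car package is installed, the collinear predictors are simulated here, and the usual 5-or-10 rule of thumb comes with the caveats mentioned above):
library(car)                            # provides vif()

set.seed(3)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)           # nearly collinear with x1
x3 <- rnorm(n)
y  <- x1 + x3 + rnorm(n)

m <- lm(y ~ x1 + x2 + x3)
vif(m)                                  # x1 and x2 have very large VIFs, x3 stays close to 1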
2,685 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | The answer you get depends on the question you ask. In addition to the points already made, the individual parameters F values and the overall model F values answer different questions, so they get different answers. I have seen this happen even when the individual F values are not that close to significant, especially if the model has more than 2 or 3 IVs. I do not know of any way to combine the individual p-values and get anything meaningful, although there may be a way. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | The answer you get depends on the question you ask. In addition to the points already made, the individual parameters F values and the overall model F values answer different questions, so they get d | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
The answer you get depends on the question you ask. In addition to the points already made, the individual parameters F values and the overall model F values answer different questions, so they get different answers. I have seen this happen even when the individual F values are not that close to significant, especially if the model has more than 2 or 3 IVs. I do not know of any way to combine the individual p-values and get anything meaningful, although there may be a way. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
The answer you get depends on the question you ask. In addition to the points already made, the individual parameters F values and the overall model F values answer different questions, so they get d |
2,686 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | One other thing to keep in mind is that the tests on the individual coefficients each assume that all of the other predictors are in the model. In other words each predictor is not significant as long as all of the other predictors are in the model. There must be some interaction or interdependence between two or more of your predictors.
As someone else asked above - how did you diagnose a lack of multicollinearity? | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | One other thing to keep in mind is that the tests on the individual coefficients each assume that all of the other predictors are in the model. In other words each predictor is not significant as long | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One other thing to keep in mind is that the tests on the individual coefficients each assume that all of the other predictors are in the model. In other words each predictor is not significant as long as all of the other predictors are in the model. There must be some interaction or interdependence between two or more of your predictors.
As someone else asked above - how did you diagnose a lack of multicollinearity? | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One other thing to keep in mind is that the tests on the individual coefficients each assume that all of the other predictors are in the model. In other words each predictor is not significant as long |
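A small simulation illustrates the phenomenon described in the answers above (the numbers are made up, and statsmodels is assumed to be available): two nearly collinear predictors that jointly explain y give a significant overall F-test while neither individual t-test is significant:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)           # almost a copy of x1
y = x1 + x2 + rng.normal(scale=3, size=n)     # y depends on their shared signal

fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print("overall F-test p-value:", fit.f_pvalue)          # typically far below 0.001
print("individual t-test p-values:", fit.pvalues[1:])   # typically both well above 0.05

Dropping either predictor makes the remaining one strongly significant, which is exactly the "each t-test assumes the other predictors are in the model" point.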
2,687 | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | One way to understand this is the geometry of least squares as @StasK suggests.
Another is to realize it means that X is related to Y when controlling for the other variables, but not alone. You say X relates to unique variance in Y. This is right. The unique variance in Y, though, is different from the total variance. So, what variance are the other variables removing?
It would help if you could tell us your variables. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests? | One way to understand this is the geometry of least squares as @StasK suggests.
Another is to realize it means that X is related to Y when controlling for the other variables, but not alone. You say | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One way to understand this is the geometry of least squares as @StasK suggests.
Another is to realize it means that X is related to Y when controlling for the other variables, but not alone. You say X relates to unique variance in Y. This is right. The unique variance in Y, though, is different from the total variance. So, what variance are the other variables removing?
It would help if you could tell us your variables. | Why is it possible to get significant F statistic (p<.001) but non-significant regressor t-tests?
One way to understand this is the geometry of least squares as @StasK suggests.
Another is to realize it means that X is related to Y when controlling for the other variables, but not alone. You say |
2,688 | Famous statistical wins and horror stories for teaching purposes | Benford's law:
Described here. Digits do not appear with uniform frequency in front of numbers, but rather follow a specific pattern: digit 1 is the most likely to be the first digit, with 30% chance, followed by 2 (17.6% chance), and so on. The following picture (from Wikipedia) shows the frequency of each digit at the beginning of each number, in some naturally-occurring datasets:
There are certain conditions under which the law holds (e.g., the data should span several scales, so stuff like people's heights is not eligible), but it is quite generic.
Perhaps the most surprising application is in fraud detection. This is based on the assumption that people who try to fabricate figures tend to distribute the digits uniformly, thus violating Benford's law.
I recall once I was explaining this to a class, and in the break one of the students came up with an accounting spreadsheet from his company, in which he had tried to validate my claims. It worked :)
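A quick way to see Benford's law in action is to take any quantity that grows multiplicatively across several orders of magnitude (the series below is an arbitrary illustration, not data from the class anecdote) and compare its leading digits with log10(1 + 1/d):

from collections import Counter
from math import log10

def first_digit(v):
    return int(10 ** (log10(v) % 1))    # leading digit via the mantissa of log10

values = [100 * 1.03 ** k for k in range(1, 1001)]   # spans several orders of magnitude
counts = Counter(first_digit(v) for v in values)

for d in range(1, 10):
    observed = counts[d] / len(values)
    expected = log10(1 + 1 / d)                       # Benford's prediction
    print(f"digit {d}: observed {observed:.3f}, Benford {expected:.3f}")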
Zipf's law
Described here: The frequency of a word in a corpus is inversely proportional to its rank. What is surprising is that this relationship holds for any corpus, even for ancient languages that have not yet been translated. An interesting video explaining more about why this pattern may hold is here. The following picture shows rank (horizontal) vs frequency (vertical) in a log-log scale for the first 10 million words in 30 Wikipedias (source). Note that the law would predict a straight line:
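For Zipf's law, a few lines of Python are enough to check the rank-frequency relationship on any plain-text corpus you have at hand ("corpus.txt" below is a placeholder path, not a file from this answer):

import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

for rank, (word, count) in enumerate(Counter(words).most_common(20), start=1):
    # Under Zipf's law, rank * frequency stays roughly constant
    print(f"{rank:2d}  {word:<12s} {count:8d}   rank*freq = {rank * count}")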
These two laws are powerful and counter-intuitive, and in the sense that they enhance one's understanding of the world via statistics, they could be called "statistical wins". | Famous statistical wins and horror stories for teaching purposes | Benford's law:
Described here. Digits do not appear with uniform frequency in front of numbers, but rather follow a specific pattern: digit 1 is the most likely to be the first digit, with 30% chance, | Famous statistical wins and horror stories for teaching purposes
Benford's law:
Described here. Digits do not appear with uniform frequency in front of numbers, but rather follow a specific pattern: digit 1 is the most likely to be the first digit, with 30% chance, followed by 2 (17.6% chance), and so on. The following picture (from Wikipedia) shows the frequency of each digit at the beginning of each number, in some naturally-occurring datasets:
There are certain conditions under which the law holds (e.g., the data should span several scales, so stuff like people's heights is not eligible), but it is quite generic.
Perhaps the most surprising application is in fraud detection. This is based on the assumption that people who try to fabricate figures tend to distribute the digits uniformly, thus violating Benford's law.
I recall once I was explaining this to a class, and in the break one of the students came up with an accounting spreadsheet from his company, in which he had tried to validate my claims. It worked :)
Zipf's law
Described here: The frequency of a word in a corpus is inversely proportional to its rank. What is surprising is that this relationship holds for any corpus, even for ancient languages that have not yet been translated. An interesting video explaining more about why this pattern may hold is here. The following picture shows rank (horizontal) vs frequency (vertical) in a log-log scale for the first 10 million words in 30 Wikipedias (source). Note that the law would predict a straight line:
These two laws are powerful and counter-intuitive, and in the sense that they enhance one's understanding of the world via statistics, they could be called "statistical wins". | Famous statistical wins and horror stories for teaching purposes
Benford's law:
Described here. Digits do not appear with uniform frequency in front of numbers, but rather follow a specific pattern: digit 1 is the most likely to be the first digit, with 30% chance, |
2,689 | Famous statistical wins and horror stories for teaching purposes | I really liked the German tank problem. It shows how data which is usually considered irrelevant becomes valuable information in the hands of a statistician. Furthermore, I liked the law of small numbers and the base rate fallacy. | Famous statistical wins and horror stories for teaching purposes | I really liked the German tank problem. It shows how data which is usually considered irrelevant becomes valuable information in the hands of a statistician. Furthermore, I liked the law of small n | Famous statistical wins and horror stories for teaching purposes
I really liked the German tank problem. It shows how data which is usually considered irrelevant becomes valuable information in the hands of a statistician. Furthermore, I liked the law of small numbers and the base rate fallacy. | Famous statistical wins and horror stories for teaching purposes
I really liked the German tank problem. It shows how data which is usually considered irrelevant becomes valuable information in the hands of a statistician. Furthermore, I liked the law of small n
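For readers who have not seen the German tank problem before, the standard frequentist estimate is N_hat = m(1 + 1/k) - 1, where m is the largest serial number observed and k the sample size. A tiny sketch with hypothetical serial numbers:

serials = [19, 40, 42, 60]            # hypothetical captured serial numbers
m, k = max(serials), len(serials)
n_hat = m * (1 + 1 / k) - 1           # minimum-variance unbiased estimate of the total
print(n_hat)                          # 74.0 for these numbers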
2,690 | Famous statistical wins and horror stories for teaching purposes | R vs Sally Clark is a famous case of a woman being convicted of murder because the court was unaware of basic principles of statistics and probability.
But if I have to say the thing that impressed me the most when I began studying statistics, it is regression to the mean, which also gave its name to statistical regression (even though that is a completely different thing). The Nobel Prize winner (in economics, even though he's a psychologist) Daniel Kahneman told a fascinating anecdote about how he realized how regression to the mean can lead people to false beliefs.
Edit: One other very interesting story that just came to my mind, which instead concerns the importance of missing data, is that of Abraham Wald and the bullet holes in returning warplanes. | Famous statistical wins and horror stories for teaching purposes | R vs Sally Clark is a famous case of a woman being convicted of murder because the court was unaware of basic principles of statistics and probability.
But if I have to say the thing that impressed me th | Famous statistical wins and horror stories for teaching purposes
R vs Sally Clark is a famous case of a woman being convicted of murder because the court was unaware of basic principles of statistics and probability.
But if I have to say the thing that impressed me the most when I began studying statistics, it is regression to the mean, which also gave its name to statistical regression (even though that is a completely different thing). The Nobel Prize winner (in economics, even though he's a psychologist) Daniel Kahneman told a fascinating anecdote about how he realized how regression to the mean can lead people to false beliefs.
Edit: One other very interesting story that just came to my mind, which instead concerns the importance of missing data, is that of Abraham Wald and the bullet holes in returning warplanes. | Famous statistical wins and horror stories for teaching purposes
R vs Sally Clark is a famous case of a woman being convicted of murder because the court was unaware of basic principles of statistics and probability.
But if I have to say the thing that impressed me th |
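The regression-to-the-mean effect mentioned above is easy to demonstrate with a simulation (the numbers are arbitrary): if observed scores are skill plus luck, the top scorers on a first test land closer to the overall average on a retest even though nothing about them changed:

import numpy as np

rng = np.random.default_rng(42)
n = 10_000
skill = rng.normal(100, 10, size=n)
test1 = skill + rng.normal(0, 10, size=n)
test2 = skill + rng.normal(0, 10, size=n)

top = test1 >= np.quantile(test1, 0.95)                       # top 5% on the first test
print("top group on test 1:", round(test1[top].mean(), 1))    # well above 100
print("same group on test 2:", round(test2[top].mean(), 1))   # pulled back toward 100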
2,691 | Famous statistical wins and horror stories for teaching purposes | To illustrate where ordinary intuition fails, the Monty Hall paradox is a great starter. | Famous statistical wins and horror stories for teaching purposes | To illustrate where ordinary intuition fails, the Monty Hall paradox is a great starter. | Famous statistical wins and horror stories for teaching purposes
To illustrate where ordinary intuition fails, the Monty Hall paradox is a great starter. | Famous statistical wins and horror stories for teaching purposes
To illustrate where ordinary intuition fails, the Monty Hall paradox is a great starter. |
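When intuition refuses to budge, a brute-force simulation usually settles the Monty Hall argument (a generic sketch, not tied to any particular source): switching wins about 2/3 of the time, staying about 1/3:

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a goat door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        final = next(d for d in range(3) if d != pick and d != opened) if switch else pick
        wins += (final == car)
    return wins / trials

print("stay:  ", play(switch=False))   # about 0.33
print("switch:", play(switch=True))    # about 0.67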
2,692 | Famous statistical wins and horror stories for teaching purposes | If sampling is a part of your course then it's hard to beat Dewey beats Truman | Famous statistical wins and horror stories for teaching purposes | If sampling is a part of your course then it's hard to beat Dewey beats Truman | Famous statistical wins and horror stories for teaching purposes
If sampling is a part of your course then it's hard to beat Dewey beats Truman | Famous statistical wins and horror stories for teaching purposes
If sampling is a part of your course then it's hard to beat Dewey beats Truman |
2,693 | Famous statistical wins and horror stories for teaching purposes | Another interesting case as to how wrong gambling can go is the Monte Carlo Casino example.
In a game of roulette at the Monte Carlo Casino on August 18, 1913, the ball fell on black 26 times in a row. This was an extremely uncommon occurrence: the probability of a sequence of either red or black occurring 26 times in a row is around 1 in 66.6 million, assuming the mechanism is unbiased. At that time, gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red.
Gambler's fallacy and Gambler's ruin give a good explanation for this example. | Famous statistical wins and horror stories for teaching purposes | Another interesting case as to how wrong gambling can go is the Monte Carlo Casino example.
In a game of roulette at the Monte Carlo Casino on August 18, 1913 the ball fell in black 26 times in a row | Famous statistical wins and horror stories for teaching purposes
Another interesting case as to how wrong gambling can go is the Monte Carlo Casino example.
In a game of roulette at the Monte Carlo Casino on August 18, 1913, the ball fell on black 26 times in a row. This was an extremely uncommon occurrence: the probability of a sequence of either red or black occurring 26 times in a row is around 1 in 66.6 million, assuming the mechanism is unbiased. At that time, gamblers lost millions of francs betting against black, reasoning incorrectly that the streak was causing an imbalance in the randomness of the wheel, and that it had to be followed by a long streak of red.
Gambler's fallacy and Gambler's ruin give a good explanation for this example. | Famous statistical wins and horror stories for teaching purposes
Another interesting case as to how wrong gambling can go is the Monte Carlo Casino example.
In a game of roulette at the Monte Carlo Casino on August 18, 1913 the ball fell in black 26 times in a row |
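The fallacy in the Monte Carlo story above is easy to check by simulation (an illustrative sketch using an 8-black run rather than 26, since the longer run is too rare to sample directly): on an unbiased single-zero wheel the next spin is black with probability 18/37 regardless of the streak that preceded it:

import random

def spin_is_black():
    return random.randrange(37) < 18          # 18 black pockets out of 37

streak = 0
after_streak = blacks_after_streak = 0
for _ in range(2_000_000):
    black = spin_is_black()
    if streak >= 8:                           # the previous 8+ spins were all black
        after_streak += 1
        blacks_after_streak += black
    streak = streak + 1 if black else 0

print("P(black | long run of blacks) ~", round(blacks_after_streak / after_streak, 3))
print("18/37 ~", round(18 / 37, 3))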
2,694 | Famous statistical wins and horror stories for teaching purposes | I find the false positive paradox remarkable because it is so counter-intuitive. A good example:
Cancer screening of the general population does not increase life expectancy, even though clearly lives are saved because some cancers are caught early and can be treated better. The U.S. Preventive Services Task Force accordingly stopped recommending routine screening for women aged 40-49 in 2009.
This is good teaching material because it is a non-trivial real-life example which concerns almost everybody at some point in their lives.
There is an article by the National Cancer Institute here.
The reasoning goes thus:
The number of cancer incidents is small so that the "number needed to treat" (read: screen) is large.
The tests are fairly reliable. But the low incidence rate leads to a large absolute false positive number with the consequence of a large number of unnecessary biopsies (>90% are false positives).
Cancer incidents fall in one of the following subsets:
Aggressive cancers which will kill the patient no matter what.
Slow cancers which will not kill the patient before they die of other causes. Detecting these is called overdiagnosis. From the USPSTF document: "Even with the conservative estimate of 1 in 8 breast cancer cases being overdiagnosed, for every woman who avoids a death from breast cancer through screening, 2 to 3 women will be treated unnecessarily."
Cancers which will be treatable even when detected late, without screening.
Cancers which are aggressive enough to kill the patient when detected late but are still treatable when detected early.
Only class 4 profits from screening, at the expense of large numbers of unnecessary hospital visits, unnecessary biopsies and a lot of sleepless nights. All of these are small but measurable health risks which accumulate over the large number needed to treat, outweighing the very real benefit for the small number in subset 4. | Famous statistical wins and horror stories for teaching purposes | I find the false positive paradox remarkable because it is so counter-intuitive. A good example:
Cancer screening of the general population does not increase life expectancy, even though clearly live | Famous statistical wins and horror stories for teaching purposes
I find the false positive paradox remarkable because it is so counter-intuitive. A good example:
Cancer screening of the general population does not increase life expectancy, even though clearly lives are saved because some cancers are caught early and can be treated better. The U.S. Preventive Services Task Force accordingly stopped recommending routine screening for women aged 40-49 in 2009.
This is good teaching material because it is a non-trivial real-life example which concerns almost everybody at some point in their lives.
There is an article by the National Cancer Institute here.
The reasoning goes thus:
The number of cancer incidents is small so that the "number needed to treat" (read: screen) is large.
The tests are fairly reliable. But the low incidence rate leads to a large absolute false positive number with the consequence of a large number of unnecessary biopsies (>90% are false positives).
Cancer incidents fall in one of the following subsets:
Aggressive cancers which will kill the patient no matter what.
Slow cancers which will not kill the patient before they die of other causes. Detecting these is called overdiagnosis. From the USPSTF document: "Even with the conservative estimate of 1 in 8 breast cancer cases being overdiagnosed, for every woman who avoids a death from breast cancer through screening, 2 to 3 women will be treated unnecessarily."
Cancers which will be treatable even when detected late, without screening.
Cancers which are aggressive enough to kill the patient when detected late but are still treatable when detected early.
Only class 4 profits from screening, at the expense of large numbers of unnecessary hospital visits, unnecessary biopsies and a lot of sleepless nights. All of these are small but measurable health risks which accumulate over the large number needed to treat, outweighing the very real benefit for the small number in subset 4. | Famous statistical wins and horror stories for teaching purposes
I find the false positive paradox remarkable because it is so counter-intuitive. A good example:
Cancer screening of the general population does not increase life expectancy, even though clearly live |
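The arithmetic behind the ">90% are false positives" point in the screening entry above is a one-screen Bayes calculation. The prevalence, sensitivity and specificity below are illustrative assumptions, not figures taken from the USPSTF document:

prevalence  = 0.005   # 0.5% of the screened group has the disease
sensitivity = 0.90    # P(test positive | disease)
specificity = 0.93    # P(test negative | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
ppv = prevalence * sensitivity / p_positive          # P(disease | test positive)

print(f"P(positive test) = {p_positive:.3f}")
print(f"PPV = {ppv:.3f}, so about {1 - ppv:.0%} of positive results are false positives")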
2,695 | Famous statistical wins and horror stories for teaching purposes | My favourite example, as an illustration of how faulty statistics can have long-term consequences when they are used to direct government policy, is the act of large-scale railway vandalism known as the Beeching Axe. It resulted from a Transport Minister with strong ties to the road-building industry (Ernest Marples) hiring a petrochemicals-industry expert (Richard Beeching) to determine which parts of the British railway network were making losses and should therefore be pruned.
About 4000 route miles were closed as a direct result, with a direct positive effect on demand for roads (and, inevitably, much of today's congestion). Further closures continued into the 1980s, including the important and relatively recently upgraded Woodhead route across the Pennines, and halting only with the case of the Settle & Carlisle line, which had once been the northern section of the Midland Railway's main line.
It is perhaps noteworthy that Marples subsequently fled the country to evade prosecution for tax fraud. Suspicions of conflicts of interest were also held at the time, as he had sold his 80% stake in his former road-building firm Marples Ridgeway (as legally required by his ministerial appointment) to his wife, thus making it easy for him to reacquire it subsequently.
One good source on the subject is "I Tried to Run a Railway" by Gerard Fiennes.
The statistical errors involved here were largely down to taking an excessively narrow view of the problem. Branch lines' stations were visited to examine their receipts and take surveys of traffic - but seasonal traffic which used the line and whose tickets were sold elsewhere in the country was ignored. In many cases costs were inflated by obsolete working practices which could have been streamlined, but this option was not considered at the point of choosing which lines would be closed entirely. This also led to some lines whose losses were only slight, and which indirectly benefited the railways as a whole through the "network effect" of being able to reach destinations without a change of mode, being included on the closure list.
These errors were repeated in the later Serpell Report which proposed an even more drastic closure program, but which was fortunately rejected.
Today, railway traffic demand is rising sharply in Britain, and lines are being newly built and reopened to meet demand. Some lines closed by Beeching and Marples' efforts would be highly beneficial if they still existed today. | Famous statistical wins and horror stories for teaching purposes | My favourite example, as an illustration of how faulty statistics can have long-term consequences when they are used to direct government policy, is the act of large-scale railway vandalism known as t | Famous statistical wins and horror stories for teaching purposes
My favourite example, as an illustration of how faulty statistics can have long-term consequences when they are used to direct government policy, is the act of large-scale railway vandalism known as the Beeching Axe. It resulted from a Transport Minister with strong ties to the road-building industry (Ernest Marples) hiring a petrochemicals-industry expert (Richard Beeching) to determine which parts of the British railway network were making losses and should therefore be pruned.
About 4000 route miles were closed as a direct result, with a direct positive effect on demand for roads (and, inevitably, much of today's congestion). Further closures continued into the 1980s, including the important and relatively recently upgraded Woodhead route across the Pennines, and halting only with the case of the Settle & Carlisle line, which had once been the northern section of the Midland Railway's main line.
It is perhaps noteworthy that Marples subsequently fled the country to evade prosecution for tax fraud. Suspicions of conflicts of interest were also held at the time, as he had sold his 80% stake in his former road-building firm Marples Ridgeway (as legally required by his ministerial appointment) to his wife, thus making it easy for him to reacquire it subsequently.
One good source on the subject is "I Tried to Run a Railway" by Gerard Fiennes.
The statistical errors involved here were largely down to taking an excessively narrow view of the problem. Branch lines' stations were visited to examine their receipts and take surveys of traffic - but seasonal traffic which used the line and whose tickets were sold elsewhere in the country was ignored. In many cases costs were inflated by obsolete working practices which could have been streamlined, but this option was not considered at the point of choosing which lines would be closed entirely. This also led to some lines whose losses were only slight, and which indirectly benefited the railways as a whole through the "network effect" of being able to reach destinations without a change of mode, being included on the closure list.
These errors were repeated in the later Serpell Report which proposed an even more drastic closure program, but which was fortunately rejected.
Today, railway traffic demand is rising sharply in Britain, and lines are being newly built and reopened to meet demand. Some lines closed by Beeching and Marples' efforts would be highly beneficial if they still existed today. | Famous statistical wins and horror stories for teaching purposes
My favourite example, as an illustration of how faulty statistics can have long-term consequences when they are used to direct government policy, is the act of large-scale railway vandalism known as t |
2,696 | Famous statistical wins and horror stories for teaching purposes | Nice QA! here my two cents: It is mainly about how correlation can be very suspicious and some traditional ways to work it out:
https://www.tylervigen.com/spurious-correlations
https://en.wikipedia.org/wiki/Anscombe%27s_quartet
https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
To elaborate a little, the canon for correlation vs. causation in modern statistics is certainly Judea Pearl. Nielsen's (web)book provides a good review:
http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/ | Famous statistical wins and horror stories for teaching purposes | Nice QA! here my two cents: It is mainly about how correlation can be very suspicious and some traditional ways to work it out:
https://www.tylervigen.com/spurious-correlations
https://en.wikipedia.or | Famous statistical wins and horror stories for teaching purposes
Nice QA! here my two cents: It is mainly about how correlation can be very suspicious and some traditional ways to work it out:
https://www.tylervigen.com/spurious-correlations
https://en.wikipedia.org/wiki/Anscombe%27s_quartet
https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
To elaborate a little, the canon for correlation vs. causation in modern statistics is certainly Judea Pearl. Nielsen's (web)book provides a good review:
http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/ | Famous statistical wins and horror stories for teaching purposes
Nice QA! here my two cents: It is mainly about how correlation can be very suspicious and some traditional ways to work it out:
https://www.tylervigen.com/spurious-correlations
https://en.wikipedia.or |
2,697 | Famous statistical wins and horror stories for teaching purposes | I don't know if this counts as "intuition falls short", but rather a "naive analysis gives a counter intuitive, and misleading, answer".
One of my stats professors introduced a study regarding the connection between smoking and FEV in young students.
FEV can be considered to be a measure of lung volume. When the professor first introduced the data, he asked what we thought the relation would be. We all thought that smoking would be linked to lower FEV. However, looking at the data, that was not true! In fact, smokers had larger FEV's than non-smokers. Was this class being taught by a smoking-denialist?
Then, he reanalyzed the data, but this time adjusting for age. When this was done, we saw what we expected to see: a negative impact of smoking on FEV. This was because smokers were much more likely to be older students than younger students. While smoking did have a negative impact on their FEV, the effect was not large enough to completely offset the increase in FEV that comes with growing up.
A link to a walk through of the data in R can be found here. | Famous statistical wins and horror stories for teaching purposes | I don't know if this counts as "intuition falls short", but rather a "naive analysis gives a counter intuitive, and misleading, answer".
One of my stats professors introduced a study regarding the co | Famous statistical wins and horror stories for teaching purposes
I don't know if this counts as "intuition falls short", but rather a "naive analysis gives a counter intuitive, and misleading, answer".
One of my stats professors introduced a study regarding the connection between smoking and FEV in young students.
FEV can be considered to be a measure of lung volume. When the professor first introduced the data, he asked what we thought the relation would be. We all thought that smoking would be linked to lower FEV. However, looking at the data, that was not true! In fact, smokers had larger FEV's than non-smokers. Was this class being taught by a smoking-denialist?
Then, he reanalyzed the data, but this time adjusting for age. When this was done, we saw what we expected to see: a negative impact of smoking on FEV. This was because smokers were much more likely to be older students than younger students. While smoking did have a negative impact on their FEV, the effect was not large enough to completely offset the increase in FEV that comes with growing up.
A link to a walk through of the data in R can be found here. | Famous statistical wins and horror stories for teaching purposes
I don't know if this counts as "intuition falls short", but rather a "naive analysis gives a counter intuitive, and misleading, answer".
One of my stats professors introduced a study regarding the co |
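The pattern in the FEV example above is ordinary confounding, and a made-up simulation reproduces it (the coefficients below are invented, not estimates from the real FEV data): smoking truly lowers FEV, but smokers are mostly older children, so the crude comparison points the wrong way:

import numpy as np

rng = np.random.default_rng(7)
n = 2_000
age = rng.uniform(6, 18, size=n)
smoker = rng.random(n) < (age - 6) / 30               # older children smoke more often
fev = 0.8 + 0.25 * age - 0.3 * smoker + rng.normal(0, 0.4, size=n)

crude = fev[smoker].mean() - fev[~smoker].mean()
print("crude smoker minus non-smoker difference:", round(crude, 2))   # typically positive

X = np.column_stack([np.ones(n), age, smoker])        # adjust for age by least squares
beta, *_ = np.linalg.lstsq(X, fev, rcond=None)
print("age-adjusted smoking coefficient:", round(beta[2], 2))         # close to the true -0.3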
2,698 | Famous statistical wins and horror stories for teaching purposes | The failure to show the association between launch temperature and its effect on the space shuttle o-rings, leading to the catastrophic failure of the Challenger soon after launch. An overview of the problem is here. | Famous statistical wins and horror stories for teaching purposes | The failure to show the association between launch temperature and its effect on the space shuttle o-rings, leading to the catastrophic failure of the Challenger soon after launc | Famous statistical wins and horror stories for teaching purposes
The failure to show the association between launch temperature and its effect on the space shuttle o-rings, leading to the catastrophic failure of the Challenger soon after launch. An overview of the problem is here. | Famous statistical wins and horror stories for teaching purposes
The failure to show the association between launch temperature and its effect on the space shuttle o-rings, leading to the catastrophic failure of the Challenger soon after launc
2,699 | Famous statistical wins and horror stories for teaching purposes | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
For the last year and a half Bloomberg News has made periodic estimates of Tesla 3 production using multiple data sources. They just terminated this work but I think the history is interesting. | Famous statistical wins and horror stories for teaching purposes | Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
| Famous statistical wins and horror stories for teaching purposes
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
For the last year and a half Bloomberg News has made periodic estimates of Tesla 3 production using multiple data sources. They just terminated this work but I think the history is interesting. | Famous statistical wins and horror stories for teaching purposes
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
2,700 | Understanding ROC curve | I'm not sure I got the question, but since the title asks for explaining ROC curves, I'll try.
ROC Curves are used to see how well your classifier can separate positive and negative examples and to identify the best threshold for separating them.
To be able to use the ROC curve, your classifier has to be ranking - that is, it should be able to rank examples such that the ones with higher rank are more likely to be positive. For example, Logistic Regression outputs probabilities, which is a score you can use for ranking.
Drawing ROC curve
Given a data set and a ranking classifier:
order the test examples by the score from the highest to the lowest
start in $(0, 0)$
for each example $x$ in the sorted order
if $x$ is positive, move $1/\text{pos}$ up
if $x$ is negative, move $1/\text{neg}$ right
where $\text{pos}$ and $\text{neg}$ are the numbers of positive and negative examples, respectively.
This nice gif-animated picture should illustrate this process more clearly
On this graph, the $y$-axis is true positive rate, and the $x$-axis is false positive rate.
Note the diagonal line - this is the baseline, that can be obtained with a random classifier. The further our ROC curve is above the line, the better.
Area Under ROC
The area under the ROC curve (shaded) naturally shows how far the curve is from the baseline. For the baseline it's 0.5, and for the perfect classifier it's 1.
You can read more about AUC ROC in this question: What does AUC stand for and what is it?
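A minimal Python sketch of the drawing procedure above, on a small made-up set of scores and labels, with the AUC accumulated as the area under the resulting steps (the numbers are purely illustrative):

scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]        # 1 = positive, aligned with scores

ranked = [y for _, y in sorted(zip(scores, labels), reverse=True)]   # highest score first
pos, neg = sum(ranked), len(ranked) - sum(ranked)

fpr, tpr = [0.0], [0.0]
for y in ranked:
    if y == 1:
        fpr.append(fpr[-1])                    # move up for a positive
        tpr.append(tpr[-1] + 1 / pos)
    else:
        fpr.append(fpr[-1] + 1 / neg)          # move right for a negative
        tpr.append(tpr[-1])

auc = sum((fpr[i] - fpr[i - 1]) * tpr[i] for i in range(1, len(fpr)))
print(list(zip(fpr, tpr)))
print("AUC:", auc)                             # 0.8 for these labels

Here pos and neg are counts, which is why each positive example moves the curve up by 1/pos and each negative moves it right by 1/neg.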
Selecting the Best Threshold
I'll outline briefly the process of selecting the best threshold, and more details can be found in the reference.
To select the best threshold, you see each point of your ROC curve as a separate classifier. This mini-classifier uses the score of that point as the boundary between + and - (i.e. it classifies as + all examples scoring above the current one).
Depending on the pos/neg fraction in our data set - parallel to the baseline in the case of 50%/50% - you build iso-accuracy lines and take the one with the best accuracy.
Here's a picture that illustrates that and for details I again invite you to the reference
Reference
http://mlwiki.org/index.php/ROC_Analysis | Understanding ROC curve | I'm not sure I got the question, but since the title asks for explaining ROC curves, I'll try.
ROC Curves are used to see how well your classifier can separate positive and negative examples and to i | Understanding ROC curve
I'm not sure I got the question, but since the title asks for explaining ROC curves, I'll try.
ROC Curves are used to see how well your classifier can separate positive and negative examples and to identify the best threshold for separating them.
To be able to use the ROC curve, your classifier has to be ranking - that is, it should be able to rank examples such that the ones with higher rank are more likely to be positive. For example, Logistic Regression outputs probabilities, which is a score you can use for ranking.
Drawing ROC curve
Given a data set and a ranking classifier:
order the test examples by the score from the highest to the lowest
start in $(0, 0)$
for each example $x$ in the sorted order
if $x$ is positive, move $1/\text{pos}$ up
if $x$ is negative, move $1/\text{neg}$ right
where $\text{pos}$ and $\text{neg}$ are the numbers of positive and negative examples, respectively.
This nice gif-animated picture should illustrate this process more clearly
On this graph, the $y$-axis is true positive rate, and the $x$-axis is false positive rate.
Note the diagonal line - this is the baseline, that can be obtained with a random classifier. The further our ROC curve is above the line, the better.
Area Under ROC
The area under the ROC curve (shaded) naturally shows how far the curve is from the baseline. For the baseline it's 0.5, and for the perfect classifier it's 1.
You can read more about AUC ROC in this question: What does AUC stand for and what is it?
Selecting the Best Threshold
I'll outline briefly the process of selecting the best threshold, and more details can be found in the reference.
To select the best threshold, you see each point of your ROC curve as a separate classifier. This mini-classifier uses the score of that point as the boundary between + and - (i.e. it classifies as + all examples scoring above the current one).
Depending on the pos/neg fraction in our data set - parallel to the baseline in the case of 50%/50% - you build iso-accuracy lines and take the one with the best accuracy.
Here's a picture that illustrates that and for details I again invite you to the reference
Reference
http://mlwiki.org/index.php/ROC_Analysis | Understanding ROC curve
I'm not sure I got the question, but since the title asks for explaining ROC curves, I'll try.
ROC Curves are used to see how well your classifier can separate positive and negative examples and to i |