Dataset columns:

idx               int64    1 to 56k
question          string   lengths 15 to 155
answer            string   lengths 2 to 29.2k
question_cut      string   lengths 15 to 100
answer_cut        string   lengths 2 to 200
conversation      string   lengths 47 to 29.3k
conversation_cut  string   lengths 47 to 301
4,201
Interview question: If correlation doesn't imply causation, how do you detect causation?
There are a few ways around this. You are right that A/B testing is one of them. The 2019 economics Nobel was awarded for pioneering field experiments in the study of anti-poverty policies, which do exactly this. Otherwise, you could pursue one of the following alternatives.

Selection on observables. Probably the most popular approach. You assume that, conditional on some control variables, treatment assignment is random. In the potential outcomes framework, under a binary treatment you could state this assumption as $Y_i(1), Y_i(0) \perp T_i \mid X_i$, where $T_i\in\{0,1\}$, $Y_i(t)$ is unit $i$'s outcome under treatment status $t$, and $X_i$ is a vector of $i$'s characteristics. The ideal way to achieve this is to randomize $T_i$. Other approaches that rely on this assumption are matching (including ML methods such as causal trees), inverse probability weighting, and the more ubiquitous method of adding $X_i$ as additional covariates in a linear regression. Computer science has gifted us with the theory of directed acyclic graphs for causal inference, which helps us think about which variables are good and which are bad to include in $X_i$.

Regression discontinuity designs. This method is very popular because it offers a credible interpretation of results as causal. To illustrate the idea, take the example of a spatial discontinuity. Suppose there was an earthquake and kids in a certain zone were mandated not to go to school for 3 months, while kids just outside the border had no disruption to their schooling. You can compare kids just inside the zone to those just outside, and plausibly the only thing that differs between them is school attendance. You can then regress their subsequent years of schooling, college attendance, etc., on which side of the border they lived, and get the causal effect of school attendance. Note that choosing the right window around the discontinuity and implementing the RD estimator is a subtle question, and there is a literature behind this (see @olooney's comment to this answer).

Instrumental variables. This is similar to regression discontinuity but usually much more difficult to defend. An instrument is a variable that you believe is correlated with the outcome only through the treatment status (that is, through the variable whose effect you want to measure). If this is the case, you can use two-stage least squares to estimate the causal effect. This genre has a small library's worth of research on how things can go wrong if the assumptions fail, and even if they do not. Note that an RD can provide a valid instrument: in the earthquake example, which side of the boundary someone lived on can be an instrument for school attendance, because it is plausibly not correlated with anything else that explains the outcomes. Other clever strategies in this category are shift-share (Bartik) instruments, which also have a literature exploring the assumptions they rely on.

Difference-in-differences. This method relaxes the assumption of selection on observables. It moves to a before-after setting and compares the average outcome change in the treatment group to the average outcome change in the control group. The assumption it makes is that of parallel trends: that the average change in the treatment group would have been the same as that in the control group had it not received the treatment. This method is incredibly popular because it is more robust than selection on observables, and settings where it can be credibly applied are more common than for regression discontinuity or instrumental variables. A famous example is the minimum wage study of Card and Krueger, who compared fast-food workers in New Jersey and neighboring eastern Pennsylvania before and after New Jersey raised its minimum wage. A relatively recent variant of this method is synthetic control, which constructs an artificial control group and then does diff-in-diff; you may or may not find it credible.
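As a minimal illustration of the difference-in-differences logic described above, here is a simulated sketch in R; the variable names, effect sizes and the built-in parallel trend are all made up for illustration.

# Hypothetical difference-in-differences sketch (simulated data, made-up effect sizes).
set.seed(1)
n <- 1000
group  <- rbinom(n, 1, 0.5)          # 1 = treated group, 0 = control group
period <- rbinom(n, 1, 0.5)          # 1 = after the policy change, 0 = before
true_effect <- 2
# Parallel trends built in: both groups share the same time trend (+1 in the "after" period).
y <- 5 + 1.5 * group + 1 * period + true_effect * group * period + rnorm(n)

# The coefficient on group:period is the diff-in-diff estimate of the causal effect.
did <- lm(y ~ group * period)
summary(did)$coefficients["group:period", ]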
4,202
Interview question: If correlation doesn't imply causation, how do you detect causation?
I would like to give you a philosophical and a scientific answer. In theory and in principle, causality cannot be observed. It never has been and never will be. Let's take a simple example: when you hit the buttons of your keyboard and letters appear on your screen while typing a post on this website, you assume a causal effect. Firstly, because you observe a correlation between hitting the keys and letters appearing on your screen. And secondly, because you have a model of causality in your mind of what is happening, which you find plausible (basically, that the keyboard is an input device used to type). However, neither of the two is causality itself, and you cannot observe causality directly. It could be that an invisible demon creates the letters on your screen every time you hit the keys. That is the philosophical point of view and answer. The scientific answer concerns how to establish causality in practice: you need to manipulate your input data, control for everything else and observe the effect. Since you are not a psychologist designing a study but someone analyzing data, that means you need data over time. For example, if your assumption is that living in a densely populated city increases the risk of suffering from clinical depression, then you will need a sample of people who lived in a big city and later developed clinical depression, and not just a positive correlation between the variables "lives in a big city" and "suffers from clinical depression". You will also need to control for other independent variables. Another way to achieve this would be a laboratory setting where you can explicitly manipulate variables (and where it is much easier to control for other independent variables). That approach, however, is not so closely related to data science.
4,203
Interview question: If correlation doesn't imply causation, how do you detect causation?
Briefly... Option 1: a randomized controlled trial, the 'gold standard'. Option 2: draw a causal diagram of your system, a directed acyclic graph of how you and others think the system operates. Decide whether one can infer causation from an observational study via the back-door criterion, the front-door criterion, or other conditional-independence methods, and collect data on the relevant variables (see Judea Pearl). Then build a statistical model using 1 and 2. Tread with caution, as neither your DAG, your statistical model, nor your data are perfect. For a gentle introduction, see Pearl's The Book of Why.
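To make the DAG step concrete, here is a small sketch using the dagitty R package (my choice of tool, not something named in the answer; the three-node graph with a single confounder Z is a made-up toy example).

# Toy example of the back-door criterion with the 'dagitty' package.
# The graph itself is hypothetical: Z confounds the X -> Y relationship.
library(dagitty)

g <- dagitty("dag {
  Z -> X
  Z -> Y
  X -> Y
}")

# Which variables must we condition on to identify the causal effect of X on Y?
adjustmentSets(g, exposure = "X", outcome = "Y")   # should return { Z }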
4,204
Interview question: If correlation doesn't imply causation, how do you detect causation?
Not sure this adds anything, but if you need another thought from philosophy: back in the day (the 1960s), we were taught in a philosophy class that Hume's three criteria for causality required (1) temporal precedence (the presumed cause is prior in time); (2) an observable empirical correlation; and (3) that all rival hypotheses have been ruled out. Assuming criterion #3 to be practically impossible to satisfy, it would follow that causation will forever be impossible to demonstrate.
4,205
Interview question: If correlation doesn't imply causation, how do you detect causation?
In short, to detect causation directly, we need to control for everything else. For example, you plant two trees using the same soil, the same amount of water, the same time under the light, and so on, but with two different fertilizers. If everything else is the same and tree A is growing faster, then we may say that the fertilizer given to tree A causes faster growth. We can draw that kind of conclusion only if we assume that everything else is indeed the same. This may be difficult to check, so in practice it remains an assumption. For example, the two trees may have different genes, and one of those genes may cause faster growth.
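A tiny simulated sketch of this idea in R (the growth numbers are made up): when everything else is held identical by construction and only the fertilizer differs, a plain comparison of group means recovers the causal effect.

# Simulated trees: identical conditions except the fertilizer (hypothetical numbers).
set.seed(42)
n_per_group <- 30
growth_A <- rnorm(n_per_group, mean = 12, sd = 2)   # fertilizer A
growth_B <- rnorm(n_per_group, mean = 10, sd = 2)   # fertilizer B

# Because everything else is the same by construction, the mean difference is causal here.
t.test(growth_A, growth_B)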
4,206
Interview question: If correlation doesn't imply causation, how do you detect causation?
You cannot establish causation by analyzing the same data that shows the correlation. Sammy above gave an example hypothesis: living in big cities causes mental disorders. The study he proposes has only two features, location and mental disorder status, and it can show only correlation, not causation. There is always the possibility that people with a tendency toward mental disorders prefer to live in big cities, rather than cities causing the disorders. Some additional attributes have to be involved. These may be attributes that explain the dependence; for example, one may consider the level of noise as an independent variable. As another option, one may include time in the study, to observe the process of how one thing causes another. In particular, one may consider the same people who lived both in cities and in the countryside at different times of their lives, to see where the disorder occurred more often for those people. In any case, there has to be additional information, either explaining the causation or recording the process of influence.
4,207
Interview question: If correlation doesn't imply causation, how do you detect causation?
I'm going to focus on a narrow topic: what if you can't do a two-group experiment, either randomized or observational? What if you have only one group? Or what if you are talking about some national policy change where, because the change happened to the entire country, there's no obvious control group? I think you can attribute causation in some limited circumstances here. In the clinical setting, health services researchers obviously prefer to conduct randomized clinical trials where possible, and the standard is to take a before-treatment and an after-treatment measurement in each arm. In a very limited number of clinical settings, we might be able to make some causal inference in single-arm studies, as discussed by Scott Evans: "...single arm trials are best utilized when the natural history of the disease is well understood, when placebo effects are minimal or nonexistent, and when a placebo control is not ethically desirable. Such designs may be considered when spontaneous improvement in participants is not expected, placebo effects are not large, and randomization to a placebo may not be ethical. On the other hand, such designs would not be good choices for trials investigating treatments for chronic pain because of the large placebo effect in these trials." In my interpretation: say you have some very severe disease whose mortality rate is well known and pretty high. Say we know that 80% of patients die within one year of contracting disease X, and we have a case series (i.e. a set of cases alone, without controls) where patients were given drug Y and we observed a mortality rate of 30%. In that scenario, I think many researchers would be willing to cautiously attribute causation. It might not be viable to conduct a randomized trial, and if no two-arm observational studies were available, we would probably be willing to make recommendations based on just a case series. How does this thinking extend to other scenarios, like the national intervention I mentioned? I think economists have encountered this scenario more. There are a number of studies about the outcomes associated with Medicaid (in the US, this program provides health insurance for the poor, which is an oversimplification, but it will do). The thing is, Medicaid is controlled by the states (as opposed to the Federal, i.e. national, government), and some states expanded Medicaid earlier than others. I believe economists have used this disparity to attempt to attribute causation, but I'm less familiar with that set of methods. In health services research, hospital checklists are a nice parallel because of the risk of spillover. You would ideally find, say, 60 hospitals and randomize 30 of them to start using checklists. This is very hard to pull off. You might be a researcher at one hospital, and the only thing you might be able to do is a before vs. after comparison. Here, you would probably want to make the pre- and post-intervention periods as long as you possibly could. I am not familiar with the issues of causation in this sort of scenario.
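To make the case-series reasoning concrete, here is a one-line sketch in R with made-up numbers (30 deaths among 100 treated patients, compared against an assumed historical one-year mortality of 80%).

# Hypothetical case series: 30 deaths among 100 treated patients,
# compared against an assumed historical one-year mortality of 80%.
binom.test(x = 30, n = 100, p = 0.80, alternative = "less")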
4,208
Why do transformers use layer norm instead of batch norm?
It seems that it has been standard to use batchnorm in CV tasks and layernorm in NLP tasks. The original Attention Is All You Need paper tested only NLP tasks and thus used layernorm, and even with the rise of transformers in CV applications, layernorm is still the most commonly used, so I'm not completely certain about the pros and cons of each. But I do have some personal intuitions, which I'll admit aren't grounded in theory, but which I'll nevertheless try to elaborate on in the following.

Recall that in batchnorm, the mean and variance statistics used for normalization are calculated across all elements of all instances in a batch, for each feature independently. By "element" and "instance," I mean "word" and "sentence" respectively for an NLP task, and "pixel" and "image" for a CV task. For layernorm, on the other hand, the statistics are calculated across the feature dimension, for each element and instance independently (source). In transformers, they are calculated across all features and all elements, for each instance independently. The illustration from this recent article conveys the difference between batchnorm and layernorm (in the case of transformers, where the normalization stats are calculated across all features and all elements for each instance independently, that would correspond to the entire left face of the cube in the image being colored blue).

Now onto the reasons why batchnorm is less suitable for NLP tasks. In NLP tasks the sentence length often varies; thus, with batchnorm, it would be unclear what the appropriate normalization constant (the total number of elements to divide by during normalization) should be. Different batches would have different normalization constants, which leads to instability over the course of training. According to the paper that provided the image linked above, "statistics of NLP data across the batch dimension exhibit large fluctuations throughout training. This results in instability, if BN is naively implemented." (That paper proposes an improvement upon batchnorm for use in transformers, called PowerNorm, which improves performance on NLP tasks compared to either batchnorm or layernorm.)

Another intuition is that in the past (before transformers), RNN architectures were the norm. Within recurrent layers it is again unclear how to compute the normalization statistics (should you consider previous words which passed through the recurrent layer?), so it is much more straightforward to normalize each word independently of the others in the same sentence. Of course this reason does not apply to transformers, since computation on words in transformers has no time-dependency on previous words, and thus you can normalize across the sentence dimension too (in the picture referenced above, that would correspond to the entire left face of the cube being colored blue).

It may also be worth checking out instance normalization and group normalization; I'm no expert on either, but apparently each has its merits.
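To make the "which axes are averaged over" distinction concrete, here is a small base-R sketch; the array dimensions are arbitrary, and the three variants follow the description above (batchnorm per feature, per-element layernorm, and transformer-style per-instance layernorm).

# Sketch of which axes the normalization statistics are computed over (made-up dimensions).
set.seed(0)
batch <- 4; tokens <- 6; features <- 8
x <- array(rnorm(batch * tokens * features), dim = c(batch, tokens, features))

# Batch norm: one mean per feature, pooled over all instances and all tokens.
bn_means <- apply(x, 3, mean)          # length = features
# Layer norm (per-element flavour): one mean per (instance, token), over the features.
ln_means <- apply(x, c(1, 2), mean)    # dim = batch x tokens
# Transformer-style layer norm: one mean per instance, over all tokens and features.
tln_means <- apply(x, 1, mean)         # length = batch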
4,209
Why do transformers use layer norm instead of batch norm?
A lesser-known issue with batch norm is how hard it is to parallelize batch-normalized models. Since there is a dependence between the elements of a batch, additional synchronization across devices is needed. While this is not an issue for most vision models, which tend to be trained on a small set of devices, transformers really suffer from this problem, as they rely on large-scale setups to counter their quadratic complexity. In this regard, layer norm provides some degree of normalization while incurring no batch-wise dependence.
4,210
Why do transformers use layer norm instead of batch norm?
If you want to normalize over a slice of the data that contains all of the features of a single instance, rather than over the group of instances sent together as a batch, that is layer norm. For a transformer, such normalization is efficient because the relevance (attention) matrix is computed in one go over all the tokens of an instance. The first answer explains this very well for both modalities (text and image).
4,211
How can I change the title of a legend in ggplot2? [closed]
Another option is to use

p + labs(aesthetic = 'custom text')

For example, Chase's example would look like:

library(ggplot2)
ex.data <- data.frame(DV = rnorm(2*4*3), V2 = rep(1:2, each = 4*3), V4 = rep(1:4, each = 3), V3 = 1:3)
p <- qplot(V4, DV, data = ex.data, geom = "line", group = V3, linetype = factor(V3)) + facet_grid(. ~ V2)
p + labs(linetype = 'custom title')

and yields the corresponding figure.
4,212
How can I change the title of a legend in ggplot2? [closed]
You can change the title of the legend by modifying the scale for that legend. Here's an example using the CO2 dataset:

library(ggplot2)
p <- qplot(conc, uptake, data = CO2, colour = Type) + scale_colour_discrete(name = "Fancy Title")
p <- p + facet_grid(. ~ Treatment)
p

EDIT: Using the example data from above, here is a working solution. I think this mimics the plot that @drknexus is trying to create. As a side note, if anyone can explain why we have to treat V3 as a factor for it to be mapped to the legend, I'd appreciate it.

p <- qplot(V4, DV, data = ex.data, geom = "line", group = V3, lty = factor(V3))
p <- p + scale_linetype_discrete(name = "Fancy Title") + facet_grid(. ~ V2)
p
4,213
Resources for learning Markov chain and hidden Markov models
Here are some tutorials (available as PDFs):

Dugad and Desai, A tutorial on hidden Markov models
Valeria De Fonzo, Filippo Aluffi-Pentini and Valerio Parisi (2007). Hidden Markov Models in Bioinformatics. Current Bioinformatics, 2, 49-61.
Smith, K. Hidden Markov Models in Bioinformatics with Application to Gene Finding in Human DNA

Also take a look at the Bioconductor tutorials. I assume you want free resources; otherwise, Bioinformatics from Polanski and Kimmel (Springer, 2007) provides a nice overview (§2.8-2.9) and applications (Part II).
4,214
Resources for learning Markov chain and hidden Markov models
There is also a really good book by Olivier Cappé et al.: Inference in Hidden Markov Models. However, it is fairly theoretical and very light on applications. There is another book with examples in R, but I couldn't stand it: Hidden Markov Models for Time Series. P.S. The speech recognition community also has a ton of literature on this subject.
4,215
Resources for learning Markov chain and hidden Markov models
It is quite surprising that none of the answers mention the Rabiner tutorial paper on HMMs. While the practical implementation (the latter part of the paper) is focused on speech recognition, this paper is probably the most commonly cited one in the HMM literature, thanks to its clear and well-presented nature. It starts by introducing Markov chains and then moves on to HMMs.
4,216
Resources for learning Markov chain and hidden Markov models
For bioinformatics applications, the classic text on HMMs would be Durbin, Eddy, Krogh & Mitchison, "Biological Sequence Analysis - Probabilistic Models of Proteins and Nucleic Acids", Cambridge University Press, 1998, ISBN 0-521-62971-3. It is technical, but very clear, and I found it very useful. For MCMC there is a recent (version of a) book by Robert and Casella, "Introducing Monte Carlo Methods with R", Springer, which looks good, but I haven't had a chance to read it yet (it uses R for examples, which is a good way to learn, but I need to learn R first ;o)
4,217
Resources for learning Markov chain and hidden Markov models
There are already some nice suggestions; I would like to add the following articles by Sean Eddy, which describe HMMs from the perspective of applications in biology:

Hidden Markov Models
Profile hidden Markov models
What is a hidden Markov model?
4,218
Resources for learning Markov chain and hidden Markov models
I learned HMMs using the great book by Walter Zucchini and Iain L. MacDonald, Hidden Markov Models for Time Series: An Introduction Using R. It's really good and features examples in R.
4,219
Resources for learning Markov chain and hidden Markov models
Take a look at the Hidden Markov Model (HMM) Toolbox for Matlab by Kevin Murphy, and also the section Recommended reading on HMMs on this site. You can also get the Probabilistic Modeling Toolkit for Matlab/Octave, with some examples of using Markov chains and HMMs. You can also find lectures and labs on HMMs, for example: Labs, Lecture 1 and Lecture 2.
4,220
Resources for learning Markov chain and hidden Markov models
My 2 cents: beautifully explained and free: Hidden Markov Models, Theory and Applications (University of Leeds tutorial).
4,221
Resources for learning Markov chain and hidden Markov models
Here are some notes by Ramon van Handel at Princeton: This course is an introduction to some of the basic mathematical, statistical and computational methods for hidden Markov models. The first section includes a nice set of applications of HMMs in biology, finance,...
4,222
Resources for learning Markov chain and hidden Markov models
Here is a nice interactive introduction to Markov Chains http://setosa.io/ev/markov-chains/
4,223
Resources for learning Markov chain and hidden Markov models
There are only 3 videos I have found really useful for understanding the maths behind hidden Markov models:

https://www.youtube.com/watch?v=E3qrns5f3Fw
https://www.youtube.com/watch?v=cjlhpaDXihE
https://www.youtube.com/watch?v=5sGEF-e82yY

These are really good and are taught by one of the best Indian professors, from IIT krg.
4,224
Resources for learning Markov chain and hidden Markov models
This playlist is a great explanation and is based on the Rabiner paper mentioned in an answer above: https://www.youtube.com/watch?v=J_y5hx_ySCg&list=PLix7MmR3doRo3NGNzrq48FItR3TDyuLCo It is a 12-part lecture series which begins with an explanation of Markov chains / observable Markov models and then moves on to HMMs.
4,225
Why does shrinkage work?
Roughly speaking, there are three different sources of prediction error:

the bias of your model
the variance of your model
unexplainable variance

We can't do anything about point 3 (except for attempting to estimate the unexplained variance and incorporating it in our predictive densities and prediction intervals). This leaves us with 1 and 2.

If you actually have the "right" model, then, say, OLS parameter estimates will be unbiased and have minimal variance among all unbiased (linear) estimators (they are BLUE). Predictions from an OLS model will be best linear unbiased predictions (BLUPs). That sounds good. However, it turns out that although we have unbiased predictions and minimal variance among all unbiased predictions, the variance can still be pretty large. More importantly, we can sometimes introduce "a little" bias and simultaneously save "a lot" of variance, and by getting the tradeoff just right, we can get a lower prediction error with a biased (lower variance) model than with an unbiased (higher variance) one. This is called the "bias-variance tradeoff", and this question and its answers are enlightening: When is a biased estimator preferable to an unbiased one?

Regularization methods like the lasso, ridge regression, the elastic net and so forth do exactly that. They pull the model towards zero. (Bayesian approaches are similar; they pull the model towards the priors.) Thus, regularized models will be biased compared to non-regularized models, but they also have lower variance. If you choose your regularization right, the result is a prediction with lower error. If you search for "bias-variance tradeoff regularization" or similar, you will get some food for thought. This presentation, for instance, is useful.

EDIT: amoeba quite rightly points out that I am handwaving as to why exactly regularization yields lower variance of models and predictions. Consider a lasso model with a large regularization parameter $\lambda$. If $\lambda\to\infty$, your lasso parameter estimates will all be shrunk to zero. A fixed parameter value of zero has zero variance. (This is not entirely correct, since the threshold value of $\lambda$ beyond which your parameters will be shrunk to zero depends on your data and your model. But given the model and the data, you can find a $\lambda$ such that the model is the zero model. Always keep your quantifiers straight.) However, the zero model will of course also have a giant bias. It doesn't care about the actual observations, after all.

The same applies to not-all-that-extreme values of your regularization parameter(s): small values will yield parameter estimates close to the unregularized ones, which will be less biased (unbiased if you have the "correct" model) but have higher variance. They will "jump around", following your actual observations. Larger values of your regularization parameter $\lambda$ will "constrain" your parameter estimates more and more. This is why the methods have names like "lasso" or "elastic net": they constrain the freedom of your parameters to float around and follow the data.

(I am writing up a little paper on this, which will hopefully be rather accessible. I'll add a link once it's available.)
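A small simulation sketch of this bias-variance point (the design, sample sizes and lambda value are all made up for illustration): ridge estimates computed with the closed-form formula are pulled towards zero, so they are biased, but across repeated samples they vary less than OLS, and for a reasonable lambda their total mean squared error tends to be lower.

# Ridge vs OLS on repeated simulated datasets (made-up setup: small n, noisy outcome).
set.seed(123)
p <- 10; n <- 30; beta <- rep(1, p); lambda <- 5
ridge <- function(X, y, lambda) solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y))

est_ols <- est_ridge <- matrix(NA, nrow = 200, ncol = p)
for (r in 1:200) {
  X <- matrix(rnorm(n * p), n, p)
  y <- X %*% beta + rnorm(n, sd = 3)
  est_ols[r, ]   <- ridge(X, y, 0)       # lambda = 0 recovers OLS
  est_ridge[r, ] <- ridge(X, y, lambda)
}

# Ridge: smaller variance, some bias, usually lower total MSE for the coefficients.
c(var_ols   = mean(apply(est_ols,   2, var)),
  var_ridge = mean(apply(est_ridge, 2, var)),
  mse_ols   = mean((est_ols   - rep(beta, each = 200))^2),
  mse_ridge = mean((est_ridge - rep(beta, each = 200))^2))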
4,226
Why does shrinkage work?
Just to add something to @Kolassa's fine answer: the whole question of shrinkage estimates is bound up with Stein's paradox. For multivariate processes with $p \geq 3$, the vector of sample averages is not admissible. In other words, for some parameter value, there is a different estimator with lower expected risk. Stein proposed a shrinkage estimator as an example. So we're dealing with the curse of dimensionality, since shrinkage does not help you when you have only 1 or 2 independent variables. Read this answer for more.

Apparently, Stein's paradox is related to the well-known theorem that a Brownian motion process in 3 or more dimensions is non-recurrent (it wanders all over the place without returning to the origin), whereas one- and two-dimensional Brownian motions are recurrent.

Stein's paradox holds regardless of what you shrink towards, although in practice it does better if you shrink towards the true parameter values. This is what Bayesians do. They think they know where the true parameter is and they shrink towards it. Then they claim that Stein validates their existence.

It's called a paradox precisely because it does challenge our intuition. However, if you think of Brownian motion, the only way to get a 3D Brownian motion to return to the origin would be to impose a damping penalty on the steps. A shrinkage estimator also imposes a sort of damper on the estimates (it reduces variance), which is why it works.
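For concreteness, here is a small base-R sketch of the classic (positive-part) James-Stein estimator, one particular shrink-towards-zero estimator. The dimension, replication count and unit noise variance are invented for illustration; the simulation just shows that its total squared-error risk beats the raw sample mean vector when $p \geq 3$.

    set.seed(42)
    p <- 10                                    # dimension (Stein's paradox needs p >= 3)
    theta <- rnorm(p)                          # true mean vector
    james_stein <- function(x) {
      # shrink the observation towards zero; assumes unit-variance noise
      shrink <- max(0, 1 - (length(x) - 2) / sum(x^2))   # "positive part" version
      shrink * x
    }
    risk <- replicate(10000, {
      x <- theta + rnorm(p)                    # one noisy observation of theta
      c(mle = sum((x - theta)^2),              # squared error of the raw estimate
        js  = sum((james_stein(x) - theta)^2))
    })
    rowMeans(risk)   # the 'js' row is typically smaller than the 'mle' row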
4,227
Why does shrinkage work?
@Kolassa has a great mathematical answer. For a more intuitive visual answer, here is a picture. I'm doing simple linear regression here with a slope and y-intercept. The population consists of 17 loosely correlated points. At random I picked two points and created a regression. In general, 2 points is not enough observations, and my regression lines are going to vary wildly in shape and quality. However, the $R^2$ will be perfect, since the line passes through both of my sampled points. The solid lines (R1 through R5) represent these regressions. The dashed lines (G1 through G5) represent the same regressions with a shrinkage effect applied.

Shrinkage pulls the slope towards zero. This isn't an arbitrary value: we are stating that this parameter is less likely to have an effect. In my 2D linear regression we are stating that the values are less likely to be correlated. It is a way of softening our result and fighting against overfitting. It makes sense that when we only have a few points out of the total population, we're more likely to see spurious correlation.

Shrinkage doesn't always yield a better result. Going from R3 to G3 we ended up with a poorer estimate of the final regression. It is simply more likely to yield a better regression. Shrinkage isn't just a matter of rotating the final regression line towards zero: when you change the slope, you need to change the y-intercept as well. In this case we're taking a line that goes exactly through both points and ending up with a line that goes somewhere through the middle. You can see that the variance of the dashed lines is considerably lower than the variance of the solid lines, as we would expect.

Imagine there was no noise. Shrinkage would then be terrible: any two points we pick would give us the perfect line, and applying shrinkage would only make the result worse.

If you want further explanation, Josh Starmer of StatQuest has a great video here.
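A base-R sketch of the same idea (the 17-point population, the pair-sampling and the single penalty value are all made up for illustration): repeatedly draw two points from a noisy linear population, fit the exact-interpolation line and a ridge-shrunk line with an unpenalized intercept, and compare how much the fitted coefficients vary.

    set.seed(7)
    x_pop <- 1:17
    y_pop <- 0.5 * x_pop + rnorm(17, sd = 2)        # loosely correlated "population"
    fit_pair <- function(lambda) {
      idx <- sample(17, 2)
      x <- x_pop[idx]; y <- y_pop[idx]
      sxx <- sum((x - mean(x))^2)
      sxy <- sum((x - mean(x)) * (y - mean(y)))
      slope <- sxy / (sxx + lambda)                 # lambda = 0 gives the interpolating OLS line
      c(slope = slope, intercept = mean(y) - slope * mean(x))
    }
    ols   <- replicate(5000, fit_pair(lambda = 0))
    ridge <- replicate(5000, fit_pair(lambda = 5))
    apply(ols,   1, var)    # slopes and intercepts jump around a lot
    apply(ridge, 1, var)    # shrunken fits vary much less (at the cost of some bias)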
4,228
What is maxout in neural network?
A maxout layer is simply a layer where the activation function is the max of the inputs. As stated in the paper, even an MLP with 2 maxout units can approximate any function. The authors give a couple of reasons as to why maxout may perform well, but the main one is the following.

Dropout can be thought of as a form of model averaging in which a random subnetwork is trained at every iteration and in the end the weights of the different random networks are averaged. Since one cannot average the weights explicitly, an approximation is used. This approximation is exact for a linear network.

In maxout, they do not drop the inputs to the maxout layer. Thus the identity of the input producing the max value for a data point remains unchanged. The dropout therefore only happens in the linear part of the MLP, but one can still approximate any function because of the maxout layer. As the dropout happens in the linear part only, they conjecture that this leads to more efficient model averaging, since the averaging approximation is exact for linear networks.

Their code is available here.
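As a rough illustration of what a single maxout unit computes (the weights, input size and number of pieces k below are made up, and training is ignored entirely), it is just the maximum over k affine transformations of its input:

    set.seed(1)
    maxout_unit <- function(x, W, b) {
      # W: k x length(x) matrix, b: length-k vector; one affine map per "piece"
      z <- drop(W %*% x + b)   # k linear pre-activations
      max(z)                   # the unit outputs the largest one
    }
    k <- 3; d <- 5
    W <- matrix(rnorm(k * d), nrow = k)
    b <- rnorm(k)
    maxout_unit(rnorm(d), W, b)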
4,229
What is maxout in neural network?
A maxout unit can learn a piecewise linear, convex function with up to k pieces [1]. So when k is 2, you can implement the ReLU, absolute-value ReLU, leaky ReLU, etc., or it can learn to implement a new function. If k is, say, 10, you can even learn a good approximation of a convex function.

When k is 2, the maxout neuron computes the function $\max(w_1^Tx+b_1, w_2^Tx + b_2)$. Both ReLU and leaky ReLU are special cases of this form (for example, for ReLU we have $w_1 = 0, b_1 = 0$). The maxout neuron therefore enjoys all the benefits of a ReLU unit (linear regime of operation, no saturation) and does not have its drawbacks (dying ReLU). However, unlike ReLU neurons, it doubles the number of parameters for every single neuron, leading to a high total number of parameters [2].

You can read the details here: 1. DL book 2. http://cs231n.github.io/neural-networks-1
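A one-line base-R check (scalar inputs and hand-picked weights, purely for illustration) that the k = 2 maxout form contains ReLU and leaky-ReLU-style activations as special cases:

    relu <- function(x) pmax(0, x)
    # a k = 2 maxout "neuron" for scalar input, with weights chosen by hand
    maxout2 <- function(x, w1, b1, w2, b2) pmax(w1 * x + b1, w2 * x + b2)
    x <- seq(-3, 3, by = 0.5)
    all.equal(maxout2(x, w1 = 0, b1 = 0, w2 = 1, b2 = 0), relu(x))   # TRUE: ReLU is the special case
    maxout2(x, w1 = -0.1, b1 = 0, w2 = 1, b2 = 0)                    # a leaky-ReLU-like piecewise function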
4,230
What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter?
From Dan Simon's "Optimal State Estimation":

In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter.

The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm*. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.

*The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in UKF.
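To make the linear-Gaussian baseline concrete, here is a minimal scalar Kalman filter in base R. The random-walk model, noise levels, horizon and initial belief are invented for illustration only.

    set.seed(3)
    # simulate a scalar random walk observed with noise: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
    T_len <- 100; q <- 0.1; r <- 1
    x <- cumsum(rnorm(T_len, sd = sqrt(q)))
    y <- x + rnorm(T_len, sd = sqrt(r))

    kalman_1d <- function(y, q, r) {
      n <- length(y); xhat <- numeric(n)
      m <- 0; P <- 10                  # deliberately vague initial belief
      for (t in 1:n) {
        P <- P + q                     # predict step
        K <- P / (P + r)               # Kalman gain
        m <- m + K * (y[t] - m)        # update with the new observation
        P <- (1 - K) * P
        xhat[t] <- m
      }
      xhat
    }
    mean((kalman_1d(y, q, r) - x)^2)   # filtered estimates track the hidden state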
4,231
What is the difference between a particle filter (sequential Monte Carlo) and a Kalman filter?
From A Tutorial on Particle Filtering and Smoothing: Fifteen years later:

Since their introduction in 1993, particle filters have become a very popular class of numerical methods for the solution of optimal estimation problems in non-linear non-Gaussian scenarios. In comparison with standard approximation methods, such as the popular Extended Kalman Filter, the principal advantage of particle methods is that they do not rely on any local linearisation technique or any crude functional approximation. The price that must be paid for this flexibility is computational: these methods are computationally expensive. However, thanks to the availability of ever-increasing computational power, these methods are already used in real-time applications appearing in fields as diverse as chemical engineering, computer vision, financial econometrics, target tracking and robotics. Moreover, even in scenarios in which there are no real-time constraints, these methods can be a powerful alternative to Markov chain Monte Carlo (MCMC) algorithms — alternatively, they can be used to design very efficient MCMC schemes.

In short, the particle filter is more flexible, as it does not assume linearity or Gaussian noise in the data, but it is more computationally expensive. It represents the distribution by drawing and weighting random samples, rather than by a mean and covariance matrix as for a Gaussian distribution.
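A bare-bones bootstrap particle filter in base R for a scalar state-space model with heavy-tailed observation noise. The model, noise distributions and particle count are illustrative choices, and no resampling refinements (stratified resampling, effective-sample-size thresholds, etc.) are included.

    set.seed(4)
    # state: x_t = 0.9 * x_{t-1} + w_t,   observation: y_t = x_t + v_t  (t-distributed noise)
    T_len <- 100
    x <- numeric(T_len)
    for (t in 2:T_len) x[t] <- 0.9 * x[t - 1] + rnorm(1, sd = 0.5)
    y <- x + rt(T_len, df = 3)               # non-Gaussian observation noise

    N <- 2000                                # number of particles
    particles <- rnorm(N, sd = 1)
    xhat <- numeric(T_len)
    for (t in 1:T_len) {
      particles <- 0.9 * particles + rnorm(N, sd = 0.5)             # propagate through the dynamics
      w <- dt(y[t] - particles, df = 3)                             # weight by the observation likelihood
      w <- w / sum(w)
      xhat[t] <- sum(w * particles)                                 # weighted mean = filtered estimate
      particles <- sample(particles, N, replace = TRUE, prob = w)   # multinomial resampling
    }
    mean((xhat - x)^2)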
4,232
ANOVA assumption normality/normal distribution of residuals
Let's assume this is a fixed effects model. (The advice doesn't really change for random-effects models, it just gets a little more complicated.)

First let us distinguish the "residuals" from the "errors": the former are the differences between the responses and their predicted values, while the latter are random variables in the model. With sufficiently large amounts of data and a good fitting procedure, the distribution of the residuals will approximately look like the residuals were drawn randomly from the error distribution (and will therefore give you good information about the properties of that distribution). The assumptions, therefore, are about the errors, not the residuals.

No, normality (of the responses) and normal distribution of errors are not the same. Suppose you measured yield from a crop with and without a fertilizer application. In plots without fertilizer the yield ranged from 70 to 130. In two plots with fertilizer the yield ranged from 470 to 530. The distribution of results is strongly non-normal: it's clustered at two locations related to the fertilizer application. Suppose further the average yields are 100 and 500, respectively. Then all residuals range from -30 to +30, and so the errors will be expected to have a comparable distribution. The errors might (or might not) be normally distributed, but obviously this is a completely different distribution.

The distribution of the residuals matters, because those reflect the errors, which are the random part of the model. Note also that the p-values are computed from F (or t) statistics and those depend on residuals, not on the original values.

If there are significant and important effects in the data (as in this example), then you might be making a "grave" mistake. You could, by luck, make the correct determination: that is, by looking at the raw data you will be seeing a mixture of distributions, and this can look normal (or not). The point is that what you're looking at is not relevant.

ANOVA residuals don't have to be anywhere close to normal in order to fit the model. However, unless you have an enormous amount of data, near-normality of the residuals is essential for p-values computed from the F-distribution to be meaningful.
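A quick base-R version of the fertilizer example (the numbers are invented to match the ranges described above): the raw responses are clearly bimodal, while the residuals from the one-way model look like a single, roughly normal distribution.

    set.seed(5)
    fertilizer <- factor(rep(c("no", "yes"), each = 50))
    yield <- ifelse(fertilizer == "no", 100, 500) + rnorm(100, sd = 10)

    fit <- aov(yield ~ fertilizer)
    shapiro.test(yield)             # raw responses: strongly non-normal (two clusters)
    shapiro.test(residuals(fit))    # residuals: consistent with normality
    # qqnorm(residuals(fit)); qqline(residuals(fit))   # visual check, if a plotting device is available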
4,233
ANOVA assumption normality/normal distribution of residuals
Standard classical one-way ANOVA can be viewed as an extension of the classical "2-sample t-test" to an "n-sample t-test". This can be seen by comparing a one-way ANOVA with only two groups to the classical 2-sample t-test.

I think where you are getting confused is that (under the assumptions of the model) the residuals and the raw data are BOTH normally distributed. However, the raw data consist of normal distributions with different means (unless all the effects are exactly the same) but the same variance. The residuals, on the other hand, all have the same normal distribution. This comes from the third assumption of homoscedasticity and from the fact that the normal distribution is decomposable into mean and variance components: if $Y_{ij}$ has a normal distribution with mean $\mu_{j}$ and variance $\sigma^2$, it can be written as $Y_{ij}=\mu_{j}+\sigma\epsilon_{ij}$, where $\epsilon_{ij}$ has a standard normal distribution.

While ANOVA is derivable from the assumption of normality, I think (but am unsure) it can be replaced by an assumption of linearity (along the Best Linear Unbiased Estimator (BLUE) lines of estimation, where "best" is interpreted as minimum mean square error). I believe this basically involves replacing the distribution for $\epsilon_{ij}$ with any mutually independent distribution (over all i and j) which has mean 0 and variance 1.

In terms of looking at your raw data, it should look normal when plotted separately for each factor level in your model. This means plotting $Y_{ij}$ for each j on a separate graph.
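A tiny base-R illustration of the decomposition $Y_{ij}=\mu_{j}+\sigma\epsilon_{ij}$ (group means, group sizes and $\sigma$ are invented): the groups differ only in their means, so once the group means are removed, what remains is a single standard-normal-looking error distribution.

    set.seed(6)
    mu    <- c(2, 5, 9)                 # illustrative group means
    sigma <- 1.5
    g     <- rep(1:3, each = 40)
    eps   <- rnorm(120)                 # standard-normal errors, shared across groups
    Y     <- mu[g] + sigma * eps        # Y_ij = mu_j + sigma * eps_ij

    tapply(Y, g, mean)                                            # groups differ in location only
    tapply(Y - mu[g], g, function(v) shapiro.test(v)$p.value)     # group-wise normality of the error part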
4,234
ANOVA assumption normality/normal distribution of residuals
In the one-way case with $p$ groups of size $n_{j}$: $F = \frac{SS_{b} / df_{b}}{SS_{w} / df_{w}}$ where $SS_{b} = \sum_{j=1}^{p} n_{j} (M - M_{j})^{2}$ and $SS_{w} = \sum_{j=1}^{p}\sum_{i=1}^{n_{j}} (y_{ij} - M_{j})^{2}$.

$F$ follows an $F$-distribution if $SS_{b} / df_{b}$ and $SS_{w} / df_{w}$ are independent, $\chi^{2}$-distributed variables with $df_{b}$ and $df_{w}$ degrees of freedom, respectively. This is the case when $SS_{b}$ and $SS_{w}$ are the sum of squared independent normal variables with mean $0$ and equal scale. Thus $M-M_{j}$ and $y_{ij}-M_{j}$ must be normally distributed. $y_{i(j)} - M_{j}$ is the residual from the full model ($Y = \mu_{j} + \epsilon = \mu + \alpha_{j} + \epsilon$), $y_{i(j)} - M$ is the residual from the restricted model ($Y = \mu + \epsilon$). The difference of these residuals is $M - M_{j}$.

EDIT to reflect clarification by @onestop: under $H_{0}$ all true group means are equal (and thus equal to $M$), thus normality of the group-level residuals $y_{i(j)} - M_{j}$ implies normality of $M - M_{j}$ as well. The DV values themselves need not be normally distributed.
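A base-R check of these formulas on made-up data, computing $SS_{b}$, $SS_{w}$ and $F$ by hand and comparing with the built-in ANOVA table:

    set.seed(8)
    g <- factor(rep(1:3, times = c(10, 12, 8)))
    y <- rnorm(length(g), mean = c(0, 0.5, 1)[g])

    M   <- mean(y)                       # grand mean
    Mj  <- tapply(y, g, mean)            # group means
    nj  <- tapply(y, g, length)
    SSb <- sum(nj * (Mj - M)^2)
    SSw <- sum((y - Mj[g])^2)
    dfb <- nlevels(g) - 1
    dfw <- length(y) - nlevels(g)
    Fstat <- (SSb / dfb) / (SSw / dfw)

    c(by_hand = Fstat,
      aov     = summary(aov(y ~ g))[[1]][["F value"]][1])   # the two should agree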
4,235
Should I normalize word2vec's word vectors before using them?
When the downstream applications only care about the direction of the word vectors (e.g. they only pay attention to the cosine similarity of two words), then normalize, and forget about length. However, if the downstream applications are able to (or need to) consider other aspects, such as word significance or consistency in word usage (see below), then normalization might not be such a good idea.

From Levy et al., 2015 (and, actually, most of the literature on word embeddings): Vectors are normalized to unit length before they are used for similarity calculation, making cosine similarity and dot-product equivalent.

Also from Wilson and Schakel, 2015: Most applications of word embeddings explore not the word vectors themselves, but relations between them to solve, for example, similarity and word relation tasks. For these tasks, it was found that using normalised word vectors improves performance. Word vector length is therefore typically ignored.

Normalizing is equivalent to losing the notion of length. That is, once you normalize the word vectors, you forget the length (norm, module) they had right after the training phase. However, sometimes it's worth taking the original length of the word vectors into consideration. Schakel and Wilson, 2015 observed some interesting facts regarding the length of word vectors: A word that is consistently used in a similar context will be represented by a longer vector than a word of the same frequency that is used in different contexts. Not only the direction, but also the length of word vectors carries important information. Word vector length furnishes, in combination with term frequency, a useful measure of word significance.
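A small base-R illustration with a made-up embedding matrix (real word2vec vectors would be loaded from a trained model; the words and dimensions are arbitrary): after row-normalization, the dot product of two word vectors equals their cosine similarity, but the original lengths are gone.

    set.seed(9)
    emb <- matrix(rnorm(5 * 50), nrow = 5,
                  dimnames = list(c("cat", "dog", "car", "truck", "the"), NULL))

    cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
    emb_norm <- emb / sqrt(rowSums(emb^2))         # unit-length rows

    cosine(emb["cat", ], emb["dog", ])             # cosine on the raw vectors
    sum(emb_norm["cat", ] * emb_norm["dog", ])     # identical after normalization
    sqrt(rowSums(emb^2))                           # the lengths that normalization discards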
4,236
A/B tests: z-test vs t-test vs chi square vs fisher exact test
We use these tests for different reasons and under different circumstances.

$z$-test. A $z$-test assumes that our observations are independently drawn from a Normal distribution with unknown mean and known variance. A $z$-test is used primarily when we have quantitative data (e.g. weights of rodents, ages of individuals, systolic blood pressure). However, $z$-tests can also be used when interested in proportions (e.g. the proportion of people who get at least eight hours of sleep).

$t$-test. A $t$-test assumes that our observations are independently drawn from a Normal distribution with unknown mean and unknown variance. Note that with a $t$-test, we do not know the population variance. This is far more common than knowing the population variance, so a $t$-test is generally more appropriate than a $z$-test, but practically there will be little difference between the two if sample sizes are large.

With $z$- and $t$-tests, your alternative hypothesis will be that the population mean (or population proportion) of one group is either not equal to, less than, or greater than the population mean (or proportion) of the other group. This will depend on the type of analysis you seek to do, but your null and alternative hypotheses directly compare the means/proportions of the two groups.

Chi-squared test. Whereas $z$- and $t$-tests concern quantitative data (or proportions in the case of $z$), chi-squared tests are appropriate for qualitative data. Again, the assumption is that observations are independent of one another. In this case, you aren't seeking a particular relationship. Your null hypothesis is that no relationship exists between variable one and variable two. Your alternative hypothesis is that a relationship does exist. This doesn't give you specifics as to how this relationship exists (i.e. in which direction the relationship goes), but it will provide evidence that a relationship does (or does not) exist between your independent variable and your groups.

Fisher's exact test. One drawback to the chi-squared test is that it is asymptotic: the $p$-value is accurate only for large sample sizes. If your sample sizes are small, the $p$-value may not be quite as accurate. Fisher's exact test allows you to calculate the $p$-value exactly, rather than relying on approximations that will be poor when your sample sizes are small.

I keep discussing sample sizes - different references will give you different metrics as to when your samples are large enough. I would just find a reputable source, look at their rule, and apply their rule to find the test you want. I would not "shop around", so to speak, until you find a rule that you "like".

Ultimately, the test you choose should be based on a) your sample size and b) what form you want your hypotheses to take. If you are looking for a specific effect from your A/B test (for example, my B group has higher test scores), then I would opt for a $z$-test or $t$-test, depending on sample size and whether the population variance is known. If you want to show that a relationship merely exists (for example, my A group and B group differ based on the independent variable, but I don't care which group has higher scores), then the chi-squared or Fisher's exact test is appropriate, depending on sample size.

Does this make sense? Hope this helps!
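In R, the standard implementations map directly onto these choices. The conversion counts and the "time on page" values below are invented purely to show the calls:

    # A/B conversion counts (invented numbers)
    conversions <- c(A = 200, B = 245)
    visitors    <- c(A = 1000, B = 1000)

    prop.test(conversions, visitors)                 # large-sample test of two proportions (z-style)
    tab <- rbind(converted = conversions, not_converted = visitors - conversions)
    chisq.test(tab)                                  # chi-squared test of association
    fisher.test(tab)                                 # exact version, safest for small counts

    # a quantitative outcome (e.g. time on page), compared with a t-test
    set.seed(10)
    time_A <- rnorm(40, mean = 60, sd = 15)
    time_B <- rnorm(40, mean = 66, sd = 15)
    t.test(time_B, time_A)                           # Welch two-sample t-test by default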
4,237
A/B tests: z-test vs t-test vs chi square vs fisher exact test
For comparing three groups you usually use an ANOVA rather than three separate pairwise tests. Please also read up on the Bonferroni correction before doing multiple testing. Please use this: https://www.google.com/search?q=testing+multiple+means&rlz=1C1CHBD_enIN817IN817&oq=testing+multiple+means+&aqs=chrome..69i57j69i60l3j69i61j0.3564j0j7&sourceid=chrome&ie=UTF-8
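A minimal R sketch of that workflow on invented data for three variants: one overall F-test, followed by Bonferroni-corrected pairwise comparisons.

    set.seed(11)
    group   <- factor(rep(c("A", "B", "C"), each = 30))
    outcome <- rnorm(90, mean = c(10, 10, 12)[group])

    summary(aov(outcome ~ group))                                     # single overall test across 3 variants
    pairwise.t.test(outcome, group, p.adjust.method = "bonferroni")   # follow-up comparisons, corrected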
4,238
Graph for relationship between two ordinal variables
A spineplot (mosaic plot) works well for the example data here, but can be difficult to read or interpret if some combinations of categories are rare or don't exist. Naturally it's reasonable, and expected, that a low frequency is represented by a small tile, and zero by no tile at all, but the psychological difficulty can remain. It's also natural that people fond of spineplots choose examples which work well for their papers or presentations, but I've often produced examples that were too messy to use in public. Conversely, a spineplot does use the available space well. Some implementations presuppose interactive graphics, so that the user can interrogate each tile to learn more about it.

An alternative which can also work quite well is a two-way bar chart (many other names exist). See for example tabplot within http://www.surveydesign.com.au/tipsusergraphs.html

For these data, one possible plot (produced using tabplot in Stata, but it should be easy in any decent software) is shown in the figure (not reproduced here). The format means it is easy to relate individual bars to row and column identifiers, and you can annotate with frequencies, proportions or percents (don't do that if you think the result is too busy, naturally).

Some possibilities:

1. If one variable can be thought of as a response to the other as predictor, then it is worth plotting it on the vertical axis as usual. Here I think of "importance" as measuring an attitude, the question then being whether it affects behaviour ("often"). The causal issue is often more complicated, even for these imaginary data, but the point remains.

2. Suggestion #1 is always to be trumped if the reverse works better, meaning it is easier to think about and interpret.

3. Percent or probability breakdowns often make sense. A plot of raw frequencies can be useful too. (Naturally, this plot lacks the virtue of mosaic plots of showing both kinds of information at once.)

4. You can of course try the (much more common) alternatives of grouped bar charts or stacked bar charts (or the still fairly uncommon grouped dot charts in the sense of W.S. Cleveland). In this case, I don't think they work as well, but sometimes they work better.

5. Some might want to colour different response categories differently. I've no objection, and if you want that you wouldn't take objections seriously anyway.

6. The strategy of hybridising graph and table can be useful more generally, or indeed not what you want at all. An often repeated argument is that the separation of figures and tables was just a side-effect of the invention of printing and the division of labour it produced; it's once more unnecessary, just as it was for manuscript writers, who put illustrations exactly how and where they liked.
4,239
Graph for relationship between two ordinal variables
Here is a quick attempt at a heat map. I have used black cell borders to break up the cells, but perhaps the tiles should be separated more, as in Glen_b's answer.

library(ggplot2)
runningcounts.df <- as.data.frame(table(importance, often))
ggplot(runningcounts.df, aes(importance, often)) +
  geom_tile(aes(fill = Freq), colour = "black") +
  scale_fill_gradient(low = "white", high = "steelblue")

Here is a fluctuation plot based on an earlier comment by Andy W. As he describes them, "they are basically just binned scatterplots for categorical data, and the size of a point is mapped to the number of observations that fall within that bin." For a reference see Wickham, Hadley and Heike Hofmann. 2011. Product plots. IEEE Transactions on Visualization and Computer Graphics (Proc. Infovis '11). Pre-print PDF

theme_nogrid <- function(base_size = 12, base_family = "") {
  theme_bw(base_size = base_size, base_family = base_family) %+replace%
    theme(panel.grid = element_blank())
}
ggplot(runningcounts.df, aes(importance, often)) +
  geom_point(aes(size = Freq, color = Freq), shape = 15) +  # stat/position defaults suffice and don't belong inside aes()
  scale_size_continuous(range = c(3, 15)) +
  scale_color_gradient(low = "white", high = "black") +
  theme_nogrid()
4,240
Graph for relationship between two ordinal variables
Here's an example of what a spineplot of the data would look like. I did this in Stata pretty quickly, but there's an R implementation. I think in R it should be just:

    spineplot(factor(often) ~ factor(importance))

The spineplot actually seems to be the default if you give R categorical variables:

    plot(factor(often) ~ factor(importance))

The fractional breakdown of the categories of often is shown for each category of importance. Stacked bars are drawn with the vertical dimension showing the fraction of often given the importance category. The horizontal dimension shows the fraction in each importance category. Thus the areas of the tiles formed represent the frequencies, or more generally totals, for each cross-combination of importance and often.
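For anyone who wants to try this without the original data, here is a minimal self-contained sketch; the two ordered factors below are hypothetical stand-ins for the question's importance and often variables, not the actual survey data.

    # simulate two related 5-point ordinal variables (hypothetical data)
    set.seed(1)
    importance <- factor(sample(1:5, 300, replace = TRUE, prob = c(.1, .15, .2, .25, .3)),
                         levels = 1:5, ordered = TRUE)
    often <- factor(pmin(5, pmax(1, as.integer(importance) + sample(-1:1, 300, replace = TRUE))),
                    levels = 1:5, ordered = TRUE)

    # spineplot: tile widths give the marginal distribution of importance,
    # tile heights give the conditional distribution of often within each column
    spineplot(often ~ importance)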
4,241
Graph for relationship between two ordinal variables
The way I've done this is a bit of a fudge, but it could be fixed up easily enough. This is a modified version of the jittering approach. Removing the axes reduces the temptation to interpret the scale as continuous; drawing boxes around the jittered combinations emphasizes that there's something like a "scale break" - that the intervals aren't necessarily equal. Ideally the 1..5 labels should be replaced with the category names, but I'll leave that to the imagination for now; I think it conveys the sense of it.

    plot(jitter(often) ~ jitter(importance), data = running.df, bty = "n",
         ylim = c(0.5, 5.5), xlim = c(0.5, 5.5), cex = 0.5, pty = "s",
         xaxt = "n", yaxt = "n")
    axis(1, tick = TRUE, col = 0)
    axis(2, tick = TRUE, col = 0)
    rect(rep(seq(0.75, 4.75, 1), 5), rep(seq(0.75, 4.75, 1), each = 5),
         rep(seq(1.25, 5.25, 1), 5), rep(seq(1.25, 5.25, 1), each = 5),
         border = 8)

Possible refinements: (i) making the breaks smaller (I prefer larger breaks than this, personally), and (ii) attempting to use a quasirandom sequence to reduce the incidence of apparent pattern within the boxes. While my attempt helped somewhat, you can see that in the cells with smaller numbers of points there are still subsequences with a more or less correlated look (e.g. the box in the top row, 2nd column). To avoid that, the quasi-random sequence might have to be initialized for each sub-box. (An alternative might be Latin hypercube sampling.) Once that was sorted out, this could be inserted into a function that works exactly like jitter.

    library("fOptions")
    hjit <- runif.halton(dim(running.df)[1], 2)

    xjit <- (hjit[, 1] - .5) * 0.8
    yjit <- (hjit[, 2] - .5) * 0.8

    plot(I(often + yjit) ~ I(importance + xjit), data = running.df, bty = "n",
         ylim = c(0.5, 5.5), xlim = c(0.5, 5.5), cex = 0.5, pty = "s",
         xaxt = "n", yaxt = "n")
    axis(1, tick = TRUE, col = 0)
    axis(2, tick = TRUE, col = 0)
    rect(rep(seq(0.55, 4.55, 1), 5), rep(seq(0.55, 4.55, 1), each = 5),
         rep(seq(1.45, 5.45, 1), 5), rep(seq(1.45, 5.45, 1), each = 5),
         border = 8)
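As a follow-up to the remark about wrapping this in a jitter-like function, here is a rough sketch of what that wrapper could look like. The function name qjitter() and its interface are my own invention (hypothetical), not an existing R function; it mimics jitter() but draws from a Halton sequence via fOptions, and it does not yet do the per-box re-initialization suggested above.

    # quasi-random jitter: like jitter(), but using a Halton sequence
    library(fOptions)
    qjitter <- function(x, amount = 0.4, dim = 1) {
      h <- runif.halton(length(x), 2)[, dim]   # quasi-random draws in [0, 1]
      x + (h - 0.5) * 2 * amount               # spread them over +/- amount
    }

    # usage, assuming running.df as in the code above:
    # plot(qjitter(running.df$often, dim = 2) ~ qjitter(running.df$importance, dim = 1),
    #      bty = "n", cex = 0.5)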
4,242
Graph for relationship between two ordinal variables
Using the R package riverplot:

    data$importance <- factor(data$importance,
                              labels = c("not at all important",
                                         "somewhat unimportant",
                                         "neither important nor unimportant",
                                         "somewhat important",
                                         "very important"))
    data$often <- factor(data$often,
                         labels = c("never",
                                    "less than once per fortnight",
                                    "once every one or two weeks",
                                    "two or three times per week",
                                    "four or more times per week"))

    makeRivPlot <- function(data, var1, var2, ...) {
      require(plyr)
      require(riverplot)
      require(RColorBrewer)

      names1 <- levels(data[, var1])
      names2 <- levels(data[, var2])

      var1 <- as.numeric(data[, var1])
      var2 <- as.numeric(data[, var2])

      edges <- data.frame(var1, var2 + max(var1, na.rm = TRUE))
      edges <- count(edges)
      colnames(edges) <- c("N1", "N2", "Value")

      nodes <- data.frame(
        ID = 1:(max(var1, na.rm = TRUE) + max(var2, na.rm = TRUE)),
        x  = c(rep(1, times = max(var1, na.rm = TRUE)),
               rep(2, times = max(var2, na.rm = TRUE))),
        labels = c(names1, names2),
        col = c(brewer.pal(max(var1, na.rm = TRUE), "Set1"),
                brewer.pal(max(var2, na.rm = TRUE), "Set1")),
        stringsAsFactors = FALSE)
      nodes$col <- paste(nodes$col, 95, sep = "")

      return(makeRiver(nodes, edges))
    }

    a <- makeRivPlot(data, "importance", "often")
    riverplot(a, srt = 45)
4,243
Graph for relationship between two ordinal variables
A faceted bar chart in R. It shows the distribution of "often" at each level of "importance" very clearly. But it wouldn't have worked so well if the maximum count had varied more between levels of "importance"; it's easy enough to set scales="free_y" in ggplot (see here) to avoid lots of empty space, but the shape of the distribution would be hard to discern at low-frequency levels of "importance" since the bars would be so small. Perhaps in those situations it is better to use relative frequency (conditional probability) on the vertical axis instead; a sketch of that variant follows below. It isn't as "clean" as the tabplot in Stata that Nick Cox linked to, but it conveys similar information. R code:

    library(ggplot2)
    running2.df <- data.frame(
      often = factor(often, labels = c("never",
                                       "less than once per fortnight",
                                       "once every one or two weeks",
                                       "two or three times per week",
                                       "four or more times per week")),
      importance = factor(importance, labels = c("not at all important",
                                                 "somewhat unimportant",
                                                 "neither important nor unimportant",
                                                 "somewhat important",
                                                 "very important")))

    ggplot(running2.df, aes(often)) +
      geom_bar() +
      facet_wrap(~ importance, ncol = 1) +
      theme(axis.text.x = element_text(angle = -45, hjust = 0)) +
      theme(axis.title.x = element_blank())
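Here is the relative-frequency variant mentioned above, as a sketch of my own (precomputing the proportions and using geom_col() are my choices, not part of the original answer). It assumes running2.df as constructed in the code just shown; each panel then shows the conditional distribution of "often" within that level of "importance", so panel shapes stay comparable even when panel totals differ a lot.

    library(ggplot2)
    # proportions of "often" within each level of "importance"
    prop.df <- as.data.frame(prop.table(table(running2.df$importance, running2.df$often),
                                        margin = 1))
    names(prop.df) <- c("importance", "often", "prop")

    ggplot(prop.df, aes(often, prop)) +
      geom_col() +
      facet_wrap(~ importance, ncol = 1) +
      ylab("proportion within importance level") +
      theme(axis.text.x = element_text(angle = -45, hjust = 0),
            axis.title.x = element_blank())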
4,244
Graph for relationship between two ordinal variables
A different idea that I didn't think of originally was a sieve plot. The size of each tile is proportional to the expected frequency; the little squares inside the rectangles represent actual frequencies. Hence greater density of the squares indicates higher than expected frequency (and is shaded blue); lower density of squares (red) is for lower than expected frequency. I think I'd prefer it if the color represented the size, not just the sign, of the residual. This is particularly true for edge cases where expected and observed frequencies are similar and the residual is close to zero; a dichotomous red/blue scheme seems to overemphasise small deviations. Implementation in R:

    library(vcd)
    runningcounts.df <- as.data.frame(table(importance, often))
    sieve(Freq ~ often + importance, data = runningcounts.df, shade = TRUE)
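On the point about colour carrying the magnitude of the residual: base R's mosaicplot() gives a quick, stepped version of this out of the box, colouring tiles by Pearson residuals binned at |r| = 2 and 4. This snippet is my own addition (not part of the original answer) and reuses the importance and often vectors from the question.

    # shade = TRUE colours tiles by binned Pearson residuals, so colour intensity
    # reflects residual size as well as sign
    mosaicplot(table(importance, often), shade = TRUE,
               main = "importance vs often (residual-shaded mosaic)")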
4,245
Can a random forest be used for feature selection in multiple linear regression?
Since RF can handle non-linearity but can't provide coefficients, would it be wise to use Random Forest to gather the most important Features and then plug those features into a Multiple Linear Regression model in order to explain their signs?

I interpret OP's one-sentence question to mean that OP wishes to understand the desirability of the following analysis pipeline:

1. Fit a random forest to some data.
2. By some metric of variable importance from (1), select a subset of high-quality features.
3. Using the variables from (2), estimate a linear regression model. This will give OP access to the coefficients that OP notes RF cannot provide.
4. From the linear model in (3), qualitatively interpret the signs of the coefficient estimates.

I don't think this pipeline will accomplish what you'd like. Variables that are important in random forest don't necessarily have any sort of linearly additive relationship with the outcome. This remark shouldn't be surprising: it's what makes random forest so effective at discovering nonlinear relationships.

Here's an example. I created a classification problem with 10 noise features, two "signal" features, and a circular decision boundary.

    library(randomForest)

    set.seed(1)
    N  <- 500
    x1 <- rnorm(N, sd = 1.5)
    x2 <- rnorm(N, sd = 1.5)

    y <- apply(cbind(x1, x2), 1, function(x) (x %*% x) < 1)

    plot(x1, x2, col = ifelse(y, "red", "blue"))
    lines(cos(seq(0, 2*pi, len = 1000)), sin(seq(0, 2*pi, len = 1000)))

And when we apply the RF model, we are not surprised to find that these features are easily picked out as important by the model. (NB: this model isn't tuned at all.)

    x_junk <- matrix(rnorm(N * 10, sd = 1.5), ncol = 10)
    x <- data.frame(x1, x2, x_junk)   # a data frame (not a matrix) so the formula interface works
    names(x) <- c("x1", "x2", paste("V", 3:ncol(x), sep = ""))

    rf <- randomForest(as.factor(y) ~ ., data = x, mtry = 4)
    importance(rf)

        MeanDecreaseGini
    x1         49.762104
    x2         54.980725
    V3          5.715863
    V4          5.010281
    V5          4.193836
    V6          7.147988
    V7          5.897283
    V8          5.338241
    V9          5.338689
    V10         5.198862
    V11         4.731412
    V12         5.221611

But when we down-select to just these two useful features, the resulting linear model is awful.

    summary(badmodel <- glm(y ~ ., data = data.frame(x1, x2), family = "binomial"))

The important part of the summary is the comparison of the residual deviance and the null deviance. We can see that the model does basically nothing to "move" the deviance. Moreover, the coefficient estimates are essentially zero.

    Call:
    glm(formula = as.factor(y) ~ ., family = "binomial", data = data.frame(x1, x2))

    Deviance Residuals: 
        Min       1Q   Median       3Q      Max  
    -0.6914  -0.6710  -0.6600  -0.6481   1.8079  

    Coefficients:
                 Estimate Std. Error z value Pr(>|z|)    
    (Intercept) -1.398378   0.112271 -12.455   <2e-16 ***
    x1          -0.020090   0.076518  -0.263    0.793    
    x2          -0.004902   0.071711  -0.068    0.946    
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    (Dispersion parameter for binomial family taken to be 1)

        Null deviance: 497.62  on 499  degrees of freedom
    Residual deviance: 497.54  on 497  degrees of freedom
    AIC: 503.54

    Number of Fisher Scoring iterations: 4

What accounts for the wild difference between the two models? Well, clearly the decision boundary we're trying to learn is not a linear function of the two "signal" features. Obviously if you knew the functional form of the decision boundary prior to estimating the regression, you could apply some transformation to encode the data in a way that regression could then discover... (But I've never known the form of the boundary ahead of time in any real-world problem.)
Since we're only working with two signal features in this case, a synthetic data set without noise in the class labels, that boundary between classes is very obvious in our plot. But it's less obvious when working with real data in a realistic number of dimensions. Moreover, in general, random forest can fit different models to different subsets of the data. In a more complicated example, it won't be obvious what's going on from a single plot at all, and building a linear model of similar predictive power will be even harder.

Because we're only concerned with two dimensions, we can make a prediction surface. As expected, the random forest model learns that the neighborhood around the origin is important.

    M <- 100
    x_new <- seq(-4, 4, len = M)
    x_new_grid <- expand.grid(x_new, x_new)
    names(x_new_grid) <- c("x1", "x2")
    x_pred <- data.frame(x_new_grid, matrix(nrow(x_new_grid) * 10, ncol = 10))
    names(x_pred) <- names(x)

    y_hat <- predict(object = rf, newdata = x_pred, "vote")[, 2]

    library(fields)
    y_hat_mat <- as.matrix(unstack(data.frame(y_hat, x_new_grid), y_hat ~ x1))

    image.plot(z = y_hat_mat, x = x_new, y = x_new, zlim = c(0, 1), col = tim.colors(255),
               main = "RF Prediction surface", xlab = "x1", ylab = "x2")

As implied by our abysmal model output, the prediction surface for the reduced-variable logistic regression model is basically flat.

    bad_y_hat <- predict(object = badmodel, newdata = x_new_grid, type = "response")
    bad_y_hat_mat <- as.matrix(unstack(data.frame(bad_y_hat, x_new_grid), bad_y_hat ~ x1))

    image.plot(z = bad_y_hat_mat, x = x_new, y = x_new, zlim = c(0, 1), col = tim.colors(255),
               main = "Logistic regression prediction surface", xlab = "x1", ylab = "x2")

HongOoi notes that the class membership isn't a linear function of the features, but that it is a linear function under a transformation. Because the decision boundary is $1=x_1^2+x_2^2,$ if we square these features, we will be able to build a more useful linear model. This is deliberate. While the RF model can find the signal in those two features without transformation, the analyst has to be more specific to get similarly helpful results in the GLM. Perhaps that's sufficient for OP: finding a useful set of transformations for 2 features is easier than 12. But my point is that even if a transformation will yield a useful linear model, RF feature importance won't suggest the transformation on its own.
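To make the last paragraph concrete, here is a short sketch (my own addition, not part of the original answer) of the transformed logistic regression, reusing x1, x2, y and badmodel from the simulation above. Squaring the two signal features makes the circular boundary linear in the inputs, and the deviance drops dramatically. Because the simulated labels are a deterministic function of x1^2 + x2^2, expect a complete-separation warning from glm; with noisy labels the fit would be more conventional.

    # logistic regression on the squared features; the decision boundary
    # x1^2 + x2^2 = 1 is linear in these transformed inputs
    goodmodel <- glm(y ~ I(x1^2) + I(x2^2), family = "binomial")
    summary(goodmodel)

    # compare residual deviance with the untransformed model from above
    c(bad = deviance(badmodel), good = deviance(goodmodel))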
4,246
Can a random forest be used for feature selection in multiple linear regression?
The answer by @Sycorax is fantastic. In addition to those fully described aspects of the problem related to model fit, there is another reason not to pursue a multi-step process such as running random forests, lasso, or elastic net to "learn" which features to feed to traditional regression. Ordinary regression would not know about the penalization that properly went on during the development of the random forest or the other methods, and would fit unpenalized effects that are badly biased to appear too strong in predicting $Y$. This would be no different than running stepwise variable selection and reporting the final model without taking into account how it was arrived at.
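The bias described above is easy to demonstrate by simulation. The sketch below is my own illustration (not part of the original answer): the response is pure noise, yet if a random forest picks the "best" few features and an ordinary regression is then reported on just those, the selected effects tend to look stronger than the null truth warrants.

    # selection-bias demo: y is independent of every predictor
    library(randomForest)
    set.seed(42)
    n <- 200; p <- 50
    X <- as.data.frame(matrix(rnorm(n * p), ncol = p))
    y <- rnorm(n)

    # step 1: rank features by random-forest importance
    rf  <- randomForest(X, y)
    top <- order(importance(rf)[, 1], decreasing = TRUE)[1:3]

    # step 2: unpenalized regression on the "winners"
    fit <- lm(y ~ ., data = X[, top, drop = FALSE])
    summary(fit)$coefficients   # effects/p-values are biased toward looking strong,
                                # because the same data chose the variables and fit them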
4,247
Can a random forest be used for feature selection in multiple linear regression?
A properly executed random forest applied to a problem that is more "random forest appropriate" can work as a filter to remove noise, and make results that are more useful as inputs to other analysis tools.

Disclaimers:
Is it a "silver bullet"? No way. Mileage will vary. It works where it works, and not elsewhere.
Are there ways you can badly, wrongly, grossly use it and get answers that are in the junk-to-voodoo domain? youbetcha. Like every analytic tool, it has limits.
If you lick a frog, will your breath smell like frog? likely. I don't have experience there.

I have to give a "shout out" to my "peeps" who made "Spider". (link) Their example problem informed my approach. (link) I also love Theil-Sen estimators, and wish I could give props to Theil and Sen. My answer isn't about how to get it wrong, but about how it might work if you got it mostly right. While I use "trivial" noise, I want you to think about "non-trivial" or "structured" noise.

One of the strengths of a random forest is how well it applies to high-dimensional problems. I can't show 20k columns (aka a 20k-dimensional space) in a clean visual way. It is not an easy task. However, if you have a 20k-dimensional problem, a random forest might be a good tool there when most others fall flat on their "faces".

This is an example of removing noise from signal using a random forest.

    # housekeeping
    rm(list = ls())

    # library
    library(randomForest)

    # for reproducibility
    set.seed(08012015)

    # basic
    n <- 1:2000
    r <- 0.05*n + 1
    th <- n*(4*pi)/max(n)

    # polar to cartesian
    x1 = r*cos(th)
    y1 = r*sin(th)

    # add noise
    x2 <- x1 + 0.1*r*runif(min = -1, max = 1, n = length(n))
    y2 <- y1 + 0.1*r*runif(min = -1, max = 1, n = length(n))

    # append salt and pepper
    x3 <- runif(min = min(x2), max = max(x2), n = length(n)/2)
    y3 <- runif(min = min(y2), max = max(y2), n = length(n)/2)
    x4 <- c(x2, x3)
    y4 <- c(y2, y3)
    z4 <- as.vector(matrix(1, nrow = length(x4)))

    # plot class "A" derivation
    plot(x1, y1, pch = 18, type = "l", col = "Red", lwd = 2)
    points(x2, y2)
    points(x3, y3, pch = 18, col = "Blue")
    legend(x = 65, y = 65, legend = c("true", "sampled", "false"),
           col = c("Red", "Black", "Blue"), lty = c(1, -1, -1), pch = c(-1, 1, 18))

Let me describe what is going on here. The image below shows the training data for class "1". Class "2" is uniform random over the same domain and range. You can see that the "information" of "1" is mostly a spiral, but has been corrupted with material from "2". Having 33% of your data corrupt can be a problem for many fitting tools. Theil-Sen starts to degrade at about 29%. (link)

Now we separate out the information, only having an idea of what noise is.

    # Create "B" class of uniform noise
    x5 <- runif(min = min(x4), max = max(x4), n = length(x4))
    y5 <- runif(min = min(y4), max = max(y4), n = length(x4))
    z5 <- 2*z4

    # assemble data into frame
    data <- data.frame(c(x4, x5), c(y4, y5), as.factor(c(z4, z5)))
    names(data) <- c("x", "y", "z")

    # train random forest - I like h2o, but this is textbook Breiman
    fit.rf <- randomForest(z ~ ., data = data,
                           ntree = 1000, replace = TRUE, nodesize = 20)
    data2 <- predict(fit.rf, newdata = data[data$z == 1, c(1, 2)], type = "response")

    # separate class "1" from training data
    idx1a <- which(data[, 3] == 1)

    # separate class "1" from the predicted data
    idx1b <- which(data2 == 1)

    # show the difference in classes before and after RF based filter
    plot(data[idx1a, 1], data[idx1a, 2])
    points(data[idx1b, 1], data[idx1b, 2], col = "Red")

Here is the fitting result: I really like this because it can show both strengths and weaknesses of a decent method to a hard problem at the same time.
If you look near the center you can see how there is less filtering. The geometric scale of the information is small there and the random forest is missing it. It says something about the number of nodes, number of trees, and sample density for class 2. There is also a "gap" near (-50,-50), and "jets" in several locations. In general, however, the filtering is decent.

Compare vs. SVM

Here is the code to allow a comparison with SVM:

    # now fit an svm (e1071 provides svm())
    library(e1071)
    fit.svm <- svm(z ~ ., data = data, kernel = "radial", gamma = 10, type = "C")

    x5 <- seq(from = min(x2), to = max(x2), by = 1)
    y5 <- seq(from = min(y2), to = max(y2), by = 1)

    count <- 1
    x6 <- numeric()
    y6 <- numeric()
    for (i in 1:length(x5)){
      for (j in 1:length(y5)){
        x6[count] <- x5[i]
        y6[count] <- y5[j]
        count <- count + 1
      }
    }

    data4 <- data.frame(x6, y6)
    names(data4) <- c("x", "y")
    data4$z <- predict(fit.svm, newdata = data4)

    idx4 <- which(data4$z == 1, arr.ind = TRUE)

    plot(data4[idx4, 1], data4[idx4, 2], col = "Gray", pch = 20)
    points(data[idx1b, 1], data[idx1b, 2], col = "Blue", pch = 20)
    lines(x1, y1, pch = 18, col = "Green", lwd = 2)
    grid()
    legend(x = 65, y = 65, legend = c("true", "from RF", "From SVM"),
           col = c("Green", "Blue", "Gray"), lty = c(1, -1, -1),
           pch = c(-1, 20, 15), pt.cex = c(1, 1, 2.25))

It results in the following image. This is a decent SVM. The gray is the domain associated with class "1" by the SVM. The blue dots are the samples associated with class "1" by the RF. The RF based filter performs comparably to the SVM without an explicitly imposed basis. It can be seen that the "tight data" near the center of the spiral is much more "tightly" resolved by the RF. There are also "islands" toward the "tail" where the RF finds association that the SVM does not.

I am entertained. Without having the background, I did one of the early things also done by a very good contributor in the field. The original author used "reference distribution" (link, link).

EDIT: Apply random FOREST to this model:

While user777 has a nice thought about a CART being the element of a random forest, the premise of the random forest is "ensemble aggregation of weak learners". The CART is a known weak learner but it is nothing remotely near an "ensemble". The "ensemble", though, in a random forest is intended "in the limit of a large number of samples". The answer of user777, in the scatterplot, uses at least 500 samples and that says something about human readability and sample sizes in this case. The human visual system (itself an ensemble of learners) is an amazing sensor and data processor and it finds that value to be sufficient for ease of processing.

If we take even the default settings on a random-forest tool, we can observe the behavior of the classification error: it increases for the first several trees, and does not reach the one-tree level until there are around 10 trees. Initially error grows, then the reduction of error becomes stable at around 60 trees. By stable I mean:

    x <- cbind(x1, x2)
    plot(rf, type = "b", ylim = c(0, 0.06))
    grid()

Which yields:

If instead of looking at the "minimum weak learner" we look at the "minimum weak ensemble" suggested by a very brief heuristic for the default setting of the tool, the results are somewhat different. Note, I used "lines" to draw the circle indicating the edge over the approximation. You can see that it is imperfect, but much better than the quality of a single learner.

The original sampling has 88 "interior" samples. If the sample sizes are increased (allowing the ensemble to apply) then the quality of the approximation also improves. The same number of learners with 20,000 samples makes a stunningly better fit.
The much higher quality input information also allows evaluation of an appropriate number of trees. Inspection of the convergence suggests that 20 trees is the minimum sufficient number, in this particular case, to represent the data well.
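One way to read that convergence off numerically rather than by eye is sketched below. This is my own addition, assuming the randomForest classification fit rf from the earlier simulation is still in the workspace; the err.rate component holds the cumulative out-of-bag error after each tree, which is what plot(rf) draws. The 5% tolerance is an arbitrary heuristic of mine.

    # OOB error as a function of the number of trees
    oob <- rf$err.rate[, "OOB"]

    # first tree count at which the error is within 5% of its eventual minimum:
    # a crude way to read off a "minimum sufficient" number of trees
    which(oob <= 1.05 * min(oob))[1]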
4,248
Can a random forest be used for feature selection in multiple linear regression?
Despite the legitimate warnings that this approach might fail in some cases, this should not discourage you from trying it out! Breiman reports an example (Breiman 2001) in which selecting features by variable importance from a random forest and plugging them into logistic regression outperformed variable selections specifically tailored for logistic regression, and others report similar observations, e.g., with using Boruta as a preprocessing variable selection step for logistic regression. As both random forest variable importance computation and Boruta feature selection are readily available in R or other software and thus can be tested without much effort, this is something that should be given a try. If you have enough data, you can even validate the approach by doing both steps on different fractions of the data.
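A minimal sketch of that two-stage workflow, using the Boruta package; the data frame df and its binary outcome column y are hypothetical placeholders for your own data, and the data split implements the suggestion in the last sentence of doing selection and fitting on different fractions.

    # sketch: Boruta for selection on one half, logistic regression on the other half
    library(Boruta)

    set.seed(1)
    train_idx <- sample(nrow(df), nrow(df) / 2)

    sel  <- Boruta(y ~ ., data = df[train_idx, ])            # feature selection
    keep <- getSelectedAttributes(sel, withTentative = FALSE)

    # fit the final model on data not used for selection
    form <- reformulate(keep, response = "y")
    fit  <- glm(form, data = df[-train_idx, ], family = binomial)
    summary(fit)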
4,249
What is the difference between posterior and posterior predictive distribution?
The simple difference between the two is that the posterior distribution depends on the unknown parameter $\theta$, i.e., the posterior distribution is: $$p(\theta|x)=c\times p(x|\theta)p(\theta)$$ where $c$ is the normalizing constant.

The posterior predictive distribution, on the other hand, does not depend on the unknown parameter $\theta$ because it has been integrated out, i.e., the posterior predictive distribution is: $$p(x^*|x)=\int_\Theta p(x^*,\theta|x)\,d\theta=\int_\Theta p(x^*|\theta)\,p(\theta|x)\,d\theta$$ where $x^*$ is a new, as yet unobserved, data point, assumed conditionally independent of $x$ given $\theta$.

I won't dwell on the posterior distribution explanation since you say you understand it, but the posterior distribution "is the distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained" (Wikipedia). So basically it's the distribution that describes your unknown parameter, treated as random.

On the other hand, the posterior predictive distribution has a completely different meaning in that it is the distribution for future predicted data based on the data you have already seen. So the posterior predictive distribution is basically used to predict new data values.

If it helps, here is an example graph of a posterior distribution and a posterior predictive distribution:
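For a concrete feel of the difference, here is a small sketch (my own illustration) using a conjugate Beta-Binomial model: the posterior is a distribution over the success probability $\theta$, while the posterior predictive is a distribution over a future count, and the latter is wider because it combines parameter uncertainty with sampling noise.

    # Beta(1, 1) prior, observe 7 successes in 10 trials
    set.seed(1)
    a <- 1 + 7; b <- 1 + 3              # Beta posterior parameters

    theta_post <- rbeta(1e5, a, b)      # draws from the posterior of theta

    # posterior predictive for the number of successes in 10 future trials:
    # draw theta from the posterior, then draw data given theta
    x_star <- rbinom(1e5, size = 10, prob = theta_post)

    quantile(theta_post, c(.025, .975))   # uncertainty about the parameter
    table(x_star) / length(x_star)        # predictive distribution of future data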
4,250
What is the difference between posterior and posterior predictive distribution?
The predictive distribution is usually used when you have learned a posterior distribution for the parameters of some sort of predictive model. For example, in Bayesian linear regression you learn a posterior distribution over the weight parameter w of the model y = Xw, given some observed data X. Then when a new unseen data point x* comes in, you want to find the distribution over possible predictions y* given the posterior distribution for w that you just learned. This distribution over possible y*'s, obtained by averaging over the posterior for w, is the posterior predictive distribution.
4,251
What is the difference between posterior and posterior predictive distribution?
They refer to distributions of two different things. The posterior distribution refers to the distribution of the parameter, while the predictive posterior distribution (PPD) refers to the distribution of future observations of data.
4,252
Explanation of min_child_weight in xgboost algorithm
For a regression, the loss of each point in a node is $\frac{1}{2}(y_i - \hat{y_i})^2$ The second derivative of this expression with respect to $\hat{y_i}$ is $1$. So when you sum the second derivative over all points in the node, you get the number of points in the node. Here, min_child_weight means something like "stop trying to split once your sample size in a node goes below a given threshold". For a binary logistic regression, the hessian for each point in a node is going to contain terms like $\sigma(\hat{y_i})(1 - \sigma(\hat{y_i}))$ where $\sigma$ is the sigmoid function. Say you're at a pure node (e.g., all of the training examples in the node are 1's). Then all of the $\hat{y_i}$'s will probably be large positive numbers, so all of the $\sigma(\hat{y_i})$'s will be near 1, so all of the hessian terms will be near 0. Similar logic holds if all of the training examples in the node are 0. Here, min_child_weight means something like "stop trying to split once you reach a certain degree of purity in a node and your model can fit it". The Hessian's a sane thing to use for regularization and limiting tree depth. For regression, it's easy to see how you might overfit if you're always splitting down to nodes with, say, just 1 observation. Similarly, for classification, it's easy to see how you might overfit if you insist on splitting until each node is pure.
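A tiny numerical sketch of the two cases described above (my own illustration, not xgboost internals): the "child weight" of a node is the sum of second derivatives of the loss over the observations in it, so for squared error it is just the node's sample size, while for logistic loss it shrinks toward zero as the node becomes pure and confidently predicted.

    sigmoid <- function(z) 1 / (1 + exp(-z))

    # squared-error loss: hessian is 1 per observation, so the sum is the node size
    node_y <- c(3.2, 4.1, 2.8)                 # hypothetical responses in a node
    sum(rep(1, length(node_y)))                # = 3

    # logistic loss: hessian per observation is p * (1 - p)
    # a nearly pure, confidently predicted node -> tiny total weight
    scores_pure <- c(4.5, 5.1, 3.9)
    p_pure <- sigmoid(scores_pure)
    sum(p_pure * (1 - p_pure))                 # about 0.04, close to 0

    # an uncertain node of the same size -> weight close to n/4
    scores_mixed <- c(0.1, -0.2, 0.05)
    p_mixed <- sigmoid(scores_mixed)
    sum(p_mixed * (1 - p_mixed))               # about 0.75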
4,253
Explanation of min_child_weight in xgboost algorithm
When there is little information, gradients of the loss function will tend to change more slowly, hence a smaller hessian. In the MLE framework, the negative of the hessian is known as the observed Fisher information. Ignoring the sign, a larger hessian means that more information is available. You don't want splits to happen when there is too little information. This shortage of information manifests in different ways for different loss functions, some of which were already described in another answer: a smaller sample size for ordinary least squares regression, and similarly for logistic regression but now also weighted by the impurity $p(1-p)$ expected by the current model (so smaller and purer samples will be the less informative ones). Also notice that since the score of a leaf is related to $\frac{\sum grad}{\sum hess}$, a very small $\sum hess$ will make the ratio unstable, which is another way this lack of information manifests.
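A quick sketch of that instability point (my own illustration): the regularized leaf weight used in gradient boosting is $w^* = -\frac{\sum_i g_i}{\sum_i h_i + \lambda}$, with $\lambda$ the L2 penalty. When the summed hessian is tiny, the unregularized score is a ratio of very small numbers and can swing wildly; either $\lambda$ or a min_child_weight threshold keeps it in check. The numbers below are hypothetical.

    leaf_score <- function(grad, hess, lambda = 1) -sum(grad) / (sum(hess) + lambda)

    # a nearly pure logistic-loss node: small gradients, hessian sum near zero
    g <- c(-0.01, -0.02, -0.015)
    h <- c(0.010,  0.020,  0.015)

    leaf_score(g, h, lambda = 0)   # = 1, driven entirely by a ratio of tiny numbers
    leaf_score(g, h, lambda = 1)   # about 0.043: the tiny-evidence node is shrunk toward 0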
4,254
Regression when the OLS residuals are not normally distributed
The ordinary least squares estimate is still a reasonable estimator in the face of non-normal errors. In particular, the Gauss-Markov Theorem states that the ordinary least squares estimate is the best linear unbiased estimator (BLUE) of the regression coefficients ('Best' meaning optimal in terms of minimizing mean squared error) as long as the errors (1) have mean zero, (2) are uncorrelated, and (3) have constant variance. Notice there is no condition of normality here (or even any condition that the errors are IID). The normality condition comes into play when you're trying to get confidence intervals and/or $p$-values. As @MichaelChernick mentions (+1, btw) you can use robust inference when the errors are non-normal as long as the departure from normality can be handled by the method - for example, (as we discussed in this thread) the Huber $M$-estimator can provide robust inference when the true error distribution is the mixture between normal and a long tailed distribution (which your example looks like) but may not be helpful for other departures from normality. One interesting possibility that Michael alludes to is bootstrapping to obtain confidence intervals for the OLS estimates and seeing how this compares with the Huber-based inference. Edit: I often hear it said that you can rely on the Central Limit Theorem to take care of non-normal errors - this is not always true (I'm not just talking about counterexamples where the theorem fails). In the real data example the OP refers to, we have a large sample size but can see evidence of a long-tailed error distribution - in situations where you have long tailed errors, you can't necessarily rely on the Central Limit Theorem to give you approximately unbiased inference for realistic finite sample sizes. For example, if the errors follow a $t$-distribution with $2.01$ degrees of freedom (which is not clearly more long-tailed than the errors seen in the OP's data), the coefficient estimates are asymptotically normally distributed, but it takes much longer to "kick in" than it does for other shorter-tailed distributions. Below, I demonstrate with a crude simulation in R that when $y_{i} = 1 + 2x_{i} + \varepsilon_i$, where $\varepsilon_{i} \sim t_{2.01}$, the sampling distribution of $\hat{\beta}_{1}$ is still quite long tailed even when the sample size is $n=4000$:

set.seed(5678)
B = matrix(0,1000,2)
for(i in 1:1000)
{
    x = rnorm(4000)
    y = 1 + 2*x + rt(4000,2.01)
    g = lm(y~x)
    B[i,] = coef(g)
}
qqnorm(B[,2])
qqline(B[,2])
4,255
Regression when the OLS residuals are not normally distributed
I think you want to look at all the properties of the residuals: (1) normality, (2) constant variance, (3) correlation with a covariate, and (4) combinations of the above. If it is just 1 and it is due to heavy tails or skewness due to one heavy tail, robust regression might be a good approach, or possibly a transformation to normality. If it is non-constant variance, try a variance-stabilizing transformation or attempt to model the variance function. If it is just 3, that suggests a different form of model involving that covariate. Whatever the problem, bootstrapping the vectors or residuals is always an option.
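To make the robust-regression suggestion concrete, here is a minimal sketch (my own synthetic data, assuming the statsmodels package) fitting a Huber M-estimator next to OLS when the errors are a normal mixture with occasional very wide outliers:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
# Heavy-tailed errors: mostly standard normal, 10% of the time from a much wider component
eps = np.where(rng.uniform(size=n) < 0.9,
               rng.normal(scale=1.0, size=n),
               rng.normal(scale=10.0, size=n))
y = 1 + 2 * x + eps
X = sm.add_constant(x)

ols_fit = sm.OLS(y, X).fit()
huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(ols_fit.params)     # OLS coefficients, noisier because of the outliers
print(huber_fit.params)   # Huber M-estimate, usually closer to the true (1, 2)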
4,256
Regression when the OLS residuals are not normally distributed
For non-normal conditions one would sometimes resort to robust regression, especially using the links to methods. In order to present the context for non-normality it may help to review the assumptions for linear OLS regression, which are: Weak exogeneity. This essentially means that the predictor variables, x, can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free—that is, not contaminated with measurement errors. This assumption is the one that is most frequently violated and leads to errors as enumerated following this assumption list. Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. Constant variance (a.k.a. homoscedasticity). This means that different values of the response variable have the same variance in their errors, regardless of the values of the predictor variables. In practice this assumption is invalid (i.e. the errors are heteroscedastic) if the response variable can vary over a wide scale. In order to check for heterogeneous error variance, or when a pattern of residuals violates model assumptions of homoscedasticity (error is equally variable around the 'best-fitting line' for all points of x), it is prudent to look for a "fanning effect" between residual error and predicted values. This is to say there will be a systematic change in the absolute or squared residuals when plotted against the predictive variables. Errors will not be evenly distributed across the regression line. Heteroscedasticity will result in the averaging over of distinguishable variances around the points to get a single variance that is inaccurately representing all the variances of the line. In effect, residuals appear clustered and spread apart on their predicted plots for larger and smaller values for points along the linear regression line, and the mean squared error for the model will be wrong. Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold. This latter can be examined with cluster analysis and correction for interaction.) Some methods (e.g. generalized least squares) are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue. The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent. The arrangement, or probability distribution of the predictor variables x has a major influence on the precision of estimates of β. 
Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way as to achieve a precise estimate of β. As this answer illustrates, simulated Student's-$t$ distributed $y$-axis errors from a line lead to OLS regression lines with confidence intervals for slope and intercept that increase in size as the degrees of freedom ($df$) decrease. For $df=1$, Student's-$t$ is a Cauchy distribution and the confidence intervals for slope become $(-\infty,+\infty)$. It is arbitrary to invoke the Cauchy distribution with respect to residuals in the sense that when the generating errors are Cauchy distributed, the OLS residuals from a spurious line through the data would be even less reliable, i.e., garbage in, garbage out. In those cases, one can use Theil-Sen regression. Theil-Sen is certainly more robust than OLS for non-normal residuals, e.g., Cauchy distributed error would not degrade the confidence intervals, and unlike OLS it is also a bivariate regression; however, in the bivariate case it is still biased. Passing-Bablok regression can be closer to unbiased in the bivariate case, but does not apply to negative regression slopes. It is most commonly used for methods comparison studies. One should mention Deming regression here, as unlike the Theil-Sen and Passing-Bablok regressions, it is an actual solution to the bivariate problem, but it lacks the robustness of those other regressions. Robustness can be increased by truncating data to include the more central values; e.g., random sample consensus (RANSAC) is an iterative method to estimate parameters of a mathematical model from a set of observed data that contains outliers. What then is bivariate regression? A lack of testing for the bivariate nature of problems is the most frequent cause of OLS regression dilution and has been nicely presented elsewhere on this site. The concept of OLS bias in this context is not well recognized; see for example Frost and Thompson as presented by Longford (2001), which refers the reader to other methods, expanding the regression model to acknowledge the variability in the $x$ variable, so that no bias arises$^1$. In other words, bivariate-case regression sometimes cannot be ignored when both the $x$- and $y$-values are randomly distributed. The need for bivariate regression can be tested for by fitting an OLS regression line to the bijected residuals from an OLS regression of the data. [Edit: Usually, residuals (the model minus the fitted function values) are plotted against the corresponding $x$-values. This will not show OLS bias. However, plotting residuals versus their sequence number (first, second, third, etc., that is, equidistantly, as a bijected set) DOES show the bias.] Then, if the bijected OLS residuals have a non-zero slope, the problem is bivariate and the OLS regression of the data will have a slope magnitude that is too shallow, and an intercept that is too large in magnitude, to be representative of the functional relationship between $x$ and $y$. In those cases, the least-error linear estimator of $y$-values would indeed still be from OLS regression, and its R$^2$-value will be at the maximum possible value, but the OLS regression line will not represent the actual line function that relates the $x$ and $y$ random variables. 
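For the robust alternatives mentioned above, scikit-learn happens to ship Theil-Sen and RANSAC estimators (Passing-Bablok and Deming are not included there); a minimal sketch on my own Cauchy-noise data, purely for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor, RANSACRegressor

rng = np.random.default_rng(2)
n = 300
X = rng.normal(size=(n, 1))
y = 1.0 + 2.0 * X[:, 0] + rng.standard_cauchy(size=n)   # Cauchy noise to stress OLS

for name, est in [("OLS", LinearRegression()),
                  ("Theil-Sen", TheilSenRegressor(random_state=0)),
                  ("RANSAC", RANSACRegressor(random_state=0))]:
    est.fit(X, y)
    coef = est.estimator_.coef_ if name == "RANSAC" else est.coef_
    print(name, coef)   # the robust fits usually sit much closer to the true slope of 2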
As a counterexample, OLS of the raw data is not always inappropriate: in a time series with equidistant $x$-values, among other settings, it may represent the best $y=f(x)$ line. Even then it is still subject to variable transformation; for example, for count data one would take the square root of the counts to convert Poisson-distributed errors to more nearly normal conditions, and one should still check for a non-zero slope of the residuals. Longford, N. T. (2001). "Correspondence". Journal of the Royal Statistical Society, Series A, 164: 565. doi:10.1111/1467-985x.00219
4,257
Regression when the OLS residuals are not normally distributed
Macro (just above) stated the correct answer. Just some precision, because I had the same question. The condition of normality of the residuals is useful when the residuals are also homoskedastic. The result is then that OLS has the smallest variance among all estimators (linear OR non-linear). The extended OLS assumptions: (1) $E(u|X_i = x) = 0$; (2) $(X_i,Y_i), i=1,…,n,$ are i.i.d.; (3) large outliers are rare; (4) $u$ is homoskedastic; (5) $u$ is distributed $N(0,σ^2)$. If 1-5 are verified, then OLS has the smallest variance among all estimators (linear OR non-linear). If only 1-4 are verified, then by Gauss-Markov, OLS is the best linear (only!) estimator (BLUE). Source: Stock and Watson, Econometrics + my course (EPFL, Econometrics)
4,258
Regression when the OLS residuals are not normally distributed
My experience is completely in accord with Michael Chernick's. Not only does applying a data transformation sometimes make the modeling error normally distributed, it can also correct heteroskedasticity. Sorry, but to suggest otherwise, like gathering an insane amount of data or employing less efficient robust regression methods, is misguided, in my opinion, having practiced this science/art.
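As one worked illustration of what a transformation can buy (my own synthetic data, assuming statsmodels and SciPy are available), a log transform here turns skewed, fanning residuals into roughly symmetric, homoskedastic ones:

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, size=500)
# Multiplicative (log-normal) errors: skewed residuals whose spread grows with the mean
y = np.exp(0.5 + 0.3 * x + rng.normal(scale=0.4, size=500))

X = sm.add_constant(x)
raw_fit = sm.OLS(y, X).fit()
log_fit = sm.OLS(np.log(y), X).fit()
# Residual skewness: large for the raw fit, near zero after the log transform
print(stats.skew(raw_fit.resid), stats.skew(log_fit.resid))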
4,259
Is there a test to determine whether GLM overdispersion is significant?
In the R package AER you will find the function dispersiontest, which implements a Test for Overdispersion by Cameron & Trivedi (1990). It follows a simple idea: In a Poisson model, the mean is $E(Y)=\mu$ and the variance is $Var(Y)=\mu$ as well. They are equal. The test simply tests this assumption as a null hypothesis against an alternative where $Var(Y)=\mu + c * f(\mu)$, where the constant $c < 0$ means underdispersion and $c > 0$ means overdispersion. The function $f(.)$ is some monotone function (often linear or quadratic; the former is the default). The resulting test is equivalent to testing $H_0: c=0$ vs. $H_1: c \neq 0$ and the test statistic used is a $t$ statistic which is asymptotically standard normal under the null. Example:

R> library(AER)
R> data(RecreationDemand)
R> rd <- glm(trips ~ ., data = RecreationDemand, family = poisson)
R> dispersiontest(rd, trafo = 1)

    Overdispersion test
data:  rd
z = 2.4116, p-value = 0.007941
alternative hypothesis: true dispersion is greater than 0
sample estimates:
dispersion
    5.5658

Here we clearly see that there is evidence of overdispersion (c is estimated to be 5.57), which speaks quite strongly against the assumption of equidispersion (i.e. c=0). Note that if you do not use trafo=1, it will actually do a test of $H_0: c^*=1$ vs. $H_1: c^* \neq 1$ with $c^*=c+1$, which has of course the same result as the other test apart from the test statistic being shifted by one. The reason for this, though, is that the latter corresponds to the common parametrization in a quasi-Poisson model.
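Outside R, the linear-alternative version of this test (trafo=1 above) is easy to roll by hand via the auxiliary regression described by Cameron & Trivedi. Below is a hedged Python sketch on my own simulated overdispersed counts; it mirrors the idea of the test rather than the AER source code:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
mu_true = np.exp(0.5 + 0.7 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu_true))    # Var = mu + mu^2/2 > mu
X = sm.add_constant(x)

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu = pois.fittedvalues

# Auxiliary regression for the linear alternative Var(Y) = mu + c*mu:
aux = ((y - mu) ** 2 - y) / mu
od = sm.OLS(aux, np.ones_like(aux)).fit()
print(od.params[0], od.tvalues[0])   # estimate of c and its asymptotically normal z statistic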
4,260
Is there a test to determine whether GLM overdispersion is significant?
An alternative is odTest from the pscl library, which carries out a likelihood-ratio test comparing a negative binomial regression to its Poisson restriction ($\mu = \mathrm{Var}$). The following result is obtained:

> library(pscl)
> odTest(NegBinModel)

Likelihood ratio test of H0: Poisson, as restricted NB model:
n.b., the distribution of the test-statistic under H0 is non-standard
e.g., see help(odTest) for details/references

Critical value of test statistic at the alpha= 0.05 level: 2.7055
Chi-Square Test Statistic = 52863.4998 p-value = < 2.2e-16

Here the null of the Poisson restriction is rejected in favour of my negative binomial regression NegBinModel. Why? Because the test statistic 52863.4998 exceeds 2.7055 with a p-value of < 2.2e-16. The advantage of the AER dispersiontest is that the returned object of class "htest" is easier to format (e.g. converting to LaTeX) than the classless odTest output.
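The same likelihood-ratio comparison can be reproduced by hand in Python (a hedged sketch on my own simulated counts, assuming statsmodels and SciPy). Because the dispersion parameter sits on the boundary of its space under the null, the usual convention is to halve the chi-square(1) tail probability, which is consistent with the 2.7055 critical value printed above:

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(7)
m = 2000
x = rng.normal(size=m)
mu = np.exp(0.5 + 0.7 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts: Var = mu + mu^2/2
X = sm.add_constant(x)

pois = sm.Poisson(y, X).fit(disp=0)
negbin = sm.NegativeBinomial(y, X).fit(disp=0)

lr_stat = 2 * (negbin.llf - pois.llf)
p_value = 0.5 * stats.chi2.sf(lr_stat, df=1)     # boundary-corrected p-value
print(lr_stat, p_value)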
4,261
Is there a test to determine whether GLM overdispersion is significant?
Another alternative is to use the P__disp function from the msme package. The P__disp function can be used to calculate the Pearson $\chi^2$ and Pearson dispersion statistics after fitting the model with glm or glm.nb.
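For readers outside R, the Pearson dispersion statistic itself is just the Pearson chi-square divided by the residual degrees of freedom; a small sketch on my own simulated data (assuming statsmodels):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=500)
# Extra gamma-distributed variation on top of the Poisson mean induces overdispersion
y = rng.poisson(np.exp(0.3 + 0.5 * x) * rng.gamma(shape=2.0, scale=0.5, size=500))

fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
pearson_dispersion = fit.pearson_chi2 / fit.df_resid
print(pearson_dispersion)   # values well above 1 point to overdispersion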
4,262
Is there a test to determine whether GLM overdispersion is significant?
Yet another option would be to use a likelihood-ratio test to show that a quasipoisson GLM with overdispersion is significantly better than a regular poisson GLM without overdispersion:

fit = glm(count ~ treatment, family="poisson", data=data)
fit.overdisp = glm(count ~ treatment, family="quasipoisson", data=data)
summary(fit.overdisp)$dispersion # dispersion coefficient
pchisq(summary(fit.overdisp)$dispersion * fit$df.residual, fit$df.residual, lower = F) # significance for overdispersion
4,263
Adam optimizer with exponential decay
Empirically speaking: definitely try it out, you may find some very useful training heuristics, in which case, please do share! Usually people use some kind of decay, for Adam it seems uncommon. Is there any theoretical reason for this? Can it be useful to combine Adam optimizer with decay? I haven't seen enough people's code using ADAM optimizer to say if this is true or not. If it is true, perhaps it's because ADAM is relatively new and learning rate decay "best practices" haven't been established yet. I do want to note however that learning rate decay is actually part of the theoretical guarantee for ADAM. Specifically in Theorem 4.1 of their ICLR article, one of their hypotheses is that the learning rate has a square root decay, $\alpha_t = \alpha/\sqrt{t}$. Furthermore, for their logistic regression experiments they use the square root decay as well. Simply put: I don't think anything in the theory discourages using learning rate decay rules with ADAM. I have seen people report some good results using ADAM and finding some good training heuristics would be incredibly valuable.
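If you want to experiment with exactly the $\alpha_t = \alpha/\sqrt{t}$ schedule from the theorem, one way to wire it up (a hedged sketch assuming PyTorch; the toy model, data, and constants are my own) is a LambdaLR multiplier on top of Adam:

import math
import torch

# Toy model and data, just so the optimizer has something to do.
model = torch.nn.Linear(10, 1)
x = torch.randn(64, 10)
y = torch.randn(64, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Multiplier 1/sqrt(t+1) reproduces the square-root decay of the base step size.
sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=lambda t: 1.0 / math.sqrt(t + 1))

for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    sched.step()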
4,264
Adam optimizer with exponential decay
Adam uses the initial learning rate, or step size according to the original paper's terminology, while adaptively computing updates. Step size also gives an approximate bound for updates. In this regard, I think it is a good idea to reduce step size towards the end of training. This is also supported by a recent work from NIPS 2017: The Marginal Value of Adaptive Gradient Methods in Machine Learning. The last line in Section 4: Deep Learning Experiments says Though conventional wisdom suggests that Adam does not require tuning, we find that tuning the initial learning rate and decay scheme for Adam yields significant improvements over its default settings in all cases. Last but not least, the paper suggests that we use SGD anyways.
4,265
Adam optimizer with exponential decay
The reason why most people don't use learning rate decay with Adam is that the algorithm itself rescales the step size over time in the following way:

t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)

where t is the timestep counter and lr_t is the effective learning rate used.
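For reference, a quick numeric look at that multiplier with the commonly used defaults beta1 = 0.9, beta2 = 0.999 (my own choice of values); note that it approaches the base learning rate as t grows:

import numpy as np

learning_rate, beta1, beta2 = 0.001, 0.9, 0.999
t = np.array([1, 10, 100, 1000, 10000])
lr_t = learning_rate * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
print(lr_t)   # stays below the base rate early on and tends to 0.001 for large t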
4,266
Adam optimizer with exponential decay
I agree with @Indie AI's opinion; here I supply some other information. From CS231n: ... Many of these methods may still require other hyperparameter settings, but the argument is that they are well-behaved for a broader range of hyperparameter values than the raw learning rate. ... And also from the paper Rethinking the Inception Architecture for Computer Vision, Section 8: ... while our best models were achieved using RMSProp [21] with decay of 0.9 and ε = 1.0. We used a learning rate of 0.045, decayed every two epoch using an exponential rate of 0.94. ...
4,267
Adam optimizer with exponential decay
I trained a dataset with really easy data: whether a person is considered fat or not, from height and weight - creating the data by calculating BMI, and if over 27, the person is fat. So very easy basic data. When using Adam as the optimizer, and a learning rate of 0.001, the accuracy will only get me around 85% for 5 epochs, topping out at max 90% with over 100 epochs tested. But when loading again at maybe 85%, and using a 0.0001 learning rate, the accuracy will go to 95% over 3 epochs, and with 10 more epochs it's around 98-99%. Not sure if the learning rate can go below 4 decimal places (0.0001), but when loading the model again and using 0.00001, the accuracy will hover around 99.20-100% and won't go below. Again, not sure if that learning rate would be considered 0, but anyway, that's what I've got... All this using categorical_crossentropy, but mean_square gets it to 99-100% too with this method. AdaDelta, AdaGrad, Nesterov couldn't get above 65% accuracy, just for a note.
4,268
Adam optimizer with exponential decay
The learning rate decay in Adam is the same as that in RMSProp (as you can see from this answer), and that is mostly based on the magnitude of the previous gradients to damp out the oscillations. So the exponential decay (for a decreasing learning rate along the training process) can be adopted at the same time. They all decay the learning rate, but for different purposes.
4,269
Logistic Regression: Scikit Learn vs Statsmodels
Your clue to figuring this out should be that the parameter estimates from the scikit-learn estimation are uniformly smaller in magnitude than the statsmodels counterpart. This might lead you to believe that scikit-learn applies some kind of parameter regularization. You can confirm this by reading the scikit-learn documentation. There is no way to switch off regularization in scikit-learn, but you can make it ineffective by setting the tuning parameter C to a large number. Here is how that works in your case:

# module imports
from patsy import dmatrices
import pandas as pd
from sklearn.linear_model import LogisticRegression
import statsmodels.discrete.discrete_model as sm

# read in the data & create matrices
df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv")
y, X = dmatrices('admit ~ gre + gpa + C(rank)', df, return_type = 'dataframe')

# sklearn output
model = LogisticRegression(fit_intercept = False, C = 1e9)
mdl = model.fit(X, y)
model.coef_

# sm
logit = sm.Logit(y, X)
logit.fit().params

UPDATE: As correctly pointed out in the comments below, now you can switch off the regularization in scikit-learn by setting penalty='none' (see the docs).
4,270
Logistic Regression: Scikit Learn vs Statsmodels
What tripped me up: (1) disable sklearn regularization with LogisticRegression(C=1e9); (2) add a statsmodels intercept via sm.Logit(y, sm.add_constant(X)), OR disable the sklearn intercept with LogisticRegression(C=1e9, fit_intercept=False); (3) sklearn returns a probability for each class, so model_sklearn.predict_proba(X)[:, 1] == model_statsmodel.predict(X); (4) for the predict function, model_sklearn.predict(X) == (model_statsmodel.predict(X) > 0.5).astype(int). I'm now seeing the same results in both libraries.
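A hedged end-to-end check of that list on synthetic data (my own example; exact agreement depends on solver tolerances, so the sketch prints the estimates and the largest probability difference rather than asserting equality):

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 3))
p = 1 / (1 + np.exp(-(0.5 + X @ np.array([1.0, -2.0, 0.5]))))
y = rng.binomial(1, p)

sk = LogisticRegression(C=1e9).fit(X, y)            # regularization effectively switched off
smf = sm.Logit(y, sm.add_constant(X)).fit(disp=0)   # explicit intercept on the statsmodels side

print(sk.intercept_, sk.coef_)                      # sklearn: intercept and slopes
print(smf.params)                                   # statsmodels: constant first, then slopes
print(np.abs(sk.predict_proba(X)[:, 1] - smf.predict(sm.add_constant(X))).max())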
4,271
Logistic Regression: Scikit Learn vs Statsmodels
Another difference is that you've set fit_intercept=False, which effectively is a different model. You can see that Statsmodel includes the intercept. Not having an intercept surely changes the expected weights on the features. Try the following and see how it compares: model = LogisticRegression(C=1e9)
4,272
Manually Calculating P value from t-value in t-test
Use pt and make it two-tailed. > 2*pt(11.244, 30, lower=FALSE) [1] 2.785806e-12
4,273
Manually Calculating P value from t-value in t-test
I posted this as a comment but when I wanted to add a bit more in an edit, it became too long, so I've moved it down here. Edit: Your test statistic and d.f. are correct. The other answer notes the issue with the calculation of the tail area in the call to pt(), and the doubling for two tails, which resolves your difference. Nevertheless I'll leave my earlier discussion/comment because it makes relevant points more generally about p-values in extreme tails: It's possible you could be doing nothing wrong and still get a difference, but if you post a reproducible example it might be possible to investigate further whether you have some error (say in the df). These things are calculated from approximations that may not be particularly accurate in the very extreme tail. If the two things don't use identical approximations they may not agree closely, but that lack of agreement shouldn't matter (for the exact tail area out that far to be a meaningful number, the required assumptions would have to hold to astounding degrees of accuracy). Do you really have exact normality, exact independence, exactly constant variance? You shouldn't necessarily expect great accuracy out where the numbers won't mean anything anyway. To what extent does it matter if the calculated approximate p-value is $2\times 10^{-12}$ or $3\times 10^{-12}$? Neither number is measuring the actual p-value of your true situation. Even if one of the numbers did represent the real p-value of your true situation, once it's below about $0.0001$, why would you care what that value actually was?
4,274
Manually Calculating P value from t-value in t-test
The best way to calculate it manually is:

    t.value = (mean(data) - 10) / (sd(data) / sqrt(length(data)))
    p.value = 2 * pt(-abs(t.value), df = length(data) - 1)

where 10 is the mean hypothesized under the null. You need the abs() function because otherwise you run the risk of getting p-values bigger than $1$ (when the sample mean is bigger than the hypothesized mean)!
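A quick way to sanity-check this (with simulated data, so the numbers below are arbitrary) is to compare the manual calculation with R's built-in t.test():

    set.seed(1)
    data <- rnorm(25, mean = 12, sd = 3)      # hypothetical sample; H0: mu = 10
    t.value <- (mean(data) - 10) / (sd(data) / sqrt(length(data)))
    p.value <- 2 * pt(-abs(t.value), df = length(data) - 1)
    c(manual = p.value, builtin = t.test(data, mu = 10)$p.value)  # the two should match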
4,275
Manually Calculating P value from t-value in t-test
I really like the answer @Aaron provided, along with the abs comments. I find a handy confirmation is to run pt(1.96, 1000000, lower.tail = F) * 2, which yields 0.04999607. Here we're using the well-known property that about 95% of the area under the normal distribution lies within ~1.96 standard deviations of the mean, so the output of ~0.05 is the two-sided p-value we expect. I used 1000000 degrees of freedom because when the degrees of freedom are huge, the t distribution is nearly the same as the normal distribution. Running this gave me comfort in @Aaron's solution.
4,276
Multinomial logistic regression vs one-vs-rest binary logistic regression
If $Y$ has more than two categories, your question about the "advantage" of one regression over the other is probably meaningless if you aim to compare the models' parameters, because the models will be fundamentally different: $\log \frac{P(i)}{P(\text{not } i)}=\text{logit}_i=\text{linear combination}$ for each $i$ in binary logistic regression, versus $\log \frac{P(i)}{P(r)}=\text{logit}_i=\text{linear combination}$ for each category $i$ in multinomial logistic regression, $r$ being the chosen reference category ($i \ne r$). However, if your aim is only to predict the probability of each category $i$, either approach is justified, although they may give different probability estimates. The formula to estimate a probability is generic: $P'(i)= \frac{\exp(\text{logit}_i)}{\exp(\text{logit}_i)+\exp(\text{logit}_j)+\dots+\exp(\text{logit}_r)}$, where $i,j,\dots,r$ are all the categories, and if $r$ was chosen as the reference category its $\exp(\text{logit})=1$. So for binary logistic regression that same formula becomes $P'(i)= \frac{\exp(\text{logit}_i)}{\exp(\text{logit}_i)+1}$. Multinomial logistic regression relies on the (not always realistic) assumption of independence of irrelevant alternatives, whereas a series of binary logistic predictions does not. A separate theme is the technical differences between multinomial and binary logistic regression in the case when $Y$ is dichotomous. Will there be any difference in results? Most of the time, in the absence of covariates, the results will be the same; still, there are differences in the algorithms and in the output options. Let me just quote the SPSS Help on that issue: "Binary logistic regression models can be fitted using either the Logistic Regression procedure or the Multinomial Logistic Regression procedure. Each procedure has options not available in the other. An important theoretical distinction is that the Logistic Regression procedure produces all predictions, residuals, influence statistics, and goodness-of-fit tests using data at the individual case level, regardless of how the data are entered and whether or not the number of covariate patterns is smaller than the total number of cases, while the Multinomial Logistic Regression procedure internally aggregates cases to form subpopulations with identical covariate patterns for the predictors, producing predictions, residuals, and goodness-of-fit tests based on these subpopulations. If all predictors are categorical or any continuous predictors take on only a limited number of values—so that there are several cases at each distinct covariate pattern—the subpopulation approach can produce valid goodness-of-fit tests and informative residuals, while the individual case level approach cannot."
Logistic Regression provides the following unique features:
- Hosmer-Lemeshow test of goodness of fit for the model
- Stepwise analyses
- Contrasts to define model parameterization
- Alternative cut points for classification
- Classification plots
- Model fitted on one set of cases applied to a held-out set of cases
- Saved predictions, residuals, and influence statistics
Multinomial Logistic Regression provides the following unique features:
- Pearson and deviance chi-square tests for goodness of fit of the model
- Specification of subpopulations for grouping of data for goodness-of-fit tests
- Listing of counts, predicted counts, and residuals by subpopulations
- Correction of variance estimates for over-dispersion
- Covariance matrix of the parameter estimates
- Tests of linear combinations of parameters
- Explicit specification of nested models
- Fitting of 1-1 matched conditional logistic regression models using differenced variables
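To make the prediction point concrete, here is a small R sketch (using the built-in iris data and the nnet package, neither of which is part of the original answer) comparing predicted probabilities from a single multinomial fit with those from separate one-vs-rest binary fits:

    library(nnet)
    data(iris)

    # multinomial model (one joint fit; 'setosa' is the reference category)
    m.multi <- multinom(Species ~ Sepal.Length + Sepal.Width, data = iris, trace = FALSE)
    p.multi <- predict(m.multi, type = "probs")   # rows sum to 1 by construction

    # one-vs-rest: a separate binary logit for each category
    p.ovr <- sapply(levels(iris$Species), function(lev) {
      fit <- glm(I(Species == lev) ~ Sepal.Length + Sepal.Width,
                 data = iris, family = binomial)
      predict(fit, type = "response")
    })

    summary(rowSums(p.multi))  # exactly 1 for every observation
    summary(rowSums(p.ovr))    # generally not 1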
4,277
Multinomial logistic regression vs one-vs-rest binary logistic regression
Because of the title, I'm assuming that "advantages of multiple logistic regression" means "multinomial regression". There are often advantages when the model is fit simultaneously. This particular situation is described in Agresti (Categorical Data Analysis, 2002, p. 273). In sum (paraphrasing Agresti), you expect the estimates from a joint model to be different from those of a stratified model. The separate logistic models tend to have larger standard errors, although it may not be so bad when the most frequent level of the outcome is set as the reference level.
4,278
Multinomial logistic regression vs one-vs-rest binary logistic regression
I don't think the previous answers really capture the key difference, although it is implicit in the discussion of independence of irrelevant alternatives (which is a social-sciences term rather than a statistical one). If you use a multinomial model, your predicted probabilities for the different options sum to 1; if you use n different logistic regression models, they won't. The multinomial model is to be preferred when there is a fixed set of classes and they are mutually exclusive. Take, for instance, the case in the question: "For each person, predict the probability that some mobile phone company is their favourite one (let's assume everyone has a favourite mobile phone company). Which of those methods would you use, and what are the advantages over the other one?" If you believe there is a fixed, unchanging set of phone companies, then multinomial regression is appropriate. If instead you are, e.g., predicting the top 3 companies (which are fixed), but there is also a tail of smaller companies you don't model, then I would suggest one-vs-rest for the top 3 companies is appropriate (because the top 3 don't cover 100% of respondents).
4,279
Multinomial logistic regression vs one-vs-rest binary logistic regression
It seems that the question was not at all about the implementation/structural differences between (a) the softmax (multinomial logistic) regression model and (b) the OvR "composite" model built from multiple binary logistic regression models. In a nutshell, however, skipping all the formulas, those differences can be summarized like this:
Training: the softmax regression model uses the cross-entropy cost function, while the OvR "composite" model trains completely independent binary logit classifiers, each using the logistic regression cost function.
Trained model representation: not much difference. In softmax each class gets its own parameter vector, and these vectors are stored together in a common parameter matrix, while in OvR logit there are exactly as many separate parameter vectors, one for each positive class.
Evaluation: the softmax regression model uses the softmax function, which predicts a probability for each class taking the scores of the other classes into account, while the OvR "composite" model calculates the scores/probabilities of the classes completely independently and then just picks the label with the highest score.
It also seems that there was no need to explain the differences between the binary model, the OvR/OvO "composite" models, and "native" multiclass classifiers like the multinomial logistic regressor (aka the softmax regressor). I think the question was more about ACCURACY: softmax regression (LogisticRegression(multi_class="multinomial") in scikit-learn) is more flexible when setting the linear decision boundaries among the classes. Here is a two-dimensional, three-class illustration of this: https://scikit-learn.org/stable/auto_examples/linear_model/plot_logistic_multinomial.html
The above example really could have benefited from confusion matrices, so here they are (normalized): This is not a hard classification problem - the instances from the three classes are barely mixed, so we should expect very high accuracy for all the classes. But OvR logit stumbles when identifying the "middle" class. Generally speaking, OvR logit will perform poorly when some class is not well distinguished by its feature values alone; it only does well on "edgy" classes that sit at an extreme of the feature space. For binary classification this is not a disadvantage compared to softmax/multinomial, since the latter also sets a linear boundary between the two classes. Or imagine three clusters at approximately equal distances from each other (i.e., each class cluster sits on a vertex of an equilateral triangle). In such a case the accuracy of both OvR logit and softmax will be good for all the classes. However, imagine one of the three clusters being at or near the straight line between the centers of the other two clusters... The accuracy of OvR logit for that "middle" class will be poor, while the softmax/multinomial regressor will do well (even though its decision boundaries are still straight lines).
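Since the linked example uses Python/scikit-learn, here is a hedged R analogue of the same idea (entirely simulated data; nnet::multinom stands in for the softmax model and separate glm fits for one-vs-rest). With three roughly collinear classes, the "middle" one typically comes out noticeably worse under one-vs-rest:

    library(nnet)
    set.seed(6)
    n   <- 300
    cls <- factor(rep(c("left", "middle", "right"), each = n))
    x1  <- rnorm(3 * n, mean = rep(c(-2, 0, 2), each = n))
    x2  <- rnorm(3 * n)
    dat <- data.frame(cls, x1, x2)

    # softmax / multinomial model
    pred.multi <- predict(multinom(cls ~ x1 + x2, data = dat, trace = FALSE))

    # one-vs-rest: one binary logit per class, then pick the highest probability
    p.ovr <- sapply(levels(cls), function(lev)
      predict(glm(I(cls == lev) ~ x1 + x2, data = dat, family = binomial),
              type = "response"))
    pred.ovr <- factor(levels(cls)[max.col(p.ovr)], levels = levels(cls))

    table(truth = cls, pred.multi)  # "middle" mostly recovered
    table(truth = cls, pred.ovr)    # "middle" typically recovered noticeably worse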
4,280
How to identify a bimodal distribution?
Identifying a mode for a continuous distribution requires smoothing or binning the data. Binning is typically too procrustean: the results often depend on where you place the bin cutpoints. Kernel smoothing (specifically, in the form of kernel density estimation) is a good choice. Although many kernel shapes are possible, typically the result does not depend much on the shape. It depends on the kernel bandwidth. Thus, people either use an adaptive kernel smooth or conduct a sequence of kernel smooths for varying fixed bandwidths in order to check the stability of the modes that are identified. Although using an adaptive or "optimal" smoother is attractive, be aware that most (all?) of these are designed to achieve a balance between precision and average accuracy: they are not designed to optimize estimation of the location of modes. As far as implementation goes, kernel smoothers locally shift and scale a predetermined function to fit the data. Provided that this basic function is differentiable--Gaussians are a good choice because you can differentiate them as many times as you like--then all you have to do is replace it by its derivative to obtain the derivative of the smooth. Then it's simply a matter of applying a standard zero-finding procedure to detect and test the critical points. (Brent's method works well.) Of course you can do the same trick with the second derivative to get a quick test of whether any critical point is a local maximum--that is, a mode. For more details and working code (in R) please see https://stats.stackexchange.com/a/428083/919.
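As a rough, self-contained illustration of the idea in R (this uses a numerical derivative of density() output rather than the analytic kernel derivative described above, so it is only a sketch):

    set.seed(42)
    x <- c(rnorm(200, 0, 1), rnorm(150, 4, 1.2))     # hypothetical bimodal sample

    find_modes <- function(x, bw = "SJ") {
      d  <- density(x, bw = bw, n = 2048)
      dy <- diff(d$y)                                # numerical first derivative (up to a constant)
      i  <- which(dy[-length(dy)] > 0 & dy[-1] < 0)  # sign change from + to -: a local maximum
      d$x[i + 1]
    }

    find_modes(x)                                    # modes near 0 and 4
    # stability check across fixed bandwidths, as suggested above:
    sapply(c(0.1, 0.25, 0.5, 1, 2), function(b) length(find_modes(x, bw = b)))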
4,281
How to identify a bimodal distribution?
There is a well-known paper by Silverman that deals with this issue. It employs kernel-density estimation. See B. W. Silverman, Using kernel density estimates to investigate multimodality, J. Royal Stat. Soc. B, vol. 43, no. 1, 1981, pp. 97-99. Note that there are some errors in the tables of the paper. This is just a starting point, but a pretty good one. It provides a well-defined algorithm to use, in the event that's what you're most looking for. You might look on Google Scholar at papers that cite it for more "modern" approaches.
4,282
How to identify a bimodal distribution?
I came late to the party, but if you are just interested in whether the distribution is multimodal or not (that is, you are not interested in the number of modes), you should look at Hartigan & Hartigan's dip test. In R it is implemented in the diptest package.
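A minimal sketch of its use, assuming the diptest package is installed (the samples are simulated):

    library(diptest)
    set.seed(1)
    x1 <- rnorm(200)                          # unimodal sample
    x2 <- c(rnorm(100, -2), rnorm(100, 2))    # clearly bimodal sample
    dip.test(x1)   # large p-value: no evidence against unimodality
    dip.test(x2)   # small p-value: evidence of multimodality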
4,283
How to identify a bimodal distribution?
The definition in the wiki is slightly confusing to me. The probability of a continuous data set having just one mode is zero. A simple way to program a bimodal distribution is with two separate normal distributions centered differently. This creates two peaks, or what the wiki calls modes. You can actually use almost any two distributions, but one of the harder statistical problems is to recover how the data set was formed after combining the two random data distributions.
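For example, a quick R simulation of such a two-component mixture (all numbers here are arbitrary):

    set.seed(7)
    n <- 1e4
    component <- rbinom(n, 1, 0.4)      # which of the two normals each point comes from
    x <- ifelse(component == 1,
                rnorm(n, mean = 0, sd = 1),
                rnorm(n, mean = 5, sd = 1.5))
    hist(x, breaks = 60)                # shows two clear peaks (modes)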
4,284
Visually interesting statistics concepts that are easy to explain
I like images illustrating how different patterns can have similar correlation. The ones below are from the Wikipedia articles on correlation and dependence and on Anscombe's quartet; the latter shows four datasets that all have correlations of about $0.816$.
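The Anscombe datasets ship with R, so the point is easy to verify directly:

    data(anscombe)   # built into R: columns x1..x4 and y1..y4
    sapply(1:4, function(i) cor(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]]))
    # all four correlations are ~0.816, despite the scatterplots looking completely different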
4,285
Visually interesting statistics concepts that are easy to explain
Simpson's Paradox A phenomenon that appears when a key variable is omitted from the analysis of a relationship between one or more independent variables and a dependent variable. For instance, this appears to show that the more bedrooms houses have, the lower the home price: (source: ba762researchmethods at sites.google.com) which seems counter-intuitive, and is easily resolved by plotting all the data points that make up the average for each area on the same graph. Here, a greater number of bedrooms correctly indicates pricier homes once the neighborhood variable is also taken into account: (source: ba762researchmethods at sites.google.com) If you'd like to read more about the above example and get a far better explanation than I was able to provide, click here.
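A small, entirely hypothetical simulation of the same effect in R (the numbers are made up purely to reproduce the sign reversal):

    set.seed(1)
    neighborhood <- rep(c("A", "B", "C"), each = 100)
    base_price   <- rep(c(900, 500, 200), each = 100)   # pricier areas happen to have smaller houses
    bedrooms     <- rpois(300, lambda = rep(c(2, 3, 4), each = 100)) + 1
    price        <- base_price + 40 * bedrooms + rnorm(300, sd = 30)

    coef(lm(price ~ bedrooms))                  # pooled slope: negative (more bedrooms, "lower" price)
    coef(lm(price ~ bedrooms + neighborhood))   # within-neighborhood slope: positive, near +40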
4,286
Visually interesting statistics concepts that are easy to explain
One of the most interesting concepts that are very important today and very easy to visualize is "overfitting". The green classifier below presents a clear example of overfitting [Edit: "the green classifier is given by the very wiggly line separating red and blue data points" - Nick Cox]. From Wikipedia:
4,287
Visually interesting statistics concepts that are easy to explain
What does a 2D dataset look like when the mean of X is 54 with an SD of 17, the mean of Y is 48 with an SD of 27, and the correlation between the two is -0.06? Introducing the Anscombosaurus: And its companion, the Datasaurus Dozen:
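If you want to reproduce this, the datasets are available in the datasauRus R package (assumed installed here); the summary statistics are nearly identical across all thirteen sets even though the scatterplots differ wildly:

    library(datasauRus)
    by(datasaurus_dozen, datasaurus_dozen$dataset, function(d)
      round(c(mean_x = mean(d$x), sd_x = sd(d$x),
              mean_y = mean(d$y), sd_y = sd(d$y),
              cor_xy = cor(d$x, d$y)), 2))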
4,288
Visually interesting statistics concepts that are easy to explain
I think spurious correlations also deserve their own post, i.e. correlation does not equal causation. This is perhaps one of the devices used most often when trying to bend the truth with statistics. Tyler Vigen has a famous website with lots of examples. To illustrate, see the plot below, where the number of polio cases and ice cream sales are clearly correlated. But to assume that polio causes ice cream sales, or the other way around, is clearly nonsensical. P.S.: Relevant xkcd 1 and relevant xkcd 2
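The mechanism is easy to mimic in R: two series that are completely independent apart from sharing a trend (here a simple time trend stands in for the lurking variable, e.g. summer weather) appear strongly correlated until the trend is accounted for. All numbers below are made up.

    set.seed(3)
    day <- 1:100
    polio     <- 50 + 2.0 * day + rnorm(100, sd = 10)   # hypothetical trending series
    ice_cream <- 10 + 1.5 * day + rnorm(100, sd = 8)    # independent of polio, but also trending
    cor(polio, ice_cream)                               # close to 1, driven purely by the shared trend
    cor(resid(lm(polio ~ day)), resid(lm(ice_cream ~ day)))   # near 0 after removing the trend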
4,289
Visually interesting statistics concepts that are easy to explain
Bias can be good An $\color{orangered}{\text{unbiased estimator}}$ is on average correct. A $\color{steelblue}{\text{biased estimator}}$ is on average not correct. Why, then, would you ever want to use a biased estimator (e.g. ridge regression)? The answer is that introducing bias can reduce variance. In the picture, for a given sample, the $\color{orangered}{\text{unbiased estimator}}$ has a $68\%$ chance of being within $1$ arbitrary unit of the true parameter, while the $\color{steelblue}{\text{biased estimator}}$ has a much larger $84\%$ chance. If the bias you have introduced reduces the variance of the estimator sufficiently, your one sample has a better chance of yielding an estimate close to the population parameter. "On average correct" sounds great, but it does not give any guarantee of how far individual estimates can deviate from the population parameter. If you were to draw many samples, the $\color{steelblue}{\text{biased estimator}}$ would on average be wrong by $0.5$ arbitrary units. However, we rarely have many samples from the same population to observe this 'average estimate', so we would rather have a good chance of being close to the true parameter.
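A toy R simulation along the same lines (the shrinkage factor and the numbers are arbitrary and do not reproduce the 68%/84% figures from the picture): shrinking the sample mean toward zero introduces bias but reduces variance enough that the shrunken estimate lands near the truth more often.

    set.seed(10)
    mu <- 1; n <- 5
    est <- replicate(1e4, {
      x <- rnorm(n, mean = mu, sd = 3)
      c(unbiased = mean(x),        # unbiased for mu
        biased   = 0.5 * mean(x))  # shrunk toward 0: biased, but with half the standard deviation
    })
    rowMeans(abs(est - mu) < 1)    # proportion of estimates within 1 unit of the true mu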
4,290
Visually interesting statistics concepts that are easy to explain
When first understanding estimators and their error, it's useful to understand two sources of error: bias and variance. The image below does a great job of illustrating this while highlighting the trade-offs between these two sources of error. The bullseye is the true value the estimator is trying to estimate, and each dot represents an estimate of that value. Ideally you have low bias and low variance, but the other dartboards represent less-than-ideal estimators.
4,291
Visually interesting statistics concepts that are easy to explain
Principal Component Analysis (PCA) PCA is a method for dimension reduction. It projects the original variables onto the directions that maximize the variance. In our figure, the red points come from a bivariate normal distribution. The vectors are the eigenvectors, and the sizes of these vectors are proportional to the respective eigenvalues. Principal component analysis provides new directions that are orthogonal and point in the directions of high variance.
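The figure is easy to recreate in R (MASS ships with R; the covariance matrix below is arbitrary): the principal directions found by prcomp() are the eigenvectors of the sample covariance matrix, and the variances along them are the eigenvalues.

    set.seed(2)
    Sigma <- matrix(c(3,   1.5,
                      1.5, 1), nrow = 2)           # hypothetical covariance matrix
    X <- MASS::mvrnorm(1000, mu = c(0, 0), Sigma = Sigma)
    pca <- prcomp(X)
    pca$rotation             # principal directions (eigenvectors of cov(X))
    pca$sdev^2               # variances along those directions (eigenvalues)
    eigen(cov(X))$values     # the same eigenvalues, computed directly
    plot(X, asp = 1, col = "red", pch = 20)
    arrows(0, 0, pca$rotation[1, ] * pca$sdev^2, pca$rotation[2, ] * pca$sdev^2)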
4,292
Visually interesting statistics concepts that are easy to explain
Eigenvectors & Eigenvalues The concept of eigenvectors and eigenvalues which are the basis for principal component analysis (PCA), as explained on wikipedia: In essence, an eigenvector $v$ of a linear transformation $T$ is a nonzero vector that, when $T$ is applied to it, does not change direction. Applying $T$ to the eigenvector only scales the eigenvector by the scalar value $\lambda$, called an eigenvalue. This condition can be written as the equation: $T(v) = \lambda v$. The above statement is very elegantly explained using this gif: Vectors denoted in blue $\begin{bmatrix}1 \\1 \\ \end{bmatrix}$ and magenta $\begin{bmatrix}1 \\-1 \\ \end{bmatrix}$ are eigenvectors for the linear transformation, $T = \begin{bmatrix}2 & 1 \\1 & 2 \\ \end{bmatrix}$. The points that lie on the line through the origin, parallel to the eigenvectors, remain on the line after the transformation. The vectors in red are not eigenvectors, therefore their direction is altered by the transformation. Blue vectors are scaled by a factor of 3 -- which is the eigenvalue for the blue eigenvector, whereas the magenta vectors are not scaled, since their eigenvalue is 1. Link to Wikipedia article.
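You can check the numbers from the example directly in R:

    A <- matrix(c(2, 1,
                  1, 2), nrow = 2)   # the transformation T from the text
    eigen(A)                         # eigenvalues 3 and 1; eigenvectors along (1, 1) and (1, -1)
    A %*% c(1, 1)                    # (3, 3): same direction, scaled by 3
    A %*% c(1, -1)                   # (1, -1): unchanged, since its eigenvalue is 1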
4,293
Visually interesting statistics concepts that are easy to explain
The bias-variance trade-off is another very important concept in statistics/machine learning. The data points in blue come from $y(x)=\sin(x)+\epsilon$, where $\epsilon$ has a normal distribution. The red curves are estimated using different samples. The figure "Large Variance and Small Bias" presents the original model, which is a radial basis function network with 24 Gaussian bases. The figure "Small Variance and Large Bias" presents the same model, regularized. Note that in the figure "Small Variance and Large Bias" the red curves are very close to each other (small variance). The same does not happen in the figure "Large Variance and Small Bias" (large variance). [Figures: "Small Variance and Large Bias" and "Large Variance and Small Bias".] From my computer methods and machine learning course.
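A rough R analogue of the same experiment, using smoothing splines instead of an RBF network (so this is only a sketch of the idea): a very flexible fit varies a lot from sample to sample but is nearly unbiased, while a heavily constrained fit is stable but systematically off.

    set.seed(5)
    estimate_at_pi_half <- function(dof) {
      x <- runif(30, 0, 2 * pi)
      y <- sin(x) + rnorm(30, sd = 0.3)
      predict(smooth.spline(x, y, df = dof), x = pi / 2)$y   # estimate of sin(pi/2) = 1
    }
    flexible <- replicate(500, estimate_at_pi_half(dof = 20))  # low bias, high variance
    rigid    <- replicate(500, estimate_at_pi_half(dof = 2))   # high bias, low variance
    c(bias_flexible = mean(flexible) - 1, bias_rigid = mean(rigid) - 1)
    c(var_flexible  = var(flexible),      var_rigid  = var(rigid))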
4,294
Visually interesting statistics concepts that are easy to explain
Here is a very basic one, but in my opinion very powerful, because it's not only a visual explanation of a concept but also asks for visualising or imagining a real object depicting the concept: Neophytes sometimes have a hard time understanding very basic concepts like the mean, median and mode. So, to help them better grasp the idea of the mean: take this skewed distribution and 3D-print it in plastic, or carve it in wood, so that you now have a real object in your hands. Try to balance it using just one finger... the mean is the only point where you can do that.
4,295
Visually interesting statistics concepts that are easy to explain
The figure below shows the importance of precisely defining the objectives and assumptions of a clustering problem (and of statistical problems in general). Different models may provide very different results. (Source: scikit-learn.)
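A compact way to see this in R with base functions only (simulated data): k-means implicitly assumes roughly spherical clusters and so splits two concentric rings incorrectly, while single-linkage hierarchical clustering recovers them.

    set.seed(4)
    theta  <- runif(400, 0, 2 * pi)
    radius <- rep(c(1, 3), each = 200) + rnorm(400, sd = 0.1)   # two concentric rings
    X <- cbind(radius * cos(theta), radius * sin(theta))
    truth <- rep(1:2, each = 200)

    km <- kmeans(X, centers = 2)$cluster
    hc <- cutree(hclust(dist(X), method = "single"), k = 2)

    table(kmeans = km, truth)          # the rings get mixed
    table(single_linkage = hc, truth)  # the rings are separated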
4,296
Visually interesting statistics concepts that are easy to explain
Okay, so this one is less about illustrating a basic concept, but it is very interesting both visually and in terms of applications. I think showing people what they can ultimately accomplish with what they are learning is a great form of motivation, so you can pitch it as an example of developing and applying statistical models, which depends on all the more fundamental statistical concepts they are learning. With that, I present to you... Species Distribution Modelling. It's actually a very broad topic with a lot of nuance in terms of types of data, data collection, model setup, assumptions, applications, interpretations, etc. But very simply put, you take sample information about where a species occurs, then use those locations to sample potentially relevant environmental variables (e.g., climate data, soil data, habitat data, elevation, light pollution, noise pollution, etc.), develop a model using the data (e.g., a GLM, a point process model, etc.), then use that model to predict across a landscape using your environmental variables. Depending on how the model was set up, what's predicted might be potential suitable habitat, likely areas of occurrence, species distribution, etc. You can also change the environmental variables to see how they impact these results. People have used SDMs to find previously unknown populations of a species, they've used them to discover new species, with historical climate data they've used them to predict backwards in time where a species used to occur and how it got to where it is today (even all the way back through glaciation periods), and with things like future climate predictions and habitat loss they are used to predict how human activities will affect the species in the future. These are just a few examples, and if I have time later I'll find and link interesting papers. In the meantime, here's a quick image I found illustrating the basics:
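To make the "very simply put" workflow concrete, here is a toy R sketch with entirely simulated presence/absence data and two made-up environmental covariates (a real SDM would use field records and raster layers instead):

    set.seed(8)
    n <- 500
    temp   <- runif(n, 0, 30)                       # hypothetical covariates at survey sites
    precip <- runif(n, 0, 200)
    true_suitability <- plogis(-8 + 0.5 * temp + 0.02 * precip)
    presence <- rbinom(n, 1, true_suitability)      # simulated presence/absence records

    fit <- glm(presence ~ temp + precip, family = binomial)

    grid <- expand.grid(temp   = seq(0, 30,  length.out = 50),
                        precip = seq(0, 200, length.out = 50))
    grid$suitability <- predict(fit, newdata = grid, type = "response")
    # plotting grid$suitability over the grid gives a simple "habitat suitability map"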
4,297
Is the COVID-19 pandemic curve a Gaussian curve?
It seems like there are three questions here:
- Is the actual distribution of cases Gaussian? No.
- Are the curves given in the graphic Gaussian? Not quite. I think the red one is a little bit skewed, and the blue one is definitely skewed.
- Can plots of a value versus time be considered Gaussian? Yes. In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the form $$f(x) = ae^{-{\frac {(x-b)^{2}}{2c^{2}}}}$$ for arbitrary real constants a, b and non-zero c. https://en.wikipedia.org/wiki/Gaussian_function There is no requirement that it be a probability distribution.
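To see that this is just a curve shape with no probabilistic content, you can plot one directly in R (the constants are arbitrary; s plays the role of c in the formula above):

    a <- 3; b <- 10; s <- 2
    curve(a * exp(-(x - b)^2 / (2 * s^2)), from = 0, to = 20,
          ylab = "f(x)", main = "A Gaussian curve that is not a probability density")
    # its area is a * s * sqrt(2 * pi), which is not 1 unless a is chosen to make it so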
4,298
Is the COVID-19 pandemic curve a Gaussian curve?
No. For example:

- Not in the sense of a Gaussian probability distribution: the bell curve of a normal (Gaussian) distribution is a histogram (a map of probability density against values of a single variable), but the curves you quote are (as you note) a map of the values of one variable (new cases) against a second variable (time). (@Accumulation and @TobyBartels point out that Gaussian curves are mathematical constructs that may be unrelated to probability distributions; given that you are asking this question on the statistics SE, I assumed that addressing the Gaussian distribution was an important part of answering the question.)
- The possible values under a normal distribution extend from $-\infty$ to $\infty$, but an epidemic curve cannot have negative values on the y axis, and traveling far enough left or right on the x axis you will run out of cases altogether, either because the disease does not exist or because Homo sapiens does not exist.
- Normal distributions are continuous, but the phenomena epidemic curves measure are discrete, not continuous: they represent new cases during each discrete unit of time. While we can subdivide time into smaller meaningful units (to a degree), we eventually run into the fact that individuals with new infections are count data (discrete).
- Normal distributions are symmetric about their mean, but despite the cartoon conveying a useful public health message about the need to flatten the curve, actual epidemic curves are frequently skewed to the right, with long thin tails as shown below.
- Normal distributions are unimodal, but actual epidemic curves may feature one or more bumps (i.e. they may be multi-modal; they may even, as in @SextusEmpiricus' answer, be endemic and return cyclically).

Finally, here is an epidemic curve for COVID-19 in China; you can see that the curve generally diverges from the Gaussian curve (of course there are issues with the reliability of the data, given that many cases were not counted):
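To illustrate the skewness point with something reproducible (this is my sketch, not the answerer's figure or data), the snippet below runs a textbook SIR model with made-up parameters and checks that the daily-new-cases curve is right-skewed — exactly the feature a symmetric Gaussian cannot capture.

```python
# Hypothetical illustration (mine, not the answerer's data or figure): daily new
# cases from a simple SIR model are right-skewed, unlike a Gaussian with the same
# mean and standard deviation.
import numpy as np

N, beta, gamma = 1_000_000, 0.3, 0.1    # made-up population and rates (R0 = 3)
S, I, R = N - 10.0, 10.0, 0.0
new_cases = []

for day in range(365):                  # one Euler step per day
    infections = beta * S * I / N
    recoveries = gamma * I
    S, I, R = S - infections, I + infections - recoveries, R + recoveries
    new_cases.append(infections)

new_cases = np.array(new_cases)
days = np.arange(365)

# Treat the incidence curve as a normalised histogram over days and take moments.
w = new_cases / new_cases.sum()
mean_day = (days * w).sum()
sd_days = np.sqrt(((days - mean_day) ** 2 * w).sum())
skewness = (((days - mean_day) / sd_days) ** 3 * w).sum()

print(f"peak on day {new_cases.argmax()}, mean day {mean_day:.1f}, skewness {skewness:.2f}")
# Positive skewness and a peak earlier than the mean day: the curve has a long
# right tail, so a symmetric Gaussian can only approximate it roughly.
```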
4,299
Is the COVID-19 pandemic curve a Gaussian curve?
Epidemiological curves for respiratory infections are very irregular. See for instance the SARS outbreak of 2002/2003 (https://www.who.int/csr/sars/epicurve/epiindex/en/index1.html), and for endemic diseases they may have a seasonal pattern; see for instance the euromomo logo (source: euromomo.eu).

Besides the "flatten the curve" picture in general not being a Gaussian curve, the situation will also be more nuanced. The image that goes around on the internet is a very extreme case where the curve sticks far above the threshold and is halved in size as a result of the measures. It sketches a perfect situation to argue for drastic measures, which may not necessarily be so much the case with COVID-19. More nuanced representations show different thresholds and more subtle differences between the curves, like here: https://www.vaccinarsinpuglia.org/notizie/2017/10/al-via-la-sorveglianza-dellinfluenza-stagione-2017-18
4,300
Is the COVID-19 pandemic curve a Gaussian curve?
I'm not an epidemiologist, and you should put this question to epidemiologists.

First of all, drawing Gaussian curves is simple, since even basic plotting software has them implemented (e.g. Microsoft Excel), so when people need to draw "a distribution", they often draw Gaussians. The "flatten the curve" figures are meant to show the general idea of the phenomenon, not the exact distribution of what will or could happen (nobody knows that in advance, since there are too many unknowns and too many moving parts). Even the scales of the figures are not realistic; some experts point out that the difference may be much larger than in such figures.

As for the Gaussian shape of the epidemic, as far as I know this idea is known as Farr's law: first the number of infected people rises, then it falls, so the curve is similar to a Gaussian, but it is far from an exact fit. You can find discussion in this Twitter thread, which gives as an example a study that applied Farr's law to predicting HIV/AIDS cases in the US; as you can see from the plot, the prediction has nothing to do with the actual outcome. You can find some more serious figures in the widely cited recent paper by Ferguson et al. (2020). As you can see, they rise and fall, but are far from Gaussian; in some simulations they are even multimodal or skewed. Of course, these are still simulations, so much more simplified than what we could expect from actual data.
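As a hypothetical illustration of why Farr's-law-style Gaussian extrapolation can go badly wrong (my own toy example, unrelated to the HIV/AIDS study or to Ferguson et al.), the snippet below fits a Gaussian — via a quadratic in log space — to the early part of a right-skewed, epidemic-like curve and compares the implied total to the actual one.

```python
# Hypothetical Farr's-law-style exercise (my own toy example, unrelated to the
# studies cited above): fit a Gaussian to the early part of a right-skewed,
# epidemic-like curve and see how far the extrapolated total is from the truth.
import numpy as np

days = np.arange(1, 201)
k, theta = 3, 15.0                           # made-up gamma-shaped incidence curve
incidence = 1e5 * days ** (k - 1) * np.exp(-days / theta) / (2 * theta ** k)

peak = incidence.argmax()
window = days <= days[peak] + 5              # "observe" only up to just past the peak

# A Gaussian is a parabola in log space, so fit log(incidence) with a quadratic.
p2, p1, p0 = np.polyfit(days[window], np.log(incidence[window]), 2)
c2 = -1 / (2 * p2)                           # implied Gaussian variance
b = p1 * c2                                  # implied peak day
a = np.exp(p0 + b ** 2 / (2 * c2))           # implied peak height

gaussian_total = a * np.sqrt(2 * np.pi * c2)
print(f"actual total cases         ~ {incidence.sum():,.0f}")
print(f"Gaussian-fit extrapolation ~ {gaussian_total:,.0f}")
# The symmetric fit cannot reproduce the long right tail, so the extrapolated
# total misses the actual one by a wide margin.
```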