1,301
When to use gamma GLMs?
In my opinion, a gamma GLM assumes that the errors lie in a family of gamma distributions with the same shape, with the scales changing according to the related formula. But it is difficult to do model diagnosis. Note that a simple QQ plot is not suitable here, because it presumes a single common distribution, while ours is a family of distributions with different variances. Naively, a residual plot can be used to check that the residuals have different scales but the same shape, usually with long tails. In my experience, the gamma GLM may be tried for long-tailed problems, and it is widely used in the insurance and environmental sectors, among others. But the assumptions are difficult to test and the model often does not perform well, so various papers argue for other distribution families on the same problems, such as the inverse Gaussian. In practice, such choices seem to depend on expert judgement and industry experience, which limits the use of the gamma GLM.
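For concreteness, here is a minimal sketch of fitting a gamma GLM with a log link using Python's statsmodels; the data are simulated, and the predictor, shape parameter, and seed are invented purely for illustration.

```python
# A minimal sketch (not from the original answer): simulate right-skewed data
# whose variance grows with the squared mean, then fit a gamma GLM with a log
# link.  All variable names and parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 2, size=n)
mu = np.exp(1.0 + 0.5 * x)              # mean tied to x through a log link
shape = 2.0                             # common shape; scale varies with mu
y = rng.gamma(shape, mu / shape, n)     # so Var(y) is proportional to mu^2

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary())

# Rough diagnostic: deviance residuals should look roughly constant in spread
# when the gamma variance function (Var ~ mu^2) is appropriate.
print("SD of deviance residuals:", fit.resid_deviance.std())
```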
1,302
Most interesting statistical paradoxes
It's not a paradox per se, but it is a puzzling comment, at least at first. During World War II, Abraham Wald was a statistician for the U.S. government. He looked at the bombers that returned from missions and analyzed the pattern of the bullet "wounds" on the planes. He recommended that the Navy reinforce areas where the planes had no damage. Why? We have selection effects at work. This sample suggests that damage inflicted in the observed areas could be withstood. Either planes were never hit in the untouched areas, an unlikely proposition, or strikes to those parts were lethal. We care about the planes that went down, not just those that returned. Those that went down were likely hit in places that were untouched on the planes that survived. For copies of his original memoranda, see here. For a more modern application, see this Scientific American blog post. Expanding on the theme, according to this blog post, during World War I, the introduction of a tin helmet led to more head wounds than a standard cloth hat. Was the new helmet worse for soldiers? No; though injuries were higher, fatalities were lower.
1,303
Most interesting statistical paradoxes
Another example is the ecological fallacy. Example Suppose that we look for a relationship between voting and income by regressing the vote share for then-Senator Obama on the median income of a state (in thousands). We get an intercept of approximately 20 and a slope coefficient of 0.61. Many would interpret this result as saying that higher income people are more likely to vote for Democrats; indeed, popular press books have made this argument. But wait, I thought that rich people were more likely to be Republicans? They are. What this regression is really telling us is that rich states are more likely to vote for a Democrat and poor states are more likely to vote for a Republican. Within a given state, rich people are more likely to vote Republican and poor people are more likely to vote Democrat. See the work of Andrew Gelman and his coauthors. Without further assumptions, we cannot use group-level (aggregate) data to make inferences about individual-level behavior. This is the ecological fallacy. Group-level data can only tell us about group-level behavior. To make the leap to individual-level inferences, we need the constancy assumption. Here, the voting choice of individuals must not vary systematically with the median income of a state; a person who earns \$X in a rich state must be just as likely to vote for a Democrat as someone who earns \$X in a poor state. But people in Connecticut, at all income levels, are more likely to vote for a Democrat than people in Mississippi at those same income levels. Hence, the constancy assumption is violated and we are led to the wrong conclusion (fooled by aggregation bias). This topic was a frequent hobbyhorse of the late David Freedman; see this paper, for example. In that paper, Freedman provides a means for bounding individual-level probabilities using group data. Comparison to Simpson's paradox Elsewhere in this CW, @Michelle proposes Simpson's paradox as a good example, as it indeed is. Simpson's paradox and the ecological fallacy are closely related, yet distinct. The two examples differ in the nature of the data given and the analysis used. The standard formulation of Simpson's paradox is a two-way table. In our example here, suppose that we have individual data and we classify each individual as high or low income. We would get an income-by-vote 2x2 contingency table of the totals. We'd see that a higher share of high income people voted for the Democrat relative to the share of low income people. Were we to create a contingency table for each state, however, we'd see the opposite pattern. In the ecological fallacy, we don't collapse income into a dichotomous (or perhaps multichotomous) variable. To get to the state level, we take the mean (or median) state income and the state vote share, run a regression, and find that higher income states are more likely to vote for the Democrat. If we kept the individual-level data and ran the regression separately by state, we'd find the opposite effect. In summary, the differences are: Mode of analysis: We could say, following our SAT prep skills, that Simpson's paradox is to contingency tables as the ecological fallacy is to correlation coefficients and regression. Degree of aggregation/nature of data: Whereas the Simpson's paradox example compares two numbers (Democrat vote share among high income individuals versus the same for low income individuals), the ecological fallacy uses 50 data points (i.e., each state) to calculate a correlation coefficient. 
To get the full story in the Simpson's paradox example, we'd just need the two numbers from each of the fifty states (100 numbers), while in the ecological fallacy case, we need the individual-level data (or else be given state-level correlations/regression slopes). General observation @NeilG comments that this just seems to be saying that you can't have any selection on unobservables/omitted variables bias issues in your regression. That's right! At least in the regression context, I think that nearly any "paradox" is just a special case of omitted variables bias. Selection bias (see my other response on this CW) can be controlled for by including the variables that drive the selection. Of course, these variables are typically unobserved, driving the problem/paradox. Spurious regression (my other other response) can be overcome by adding a time trend. These cases say, essentially, that you have enough data, but need more predictors. In the case of the ecological fallacy, it's true, you need more predictors (here, state-specific slopes and intercepts). But you also need more observations, at the individual rather than the group level, to estimate these relationships. (Incidentally, if you have extreme selection where the selection variable perfectly divides treatment and control, as in the WWII example that I give, you may need more data to estimate the regression as well; there, the downed planes.)
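Here is a small simulation sketch of the sign flip (the states, incomes, and coefficients are all invented; this is not the actual election data): higher-income states vote more Democratic while, within a state, higher-income individuals vote less Democratic.

```python
# Simulation sketch of the ecological fallacy (all states, incomes, and
# coefficients are invented for illustration).
import numpy as np

rng = np.random.default_rng(1)
n_states, n_per_state = 50, 200

state_income = rng.normal(50, 10, n_states)    # state median income, $1000s
state_lean = 0.02 * state_income               # richer states lean more Democratic

incomes, votes = [], []
for s in range(n_states):
    income = rng.normal(state_income[s], 8, n_per_state)
    # ...but within a state, richer individuals are less likely to vote Democratic
    p_dem = 1 / (1 + np.exp(-(state_lean[s] - 0.03 * (income - state_income[s]))))
    incomes.append(income)
    votes.append(rng.random(n_per_state) < p_dem)

inc = np.concatenate(incomes)
dem = np.concatenate(votes).astype(float)
centered = inc - np.repeat(state_income, n_per_state)

within_slope = np.polyfit(centered, dem, 1)[0]                        # negative
between_slope = np.polyfit(state_income,
                           [v.mean() for v in votes], 1)[0]           # positive

print(f"within-state slope:  {within_slope:+.4f}  (richer person -> less Democratic)")
print(f"between-state slope: {between_slope:+.4f}  (richer state  -> more Democratic)")
```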
1,304
Most interesting statistical paradoxes
My contribution is Simpson's paradox, because the reasons for the paradox are not intuitive to many people, so it can be really hard to explain the findings to lay people in plain English. tl;dr version of the paradox: the direction (or significance) of an association appears to change depending on how the data are partitioned, and the cause is often a confounding variable. Another good outline of the paradox is here.
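A tiny worked example makes the reversal concrete; the counts below are purely illustrative (they mirror the oft-quoted kidney-stone figures).

```python
# Hypothetical counts (successes, patients) that reproduce the reversal.
counts = {
    ("A", "easy"): (81, 87),   ("A", "hard"): (192, 263),
    ("B", "easy"): (234, 270), ("B", "hard"): (55, 80),
}

for t in ("A", "B"):
    easy, hard = counts[(t, "easy")], counts[(t, "hard")]
    total = (easy[0] + hard[0], easy[1] + hard[1])
    print(f"Treatment {t}: easy {easy[0]/easy[1]:.0%}, "
          f"hard {hard[0]/hard[1]:.0%}, overall {total[0]/total[1]:.0%}")

# Treatment A wins within each stratum yet loses overall, because A was
# applied mostly to the hard cases -- the confounder.
```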
1,305
Most interesting statistical paradoxes
There are no paradoxes in statistics, only puzzles waiting to be solved. Nevertheless, my favourite is the two envelope "paradox". Suppose I put two envelopes in front of you and tell you that one contains twice as much money as the other (but not which is which). You reason as follows. Suppose the left envelope contains $x$; then with 50% probability the right envelope contains $2x$ and with 50% probability it contains $0.5x$, for an expected value of $1.25x$. But of course you can simply reverse the envelopes and conclude instead that the left envelope contains $1.25$ times the value of the right envelope. What happened?
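One way to see that nothing is actually gained by swapping is to fix a concrete prior for the smaller amount and simulate; the amounts below are arbitrary.

```python
# Sketch: give the smaller amount a concrete prior and simulate; "keep" and
# "swap" then have the same expected value.  The amounts are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
small = rng.choice([1, 2, 4, 8, 16], size=n)      # smaller envelope's contents
pair = np.stack([small, 2 * small], axis=1)       # the two envelopes
pick = rng.integers(0, 2, size=n)                 # which one you happen to pick

keep = pair[np.arange(n), pick]
swap = pair[np.arange(n), 1 - pick]
print("keep:", keep.mean(), " swap:", swap.mean())   # essentially equal
# The "1.25x" argument silently treats the observed amount x as if it said
# nothing about whether you hold the larger or smaller envelope.
```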
1,306
Most interesting statistical paradoxes
The Sleeping Beauty Problem. This is a recent invention; it was heavily discussed within a small set of philosophy journals over the last decade. There are staunch advocates for two very different answers (the "Halfers" and "Thirders"). It raises questions about the nature of belief, probability, and conditioning, and has caused people to invoke a quantum-mechanical "many worlds" interpretation (among other bizarre things). Here is the statement from Wikipedia: Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details. On Sunday she is put to sleep. A fair coin is then tossed to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday and Tuesday. But when she is put to sleep again on Monday, she is given a dose of an amnesia-inducing drug that ensures she cannot remember her previous awakening. In this case, the experiment ends after she is interviewed on Tuesday. Any time Sleeping Beauty is awakened and interviewed, she is asked, "What is your credence now for the proposition that the coin landed heads?" The Thirder position is that S.B. should respond "1/3" (this is a simple Bayes' Theorem calculation) and the Halfer position is that she should say "1/2" (because that's the correct probability for a fair coin, obviously!). IMHO, the entire debate rests on a limited understanding of probability, but isn't that the whole point of exploring apparent paradoxes? (Illustration from Project Gutenberg.) Although this is not the place to try to resolve paradoxes--only to state them--I don't want to leave people hanging and I'm sure most readers of this page don't want to wade through the philosophical explanations. We can take a tip from E. T. Jaynes, who replaces the question “how can we build a mathematical model of human common sense”—which is something we need in order to think through the Sleeping Beauty problem—by “How could we build a machine which would carry out useful plausible reasoning, following clearly defined principles expressing an idealized common sense?” Thus, if you like, replace S. B. by Jaynes' thinking robot. You can clone this robot (instead of administering a fanciful amnesiac drug) for the Tuesday portion of the experiment, thereby creating a clear model of the S. B. setup that can be unambiguously analyzed. Modeling this in a standard way using statistical decision theory then reveals there are really two questions being asked here (what is the chance a fair coin lands heads? and what is the chance the coin has landed heads, conditional on the fact that you were the clone who was awakened?). The answer is either 1/2 (in the first case) or 1/3 (in the second, using Bayes' Theorem). No quantum mechanical principles were involved in this solution :-). References Arntzenius, Frank (2002). Reflections on Sleeping Beauty. Analysis 62.1 pp 53-62. Elga, Adam (2000). Self-locating belief and the Sleeping Beauty Problem. Analysis 60 pp 143-7. Franceschi, Paul (2005). Sleeping Beauty and the Problem of World Reduction. Preprint. Groisman, Berry (2007). The end of Sleeping Beauty’s nightmare. Lewis, D (2001). Sleeping Beauty: reply to Elga. Analysis 61.3 pp 171-6. Papineau, David and Victor Dura-Vila (2008). A thirder and an Everettian: a reply to Lewis’s ‘Quantum Sleeping Beauty’. Pust, Joel (2008). Horgan on Sleeping Beauty. Synthese 160 pp 97-101.
Vineberg, Susan (undated, perhaps 2003). Beauty’s Cautionary Tale. All can be found (or at least were found several years ago) on the Web.
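For readers who prefer simulation to philosophy, here is a small sketch of the two counts behind the two answers (the number of runs is arbitrary).

```python
# Simulation sketch: count heads per experiment and per awakening.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
heads = rng.random(n) < 0.5                 # the Sunday coin toss
awakenings = np.where(heads, 1, 2)          # heads -> 1 interview, tails -> 2

print("P(heads) per experiment:", heads.mean())                      # ~ 1/2
print("P(heads) per awakening :", heads.sum() / awakenings.sum())    # ~ 1/3
```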
1,307
Most interesting statistical paradoxes
The St. Petersburg paradox, which makes you think differently about the concept and meaning of expected value. The intuition (mainly for people with a background in statistics) and the calculation give different results.
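A quick simulation sketch of the game (payoff $2^k$ when the first head appears on toss $k$; the sample sizes are arbitrary) shows the tension: the expected value is infinite, yet sample averages stay modest and grow only slowly.

```python
# Simulation sketch of the St. Petersburg game.
import numpy as np

rng = np.random.default_rng(4)
for n in (10**3, 10**5, 10**7):
    k = rng.geometric(0.5, size=n)      # toss on which the first head appears
    payoff = 2.0 ** k
    print(f"n = {n:>10,}   average payoff = {payoff.mean():8.2f}")
```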
1,308
Most interesting statistical paradoxes
The Jeffreys-Lindley paradox, which shows that under some circumstances default frequentist and Bayesian methods of hypothesis testing can give completely contradictory answers. It really forces users to think about exactly what these forms of testing mean, and to consider whether that's what they really want. For a recent example see this discussion.
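A small numerical sketch of the effect, using a point null against a normal prior under the alternative; the prior scale $\tau = \sigma$ is an arbitrary illustrative choice, and the Bayes factor formula is the standard one for this conjugate-normal setup.

```python
# Sketch: hold the z statistic at 1.96 (two-sided p ~ 0.05) while n grows,
# and compute the Bayes factor BF01 for H0: mu = 0 against mu ~ N(0, tau^2).
import numpy as np

z, sigma, tau = 1.96, 1.0, 1.0
for n in (10, 10**3, 10**5, 10**7):
    r = n * tau**2 / sigma**2
    bf01 = np.sqrt(1 + r) * np.exp(-0.5 * z**2 * r / (1 + r))
    print(f"n = {n:>9,}:  p ~ 0.05 rejects H0, but BF01 = {bf01:10.2f}")

# As n grows, the same p-value corresponds to ever-stronger evidence FOR the
# null on the Bayesian side -- the Jeffreys-Lindley tension.
```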
1,309
Most interesting statistical paradoxes
Sorry, but I can't help myself (I, too, love statistical paradoxes!). Again, perhaps not a paradox per se and another example of omitted variables bias. Spurious causation/regression Any variable with a time trend is going to be correlated with another variable that also has a time trend. For example, my weight from birth to age 27 is going to be highly correlated with your weight from birth to age 27. Obviously, my weight isn't caused by your weight. If it was, I'd ask that you go to the gym more frequently, please. Here's an omitted variables explanation. Let my weight be $x_t$ and your weight be $y_t$, where $$\begin{align*}x_t &= \alpha_0 + \alpha_1 t + \epsilon_t \text{ and} \\ y_t &= \beta_0 + \beta_1 t + \eta_t.\end{align*}$$ Then the regression $$\begin{equation*}y_t = \gamma_0 + \gamma_1 x_t + \nu_t\end{equation*}$$ has an omitted variable---the time trend---that is correlated with the included variable, $x_t$. Hence, the coefficient $\gamma_1$ will be biased (in this case, it will be positive, as our weights grow over time). When you are performing time series analysis, you need to be sure that your variables are stationary or you'll get these spurious causation results. (I fully admit that I plagiarized my own answer given here.)
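A short simulation sketch (the trends, noise levels, and "weights" are invented) shows both the spurious relationship and how including the time trend removes it.

```python
# Simulation sketch: two independent trending series look strongly related
# until the time trend is included as a regressor.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(27 * 12)                            # months from birth to age 27
x = 3 + 0.25 * t + rng.normal(0, 4, t.size)       # "my weight"
y = 4 + 0.22 * t + rng.normal(0, 4, t.size)       # "your weight"

slope, _ = np.polyfit(x, y, 1)
print(f"correlation = {np.corrcoef(x, y)[0, 1]:.3f}, slope of y on x = {slope:.3f}")

# Adding the omitted variable (the time trend) removes the spurious effect.
X = np.column_stack([np.ones_like(t), x, t]).astype(float)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"slope of y on x, controlling for t = {beta[1]:.3f}")
```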
1,310
Most interesting statistical paradoxes
One of my favorites is the Monty Hall problem. I remember learning about it in an elementary stats class and telling my dad; since both of us were in disbelief, I simulated random numbers and we tried the problem. To our amazement it was true. Basically, the problem states that if there are three doors on a game show, behind one of which is a prize and behind the other two nothing, and you choose a door, are then shown that one of the remaining two doors is not the prize door, and are allowed to switch your choice, then you should switch to the remaining door. Here's the link to an R simulation as well: LINK
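For convenience, here is a quick simulation sketch in Python as well (not the linked R script).

```python
# Quick Monty Hall simulation: switching wins about 2/3 of the time.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
prize = rng.integers(0, 3, n)      # door hiding the prize
choice = rng.integers(0, 3, n)     # contestant's first pick

# The host always opens a non-chosen, non-prize door, so switching wins
# exactly when the first pick was wrong.
print("stay wins:  ", (choice == prize).mean())
print("switch wins:", (choice != prize).mean())
```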
1,311
Most interesting statistical paradoxes
Parrondo's Paradox: From Wikipedia: "Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is: There exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately. Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed." As alluring as the paradox might sound to the financial crowd, it does have requirements that are not readily available in financial time series. Even though a few of the component strategies can be losing, the offsetting strategies require unequal and stable probabilities of much greater or less than 50% in order for the ratcheting effect to kick in. It would be difficult to find financial strategies whereby one has $P_B(W)=3/4+\epsilon$ and the other, $P_A(W)=1/10 + \epsilon$, over long periods. There's also a more recent related paradox called the "Allison mixture," which shows we can take two IID and non-correlated series, and randomly scramble them such that certain mixtures can create a resulting series with non-zero autocorrelation.
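Here is a simulation sketch of the textbook version of Parrondo's games (the capital-modulo-3 game B with $\epsilon = 0.005$; these parameters are the standard illustrative ones, not the probabilities quoted above): each game loses on its own, but a random mixture wins.

```python
# Textbook Parrondo games.  Game A: win with prob 1/2 - eps.  Game B: win
# with prob 1/10 - eps when capital is a multiple of 3, else 3/4 - eps.
import numpy as np

rng = np.random.default_rng(7)
eps, steps, runs = 0.005, 10_000, 200

def average_capital(strategy):
    capital = np.zeros(runs)
    for _ in range(steps):
        p_b = np.where(capital % 3 == 0, 0.10 - eps, 0.75 - eps)
        if strategy == "A":
            p = np.full(runs, 0.5 - eps)
        elif strategy == "B":
            p = p_b
        else:                                  # flip a fair coin to pick A or B
            p = np.where(rng.random(runs) < 0.5, 0.5 - eps, p_b)
        capital += np.where(rng.random(runs) < p, 1, -1)
    return capital.mean()

for s in ("A", "B", "mix"):
    print(f"{s:>3}: average capital after {steps} plays = {average_capital(s):8.1f}")
```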
1,312
Most interesting statistical paradoxes
I like the following: The host is using an unknown distribution on $[0,1]$ to choose, independently, two numbers $x,y\in [0,1]$. The only thing known to the player about the distribution is that $P(x=y)=0$. The player is then shown the number $x$ and is asked to guess whether $y>x$ or $y<x$. Clearly, if the player always guesses $y>x$, then the player will be correct with probability $0.5$. However, surprisingly if not paradoxically, the player can improve on that strategy. I'm afraid I don't have a link to the problem (I heard it many years ago during a workshop).
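I believe the usual improvement is a randomized threshold, sketched below: draw a private random threshold $T$ from any distribution with positive density on $[0,1]$ and guess $y>x$ exactly when $x<T$. The host's Beta(2,5) distribution here is an arbitrary choice the player never sees.

```python
# Sketch of the randomized-threshold strategy.
import numpy as np

rng = np.random.default_rng(8)
n = 1_000_000
x = rng.beta(2, 5, n)              # host's hidden draws
y = rng.beta(2, 5, n)
t = rng.random(n)                  # player's private Uniform(0,1) threshold

correct = np.where(x < t, y > x, y < x)
print("always guess y > x :", (y > x).mean())    # ~ 0.5
print("threshold strategy :", correct.mean())    # strictly above 0.5
```

The strategy wins whenever the threshold happens to fall between $x$ and $y$, which has positive probability, so the success rate exceeds $0.5$ against any host distribution.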
1,313
Most interesting statistical paradoxes
It's interesting that the Two Child Problem and the Monty Hall Problem so often get mentioned together in the context of paradox. Both exhibit an apparent paradox first described in 1889, called Bertrand's Box Paradox, which can be generalized to represent either. I find it a most interesting "paradox" because the same very-educated, very-intelligent people answer those two problems in opposite ways with respect to this paradox. It also compares to a principle used in card games like bridge, known as the Principle of Restricted Choice, where its resolution is time-tested. Say you have a randomly selected item that I'll call a "box." Every possible box has at least one of two symmetric properties, but some have both. I'll call the properties "gold" and "silver." The probability that a box is just gold is P; and since the properties are symmetric, P is also the probability that a box is just silver. That makes the probability that a box has just one property 2P, and the probability that it has both 1-2P. If you are told a box is gold, but not whether it is silver, you might be tempted to say the chances it is just gold are P/(P+(1-2P))=P/(1-P). But then you would have to state the same probability for a one-color box if you were told it was silver. And if this probability is P/(1-P) whenever you are told just one color, it has to be P/(1-P) even if you aren't told a color. Yet we know it is 2P from the last paragraph. This apparent paradox is resolved by noting that if a box has only one color, there is no ambiguity about what color you will be told. But if it has two, there is an implied choice. You have to know how that choice was made in order to answer the question, and that is the root of the apparent paradox. If you aren't told, you can only assume a color was chosen at random, making the answer P/(P+(1-2P)/2)=2P. If you insist P/(1-P) is the answer, you are implicitly assuming there was no possibility the other color could have been mentioned unless it was the only color. In the Monty Hall Problem, the analogy for the colors is not very intuitive, but P=1/3. Answers based on the two unopened doors originally being equally likely to have the prize are assuming Monty Hall was required to open the door he did, even if he had a choice. That answer is P/(1-P)=1/2. The answer allowing him to choose at random is 2P=2/3 for the probability that switching will win. In the Two Child Problem, the colors in my analogy compare quite nicely to genders. With four cases, P=1/4. To answer the question, we need to know how it was determined that there was a girl in the family. If it was possible to learn about a boy in the family by that method, then the answer is 2P=1/2, not P/(1-P)=1/3. It's a little more complicated if you consider the name Florida, or "born on Tuesday," but the results are the same. The answer is exactly 1/2 if there was a choice, and most statements of the problem imply such a choice. And the reason "changing" from 1/3 to 13/27, or from 1/3 to "nearly 1/2," seems paradoxical and unintuitive, is because the assumption of no choice is unintuitive. In the Principle of Restricted Choice, say you are missing some set of equivalent cards - like the Jack, Queen, and King of the same suit. The chances start out even that any particular card belongs to a specific opponent. But after an opponent plays one, his chances of having any one of the others are decreased because he could have played that card if he had it.
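A small simulation of the two reporting protocols, using Bertrand's original three boxes (so P = 1/3 in the notation above), makes the difference concrete.

```python
# Bertrand's boxes: 0 = gold-gold, 1 = silver-silver, 2 = gold-silver.
# Given "this box contains gold", the answer depends on how you learned it.
import numpy as np

rng = np.random.default_rng(9)
n = 300_000
box = rng.integers(0, 3, n)

# Protocol 1: the informant names one of the box's metals at random.
told_gold_random = np.where(box == 0, True,
                    np.where(box == 1, False, rng.random(n) < 0.5))
# Protocol 2: the informant says "gold" whenever the box contains any gold.
told_gold_forced = (box == 0) | (box == 2)

print("random mention: P(gold-only | told gold) =",
      round((box[told_gold_random] == 0).mean(), 3), " (~ 2P = 2/3)")
print("forced mention: P(gold-only | told gold) =",
      round((box[told_gold_forced] == 0).mean(), 3), " (~ P/(1-P) = 1/2)")
```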
1,314
Most interesting statistical paradoxes
This is Simpson's Paradox again, but 'backwards' as well as forwards; it comes from Judea Pearl's book Causal Inference in Statistics: A Primer.[^1] The classic Simpson's Paradox works as follows: consider trying to choose between two doctors. You automatically choose the one with the best outcomes. But suppose the one with the best outcomes chooses the easiest cases. The other's poorer record is a consequence of trickier work. Now who do you choose? Better to look at the results stratified by difficulty and then decide. There is another side to the coin (another paradox) which says that the stratified outcomes can also lead you to the wrong choice. This time consider choosing to use a drug or not. The drug has a toxic side effect, but its therapeutic mechanism of action is through lowering blood pressure. Overall, the drug improves outcomes in the population, but when stratifying on post-treatment blood pressure the outcomes are worse in both the low and the high blood pressure groups. How can this be true? Because we have unintentionally stratified on the outcome, and within each outcome all that remains to observe is the toxic side effect. To clarify, imagine the drug is designed to fix broken hearts, and it does this by lowering the blood pressure, and instead of stratifying on blood pressure we stratify on fixed hearts. When the drug works, the heart is fixed (and the blood pressure will be lower), but some of the patients will also get the toxic side effect. Because the drug works, the 'fixed heart' group will have more patients who have taken the drug than there are patients taking the drug in the 'broken heart' group. More patients taking the drug means more patients getting side effects, and apparently (but falsely) better outcomes for patients who didn't take the drug. The patients who get better without taking the drug are just lucky. The patients who took the drug and got better are a mixture of those who needed the drug to get better, and those who would have been lucky anyway. Examining only patients with 'fixed hearts' means excluding patients who would have been fixed had they taken the drug. Excluding such patients means excluding the harm from not taking the drug, which in turn means we only see the harm from taking the drug. Simpson's paradox arises when there is a cause for the outcome other than the treatment, such as the fact that your doctor only does tricky cases. Controlling for the common cause (tricky versus easy cases) allows us to see the true effect. In the latter example, we have unintentionally stratified on an outcome, not on a cause, which means the true answer is in the aggregate, not the stratified, data. [^1]: Pearl J. Causal Inference in Statistics. John Wiley & Sons; 2016.
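Here is a simulation sketch of the second situation, with all probabilities invented for illustration: the drug helps overall through the mediator ("heart fixed") yet looks harmful within each stratum of that mediator.

```python
# T = drug, M = heart fixed (mediator), Y = recovery.  The drug raises the
# chance the heart is fixed and has a small toxic direct effect on recovery.
import numpy as np

rng = np.random.default_rng(10)
n = 200_000
T = rng.random(n) < 0.5
M = rng.random(n) < np.where(T, 0.8, 0.3)
Y = rng.random(n) < np.where(M, 0.9, 0.2) - 0.1 * T

print(f"overall      : treated {Y[T].mean():.3f} vs untreated {Y[~T].mean():.3f}")
for m, label in ((True, "heart fixed  "), (False, "heart broken ")):
    sel = M == m
    print(f"{label}: treated {Y[T & sel].mean():.3f} vs untreated {Y[~T & sel].mean():.3f}")
# The drug wins overall (it fixes more hearts) but loses within each stratum,
# where only its toxic side effect is visible.
```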
1,315
Most interesting statistical paradoxes
I find a simplified graphical illustration of the ecological fallacy (here the rich State/poor State voting paradox) helps me to understand on an intuitive level why we see a reversal of voting patterns when we aggregate State populations.
1,316
Most interesting statistical paradoxes
Suppose you obtained data on births in the royal family of some kingdom. In the family tree each birth was noted. What is peculiar about this family is that parents kept trying to have a baby until the first boy was born, and then did not have any more children. So your data potentially looks similar to this: G G B B G G B G B G G G G G G G G G B etc. Will the proportion of boys and girls in this sample reflect the general probability of giving birth to a boy (say 0.5)? The answer and explanation can be found in this thread.
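A quick simulation sketch of the stopping rule (assuming each birth is independently a boy with probability 0.5):

```python
# Each family has children until the first boy, then stops.
import numpy as np

rng = np.random.default_rng(11)
n_families = 100_000
children = rng.geometric(0.5, n_families)   # births until and including the first boy
boys = n_families                           # exactly one boy per family
girls = children.sum() - boys
print("proportion of boys:", boys / children.sum())   # ~ 0.5 despite the rule
```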
1,317
Most interesting statistical paradoxes
One of my "favorites", meaning that it's what drives me crazy about the interpretation of many studies (and often by the authors themselves, not just the media) is that of Survivorship Bias. One way to imagine it is suppose there's some effect that is very detrimental to the subjects, so much so that it has a very good chance of killing them. If subjects are exposed to this effect before the study, then by the time study begins, the exposed subjects that are still alive have a very high probability of having being unusually resilient. Literally natural selection at work. When this happens, the study will observe that exposed subjects are unusually healthy (since all the unhealthy ones already died or made sure to stop being exposed to the effect).This is often misinterpreted as implying that exposure is actually good for the subjects. This is a result of ignoring truncation (i.e. ignoring the subjects who died and did not make it to the study). Similarly, subjects who stop being exposed to the effect during the study are often incredibly unhealthy: this is because they have realized that continued exposure will probably kill them. But the study merely observes that those who quit are very unhealthy! @Charlie's answer about the WWII bombers can be thought of as an example of this, but there's plenty of modern examples too. A recent example are the studies reporting that drinking 8+ cups of coffee a day (!!) is linked to much higher heart health in subjects over 55 years of age. Plenty of people with PhD's interpreted this as "drinking coffee is good for your heart!", including the authors of the study. I read this as you have to have an incredibly healthy heart to be still drinking 8 cups of coffee a day after 55 years of age and not have a heart attack. Even if it doesn't kill you, the moment something looks worrisome about your health, everyone that loves you (plus your doctor) will immediately encourage you to stop drinking coffee. Further studies found that drinking so much coffee had no beneficial effects in younger groups, which I believe is more evidence that we are seeing a survivorship effect, rather than a positive causal effect. Yet there's plenty of PhD's running around saying "Science says drinking 8+ cups of coffee is good for seniors!"
1,318
Most interesting statistical paradoxes
Try the Borel–Kolmogorov paradox, where conditional probabilities behave badly. One example had the question Let $X_1, X_2$ be independent exponential random variables with parameter $1$. Find the conditional PDF of $X_1+X_2$ given that $\frac{X_1}{X_2}=1.$ Find the conditional PDF of $X_1+X_2$ given that $X_1-X_2=0.$ The events $\frac{X_1}{X_2}=1$ and $X_1-X_2=0$ are the same. Does this mean that conditioning on either of these two events should give the same answer? to which the answer appears to be $f_{X_1+X_2 \mid \frac{X_1}{X_2}=1}(x) = x e^{-x}$, a $\text{Gamma}(2,1)$ distribution with mean $2$ $f_{X_1+X_2 \mid X_1 - X_2=0}(x) = e^{-x}$, an $\text{Exp}(1)$ distribution with mean $1$ and this can be confirmed by simulation. Whether conditioning on $X_1=X_2$ is really consistent with the assumption that $X_1$ and $X_2$ are independent is a deeper question.
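A rough simulation sketch in R, approximating each conditioning event by a thin band (which is exactly where the choice of parametrization matters):

set.seed(1)
n  <- 2e6
x1 <- rexp(n); x2 <- rexp(n)
s  <- x1 + x2
eps <- 0.01
mean(s[abs(x1 / x2 - 1) < eps])   # roughly 2: conditioning on the ratio gives Gamma(2,1)
mean(s[abs(x1 - x2) < eps])       # roughly 1: conditioning on the difference gives Exp(1)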
1,319
Most interesting statistical paradoxes
Misspecification paradox If $T$ is a method of statistical inference with a certain model assumption, say the true $P$ is assumed to be in some set ${\cal P}$ (e.g., $P$ may be assumed to be an i.i.d. normal distribution model for data $X_1,\ldots,X_n$), it is standard practice (in some quarters) to run a model misspecification test $M$, i.e., a test of the null hypothesis $P\in {\cal P}$. Assuming that $P(M$ rejects$)>0$ for $P\in {\cal P}$ (which is pretty much always fulfilled), it follows that the conditional distribution upon non-rejection, $P(\bullet|M$ does not reject$)$, cannot be in ${\cal P}$. This is because $$ P(M\mbox{ rejects}|M\mbox{ does not reject})=0 $$ in contradiction to $P(M$ rejects$)>0\ \forall P\in {\cal P}$. If $T$ is only applied in case $M$ does not reject, it means that the distribution generating the data that go into $T$ is the conditional distribution, which is not in ${\cal P}$. In other words, testing the model assumption and passing it (i.e., not rejecting it and taking the model as valid for the data) actively violates the model assumption, even if it was fulfilled before! See https://academic.oup.com/philmat/article-abstract/15/2/166/1572953?redirectedFrom=fulltext, https://arxiv.org/abs/1908.02218. As I am (co-)author of these papers, I should acknowledge that in principle this has been known at least since Bancroft (1944), see the reference in the arXiv paper, although I believe I was the first to call it a paradox and to present it in a way that its paradoxical "nature" comes out.
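One way to see the effect numerically (a rough sketch, not taken from the cited papers): simulate many genuinely normal samples, keep only those that "pass" a Shapiro–Wilk test, and note that the retained samples no longer behave like unconditional normal samples — for instance, the upper tail of their sample-kurtosis distribution gets trimmed.

set.seed(1)
m <- 5000; n <- 20
kurt <- function(x) mean((x - mean(x))^4) / mean((x - mean(x))^2)^2
samples <- replicate(m, rnorm(n), simplify = FALSE)
pass    <- sapply(samples, function(s) shapiro.test(s)$p.value > 0.1)
k_all   <- sapply(samples, kurt)
quantile(k_all, 0.95)          # 95th percentile of kurtosis over all normal samples
quantile(k_all[pass], 0.95)    # smaller: conditioning on passing truncates the distribution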
1,320
Most interesting statistical paradoxes
I'm surprised no one has mentioned Newcomb's Paradox yet, although it is more heavily discussed in decision theory. It's definitely one of my favorites.
1,321
Most interesting statistical paradoxes
There are lists of paradoxes on Wikipedia! https://en.wikipedia.org/wiki/Category:Statistical_paradoxes https://en.wikipedia.org/wiki/Category:Probability_theory_paradoxes
1,322
Most interesting statistical paradoxes
The hot hand paradox. Quoting Miller and Sanjurjo's paper: Jack takes a coin from his pocket and decides that he will flip it 4 times in a row, writing down the outcome of each flip on a scrap of paper. After he is done flipping, he will look at the flips that immediately followed an outcome of heads, and compute the relative frequency of heads on those flips. Because the coin is fair, Jack of course expects this conditional relative frequency to be equal to the probability of flipping a heads: 0.5. Shockingly, Jack is wrong. If he were to sample 1 million fair coins and flip each coin 4 times, observing the conditional relative frequency for each coin, on average the relative frequency would be approximately 0.4. Intuitively, the problem is that the amount of available data (flips that immediately follow a head) is positively correlated with the number of heads in the sequence: a head followed by a tail ends the streak, so such "failures" tend to occur in sequences with few qualifying flips, where a single flip has a large impact on that sequence's proportion, and averaging the per-sequence proportions with equal weights therefore introduces a downward bias. Sampling bias caused by this paradox went undetected in a notable study on the hot hand phenomenon in basketball for over thirty years (Wikipedia).
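A quick R sketch of Jack's experiment (fair coin, 4 flips per sequence, relative frequency of heads on flips immediately following a head, averaged over sequences that contain at least one such flip):

set.seed(1)
prop_heads_after_heads <- function() {
  flips <- rbinom(4, 1, 0.5)
  after_heads <- flips[2:4][flips[1:3] == 1]   # flips that immediately follow a head
  if (length(after_heads) == 0) return(NA)     # no head among the first three flips
  mean(after_heads)
}
res <- replicate(1e5, prop_heads_after_heads())
mean(res, na.rm = TRUE)                        # roughly 0.40, not 0.5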
1,323
Most interesting statistical paradoxes
Let x, y, and z be mutually uncorrelated vectors. Yet the ratios x/z and y/z will typically be correlated, because they share the denominator z (sometimes called spurious correlation of ratios).
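A quick R sketch (independent vectors with mean 10, so the shared denominator stays away from zero):

set.seed(1)
n <- 10000
x <- rnorm(n, 10, 1); y <- rnorm(n, 10, 1); z <- rnorm(n, 10, 1)
cor(x, y)       # near 0
cor(x/z, y/z)   # clearly positive, induced by the shared denominator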
1,324
Maximum Likelihood Estimation (MLE) in layman terms
Say you have some data. Say you're willing to assume that the data comes from some distribution -- perhaps Gaussian. There are an infinite number of different Gaussians that the data could have come from (which correspond to the combination of the infinite number of means and variances that a Gaussian distribution can have). MLE will pick the Gaussian (i.e., the mean and variance) that is "most consistent" with your data (the precise meaning of consistent is explained below). So, say you've got a data set of $y = \{-1, 3, 7\}$. The most consistent Gaussian from which that data could have come has a mean of 3 and a variance of $32/3 \approx 10.7$ (the maximum-likelihood variance divides by $n$, not $n-1$). It could have been sampled from some other Gaussian. But one with a mean of 3 and variance of $32/3$ is most consistent with the data in the following sense: the probability of getting the particular $y$ values you observed is greater with this choice of mean and variance than it is with any other choice. Moving to regression: instead of the mean being a constant, the mean is a linear function of the data, as specified by the regression equation. So, say you've got data like $x = \{ 2,4,10 \}$ along with $y$ from before. The mean of that Gaussian is now the fitted regression model $X'\hat\beta$, where $\hat\beta =[-1.9,.9]$. Moving to GLMs: replace Gaussian with some other distribution (from the exponential family). The mean is now a linear function of the data, as specified by the regression equation, transformed by the link function. So, it's $g(X'\beta)$, where $g(x) = e^x/(1+e^x)$ for logit (with binomial data).
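A quick R check of these toy numbers (a sketch; note that the maximum-likelihood variance divides by $n$):

y <- c(-1, 3, 7)
x <- c(2, 4, 10)
mean(y)                  # 3, the ML estimate of the mean
mean((y - mean(y))^2)    # 32/3, about 10.67, the ML estimate of the variance
coef(lm(y ~ x))          # approximately (-1.9, 0.9)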
1,325
Maximum Likelihood Estimation (MLE) in layman terms
Maximum Likelihood Estimation (MLE) is a technique to find the most likely function that explains observed data. I think math is necessary, but don't let it scare you! Let's say that we have a set of points in the $x,y$ plane, and we want to know the function parameters $\beta$ and $\sigma$ that most likely fit the data (in this case we know the function because I specified it to create this example, but bear with me).

beta <- 2.7    # true slope used to simulate the data
sigma <- 1.3   # true error standard deviation
data <- data.frame(x = runif(200, 1, 10))
data$y <- 0 + beta*data$x + rnorm(200, 0, sigma)
plot(data$x, data$y)

In order to do a MLE, we need to make assumptions about the form of the function. In a linear model, we assume that the points follow a normal (Gaussian) probability distribution, with mean $x\beta$ and variance $\sigma^2$: $y \sim \mathcal{N}(x\beta, \sigma^2)$. The equation of this probability density function is: $$\frac{1}{\sqrt{2\pi\sigma^2}}\exp{\left(-\frac{(y_i-x_i\beta)^2}{2\sigma^2}\right)}$$ What we want to find is the parameters $\beta$ and $\sigma$ that maximize this probability for all points $(x_i, y_i)$. This is the "likelihood" function, $\mathcal{L}$: $$\mathcal{L} = \prod_{i=1}^n f(y_i \mid x_i; \beta, \sigma^2) = \prod_{i=1}^n \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\Big({-\dfrac{(y_i - x_i\beta)^2}{2\sigma^2}}\Big)$$ For various reasons, it's easier to use the log of the likelihood function: $$\log(\mathcal{L}) = -\frac{n}{2}\log(2\pi) -\frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i = 1}^n(y_i - x_i\beta)^2$$ We can code this as a function in R with $\theta = (\beta,\sigma)$.

linear.lik <- function(theta, y, X){
  n <- nrow(X)
  k <- ncol(X)
  beta <- theta[1:k]
  sigma2 <- theta[k+1]^2
  e <- y - X%*%beta
  logl <- -.5*n*log(2*pi) - .5*n*log(sigma2) - ( (t(e) %*% e)/ (2*sigma2) )
  return(-logl)
}

This function, at different values of $\beta$ and $\sigma$, creates a surface.

surface <- list()
k <- 0
for(beta in seq(0, 5, 0.1)){
  for(sigma in seq(0.1, 5, 0.1)){
    k <- k + 1
    logL <- linear.lik(theta = c(0, beta, sigma), y = data$y, X = cbind(1, data$x))
    surface[[k]] <- data.frame(beta = beta, sigma = sigma, logL = -logL)
  }
}
surface <- do.call(rbind, surface)
library(lattice)
wireframe(logL ~ beta*sigma, surface, shade = TRUE)

As you can see, there is a maximum point somewhere on this surface. We can find the parameters that specify this point with R's built-in optimization commands. This comes reasonably close to uncovering the true parameters (intercept $0$, $\beta = 2.7$, $\sigma = 1.3$).

linear.MLE <- optim(fn=linear.lik, par=c(1,1,1), lower = c(-Inf, -Inf, 1e-8), upper = c(Inf, Inf, Inf), hessian=TRUE, y=data$y, X=cbind(1, data$x), method = "L-BFGS-B")
linear.MLE$par
## [1] -0.1303868  2.7286616  1.3446534

Ordinary least squares is the maximum likelihood for a linear model, so it makes sense that lm would give us the same answers. (Note that $\sigma^2$ is used in determining the standard errors.)

summary(lm(y ~ x, data))
##
## Call:
## lm(formula = y ~ x, data = data)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -3.3616 -0.9898  0.1345  0.9967  3.8364
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.13038    0.21298  -0.612    0.541
## x            2.72866    0.03621  75.363   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.351 on 198 degrees of freedom
## Multiple R-squared:  0.9663, Adjusted R-squared:  0.9661
## F-statistic:  5680 on 1 and 198 DF,  p-value: < 2.2e-16
1,326
Maximum Likelihood Estimation (MLE) in layman terms
The maximum likelihood (ML) estimate of a parameter is the value of that parameter under which your actual observed data are most likely, relative to any other possible values of the parameter. The idea is that there are any number of "true" parameter values that could have led to your actually observed data with some non-zero (albeit perhaps small) probability. But the ML estimate gives the parameter value that would have led to your observed data with the highest probability. This must not be confused with the value of the parameter that is most likely to have actually produced your data! I like the following passage from Sober (2008, pp. 9-10) on this distinction. In this passage, we have some observed data denoted $O$ and a hypothesis denoted $H$. You need to remember that "likelihood" is a technical term. The likelihood of H, Pr(O|H), and the posterior probability of H, Pr(H|O), are different quantities and they can have different values. The likelihood of H is the probability that H confers on O, not the probability that O confers on H. Suppose you hear a noise coming from the attic of your house. You consider the hypothesis that there are gremlins up there bowling. The likelihood of this hypothesis is very high, since if there are gremlins bowling in the attic, there probably will be noise. But surely you don’t think that the noise makes it very probable that there are gremlins up there bowling. In this example, Pr(O|H) is high and Pr(H|O) is low. The gremlin hypothesis has a high likelihood (in the technical sense) but a low probability. In terms of the example above, ML would favor the gremlin hypothesis. In this particular comical example, that is clearly a bad choice. But in a lot of other more realistic cases, the ML estimate might be a very reasonable one. Reference Sober, E. (2008). Evidence and Evolution: the Logic Behind the Science. Cambridge University Press.
1,327
Maximum Likelihood Estimation (MLE) in layman terms
It is possible to say something without using (much) math, but for actual statistical applications of maximum likelihood you need mathematics. Maximum likelihood estimation is related to what philosophers call inference to the best explanation, or abduction. We use this all the time! Note, I do not say that maximum likelihood is abduction; that term is much wider, and some cases of Bayesian estimation (with an empirical prior) can probably also be seen as abduction. Some examples (the first taken from http://plato.stanford.edu/entries/abduction/#Aca). See also https://en.wikipedia.org/wiki/Abductive_reasoning (in computer science "abduction" is also used in the context of non-probabilistic models). "You happen to know that Tim and Harry have recently had a terrible row that ended their friendship. Now someone tells you that she just saw Tim and Harry jogging together. The best explanation for this that you can think of is that they made up. You conclude that they are friends again." This is because that conclusion makes the observation you are trying to explain more probable than the alternative, that they are still not talking. You work in a kindergarten, and one day a child starts to walk in a strange way, saying he broke his legs. You examine him and find nothing wrong. Then you can reasonably infer that one of his parents broke their legs, since children often act this out in such situations, so that is an "inference to the best explanation" and an instance of (informal) maximum likelihood. (And, of course, that explanation might be wrong; it is only probable, not certain. Abduction/maximum likelihood cannot give certain conclusions.) Abduction is about finding a pattern in data, and then searching for possible theories that could make those patterns probable. Choosing the possible explanation that makes the observed pattern maximally probable is just maximum likelihood! The prime example of abduction in science is evolution. There is no single observation that implies evolution, but evolution makes observed patterns more probable than other explanations. Another typical example is medical diagnosis: which possible medical condition makes the observed pattern of symptoms most probable? Again, this is also maximum likelihood! (Or, in this case, maybe Bayesian estimation is a better fit; we must take into account the prior probability of the various possible explanations.) But that is a technicality: in this case we can have empirical priors which can be seen as a natural part of the statistical model, and what we call the model and what we call the prior is to some extent an arbitrary(*) statistical convention. To get back to the original question about a layman-terms explanation of MLE, here is one simple example: When my daughters were 6 and 7 years old, I asked them this. We made two urns (two shoe-boxes); in one we put 2 black balls and 8 red, in the other the numbers were switched. Then we mixed the urns, and we drew one urn at random. Then we took one ball at random from that urn. It was red. Then I asked: From which urn do you think that red ball was drawn? After about one second's thinking, they answered (in chorus): From the one with 8 red balls! Then I asked: Why do you think so? And again, after about one second (in chorus again): "Because then it is easier to draw a red ball!". That is, easier = more probable. That was maximum likelihood (it is an easy exercise to write up the probability model), and it is "inference to the best explanation", that is, abduction.
(*) Why do I say "arbitrary"? To continue the medical diagnosis example, say the patient is a man with a difficult-to-diagnose condition the physician has not seen before. Then, say, in the conversation with the patient it comes up that he visited someplace in tropical Africa a short time ago. That is a new piece of data, but its effect in the typical models (used in this kind of situation, be it formal or informal) will be to change the prior probabilities of the difficult possible explanations, as tropical diseases like malaria will now get a higher prior probability. So the new data enters the analysis in the prior.
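The "easy exercise" for the urn example can be written as a one-line likelihood comparison in R:

dbinom(1, 1, 8/10)   # P(red | urn with 8 red, 2 black) = 0.8
dbinom(1, 1, 2/10)   # P(red | urn with 2 red, 8 black) = 0.2
# The first urn gives the observation the higher probability, so it is the maximum likelihood explanation.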
1,328
Maximum Likelihood Estimation (MLE) in layman terms
The MLE is the value of the parameter of interest that maximizes the probability of observing the data that you observed. In other words, it is the value of the parameter that makes the observed data most likely to have been observed.
1,329
Maximum Likelihood Estimation (MLE) in layman terms
If your data come from a probability distribution with an unknown parameter $\theta$, the maximum likelihood estimate of $\theta$ is that which makes the data you actually observed most probable. In the case where your data are independent samples from that probability distribution, the likelihood (for a given value of $\theta$) is calculated by multiplying together the probabilities of all observations (for that given value of $\theta$) - it's just the joint probability of the whole sample. And the value of $\theta$ for which it's a maximum is the maximum likelihood estimate. (If the data are continuous read 'probability density' for 'probability'. So if they're measured in inches the density would be measured in probability per inch.)
1,330
Maximum Likelihood Estimation (MLE) in layman terms
Let's play a game: I am in a dark room, no one can see what I do, but you know that either (a) I throw a die and count the number of '1's as 'successes' or (b) I toss a coin and count the number of heads as 'successes'. As I said, you cannot see which of the two I did, but I give you just one single piece of information: I tell you that I have either thrown the die 100 times or tossed the coin 100 times, and that I had 17 successes. The question is to guess whether I threw the die or tossed the coin. You will probably answer that I threw the die. If you do, then you have probably 'made a guess by maximizing the likelihood', because if I observe 17 successes out of 100 experiments, it is more likely that I threw the die than that I tossed the coin. So what you have done is take that value of the 'probability of success' (1/6 for the die and 1/2 for the coin) that makes it most likely to observe 17 successes in 100. 'More likely' meaning that the chance of getting 17 times a '1' in 100 throws of the die is higher than the chance of getting 17 heads in 100 coin tosses.
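This comparison can be checked directly in R with the binomial probability of 17 successes in 100 trials under each hypothesis:

dbinom(17, 100, 1/6)   # about 0.1   -> likelihood under the die
dbinom(17, 100, 1/2)   # about 5e-12 -> likelihood under the coin, vastly smaller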
1,331
Maximum Likelihood Estimation (MLE) in layman terms
Say you have some data $X$ that comes from a Normal distribution with unknown mean $\mu$. You want to find the value of $\mu$, but you have no idea how to obtain it. One thing you could do is to try several values of $\mu$ and check which of them is the best one. To do this you need, however, some method for checking which of the values is "better" than others. The likelihood function, $L$, lets you check which values of $\mu$ are most likely given the data you have. For this purpose it uses the probabilities of your data points estimated under a probability function $f$ with a given value of $\mu$: $$ L(\mu|X) = \prod^N_{i=1} f(x_i, \mu) $$ or the log-likelihood: $$ \ln L(\mu|X) = \sum^N_{i=1} \ln f(x_i, \mu) $$ You use this function to check which value of $\mu$ maximizes the likelihood, i.e. which is the most likely given the data you have. As you can see, this can be achieved with the product of probabilities or with the sum of log-probabilities (log-likelihood). In our example $f$ would be the probability density function of the normal distribution, but the approach can be extended to much more complicated problems. In practice you do not plug guessed values of $\mu$ into the likelihood function but rather use different statistical approaches that are known to provide maximum likelihood estimates of the parameters of interest. There are lots of such approaches that are problem-specific - some are simple, some complicated (check Wikipedia for more information). Below I provide a simple example of how ML works in practice.

Example

First let's generate some fake data:

set.seed(123)
x <- rnorm(1000, 1.78)

and define a likelihood function that we want to maximize (the likelihood of a Normal distribution with different values of $\mu$ given the data $X$):

llik <- function(mu) sum(log(dnorm(x, mu)))

next, what we do is check different values of $\mu$ using our function:

ll <- vapply(seq(-6, 6, by=0.001), llik, numeric(1))
plot(seq(-6, 6, by=0.001), ll, type="l", ylab="Log-Likelihood", xlab=expression(mu))
abline(v=mean(x), col="red")

The same could be achieved faster with an optimization algorithm that looks for the maximum value of a function in a cleverer way than brute force. There are multiple such tools; e.g. one of the most basic in R is optimize:

optimize(llik, interval=c(-6, 6), maximum=TRUE)$maximum

The black line shows the log-likelihood function evaluated at different values of $\mu$. The red line on the plot marks the arithmetic average of the data (which is the maximum likelihood estimator of $\mu$ and, with this seed, is very close to the true value $1.78$) - the same point is found as the highest point of the log-likelihood function by the brute-force search and by the optimize algorithm. This example shows how you can use multiple approaches to find the value that maximizes the likelihood function to find the "best" value of your parameter.
1,332
Maximum Likelihood Estimation (MLE) in layman terms
One task in statistics is to fit a distribution to a set of data points, to generalize what's intrinsic about the data. When fitting a distribution, one (a) chooses an appropriate distribution and (b) sets its movable parts (parameters), for example the mean, variance, etc. When doing all this one also needs an objective, aka an objective function/error function. This is required to define the meaning of "best", or "best in what sense". MLE is the procedure in which this objective is to maximize the probability mass/density of the observed data under the chosen distribution. Other techniques differ in how they choose this objective function. For example, ordinary least squares (OLS) takes the minimum sum of squared errors. For the Gaussian case OLS and MLE are equivalent, because the Gaussian density contains that (x-m)^2 term, which makes the objectives of OLS and MLE coincide: you can see that it is a squared-difference term just like in OLS. Of course one can choose any objective function. However, the intuitive meaning will not always be clear. MLE assumes that we know the distribution to start with. In other techniques, this assumption is relaxed; especially in those cases it is more common to have a custom objective function.
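A small numerical sketch of the Gaussian OLS/MLE equivalence (made-up data; with the error standard deviation held fixed, the coefficients minimizing the squared errors also maximize the Gaussian log-likelihood):

set.seed(1)
x <- rnorm(50); y <- 2 + 3*x + rnorm(50)
neg_loglik <- function(b) -sum(dnorm(y, mean = b[1] + b[2]*x, sd = 1, log = TRUE))
sse        <- function(b)  sum((y - b[1] - b[2]*x)^2)
optim(c(0, 0), neg_loglik)$par   # ML estimates of intercept and slope
optim(c(0, 0), sse)$par          # least-squares estimates: essentially the same
coef(lm(y ~ x))                  # lm agrees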
1,333
Maximum Likelihood Estimation (MLE) in layman terms
As you wanted, I will use very naive terms. Suppose you have collected some data $\{y_1, y_2,\ldots,y_n\}$ and have a reasonable assumption that they follow some probability distribution. But you don't usually know the parameter(s) of that distribution from such samples. Parameters are the "population characteristics" of the probability distribution that you have assumed for the data. Say, your plotting or prior knowledge suggests you consider the data as being Normally distributed. The mean and variance are the two parameters that represent a Normal distribution. Let $\theta=\{\mu,\sigma^2\}$ be the set of parameters. So the joint probability of observing the data $\{y_1, y_2,\ldots,y_n\}$ given the set of parameters $\theta=\{\mu,\sigma^2\}$ is given by $p(y_1, y_2,\ldots,y_n|\theta)$. The likelihood is "the probability of observing the data", so it is equivalent to the joint pdf (for a discrete distribution, the joint pmf), but it is expressed as a function of the parameters, $L(\theta|y_1, y_2,\ldots,y_n)$. So for this particular data set you may find the value of $\theta$ for which $L(\theta)$ is maximum. In words, you find the $\theta$ for which the probability of observing this particular set of data is maximum. Thus comes the term "Maximum Likelihood". Now you find the set of $\{\mu,\sigma^2\}$ for which $L$ is maximized. That set of $\{\mu,\sigma^2\}$ for which $L(\theta)$ is maximum is called the Maximum Likelihood Estimate.
1,334
Maximum Likelihood Estimation (MLE) in layman terms
Suppose you have a coin. Tossing it can give either heads or tails. But you don't know if it's a fair coin. So you toss it 1000 times. It comes up as heads 1000 times, and never as tails. Now, it's possible that this is actually a fair coin with a 50/50 chance for heads/tails, but it doesn't seem likely, does it? The chance of tossing a fair coin 1000 times and tails never coming up is $0.5^{1000}$, very small indeed. The MLE tries to help you find the best explanation in a situation like this -- when you have some result, and you want to figure out what the value of the parameter is that is most likely to give that result. Here, we have 1000 heads out of 1000 tosses -- so we would use an MLE to find out what probability of getting a head best explains getting 1000 heads out of 1000 tosses. It's the Maximum Likelihood Estimator. It estimates the parameter (here, the coin's probability of coming up heads) that is most likely to have produced the result you are currently looking at. To finish up our example, taking the MLE would return that the probability of getting a head that best explains getting 1000 heads out of 1000 tosses is $1$.
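A quick numerical check of that last claim in R (an illustrative sketch, not part of the original answer):
p  <- seq(0, 1, by = 0.001)                              # candidate head probabilities
ll <- dbinom(1000, size = 1000, prob = p, log = TRUE)    # log-likelihood of 1000 heads in 1000 tosses
p[which.max(ll)]                                         # 1, the maximum likelihood estimate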
Maximum Likelihood Estimation (MLE) in layman terms
Suppose you have a coin. Tossing it can give either heads or tails. But you don't know if it's a fair coin. So you toss it 1000 times. It comes up as heads 1000 times, and never as tails. Now, it's p
Maximum Likelihood Estimation (MLE) in layman terms Suppose you have a coin. Tossing it can give either heads or tails. But you don't know if it's a fair coin. So you toss it 1000 times. It comes up as heads 1000 times, and never as tails. Now, it's possible that this is actually a fair coin with a 50/50 chance for heads/tails, but it doesn't seem likely, does it? The chance of tossing a fair coin 1000 times and tails never coming up is $0.5^{1000}$, very small indeed. The MLE tries to help you find the best explanation in a situation like this -- when you have some result, and you want to figure out what the value of the parameter is that is most likely to give that result. Here, we have 1000 heads out of 1000 tosses -- so we would use an MLE to find out what probability of getting a head best explains getting 1000 heads out of 1000 tosses. It's the Maximum Likelihood Estimator. It estimates the parameter (here, the coin's probability of coming up heads) that is most likely to have produced the result you are currently looking at. To finish up our example, taking the MLE would return that the probability of getting a head that best explains getting 1000 heads out of 1000 tosses is $1$.
Maximum Likelihood Estimation (MLE) in layman terms Suppose you have a coin. Tossing it can give either heads or tails. But you don't know if it's a fair coin. So you toss it 1000 times. It comes up as heads 1000 times, and never as tails. Now, it's p
1,335
Maximum Likelihood Estimation (MLE) in layman terms
Just to show very simple graphics and R code for MLEs in binomial and Poisson models. Binomial. Suppose you know there are $n = 50$ trials of which $x=19$ are Successes. Then for what value of $p$ is the binomial PDF maximized? This PDF, considered as a function of $p$ (and possibly without its norming constant), is called a likelihood function. Then the MLE of $p$ is $\hat p = x/n = 19/50 = 0.38.$
p = seq(0, 1, by=.01)
like = dbinom(19, 50, p)
mle = mean(p[like==max(like)]); mle  # 'mean' in case two values of 'like' at max
[1] 0.38
hdr = "MLE of Binomial Success Probability"
plot(p, like, type="l", lwd=2, main=hdr)
abline(h=0, col="green2")
abline(v = mle, lwd=2, lty="dotted", col="red")
Poisson. Suppose the model is Poisson with mean $\lambda$ in a domain of time, area, or volume. We observe $x = 13$ Poisson events in the domain. Then the MLE of $\lambda$ is $\hat \lambda = 13.$
lam = seq(0, 50, by = .01)
like = dpois(13, lam)
mle = mean(lam[like==max(like)]); mle
[1] 13
hdr = "MLE of Poisson Mean"
plot(lam, like, type="l", lwd=2, main=hdr)
abline(h=0, col="green2")
abline(v = mle, lwd=2, lty="dotted", col="red")
Maximum Likelihood Estimation (MLE) in layman terms
Just to show very simple graphics and R code for MLEs in binomial and Poisson models. Binomial. Suppose you know there are $n = 50$ trials of which $x=19$ are Successes. Then for what value of $p$ is
Maximum Likelihood Estimation (MLE) in layman terms Just to show very simple graphics and R code for MLEs in binomial and Poisson models. Binomial. Suppose you know there are $n = 50$ trials of which $x=19$ are Successes. Then for what value of $p$ is the binomial PDF maximized? This PDF, considered as a function of $p$ (and possibly without its norming constant), is called a likelihood function. Then the MLE of $p$ is $\hat p = x/n = 19/50 = 0.38.$
p = seq(0, 1, by=.01)
like = dbinom(19, 50, p)
mle = mean(p[like==max(like)]); mle  # 'mean' in case two values of 'like' at max
[1] 0.38
hdr = "MLE of Binomial Success Probability"
plot(p, like, type="l", lwd=2, main=hdr)
abline(h=0, col="green2")
abline(v = mle, lwd=2, lty="dotted", col="red")
Poisson. Suppose the model is Poisson with mean $\lambda$ in a domain of time, area, or volume. We observe $x = 13$ Poisson events in the domain. Then the MLE of $\lambda$ is $\hat \lambda = 13.$
lam = seq(0, 50, by = .01)
like = dpois(13, lam)
mle = mean(lam[like==max(like)]); mle
[1] 13
hdr = "MLE of Poisson Mean"
plot(lam, like, type="l", lwd=2, main=hdr)
abline(h=0, col="green2")
abline(v = mle, lwd=2, lty="dotted", col="red")
Maximum Likelihood Estimation (MLE) in layman terms Just to show very simple graphics and R code for MLEs in binomial and Poisson models. Binomial. Suppose you know there are $n = 50$ trials of which $x=19$ are Successes. Then for what value of $p$ is
1,336
Maximum Likelihood Estimation (MLE) in layman terms
You have a model which you assume the data comes from. In a way, you want to reconcile the model and reality, so you want to minimize the discrepancy between the two. How would you do that? You have a vector of parameters $\Theta$ that you can tune in order to achieve this. Minimizing the discrepancy between reality and the model amounts to choosing the parameters that make it most likely that the observed data came from the model. Hence the idea of maximum likelihood. Hope it makes sense!
Maximum Likelihood Estimation (MLE) in layman terms
You have a model, which you impose that the data comes from. In a way, you wanna reconcile between the model and reality. To do so, you wanna minimize the discrepancy between the two. How would you do
Maximum Likelihood Estimation (MLE) in layman terms You have a model which you assume the data comes from. In a way, you want to reconcile the model and reality, so you want to minimize the discrepancy between the two. How would you do that? You have a vector of parameters $\Theta$ that you can tune in order to achieve this. Minimizing the discrepancy between reality and the model amounts to choosing the parameters that make it most likely that the observed data came from the model. Hence the idea of maximum likelihood. Hope it makes sense!
Maximum Likelihood Estimation (MLE) in layman terms You have a model, which you impose that the data comes from. In a way, you wanna reconcile between the model and reality. To do so, you wanna minimize the discrepancy between the two. How would you do
1,337
Maximum Likelihood Estimation (MLE) in layman terms
The way I understand MLE is this: You only get to see what nature wants you to see. The things you see are facts. These facts have an underlying process that generated them. That process is hidden and unknown, and needs to be discovered. Then the question is: Given the observed fact, what is the likelihood that process P1 generated it? What is the likelihood that process P2 generated it? And so on... One of these likelihoods is going to be the maximum of all. MLE is a function that extracts that maximum likelihood. Think of a coin toss; the coin is biased. No one knows the degree of bias. It could range from 0 (all tails) to 1 (all heads). A fair coin will be 0.5 (heads/tails equally likely). When you do 10 tosses and you observe 7 heads, then the MLE is the degree of bias that is most likely to produce the observed fact of 7 heads in 10 tosses.
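A small R sketch of the closing example (my own illustration, not part of the original answer): the degree of bias that makes 7 heads in 10 tosses most likely.
bias <- seq(0, 1, by = 0.001)               # candidate degrees of bias
lik  <- dbinom(7, size = 10, prob = bias)   # likelihood of the observed 7 heads
bias[which.max(lik)]                        # 0.7, the maximum likelihood estimate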
Maximum Likelihood Estimation (MLE) in layman terms
The way I understand MLE is this: You only get to see what the nature wants you to see. Things you see are facts. These facts have an underlying process that generated it. These process are hidden, un
Maximum Likelihood Estimation (MLE) in layman terms The way I understand MLE is this: You only get to see what nature wants you to see. The things you see are facts. These facts have an underlying process that generated them. That process is hidden and unknown, and needs to be discovered. Then the question is: Given the observed fact, what is the likelihood that process P1 generated it? What is the likelihood that process P2 generated it? And so on... One of these likelihoods is going to be the maximum of all. MLE is a function that extracts that maximum likelihood. Think of a coin toss; the coin is biased. No one knows the degree of bias. It could range from 0 (all tails) to 1 (all heads). A fair coin will be 0.5 (heads/tails equally likely). When you do 10 tosses and you observe 7 heads, then the MLE is the degree of bias that is most likely to produce the observed fact of 7 heads in 10 tosses.
Maximum Likelihood Estimation (MLE) in layman terms The way I understand MLE is this: You only get to see what the nature wants you to see. Things you see are facts. These facts have an underlying process that generated it. These process are hidden, un
1,338
Using k-fold cross-validation for time-series model selection
Time-series (or other intrinsically ordered data) can be problematic for cross-validation. If some pattern emerges in year 3 and stays for years 4-6, then your model can pick up on it, even though it wasn't part of years 1 & 2. An approach that's sometimes more principled for time series is forward chaining, where your procedure would be something like this: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3] fold 3 : training [1 2 3], test [4] fold 4 : training [1 2 3 4], test [5] fold 5 : training [1 2 3 4 5], test [6] That more accurately models the situation you'll see at prediction time, where you'll model on past data and predict on forward-looking data. It also will give you a sense of the dependence of your modeling on data size.
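A schematic R sketch of this forward-chaining procedure (my own illustration with made-up data and a trivial mean forecast standing in for a real model):
y <- rnorm(60)                                       # pretend: 60 time-ordered observations
blocks <- split(seq_along(y), rep(1:6, each = 10))   # 6 consecutive blocks, as in the folds above
errors <- sapply(2:6, function(k) {
  train <- unlist(blocks[1:(k - 1)])                 # everything before the test block
  test  <- blocks[[k]]
  fit   <- mean(y[train])                            # stand-in for any forecasting model
  mean((y[test] - fit)^2)                            # out-of-sample error for this fold
})
errors                                               # one error per fold; average to compare models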
Using k-fold cross-validation for time-series model selection
Time-series (or other intrinsically ordered data) can be problematic for cross-validation. If some pattern emerges in year 3 and stays for years 4-6, then your model can pick up on it, even though it
Using k-fold cross-validation for time-series model selection Time-series (or other intrinsically ordered data) can be problematic for cross-validation. If some pattern emerges in year 3 and stays for years 4-6, then your model can pick up on it, even though it wasn't part of years 1 & 2. An approach that's sometimes more principled for time series is forward chaining, where your procedure would be something like this: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3] fold 3 : training [1 2 3], test [4] fold 4 : training [1 2 3 4], test [5] fold 5 : training [1 2 3 4 5], test [6] That more accurately models the situation you'll see at prediction time, where you'll model on past data and predict on forward-looking data. It also will give you a sense of the dependence of your modeling on data size.
Using k-fold cross-validation for time-series model selection Time-series (or other intrinsically ordered data) can be problematic for cross-validation. If some pattern emerges in year 3 and stays for years 4-6, then your model can pick up on it, even though it
1,339
Using k-fold cross-validation for time-series model selection
The method I use for cross-validating my time-series model is cross-validation on a rolling basis. Start with a small subset of the data for training, forecast the later data points, and then check the accuracy of the forecasts. The same forecasted data points are then included as part of the next training dataset, and subsequent data points are forecasted. Intuitively, the training window keeps expanding forward in time while the forecast origin rolls ahead of it. An equivalent R code would be:
library(forecast)  # for ets(), auto.arima(), forecast(), accuracy()
# dt is your data frame of monthly observations; ts_dt is the full observed series as a ts object
i <- 36 #### Starting with 3 years of monthly training data
pred_ets <- c()
pred_arima <- c()
while(i <= nrow(dt)){
  ts <- ts(dt[1:i, "Amount"], start=c(2001, 12), frequency=12)
  pred_ets <- rbind(pred_ets, data.frame(forecast(ets(ts), 3)$mean[1:3]))
  pred_arima <- rbind(pred_arima, data.frame(forecast(auto.arima(ts), 3)$mean[1:3]))
  i = i + 3
}
names(pred_arima) <- "arima"
names(pred_ets) <- "ets"
pred_ets <- ts(pred_ets$ets, start=c(2005, 01), frequency = 12)
pred_arima <- ts(pred_arima$arima, start=c(2005, 01), frequency = 12)
accuracy(pred_ets, ts_dt)
accuracy(pred_arima, ts_dt)
Using k-fold cross-validation for time-series model selection
The method I use for cross-validating my time-series model is cross-validation on a rolling basis. Start with a small subset of data for training purpose, forecast for the later data points and then c
Using k-fold cross-validation for time-series model selection The method I use for cross-validating my time-series model is cross-validation on a rolling basis. Start with a small subset of data for training purpose, forecast for the later data points and then checking the accuracy for the forecasted data points. The same forecasted data points are then included as part of the next training dataset and subsequent data points are forecasted. To make things intuitive, here is an image for same: An equivalent R code would be: i <- 36 #### Starting with 3 years of monthly training data pred_ets <- c() pred_arima <- c() while(i <= nrow(dt)){ ts <- ts(dt[1:i, "Amount"], start=c(2001, 12), frequency=12) pred_ets <- rbind(pred_ets, data.frame(forecast(ets(ts), 3)$mean[1:3])) pred_arima <- rbind(pred_arima, data.frame(forecast(auto.arima(ts), 3)$mean[1:3])) i = i + 3 } names(pred_arima) <- "arima" names(pred_ets) <- "ets" pred_ets <- ts(pred_ets$ets, start=c(2005, 01), frequency = 12) pred_arima <- ts(pred_arima$arima, start=c(2005, 01), frequency =12) accuracy(pred_ets, ts_dt) accuracy(pred_arima, ts_dt)
Using k-fold cross-validation for time-series model selection The method I use for cross-validating my time-series model is cross-validation on a rolling basis. Start with a small subset of data for training purpose, forecast for the later data points and then c
1,340
Using k-fold cross-validation for time-series model selection
The "canonical" way to do time-series cross-validation (at least as described by @Rob Hyndman) is to "roll" through the dataset. i.e.: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3] fold 3 : training [1 2 3], test [4] fold 4 : training [1 2 3 4], test [5] fold 5 : training [1 2 3 4 5], test [6] Basically, your training set should not contain information that occurs after the test set.
Using k-fold cross-validation for time-series model selection
The "canonical" way to do time-series cross-validation (at least as described by @Rob Hyndman) is to "roll" through the dataset. i.e.: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3
Using k-fold cross-validation for time-series model selection The "canonical" way to do time-series cross-validation (at least as described by @Rob Hyndman) is to "roll" through the dataset. i.e.: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3] fold 3 : training [1 2 3], test [4] fold 4 : training [1 2 3 4], test [5] fold 5 : training [1 2 3 4 5], test [6] Basically, your training set should not contain information that occurs after the test set.
Using k-fold cross-validation for time-series model selection The "canonical" way to do time-series cross-validation (at least as described by @Rob Hyndman) is to "roll" through the dataset. i.e.: fold 1 : training [1], test [2] fold 2 : training [1 2], test [3
1,341
Using k-fold cross-validation for time-series model selection
There is nothing wrong with using blocks of "future" data for time series cross validation in most situations. By most situations I refer to models for stationary data, which are the models that we typically use. E.g. when you fit an $\mathit{ARIMA}(p,d,q)$, with $d>0$ to a series, you take $d$ differences of the series and fit a model for stationary data to the residuals. For cross validation to work as a model selection tool, you need approximate independence between the training and the test data. The problem with time series data is that adjacent data points are often highly dependent, so standard cross validation will fail. The remedy for this is to leave a gap between the test sample and the training samples, on both sides of the test sample. The reason why you also need to leave out a gap before the test sample is that dependence is symmetric when you move forward or backward in time (think of correlation). This approach is called $hv$ cross validation (leave $v$ out, delete $h$ observations on either side of the test sample) and is described in this paper. In your example, this would look like this: fold 1 : training [1 2 3 4 5h], test [6] fold 2 : training [1 2 3 4h h6], test [5] fold 3 : training [1 2 3h h5 6], test [4] fold 4 : training [1 2h h4 5 6], test [3] fold 5 : training [1h h3 4 5 6], test [2] fold 6 : training [h2 3 4 5 6], test [ 1] Where the h indicates that h observations of the training sample are deleted on that side.
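A small R sketch of how such train/test splits could be generated (my own illustration; hv_folds is a hypothetical helper, not code from the cited paper):
hv_folds <- function(n, v = 1, h = 2) {
  lapply(seq(1, n, by = v), function(start) {
    test <- start:min(start + v - 1, n)                   # the test block of v observations
    gap  <- max(1, min(test) - h):min(n, max(test) + h)   # test block plus h points on each side
    list(test = test, train = setdiff(seq_len(n), gap))   # train on everything outside the gap
  })
}
str(hv_folds(20, v = 2, h = 3)[1:2])   # inspect the first two folds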
Using k-fold cross-validation for time-series model selection
There is nothing wrong with using blocks of "future" data for time series cross validation in most situations. By most situations I refer to models for stationary data, which are the models that we ty
Using k-fold cross-validation for time-series model selection There is nothing wrong with using blocks of "future" data for time series cross validation in most situations. By most situations I refer to models for stationary data, which are the models that we typically use. E.g. when you fit an $\mathit{ARIMA}(p,d,q)$, with $d>0$ to a series, you take $d$ differences of the series and fit a model for stationary data to the residuals. For cross validation to work as a model selection tool, you need approximate independence between the training and the test data. The problem with time series data is that adjacent data points are often highly dependent, so standard cross validation will fail. The remedy for this is to leave a gap between the test sample and the training samples, on both sides of the test sample. The reason why you also need to leave out a gap before the test sample is that dependence is symmetric when you move forward or backward in time (think of correlation). This approach is called $hv$ cross validation (leave $v$ out, delete $h$ observations on either side of the test sample) and is described in this paper. In your example, this would look like this: fold 1 : training [1 2 3 4 5h], test [6] fold 2 : training [1 2 3 4h h6], test [5] fold 3 : training [1 2 3h h5 6], test [4] fold 4 : training [1 2h h4 5 6], test [3] fold 5 : training [1h h3 4 5 6], test [2] fold 6 : training [h2 3 4 5 6], test [ 1] Where the h indicates that h observations of the training sample are deleted on that side.
Using k-fold cross-validation for time-series model selection There is nothing wrong with using blocks of "future" data for time series cross validation in most situations. By most situations I refer to models for stationary data, which are the models that we ty
1,342
Using k-fold cross-validation for time-series model selection
As commented by @thebigdog, "On the use of cross-validation for time series predictor evaluation" by Bergmeir et al. discusses cross-validation in the context of stationary time-series and determines Forward Chaining (proposed by other answerers) to be unhelpful. Note that Forward Chaining is called Last-Block Evaluation in this paper: Using standard 5-fold cross-validation, no practical effect of the dependencies within the data could be found, regarding whether the final error is under- or overestimated. On the contrary, last block evaluation tends to yield less robust error measures than cross-validation and blocked cross-validation. "Evaluating time series forecasting models: An empirical study on performance estimation methods" by Cerqueira et al. agrees with this assessment. However, for non-stationary time-series, they recommend instead using a variation on Hold-Out, called Rep-Holdout. In Rep-Holdout, a point a is chosen in the time-series Y to mark the beginning of the testing data, and that point a is constrained to lie within a window of admissible values (the paper illustrates this with a diagram). The aforementioned paper is long and exhaustively tests almost all the other methods mentioned in the answers to this question, with publicly available code. This includes @Matthias Schmidtblaicher's suggestion of including gaps before and after the testing data. Also, I've only summarized the paper; the actual conclusion of the paper involves a decision tree for evaluating time-series models!
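A rough R sketch of the Rep-Holdout idea as described above (my own illustration; the window bounds, number of repetitions, and test fraction are arbitrary placeholders, not values from the paper):
rep_holdout <- function(n, reps = 10, window = c(0.6, 0.8), test_frac = 0.1) {
  lapply(seq_len(reps), function(r) {
    a <- floor(runif(1, window[1], window[2]) * n)           # random cut point inside the window
    list(train = 1:a,                                        # everything before a
         test  = (a + 1):min(n, a + floor(test_frac * n)))   # a block right after a
  })
}
str(rep_holdout(500, reps = 3))   # three random train/test splits, for inspection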
Using k-fold cross-validation for time-series model selection
As commented by @thebigdog, "On the use of cross-validation for time series predictor evaluation" by Bergmeir et al. discusses cross-validation in the context of stationary time-series and determine F
Using k-fold cross-validation for time-series model selection As commented by @thebigdog, "On the use of cross-validation for time series predictor evaluation" by Bergmeir et al. discusses cross-validation in the context of stationary time-series and determine Forward Chaining (proposed by other answerers) to be unhelpful. Note, Forward Chaining is called Last-Block Evaluation in this paper: Using standard 5-fold cross-validation, no practical effect of the dependencies within the data could be found, regarding whether the final error is under- or overestimated. On the contrary, last block evaluation tends to yield less robust error measures than cross-validation and blocked cross-validation. "Evaluating time series forecasting models: An empirical study on performance estimation methods" by Cerqueira et al. agrees with this assessment. However, for non-stationary time-series, they recommend instead using a variation on Hold-Out, called Rep-Holdout. In Rep-Holdout, a point a is chosen in the time-series Y to mark the beginning of the testing data. The point a is determined to be within a window. This is illustrated in the figure below: This aforementioned paper is long and exhaustively tests almost all the other methods mentioned in the answers to this question with publicly available code. This includes @Matthias Schmidtblaicher claim of including gaps before and after the testing data. Also, I've only summarized the paper. The actual conclusion of the paper involves a decision tree for evaluating time-series models!
Using k-fold cross-validation for time-series model selection As commented by @thebigdog, "On the use of cross-validation for time series predictor evaluation" by Bergmeir et al. discusses cross-validation in the context of stationary time-series and determine F
1,343
How would you explain the difference between correlation and covariance?
The problem with covariances is that they are hard to compare: when you calculate the covariance of a set of heights and weights, as expressed in (respectively) meters and kilograms, you will get a different covariance from when you do it in other units (which already gives a problem for people doing the same thing with or without the metric system!), but also, it will be hard to tell if (e.g.) height and weight 'covary more' than, say the length of your toes and fingers, simply because the 'scale' the covariance is calculated on is different. The solution to this is to 'normalize' the covariance: you divide the covariance by something that represents the diversity and scale in both the covariates, and end up with a value that is assured to be between -1 and 1: the correlation. Whatever unit your original variables were in, you will always get the same result, and this will also ensure that you can, to a certain degree, compare whether two variables 'correlate' more than two others, simply by comparing their correlation. Note: the above assumes that the reader already understands the concept of covariance.
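A short R sketch of this point (my own illustration with simulated heights and weights): changing the units changes the covariance but leaves the correlation untouched.
set.seed(42)
height_m  <- rnorm(100, 1.7, 0.1)                            # heights in meters
weight_kg <- 60 + 40 * (height_m - 1.7) + rnorm(100, 0, 5)   # weights in kilograms
cov(height_m, weight_kg)          # some value in meter-kilograms
cov(height_m * 100, weight_kg)    # 100 times larger once height is measured in centimeters
cor(height_m, weight_kg)          # the same either way, and guaranteed to lie in [-1, 1]
cor(height_m * 100, weight_kg)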
How would you explain the difference between correlation and covariance?
The problem with covariances is that they are hard to compare: when you calculate the covariance of a set of heights and weights, as expressed in (respectively) meters and kilograms, you will get a di
How would you explain the difference between correlation and covariance? The problem with covariances is that they are hard to compare: when you calculate the covariance of a set of heights and weights, as expressed in (respectively) meters and kilograms, you will get a different covariance from when you do it in other units (which already gives a problem for people doing the same thing with or without the metric system!), but also, it will be hard to tell if (e.g.) height and weight 'covary more' than, say the length of your toes and fingers, simply because the 'scale' the covariance is calculated on is different. The solution to this is to 'normalize' the covariance: you divide the covariance by something that represents the diversity and scale in both the covariates, and end up with a value that is assured to be between -1 and 1: the correlation. Whatever unit your original variables were in, you will always get the same result, and this will also ensure that you can, to a certain degree, compare whether two variables 'correlate' more than two others, simply by comparing their correlation. Note: the above assumes that the reader already understands the concept of covariance.
How would you explain the difference between correlation and covariance? The problem with covariances is that they are hard to compare: when you calculate the covariance of a set of heights and weights, as expressed in (respectively) meters and kilograms, you will get a di
1,344
How would you explain the difference between correlation and covariance?
The requirements of these types of questions strike me as a bit bizarre. Here is a mathematical concept/formula, yet I want to talk about it in some context completely devoid of mathematical symbols. I also think it should be stated that the actual algebra necessary to understand the formulas should, I would think, be taught to most individuals before higher education (no understanding of matrix algebra is needed, just simple algebra will suffice). So, at first, instead of completely ignoring the formula and speaking of it in some magical and heuristic types of analogies, let's just look at the formula and try to explain the individual components in small steps. The difference between covariance and correlation, when looking at the formulas, should become clear. Whereas speaking in terms of analogies and heuristics, I suspect, would obfuscate two relatively simple concepts and their differences in many situations. So let's start out with a formula for the sample covariance (these I have just taken and adapted from Wikipedia);

$\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$

To get everyone up to speed, let's explicitly define all of the elements and operations in the formula.

$x_i$ and $y_i$ are each measurements of two separate attributes of the same observation
$\bar{x}$ and $\bar{y}$ are the means (or averages) of each attribute
For $\frac{1}{n-1}$, let's just say this means we divide the final result by ${n-1}$.
$\sum_{i=1}^{n}$ may be a foreign symbol to some, so it would likely be useful to explain this operation. It is simply the sum of all $i$ separate observations, and $n$ represents the total number of observations.

At this point, I might introduce a simple example, to put a face on the elements and operations so to speak. So for example, let's just make up a table, where each row corresponds to an observation (and $x$ and $y$ are labeled appropriately). One would likely make these examples more specific (e.g. say $x$ represents age and $y$ represents weight), but for our discussion here it should not matter.

x y
---
2 5
4 8
9 3
5 6
0 8

At this point if you feel the sum operation in the formula may not have been fully comprehended, you can introduce it again in a much simpler context. Say, just present that $\sum_{i=1}^{n}(x_i)$ is the same as saying in this example;

x
--
2
4
9
5
+ 0
--
20

Now that mess should be cleared up, and we can work our way into the second part of the formula, $(x_i-\bar{x})(y_i-\bar{y})$. Now, assuming people already know what the means $\bar{x}$ and $\bar{y}$ stand for - and I would say, being hypocritical of my own comments earlier in the post, one can just refer to the mean in terms of simple heuristics (e.g. the middle of the distribution) - one can then just take this process one operation at a time. The statement $(x_i-\bar{x})$ is just examining the deviation/distance between each observation and the mean of all observations for that particular attribute. Hence, when an observation is further from the mean, this operation will be given a higher value. One can then refer back to the example table given, and simply demonstrate the operation on the $x$ vector of observations.

x x_bar (x - x_bar)
2 4 -2
4 4  0
9 4  5
5 4  1
0 4 -4

The operation is the same for the $y$ vector, but just for reinforcement you can present that operation as well.

y y_bar (y - y_bar)
5 6 -1
8 6  2
3 6 -3
6 6  0
8 6  2

Now, the terms $(x_i-\bar{x})$ and $(y_i-\bar{y})$ should not be ambiguous, and we can go on to the next operation, multiplying these results together, $(x_i-\bar{x})\cdot(y_i-\bar{y})$. As gung points out in the comments, this is frequently called the cross product (perhaps a useful example to bring back up if one were introducing basic matrix algebra for statistics). Take note of what happens when multiplying: if two observations are both a large distance above the mean, the resulting value will be even larger and positive (the same is true if both observations are a large distance below the mean, as multiplying two negatives equals a positive). Also note that if one observation is high above the mean and the other is well below the mean, the resulting value will be large (in absolute terms) and negative (as a positive times a negative equals a negative number). Finally note that when a value is very near the mean for either observation, multiplying the two values will result in a small number. Again we can just present this operation in a table.

(x - x_bar) (y - y_bar) (x - x_bar)*(y - y_bar)
-2          -1            2
 0           2            0
 5          -3          -15
 1           0            0
-4           2           -8

Now if there are any statisticians in the room they should be boiling with anticipation at this point. We can see all the separate elements of what a covariance is, and how it is calculated, come into play. Now all we have to do is sum up the final result in the preceding table, divide by $n-1$, and voila, the covariance should no longer be mystical (all with only defining one Greek symbol).

(x - x_bar)*(y - y_bar)
-----------------------
   2
   0
 -15
   0
+ -8
-----
 -21

-21/(5-1) = -5.25

At this point you may want to reinforce where the 5 is coming from, but that should be as simple as referring back to the table and counting the number of observations (let's again leave the difference between sample and population to another time). Now, the covariance in and of itself does not tell us much (it can, but it is needless at this point to go into any interesting examples without resorting to magical, undefined references to the audience). In a good case scenario, you won't really need to sell why we should care what the covariance is; in other circumstances, you may just have to hope your audience is captive and will take your word for it. But, continuing on to develop the difference between what the covariance is and what the correlation is, we can just refer back to the formula for correlation. To prevent Greek symbol phobia, maybe just say $\rho$ is the common symbol used to represent correlation.

$\rho = \frac{Cov(x,y)}{\sqrt{Var(x)Var(y)}}$

Again, to reiterate, the numerator in the preceding formula is simply the covariance as we have just defined it, and the denominator is the square root of the product of the variances of the individual series. If you need to define the variance itself, you could just say that the variance is the same thing as the covariance of a series with itself (i.e. $Cov(x,x) = Var(x)$). And all the same concepts that you introduced with the covariance apply (i.e. if a series has many values far away from its mean, it will have a high variance). Maybe note here that a series cannot have a negative variance as well (which should logically follow from the math previously presented). So the only new components we have introduced are in the denominator, $\sqrt{Var(x)Var(y)}$. So we are dividing the covariance we just calculated by the square root of the product of the variances of each series. 
One could go into the treatment of why dividing by $\sqrt{Var(x)Var(y)}$ will always result in a value between -1 and 1, but I suspect the Cauchy–Schwarz inequality should be left off of the agenda for this discussion. So again, I'm a hypocrite and resort to a "take my word for it", but at this point we can introduce all the reasons why we use the correlation coefficient. One can then relate these math lessons back to the heuristics that have been given in the other statements, such as Peter Flom's response to one of the other questions. While this was criticized for introducing the concept in terms of causal statements, that lesson should be on the agenda at some point as well. I understand in some circumstances this level of treatment would not be appropriate. The senate needs the executive summary. In that case, well, you can refer back to the simple heuristics that people have been using in other examples, but Rome wasn't built in a day. And to the senate who asks for the executive summary, if you have so little time perhaps you should just take my word for it, and dispense with the formalities of analogies and bullet-points.
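For completeness, a short R check of the worked example above (not part of the original answer):
x <- c(2, 4, 9, 5, 0)
y <- c(5, 8, 3, 6, 8)
sum((x - mean(x)) * (y - mean(y))) / (length(x) - 1)   # -5.25, the covariance computed by hand above
cov(x, y)                                              # -5.25 again
cor(x, y)                                              # about -0.73: the covariance scaled by the standard deviations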
How would you explain the difference between correlation and covariance?
The requirements of these types of questions strike me as a bit bizarre. Here is a mathematical concept/formula, yet I want to talk about it in some context completely devoid of mathematical symbols.
How would you explain the difference between correlation and covariance? The requirements of these types of questions strike me as a bit bizarre. Here is a mathematical concept/formula, yet I want to talk about it in some context completely devoid of mathematical symbols. I also think it should be stated that the actual algebra necessary to understand the formulas, I would think, should be taught to most individuals before higher education (no understanding of matrix algebra is needed, just simple algebra will suffice). So, at first instead of completely ignoring the formula and speaking of it in some magical and heuristic types of analogies, lets just look at the formula and try to explain the individual components in small steps. The difference in terms of covariance and correlation, when looking at the formulas, should become clear. Whereas speaking in terms of analogies and heuristics I suspect would obsfucate two relatively simple concepts and their differences in many situations. So lets starts out with a formula for the sample covariance (these I have just taken and adopted from wikipedia); $\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})$ To get everyone up to speed, lets explicitly define all of the elements and operations in the formula. $x_i$ and $y_i$ are each measurements of two seperate attributes of the same observation $\bar{x}$ and $\bar{y}$ are the means (or average) of each attribute For $\frac{1}{n-1}$, lets just say this means we divide the final result by ${n-1}$. $\sum_{i=1}^{n}$ may be a foreign symbol to some, so it would likely be useful to explain this operation. It is simply the sum of all $i$ seperate observations, and $n$ represents the total number of observations. At this point, I might introduce a simple example, to put a face on the elements and operations so to speak. So for example, lets just make up a table, where each row corresponds to an observation (and $x$ and $y$ are labeled appropriately). One would likely make these examples more specific (e.g. say $x$ represents age and $y$ represents weight), but for our discussion here it should not matter. x y --- 2 5 4 8 9 3 5 6 0 8 At this point if you feel the sum operation in the formula may not have been fully comprehended, you can introduce it again in a much simpler context. Say just present that $\sum_{i=1}^{n}(x_i)$ is the same as saying in this example; x -- 2 4 9 5 + 0 -- 20 Now that mess should be cleared up, and we can work our way into the second part of the formula, $(x_i-\bar{x})(y_i-\bar{y})$. Now, assuming people already know what the mean, $\bar{x}$ and $\bar{y}$ stand for, and I would say, being hypocritical of my own comments earlier in the post, one can just refer to the mean in terms of simple heuristics (e.g. the middle of the distribution). One can then just take this process one operation at a time. The statement $(x_i-\bar{x})$ is just examining the deviations/distance between each observation, and the mean of all observations for that particular attribute. Hence when an observation is further from the mean, this operation will be given a higher value. One can then refer back to the example table given, and simply demonstrate the operation on the $x$ vector of observations. x x_bar (x - x_bar) 2 4 -2 4 4 0 9 4 5 5 4 1 0 4 -4 The operation is the same for $y$ vector, but just for reinforcement you can present that operation as well. 
y y_bar (y - y_bar) 5 6 -1 8 6 2 3 6 -3 6 6 0 8 6 2 Now, the terms $(x_i-\bar{x})$ and $(y_i-\bar{y})$ should not be ambiguous, and we can go onto the next operation, multiplying these results together, $(x_i-\bar{x})\cdot(y_i-\bar{y})$. As gung points out in the comments, this is frequently called the cross product (perhaps a useful example to bring back up if one were introducing basic matrix algebra for statistics). Take note of what happens when multiplying, if two observations are both a large distance above the mean, the resulting observation will have an even larger positive value (the same is true if both observations are a large distance below the mean, as multiplying two negatives equals a positive). Also note that if one observation is high above the mean and the other is well below the mean, the resulting value will be large (in absolute terms) and negative (as a positive times a negative equals a negative number). Finally note that when a value is very near the mean for either observation, multiplying the two values will result in a small number. Again we can just present this operation in a table. (x - x_bar) (y - y_bar) (x - x_bar)*(y - y_bar) -2 -1 2 0 2 0 5 -3 -15 1 0 0 -4 2 -8 Now if there are any statisticians in the room they should be boiling with anticipation at this point. We can see all the seperate elements of what a covariance is, and how it is calculated come into play. Now all we have to do is sum up the final result in the preceding table, divide by $n-1$ and voila, the covariance should no longer be mystical (all with only defining one greek symbol). (x - x_bar)*(y - y_bar) ----------------------- 2 0 -15 0 + -8 ----- -21 -21/(5-1) = -5.25 At this point you may want to reinforce where the 5 is coming from, but that should be as simple as referring back to the table and counting the number of observations (lets again leave the difference between sample and population to another time). Now, the covariance in and of itself does not tell us much (it can, but it is needless at this point to go into any interesting examples without resorting to magically, undefined references to the audience). In a good case scenario, you won't really need to sell why we should care what the covariance is, in other circumstances, you may just have to hope your audience is captive and will take your word for it. But, continuing on to develop the difference between what the covariance is and what the correlation is, we can just refer back to the formula for correlation. To prevent greek symbol phobia maybe just say $\rho$ is the common symbol used to represent correlation. $\rho = \frac{Cov(x,y)}{\sqrt{Var(x)Var(y)}}$ Again, to reiterate, the numerator in the preceding formula is simply the covariance as we have just defined, and the denominator is the square root of the product of the variance of each individual series. If you need to define the variance itself, you could just say that the variance is the same thing as the covariance of a series with itself (i.e. $Cov(x,x) = Var(x)$). And all the same concepts that you introduced with the covariance apply (i.e. if a series has many values a far ways from its mean, it will have a high variance). Maybe note here that a series can not have a negative variance as well (which should logically follow from the math previously presented). So the only new components we have introduced are in the denominator, $Var(x)Var(y)$. So we are dividing the covariance we just calculated by the product of the variances of each series. 
One could go into the treatment about why dividing by $\sqrt{Var(x)Var(y)}$ will always result in a value between -1 and 1, but I suspect the Cauchy–Schwarz inequality should be left off of the agenda for this discussion. So again, I'm a hypocrite and resort to some, take my word for it, but at this point we can introduce all the reasons why we use the correlation coefficient. One can then relate these math lessons back to the heuristics that have been given in the other statements, such as Peter Flom's response to one of the other questions. While this was critisized for introducing the concept in terms of causal statements, that lesson should be on the agenda at some point as well. I understand in some circumstances this level of treatment would not be appropriate. The senate needs the executive summary. In that case, well you can refer back to the simple heuristics that people have been using in other examples, but Rome wasn't built in a day. And to the senate whom asks for the executive summary, if you have so little time perhaps you should just take my word for it, and dispense with the formalities of analogies and bullet-points.
How would you explain the difference between correlation and covariance? The requirements of these types of questions strike me as a bit bizarre. Here is a mathematical concept/formula, yet I want to talk about it in some context completely devoid of mathematical symbols.
1,345
How would you explain the difference between correlation and covariance?
Correlation (r) is the covariance (cov) of your variables (x & y) divided by (in other words, adjusted by) the product of their standard deviations ($\sqrt{Var[x]Var[y]}$). That is, correlation is simply a rescaled representation of covariance, so the result must lie between -1 (perfectly inversely correlated) and +1 (perfectly positively correlated), noting that a value close to zero means that the two variables are uncorrelated. Covariance is unbounded and lacks a context when comparing to other covariances. By normalising/adjusting/standardising covariances into a correlation, data sets can be compared more easily. As you can imagine, there are different ways a statistic (such as covariance) can be normalised/standardised. The mathematical formula for the relationship between correlation and covariance simply reflects the convention statisticians use (namely, adjusting according to their standard deviations): $$r = \frac{cov(x,y)}{\sqrt{Var[x]Var[y]}}$$
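A one-step numerical check of that formula in R (an illustrative sketch with made-up data):
set.seed(1)
x <- rnorm(50)
y <- 2 * x + rnorm(50)
cov(x, y) / sqrt(var(x) * var(y))   # the formula above...
cor(x, y)                           # ...gives exactly the built-in correlation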
How would you explain the difference between correlation and covariance?
Correlation (r) is the covariance (cov) of your variables (x & y) divided by (or adjusted by, in other words) each of their standard deviations ($\sqrt{Var[x]Var[y]}$). That is, correlation is simply
How would you explain the difference between correlation and covariance? Correlation (r) is the covariance (cov) of your variables (x & y) divided by (or adjusted by, in other words) each of their standard deviations ($\sqrt{Var[x]Var[y]}$). That is, correlation is simply a representation of covariance so the result must lay between -1 (perfectly inversely correlated) an +1 (perfectly positively correlated), noting that a value close to zero means that two variables are uncorrelated. Covariance is unbounded and lacks a context when comparing to other covariances. By Normalising/adjusting/standardising covariances into a correlation, data sets can be compared more easily. As you can imagine, there are different ways a statistic (such as covariance) can be normalised/standardised. The mathematical formula for the relationship between correlation and covariance simply reflects the convention statisticians use (namely, adjusting according to their standard deviations): $$r = \frac{cov(x,y)}{\sqrt{Var[x]Var[y]}}$$
How would you explain the difference between correlation and covariance? Correlation (r) is the covariance (cov) of your variables (x & y) divided by (or adjusted by, in other words) each of their standard deviations ($\sqrt{Var[x]Var[y]}$). That is, correlation is simply
1,346
How would you explain the difference between correlation and covariance?
If you are familiar with the idea of centering and standardizing, x - xbar centers x at its mean, and the same applies to y. So covariance simply centers the data. Correlation, however, not only centers the data but also scales it using the standard deviation (standardizes it). The multiplication and summation is the dot product of the two vectors, and it tells you how parallel the two vectors are (the projection of one vector onto the other). The division by (n-1), or taking the expected value, scales for the number of observations. Thoughts?
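A brief R sketch of that view (my own illustration with made-up data): covariance and correlation as scaled dot products of centered and standardized vectors.
set.seed(2)
x <- rnorm(30); y <- rnorm(30)
xc <- x - mean(x); yc <- y - mean(y)            # centered vectors
sum(xc * yc) / (length(x) - 1)                  # dot product of centered vectors -> covariance
cov(x, y)
sum(scale(x) * scale(y)) / (length(x) - 1)      # dot product of standardized vectors -> correlation
cor(x, y)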
How would you explain the difference between correlation and covariance?
If you are familiar with the idea of centering and standardizing, x-xbar is to center x at its mean. Same applies to y. So covariance simply centers the data. Correlation, however, not only centers
How would you explain the difference between correlation and covariance? If you are familiar with the idea of centering and standardizing, x-xbar is to center x at its mean. Same applies to y. So covariance simply centers the data. Correlation, however, not only centers the data but also scales using the standard deviation (standardize). The multiplication and summation is the dot-product of the two vectors and it tells how parallel these two vectors compare to each other (the projection of one vector onto the other). The division of (n-1) or taking the expected value is to scale for the number of observations. Thoughts?
How would you explain the difference between correlation and covariance? If you are familiar with the idea of centering and standardizing, x-xbar is to center x at its mean. Same applies to y. So covariance simply centers the data. Correlation, however, not only centers
1,347
How would you explain the difference between correlation and covariance?
As far as I've understood it, correlation is a "normalized" version of the covariance.
How would you explain the difference between correlation and covariance?
As far as I've understood it, correlation is a "normalized" version of the covariance.
How would you explain the difference between correlation and covariance? As far as I've understood it, correlation is a "normalized" version of the covariance.
How would you explain the difference between correlation and covariance? As far as I've understood it, correlation is a "normalized" version of the covariance.
1,348
How would you explain the difference between correlation and covariance?
Correlation is scaled to lie between -1 and +1 depending on whether there is positive or negative correlation, and is dimensionless. The covariance, however, is not constrained to such a range: it is zero for two independent variables and equals Var(X) when the two sets of data are equal, and in general its magnitude can be anything up to $\sqrt{Var(X)Var(Y)}$. The units of COV(X,Y) are the units of X times the units of Y.
How would you explain the difference between correlation and covariance?
Correlation is scaled to be between -1 and +1 depending on whether there is positive or negative correlation, and is dimensionless. The covariance however, ranges from zero, in the case of two indepen
How would you explain the difference between correlation and covariance? Correlation is scaled to be between -1 and +1 depending on whether there is positive or negative correlation, and is dimensionless. The covariance however, ranges from zero, in the case of two independent variables, to Var(X), in the case where the two sets of data are equal. The units of COV(X,Y) are the units of X times the units of Y.
How would you explain the difference between correlation and covariance? Correlation is scaled to be between -1 and +1 depending on whether there is positive or negative correlation, and is dimensionless. The covariance however, ranges from zero, in the case of two indepen
1,349
PCA and proportion of variance explained
In case of PCA, "variance" means summative variance or multivariate variability or overall variability or total variability. Below is the covariance matrix of some 3 variables. Their variances are on the diagonal, and the sum of the 3 values (3.448) is the overall variability. 1.343730519 -.160152268 .186470243 -.160152268 .619205620 -.126684273 .186470243 -.126684273 1.485549631 Now, PCA replaces original variables with new variables, called principal components, which are orthogonal (i.e. they have zero covariations) and have variances (called eigenvalues) in decreasing order. So, the covariance matrix between the principal components extracted from the above data is this: 1.651354285 .000000000 .000000000 .000000000 1.220288343 .000000000 .000000000 .000000000 .576843142 Note that the diagonal sum is still 3.448, which says that all 3 components account for all the multivariate variability. The 1st principal component accounts for or "explains" 1.651/3.448 = 47.9% of the overall variability; the 2nd one explains 1.220/3.448 = 35.4% of it; the 3rd one explains .577/3.448 = 16.7% of it. So, what do they mean when they say that "PCA maximizes variance" or "PCA explains maximal variance"? That is not, of course, that it finds the largest variance among three values 1.343730519 .619205620 1.485549631, no. PCA finds, in the data space, the dimension (direction) with the largest variance out of the overall variance 1.343730519+.619205620+1.485549631 = 3.448. That largest variance would be 1.651354285. Then it finds the dimension of the second largest variance, orthogonal to the first one, out of the remaining 3.448-1.651354285 overall variance. That 2nd dimension would be 1.220288343 variance. And so on. The last remaining dimension is .576843142 variance. See also "Pt3" here and the great answer here explaining how it done in more detail. Mathematically, PCA is performed via linear algebra functions called eigen-decomposition or svd-decomposition. These functions will return you all the eigenvalues 1.651354285 1.220288343 .576843142 (and corresponding eigenvectors) at once (see, see).
PCA and proportion of variance explained
In case of PCA, "variance" means summative variance or multivariate variability or overall variability or total variability. Below is the covariance matrix of some 3 variables. Their variances are on
PCA and proportion of variance explained In case of PCA, "variance" means summative variance or multivariate variability or overall variability or total variability. Below is the covariance matrix of some 3 variables. Their variances are on the diagonal, and the sum of the 3 values (3.448) is the overall variability. 1.343730519 -.160152268 .186470243 -.160152268 .619205620 -.126684273 .186470243 -.126684273 1.485549631 Now, PCA replaces original variables with new variables, called principal components, which are orthogonal (i.e. they have zero covariations) and have variances (called eigenvalues) in decreasing order. So, the covariance matrix between the principal components extracted from the above data is this: 1.651354285 .000000000 .000000000 .000000000 1.220288343 .000000000 .000000000 .000000000 .576843142 Note that the diagonal sum is still 3.448, which says that all 3 components account for all the multivariate variability. The 1st principal component accounts for or "explains" 1.651/3.448 = 47.9% of the overall variability; the 2nd one explains 1.220/3.448 = 35.4% of it; the 3rd one explains .577/3.448 = 16.7% of it. So, what do they mean when they say that "PCA maximizes variance" or "PCA explains maximal variance"? That is not, of course, that it finds the largest variance among three values 1.343730519 .619205620 1.485549631, no. PCA finds, in the data space, the dimension (direction) with the largest variance out of the overall variance 1.343730519+.619205620+1.485549631 = 3.448. That largest variance would be 1.651354285. Then it finds the dimension of the second largest variance, orthogonal to the first one, out of the remaining 3.448-1.651354285 overall variance. That 2nd dimension would be 1.220288343 variance. And so on. The last remaining dimension is .576843142 variance. See also "Pt3" here and the great answer here explaining how it done in more detail. Mathematically, PCA is performed via linear algebra functions called eigen-decomposition or svd-decomposition. These functions will return you all the eigenvalues 1.651354285 1.220288343 .576843142 (and corresponding eigenvectors) at once (see, see).
PCA and proportion of variance explained In case of PCA, "variance" means summative variance or multivariate variability or overall variability or total variability. Below is the covariance matrix of some 3 variables. Their variances are on
1,350
PCA and proportion of variance explained
@ttnphns has provided a good answer, perhaps I can add a few points. First, I want to point out that there was a relevant question on CV, with a really strong answer—you definitely want to check it out. In what follows, I will refer to the plots shown in that answer. All three plots display the same data. Notice that there is variability in the data both vertically and horizontally, but we can think of most of the variability as actually being diagonal. In the third plot, that long black diagonal line is the first eigenvector (or the first principal component), and the length of that principal component (the spread of the data along that line--not actually the length of the line itself, which is just drawn on the plot) is the first eigenvalue--it's the amount of variance accounted for by the first principal component. If you were to sum that length with the length of the second principal component (which is the width of the spread of the data orthogonally out from that diagonal line), and then divide either of the eigenvalues by that total, you would get the percent of the variance accounted for by the corresponding principal component. On the other hand, to understand the percent of the variance accounted for in regression, you can look at the top plot. In that case, the red line is the regression line, or the set of the predicted values from the model. The variance explained can be understood as the ratio of the vertical spread of the regression line (i.e., from the lowest point on the line to the highest point on the line) to the vertical spread of the data (i.e., from the lowest data point to the highest data point). Of course, that's only a loose idea, because literally those are ranges, not variances, but that should help you get the point. Be sure to read the question. And, although I referred to the top answer, several of the answers given are excellent. It's worth your time to read them all.
PCA and proportion of variance explained
@ttnphns has provided a good answer, perhaps I can add a few points. First, I want to point out that there was a relevant question on CV, with a really strong answer—you definitely want to check it o
PCA and proportion of variance explained @ttnphns has provided a good answer, perhaps I can add a few points. First, I want to point out that there was a relevant question on CV, with a really strong answer—you definitely want to check it out. In what follows, I will refer to the plots shown in that answer. All three plots display the same data. Notice that there is variability in the data both vertically and horizontally, but we can think of most of the variability as actually being diagonal. In the third plot, that long black diagonal line is the first eigenvector (or the first principle component), and the length of that principle component (the spread of the data along that line--not actually the length of the line itself, which is just drawn on the plot) is the first eigenvalue--it's the amount of variance accounted for by the first principle component. If you were to sum that length with the length of the second principle component (which is the width of the spread of the data orthogonally out from that diagonal line), and then divided either of the eigenvalues by that total, you would get the percent of the variance accounted for by the corresponding principle component. On the other hand, to understand the percent of the variance accounted for in regression, you can look at the top plot. In that case, the red line is the regression line, or the set of the predicted values from the model. The variance explained can be understood as the ratio of the vertical spread of the regression line (i.e., from the lowest point on the line to the highest point on the line) to the vertical spread of the data (i.e., from the lowest data point to the highest data point). Of course, that's only a loose idea, because literally those are ranges, not variances, but that should help you get the point. Be sure to read the question. And, although I referred to the top answer, several of the answers given are excellent. It's worth your time to read them all.
PCA and proportion of variance explained @ttnphns has provided a good answer, perhaps I can add a few points. First, I want to point out that there was a relevant question on CV, with a really strong answer—you definitely want to check it o
1,351
PCA and proportion of variance explained
There is a very simple, direct, and precise mathematical answer to the original question. The first PC is a linear combination of the original variables $Y_1$, $Y_2$, $\dots$, $Y_p$ that maximizes the total of the $R_i^2$ statistics when predicting the original variables as a regression function of the linear combination. Precisely, the coefficients $a_1$, $a_2$, $\dots$, $a_p$ in the first PC, $PC_1 = a_1Y_1 + a_2Y_2 + \cdots + a_pY_p$, give you the maximum value of $\sum_{i=1}^p R_i^2(Y_i | PC_1)$, where the maximum is taken over all possible linear combinations. In this sense, you can interpret the first PC as a maximizer of "variance explained," or more precisely, a maximizer of "total variance explained." It is "a" maximizer rather than "the" maximizer, because any proportional coefficients $b_i = c\times a_i$, for $c \neq 0$, will give the same maximum. A nice by-product of this result is that the unit length constraint is unnecessary, other than as a device to come up with "a" maximizer. For references to original literature and extensions, see Westfall,P.H., Arias, A.L., and Fulton, L.V. (2017). Teaching Principal Components Using Correlations, Multivariate Behavioral Research, 52, 648-660.
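A quick numerical check of this claim is easy to set up. The sketch below is illustrative only: the data are simulated and standardized (so the PCA is correlation-based, the setting of the cited paper), and it simply verifies that random alternative linear combinations never beat the first eigenvector on total $R^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.multivariate_normal([0, 0, 0],
                            [[1, .5, .3], [.5, 1, .2], [.3, .2, 1]], size=500)
Y = (Y - Y.mean(0)) / Y.std(0)          # standardize -> correlation-based PCA

def total_r2(Y, w):
    """Sum over variables of R^2 from regressing each Y_i on the score Y @ w."""
    s = Y @ w
    r = np.array([np.corrcoef(Y[:, i], s)[0, 1] for i in range(Y.shape[1])])
    return np.sum(r ** 2)

# First eigenvector of the correlation matrix = coefficients of PC1
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Y, rowvar=False))
a1 = eigvecs[:, -1]                     # eigenvector with the largest eigenvalue

best_random = max(total_r2(Y, rng.standard_normal(3)) for _ in range(5000))
# PC1's total R^2 (equal to the largest eigenvalue) is never beaten
print(total_r2(Y, a1), ">=", best_random)
```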
PCA and proportion of variance explained
There is a very simple, direct, and precise mathematical answer to the original question. The first PC is a linear combination of the original variables $Y_1$, $Y_2$, $\dots$, $Y_p$ that maximizes th
PCA and proportion of variance explained There is a very simple, direct, and precise mathematical answer to the original question. The first PC is a linear combination of the original variables $Y_1$, $Y_2$, $\dots$, $Y_p$ that maximizes the total of the $R_i^2$ statistics when predicting the original variables as a regression function of the linear combination. Precisely, the coefficients $a_1$, $a_2$, $\dots$, $a_p$ in the first PC, $PC_1 = a_1Y_1 + a_2Y_2 + \cdots + a_pY_p$, give you the maximum value of $\sum_{i=1}^p R_i^2(Y_i | PC_1)$, where the maximum is taken over all possible linear combinations. In this sense, you can interpret the first PC as a maximizer of "variance explained," or more precisely, a maximizer of "total variance explained." It is "a" maximizer rather than "the" maximizer, because any proportional coefficients $b_i = c\times a_i$, for $c \neq 0$, will give the same maximum. A nice by-product of this result is that the unit length constraint is unnecessary, other than as a device to come up with "a" maximizer. For references to original literature and extensions, see Westfall,P.H., Arias, A.L., and Fulton, L.V. (2017). Teaching Principal Components Using Correlations, Multivariate Behavioral Research, 52, 648-660.
PCA and proportion of variance explained There is a very simple, direct, and precise mathematical answer to the original question. The first PC is a linear combination of the original variables $Y_1$, $Y_2$, $\dots$, $Y_p$ that maximizes th
1,352
PCA and proportion of variance explained
Think about $Y=A+B$ as random variable $Y$ being explained by two new random variables $A$ and $B$. Why do we do this? Maybe $Y$ is complex but $A$ and $B$ are less complex. Anyhow, the portion of variance of $Y$ is explained by those of $A$ and $B$: $var(Y) = var(A) + var(B) + 2cov(A,B)$. Application of this to linear regression is simple. Think of $A$ being $b_0+b_1X$ and $B$ being $e$; then $Y=b_0+b_1X+e$. A portion of the variance in $Y$ is explained by the regression line, $b_0+b_1X$. We use the term "proportion of variance" because we want to quantify how useful the regression line is for predicting (or modeling) $Y$.
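As a quick sanity check on this decomposition (a minimal sketch with simulated data of my own choosing): for OLS with an intercept, the fitted values and residuals are uncorrelated, so the covariance term drops out and the variance of $Y$ splits cleanly into an explained part plus a residual part.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2 + 3 * x + rng.normal(size=200)

b1, b0 = np.polyfit(x, y, 1)          # OLS slope and intercept
fitted = b0 + b1 * x                  # A = b0 + b1*X
resid = y - fitted                    # B = e

# cov(A, B) is ~0 for OLS, so var(Y) ~= var(A) + var(B)
print(np.var(y), np.var(fitted) + np.var(resid))
print("R^2 =", np.var(fitted) / np.var(y))   # proportion of variance explained
```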
PCA and proportion of variance explained
Think about $Y=A+B$ as random variable $Y$ being explained by two new random variables $A$ and $B$. why we do this? Maybe $Y$ is complex but $A$ and $B$ are less complex. Anyhow, the portion of varian
PCA and proportion of variance explained Think about $Y=A+B$ as random variable $Y$ being explained by two new random variables $A$ and $B$. why we do this? Maybe $Y$ is complex but $A$ and $B$ are less complex. Anyhow, the portion of variance of $Y$ is explained by those of $A$ and $B$. $var(Y) = var(A) + var (B) + 2cov(A,B)$. Application of this to the linear regression is simple. Think of $A$ being $b_0+b_1X$ and $B$ is $e$, then $Y=b_0+b_1X+e$. Portion of variance in $Y$ is explained by the regression line, $b_0+b_1X$. We use "proportion of variance" term because we want to quantify how much regression line is useful to predict (or model) $Y$.
PCA and proportion of variance explained Think about $Y=A+B$ as random variable $Y$ being explained by two new random variables $A$ and $B$. why we do this? Maybe $Y$ is complex but $A$ and $B$ are less complex. Anyhow, the portion of varian
1,353
How is it possible that validation loss is increasing while validation accuracy is increasing as well
Other answers explain well how accuracy and loss are not necessarily exactly (inversely) correlated, as loss measures a difference between raw output (float) and a class (0 or 1 in the case of binary classification), while accuracy measures the difference between thresholded output (0 or 1) and class. So if raw outputs change, loss changes but accuracy is more "resilient" as outputs need to go over/under a threshold to actually change accuracy. However, accuracy and loss intuitively seem to be somewhat (inversely) correlated, as better predictions should lead to lower loss and higher accuracy, and the case of higher loss and higher accuracy shown by OP is surprising. I have myself encountered this case several times, and I present here my conclusions based on the analysis I had conducted at the time. I stress that this answer is therefore purely based on experimental data I encountered, and there may be other reasons for OP's case. Let's consider the case of binary classification, where the task is to predict whether an image is a cat or a dog, and the output of the network is a sigmoid (outputting a float between 0 and 1), where we train the network to output 1 if the image is one of a cat and 0 otherwise. I believe that in this case, two phenomena are happening at the same time.
1. Some images with borderline predictions get predicted better and so their output class changes (image C in the figure). This is the classic "loss decreases while accuracy increases" behavior that we expect when training is going well.
2. Some images with very bad predictions keep getting worse (image D in the figure). This leads to a less classic "loss increases while accuracy stays the same".
Note that when one uses cross-entropy loss for classification as it is usually done, bad predictions are penalized much more strongly than good predictions are rewarded. For a cat image (ground truth: 1), the loss is $-\log(\text{output})$, so even if many cat images are correctly predicted (e.g. images A and B in the figure, contributing almost nothing to the mean loss), a single misclassified cat image will have a high loss, hence "blowing up" your mean loss. See this answer for further illustration of this phenomenon. (Getting increasing loss and stable accuracy could also be caused by good predictions being classified a little worse, but I find it less likely because of this loss "asymmetry".) So I think that when both accuracy and loss are increasing, the network is starting to overfit, and both phenomena are happening at the same time. The network is starting to learn patterns only relevant for the training set and not great for generalization, leading to phenomenon 2: some images from the validation set get predicted really wrong (image D in the figure), with the effect amplified by the "loss asymmetry". However, it is at the same time still learning some patterns which are useful for generalization (phenomenon 1, "good learning") as more and more images are being correctly classified (image C, and also images A and B in the figure). I sadly have no answer for whether or not this "overfitting" is a bad thing in this case: should we stop the learning once the network is starting to learn spurious patterns, even though it's continuing to learn useful ones along the way? Finally, I think this effect can be further obscured in the case of multi-class classification, where the network at a given epoch might be severely overfit on some classes but still learning on others.
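The "loss asymmetry" point is easy to see with a few numbers. Below is a toy sketch (binary cross-entropy; the probability values are made up purely for illustration): one borderline image flips to correct while one already-wrong image gets much worse, so accuracy goes up while the mean loss also goes up, exactly the pattern in the question.

```python
import numpy as np

def bce(y, p):
    """Binary cross-entropy, ground truth y in {0,1}, prediction p in (0,1)."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.ones(10)                                # ten cat images (label 1)

p_before = np.array([0.85]*8 + [0.45, 0.40])   # two images predicted wrong
p_after  = np.array([0.90]*8 + [0.55, 0.05])   # one flips to correct, one gets far worse

for p in (p_before, p_after):
    acc = np.mean((p > 0.5) == y)
    print(f"accuracy = {acc:.2f}, mean loss = {bce(y, p).mean():.3f}")
# accuracy rises from 0.8 to 0.9 while the mean loss rises from ~0.30 to ~0.44
```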
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
Other answers explain well how accuracy and loss are not necessarily exactly (inversely) correlated, as loss measures a difference between raw output (float) and a class (0 or 1 in the case of binary
How is it possible that validation loss is increasing while validation accuracy is increasing as well Other answers explain well how accuracy and loss are not necessarily exactly (inversely) correlated, as loss measures a difference between raw output (float) and a class (0 or 1 in the case of binary classification), while accuracy measures the difference between thresholded output (0 or 1) and class. So if raw outputs change, loss changes but accuracy is more "resilient" as outputs need to go over/under a threshold to actually change accuracy. However, accuracy and loss intuitively seem to be somewhat (inversely) correlated, as better predictions should lead to lower loss and higher accuracy, and the case of higher loss and higher accuracy shown by OP is surprising. I have myself encountered this case several times, and I present here my conclusions based on the analysis I had conducted at the time. I stress that this answer is therefore purely based on experimental data I encountered, and there may be other reasons for OP's case. Let's consider the case of binary classification, where the task is to predict whether an image is a cat or a dog, and the output of the network is a sigmoid (outputting a float between 0 and 1), where we train the network to output 1 if the image is one of a cat and 0 otherwise. I believe that in this case, two phenomenons are happening at the same time. Some images with borderline predictions get predicted better and so their output class changes (image C in the figure). This is the classic "loss decreases while accuracy increases" behavior that we expect when training is going well. Some images with very bad predictions keep getting worse (image D in the figure). This leads to a less classic "loss increases while accuracy stays the same". Note that when one uses cross-entropy loss for classification as it is usually done, bad predictions are penalized much more strongly than good predictions are rewarded. For a cat image (ground truth : 1), the loss is $log(output)$, so even if many cat images are correctly predicted (eg images A and B in the figure, contributing almost nothing to the mean loss), a single misclassified cat image will have a high loss, hence "blowing up" your mean loss. See this answer for further illustration of this phenomenon. (Getting increasing loss and stable accuracy could also be caused by good predictions being classified a little worse, but I find it less likely because of this loss "asymetry"). So I think that when both accuracy and loss are increasing, the network is starting to overfit, and both phenomena are happening at the same time. The network is starting to learn patterns only relevant for the training set and not great for generalization, leading to phenomenon 2, some images from the validation set get predicted really wrong (image C in the figure), with an effect amplified by the "loss asymetry". However, it is at the same time still learning some patterns which are useful for generalization (phenomenon one, "good learning") as more and more images are being correctly classified (image C, and also images A and B in the figure). I sadly have no answer for whether or not this "overfitting" is a bad thing in this case: should we stop the learning once the network is starting to learn spurious patterns, even though it's continuing to learn useful ones along the way? 
Finally, I think this effect can be further obscured in the case of multi-class classification, where the network at a given epoch might be severely overfit on some classes but still learning on others.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel Other answers explain well how accuracy and loss are not necessarily exactly (inversely) correlated, as loss measures a difference between raw output (float) and a class (0 or 1 in the case of binary
1,354
How is it possible that validation loss is increasing while validation accuracy is increasing as well
Accuracy of a set is evaluated by just cross-checking the highest softmax output against the correct labeled class. It does not depend on how high the softmax output is. To make it clearer, here are some numbers. Suppose there are 2 classes - horse and dog. For our case, the correct class is horse. Now, the output of the softmax is [0.9, 0.1]. The cross-entropy loss for this prediction is about 0.11. The classifier will predict that it is a horse. Take another case where the softmax output is [0.6, 0.4]. The loss is about 0.51. The classifier will still predict that it is a horse. But surely, the loss has increased. So, it is all about the output distribution.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
Accuracy of a set is evaluated by just cross-checking the highest softmax output and the correct labeled class.It is not depended on how high is the softmax output. To make it clearer, here are some n
How is it possible that validation loss is increasing while validation accuracy is increasing as well Accuracy of a set is evaluated by just cross-checking the highest softmax output and the correct labeled class.It is not depended on how high is the softmax output. To make it clearer, here are some numbers. Suppose there are 2 classes - horse and dog. For our case, the correct class is horse . Now, the output of the softmax is [0.9, 0.1]. For this loss ~0.37. The classifier will predict that it is a horse. Take another case where softmax output is [0.6, 0.4]. Loss ~0.6. The classifier will still predict that it is a horse. But surely, the loss has increased. So, it is all about the output distribution.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel Accuracy of a set is evaluated by just cross-checking the highest softmax output and the correct labeled class.It is not depended on how high is the softmax output. To make it clearer, here are some n
1,355
How is it possible that validation loss is increasing while validation accuracy is increasing as well
Many answers focus on the mathematical calculation explaining how this is possible. But they don't explain why it happens, and they don't suggest how to dig deeper to make things clearer. I have 3 hypotheses, and I suggest some experiments to verify them. Hopefully this can help explain the problem.
1. The labels are noisy. Compare the false predictions when val_loss is at its minimum and val_acc is at its maximum, and check whether these samples are correctly labelled.
2. [Less likely] The model doesn't have access to enough aspects of the information to be certain. Experiment with more and larger hidden layers.
3. [A very wild guess] This is a case where the model becomes less certain about certain things as it is trained longer. A similar thing happens to humans as well. When someone starts to learn a technique, he is told exactly what is good or bad and what things are for (high certainty). When he goes through more cases and examples, he realizes that sometimes the borders can be blurry (less certainty, higher loss), even though he can make better decisions (higher accuracy). And he may eventually become more certain when he becomes a master, after going through a huge list of samples and lots of trial and error (more training data). So in this case, I suggest that experimenting with adding more noise to the training data (not the labels) may be helpful.
Don't argue about this just by saying that you disagree with these hypotheses. It will be more meaningful to discuss them with experiments that verify them, whether the results prove them right or wrong.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
Many answers focus on the mathematical calculation explaining how is this possible. But they don't explain why it becomes so. And they cannot suggest how to digger further to be more clear. I have 3
How is it possible that validation loss is increasing while validation accuracy is increasing as well Many answers focus on the mathematical calculation explaining how is this possible. But they don't explain why it becomes so. And they cannot suggest how to digger further to be more clear. I have 3 hypothesis. And suggest some experiments to verify them. Hopefully it can help explain this problem. Label is noisy. Compare the false predictions when val_loss is minimum and val_acc is maximum. Check whether these sample are correctly labelled. [Less likely] The model doesn't have enough aspect of information to be certain. Experiment with more and larger hidden layers. [A very wild guess] This is a case where the model is less certain about certain things as being trained longer. Such situation happens to human as well. When someone started to learn a technique, he is told exactly what is good or bad, what is certain things for (high certainty). When he goes through more cases and examples, he realizes sometimes certain border can be blur (less certain, higher loss), even though he can make better decisions (more accuracy). And he may eventually gets more certain when he becomes a master after going through a huge list of samples and lots of trial and errors (more training data). So in this case, I suggest experiment with adding more noise to the training data (not label) may be helpful. Don't argue about this by just saying if you disagree with these hypothesis. It will be more meaningful to discuss with experiments to verify them, no matter the results prove them right, or prove them wrong.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel Many answers focus on the mathematical calculation explaining how is this possible. But they don't explain why it becomes so. And they cannot suggest how to digger further to be more clear. I have 3
1,356
How is it possible that validation loss is increasing while validation accuracy is increasing as well
From Ankur's answer, it seems to me that: Accuracy measures the percentage correctness of the prediction, i.e. $\frac{\text{correct classes}}{\text{total classes}}$, while Loss actually tracks the inverse-confidence (for want of a better word) of the prediction. A high Loss score indicates that, even when the model is making good predictions, it is less sure of the predictions it is making... and vice-versa. So... High Validation Accuracy + High Loss Score vs High Training Accuracy + Low Loss Score suggests that the model may be over-fitting on the training data.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
From Ankur's answer, it seems to me that: Accuracy measures the percentage correctness of the prediction i.e. $\frac{correct-classes}{total-classes}$ while Loss actually tracks the inverse-confiden
How is it possible that validation loss is increasing while validation accuracy is increasing as well From Ankur's answer, it seems to me that: Accuracy measures the percentage correctness of the prediction i.e. $\frac{correct-classes}{total-classes}$ while Loss actually tracks the inverse-confidence (for want of a better word) of the prediction. A high Loss score indicates that, even when the model is making good predictions, it is $less$ sure of the predictions it is making...and vice-versa. So... High Validation Accuracy + High Loss Score vs High Training Accuracy + Low Loss Score suggest that the model may be over-fitting on the training data.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel From Ankur's answer, it seems to me that: Accuracy measures the percentage correctness of the prediction i.e. $\frac{correct-classes}{total-classes}$ while Loss actually tracks the inverse-confiden
1,357
How is it possible that validation loss is increasing while validation accuracy is increasing as well
A model can overfit to cross-entropy loss without overfitting to accuracy. There is a key difference between the two metrics: accuracy measures whether you get the prediction right, while cross entropy measures how confident you are about a prediction. For example, suppose an image of a cat is passed into two models. Model A predicts {cat: 0.9, dog: 0.1} and model B predicts {cat: 0.6, dog: 0.4}. Both models will score the same accuracy, but model A will have a lower loss. Because of this, the model will try to be more and more confident to minimize loss. This works fine in the training stage, but in the validation stage it will perform poorly in terms of loss. For example, for some borderline images, being confident, e.g. {cat: 0.9, dog: 0.1}, will give a much higher loss than being uncertain, e.g. {cat: 0.6, dog: 0.4}, whenever the confident prediction turns out to be wrong. In short, cross-entropy loss measures the calibration of a model. Mis-calibration is a common issue in modern neural networks: they tend to be over-confident. On Calibration of Modern Neural Networks talks about it in great detail.
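The cat example above can be written out directly. A small sketch (standard categorical cross-entropy, i.e. minus the log of the probability assigned to the true class, is assumed for the loss): both models pick "cat", so accuracy is identical, but the confident model has the lower loss; on an image that is actually a dog, the same confidence is punished much harder.

```python
import numpy as np

def cross_entropy(true_idx, probs):
    """Categorical cross-entropy: -log of the probability given to the true class."""
    return -np.log(probs[true_idx])

cat, dog = 0, 1
model_A = np.array([0.9, 0.1])        # confident
model_B = np.array([0.6, 0.4])        # uncertain

# Image really is a cat: both predict 'cat' (same accuracy), A has the lower loss
print(cross_entropy(cat, model_A), cross_entropy(cat, model_B))   # 0.105 vs 0.511

# Borderline image that is actually a dog: over-confidence now costs much more
print(cross_entropy(dog, model_A), cross_entropy(dog, model_B))   # 2.303 vs 0.916
```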
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
A model can overfit to cross entropy loss without over overfitting to accuracy. There is a key difference between the two types of loss: Accuracy measures whether you get the prediction right Cross
How is it possible that validation loss is increasing while validation accuracy is increasing as well A model can overfit to cross entropy loss without over overfitting to accuracy. There is a key difference between the two types of loss: Accuracy measures whether you get the prediction right Cross entropy measures how confident you are about a prediction For example, if an image of a cat is passed into two models. Model A predicts {cat: 0.9, dog: 0.1} and model B predicts {cat: 0.6, dog: 0.4}. Both model will score the same accuracy, but model A will have a lower loss. Because of this the model will try to be more and more confident to minimize loss. It works fine in training stage, but in validation stage it will perform poorly in term of loss. For example, for some borderline images, being confident e.g. {cat: 0.9, dog: 0.1} will give higher loss than being uncertain e.g. {cat: 0.6, dog: 0.4} In short, cross entropy loss measures the calibration of a model. Mis-calibration is a common issue to modern neuronal networks. They tend to be over-confident. On Calibration of Modern Neural Networks talks about it in great details.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel A model can overfit to cross entropy loss without over overfitting to accuracy. There is a key difference between the two types of loss: Accuracy measures whether you get the prediction right Cross
1,358
How is it possible that validation loss is increasing while validation accuracy is increasing as well
Let's say the label is horse and the prediction is: cat (25%), dog (35%), horse (40%). So, your model is predicting correctly, but it's less sure about it. This is how you get high accuracy and high loss.
How is it possible that validation loss is increasing while validation accuracy is increasing as wel
Let's say a label is horse and a prediction is: cat (25%) dog (35%) horse (40%) So, your model is predicting correct, but it's less sure about it. This is how you get high accuracy and high loss
How is it possible that validation loss is increasing while validation accuracy is increasing as well Let's say a label is horse and a prediction is: cat (25%) dog (35%) horse (40%) So, your model is predicting correct, but it's less sure about it. This is how you get high accuracy and high loss
How is it possible that validation loss is increasing while validation accuracy is increasing as wel Let's say a label is horse and a prediction is: cat (25%) dog (35%) horse (40%) So, your model is predicting correct, but it's less sure about it. This is how you get high accuracy and high loss
1,359
What are the main differences between K-means and K-nearest neighbours?
These are completely different methods. The fact that they both have the letter K in their name is a coincidence. K-means is a clustering algorithm that tries to partition a set of points into K sets (clusters) such that the points in each cluster tend to be near each other. It is unsupervised because the points have no external classification. K-nearest neighbors is a classification (or regression) algorithm that in order to determine the classification of a point, combines the classification of the K nearest points. It is supervised because you are trying to classify a point based on the known classification of other points.
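A compact way to see the distinction in code, using scikit-learn (assumed to be available; the toy data are made up): KMeans is fit on the features alone, while KNeighborsClassifier needs labels.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)          # known labels (only used by k-NN)

# K-means: unsupervised -- it never sees y, it just partitions X into K clusters
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(np.bincount(clusters))               # cluster sizes found without any labels

# k-NN: supervised -- it uses the labels of the K nearest training points
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict([[4.5, 5.2]]))           # classify a new point
```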
What are the main differences between K-means and K-nearest neighbours?
These are completely different methods. The fact that they both have the letter K in their name is a coincidence. K-means is a clustering algorithm that tries to partition a set of points into K sets
What are the main differences between K-means and K-nearest neighbours? These are completely different methods. The fact that they both have the letter K in their name is a coincidence. K-means is a clustering algorithm that tries to partition a set of points into K sets (clusters) such that the points in each cluster tend to be near each other. It is unsupervised because the points have no external classification. K-nearest neighbors is a classification (or regression) algorithm that in order to determine the classification of a point, combines the classification of the K nearest points. It is supervised because you are trying to classify a point based on the known classification of other points.
What are the main differences between K-means and K-nearest neighbours? These are completely different methods. The fact that they both have the letter K in their name is a coincidence. K-means is a clustering algorithm that tries to partition a set of points into K sets
1,360
What are the main differences between K-means and K-nearest neighbours?
As noted by Bitwise in their answer, k-means is a clustering algorithm. When it comes to k-nearest neighbours (k-NN), the terminology is a bit fuzzy: in the context of classification, it is a classification algorithm, as also noted in the aforementioned answer; in general, it is a problem for which various solutions (algorithms) exist. So in the first context, saying "k-NN classifier" can actually mean various underlying concrete algorithms that solve the k-NN problem, and their result is interpreted for classification purposes. These are two different things, but you might find it interesting that the k-means algorithm is one of various possible methods for solving the k-NN problem (Marius Muja and David G. Lowe, "Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration", in International Conference on Computer Vision Theory and Applications (VISAPP'09), 2009 PDF)
What are the main differences between K-means and K-nearest neighbours?
As noted by Bitwise in their answer, k-means is a clustering algorithm. If it comes to k-nearest neighbours (k-NN) the terminology is a bit fuzzy: in the context of classification, it is a classific
What are the main differences between K-means and K-nearest neighbours? As noted by Bitwise in their answer, k-means is a clustering algorithm. If it comes to k-nearest neighbours (k-NN) the terminology is a bit fuzzy: in the context of classification, it is a classification algorithm, as also noted in the aforementioned answer in general it is a problem, for which various solutions (algorithms) exist So in the first context, saying "k-NN classifier" can actually mean various underlying concrete algorithms that solve the k-NN problem, and their result is interpreted for the classification purpose. These are two different things but you might find it interesting that k-means algorithm is one of various possible methods for solving the k-NN problem (Marius Muja and David G. Lowe, "Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration", in International Conference on Computer Vision Theory and Applications (VISAPP'09), 2009 PDF)
What are the main differences between K-means and K-nearest neighbours? As noted by Bitwise in their answer, k-means is a clustering algorithm. If it comes to k-nearest neighbours (k-NN) the terminology is a bit fuzzy: in the context of classification, it is a classific
1,361
What are the main differences between K-means and K-nearest neighbours?
You can have a supervised k-means. You can build centroids (as in k-means) based on your labeled data. Nothing stops you. If you want to improve this, Euclidean space and Euclidean distance might not provide you the best results. You will need to choose your space (could be Riemannian space for example) and define the distance between points (and even define a "point"). The last two are topics of research and they also depend on the type (properties) of data (signal) you have.
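What this answer describes, building one centroid per labeled class and assigning new points to the nearest one, is essentially nearest-centroid classification. A minimal sketch (scikit-learn's NearestCentroid is one ready-made implementation; plain NumPy would work just as well):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestCentroid

X, y = load_iris(return_X_y=True)
clf = NearestCentroid()                 # one centroid per labeled class
clf.fit(X, y)
print(clf.centroids_)                   # class centroids, as in "supervised k-means"
print(clf.predict(X[:5]))               # assign points to the nearest centroid
```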
What are the main differences between K-means and K-nearest neighbours?
You can have a supervised k-means. You can build centroids (as in k-means) based on your labeled data. Nothing stops you. If you want to improve this, Euclidean space and Euclidean distance might not
What are the main differences between K-means and K-nearest neighbours? You can have a supervised k-means. You can build centroids (as in k-means) based on your labeled data. Nothing stops you. If you want to improve this, Euclidean space and Euclidean distance might not provide you the best results. You will need to choose your space (could be Riemannian space for example) and define the distance between points (and even define a "point"). The last two are topics of research and they also depend on the type (properties) of data (signal) you have.
What are the main differences between K-means and K-nearest neighbours? You can have a supervised k-means. You can build centroids (as in k-means) based on your labeled data. Nothing stops you. If you want to improve this, Euclidean space and Euclidean distance might not
1,362
What are the main differences between K-means and K-nearest neighbours?
K-means creates cluster assignments (and centroids) for the data points, while KNN by itself does not produce any clusters; it only looks up the nearest labelled points for a given query point.
What are the main differences between K-means and K-nearest neighbours?
K-means can create the cluster information for neighbour nodes while KNN cannot find the cluster for a given neighbour node.
What are the main differences between K-means and K-nearest neighbours? K-means can create the cluster information for neighbour nodes while KNN cannot find the cluster for a given neighbour node.
What are the main differences between K-means and K-nearest neighbours? K-means can create the cluster information for neighbour nodes while KNN cannot find the cluster for a given neighbour node.
1,363
What are the main differences between K-means and K-nearest neighbours?
K-means can be used as the training phase before k-NN is deployed in the actual classification stage. K-means creates the classes, each represented by a centroid and the class label of the samples belonging to it. k-NN then uses these parameters, as well as the number k, to classify an unseen new sample and assign it to one of the k classes created by the k-means algorithm.
What are the main differences between K-means and K-nearest neighbours?
k Means can be used as the training phase before knn is deployed in the actual classification stage. K means creates the classes represented by the centroid and class label ofthe samples belonging to
What are the main differences between K-means and K-nearest neighbours? k Means can be used as the training phase before knn is deployed in the actual classification stage. K means creates the classes represented by the centroid and class label ofthe samples belonging to each class. knn uses these parameters as well as the k number to classify an unseen new sample and assign it to one of the k classes created by the K means algorithm
What are the main differences between K-means and K-nearest neighbours? k Means can be used as the training phase before knn is deployed in the actual classification stage. K means creates the classes represented by the centroid and class label ofthe samples belonging to
1,364
Difference between confidence intervals and prediction intervals
Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\text{E}[y \mid x]$, so the intervals will both be centered around the same value, $x\hat{\beta}$. As @Greg says, the standard errors are going to be different---we guess the expected value of $\text{E}[y \mid x]$ more precisely than we estimate $y$ itself. Estimating $y$ requires including the variance that comes from the true error term. To illustrate the difference, imagine that we could get perfect estimates of our $\beta$ coefficients. Then, our estimate of $\text{E}[y \mid x]$ would be perfect. But we still wouldn't be sure what $y$ itself was because there is a true error term that we need to consider. Our confidence "interval" would just be a point because we estimate $\text{E}[y \mid x]$ exactly right, but our prediction interval would be wider because we take the true error term into account. Hence, a prediction interval will be wider than a confidence interval.
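In practice, many regression packages will compute both intervals for you. For example, with statsmodels (a sketch; the simulated data are mine), get_prediction(...).summary_frame() returns the confidence interval for the mean (mean_ci_*) alongside the wider prediction interval for a new observation (obs_ci_*):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1 + 2 * x + rng.normal(scale=2, size=100)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

new = sm.add_constant(np.array([5.0]), has_constant='add')   # x0 = 5
frame = fit.get_prediction(new).summary_frame(alpha=0.05)
print(frame[['mean', 'mean_ci_lower', 'mean_ci_upper',   # CI for E[y|x]
             'obs_ci_lower', 'obs_ci_upper']])           # PI for a new y
```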
Difference between confidence intervals and prediction intervals
Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\
Difference between confidence intervals and prediction intervals Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\text{E}[y \mid x]$, so the intervals will both be centered around the same value, $x\hat{\beta}$. As @Greg says, the standard errors are going to be different---we guess the expected value of $\text{E}[y \mid x]$ more precisely than we estimate $y$ itself. Estimating $y$ requires including the variance that comes from the true error term. To illustrate the difference, imagine that we could get perfect estimates of our $\beta$ coefficients. Then, our estimate of $\text{E}[y \mid x]$ would be perfect. But we still wouldn't be sure what $y$ itself was because there is a true error term that we need to consider. Our confidence "interval" would just be a point because we estimate $\text{E}[y \mid x]$ exactly right, but our prediction interval would be wider because we take the true error term into account. Hence, a prediction interval will be wider than a confidence interval.
Difference between confidence intervals and prediction intervals Your question isn't quite correct. A confidence interval gives a range for $\text{E}[y \mid x]$, as you say. A prediction interval gives a range for $y$ itself. Naturally, our best guess for $y$ is $\
1,365
Difference between confidence intervals and prediction intervals
One is a prediction of a future observation, and the other is a predicted mean response. I will give a more detailed answer to hopefully explain the difference and where it comes from, as well as how this difference manifests itself in wider intervals for prediction than for confidence. This example might illustrate the difference between confidence and prediction intervals: suppose we have a regression model that predicts the price of houses based on number of bedrooms, size, etc. There are two kinds of predictions we can make for a given $x_0$: We can predict the price for a specific new house that comes on the market with characteristics $x_0$ ("what is the predicted price for this house $x_0$?"). Its true price will be $$y = x_0^T\beta+\epsilon.$$ Since $E(\epsilon)=0$, the predicted price will be $$\hat{y} = x_0^T\hat{\beta}.$$ In assessing the variance of this prediction, we need to include our uncertainty about $\hat{\beta}$ as well as the variance of $\epsilon$ (the error of our prediction). This is typically called a prediction of a future value. We can also predict the average price of a house with characteristics $x_0$ ("what would be the average price for a house with characteristics $x_0$?"). The point estimate is still $$\hat{y} = x_0^T\hat{\beta},$$ but now only the variance in $\hat{\beta}$ needs to be accounted for. This is typically called prediction of the mean response. Most times, what we really want is the first case. We know that $$var(x_0^T\hat{\beta}) = x_0^T(X^TX)^{-1}x_0\sigma^2$$ This is the variance for our mean response (case 2). But, for a prediction of a future observation (case 1), recall that we need the variance of $x_0^T\hat{\beta} + \epsilon$; $\epsilon$ has variance $\sigma^2$ and is assumed to be independent of $\hat{\beta}$. Using some simple algebra, this results in the following intervals. Prediction interval for a single future response at $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0 + 1}$$ Confidence interval for the mean response given $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0}$$ where $t_{n-p}^{(\alpha/2)}$ is the upper $\alpha/2$ critical value of a $t$ distribution with $n-p$ degrees of freedom. Hopefully this makes it a bit clearer why the prediction interval is always wider, and what the underlying difference between the two intervals is. This example was adapted from Faraway, Linear Models with R, Sec. 4.1.
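The two formulas above translate almost line for line into code. A sketch with NumPy and SciPy (the data are simulated and all names are mine), computing both intervals at a single $x_0$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 100, 2
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])                 # design matrix
y = 1 + 2 * x + rng.normal(scale=2, size=n)

beta = np.linalg.solve(X.T @ X, X.T @ y)             # OLS estimate
resid = y - X @ beta
sigma_hat = np.sqrt(resid @ resid / (n - p))
t = stats.t.ppf(0.975, df=n - p)

x0 = np.array([1.0, 5.0])                            # point of interest (with intercept)
h = x0 @ np.linalg.solve(X.T @ X, x0)                # x0' (X'X)^{-1} x0
y0 = x0 @ beta

ci = (y0 - t * sigma_hat * np.sqrt(h),     y0 + t * sigma_hat * np.sqrt(h))      # mean response
pi = (y0 - t * sigma_hat * np.sqrt(h + 1), y0 + t * sigma_hat * np.sqrt(h + 1))  # new observation
print(ci, pi)                                        # the prediction interval is wider
```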
Difference between confidence intervals and prediction intervals
One is a prediction of a future observation, and the other is a predicted mean response. I will give a more detailed answer to hopefully explain the difference and where it comes from, as well as how
Difference between confidence intervals and prediction intervals One is a prediction of a future observation, and the other is a predicted mean response. I will give a more detailed answer to hopefully explain the difference and where it comes from, as well as how this difference manifests itself in wider intervals for prediction than for confidence. This example might illustrate the difference between confidence and prediction intervals: suppose we have a regression model that predicts the price of houses based on number of bedrooms, size, etc. There are two kinds of predictions we can make for a given $x_0$: We can predict the price for a specific new house that comes on the market with characteristics $x_0$ ("what is the predicted price for this house $x_0$?"). Its true price will be $$y = x_0^T\beta+\epsilon$$. Since $E(\epsilon)=0$, the predicted price will be $$\hat{y} = x_0^T\hat{\beta}$$ In assessing the variance of this prediction, we need to include our uncertainty about $\hat{\beta}$, as well as our uncertainty about our prediction (the error of our prediction) and so must include the variance of $\epsilon$ (the error of our prediction). This is typically called a prediction of a future value. We can also predict the average price of a house with characteristics $x_0$ ("what would be the average price for a house with characteristics $x_0$?"). The point estimate is still $$\hat{y} = x_0^T\hat{\beta}$$, but now only the variance in $\hat{\beta}$ needs to be accounted for. This is typically called prediction of the mean response. Most times, what we really want is the first case. We know that $$var(x_0^T\hat{\beta}) = x_0^T(X^TX)^{-1}x_0\sigma^2$$ This is the variance for our mean response (case 2). But, for a prediction of a future observation (case 1), recall that we need the variance of $x_0^T\hat{\beta} + \epsilon$; $\epsilon$ has variance $\sigma^2$ and is assumed to be independent of $\hat{\beta}$. Using some simple algebra, this results in the following confidence intervals: CI for a single future response for $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0 + 1}$$ CI for the mean response given $x_0$: $$\hat{y}_0\pm t_{n-p}^{(\alpha/2)}\hat{\sigma}\sqrt{x_0^T(X^TX)^{-1}x_0}$$ Where $t_{n-p}^{\alpha/2}$ is a t-statistic with $n-p$ degrees of freedom at the $\alpha/2$ quantile. Hopefully this makes it a bit clearer why the prediction interval is always wider, and what the underlying difference between the two intervals is. This example was adapted from Faraway, Linear Models with R, Sec. 4.1.
Difference between confidence intervals and prediction intervals One is a prediction of a future observation, and the other is a predicted mean response. I will give a more detailed answer to hopefully explain the difference and where it comes from, as well as how
1,366
Difference between confidence intervals and prediction intervals
The difference between a prediction interval and a confidence interval is the standard error. The standard error for a confidence interval on the mean takes into account the uncertainty due to sampling. The line you computed from your sample will be different from the line that would have been computed if you had the entire population; the standard error takes this uncertainty into account. The standard error for a prediction interval on an individual observation takes into account the uncertainty due to sampling like above, but also takes into account the variability of the individuals around the predicted mean. The standard error for the prediction interval will be larger than for the confidence interval, and hence the prediction interval will be wider than the confidence interval.
Difference between confidence intervals and prediction intervals
The difference between a prediction interval and a confidence interval is the standard error. The standard error for a confidence interval on the mean takes into account the uncertainty due to sampl
Difference between confidence intervals and prediction intervals The difference between a prediction interval and a confidence interval is the standard error. The standard error for a confidence interval on the mean takes into account the uncertainty due to sampling. The line you computed from your sample will be different from the line that would have been computed if you had the entire population, the standard error takes this uncertainty into account. The standard error for a prediction interval on an individual observation takes into account the uncertainty due to sampling like above, but also takes into account the variability of the individuals around the predicted mean. The standard error for the prediction interval will be wider than for the confidence interval and hence the prediction interval will be wider than the confidence interval.
Difference between confidence intervals and prediction intervals The difference between a prediction interval and a confidence interval is the standard error. The standard error for a confidence interval on the mean takes into account the uncertainty due to sampl
1,367
Difference between confidence intervals and prediction intervals
I found the following explanation helpful: Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution. If you do this many times, and calculate a confidence interval of the mean from each sample, you'd expect about 95% of those intervals to include the true value of the population mean. The key point is that the confidence interval tells you about the likely location of the true population parameter. Prediction intervals tell you where you can expect to see the next data point sampled. Assume that the data really are randomly sampled from a Gaussian distribution. Collect a sample of data and calculate a prediction interval. Then sample one more value from the population. If you do this many times, you'd expect that next value to lie within that prediction interval in 95% of the samples. The key point is that the prediction interval tells you about the distribution of values, not the uncertainty in determining the population mean. Prediction intervals must account for both the uncertainty in knowing the value of the population mean, plus data scatter. So a prediction interval is always wider than a confidence interval. Source: http://www.graphpad.com/support/faqid/1506/
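That coverage interpretation of the prediction interval can be checked by simulation. A rough sketch (Gaussian data, nothing model-specific assumed): repeatedly draw a sample, build the 95% prediction interval for one more draw, and count how often the new draw actually lands inside.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, hits = 30, 10000, 0

for _ in range(reps):
    sample = rng.normal(10, 3, n)
    m, s = sample.mean(), sample.std(ddof=1)
    t = stats.t.ppf(0.975, df=n - 1)
    half = t * s * np.sqrt(1 + 1 / n)          # prediction interval for one new value
    new = rng.normal(10, 3)
    hits += (m - half) <= new <= (m + half)

print(hits / reps)                             # close to 0.95
```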
Difference between confidence intervals and prediction intervals
I found the following explanation helpful: Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution.
Difference between confidence intervals and prediction intervals I found the following explanation helpful: Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution. If you do this many times, and calculate a confidence interval of the mean from each sample, you'd expect about 95 % of those intervals to include the true value of the population mean. The key point is that the confidence interval tells you about the likely location of the true population parameter. Prediction intervals tell you where you can expect to see the next data point sampled. Assume that the data really are randomly sampled from a Gaussian distribution. Collect a sample of data and calculate a prediction interval. Then sample one more value from the population. If you do this many times, you'd expect that next value to lie within that prediction interval in 95% of the samples.The key point is that the prediction interval tells you about the distribution of values, not the uncertainty in determining the population mean. Prediction intervals must account for both the uncertainty in knowing the value of the population mean, plus data scatter. So a prediction interval is always wider than a confidence interval. Source: http://www.graphpad.com/support/faqid/1506/
Difference between confidence intervals and prediction intervals I found the following explanation helpful: Confidence intervals tell you about how well you have determined the mean. Assume that the data really are randomly sampled from a Gaussian distribution.
1,368
Difference between confidence intervals and prediction intervals
Short answer: A prediction interval is an interval associated with a random variable yet to be observed (forecasting). A confidence interval is an interval associated with a parameter and is a frequentist concept. Check full answer here from Rob Hyndman, the creator of forecast package in R.
Difference between confidence intervals and prediction intervals
Short answer: A prediction interval is an interval associated with a random variable yet to be observed (forecasting). A confidence interval is an interval associated with a parameter and is a frequen
Difference between confidence intervals and prediction intervals Short answer: A prediction interval is an interval associated with a random variable yet to be observed (forecasting). A confidence interval is an interval associated with a parameter and is a frequentist concept. Check full answer here from Rob Hyndman, the creator of forecast package in R.
Difference between confidence intervals and prediction intervals Short answer: A prediction interval is an interval associated with a random variable yet to be observed (forecasting). A confidence interval is an interval associated with a parameter and is a frequen
1,369
Difference between confidence intervals and prediction intervals
This answer is for those readers who could not fully understand the previous answers. Let's discuss a specific example. Suppose you try to predict the people's weight from their height, sex (male, female) and diet (standard, low carb, vegetarian). Currently, there are more than 8 billion people on Earth. Of course, you can find many thousands of people having the same height and other two parameters but different weight. Their weights differ wildly because some of them have obesity and others may suffer from starvation. Most of those people will be somewhere in the middle. One task is to predict the average weight of all the people having the same values of all three explanatory variables. Here we use the confidence interval. Another problem is to forecast the weight of some specific person. And we don't know the living circumstances of that individual. Here the prediction interval must be used. It is centered around the same point, but it must be much wider than the confidence interval.
Difference between confidence intervals and prediction intervals
This answer is for those readers who could not fully understand the previous answers. Let's discuss a specific example. Suppose you try to predict the people's weight from their height, sex (male, fem
Difference between confidence intervals and prediction intervals This answer is for those readers who could not fully understand the previous answers. Let's discuss a specific example. Suppose you try to predict the people's weight from their height, sex (male, female) and diet (standard, low carb, vegetarian). Currently, there are more than 8 billion people on Earth. Of course, you can find many thousands of people having the same height and other two parameters but different weight. Their weights differ wildly because some of them have obesity and others may suffer from starvation. Most of those people will be somewhere in the middle. One task is to predict the average weight of all the people having the same values of all three explanatory variables. Here we use the confidence interval. Another problem is to forecast the weight of some specific person. And we don't know the living circumstances of that individual. Here the prediction interval must be used. It is centered around the same point, but it must be much wider than the confidence interval.
Difference between confidence intervals and prediction intervals This answer is for those readers who could not fully understand the previous answers. Let's discuss a specific example. Suppose you try to predict the people's weight from their height, sex (male, fem
1,370
What does a "closed-form solution" mean?
"An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally accepted set. For example, an infinite sum would generally not be considered closed-form. However, the choice of what to call closed-form and what not is rather arbitrary since a new "closed-form" function could simply be defined in terms of the infinite sum." --Wolfram Alpha and "In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain "well-known" functions. Typically, these well-known functions are defined to be elementary functions—constants, one variable x, elementary operations of arithmetic (+ − × ÷), nth roots, exponent and logarithm (which thus also include trigonometric functions and inverse trigonometric functions). Often problems are said to be tractable if they can be solved in terms of a closed-form expression." -- Wikipedia An example of a closed form solution in linear regression would be the least square equation $$\hat\beta=(X^TX)^{-1}X^Ty$$
What does a "closed-form solution" mean?
"An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally accepted set. For example, an infinite sum w
What does a "closed-form solution" mean? "An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally accepted set. For example, an infinite sum would generally not be considered closed-form. However, the choice of what to call closed-form and what not is rather arbitrary since a new "closed-form" function could simply be defined in terms of the infinite sum." --Wolfram Alpha and "In mathematics, an expression is said to be a closed-form expression if it can be expressed analytically in terms of a finite number of certain "well-known" functions. Typically, these well-known functions are defined to be elementary functions—constants, one variable x, elementary operations of arithmetic (+ − × ÷), nth roots, exponent and logarithm (which thus also include trigonometric functions and inverse trigonometric functions). Often problems are said to be tractable if they can be solved in terms of a closed-form expression." -- Wikipedia An example of a closed form solution in linear regression would be the least square equation $$\hat\beta=(X^TX)^{-1}X^Ty$$
What does a "closed-form solution" mean? "An equation is said to be a closed-form solution if it solves a given problem in terms of functions and mathematical operations from a given generally accepted set. For example, an infinite sum w
1,371
What does a "closed-form solution" mean?
Most estimation procedures involve finding parameters that minimize (or maximize) some objective function. For example, with OLS, we minimize the sum of squared residuals. With Maximum Likelihood Estimation, we maximize the log-likelihood function. The difference is trivial: minimization can be converted to maximization by using the negative of the objective function. Sometimes this problem can be solved algebraically, producing a closed-form solution. With OLS, you solve the system of first order conditions and get the familiar formula (though you still probably need a computer to evaluate the answer). In other cases, this is not mathematically possible and you need to search for parameter values using a computer. In this case, the computer and the algorithm play a bigger role. Nonlinear Least Squares is one example. You don't get an explicit formula; all you get is a recipe that you need a computer to implement. The recipe might be: start with an initial guess of what the parameters might be and how they might vary. You then try various combinations of parameters and see which one gives you the lowest/highest objective function value. This is the brute force approach and takes a long time. For example, with 5 parameters with 10 possible values each you need to try $10^5$ combinations, and that merely puts you in the neighborhood of the right answer if you're lucky. This approach is called grid search. Or you might start with a guess, and refine that guess in some direction until the improvement in the objective function is less than some value. These are usually called gradient methods (though there are others that do not use the gradient to pick which direction to go in, like genetic algorithms and simulated annealing). Some problems like this guarantee that you find the right answer quickly (quadratic objective functions). Others give no such guarantee. You might worry that you've gotten stuck at a local, rather than a global, optimum, so you try a range of initial guesses. You might find that wildly different parameters give you the same value of the objective function, so you don't know which set to pick. Here's a nice way to get the intuition. Suppose you had a simple exponential regression model where the only regressor is the intercept: \begin{equation} E[y]=\exp\{\alpha\} \end{equation} The objective function is \begin{equation} Q_N(\alpha)=-\frac{1}{2N} \sum_i^N \left( y_i - \exp\{\alpha\} \right)^2 \end{equation} With this simple problem, both approaches are feasible. The closed-form solution that you get by taking the derivative is $\alpha^* = \ln \bar y$. You can also verify that anything else gives you a lower value of the objective function by plugging in $\ln (\bar y + k)$ instead. If you had some regressors, the analytical solution would go out the window.
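The intercept-only exponential example makes the contrast concrete. A sketch (scipy.optimize.minimize stands in for the "search by computer" route; the data are simulated): the iterative optimizer and the closed form $\ln \bar y$ agree.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = np.exp(1.5) + rng.normal(scale=0.3, size=200)    # E[y] = exp(alpha), alpha = 1.5

def objective(alpha):
    # negative of Q_N: minimize the mean squared error of exp(alpha) as a fit to y
    return 0.5 * np.mean((y - np.exp(alpha)) ** 2)

alpha_numeric = minimize(objective, x0=0.0).x[0]     # iterative search
alpha_closed = np.log(y.mean())                      # closed-form solution
print(alpha_numeric, alpha_closed)                   # essentially identical
```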
What does a "closed-form solution" mean?
Most estimation procedures involve finding parameters that minimize (or maximize) some objective function. For example, with OLS, we minimize the sum of squared residuals. With Maximum Likelihood Esti
What does a "closed-form solution" mean? Most estimation procedures involve finding parameters that minimize (or maximize) some objective function. For example, with OLS, we minimize the sum of squared residuals. With Maximum Likelihood Estimation, we maximize the log-likelihood function. The difference is trivial: minimization can be converted to maximization by using the negative of the objective function. Sometimes this problem can be solved algebraically, producing a closed-form solution. With OLS, you solve the system of first order conditions and get the familiar formula (though you still probably need a computer to evaluate the answer). In other cases, this is not mathematically possible and you need to search for parameter values using a computer. In this case, the computer and the algorithm play a bigger role. Nonlinear Least Squares is one example. You don't get an explicit formula; all you get is a recipe that you need to computer to implement. The recipe might be start with an initial guess of what the parameters might be and how they might vary. You then try various combinations of parameters and see which one gives you the lowest/highest objective function value. This is the brute force approach and takes a long time. For example, with 5 parameters with 10 possible values each you need to try $10^5$ combinations, and that merely puts you in the neighborhood of the right answer if you're lucky. This approach is called grid search. Or you might start with a guess, and refine that guess in some direction until the improvements in the objective function is less than some value. These are usually called gradient methods (though there are others that do not use the gradient to pick in which direction to go in, like genetic algorithms and simulated annealing). Some problems like this guarantee that you find the right answer quickly (quadratic objective functions). Others give no such guarantee. You might worry that you've gotten stuck at a local, rather than a global, optimum, so you try a range of initial guesses. You might find that wildly different parameters give you the same value of the objective function, so you don't know which set to pick. Here's a nice way to get the intuition. Suppose you had a simple exponential regression model where the only regressor is the intercept: \begin{equation} E[y]=\exp\{\alpha\} \end{equation} The objective function is \begin{equation} Q_N(\alpha)=-\frac{1}{2N} \sum_i^N \left( y_i - \exp\{\alpha\} \right)^2 \end{equation} With this simple problem, both approaches are feasible. The closed-form solution that you get by taking the derivative is $\alpha^* = \ln \bar y$. You can also verify that anything else gives you a higher value of the objective function by plugging in $\ln (\bar y + k) $ instead. If you had some regressors, the analytical solution goes out the window.
What does a "closed-form solution" mean? Most estimation procedures involve finding parameters that minimize (or maximize) some objective function. For example, with OLS, we minimize the sum of squared residuals. With Maximum Likelihood Esti
1,372
What does a "closed-form solution" mean?
I think that this website provides a simple intuition, an excerpt of which is: A closed-form solution (or closed form expression) is any formula that can be evaluated in a finite number of standard operations. ... A numerical solution is any approximation that can be evaluated in a finite number of standard operations. Closed form solutions and numerical solutions are similar in that they both can be evaluated with a finite number of standard operations. They differ in that a closed-form solution is exact whereas a numerical solution is only approximate.
What does a "closed-form solution" mean?
I think that this website provides a simple intuition, an excerpt of which is: A closed-form solution (or closed form expression) is any formula that can be evaluated in a finite number of standard
What does a "closed-form solution" mean? I think that this website provides a simple intuition, an excerpt of which is: A closed-form solution (or closed form expression) is any formula that can be evaluated in a finite number of standard operations. ... A numerical solution is any approximation that can be evaluated in a finite number of standard operations. Closed form solutions and numerical solutions are similar in that they both can be evaluated with a finite number of standard operations. They differ in that a closed-form solution is exact whereas a numerical solution is only approximate.
What does a "closed-form solution" mean? I think that this website provides a simple intuition, an excerpt of which is: A closed-form solution (or closed form expression) is any formula that can be evaluated in a finite number of standard
1,373
What does a "closed-form solution" mean?
Looking for lay terms or the painful verbiage that rigorously defines the meaning? I'll presume lay terms, as the other can be found everywhere. Let's say you wanted the closed-form solution of the square root of 8. The closed-form solution is $2 \cdot 2^{1/2}$, or two times the square root of two. This is in contrast to the non-closed-form solution 2.8284 (see the Wikipedia article on the square root of 2, which lists the value to dozens of decimal places; 2.8284 is accurate only to within 1/10,000). One is absolutely defined in mathematical terms whereas the other is not. A closed-form solution provides an exact answer, and one that is not closed form is an approximation; but you can get a non-closed-form solution as close to the closed-form solution as you want. It sounds counterintuitive, but if you need it more accurate, just grind out a bit more computation.
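A small sketch of this idea, assuming Newton's method as the numerical routine (which is not mentioned in the original text): the closed form $2\sqrt{2}$ is exact, while the numerical value is an approximation whose accuracy you control by doing more work.

from decimal import Decimal, getcontext

getcontext().prec = 50            # carry 50 significant digits
x = Decimal(3)                    # arbitrary starting guess for sqrt(8)
for _ in range(8):                # more iterations -> a more accurate approximation
    x = (x + Decimal(8) / x) / 2  # Newton's method for solving x^2 = 8
print(x)                          # numerical approximation of 2*sqrt(2)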
What does a "closed-form solution" mean?
Looking for lay terms or the painful verbiage that rigorously defines the meaning? I'll presume lay terms as the other can be found everywhere. Let's say you wanted the closed form solution of the squ
What does a "closed-form solution" mean? Looking for lay terms or the painful verbiage that rigorously defines the meaning? I'll presume lay terms as the other can be found everywhere. Let's say you wanted the closed form solution of the square root of 8. The closed form solution is 2 * (2)^1/2 or two times the square root of two. This is in contrast to the non-closed form solution 2.8284. (see wikipedia square root of 2 to see than at 69 decimal places it is accurate to within 1/10,000) One is absolutely defined in mathematical terms whereas the other is not. A closed form solution provides an exact answer and one that is not closed form is an approximation, but you can get a non closed form solution as close as to a closed form solution as you want. Sounds counter intuitive, but if you need it more accurate, then just grind out a little bit more computations.
What does a "closed-form solution" mean? Looking for lay terms or the painful verbiage that rigorously defines the meaning? I'll presume lay terms as the other can be found everywhere. Let's say you wanted the closed form solution of the squ
1,374
Softmax vs Sigmoid function in Logistic classifier?
The sigmoid function is used for the two-class logistic regression, whereas the softmax function is used for the multiclass logistic regression (a.k.a. MaxEnt, multinomial logistic regression, softmax regression, Maximum Entropy Classifier). In the two-class logistic regression, the predicted probabilities are as follows, using the sigmoid function: $$ \begin{align} \Pr(Y_i=0) &= \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\ \Pr(Y_i=1) &= 1 - \Pr(Y_i=0) = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \end{align} $$ In the multiclass logistic regression, with $K$ classes (labeled $0, \dots, K-1$), the predicted probabilities are as follows, using the softmax function: $$ \begin{align} \Pr(Y_i=k) &= \frac{e^{\boldsymbol\beta_k \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K-1}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} \, \\ \end{align} $$ One can observe that the softmax function is an extension of the sigmoid function to the multiclass case, as explained below. Let's look at the multiclass logistic regression, with $K=2$ classes: $$ \begin{align} \Pr(Y_i=0) &= \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K-1}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i}}{e^{(\boldsymbol\beta_0 - \boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \\ \, \\ \Pr(Y_i=1) &= \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} {~\sum_{0 \leq c \leq K-1}^{}{e^{\boldsymbol\beta_c \cdot \mathbf{X}_i}}} = \frac{e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}}{e^{\boldsymbol\beta_0 \cdot \mathbf{X}_i} + e^{\boldsymbol\beta_1 \cdot \mathbf{X}_i}} = \frac{1}{e^{(\boldsymbol\beta_0-\boldsymbol\beta_1) \cdot \mathbf{X}_i} + 1} = \frac{1} {1 +e^{-\boldsymbol\beta \cdot \mathbf{X}_i}} \, \\ \end{align} $$ with $\boldsymbol\beta = - (\boldsymbol\beta_0 - \boldsymbol\beta_1)$. We see that we obtain the same probabilities as in the two-class logistic regression using the sigmoid function. Wikipedia expands a bit more on that.
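A quick numerical check of this reduction; a sketch with made-up numbers, not taken from the original answer:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())         # subtract the max for numerical stability
    return e / e.sum()

x = np.array([1.0, -2.0, 0.5])      # arbitrary feature vector
b0 = np.array([0.3, 0.1, -0.4])     # arbitrary coefficients for class 0
b1 = np.array([-0.2, 0.5, 0.1])     # arbitrary coefficients for class 1

p_softmax = softmax(np.array([b0 @ x, b1 @ x]))[1]   # Pr(Y=1) from the softmax
beta = -(b0 - b1)                                    # beta = -(beta_0 - beta_1)
p_sigmoid = 1.0 / (1.0 + np.exp(-beta @ x))          # Pr(Y=1) from the sigmoid
print(p_softmax, p_sigmoid)                          # identical up to rounding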
1,375
Softmax vs Sigmoid function in Logistic classifier?
I've noticed people often get directed to this question when searching whether to use sigmoid vs softmax in neural networks. If you are one of those people building a neural network classifier, here is how to decide whether to apply sigmoid or softmax to the raw output values from your network:

If you have a multi-label classification problem = there is more than one "right answer" = the outputs are NOT mutually exclusive, then use a sigmoid function on each raw output independently. The sigmoid will allow you to have high probability for all of your classes, some of them, or none of them. Example: classifying diseases in a chest x-ray image. The image might contain pneumonia, emphysema, and/or cancer, or none of those findings.

If you have a multi-class classification problem = there is only one "right answer" = the outputs are mutually exclusive, then use a softmax function. The softmax will enforce that the sum of the probabilities of your output classes is equal to one, so in order to increase the probability of a particular class, your model must correspondingly decrease the probability of at least one of the other classes. Example: classifying images from the MNIST data set of handwritten digits. A single picture of a digit has only one true identity - the picture cannot be a 7 and an 8 at the same time.

Reference: for a more detailed explanation of when to use sigmoid vs. softmax in neural network design, including example calculations, please see this article: "Classification: Sigmoid vs. Softmax."
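As a hedged Keras sketch of this rule of thumb (layer sizes, names, and optimizer are placeholders, not taken from the original answer):

from tensorflow import keras

n_features, n_classes = 20, 5       # placeholder sizes

# Multi-label: independent sigmoids + binary cross-entropy
multi_label = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    keras.layers.Dense(n_classes, activation="sigmoid"),
])
multi_label.compile(optimizer="adam", loss="binary_crossentropy")

# Multi-class: softmax + categorical cross-entropy (one-hot labels)
multi_class = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    keras.layers.Dense(n_classes, activation="softmax"),
])
multi_class.compile(optimizer="adam", loss="categorical_crossentropy")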
1,376
Softmax vs Sigmoid function in Logistic classifier?
They are, in fact, equivalent, in the sense that one can be transformed into the other. Suppose that your data is represented by a vector $\boldsymbol{x}$, of arbitrary dimension, and you built a binary classifier $P$ for it, using an affine transformation followed by a softmax: \begin{equation} \begin{pmatrix} z_0 \\ z_1 \end{pmatrix} = \begin{pmatrix} \boldsymbol{w}_0^T \\ \boldsymbol{w}_1^T \end{pmatrix}\boldsymbol{x} + \begin{pmatrix} b_0 \\ b_1 \end{pmatrix}, \end{equation} \begin{equation} P(C_i | \boldsymbol{x}) = \text{softmax}(z_i)=\frac{e^{z_i}}{e^{z_0}+e^{z_1}}, \, \, i \in \{0,1\}. \end{equation} Let's transform it into an equivalent binary classifier $P^*$ that uses a sigmoid instead of the softmax. First of all, we have to decide which is the probability that we want the sigmoid to output (which can be for class $C_0$ or $C_1$). This choice is absolutely arbitrary and so I choose class $C_1$. Then, my classifier will be of the form: \begin{equation} z' = \boldsymbol{w}'^T \boldsymbol{x} + b', \end{equation} \begin{equation} P^*(C_1 | \boldsymbol{x}) = \sigma(z')=\frac{1}{1+e^{-z'}}, \end{equation} \begin{equation} P^*(C_0 | \boldsymbol{x}) = 1-\sigma(z'). \end{equation} The classifiers are equivalent if the probabilities are the same for all $\boldsymbol{x}$, so we must impose: \begin{equation} P^*(C_i|\boldsymbol{x})=P(C_i|\boldsymbol{x}) \quad i \in \{0,1\},\; \forall \boldsymbol{x}, \end{equation} or, equivalently, $\sigma(z') = \text{softmax}(z_1)$ for all $\boldsymbol{x}$. Now, replacing $z_0$, $z_1$, and $z'$ by their expressions in terms of $\boldsymbol{w}_0,\boldsymbol{w}_1, \boldsymbol{w}', b_0, b_1, b'$, and $\boldsymbol{x}$ and doing some straightforward algebraic manipulation, you may verify that the equality above holds if and only if $\boldsymbol{w}'$ and $b'$ are given by: \begin{equation} \boldsymbol{w}' = \boldsymbol{w}_1-\boldsymbol{w}_0, \end{equation} \begin{equation} b' = b_1-b_0. \end{equation} This shows that your first classifier $P$ (i.e., the one using the softmax) had more parameters than needed. This is true also for multiclass classification and it poses difficulties to optimization. An effective solution is to set the parameters for one of the classes to a fixed value (e.g., set $\boldsymbol{w}_0 = 0$ and $b_0=0$) and optimize only the remaining parameters.
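A small numerical check of this equivalence; a sketch with arbitrary weights, where all names are illustrative:

import numpy as np

rng = np.random.default_rng(2)
w0, w1 = rng.normal(size=3), rng.normal(size=3)   # arbitrary per-class weight vectors
b0, b1 = 0.7, -0.3                                # arbitrary biases
x = rng.normal(size=3)                            # arbitrary input

z0, z1 = w0 @ x + b0, w1 @ x + b1
p_softmax = np.exp(z1) / (np.exp(z0) + np.exp(z1))           # P(C_1 | x) from the softmax

w_prime, b_prime = w1 - w0, b1 - b0                          # collapsed parameters
p_sigmoid = 1.0 / (1.0 + np.exp(-(w_prime @ x + b_prime)))   # P(C_1 | x) from the sigmoid
print(p_softmax, p_sigmoid)                                  # agree up to rounding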
1,377
Softmax vs Sigmoid function in Logistic classifier?
Adding to all the previous answers - I would like to mention the fact that any multi-class classification problem can be reduced to multiple binary classification problems using the "one-vs-all" method, i.e. having C sigmoids (where C is the number of classes), interpreting each sigmoid as the probability of being in that specific class or not, and taking the max probability. So for example, in the MNIST digits example, you could either use a softmax, or ten sigmoids. In fact this is what Andrew Ng does in his Coursera ML course. You can check out here how Andrew Ng used 10 sigmoids for multiclass classification (adapted from Matlab to python by me), and here is my softmax adaptation in python. Also, it's worth noting that while the functions are equivalent (for the purpose of multiclass classification) they differ a bit in their implementation (especially with regard to their derivatives, and how to represent y). A big advantage of using multiple binary classifications (i.e. sigmoids) over a single multiclass classification (i.e. softmax) is that if your softmax layer is too large (e.g. if you are using a one-hot word embedding with a dictionary size of 10K or more), it can be inefficient to train. What you can do instead is take a small part of your training set and use it to train only a small part of your sigmoids. This is the main idea behind Negative Sampling.
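A hedged scikit-learn sketch of the one-vs-all idea (synthetic placeholder data; the Matlab/Python adaptations mentioned above are not reproduced here):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))            # synthetic features
y = rng.integers(0, 3, size=300)         # 3 classes -> 3 sigmoids under one-vs-rest

# One binary (sigmoid) classifier per class; predict by taking the class with the highest score.
ovr = OneVsRestClassifier(LogisticRegression()).fit(X, y)

# A single multinomial (softmax) model, for comparison.
softmax_clf = LogisticRegression(multi_class="multinomial").fit(X, y)

print(ovr.predict(X[:5]), softmax_clf.predict(X[:5]))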
1,378
What loss function for multi-class, multi-label classification tasks in neural networks?
If you are using keras, just put sigmoids on your output layer and binary_crossentropy on your cost function. If you are using tensorflow, then you can use sigmoid_cross_entropy_with_logits. But in my case this direct loss function was not converging. So I ended up using the explicit sigmoid cross-entropy loss $-\big(y \cdot \ln(\text{sigmoid}(\text{logits})) + (1-y) \cdot \ln(1-\text{sigmoid}(\text{logits}))\big)$. You can make your own, as in this example. Sigmoid, unlike softmax, doesn't give a probability distribution over the $n_{classes}$ outputs, but independent probabilities. If, on average, each row is assigned fewer labels, then you can use softmax_cross_entropy_with_logits because, with this loss, while the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.
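A minimal NumPy sketch of that explicit element-wise sigmoid cross-entropy loss, written with the usual negative sign; the function and variable names are illustrative, not from the original answer:

import numpy as np

def sigmoid_cross_entropy(y_true, logits, eps=1e-7):
    # Element-wise loss: -(y*log(p) + (1-y)*log(1-p)), one term per label
    p = 1.0 / (1.0 + np.exp(-logits))
    p = np.clip(p, eps, 1.0 - eps)      # avoid log(0)
    return -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

y = np.array([[1, 0, 1], [0, 1, 1]], dtype=float)         # multi-label targets
logits = np.array([[2.0, -1.0, 0.5], [0.1, 3.0, -2.0]])
print(sigmoid_cross_entropy(y, logits).mean())            # mean over labels and samples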
1,379
What loss function for multi-class, multi-label classification tasks in neural networks?
UPDATE (18/04/18): The old answer still proved to be useful on my model. The trick is to model the partition function and the distribution separately, thus exploiting the power of softmax. Consider your observation vector $y$ to contain $m$ labels, with $y_{im}=\delta_{im}$ (1 if sample $i$ contains label $m$, 0 otherwise). So the objective would be to model the matrix in a per-sample manner. Hence the model evaluates $F(y_i,x_i)=-\log P(y_i|x_i)$. Consider expanding $y_{im}=Z\cdot P(y_m)$ to achieve two properties:
Distribution function: $\sum_m P(y_m) = 1$
Partition function: $Z$ estimates the number of labels
Then it's a matter of modeling the two separately. The distribution function is best modeled with a softmax layer, and the partition function can be modeled with a linear unit (in practice I clipped it as $\max(0.01,\text{output})$; more sophisticated modeling, like a Poisson unit, would probably work better). Then you can choose to apply a distributed loss (KL on the distribution and MSE on the partition), or you can try the following loss on their product. In practice, the choice of optimiser also makes a huge difference. My experience with the factorisation approach is that it works best under Adadelta (Adagrad doesn't work for me, I didn't try RMSprop yet, and the performance of SGD is sensitive to its parameters). Side comment on sigmoid: I have certainly tried sigmoid + cross-entropy and it did not work out. The model was inclined to predict only $Z$, and failed to capture the variation in the distribution function. (In other words, it's somehow quite useful for modelling the partition, and there may be a mathematical reason behind it.) UPDATE: (Random thought) It seems using a Dirichlet process would allow incorporating a prior on the number of labels? UPDATE: By experiment, the modified KL-divergence is still inclined to give multi-class output rather than multi-label output. (Old answer) My experience with sigmoid cross-entropy was not very pleasant. At the moment I am using a modified KL-divergence. It takes the form $$ \begin{aligned} Loss(P,Q)&=\sum_x{|P(x)-Q(x)| \cdot \left|\log\frac{P(x)}{Q(x)}\right| } \\ &= \sum_x{\left| (P(x)-Q(x)) \cdot \log\frac{P(x)}{Q(x)}\right| } \end{aligned} $$ where $P(x)$ is the target pseudo-distribution and $Q(x)$ is the predicted pseudo-distribution (but the function is actually symmetrical so it does not actually matter). They are called pseudo-distributions for not being normalised. So you can have $\sum_x{P(x)}=2$ if you have 2 labels for a particular sample. Keras implementation:
def abs_KL_div(y_true, y_pred):
    y_true = K.clip(y_true, K.epsilon(), None)
    y_pred = K.clip(y_pred, K.epsilon(), None)
    return K.sum(K.abs((y_true - y_pred) * (K.log(y_true / y_pred))), axis=-1)
1,380
What loss function for multi-class, multi-label classification tasks in neural networks?
I was going through the same problem. After some research, here is my solution. If you are using tensorflow:

Multi-label loss:
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=tf.cast(targets, tf.float32))
loss = tf.reduce_mean(tf.reduce_sum(cross_entropy, axis=1))
prediction = tf.sigmoid(logits)
output = tf.cast(prediction > threshold, tf.int32)
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)

Explanation: For example, if the logits from the model and the labels are:
logits = array([[ 1.4397182 , -0.7993438 ,  4.113389  ,  3.2199187 ,  4.5777845 ],
       [ 0.30619335,  0.10168511,  4.253479  ,  2.3782277 ,  4.7390924 ],
       [ 1.124632  ,  1.6056736 ,  2.9778094 ,  2.0808482 ,  2.0735667 ],
       [ 0.7051575 , -0.10341895,  4.990803  ,  3.7019827 ,  3.8265839 ],
       [ 0.6333333 , -0.76601076,  3.2255085 ,  2.7842572 ,  5.3817415 ]], dtype=float32)
labels = array([[1, 1, 0, 0, 0],
       [0, 1, 0, 0, 1],
       [1, 1, 1, 1, 0],
       [0, 0, 1, 0, 1],
       [1, 1, 1, 1, 1]])

then
cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=tf.cast(labels, tf.float32))
will give you:
[[0.21268466 1.170648   4.129609   3.2590992  4.58801   ]
 [0.85791767 0.64359653 4.2675934  2.466893   0.00870855]
 [0.28124034 0.18294993 0.04965096 0.11762683 2.1920042 ]
 [1.1066352  0.64277405 0.00677719 3.7263577  0.02155003]
 [0.42580318 1.147773   0.03896642 0.059942   0.00458926]]

and
prediction = tf.cast(tf.sigmoid(logits) > 0.5, tf.int32)
will give you:
[[1 0 1 1 1]
 [1 1 1 1 1]
 [1 1 1 1 1]
 [1 0 1 1 1]
 [1 0 1 1 1]]

Now you have predicted labels and true labels, and you can calculate accuracy easily.

For multi-class: the labels must be one-hot encoded.
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=one_hot_y)
loss = tf.reduce_sum(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=self.lr).minimize(loss)
predictions = tf.argmax(logits, axis=1, output_type=tf.int32, name='predictions')
accuracy = tf.reduce_sum(tf.cast(tf.equal(predictions, true_labels), tf.float32))

Another example:
# LOSS AND OPTIMIZER
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-08).minimize(loss, global_step=global_step)
# PREDICTION AND ACCURACY CALCULATION
correct_prediction = tf.equal(y_pred_cls, tf.argmax(y, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
1,381
What loss function for multi-class, multi-label classification tasks in neural networks?
I haven't used keras yet. Taking caffe for example, you can use SigmoidCrossEntropyLossLayer for multi-label problems.
1,382
What loss function for multi-class, multi-label classification tasks in neural networks?
Actually, in tensorflow you can still use sigmoid_cross_entropy_mean as the loss calculation function for multi-label problems; I can confirm it.
1,383
What loss function for multi-class, multi-label classification tasks in neural networks?
I'm a newbie here, but I'll give this question a shot. I was searching for the same thing as you, and finally I found a very good Keras multi-class classification tutorial at http://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/. The author of that tutorial uses the categorical cross-entropy loss function, and there is another thread that may help you find a solution here.
1,384
Intuitive explanation of unit root
He had just come to the bridge; and not looking where he was going, he tripped over something, and the fir-cone jerked out of his paw into the river. "Bother," said Pooh, as it floated slowly under the bridge, and he went back to get another fir-cone which had a rhyme to it. But then he thought that he would just look at the river instead, because it was a peaceful sort of day, so he lay down and looked at it, and it slipped slowly away beneath him . . . and suddenly, there was his fir-cone slipping away too. "That's funny," said Pooh. "I dropped it on the other side," said Pooh, "and it came out on this side! I wonder if it would do it again?" A.A. Milne, The House at Pooh Corner (Chapter VI. In which Pooh invents a new game and eeyore joins in.) Here is a picture of the flow along the surface of the water: The arrows show the direction of flow and are connected by streamlines. A fir cone will tend to follow the streamline in which it falls. But it doesn't always do it the same way each time, even when it's dropped in the same place in the stream: random variations along its path, caused by turbulence in the water, wind, and other whims of nature kick it onto neighboring stream lines. Here, the fir cone was dropped near the upper right corner. It more or less followed the stream lines--which converge and flow away down and to the left--but it took little detours along the way. An "autoregressive process" (AR process) is a sequence of numbers thought to behave like certain flows. The two-dimensional illustration corresponds to a process in which each number is determined by its two preceding values--plus a random "detour." The analogy is made by interpreting each successive pair in the sequence as coordinates of a point in the stream. Instant by instant, the stream's flow changes the fir cone's coordinates in the same mathematical way given by the AR process. We can recover the original process from the flow-based picture by writing the coordinates of each point occupied by the fir cone and then erasing all but the last number in each set of coordinates. Nature--and streams in particular--is richer and more varied than the flows corresponding to AR processes. Because each number in the sequence is assumed to depend in the same fixed way on its predecessors--apart from the random detour part--the flows that illustrate AR processes exhibit limited patterns. They can indeed seem to flow like a stream, as seen here. They can also look like the swirling around a drain. The flows can occur in reverse, seeming to gush outwards from a drain. And they can look like mouths of two streams crashing together: two sources of water flow directly at one another and then split away to the sides. But that's about it. You can't have, say, a flowing stream with eddies off to the sides. AR processes are too simple for that. In this flow, the fir cone was dropped at the lower right corner and quickly carried into the eddy in the upper right, despite the slight random changes in position it underwent. But it will never quite stop moving, due to those same random movements which rescue it from oblivion. The fir cone's coordinates move around a bit--indeed, they are seen to oscillate, on the whole, around the coordinates of the center of the eddy. In the first stream flow, the coordinates progressed inevitably along the center of the stream, which quickly captured the cone and carried it away faster than its random detours could slow it down: they trend in time. 
By contrast, circling around an eddy exemplifies a stationary process in which the fir cone is captured; flowing away down the stream, in which the cone flows out of sight--trending--is non-stationary. Incidentally, when the flow for an AR process moves away downstream, it also accelerates. It gets faster and faster as the cone moves along it. The nature of an AR flow is determined by a few special, "characteristic," directions, which are usually evident in the stream diagram: streamlines seem to converge towards or come from these directions. One can always find as many characteristic directions as there are coefficients in the AR process: two in these illustrations. Associated with each characteristic direction is a number, its "root" or "eigenvalue." When the size of the number is less than unity, the flow in that characteristic direction is towards a central location. When the size of the root is greater than unity, the flow accelerates away from a central location. Movement along a characteristic direction with a unit root--one whose size is $1$--is dominated by the random forces affecting the cone. It is a "random walk." The cone can wander away slowly but without accelerating. (Some of the figures display the values of both roots in their titles.) Even Pooh--a bear of very little brain--would recognize that the stream will capture his fir cone only when all the flow is toward one eddy or whirlpool; otherwise, on one of those random detours the cone will eventually find itself under the influence of that part of the flow with a root greater than $1$ in magnitude, whence it will wander off downstream and be lost forever. Consequently, an AR process can be stationary if and only if all characteristic values are less than unity in size. Economists are perhaps the greatest analysts of time series and employers of the AR process technology. Their series of data typically do not accelerate out of sight. They are concerned, therefore, only whether there is a characteristic direction whose value may be as large as $1$ in size: a "unit root." Knowing whether the data are consistent with such a flow can tell the economist much about the potential fate of his pooh stick: that is, about what will happen in the future. That's why it can be important to test for a unit root. A fine Wikipedia article explains some of the implications. Pooh and his friends found an empirical test of stationarity: Now one day Pooh and Piglet and Rabbit and Roo were all playing Poohsticks together. They had dropped their sticks in when Rabbit said "Go!" and then they had hurried across to the other side of the bridge, and now they were all leaning over the edge, waiting to see whose stick would come out first. But it was a long time coming, because the river was very lazy that day, and hardly seemed to mind if it didn't ever get there at all. "I can see mine!" cried Roo. "No, I can't, it's something else. Can you see yours, Piglet? I thought I could see mine, but I couldn't. There it is! No, it isn't. Can you see yours, Pooh?" "No," said Pooh. "I expect my stick's stuck," said Roo. "Rabbit, my stick's stuck. Is your stick stuck, Piglet?" "They always take longer than you think," said Rabbit. This passage, from 1928, could be construed as the very first "Unit Roo test."
1,385
Intuitive explanation of unit root
Imagine two $AR(1)$ processes:
Process 1: $v_k = 0.5 v_{k-1} + \epsilon_{k-1}$
Process 2: $v_k = v_{k-1} + \epsilon_{k-1}$
where $\epsilon_i$ is drawn from $N(0,1)$. Process 1 has no unit root. Process 2 has a unit root. You can confirm this by calculating the characteristic polynomials per Michael's answer. Imagine we start both processes off at zero, i.e. $v_1 = 0$. Now imagine what happens when we have a "good run" of positive epsilons, and imagine that both processes get to $v_{10} = 5$. What happens next? Where do we expect the sequence to go? We expect the future $\epsilon_{i}$ to be $0$ on average. So we expect Process 1 to have $v_{11} = 2.5$, $v_{12} = 1.25$, $v_{13} = 0.625$, etc. But for Process 2 we expect $v_{11} = 5$, $v_{12} = 5$, $v_{13} = 5$, etc. So one intuition is: when a "run of good/bad luck" pushes a process with a unit root around, the sequence "gets stuck in position" by the historical good or bad luck. It will still shift around randomly, but there's nothing "forcing it back". On the other hand, when there's no unit root and the process doesn't blow up, there's a "force" on the process which will make it drift back to the old position, although the random noise will still knock it around a bit. The "getting stuck" can include undamped oscillations; a simple example is $v_k = -v_{k-1} + \epsilon_{k-1}$. This will bounce back and forth from positive to negative, but the oscillation is not predestined to explode out to infinity or damp down to zero. You can get more forms of "getting stuck" which include more complex kinds of oscillations.
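A short simulation of these two processes makes the "forced back" versus "stuck" behaviour visible (a sketch; the seed and horizon are arbitrary):

import numpy as np

rng = np.random.default_rng(4)
n = 200
eps = rng.normal(size=n)

v1 = np.zeros(n)   # Process 1: v_k = 0.5*v_{k-1} + eps_{k-1}  (no unit root)
v2 = np.zeros(n)   # Process 2: v_k = v_{k-1} + eps_{k-1}      (unit root)
for k in range(1, n):
    v1[k] = 0.5 * v1[k - 1] + eps[k - 1]
    v2[k] = v2[k - 1] + eps[k - 1]

# Process 1 keeps being pulled back towards 0; Process 2 wanders wherever
# its accumulated luck has taken it.
print(v1[-20:].round(2))
print(v2[-20:].round(2))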
1,386
Intuitive explanation of unit root
Consider the first-order autoregressive process $$X_t= aX_{t-1} + e_t$$ where $e_t$ is white noise. The model can also be expressed with all $X$'s on one side as $$X_t-aX_{t-1} = e_t.$$ Using the backshift operator $BX_t = X_{t-1}$ we can re-express the model compactly as $X_t-aBX_t =e_t$ or, equivalently, $$(1-aB)X_t = e_t.$$ The characteristic polynomial is $1-ax$. This has a (unique) root at $x=1/a$. Then for $|a|\lt 1$ we have a stationary $AR(1)$ process and for $|a|\gt 1$ we have an explosive nonstationary $AR(1)$ process. For $a=1$ we have a random walk, which is nonstationary, with a unit root $x=1/1=1$. So the unit roots form the boundary between stationarity and nonstationarity. The $AR(1)$ model (by virtue of its linear characteristic polynomial) is the simplest model with which to illustrate this.
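As a small numerical companion, here is an illustrative sketch (assuming Python with numpy; not part of the original answer) that computes the root of the characteristic polynomial $1 - ax$ for a few values of $a$ and classifies the resulting $AR(1)$ process:

```python
import numpy as np

def classify_ar1(a):
    """Classify X_t = a X_{t-1} + e_t using the root of its characteristic polynomial 1 - a x."""
    root = np.roots([-a, 1.0])[0]  # coefficients of -a*x + 1 in descending powers; the root is 1/a
    if abs(a) < 1:
        kind = "stationary"
    elif abs(a) > 1:
        kind = "explosive (nonstationary)"
    else:
        kind = "unit root: random walk (nonstationary)"
    return root, kind

for a in (0.5, 1.0, 1.5):
    root, kind = classify_ar1(a)
    print(f"a = {a}:  root = {root:.2f}  ->  {kind}")
```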
1,387
Comprehensive list of activation functions in neural networks with pros/cons
I'll start making a list here of the ones I've learned so far. As @marcodena said, pros and cons are more difficult because it's mostly just heuristics learned from trying these things, but I figure at least having a list of what they are can't hurt.

First, I'll define notation explicitly so there is no confusion:

Notation

This notation is from Nielsen's book. A Feedforward Neural Network is many layers of neurons connected together. It takes in an input, then that input "trickles" through the network and the neural network returns an output vector.

More formally, call $a^i_j$ the activation (aka output) of the $j^{th}$ neuron in the $i^{th}$ layer, where $a^1_j$ is the $j^{th}$ element in the input vector. Then we can relate the next layer's input to its previous via the following relation: $$a^i_j = \sigma\bigg(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j\bigg)$$ where $\sigma$ is the activation function, $w^i_{jk}$ is the weight from the $k^{th}$ neuron in the $(i-1)^{th}$ layer to the $j^{th}$ neuron in the $i^{th}$ layer, $b^i_j$ is the bias of the $j^{th}$ neuron in the $i^{th}$ layer, and $a^i_j$ represents the activation value of the $j^{th}$ neuron in the $i^{th}$ layer. Sometimes we write $z^i_j$ to represent $\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j$, in other words, the activation value of a neuron before applying the activation function. For more concise notation we can write $$a^i = \sigma(w^i \times a^{i-1} + b^i)$$ To use this formula to compute the output of a feedforward network for some input $I \in \mathbb{R}^n$, set $a^1 = I$, then compute $a^2, a^3, \ldots, a^m$, where $m$ is the number of layers.

Activation Functions

(in the following, we will write $\exp(x)$ instead of $e^x$ for readability)

Identity. Also known as a linear activation function. $$a^i_j = \sigma(z^i_j) = z^i_j$$

Step. $$a^i_j = \sigma(z^i_j) = \begin{cases} 0 & \text{if } z^i_j < 0 \\ 1 & \text{if } z^i_j > 0 \end{cases}$$

Piecewise Linear. Choose some $x_{\min}$ and $x_{\max}$, which is our "range". Everything less than this range will be 0, and everything greater than this range will be 1. Anything else is linearly interpolated between. Formally: $$a^i_j = \sigma(z^i_j) = \begin{cases} 0 & \text{if } z^i_j < x_{\min} \\ m z^i_j+b & \text{if } x_{\min} \leq z^i_j \leq x_{\max} \\ 1 & \text{if } z^i_j > x_{\max} \end{cases}$$ where $$m = \frac{1}{x_{\max}-x_{\min}}$$ and $$b = -m x_{\min} = 1 - m x_{\max}$$

Sigmoid. $$a^i_j = \sigma(z^i_j) = \frac{1}{1+\exp(-z^i_j)}$$

Complementary log-log. $$a^i_j = \sigma(z^i_j) = 1 - \exp\!\big(-\exp(z^i_j)\big)$$

Bipolar. $$a^i_j = \sigma(z^i_j) = \begin{cases} -1 & \text{if } z^i_j < 0 \\ \ \ \ 1 & \text{if } z^i_j > 0 \end{cases}$$

Bipolar Sigmoid. $$a^i_j = \sigma(z^i_j) = \frac{1-\exp(-z^i_j)}{1+\exp(-z^i_j)}$$

Tanh. $$a^i_j = \sigma(z^i_j) = \tanh(z^i_j)$$

LeCun's Tanh. See Efficient Backprop. $$a^i_j = \sigma(z^i_j) = 1.7159 \tanh\!\left( \frac{2}{3} z^i_j\right)$$

Hard Tanh. $$a^i_j = \sigma(z^i_j) = \max\!\big(-1, \min(1, z^i_j)\big)$$

Absolute. $$a^i_j = \sigma(z^i_j) = \mid z^i_j \mid$$

Rectifier. Also known as Rectified Linear Unit (ReLU), Max, or the Ramp Function. $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)$$

Modifications of ReLU. These are some activation functions that I have been playing with that seem to have very good performance for MNIST for mysterious reasons. $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)+\cos(z^i_j)$$ and $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)+\sin(z^i_j)$$

Smooth Rectifier. Also known as Smooth Rectified Linear Unit, Smooth Max, or Softplus. $$a^i_j = \sigma(z^i_j) = \log\!\big(1+\exp(z^i_j)\big)$$

Logit. $$a^i_j = \sigma(z^i_j) = \log\!\bigg(\frac{z^i_j}{(1 - z^i_j)}\bigg)$$

Probit. $$a^i_j = \sigma(z^i_j) = \sqrt{2}\,\text{erf}^{-1}(2z^i_j-1)$$ where $\text{erf}$ is the Error Function. It can't be described via elementary functions, but you can find ways of approximating its inverse at that Wikipedia page and here. Alternatively, it can be expressed as $$a^i_j = \sigma(z^i_j) = \phi(z^i_j)$$ where $\phi$ is the Cumulative Distribution Function (CDF). See here for means of approximating this.

Cosine. See Random Kitchen Sinks. $$a^i_j = \sigma(z^i_j) = \cos(z^i_j)$$

Softmax. Also known as the Normalized Exponential. $$a^i_j = \frac{\exp(z^i_j)}{\sum\limits_k \exp(z^i_k)}$$ This one is a little weird because the output of a single neuron is dependent on the other neurons in that layer. It also does get difficult to compute, as $z^i_j$ may be a very high value, in which case $\exp(z^i_j)$ will probably overflow. Likewise, if $z^i_j$ is a very low value, it will underflow and become $0$. To combat this, we will instead compute $\log(a^i_j)$. This gives us: $$\log(a^i_j) = \log\left(\frac{\exp(z^i_j)}{\sum\limits_k \exp(z^i_k)}\right)$$ $$\log(a^i_j) = z^i_j - \log(\sum\limits_k \exp(z^i_k))$$ Here we need to use the log-sum-exp trick. Let's say we are computing: $$\log(e^2 + e^9 + e^{11} + e^{-7} + e^{-2} + e^5)$$ We will first sort our exponentials by magnitude for convenience: $$\log(e^{11} + e^9 + e^5 + e^2 + e^{-2} + e^{-7})$$ Then, since $e^{11}$ is our highest, we multiply by $\frac{e^{-11}}{e^{-11}}$: $$\log(\frac{e^{-11}}{e^{-11}}(e^{11} + e^9 + e^5 + e^2 + e^{-2} + e^{-7}))$$ $$\log(\frac{1}{e^{-11}}(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18}))$$ $$\log(e^{11}(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18}))$$ $$\log(e^{11}) + \log(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18})$$ $$ 11 + \log(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18})$$ We can then compute the expression on the right and take the log of it. It's okay to do this because that sum is very small with respect to $\log(e^{11})$, so any underflow to 0 wouldn't have been significant enough to make a difference anyway. Overflow can't happen in the expression on the right because we are guaranteed that after multiplying by $e^{-11}$, all the powers will be $\leq 0$. Formally, we call $m=\max(z^i_1, z^i_2, z^i_3, ...)$. Then: $$\log\!(\sum\limits_k \exp(z^i_k)) = m + \log(\sum\limits_k \exp(z^i_k - m))$$ Our softmax function then becomes: $$a^i_j = \exp(\log(a^i_j))=\exp\!\left( z^i_j - m - \log(\sum\limits_k \exp(z^i_k - m))\right)$$ Also as a sidenote, the derivative of the softmax function with respect to its own input (the diagonal term of its Jacobian) is: $$\frac{d \sigma(z^i_j)}{d z^i_j}=\sigma^{\prime}(z^i_j)= \sigma(z^i_j)(1 - \sigma(z^i_j))$$

Maxout. This one is also a little tricky. Essentially the idea is that we break up each neuron in our maxout layer into lots of sub-neurons, each of which has its own weights and biases. Then the input to a neuron goes to each of its sub-neurons instead, and each sub-neuron simply outputs its $z$ (without applying any activation function). The $a^i_j$ of that neuron is then the max of all its sub-neurons' outputs. Formally, in a single neuron, say we have $n$ sub-neurons. Then $$a^i_j = \max\limits_{k \in [1,n]} s^i_{jk}$$ where $$s^i_{jk} = a^{i-1} \bullet w^i_{jk} + b^i_{jk}$$ ($\bullet$ is the dot product). To help us think about this, consider the weight matrix $W^i$ for the $i^{\text{th}}$ layer of a neural network that is using, say, a sigmoid activation function. $W^i$ is a 2D matrix, where each column $W^i_j$ is a vector for neuron $j$ containing a weight for every neuron in the previous layer $i-1$. If we're going to have sub-neurons, we're going to need a 2D weight matrix for each neuron, since each sub-neuron will need a vector containing a weight for every neuron in the previous layer. This means that $W^i$ is now a 3D weight matrix, where each $W^i_j$ is the 2D weight matrix for a single neuron $j$. And then $W^i_{jk}$ is a vector for sub-neuron $k$ in neuron $j$ that contains a weight for every neuron in the previous layer $i-1$. Likewise, in a neural network that is again using, say, a sigmoid activation function, $b^i$ is a vector with a bias $b^i_j$ for each neuron $j$ in layer $i$. To do this with sub-neurons, we need a 2D bias matrix $b^i$ for each layer $i$, where $b^i_j$ is the vector with a bias $b^i_{jk}$ for each sub-neuron $k$ in the $j^{\text{th}}$ neuron. Having a weight matrix $W^i_j$ and a bias vector $b^i_j$ for each neuron then makes the above expressions very clear: it's simply applying each sub-neuron's weights $w^i_{jk}$ to the outputs $a^{i-1}$ from layer $i-1$, then applying their biases $b^i_{jk}$ and taking the max of them.

Radial Basis Function Networks

Radial Basis Function Networks are a modification of Feedforward Neural Networks, where instead of using $$a^i_j=\sigma\bigg(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j\bigg)$$ we have one weight $w^i_{jk}$ per node $k$ in the previous layer (as normal), and also one mean vector $\mu^i_{jk}$ and one standard deviation vector $\sigma^i_{jk}$ for each node in the previous layer. Then we call our activation function $\rho$ to avoid getting it confused with the standard deviation vectors $\sigma^i_{jk}$. Now to compute $a^i_j$ we first need to compute one $z^i_{jk}$ for each node in the previous layer. One option is to use Euclidean distance: $$z^i_{jk}=\Vert a^{i-1}-\mu^i_{jk}\Vert=\sqrt{\sum\limits_\ell (a^{i-1}_\ell - \mu^i_{jk\ell})^2}$$ where $\mu^i_{jk\ell}$ is the $\ell^\text{th}$ element of $\mu^i_{jk}$. This one does not use the $\sigma^i_{jk}$. Alternatively there is Mahalanobis distance, which supposedly performs better: $$z^i_{jk}=\sqrt{(a^{i-1}-\mu^i_{jk})^T \big(\Sigma^i_{jk}\big)^{-1} (a^{i-1}-\mu^i_{jk})}$$ where $\Sigma^i_{jk}$ is the covariance matrix, defined as: $$\Sigma^i_{jk} = \text{diag}(\sigma^i_{jk})$$ In other words, $\Sigma^i_{jk}$ is the diagonal matrix with $\sigma^i_{jk}$ as its diagonal elements. We define $a^{i-1}$ and $\mu^i_{jk}$ as column vectors here because that is the notation that is normally used. This is really just saying that Mahalanobis distance is defined as $$z^i_{jk}=\sqrt{\sum\limits_\ell \frac{(a^{i-1}_{\ell} - \mu^i_{jk\ell})^2}{\sigma^i_{jk\ell}}}$$ where $\sigma^i_{jk\ell}$ is the $\ell^\text{th}$ element of $\sigma^i_{jk}$. Note that $\sigma^i_{jk\ell}$ must always be positive, but this is a typical requirement for a standard deviation, so this isn't that surprising. If desired, Mahalanobis distance is general enough that the covariance matrix $\Sigma^i_{jk}$ can be defined as other matrices. For example, if the covariance matrix is the identity matrix, our Mahalanobis distance reduces to the Euclidean distance. $\Sigma^i_{jk} = \text{diag}(\sigma^i_{jk})$ is pretty common though, and is known as normalized Euclidean distance. Either way, once our distance function has been chosen, we can compute $a^i_j$ via $$a^i_j=\sum\limits_k w^i_{jk}\rho(z^i_{jk})$$ In these networks, by convention, the weights are applied after the activation function. This describes how to make a multi-layer Radial Basis Function network; however, usually there is only one of these neurons, and its output is the output of the network. It's drawn as multiple neurons because each mean vector $\mu^i_{jk}$ and each standard deviation vector $\sigma^i_{jk}$ of that single neuron is considered one "neuron", and then after all of these outputs there is another layer that takes the sum of those computed values times the weights, just like $a^i_j$ above. Splitting it into two layers with a "summing" vector at the end seems odd to me, but it's what they do. Also see here.

Radial Basis Function Network Activation Functions

Gaussian. $$\rho(z^i_{jk}) = \exp\!\big(-\frac{1}{2} (z^i_{jk})^2\big)$$

Multiquadratic. Choose some point $(x, y)$. Then we compute the distance from $(z^i_{jk}, 0)$ to $(x, y)$: $$\rho(z^i_{jk}) = \sqrt{(z^i_{jk}-x)^2 + y^2}$$ This is from Wikipedia. It isn't bounded, and can be any positive value, though I am wondering if there is a way to normalize it. When $y=0$, this is equivalent to absolute (with a horizontal shift $x$).

Inverse Multiquadratic. Same as multiquadratic, except flipped: $$\rho(z^i_{jk}) = \frac{1}{\sqrt{(z^i_{jk}-x)^2 + y^2}}$$

*Graphics from intmath's Graphs using SVG.
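Several of the functions above are easy to get numerically wrong, the softmax in particular. The following is only an illustrative sketch (assuming Python with numpy, which the original answer does not use) of a few of the listed activations, including a softmax written with the log-sum-exp trick described above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def softplus(z):
    # the "smooth rectifier" log(1 + exp(z)), computed stably for large |z|
    return np.logaddexp(0.0, z)

def softmax(z):
    # log-sum-exp trick: subtract the maximum so no exponential can overflow
    m = z.max()
    log_sum = m + np.log(np.exp(z - m).sum())
    return np.exp(z - log_sum)

z = np.array([2.0, 9.0, 11.0, -7.0, -2.0, 5.0])  # the worked example from the answer
print("softmax:", softmax(z).round(4), "sums to", round(float(softmax(z).sum()), 6))
print("relu:", relu(z))
print("sigmoid:", sigmoid(z).round(3))
print("softplus:", softplus(z).round(3))
```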
1,388
Comprehensive list of activation functions in neural networks with pros/cons
One such list, though not very exhaustive: http://cs231n.github.io/neural-networks-1/ Commonly used activation functions Every activation function (or non-linearity) takes a single number and performs a certain fixed mathematical operation on it. There are several activation functions you may encounter in practice: Left: Sigmoid non-linearity squashes real numbers to range between [0,1]. Right: The tanh non-linearity squashes real numbers to range between [-1,1]. Sigmoid. The sigmoid non-linearity has the mathematical form $\sigma(x) = 1 / (1 + e^{-x})$ and is shown in the image above on the left. As alluded to in the previous section, it takes a real-valued number and "squashes" it into range between 0 and 1. In particular, large negative numbers become 0 and large positive numbers become 1. The sigmoid function has seen frequent use historically since it has a nice interpretation as the firing rate of a neuron: from not firing at all (0) to fully-saturated firing at an assumed maximum frequency (1). In practice, the sigmoid non-linearity has recently fallen out of favor and it is rarely ever used. It has two major drawbacks: Sigmoids saturate and kill gradients. A very undesirable property of the sigmoid neuron is that when the neuron's activation saturates at either tail of 0 or 1, the gradient at these regions is almost zero. Recall that during backpropagation, this (local) gradient will be multiplied to the gradient of this gate's output for the whole objective. Therefore, if the local gradient is very small, it will effectively "kill" the gradient and almost no signal will flow through the neuron to its weights and recursively to its data. Additionally, one must pay extra caution when initializing the weights of sigmoid neurons to prevent saturation. For example, if the initial weights are too large then most neurons would become saturated and the network will barely learn. Sigmoid outputs are not zero-centered. This is undesirable since neurons in later layers of processing in a Neural Network (more on this soon) would be receiving data that is not zero-centered. This has implications on the dynamics during gradient descent, because if the data coming into a neuron is always positive (e.g. $x > 0$ elementwise in $f = w^Tx + b$), then the gradient on the weights $w$ will, during backpropagation, become either all positive or all negative (depending on the gradient of the whole expression $f$). This could introduce undesirable zig-zagging dynamics in the gradient updates for the weights. However, notice that once these gradients are added up across a batch of data the final update for the weights can have variable signs, somewhat mitigating this issue. Therefore, this is an inconvenience but it has less severe consequences compared to the saturated activation problem above. Tanh. The tanh non-linearity is shown on the image above on the right. It squashes a real-valued number to the range [-1, 1]. Like the sigmoid neuron, its activations saturate, but unlike the sigmoid neuron its output is zero-centered. Therefore, in practice the tanh non-linearity is always preferred to the sigmoid nonlinearity. Also note that the tanh neuron is simply a scaled sigmoid neuron, in particular the following holds: $ \tanh(x) = 2 \sigma(2x) -1 $. Left: Rectified Linear Unit (ReLU) activation function, which is zero when $x < 0$ and then linear with slope 1 when $x > 0$. Right: A plot from Krizhevsky et al. 
(pdf) paper indicating the 6x improvement in convergence with the ReLU unit compared to the tanh unit. ReLU. The Rectified Linear Unit has become very popular in the last few years. It computes the function $f(x) = \max(0, x)$. In other words, the activation is simply thresholded at zero (see image above on the left). There are several pros and cons to using the ReLUs: (+) It was found to greatly accelerate (e.g. a factor of 6 in Krizhevsky et al.) the convergence of stochastic gradient descent compared to the sigmoid/tanh functions. It is argued that this is due to its linear, non-saturating form. (+) Compared to tanh/sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero. (-) Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be "dead" (i.e. neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue. Leaky ReLU. Leaky ReLUs are one attempt to fix the "dying ReLU" problem. Instead of the function being zero when x < 0, a leaky ReLU will instead have a small negative slope (of 0.01, or so). That is, the function computes $f(x) = \mathbb{1}(x < 0) (\alpha x) + \mathbb{1}(x>=0) (x) $ where $\alpha$ is a small constant. Some people report success with this form of activation function, but the results are not always consistent. The slope in the negative region can also be made into a parameter of each neuron, as seen in PReLU neurons, introduced in Delving Deep into Rectifiers, by Kaiming He et al., 2015. However, the consistency of the benefit across tasks is presently unclear. Maxout. Other types of units have been proposed that do not have the functional form $f(w^Tx + b)$ where a non-linearity is applied on the dot product between the weights and the data. One relatively popular choice is the Maxout neuron (introduced recently by Goodfellow et al.) that generalizes the ReLU and its leaky version. The Maxout neuron computes the function $\max(w_1^Tx+b_1, w_2^Tx + b_2)$. Notice that both ReLU and Leaky ReLU are a special case of this form (for example, for ReLU we have $w_1, b_1 = 0$). The Maxout neuron therefore enjoys all the benefits of a ReLU unit (linear regime of operation, no saturation) and does not have its drawbacks (dying ReLU). However, unlike the ReLU neurons it doubles the number of parameters for every single neuron, leading to a high total number of parameters. This concludes our discussion of the most common types of neurons and their activation functions. As a last comment, it is very rare to mix and match different types of neurons in the same network, even though there is no fundamental problem with doing so. TLDR: "What neuron type should I use?" Use the ReLU non-linearity, be careful with your learning rates and possibly monitor the fraction of "dead" units in a network. If this concerns you, give Leaky ReLU or Maxout a try. Never use sigmoid. 
Try tanh, but expect it to work worse than ReLU/Maxout. ... License: The MIT License (MIT) Copyright (c) 2015 Andrej Karpathy Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.* Other links: tanh activation function vs sigmoid activation function
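The closing advice about monitoring the fraction of "dead" ReLU units is easy to operationalize. Here is only an illustrative sketch (assuming Python with numpy; the function names are made up for the example and do not come from the quoted notes):

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    # small negative slope instead of a hard zero for z < 0, so some gradient survives
    return np.where(z >= 0, z, alpha * z)

def dead_relu_fraction(pre_activations):
    """Fraction of units whose pre-activation is never positive on the given data.

    `pre_activations` has shape (n_examples, n_units); under a plain ReLU such a
    unit never fires, i.e. it is "dead" on this dataset.
    """
    never_fires = (pre_activations <= 0).all(axis=0)
    return float(never_fires.mean())

rng = np.random.default_rng(0)
unit_bias = rng.normal(loc=-1.0, scale=2.0, size=256)   # a few units are pushed far negative
z = unit_bias + rng.normal(size=(1000, 256))            # simulated pre-activations
print("dead ReLU fraction:", dead_relu_fraction(z))
print("leaky ReLU on [-2, 0.5]:", leaky_relu(np.array([-2.0, 0.5])))
```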
1,389
Comprehensive list of activation functions in neural networks with pros/cons
I don't think that a list with pros and cons exists. The activation functions are highly application dependent, and they also depend on the architecture of your neural network (here, for example, you see the application of two softmax functions, which are similar to the sigmoid one). You can find some studies about the general behaviour of the functions, but I think you will never have a defined and definitive list (what you ask...). I'm still a student, so I'll point out what I know so far: here you can find some thoughts about the behaviours of tanh and sigmoids with backpropagation. Tanh is more generic, but sigmoids... (there will always be a "but"). In Deep Sparse Rectifier Neural Networks by Xavier Glorot et al., they state that rectifier units are more biologically plausible and that they perform better than the others (sigmoid/tanh).
1,390
Comprehensive list of activation functions in neural networks with pros/cons
Just for the sake of completeness on Danielle's great answer, there are other paradigms where one randomly 'spins the wheel' on the weights and/or the type of activations: liquid state machines, extreme learning machines and echo state networks. One way to think about these architectures: the reservoir is a sort of kernel, as in SVMs, or one large hidden layer in a simple FFNN where the data is projected into some hyperspace. There is no actual learning in the reservoir itself; it is re-generated until a satisfactory solution is reached. Also see this nice answer.
1,391
Comprehensive list of activation functions in neural networks with pros/cons
An article reviewing recent activation functions can be found in "Activation Functions: Comparison of Trends in Practice and Research for Deep Learning" by Chigozie Enyinna Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall: Deep neural networks have been successfully used in diverse emerging domains to solve real world complex problems, with many more deep learning (DL) architectures being developed to date. To achieve these state-of-the-art performances, the DL architectures use activation functions (AFs) to perform diverse computations between the hidden layers and the output layers of any given DL architecture. This paper presents a survey on the existing AFs used in deep learning applications and highlights the recent trends in the use of the activation functions for deep learning applications. The novelty of this paper is that it compiles the majority of the AFs used in DL and outlines the current trends in the applications and usage of these functions in practical deep learning deployments against the state-of-the-art research results. This compilation will aid in making effective decisions in the choice of the most suitable and appropriate activation function for any given application, ready for deployment. This paper is timely because most research papers on AFs highlight similar works and results while this paper will be the first to compile the trends in AF applications in practice against the research results from literature, found in deep learning research to date.
1,392
Why use gradient descent for linear regression, when a closed-form math solution is available?
The main reason why gradient descent is used for linear regression is the computational complexity: it's computationally cheaper (faster) to find the solution using gradient descent in some cases. The formula which you wrote looks very simple, even computationally, because it only works for the univariate case, i.e. when you have only one variable. In the multivariate case, when you have many variables, the formula is slightly more complicated on paper and requires many more calculations when you implement it in software: $$\beta=(X'X)^{-1}X'Y$$ Here, you need to calculate the matrix $X'X$ then invert it (see note below). It's an expensive calculation. For your reference, the (design) matrix X has K+1 columns where K is the number of predictors and N rows of observations. In a machine learning algorithm you can end up with K>1000 and N>1,000,000. The $X'X$ matrix itself takes a little while to calculate, then you have to invert a $K\times K$ matrix - this is expensive. So gradient descent allows you to save a lot of time on calculations. Moreover, the way it's done allows for trivial parallelization, i.e. distributing the calculations across multiple processors or machines. The linear algebra solution can also be parallelized but it's more complicated and still expensive. Additionally, there are versions of gradient descent in which you keep only a piece of your data in memory, lowering the requirements for computer memory. Overall, for extra large problems it's more efficient than the linear algebra solution. This becomes even more important as the dimensionality increases, when you have thousands of variables like in machine learning.

Remark. I was surprised by how much attention is given to gradient descent in Ng's lectures. He spends a nontrivial amount of time talking about it, maybe 20% of the entire course. To me it's just an implementation detail, it's how exactly you find the optimum. The key is in formulating the optimization problem, and how exactly you find it is nonessential. I wouldn't worry about it too much. Leave it to computer science people, and focus on what's important to you as a statistician. Having said this I must qualify by saying that it is indeed important to understand the computational complexity and numerical stability of the solution algorithms. I still don't think you must know the details of implementation and code of the algorithms. It's usually not the best use of your time as a statistician.

Note 1. I wrote that you have to invert the matrix for didactic purposes; it's not how you usually solve the equation. In practice, the linear algebra problems are solved by using some kind of factorization such as QR, where you don't directly invert the matrix but do some other mathematically equivalent manipulations to get an answer. You do this because matrix inversion is an expensive and numerically unstable operation in many cases. This brings up another little advantage of the gradient descent algorithm as a side effect: it works even when the design matrix has collinearity issues. The usual linear algebra path would blow up, whereas gradient descent will keep going even for collinear predictors.
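To make the comparison concrete, here is only an illustrative sketch (assuming Python with numpy; not part of the original answer) that solves the same least-squares problem twice: once with a factorization-based solver, and once with plain batch gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 20
X = np.c_[np.ones(n), rng.normal(size=(n, k))]   # design matrix with an intercept column
beta_true = rng.normal(size=k + 1)
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# "Closed form", done properly: an SVD/QR-based solver rather than an explicit inverse.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

# Batch gradient descent on the mean squared error.
beta_gd = np.zeros(k + 1)
learning_rate = 0.1
for _ in range(2000):
    gradient = (2.0 / n) * X.T @ (X @ beta_gd - y)
    beta_gd -= learning_rate * gradient

print("max |lstsq - gradient descent|:", np.abs(beta_lstsq - beta_gd).max())
```

On a small, well-conditioned problem like this the two answers agree to many decimal places; the point of the answer above is that as K and N grow, the factorization becomes the expensive part while each gradient step stays cheap and easy to distribute.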
1,393
Why use gradient descent for linear regression, when a closed-form math solution is available?
In short, suppose we want to solve the linear regression problem with squared loss $$\text{minimize}~ \|Ax-b\|^2$$ We can set the derivative $2A^T(Ax-b)$ to $0$, which gives the linear system $$A^TAx=A^Tb$$ At a high level, there are two ways to solve a linear system: direct methods and iterative methods. Note that the direct method solves $A^TAx=A^Tb$, while gradient descent (one example of an iterative method) directly minimizes $\|Ax-b\|^2$. Compared to direct methods (say, QR / LU decomposition), iterative methods have some advantages when we have a large amount of data or the data is very sparse. Suppose our data matrix $A$ is huge and cannot fit in memory; then stochastic gradient descent can be used. I have an answer that explains why: How could stochastic gradient descent save time compared to standard gradient descent? For sparse data, check the great book Iterative Methods for Sparse Linear Systems. On the other hand, I believe one of the reasons Andrew Ng emphasizes it is because it is a generic method (the most widely used method in machine learning) and can be used in other models such as logistic regression or neural networks.
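As a rough sketch of the out-of-memory point (my own illustration; the chunk size, learning-rate schedule, and epoch count are arbitrary assumptions), minibatch stochastic gradient descent only ever needs one chunk of rows at a time:

```python
import numpy as np

def sgd_least_squares(make_chunks, n_features, epochs=20, lr0=0.1):
    """Minibatch SGD for minimize ||Ax - b||^2, reading the data chunk by chunk."""
    x = np.zeros(n_features)
    t = 0
    for _ in range(epochs):
        for A_chunk, b_chunk in make_chunks():          # each chunk fits in memory
            t += 1
            lr = lr0 / np.sqrt(t)                       # simple decaying step size
            grad = 2.0 / len(b_chunk) * A_chunk.T @ (A_chunk @ x - b_chunk)
            x -= lr * grad
    return x

# Toy usage: pretend the rows arrive in chunks of 100.
rng = np.random.default_rng(1)
A = rng.normal(size=(2_000, 10))
b = A @ np.arange(1.0, 11.0) + 0.1 * rng.normal(size=2_000)

def make_chunks():
    for i in range(0, len(b), 100):
        yield A[i:i + 100], b[i:i + 100]

print(np.round(sgd_least_squares(make_chunks, n_features=10), 2))  # roughly 1..10
```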
1,394
Why use gradient descent for linear regression, when a closed-form math solution is available?
Sycorax is correct that you don't need gradient descent when estimating linear regression. Your course might be using a simple example to teach gradient descent before moving on to more complicated versions. One neat thing I want to add, though, is that there is currently a small research niche on stopping gradient descent early (early stopping) to prevent a model from overfitting.
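As a purely schematic illustration of that idea (my own toy code, not the research the answer alludes to; the sizes, learning rate, and patience are arbitrary), one can run gradient descent on a training set and stop once a held-out validation error stops improving:

```python
import numpy as np

# Few observations, many predictors, so gradient descent will eventually overfit.
rng = np.random.default_rng(2)
N, K = 60, 50
X = rng.normal(size=(N, K))
beta_true = np.zeros(K); beta_true[:5] = 1.0   # only a few real signals
y = X @ beta_true + rng.normal(size=N)
X_tr, y_tr, X_va, y_va = X[:40], y[:40], X[40:], y[40:]

beta = np.zeros(K)
lr, patience = 0.005, 20
best_err, best_beta, since_best = np.inf, beta.copy(), 0
for step in range(10_000):
    beta -= lr * 2.0 / len(y_tr) * X_tr.T @ (X_tr @ beta - y_tr)
    val_err = np.mean((X_va @ beta - y_va) ** 2)
    if val_err < best_err:
        best_err, best_beta, since_best = val_err, beta.copy(), 0
    else:
        since_best += 1
        if since_best >= patience:             # stop before the fit degrades further
            break
print(step, best_err)
```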
1,395
Why use gradient descent for linear regression, when a closed-form math solution is available?
If I am not wrong, I think you are pointing towards the MOOC offered by Prof Andrew Ng. To find the optimal regression coefficients, roughly two methods are available. One is the normal equations, i.e. simply computing $(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$, and the second is minimizing the least-squares criterion, which is derived from the hypothesis you have cited. By the way, the first method, the normal equations, is itself a product of the second, the optimization method. The method you have mentioned, i.e. using the correlation, applies only to one predictor plus an intercept; just notice the form (the short check below illustrates this). So what is the way out when there is more than one predictor? One has to resort to the other methods, i.e. the normal equations or optimization. Now, why optimization (here, gradient descent) when the direct normal equation is available? Notice that in the normal equation one has to invert a matrix: inverting an $n\times n$ matrix costs $\mathcal{O}(n^3)$, and here $\mathbf{X}^T\mathbf{X}$ is $(K+1)\times(K+1)$, where $K$ is the number of predictors, while forming $\mathbf{X}^T\mathbf{X}$ in the first place costs $\mathcal{O}(NK^2)$, where $N$ is the number of observations. Moreover, if $\mathbf{X}$ is ill-conditioned, this creates numerical errors in the estimates. Gradient-descent-style optimization algorithms can save us from this type of problem. Another issue is overfitting and underfitting in the estimation of the regression coefficients. My suggestion to you is: don't aim merely at solving a problem; try to understand the theory. Prof Ng is one of the best professors in the world who kindly teaches machine learning in a MOOC, so when he instructs in this way, he must have his reasons. I hope you will not mind my words. All the best.
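As a quick check of the claim that the correlation/standard-deviation formula is just the one-predictor special case of the matrix solution, here is a small sketch of my own on synthetic data (nothing here comes from the course):

```python
import numpy as np

# One predictor plus intercept: the correlation formula and the matrix
# least-squares solution give the same coefficients.
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 2.0 + 3.0 * x + rng.normal(size=500)

r = np.corrcoef(x, y)[0, 1]
slope = r * y.std() / x.std()                  # correlation-based formula
intercept = y.mean() - slope * x.mean()

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # general (matrix) solution
print((intercept, slope), beta)                # the two agree
```

With two or more predictors, the simple pairwise-correlation formula no longer yields the coefficients, which is why one falls back on the normal equations or an optimizer.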
1,396
Why use gradient descent for linear regression, when a closed-form math solution is available?
First, yes, the real reason is the one given by Tim Atreides: this is a pedagogical exercise. However, it is possible, albeit unlikely, that one would want to do a linear regression on, say, several trillion data points being streamed in from a network socket. In this case, naive evaluation of the analytic solution would be infeasible, while some variants of stochastic/adaptive gradient descent would converge to the correct solution with minimal memory overhead. (One could, for linear regression, reformulate the analytic solution as a recurrence system, but this is not a general technique.)
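The parenthetical recurrence remark can be made concrete with a small sketch (mine, with a simulated stream standing in for the network socket): the closed-form solution only depends on the running sums $X^TX$ and $X^Ty$, which can be accumulated chunk by chunk in constant memory.

```python
import numpy as np

def fit_streaming(chunk_iter, n_features):
    """Fold a stream of (X_chunk, y_chunk) pairs into X'X and X'y, then solve once."""
    XtX = np.zeros((n_features, n_features))
    Xty = np.zeros(n_features)
    for X_chunk, y_chunk in chunk_iter:
        XtX += X_chunk.T @ X_chunk             # update sufficient statistics
        Xty += X_chunk.T @ y_chunk
    return np.linalg.solve(XtX, Xty)           # solve once, at the end

rng = np.random.default_rng(4)
def stream(n_chunks=100, rows=1_000, k=5):
    for _ in range(n_chunks):
        X = rng.normal(size=(rows, k))
        y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(size=rows)
        yield X, y

print(fit_streaming(stream(), n_features=5))   # close to [1, -2, 0.5, 3, 0]
```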
1,397
Why use gradient descent for linear regression, when a closed-form math solution is available?
One other reason is that gradient descent is a more general method. For many machine learning problems, the cost function is not convex (e.g., matrix factorization, neural networks), so you cannot use a closed-form solution. In those cases, gradient descent is used to find a good local optimum. And if you want to implement an online version, then again you have to use a gradient-descent-based algorithm.
1,398
Why use gradient descent for linear regression, when a closed-form math solution is available?
In fact, you can solve your linear regression problem by different methods: the normal equations (the way you mentioned), a QR/SVD decomposition, or an iterative method that minimizes the error directly (which is what gradient descent does). Note that the other methods give you the exact solution (up to round-off error), whereas GD is iterative, so you should be careful in choosing the step size for it to converge to the correct solution (a short illustration follows below). The advantage of an iterative method is that if your system is really large, you will get a good approximation to your solution much faster.
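Here is a small illustration of the step-size caution (my own sketch with arbitrary sizes): for the quadratic loss $\|Ax-b\|^2$, plain gradient descent converges only when the step is below roughly $1/\lambda_{\max}(A^TA)$, and diverges just above that threshold.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

lam_max = np.linalg.eigvalsh(A.T @ A).max()

def run_gd(step, iters=100):
    x = np.zeros(10)
    for _ in range(iters):
        x -= step * 2.0 * A.T @ (A @ x - b)    # gradient of ||Ax - b||^2
    return np.linalg.norm(A @ x - b)

print(run_gd(0.9 / lam_max))                   # converges to a small residual
print(run_gd(1.1 / lam_max))                   # residual blows up
```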
1,399
tanh activation function vs sigmoid activation function
Yes, it matters, for technical reasons: basically, for optimization. It is worth reading Efficient Backprop by LeCun et al. There are two reasons for that choice (assuming you have normalized your data, and this is very important): Having stronger gradients: since the data are centered around 0, the derivatives are higher. To see this, calculate the derivative of the tanh function and notice that its range (output values) is (0, 1], whereas the derivative of the sigmoid never exceeds 0.25. The range of the tanh function is [-1, 1] and that of the sigmoid function is [0, 1]. Avoiding bias in the gradients. This is explained very well in the paper, and it is worth reading it to understand these issues.
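A quick numeric check of the derivative claim (my own snippet, not from the paper):

```python
import numpy as np

z = np.linspace(-5.0, 5.0, 1001)
sig = 1.0 / (1.0 + np.exp(-z))
print(np.max(1.0 - np.tanh(z) ** 2))   # ~1.0:  tanh'(z) = 1 - tanh(z)^2
print(np.max(sig * (1.0 - sig)))       # ~0.25: sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
```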
1,400
tanh activation function vs sigmoid activation function
Thanks a lot @jpmuc! Inspired by your answer, I calculated and plotted the derivative of the tanh function and the standard sigmoid function separately. I'd like to share it with you all. Here is what I got. For the derivative of the tanh function: for inputs in [-1, 1], the derivative lies in [0.42, 1]. For the derivative of the standard sigmoid function f(x) = 1/(1+exp(-x)): for inputs in [0, 1], the derivative lies in [0.20, 0.25]. Apparently the tanh function provides stronger gradients.
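For readers who want to reproduce the two plots described above, here is a rough sketch (my own code, not the original poster's):

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-3.0, 3.0, 601)
tanh_grad = 1.0 - np.tanh(z) ** 2                  # derivative of tanh
sig = 1.0 / (1.0 + np.exp(-z))
sig_grad = sig * (1.0 - sig)                       # derivative of the sigmoid

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].plot(z, tanh_grad)
axes[0].set_title("derivative of tanh")
axes[1].plot(z, sig_grad)
axes[1].set_title("derivative of sigmoid")
plt.tight_layout()
plt.show()
```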