1,701
US Election results 2016: What went wrong with prediction models?
"First it was Brexit, now the US election." Not really a first; e.g., the French presidential election of 2002 "led to serious discussions about polling techniques". So it's not far-fetched to say these models didn't do a very good job. Garbage in, garbage out. "I saw one explanation was voters were unwilling to identify themselves as Trump supporters. How could a model incorporate effects like that?" See response bias, and in particular social desirability bias. Other interesting reads: silent majority and the Bradley effect.
1,702
US Election results 2016: What went wrong with prediction models?
The USC/LA Times poll had accurate numbers: they predicted Trump to be in the lead. See "The USC/L.A. Times poll saw what other surveys missed: A wave of Trump support": http://www.latimes.com/politics/la-na-pol-usc-latimes-poll-20161108-story.html They had accurate numbers for 2012 as well; you may want to review their dashboard: http://graphics.latimes.com/usc-presidential-poll-dashboard/ The NY Times complained about their weighting: http://www.nytimes.com/2016/10/13/upshot/how-one-19-year-old-illinois-man-is-distorting-national-polling-averages.html And here is the LA Times' response: http://www.latimes.com/politics/la-na-pol-daybreak-poll-questions-20161013-snap-story.html
1,703
US Election results 2016: What went wrong with prediction models?
No high ground claimed here. I work in a field (Monitoring and Evaluation) that is as rife with pseudo-science as any other social science you could name. But here's the deal: the polling industry is supposedly in 'crisis' today because it got the US election predictions so wrong; social science in general has a replicability 'crisis'; and back in the late 2000s we had a world financial 'crisis' because some practitioners believed that sub-prime mortgage derivatives were a valid form of financial data (if we give them the benefit of the doubt...). And we all just blunder on regardless.

Every day I see the most questionable of researcher constructs used as data collection approaches, and therefore eventually used as data (everything from quasi-ordinal scales to utterly leading fixed response categories). Very few researchers even seem to realize they need a conceptual framework for such constructs before they can hope to understand their results. It is as if we have looked at market 'research' approaches and decided to adopt only the worst of their mistakes, with the addition of a little numerology on the side. We want to be considered 'scientists', but the rigor is all a bit too hard to be bothered with, so we collect rubbish data and pray to the Loki-like god of statistics to magically override the GIGO axiom. But as the heavily quoted Mr Feynman points out: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." There are better ways to handle the qualitative data we are often stuck with, but they take a bit more work, and those nice researcher constructs are often far easier to feed into SPSS. Convenience seems to trump science every time (no pun intended). In short, if we do not start to get serious about raw data quality, I think we are just wasting everyone's time and money, including our own.

So does anyone want to collaborate on a 'data quality initiative' in relation to social science methods? (Yes, there is plenty in the textbooks about such things, but no one seems to pay attention to that source after their exams.) Whoever has the most academic gravitas gets to be the lead! (It won't be me.) Just to be clear about my answer here: I see serious fundamental issues with 'contrived' raw data types so often that I would like to suggest a need to start at the beginning. So even before we worry about sampling or which tests to run on the data, we need to look at the validity/limitations of the data types we collect in relation to the models we are proposing. Otherwise the overall predictive model is incompletely defined.
1,704
US Election results 2016: What went wrong with prediction models?
Polls tend to have an error margin of about 5% that you can't really get rid of, because it's not a random error but a bias. Even if you average across many polls, it does not get much better. This has to do with misrepresented voter groups, lack of mobilization, inability to get to the polls on a workday, unwillingness to answer, unwillingness to answer truthfully, spontaneous last-minute decisions, ... Because this bias tends to be "correlated" across polls, you can't get rid of it with more polls; you also can't get rid of it with larger sample sizes; and you don't appear to be able to predict this bias either, because it changes too fast (and we elect presidents too rarely). Due to the stupid winner-takes-all principle still present in almost all states, an error of 5% can cause very different results: assume the polls always predicted 49-51 but the real result was 51-49 (an error of just 2%); the outcome is then 100% off, because of winner-takes-all. If you look at individual states, most results are within the predicted error margins! Probably the best you can do is sample this bias (±5%), apply the winner-takes-all extremes, then aggregate the outcomes. This is probably similar to what 538 did; and in 30% of their samples Donald Trump won...
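The bias-sampling procedure sketched in the last sentence can be illustrated with a toy Monte Carlo. All numbers here are invented for illustration (two made-up states, a 2-point polling deficit, a 2.5-point bias spread); this is not 538's actual model:

```python
import random

# Two states, both polling 49-51 against candidate X, with ONE shared
# bias term drawn per simulation so the error is correlated across
# states rather than averaging out.
poll_margins = {"A": -2.0, "B": -2.0}   # X trails by 2 points in each state
electoral_votes = {"A": 20, "B": 16}

def win_probability(n_sims=100_000, bias_sd=2.5, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        bias = rng.gauss(0.0, bias_sd)          # same draw for every state
        ev = sum(electoral_votes[s]
                 for s, margin in poll_margins.items()
                 if margin + bias > 0)          # winner-takes-all per state
        if ev > sum(electoral_votes.values()) / 2:
            wins += 1
    return wins / n_sims

print(win_probability())
```

Because the bias is shared, the trailing candidate flips both states together whenever the single draw exceeds 2 points, so a "clear" polling lead still leaves the underdog a sizable chance of winning; with independent per-state errors the same margins would look far safer.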
1,705
US Election results 2016: What went wrong with prediction models?
The reliance on data analysis had a huge impact on strategic campaign decisions, journalistic coverage, and ultimately on individual choices. What could possibly go wrong when the Clinton campaign's decisions were informed by none other than 400,000 daily simulations of the secret Ada algorithm? In the end, it exposed a colossal failure of numerical analysis to make up for a lack of knowledge of the subject matter. People were ashamed to explicitly embrace the winning candidate, for obvious reasons. The worst computer model could have gotten closer to the outcome if anybody had bothered to conduct a preliminary poll face to face, knocking on doors. Here is an example: the Trafalgar Group (no affiliation or knowledge other than what follows) had Trump leading in PA, FL, MI, GA, UT and NV (this last state ultimately went blue) one day prior to the election. What was the magic? "A combination of survey respondents to both a standard ballot test and a ballot test guaging [sic] where respondent's neighbors stand. This addresses the underlying bias of traditional polling, wherein respondents are not wholly truthful about their position regarding highly controversial candidates." Pretty low-tech, including the lack of spell-check, showing in numbers a lot about human nature. Here is the discrepancy in PA [figure omitted]: historic Pennsylvania, so far from being perceived as the final straw in the Democratic defeat just hours prior to this closing realization at 1:40 am on November 9, 2016 [figure omitted].
1,706
US Election results 2016: What went wrong with prediction models?
One of the reasons for poll inaccuracy in the US election, besides some people not telling the truth for whatever reason, is that the winner-takes-all effect makes predictions even harder. A 1% difference in one state can flip the entire state and influence the whole outcome very heavily. Hillary had more votes, just like Al Gore vs. Bush. The Brexit referendum was not a normal election and therefore was also harder to predict (no good historical data, and everyone was like a first-time voter on the matter). People who vote for the same party for decades stabilize predictions.
1,707
US Election results 2016: What went wrong with prediction models?
(Just answering this bit, as the other answers seem to have covered everything else.) "As late as 4 pm PST yesterday, the betting markets were still favoring Hillary 4 to 1. I take it that the betting markets, with real money on the line, should act as an ensemble of all the available prediction models out there." No... but indirectly yes. The betting markets are designed so the bookies make a profit whatever happens. E.g., say the current odds quoted were 1-4 on Hillary and 3-1 on Trump. If the next ten people all bet $10 on Hillary, then that $100 taken in is going to cost the bookies $25 if Hillary wins. So they shorten Hillary to 1-5 and lengthen Trump to 4-1. More people now bet on Trump, and balance is restored. I.e., the odds are based purely on how people bet, not on the pundits or the prediction models. But, of course, the customers of the bookies are looking at those polls and listening to those pundits. They hear that Hillary is 3% ahead, a dead cert to win, and decide a quick way to make $10 is to bet $40 on her. Indirectly, the pundits and polls are moving the odds. (Some people also notice that all their friends at work are going to vote Trump, so they bet on him; others notice that all their Facebook friends' posts are pro-Hillary, so they bet on her; so there is a bit of reality influencing them, in that way.)
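The arithmetic of the odds in this answer can be made explicit. A minimal sketch using the fractional odds quoted in the text (the 5% "overround" figure below follows from these particular odds, not from any real bookmaker):

```python
# Fractional odds "win-stake": 1-4 on means stake 4 to win 1;
# 3-1 against means stake 1 to win 3.

def implied_prob(win, stake):
    """Probability implied by fractional odds of win-stake."""
    return stake / (win + stake)

hillary = implied_prob(1, 4)        # 1-4 on  -> 0.80
trump = implied_prob(3, 1)          # 3-1     -> 0.25
overround = hillary + trump         # 1.05: the book's built-in margin

# The text's example: ten $10 bets at 1-4 cost the book $25 in
# winnings if she wins (each $10 stake returns $2.50 profit).
liability = 10 * 10 * (1 / 4)

print(hillary, trump, overround, liability)
```

The implied probabilities summing to more than 1 is exactly how the bookie profits whatever happens, and why lopsided betting volume, not any forecast, forces the odds to move.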
1,708
US Election results 2016: What went wrong with prediction models?
It is not surprising that these efforts failed when you consider the disparity between what information the models have access to and what information drives behavior at the polling booth. I'm speculating, but the models probably take into account:

- a variety of pre-election polling results
- historical state leanings (blue/red)
- historical results of prior elections with current state leanings/projections

But pre-election polls are unreliable (we've seen constant failures in the past), states can flip, and there haven't been enough election cycles in our history to account for the multitude of situations that can, and do, arise. Another complication is the confluence of the popular vote with the electoral college. As we saw in this election, the popular vote can be extremely close within a state, but once the state is won, all of its electoral votes go to one candidate, which is why the map has so much red.
1,709
US Election results 2016: What went wrong with prediction models?
The polling models didn't consider how many Libertarians might switch from Johnson to Trump when it came to actual voting. The states that were won by a thin margin were won based on what percentage of the vote Johnson got. PA (which pushed Trump past 270 on election night) gave only 2% to Johnson. NH (which went to Clinton) gave 4%+ to Johnson. Johnson was polling at 4-5% the day before the election, and he got roughly 3% on election day. So why did Libertarians switch all of a sudden on election day? No one considered what the central issue was for Libertarian voters. They tend to view literal interpretation of the Constitution as canon. Most people who voted for Clinton did not think her dismissiveness of the law was a high enough priority to consider; certainly not higher than everything they didn't like about Trump. Regardless of whether her legal troubles were important to others, they would be important to Libertarians, who would put a very high priority on keeping out of office someone who viewed legal compliance as optional, at best. So, for a large number of them, keeping Clinton out of office became a higher priority than making a statement that Libertarian philosophy is a viable political philosophy. Many of them may not even have liked Trump, but if they thought he would be more respectful of the rule of law than Clinton, pragmatism would have won over principle and caused them to switch their vote when it came time to actually vote.
1,710
US Election results 2016: What went wrong with prediction models?
Polls are not historical trends. A Bayesian would inquire as to the historical trends. Since Abraham Lincoln, the presidency has been held by either the Republican or the Democratic party. The party in power has changed 16 times since then; the data (from Wikipedia) give a cumulative mass function with the time in years until a change of presidential party on the x-axis [figure omitted]. After 8 years of a party in power, the probability that the voters vote for a change is 68.75%, odds of just over 2 to 1. Moreover, since the 1860 election, Republicans have held the presidency 59% of the time versus 41% for Democrats. What made journalists, the Democratic party, and the pollsters think that the odds were in favor of the liberals winning was perhaps wishful thinking. Behavior may be predictable within limits, but in this case the Democrats were wishing that people would not vote for a change, while from a historical perspective a change was more likely than not.
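The quoted figures are easy to check. Note the count of 11 changes within 8 years is inferred here from 68.75% of 16 transitions; it is an assumption, not a number stated in the answer:

```python
# Back out the answer's arithmetic: 68.75% of the 16 recorded party
# changes is 11, and a probability of 0.6875 corresponds to odds of
# 0.6875 / 0.3125 = 2.2, i.e. "just over 2 to 1".
changes_within_8_years = 11     # inferred: 0.6875 * 16 (assumption)
total_changes = 16

p_change = changes_within_8_years / total_changes
odds = p_change / (1 - p_change)

print(p_change, odds)  # 0.6875 2.2
```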
1,711
US Election results 2016: What went wrong with prediction models?
I think poll results were extrapolated under the assumption that voter demographics would be similar to poll-respondent demographics and would be a good representation of the whole population. For example, if 7 out of 10 minority respondents supported Hillary in the polls, and if that minority represents 30% of the US population, the majority of polls assumed that 30% of voters would come from that minority and translated that into the 21% gain for Hillary. In reality, white, middle-to-upper-class males were better represented among the voters. Less than 50% of eligible people voted, and this didn't translate into 50% of all genders, races, etc. Either the polls assumed perfect randomization and based their models on that, while in reality the voter data was biased toward older middle-to-upper-class males; or the polls didn't exactly assume perfect randomization, but their extrapolation parameters underestimated the heterogeneity of voter demographics. ETA: Polls for the previous two elections performed better because of increased attention to voting by groups that aren't usually well represented.
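The weighting error described above can be sketched numerically. All shares below are invented for illustration; the point is only that weighting a group at its population share rather than its turnout share shifts the projected result:

```python
# Toy demographic-weighting example (invented numbers): a poll weights
# a 70%-pro-Clinton group at its population share, but the group turns
# out at a lower rate, so the projection overstates her support.

def projected_support(group_support, group_weights):
    """Support aggregated over demographic groups by their weights."""
    return sum(s * w for s, w in zip(group_support, group_weights))

support = [0.70, 0.45]          # minority group, everyone else
assumed_share = [0.30, 0.70]    # population shares used by the poll
actual_share = [0.22, 0.78]     # hypothetical electorate after turnout

polled = projected_support(support, assumed_share)   # 0.525
actual = projected_support(support, actual_share)    # 0.505

print(polled, actual)
```

A two-point gap like this is smaller than typical polling error margins, which is part of why turnout misestimates are so hard to detect before the vote.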
1,712
US Election results 2016: What went wrong with prediction models?
HoraceT and CliffAB (sorry, too long for comments): I'm afraid I have a lifetime of examples, which have also taught me that I need to be very careful with their explanation if I wish to avoid offending people. So while I don't want your indulgence, I do ask for your patience. Here goes:

To start with an extreme example, I once saw a proposed survey question that asked illiterate village farmers (South East Asia) to estimate their 'economic rate of return'. Leaving the response options aside for now, we can hopefully all see that this is a stupid thing to do, but consistently explaining why it is stupid is not so easy. Yes, we can simply say that it is stupid because the respondent will not understand the question, and just dismiss it as a semantic issue. But this is really not good enough in a research context. The fact that this question was ever suggested implies that researchers have inherent variability in what they consider 'stupid'. To address this more objectively, we must step back and transparently declare a relevant framework for decision making about such things. There are many such options, and I will use one that I sometimes find useful - but have no intent of defending here (I actively encourage anyone to think of others, as it means you are already starting down the road to better conceptualizations).

So, let's transparently assume that we have two basic information types we can use in analyses: qualitative and quantitative. And that the two are related by a transformative process, such that all quantitative information started out as qualitative information but went through the following (oversimplified) steps:

1. Convention setting (e.g. we all decide that, regardless of how we individually perceive it, we will all call the colour of a daytime open sky "blue").
2. Classification (e.g. we assess everything in a room by this convention and separate all items into 'blue' or 'not blue' categories).
3. Count (we count/detect the 'quantity' of blue things in the room).

Note that (under this model) without step 1 there is no such thing as a quality, and if you don't start with step 1, you can never generate a meaningful quantity. Once stated, this all looks very obvious, but it is such sets of first principles that (I find) are most commonly overlooked and therefore result in 'Garbage-In'.

So the 'stupidity' in the example above becomes very clearly definable as a failure to set a common convention between the researcher and the respondents. Of course this is an extreme example, but much more subtle mistakes can be equally garbage-generating. Another example I have seen is a survey of farmers in rural Somalia that asked "How has climate change affected your livelihood?" Again putting response options aside for the moment, I would suggest that even asking this of farmers in the Mid-West of the United States would constitute a serious failure to use a common convention between researcher and respondent (i.e. as to what is being measured as 'climate change').

Now let's move on to response options. By allowing respondents to self-code responses from a set of multiple-choice options or a similar construct, you are pushing this 'convention' issue into this aspect of questioning as well. This may be fine if we all stick to effectively 'universal' conventions in response categories (e.g. question: what town do you live in? response categories: a list of all towns in the research area, plus 'not in this area'). However, many researchers actually seem to take pride in the subtle nuancing of their questions and response categories to meet their needs.

In the same survey in which the 'rate of economic return' question appeared, the researcher also asked the respondents (poor villagers) which economic sector they contributed to, with response categories of 'production', 'service', 'manufacturing' and 'marketing'. Again a qualitative convention issue obviously arises here. However, because he made the responses mutually exclusive, such that respondents could only choose one option (because "it is easier to feed into SPSS that way"), and village farmers routinely produce crops, sell their labour, manufacture handicrafts and take everything to local markets themselves, this particular researcher did not just have a convention issue with his respondents - he had one with reality itself. This is why old bores like myself will always recommend the more work-intensive approach of applying coding to data post-collection, as at least you can adequately train coders in researcher-held conventions (and note that trying to impart such conventions to respondents in 'survey instructions' is a mug's game - just trust me on this one for now).

Note also that if you accept the above 'information model' (which, again, I am not claiming you have to), it also shows why quasi-ordinal response scales have a bad reputation. It is not just the basic maths issues under the Stevens convention (i.e. you need to define a meaningful origin even for ordinals, you can't add and average them, etc.); it is also that they have often never been through any transparently declared and logically consistent transformative process that would amount to 'quantification' (i.e. an extended version of the model used above that also encompasses generation of 'ordinal quantities' - this is not hard to do). Anyway, if it does not satisfy the requirements of being either qualitative or quantitative information, then the researcher is actually claiming to have discovered a new type of information outside the framework, and the onus is therefore on them to explain its fundamental conceptual basis fully (i.e. transparently define a new framework).

Finally, let's look at sampling issues (and I think this aligns with some of the other answers already here). For example, if a researcher wants to apply a convention of what constitutes a 'liberal' voter, they need to be sure that the demographic information they use to choose their sampling regime is consistent with this convention. This level is usually the easiest to identify and deal with, as it is largely within researcher control and is most often the type of assumed qualitative convention that is transparently declared in research. This is also why it is the level usually discussed or critiqued, while the more fundamental issues go unaddressed. So while pollers stick to questions like 'who do you plan to vote for at this point in time?', we are probably still ok, but many of them want to get much 'fancier' than this…
1,713
How does the correlation coefficient differ from regression slope?
Assuming you're talking about a simple regression model $$Y_i = \alpha + \beta X_i + \varepsilon_i$$ estimated by least squares, we know from Wikipedia that $$ \hat {\beta} = {\rm cor}(Y_i, X_i) \cdot \frac{ {\rm SD}(Y_i) }{ {\rm SD}(X_i) } $$ Therefore the two only coincide when ${\rm SD}(Y_i) = {\rm SD}(X_i)$. That is, they only coincide when the two variables are on the same scale, in some sense. The most common way of achieving this is through standardization, as indicated by @gung.

The two, in some sense, give you the same information - they each tell you the strength of the linear relationship between $X_i$ and $Y_i$. But they do each give you distinct information (except, of course, when they are exactly the same):

The correlation gives you a bounded measurement that can be interpreted independently of the scale of the two variables. The closer the estimated correlation is to $\pm 1$, the closer the two are to a perfect linear relationship. The regression slope, in isolation, does not tell you that piece of information.

The regression slope gives a useful quantity interpreted as the estimated change in the expected value of $Y_i$ for a given value of $X_i$. Specifically, $\hat \beta$ tells you the change in the expected value of $Y_i$ corresponding to a 1-unit increase in $X_i$. This information cannot be deduced from the correlation coefficient alone.
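The identity $\hat\beta = {\rm cor}(Y,X) \cdot {\rm SD}(Y)/{\rm SD}(X)$ is easy to check numerically; a minimal sketch with made-up data and plain least squares via numpy:

```python
import numpy as np

# Made-up data purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares slope of y on x.
slope = np.polyfit(x, y, 1)[0]

# Pearson correlation times the ratio of standard deviations.
r = np.corrcoef(x, y)[0, 1]
identity = r * y.std() / x.std()

print(np.isclose(slope, identity))  # True
```

Note the ratio ${\rm SD}(Y)/{\rm SD}(X)$ is insensitive to the choice of `ddof`, so population or sample standard deviations give the same result here.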
1,714
How does the correlation coefficient differ from regression slope?
With simple linear regression (i.e., only 1 covariate), the slope $\beta_1$ is the same as Pearson's $r$ if both variables were standardized first. (For more information, you might find my answer here helpful.) When you are doing multiple regression, this can be more complicated due to multicollinearity, etc.
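A quick numerical illustration of that equivalence (made-up data; z-scoring both variables before fitting):

```python
import numpy as np

# Made-up data purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.5, 3.1, 4.2, 6.3, 7.4])

def zscore(v):
    # Standardize: subtract the mean, divide by the standard deviation.
    return (v - v.mean()) / v.std()

# Slope of the simple regression after standardizing both variables...
std_slope = np.polyfit(zscore(x), zscore(y), 1)[0]
# ...equals Pearson's r computed on the raw variables.
r = np.corrcoef(x, y)[0, 1]

print(np.isclose(std_slope, r))  # True
```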
1,715
How does the correlation coefficient differ from regression slope?
The correlation coefficient measures the "tightness" of the linear relationship between two variables and is bounded between -1 and 1, inclusive. Correlations close to zero represent no linear association between the variables, whereas correlations close to -1 or +1 indicate a strong linear relationship. Intuitively, the easier it is for you to draw a line of best fit through a scatterplot, the more correlated the variables are.

The regression slope measures the "steepness" of the linear relationship between two variables and can take any value from $-\infty$ to $+\infty$. Slopes near zero mean that the response (Y) variable changes slowly as the predictor (X) variable changes. Slopes that are further from zero (either in the negative or positive direction) mean the response changes more rapidly as the predictor changes. Intuitively, if you were to draw a line of best fit through a scatterplot, the steeper it is, the further your slope is from zero.

So the correlation coefficient and regression slope MUST have the same sign (+ or -), but will not generally have the same value. For simplicity, this answer assumes simple linear regression.
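The tightness/steepness distinction can be seen numerically. A minimal sketch with made-up data, where two datasets share the same least-squares slope but differ in correlation:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])

# Dataset A: exactly on the line y = 2x (perfectly "tight").
y_exact = 2 * x

# Dataset B: same underlying line, plus symmetric deviations chosen so the
# least-squares slope is unchanged but the fit is looser.
y_noisy = 2 * x + np.array([1.0, -1.0, -1.0, 1.0])

for y in (y_exact, y_noisy):
    slope = np.polyfit(x, y, 1)[0]
    r = np.corrcoef(x, y)[0, 1]
    print(round(slope, 3), round(r, 3))

# Both slopes are 2.0 (same steepness), but r drops from 1.0 to about 0.913.
```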
1,716
How does the correlation coefficient differ from regression slope?
Pearson's correlation coefficient is dimensionless and scaled between -1 and 1 regardless of the dimension and scale of the input variables. If (for example) you input a mass in grams or kilograms, it makes no difference to the value of $r$, whereas this will make a tremendous difference to the gradient/slope (which has dimension and is scaled accordingly; likewise, it would make no difference to $r$ if the scale is adjusted in any way, including using pounds or tons instead). A simple demonstration (apologies for using Python!):

import numpy as np
x = [10, 20, 30, 40]
y = [3, 5, 10, 11]
print(np.corrcoef(x, y)[0][1])  # 0.969363...
x = [1, 2, 3, 4]
print(np.corrcoef(x, y)[0][1])  # unchanged

shows that $r = 0.969363$ even though the slope has been increased by a factor of 10. I must confess it's a neat trick that $r$ comes to be scaled between -1 and 1 (one of those cases where the numerator can never have absolute value greater than the denominator). As @Macro has detailed above, slope $b = r(\frac{\sigma_{y}}{\sigma_{x}})$, so you are correct in intuiting that Pearson's $r$ is related to the slope, but only when adjusted according to the standard deviations (which effectively restores the dimensions and scales!).

At first I thought it odd that the formula seems to suggest a loosely fitted line (low $r$) results in a lower gradient; then I plotted an example and realised that given a gradient, varying the "looseness" results in $r$ decreasing, but this is offset by a proportional increase in $\sigma_{y}$. I plotted four $x,y$ datasets:

1. the results of $y=3x$ (so gradient $b=3$, $r=1$, $\sigma_{x}=2.89$, $\sigma_{y}=8.66$); note that $\frac{\sigma_{y}}{\sigma_{x}}=3$
2. the same but varied by a random number, with $r = 0.2447$, $\sigma_{x}=2.89$, $\sigma_{y}=34.69$, from which we can compute $b = 2.94$
3. $y=15x$ (so $b=15$ and $r=1$, $\sigma_{x}=0.58$, $\sigma_{y}=8.66$)
4. the same as (2) but with reduced range of $x$, so $b = 14.70$ (and still $r = 0.2447$, $\sigma_{x}=0.58$, $\sigma_{y}=34.69$)

It can be seen that variance affects $r$ without necessarily affecting $b$, and units of measure can affect scale and thus $b$ without affecting $r$.
1,717
What is the benefit of breaking up a continuous predictor variable?
You're right on both counts. See Frank Harrell's page here for a long list of problems with binning continuous variables. If you use a few bins you throw away a lot of information in the predictors; if you use many you tend to fit wiggles in what should be a smooth, if not linear, relationship, & use up a lot of degrees of freedom. Generally it's better to use polynomials ($x + x^2 + \ldots$) or splines (piecewise polynomials that join smoothly) for the predictors. Binning's really only a good idea when you'd expect a discontinuity in the response at the cut-points - say the temperature something boils at, or the legal age for driving - & when the response is flat between them.

The value? Well, it's a quick & easy way to take curvature into account without having to think about it, & the model may well be good enough for what you're using it for. It tends to work all right when you've lots of data compared to the number of predictors, & each predictor is split into plenty of categories; in this case within each predictor band the range of response is small & the average response is precisely determined.

[Edit in response to comments: Sometimes there are standard cut-offs used within a field for a continuous variable: e.g. in medicine blood pressure measurements may be categorized as low, medium or high. There may be many good reasons for using such cut-offs when you present or apply a model. In particular, decision rules are often based on less information than goes into a model, & may need to be simple to apply. But it doesn't follow that these cut-offs are appropriate for binning the predictors when you fit the model.

Suppose some response varies continuously with blood pressure. If you define a high-blood-pressure group as a predictor in your study, the effect you're estimating is the average response over the particular blood pressures of the individuals in that group. It's not an estimate of the average response of people with high blood pressure in the general population, or of people in the high-blood-pressure group in another study, unless you take specific measures to make it so. If the distribution of blood pressure in the general population is known, as I imagine it is, you'll do better to calculate the average response of people with high blood pressure in the general population based on predictions from the model with blood pressure as a continuous variable. Crude binning makes your model only approximately generalizable. In general, if you have questions about the behaviour of the response between cut-offs, fit the best model you can first, & then use it to answer them.]

[With regard to presentation, I think this is a red herring: (1) Ease of presentation doesn't justify bad modelling decisions. (And in the cases where binning is a good modelling decision, it doesn't need additional justification.) Surely this is self-evident. No-one ever recommends taking an important interaction out of a model because it's hard to present. (2) Whatever kind of model you fit, you can still present its results in terms of categories if you think it will aid interpretation. Though (3) you have to be careful to make sure it doesn't aid mis-interpretation, for the reasons given above. (4) It's not in fact difficult to present non-linear responses. Personal opinion, clearly, & audiences differ; but I've never seen a graph of fitted response values versus predictor values puzzle someone just because it's curved. Interactions, logits, random effects, multicollinearity - these are all much harder to explain.]

[An additional point brought up by @Roland is the exactness of the measurement of the predictors; he's suggesting, I think, that categorization may be appropriate when they're not especially precise. Common sense might suggest that you don't improve matters by re-stating them even less precisely, & common sense would be right: MacCallum et al. (2002), "On the Practice of Dichotomization of Quantitative Variables", Psychological Methods, 7(1), 19–40.]
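The information loss from binning can be sketched numerically. A minimal illustration with made-up, perfectly linear data (numpy only), comparing a two-bin fit against keeping the predictor continuous:

```python
import numpy as np

# Made-up, perfectly linear data: y = 2x.
x = np.arange(10.0)
y = 2 * x

# Model 1: dichotomize the predictor at its median and predict the bin mean.
high = x >= np.median(x)
binned_pred = np.where(high, y[high].mean(), y[~high].mean())

# Model 2: keep the predictor continuous (simple least-squares line).
b, a = np.polyfit(x, y, 1)
linear_pred = a + b * x

print(((y - binned_pred) ** 2).sum())            # 80.0 -- information thrown away
print(round(((y - linear_pred) ** 2).sum(), 6))  # 0.0  -- exact fit
```

Even on data with a perfectly linear, continuous relationship, the binned model leaves a large residual sum of squares, because within each band it can only predict the band average.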
1,718
What is the benefit of breaking up a continuous predictor variable?
Part of what I've learned since asking is that not binning and binning seek to answer two slightly different questions - What is the incremental change in the data? and What is the difference between the lowest and the highest? Not binning says "this is a quantification of the trend seen in the data", and binning says "I don't have enough information to say how much this changes by each increment, but I can say that the top is different from the bottom".
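The two questions can be put side by side on simulated data (the linear trend and noise level below are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = 1.5 * x + rng.normal(0, 2, x.size)     # made-up linear trend

# "What is the incremental change?" -- the slope of a continuous fit
slope = np.polyfit(x, y, 1)[0]             # roughly 1.5 units of y per unit of x

# "Is the top different from the bottom?" -- compare the extreme terciles
lo, hi = np.quantile(x, [1 / 3, 2 / 3])
top_minus_bottom = y[x > hi].mean() - y[x < lo].mean()
```

The slope answers the per-unit question; the tercile contrast only certifies that the top differs from the bottom, without saying anything about the shape in between.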
1,719
What is the benefit of breaking up a continuous predictor variable?
As previous posters have mentioned, it is generally best to avoid dichotomizing a continuous variable. However, in answer to your question, there are instances where dichotomizing a continuous variable does confer advantages. For instance, a given variable may contain missing values for a significant proportion of the population yet be known to be highly predictive, with the missing values themselves bearing predictive value. For example, in a credit scoring model, consider a variable, let's say average-revolving-credit-balance (which, granted, is not technically continuous, but in this case mirrors a normal distribution closely enough to be treated as such), which contains missing values for about 20% of the applicant pool in a given target market. In this case, the missing values for this variable represent a distinct class--those who don't have an open revolving-credit line; these customers will display entirely different behavior from, say, those with available revolving credit lines who regularly carry no balance. If instead these missing values were discarded or imputed, it could restrict the model's predictive ability. Another benefit of dichotomization: it can be used to mitigate the effects of significant outliers that skew coefficients but represent realistic cases that need to be handled. If the outliers don't differ greatly in outcome from other values in the nearest percentiles, but skew the parameters enough to affect marginal accuracy, then it may be beneficial to group them with values displaying similar effects. Sometimes a distribution naturally lends itself to a set of classes, in which case dichotomization will actually give you a higher degree of accuracy than a continuous function. Also, as previously mentioned, depending on the audience, the ease of presentation can outweigh the losses to accuracy.
To use credit scoring again as an example, in practice the high degree of regulation does make a practical case for discretizing at times. While a higher degree of accuracy could help the lender cut losses, practitioners must also consider that models need to be easily understood by regulators (who may request thousands of pages of model documentation) and by consumers, who, if denied credit, are legally entitled to an explanation of why. It all depends on the problem at hand and the data, but there are certainly cases where dichotomization has its merits.
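The missing-as-its-own-class idea can be sketched as follows. The variable name, the gamma distribution of balances, and the 20% missingness rate are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
balance = rng.gamma(2.0, 1500.0, n)          # made-up revolving-balance amounts
balance[rng.random(n) < 0.2] = np.nan        # ~20% have no open revolving line

# quartile-bin the observed balances, keeping "missing" as its own class
edges = np.nanquantile(balance, [0.25, 0.50, 0.75])
code = np.where(np.isnan(balance), 4, np.digitize(balance, edges))
# codes 0-3: balance quartiles; code 4: the distinct "no revolving line" group
```

Discarding or imputing the NaNs would erase exactly the group (code 4) whose distinct behavior the answer argues carries predictive value.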
1,720
What is the benefit of breaking up a continuous predictor variable?
As a clinician, I think the answer depends on what you want to do. If you want to make the best fit or the best adjustment, you can use continuous and squared variables. If you want to describe and communicate complicated associations to a non-statistically oriented audience, the use of categorised variables is better, accepting that you may give some slightly biased results in the last decimal. I prefer to use at least three categories to show nonlinear associations. The alternative is to produce graphs and predicted results at certain points. Then you may need to produce a family of graphs for each continuous covariate that may be interesting. If you are afraid of introducing too much bias, you can test both models and see whether the difference is important or not. You need to be practical and realistic. We may realise that in many clinical situations our calculations are not based on exact data anyway; when I, for instance, prescribe a medicine to an adult, I do not do so with exact mg per kilo (the parable of the choice between surgery and medical treatment is just nonsense).
1,721
What is the benefit of breaking up a continuous predictor variable?
Many times binning continuous variables comes with an uneasy feeling of causing damage due to information loss. However, not only can you bound the information loss, you can gain information and get other advantages. If you bin and obtain categorised variables, you might be able to apply learning algorithms that are not applicable to continuous variables. Your dataset might fit one of these algorithms better, so here is your first benefit. The idea of estimating the loss due to binning is based on the paper "PAC learning with irrelevant attributes". Suppose our concept is binary, so we can split the samples into positives and negatives. For each pair of a negative and a positive sample, the difference in concept might be explained by a difference in one of the features (otherwise, it is not explainable by the given features). The set of feature differences is the set of possible explanations for the concept difference, and hence the data to use to determine the concept. If we bin and still get the same set of explanations for the pairs, we didn't lose any information needed (with respect to learning algorithms that work by such comparisons). If our categorisation is very strict, we will probably have a smaller set of possible explanations, but we will be able to measure accurately how much we lose, and where. That enables us to trade off the number of bins against the set of explanations. So far we have seen that we might not lose by categorisation, but if we consider applying such a step we would also like to benefit. Indeed, we can benefit from categorisation. Many learning algorithms that are asked to classify a sample with values not seen in the training set will treat those values as "unknown". Hence we will get a bin of "unknown" that includes ALL values not seen during training (or not seen often enough).
For such algorithms, differences between pairs of unknown values won't be used to improve classification. Compare your pairs after binning to the pairs with unknowns and see whether your binning is useful and you actually gained. You can estimate how common unknown values will be by checking the value distribution of each feature. Features where values that appear only a few times make up a considerable part of the distribution are good candidates for binning. Note that in many scenarios you will have many features with unknowns, increasing the probability that a sample will contain an unknown value. Algorithms that use all or many of the features are prone to error in such situations. A. Dhagat and L. Hellerstein, "PAC learning with irrelevant attributes", in Proceedings of the IEEE Symp. on Foundations of Computer Science, 1994. http://citeseer.ist.psu.edu/dhagat94pac.html
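The explanation-set criterion can be made concrete on a toy example. The data and the 0.5 cut-point below are invented; the point is only to show how one checks that binning preserves the pairwise explanations:

```python
import numpy as np

def explanations(X, y):
    """Set of 'which features differ' patterns over all negative/positive pairs."""
    pos, neg = X[y == 1], X[y == 0]
    return {frozenset(np.flatnonzero(a != b)) for a in neg for b in pos}

# toy data: feature 0 is continuous, feature 1 is already discrete
X = np.array([[0.1, 5], [0.2, 5], [0.8, 7], [0.9, 5]], dtype=float)
y = np.array([0, 0, 1, 1])

# bin feature 0 at 0.5; if the pairwise explanation sets are unchanged,
# no information needed by comparison-based learners was lost
Xb = X.copy()
Xb[:, 0] = X[:, 0] > 0.5
preserved = explanations(Xb, y) == explanations(X, y)
```

Here every negative/positive pair differs in the same features before and after binning, so by this criterion the binning is lossless for such learners.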
1,722
What is the benefit of breaking up a continuous predictor variable?
If a variable has an effect at a specific threshold, creating a new variable by binning it is a good thing to do. I always keep both variables, the original and the binned one, and check which is the better predictor.
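A minimal sketch of that check, on simulated data where the response really does jump at a threshold (the cut-point at 6 and the noise level are assumptions of the simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = (x > 6).astype(float) + rng.normal(0, 0.3, 500)   # response jumps at x = 6

x_bin = (x > 6).astype(float)     # the new thresholded variable
r_raw = np.corrcoef(x, y)[0, 1]
r_bin = np.corrcoef(x_bin, y)[0, 1]
# here the binned version correlates more strongly with y;
# for a smooth trend the original variable would usually win instead
```

Keeping both variables and comparing them, as the answer suggests, lets the data decide which description fits.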
1,723
What is the benefit of breaking up a continuous predictor variable?
I'm a committed fan of Frank Harrell's advice that analysts should resist premature discretization of continuous data. And I have several answers on CV and SO that demonstrate how to visualize interactions between continuous variables, since I think that is an even more valuable line of investigation. However, I also have real-world experience in the medical world of the barriers to adhering to this advice. There are often attractive divisions that both clinicians and non-clinicians expect for "splits". The conventional "upper limit of normal" is one such "natural" split point. One is essentially first examining the statistical underpinning of a relation and then communicating the substance of the findings in terms that the audience expects and can easily comprehend. Despite my "allergy" to barplots, they are exceedingly common in scientific and medical discourse. So the audience is likely to have a ready-made cognitive pattern to process them and will be able to integrate the results into their knowledge base. Furthermore, the graphical display of modeled interactions among non-linear forms of predictor variables requires contour plots or wireframe displays, which most of the audience will have some difficulty digesting. I have found the medical and general public more receptive to presentations that have discretized and segmented results. So I suppose the conclusion is that splitting is properly done after the statistical analysis is complete, and is done in the presentation phase.
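That analyze-continuously, present-categorically workflow might look like this in outline. The simulated blood pressures, the quadratic response, and the clinical bands are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
bp = rng.normal(130, 15, 1000)                        # simulated blood pressures
y = 0.05 * bp + 0.001 * (bp - 120) ** 2 + rng.normal(0, 1, bp.size)

# analysis phase: keep the predictor continuous (a quadratic fit here)
coef = np.polyfit(bp, y, 2)

# presentation phase: report the model's predictions at familiar clinical bands
bands = {"normal (<120)": bp < 120,
         "elevated (120-139)": (bp >= 120) & (bp < 140),
         "high (>=140)": bp >= 140}
summary = {name: np.polyval(coef, bp[m]).mean() for name, m in bands.items()}
```

The model itself never discretizes anything; the familiar categories appear only in the reported summary.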
1,724
What is the benefit of breaking up a continuous predictor variable?
I just want to add something to the discussion. Normally I would also tend not to bin the predictor variables, as I've learned that losing information is not much appreciated, and sometimes dangerous. However, with a massive amount of data, fitting on raw continuous predictors can be frustratingly slow, while the error of a model with binned predictors tends to be near the error of a model with continuous predictors, so accuracy stays sharp (https://proceedings.neurips.cc/paper/2017/file/6449f44a102fde848669bdd9eb6b76fa-Paper.pdf, p. 4 ff. and p. 7). And with the rise of histogram gradient boosting classifiers and regressors, binning continuous predictors into categorical features may have some use, if the data is big enough. My own experiments showed me that this holds for newer experimental versions of these GBDTs. It is correct that, when binning, we do not learn the exact rise per unit as in a regression (as cjthompson underlined previously). But with permutation importance we at least know what is essential to a model. Thus, even if you disagree, you have to admit that on large-scale data a pre-scan of your data with these algorithms, even if you prefer continuous predictors, may be enlightening or something to consider. Ke et al. 2017, LightGBM: A Highly Efficient Gradient Boosting Decision Tree, 31st Conference on Neural Information Processing Systems.
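A rough sketch of the histogram-binning idea itself, without any GBDT library: quantile-bin a feature into a small number of buckets (histogram GBDTs such as LightGBM do something like this internally, typically with up to 255 bins) and note that a piecewise-constant fit on the bins can track a nonlinear signal better than a global linear fit on the raw values. The sine signal, noise level, and 32 bins are assumptions of the simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=10_000)
y = np.sin(x) + rng.normal(0, 0.1, x.size)   # smooth nonlinear signal + noise

# histogram-bin x into 32 quantile buckets
edges = np.quantile(x, np.linspace(0, 1, 33)[1:-1])
code = np.digitize(x, edges)

# a piecewise-constant fit on the bins vs. a global linear fit on raw x
bin_pred = np.array([y[code == k].mean() for k in range(32)])[code]
lin_pred = np.polyval(np.polyfit(x, y, 1), x)
mse_bin = np.mean((y - bin_pred) ** 2)
mse_lin = np.mean((y - lin_pred) ** 2)
```

With enough data per bin, little is lost to the discretization, which is the trade-off the histogram GBDTs exploit for speed.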
1,725
Feature selection and cross-validation
If you perform feature selection on all of the data and then cross-validate, then the test data in each fold of the cross-validation procedure was also used to choose the features and this is what biases the performance analysis. Consider this example. We generate some target data by flipping a coin 10 times and recording whether it comes down as heads or tails. Next, we generate 20 features by flipping the coin 10 times for each feature and write down what we get. We then perform feature selection by picking the feature that matches the target data as closely as possible and use that as our prediction. If we then cross-validate, we will get an expected error rate slightly lower than 0.5. This is because we have chosen the feature on the basis of a correlation over both the training set and the test set in every fold of the cross-validation procedure. However, the true error rate is going to be 0.5 as the target data is simply random. If you perform feature selection independently within each fold of the cross-validation, the expected value of the error rate is 0.5 (which is correct). The key idea is that cross-validation is a way of estimating the generalization performance of a process for building a model, so you need to repeat the whole process in each fold. Otherwise, you will end up with a biased estimate, or an under-estimate of the variance of the estimate (or both). HTH Here is some MATLAB code that performs a Monte-Carlo simulation of this setup, with 56 features and 259 cases, to match your example, the output it gives is: Biased estimator: erate = 0.429210 (0.397683 - 0.451737) Unbiased estimator: erate = 0.499689 (0.397683 - 0.590734) The biased estimator is the one where feature selection is performed prior to cross-validation, the unbiased estimator is the one where feature selection is performed independently in each fold of the cross-validation. 
This suggests that the bias can be quite severe in this case, depending on the nature of the learning task.

NF = 56;
NC = 259;
NFOLD = 10;
NMC = 1e+4;

% perform Monte-Carlo simulation of biased estimator
erate = zeros(NMC,1);
for i=1:NMC
   y = randn(NC,1) >= 0;
   x = randn(NC,NF) >= 0;

   % perform feature selection
   err = mean(repmat(y,1,NF) ~= x);
   [err,idx] = min(err);

   % perform cross-validation
   partition = mod(1:NC, NFOLD)+1;
   y_xval = zeros(size(y));
   for j=1:NFOLD
      y_xval(partition==j) = x(partition==j,idx(1));
   end
   erate(i) = mean(y_xval ~= y);

   plot(erate);
   drawnow;
end
erate = sort(erate);
fprintf(1, '  Biased estimator: erate = %f (%f - %f)\n', mean(erate), erate(ceil(0.025*end)), erate(floor(0.975*end)));

% perform Monte-Carlo simulation of unbiased estimator
erate = zeros(NMC,1);
for i=1:NMC
   y = randn(NC,1) >= 0;
   x = randn(NC,NF) >= 0;

   % perform cross-validation
   partition = mod(1:NC, NFOLD)+1;
   y_xval = zeros(size(y));
   for j=1:NFOLD
      % perform feature selection
      err = mean(repmat(y(partition~=j),1,NF) ~= x(partition~=j,:));
      [err,idx] = min(err);

      y_xval(partition==j) = x(partition==j,idx(1));
   end
   erate(i) = mean(y_xval ~= y);

   plot(erate);
   drawnow;
end
erate = sort(erate);
fprintf(1, 'Unbiased estimator: erate = %f (%f - %f)\n', mean(erate), erate(ceil(0.025*end)), erate(floor(0.975*end)));
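For readers without MATLAB, here is a rough Python/NumPy sketch of the same Monte-Carlo experiment, using the coin-flip sizes from the worked example (10 cases, 20 features) rather than the 56/259 setup, so the exact numbers will differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(reps=2000, n=10, p=20, folds=5):
    """Mean CV error with feature selection done outside vs. inside the folds."""
    biased, unbiased = [], []
    for _ in range(reps):
        y = rng.integers(0, 2, n)           # 10 coin flips as the target
        X = rng.integers(0, 2, (n, p))      # 20 random coin-flip features
        fold = np.arange(n) % folds

        # biased: pick the best-matching feature on ALL the data first;
        # cross-validating that fixed feature just returns its overall mismatch
        best = np.argmin((X != y[:, None]).mean(axis=0))
        biased.append((X[:, best] != y).mean())

        # unbiased: re-select the feature inside each fold, on training data only
        errs = np.empty(n)
        for j in range(folds):
            tr, te = fold != j, fold == j
            best_j = np.argmin((X[tr] != y[tr][:, None]).mean(axis=0))
            errs[te] = X[te, best_j] != y[te]
        unbiased.append(errs.mean())
    return float(np.mean(biased)), float(np.mean(unbiased))

b, u = simulate()   # biased estimate well below 0.5; unbiased near 0.5
```

As in the MATLAB version, selecting the feature on the full data yields an optimistically low cross-validated error on purely random targets, while re-selecting within each fold recovers the true 0.5 error rate.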
1,726
Feature selection and cross-validation
To add a slightly different and more general description of the problem: if you do any kind of data-driven pre-processing, e.g.

1. parameter optimization guided by cross validation / out-of-bootstrap, or
2. dimensionality reduction with techniques like PCA or PLS to produce input for the model (e.g. PLS-LDA, PCA-LDA), ...

and want to use cross validation / out-of-bootstrap (/hold-out) validation to estimate the final model's performance, then the data-driven pre-processing needs to be done on the surrogate training data, i.e. separately for each surrogate model.

If the data-driven pre-processing is of type 1., this leads to "double" or "nested" cross validation: the parameter estimation is done in a cross validation using only the training set of the "outer" cross validation. The Elements of Statistical Learning has an illustration (https://web.stanford.edu/~hastie/Papers/ESLII.pdf, page 222 of the 5th printing).

You may say that the pre-processing is really part of building the model. Only pre-processing that is done independently for each case, or independently of the actual data set, can be taken out of the validation loop to save computations.

So the other way round: if your model is completely built by knowledge external to the particular data set (e.g. you decide beforehand by your expert knowledge that measurement channels 63 - 79 cannot possibly help to solve the problem), you can of course exclude these channels, build the model and cross-validate it. The same holds if you do a PLS regression and decide by your experience that 3 latent variables are a reasonable choice (but do not play around with whether 2 or 5 lv give better results): then you can go ahead with a normal out-of-bootstrap/cross validation.
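The "double" or nested cross validation just described can be sketched with a toy one-parameter model. Everything below (the synthetic data, the ridge-like slope model, and the parameter grid) is invented purely for illustration; the structure to notice is that the inner loop, which picks the parameter, only ever sees the outer training set:

```python
import random

random.seed(1)
# toy data: y = 2x + noise
data = [(i / 10, 2.0 * (i / 10) + random.gauss(0, 0.5)) for i in range(40)]

def fit(train, lam):
    # regularised slope through the origin (closed form)
    sxy = sum(x * y for x, y in train)
    sxx = sum(x * x for x, _ in train)
    return sxy / (sxx + lam)

def mse(slope, pts):
    return sum((y - slope * x) ** 2 for x, y in pts) / len(pts)

def folds(items, k):
    return [items[i::k] for i in range(k)]

def tune(train, lams, k=5):
    # inner cross validation: sees only the outer training set
    fs = folds(train, k)
    def cv_err(lam):
        return sum(
            mse(fit([p for g, f in enumerate(fs) if g != j for p in f], lam),
                fs[j])
            for j in range(k)) / k
    return min(lams, key=cv_err)

# outer cross validation: each fold repeats the WHOLE model-building process
# (tuning + fitting), so the held-out fold never influences it
outer = folds(data, 5)
errs = []
for j in range(5):
    train = [p for g, f in enumerate(outer) if g != j for p in f]
    lam = tune(train, [0.0, 0.1, 1.0, 10.0])
    errs.append(mse(fit(train, lam), outer[j]))
estimate = sum(errs) / len(errs)
print(f"nested-CV performance estimate (MSE): {estimate:.3f}")
```

Running `tune` once on all of the data and then cross-validating the resulting model would be the flawed shortcut the answers above warn about.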
1,727
Feature selection and cross-validation
Let's try to make it a little bit intuitive. Consider this example: You have a binary dependent and two binary predictors. You want a model with just one predictor. Both predictors have a chance of, say, 95% to be equal to the dependent and a chance of 5% to disagree with the dependent.

Now, by chance, on your data one predictor equals the dependent on the whole data 97% of the time and the other one only 93% of the time. You will pick the predictor with 97% and build your models. In each fold of the cross-validation you will have the model dependent = predictor, because it is almost always right. Therefore you will get a cross-predicted performance of 97%.

Now, you could say, ok, that's just bad luck. But if the predictors are constructed as above then you have a chance of 75% of at least one of them having an accuracy >95% on the whole data set, and that is the one you will pick. So you have a chance of 75% to overestimate the performance.

In practice, it is not at all trivial to estimate the effect. It is entirely possible that your feature selection would select the same features in each fold as if you did it on the whole data set, and then there will be no bias. The effect also becomes smaller if you have many more samples than features.

It might be instructive to use both ways with your data and see how the results differ. You could also set aside an amount of data (say 20%), use both your way and the correct way to get performance estimates by cross-validating on the 80%, and see which performance prediction proves more accurate when you transfer your model to the 20% of the data set aside. Note that for this to work your feature selection before CV will also have to be done just on the 80% of the data. Else it won't simulate transferring your model to data outside your sample.
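The selection effect in this example is easy to check with a quick simulation. The 95% accuracy comes from the example above; the sample size (n = 100) and trial count are my own invented choices. Both predictors are genuinely 95% accurate, yet picking whichever one looks better on the data systematically inflates the estimate:

```python
import random

random.seed(2)
N, TRIALS, TRUE_ACC = 100, 5000, 0.95

estimates = []
for _ in range(TRIALS):
    # two predictors, each truly agreeing with the target 95% of the time
    observed = [sum(random.random() < TRUE_ACC for _ in range(N)) / N
                for _ in range(2)]
    estimates.append(max(observed))  # keep whichever looks better on the data
mean_est = sum(estimates) / TRIALS
print(f"true accuracy: {TRUE_ACC}, mean estimate after selection: {mean_est:.4f}")
```

The mean of the selected predictor's observed accuracy comes out above the true 95%, which is exactly the optimistic bias the answer describes.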
1,728
What book would you recommend for non-statistician scientists? [closed]
Statistics
David Freedman, Robert Pisani, Roger Purves
Fourth edition: 2007; first edition: 1978

As an undergraduate studying philosophy, I was asked to analyze some data for a small study that I was working on with a physician. Needless to say, I found myself somewhat overwhelmed, but was able to get by by mimicking some old Stata code that a biostatistician friend had given me. The analysis turned out to be good enough to help get the study published, and I had suddenly become interested in this curious field of study called statistics.

The first book on statistics that I read was Statistics, by David Freedman and colleagues. What I liked most about it was its focus on explaining the fundamental concepts of statistical analysis (what do p-values actually mean, why is it important to visualize data, what does it mean for a test to be significant, etc.) with concise and accurate language, but without too much mathematics. With that conceptual background, I found it much easier to go on to read more advanced literature with more advanced mathematics.

This book covers all topics covered in a first-year statistics course, but does not cover time series or aggregation of large data sets. I feel it does a very good job at teaching a non-statistician how to think like a statistician. From there, adding new methods, like time series, should be relatively easy, and the non-statistician should be well on his way to becoming a life-long student of statistics.
1,729
What book would you recommend for non-statistician scientists? [closed]
The answer would most definitely depend on their discipline, the methods/techniques that they would like to learn and their existing mathematical/statistical abilities. For example, economists/social scientists who want to learn about cutting edge empirical econometrics could read Angrist and Pischke's Mostly Harmless Econometrics. This is a non-technical book covering the "natural experimental revolution" in economics. The book only presupposes that they know what regression is. But I think the best book on applied regression is Gelman and Hill's Data Analysis Using Regression and Multilevel/Hierarchical Models. This covers basic regression, multilevel regression, and Bayesian methods in a clear and intuitive way. It would be good for any scientist with a basic background in statistics.
1,730
What book would you recommend for non-statistician scientists? [closed]
Peter Dalgaard's Introductory Statistics with R is a great book for some introductory statistics with a focus on the R software for data analysis.
1,731
What book would you recommend for non-statistician scientists? [closed]
I'm going to assume some basic statistics knowledge and recommend:

The Statistical Sleuth (Ramsey, Schafer), which contains a good deal of mini case studies as it covers the basic statistical tools for data analysis.
A First Course in Multivariate Statistics (Flury), which covers the essential statistics required for data mining and the like.
1,732
What book would you recommend for non-statistician scientists? [closed]
Khan Academy has some nice introductory/beginner videos on statistics.
1,733
What book would you recommend for non-statistician scientists? [closed]
A lot of Social Science / Psychology students with minimal mathematical background like Andy Field's book: Discovering Statistics Using SPSS. He also has a website that shares a lot of material.
1,734
What book would you recommend for non-statistician scientists? [closed]
Not intending to plug my book, but it does seem to possibly apply. Last year I published a book with Wiley titled "The Essentials of Biostatistics for Physicians, Nurses and Clinicians". It is paperback and fairly concise, 214 pages in total. It has the advantage for you that it emphasizes topics that are important in biological applications, but may not be quite as concise as you would like to have for a 10-day self-learning course.

"Introductory Statistics for Biology Students", 2nd Edition, by Trudy Watt, published by Chapman and Hall/CRC in 1997, is another paperback that might be right for you. It is a little simpler than my book but does not include survival analysis, which I consider to be a very important topic in biological studies (particularly clinical trials). Her book is 236 pages.

I would also like to mention "The Cartoon Guide to Statistics" by Gonick. A humorous book, but it also covers basic concepts very well and is exceptionally easy to read.
1,735
What book would you recommend for non-statistician scientists? [closed]
Statistics in Plain English is pretty good. 4.5 on Amazon, 11 reviews. Explains ANOVA pretty well too.
1,736
What book would you recommend for non-statistician scientists? [closed]
Probably the best basic, get the big picture / ideas book is going to be: Robert Abelson's Statistics as Principled Argument
1,737
What book would you recommend for non-statistician scientists? [closed]
The Drunkard's Walk: How Randomness Rules Our Lives by Leonard Mlodinow is an excellent book for laypeople. Enjoyable and educational. It might not be a textbook, but it makes you think about the world in the right way.
1,738
What book would you recommend for non-statistician scientists? [closed]
It is a bit old, but I have found Chris Chatfield's book, Statistics for Technology: A Course in Applied Statistics, to be an excellent introduction. It was how I first learned about statistics from a conceptual point of view.
1,739
What book would you recommend for non-statistician scientists? [closed]
As a first introduction to the topic I liked Data Analysis: A Bayesian Tutorial. For a deep and philosophical discussion of the underlying ideas of quantitative scientific reasoning I recommend Probability Theory: The Logic of Science. The latter does not serve as a good introduction, though; it's only recommended for people who want to know why Bayesian statistics is the way it is and/or are interested in a historical review of Bayesian statistics.
1,740
What book would you recommend for non-statistician scientists? [closed]
So many wonderful recommendations! It's not quite what you asked for, but How to Lie with Statistics is short and quite wonderful. It doesn't directly teach the things you want, but it does help point out violation of assumptions and other flaws.
1,741
What book would you recommend for non-statistician scientists? [closed]
The Flaw of Averages by Sam Savage.
1,742
What book would you recommend for non-statistician scientists? [closed]
"How to Tell the Liars from the Statisticians" by Hooke. I am fond of its way of explaining the concepts of statistics to laypersons. As for explaining the motivations of statisticians, "The Lady Tasting Tea" is good reading.
1,743
What book would you recommend for non-statistician scientists? [closed]
"Biometry: The Principles and Practices of Statistics in Biological Research" by Robert R. Sokal and F. James Rohlf
"Biostatistical Analysis" by Jerrold H. Zar
"Primer of Biostatistics" by Stanton Glantz
1,744
What book would you recommend for non-statistician scientists? [closed]
For the rudiments of statistics: http://www.bbc.co.uk/dna/h2g2/A1091350 and http://www.robertniles.com/stats/ For a good guide to data visualisation: http://www.perceptualedge.com/ - in particular, try the Graph Design IQ test at http://www.perceptualedge.com/files/GraphDesignIQ.html (requires Flash) NB these are orthogonal - there are lots of statistics experts who are terrible at data visualisation, and vice versa.
1,745
What book would you recommend for non-statistician scientists? [closed]
The following are text books I used for my MSEE coursework and research and I found them to be pretty good. Probability, Statistics and Random Processes for Engineers by Henry Stark and John W. Woods (Detailed explanation of concepts, good for Communications and Signal Processing people). Schaum's Outline of Probability, Random Variables and Random Processes by Hwei Hsu (Concise explanation of concepts, has a good amount of solved examples).
1,746
What book would you recommend for non-statistician scientists? [closed]
I recently found Even You Can Learn Statistics to be pretty useful.
1,747
What book would you recommend for non-statistician scientists? [closed]
I strongly recommend "Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Edition" by Box, Hunter and Hunter. Must-read book for any scientist doing statistical analysis of their experiments. There's a companion R package (BHH2) as well.
1,748
What book would you recommend for non-statistician scientists? [closed]
For years I have found the Engineering Statistics Handbook to be useful on a practical level. It's freely available online.
1,749
What book would you recommend for non-statistician scientists? [closed]
Gotelli and Ellison (2004) A Primer of Ecological Statistics It's geared towards "Outdoor Science" (Ecology, Environmental Science, Biology) but the pedagogy is excellent. Anyone could benefit from it.
1,750
What book would you recommend for non-statistician scientists? [closed]
Whitlock and Schluter The Analysis of Biological Data 3rd edition 2020 details at https://www.amazon.com/Analysis-Biological-Data-Michael-Whitlock/dp/131922623X is an outstanding blend of statistics and science. You don't have to be a biologist (I'm certainly not) to understand and appreciate the examples. It's not only clear and sound, it's entertaining and enjoyable too.
1,751
What book would you recommend for non-statistician scientists? [closed]
I have recently had this website pointed out to me. It covers a number of books useful for new statisticians, with some targeted discussion of their various strengths and weaknesses, and a summary right at the bottom.
1,752
What book would you recommend for non-statistician scientists? [closed]
"Theoretical Statistics" by Robert W. Keener, 1st Edition, 2010, XVII, 538 p. Hardcover, ISBN 978-0-387-93838-7
1,753
What book would you recommend for non-statistician scientists? [closed]
I would recommend: The Statistical Sleuth (Ramsey & Schafer) and Biostatistical Analysis (Zar).
1,754
What book would you recommend for non-statistician scientists? [closed]
I'm really fond of the "for Dummies" series, and from the few pages I've read of it, Deborah J. Rumsey's "Statistics For Dummies" is a fine book for non-statisticians, as well as for statisticians looking for a way to explain statistical concepts to non-statisticians.
1,755
What book would you recommend for non-statistician scientists? [closed]
This link suggests many great books: https://www.stat.berkeley.edu/mediawiki/index.php/Recommended_Books. Besides that, I suggest The Statistical Sleuth: A Course in Methods of Data Analysis. Following the examples in the book, many concepts become easier to understand.
What book would you recommend for non-statistician scientists? [closed]
This link suggested many great books. https://www.stat.berkeley.edu/mediawiki/index.php/Recommended_Books besides that, I suggested: The Statistical Sleuth: A Course in Methods of Data Analysis. Follo
What book would you recommend for non-statistician scientists? [closed] This link suggested many great books. https://www.stat.berkeley.edu/mediawiki/index.php/Recommended_Books besides that, I suggested: The Statistical Sleuth: A Course in Methods of Data Analysis. Following the examples in the book, many concepts become easier to understand.
What book would you recommend for non-statistician scientists? [closed] This link suggested many great books. https://www.stat.berkeley.edu/mediawiki/index.php/Recommended_Books besides that, I suggested: The Statistical Sleuth: A Course in Methods of Data Analysis. Follo
1,756
What book would you recommend for non-statistician scientists? [closed]
If you're to use SPSS, I'd recommend this book: Data Analysis for the Behavioral Sciences Using SPSS by Weinberg & Abramowitz. It is very well written and accessible. Note that it doesn't cover time-series, though.
1,757
What book would you recommend for non-statistician scientists? [closed]
That'll depend very much on their background, but I found "Statistics in a Nutshell" to be pretty good.
1,758
Correlation between a nominal (IV) and a continuous (DV) variable
The title of this question suggests a fundamental misunderstanding. The most basic idea of correlation is "as one variable increases, does the other variable increase (positive correlation), decrease (negative correlation), or stay the same (no correlation)", with a scale such that perfect positive correlation is +1, no correlation is 0, and perfect negative correlation is -1. The meaning of "perfect" depends on which measure of correlation is used: for Pearson correlation it means the points on a scatter plot lie right on a straight line (sloped upwards for +1 and downwards for -1), for Spearman correlation that the ranks exactly agree (or exactly disagree, so first is paired with last, for -1), and for Kendall's tau that all pairs of observations have concordant ranks (or discordant for -1).

An intuition for how this works in practice can be gleaned from the Pearson correlations of a range of scatter plots (images omitted). Further insight comes from considering Anscombe's Quartet, where all four data sets have Pearson correlation +0.816, even though they follow the pattern "as $x$ increases, $y$ tends to increase" in very different ways (images omitted).

If your independent variable is nominal then it doesn't make sense to talk about what happens "as $x$ increases". In your case, "Topic of conversation" doesn't have a numerical value that can go up and down. So you can't correlate "Topic of conversation" with "Duration of conversation". But as @ttnphns wrote in the comments, there are measures of strength of association you can use that are somewhat analogous.
Here is some fake data and accompanying R code:

    data.df <- data.frame(
        topic = c(rep(c("Gossip", "Sports", "Weather"), each = 4)),
        duration = c(6:9, 2:5, 4:7)
    )
    print(data.df)
    boxplot(duration ~ topic, data = data.df, ylab = "Duration of conversation")

Which gives:

    > print(data.df)
         topic duration
    1   Gossip        6
    2   Gossip        7
    3   Gossip        8
    4   Gossip        9
    5   Sports        2
    6   Sports        3
    7   Sports        4
    8   Sports        5
    9  Weather        4
    10 Weather        5
    11 Weather        6
    12 Weather        7

By using "Gossip" as the reference level for "Topic", and defining binary dummy variables for "Sports" and "Weather", we can perform a multiple regression.

    > model.lm <- lm(duration ~ topic, data = data.df)
    > summary(model.lm)

    Call:
    lm(formula = duration ~ topic, data = data.df)

    Residuals:
       Min     1Q Median     3Q    Max
     -1.50  -0.75   0.00   0.75   1.50

    Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
    (Intercept)    7.5000     0.6455  11.619 1.01e-06 ***
    topicSports   -4.0000     0.9129  -4.382  0.00177 **
    topicWeather  -2.0000     0.9129  -2.191  0.05617 .
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 1.291 on 9 degrees of freedom
    Multiple R-squared:  0.6809,    Adjusted R-squared:  0.6099
    F-statistic:   9.6 on 2 and 9 DF,  p-value: 0.005861

We can interpret the estimated intercept as giving the mean duration of Gossip conversations as 7.5 minutes, and the estimated coefficients for the dummy variables as showing Sports conversations were on average 4 minutes shorter than Gossip ones, while Weather conversations were 2 minutes shorter than Gossip. Part of the output is the coefficient of determination $R^2 = 0.6809$. One interpretation of this is that our model explains 68% of variance in conversation duration. Another interpretation of $R^2$ is that by square-rooting, we can find the multiple correlation coefficient $R$.

    > rsq <- summary(model.lm)$r.squared
    > rsq
    [1] 0.6808511
    > sqrt(rsq)
    [1] 0.825137

Note that 0.825 isn't the correlation between Duration and Topic - we can't correlate those two variables because Topic is nominal.
What it actually represents is the correlation between the observed durations, and the ones predicted (fitted) by our model. Both of these variables are numerical so we are able to correlate them. In fact the fitted values are just the mean durations for each group:

    > print(model.lm$fitted)
      1   2   3   4   5   6   7   8   9  10  11  12
    7.5 7.5 7.5 7.5 3.5 3.5 3.5 3.5 5.5 5.5 5.5 5.5

Just to check, the Pearson correlation between observed and fitted values is:

    > cor(data.df$duration, model.lm$fitted)
    [1] 0.825137

We can visualise this on a scatter plot:

    plot(x = model.lm$fitted, y = data.df$duration,
         xlab = "Fitted duration", ylab = "Observed duration")
    abline(lm(data.df$duration ~ model.lm$fitted), col = "red")

The strength of this relationship is visually very similar to those of the Anscombe's Quartet plots, which is unsurprising as they all had Pearson correlations about 0.82.

You might be surprised that with a categorical independent variable, I chose to do a (multiple) regression rather than a one-way ANOVA. But in fact this turns out to be an equivalent approach.

    library(heplots)  # for eta
    model.aov <- aov(duration ~ topic, data = data.df)
    summary(model.aov)

This gives a summary with identical F statistic and p-value:

                Df Sum Sq Mean Sq F value  Pr(>F)
    topic        2     32  16.000     9.6 0.00586 **
    Residuals    9     15   1.667
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Again, the ANOVA model fits the group means, just as the regression did:

    > print(model.aov$fitted)
      1   2   3   4   5   6   7   8   9  10  11  12
    7.5 7.5 7.5 7.5 3.5 3.5 3.5 3.5 5.5 5.5 5.5 5.5

This means that the correlation between fitted and observed values of the dependent variable is the same as it was for the multiple regression model. The "proportion of variance explained" measure $R^2$ for multiple regression has an ANOVA equivalent, $\eta^2$ (eta squared). We can see that they match.
    > etasq(model.aov, partial = FALSE)
                  eta^2
    topic     0.6808511
    Residuals        NA

In this sense, the closest analogue to a "correlation" between a nominal explanatory variable and continuous response would be $\eta$, the square root of $\eta^2$, which is the equivalent of the multiple correlation coefficient $R$ for regression. This explains the comment that "The most natural measure of association / correlation between a nominal (taken as IV) and a scale (taken as DV) variables is eta".

If you are more interested in the proportion of variance explained, then you can stick with eta squared (or its regression equivalent $R^2$). For ANOVA, one often comes across the partial eta squared. As this ANOVA was one-way (there was only one categorical predictor), the partial eta squared is the same as eta squared, but things change in models with more predictors.

    > etasq(model.aov, partial = TRUE)
              Partial eta^2
    topic         0.6808511
    Residuals            NA

However it's quite possible that neither "correlation" nor "proportion of variance explained" is the measure of effect size you wish to use. For instance, your focus may lie more on how means differ between groups. This question and answer contain more information on eta squared, partial eta squared, and various alternatives.
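As a cross-check on the numbers above, eta squared can be recomputed from first principles as the between-group sum of squares divided by the total sum of squares. The sketch below redoes that arithmetic for the same fake data; it is an illustrative Python translation (the answer's own code is R):

```python
# Recompute eta^2 = SS_between / SS_total for the fake conversation data.
durations = {
    "Gossip":  [6, 7, 8, 9],
    "Sports":  [2, 3, 4, 5],
    "Weather": [4, 5, 6, 7],
}

all_vals = [d for group in durations.values() for d in group]
grand_mean = sum(all_vals) / len(all_vals)

# Between-group sum of squares: n_g * (group mean - grand mean)^2 per group
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in durations.values()
)
# Total sum of squares around the grand mean
ss_total = sum((d - grand_mean) ** 2 for d in all_vals)

eta_squared = ss_between / ss_total
print(round(eta_squared, 7))       # 0.6808511, matching R^2 / eta^2 above
print(round(eta_squared ** 0.5, 6))  # 0.825137, matching R / eta above
```

Here SS_between = 32 and SS_total = 47, reproducing both the ANOVA sum-of-squares table and the `etasq` output.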
1,759
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
Consider the simplest case where $Y$ is regressed against $X$ and $Z$ and where $X$ and $Z$ are highly positively correlated. Then the effect of $X$ on $Y$ is hard to distinguish from the effect of $Z$ on $Y$ because any increase in $X$ tends to be associated with an increase in $Z$. Another way to look at this is to consider the equation. If we write $Y = b_0 + b_1X + b_2Z + e$, then the coefficient $b_1$ is the increase in $Y$ for every unit increase in $X$ while holding $Z$ constant. But in practice, it is often impossible to hold $Z$ constant, and the positive correlation between $X$ and $Z$ means that a unit increase in $X$ is usually accompanied by some increase in $Z$ at the same time. A similar but more complicated explanation holds for other forms of multicollinearity.
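To make this concrete, here is a small simulation sketch (my own illustration, not part of the original answer): the model $Y = 1 + 2X + 3Z + e$ is fitted repeatedly by least squares, and the spread of the fitted coefficient on $X$ is compared between a design where $X$ and $Z$ are nearly independent and one where they are highly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

def b1_spread(corr, n=200, reps=500):
    """Std dev of the fitted coefficient on X over many simulated data sets,
    with X and Z drawn jointly normal with the given correlation."""
    estimates = []
    cov = [[1.0, corr], [corr, 1.0]]
    for _ in range(reps):
        xz = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        x, z = xz[:, 0], xz[:, 1]
        y = 1 + 2 * x + 3 * z + rng.normal(0, 1, n)
        design = np.column_stack([np.ones(n), x, z])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        estimates.append(beta[1])
    return float(np.std(estimates))

low = b1_spread(corr=0.1)    # X and Z nearly uncorrelated
high = b1_spread(corr=0.98)  # X and Z highly collinear
print(low, high)             # the collinear case is several times noisier
```

The collinear design cannot separate the two effects well, so the estimate of $b_1$ wanders far more from sample to sample, which is exactly the "hard to distinguish" intuition above.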
1,760
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
I was eating sushi once and thought that it might make a good intuitive demonstration of ill-conditioned problems. Suppose you wanted to show someone a plane using two sticks touching at their bases. You'd probably hold the sticks orthogonal to each other. The effect of any kind of shakiness of your hands on the plane causes it to wobble a little around what you were hoping to show people, but after watching you for a while they get a good idea of what plane you were intending to demonstrate. But let's say you bring the sticks' ends closer together and watch the effect of your hands shaking. The plane it forms will pitch far more wildly. Your audience will have to watch longer to get a good idea of what plane you are trying to demonstrate.
1,761
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
The geometric approach is to consider the least squares projection of $Y$ onto the subspace spanned by $X$. Say you have the model: $E[Y | X] = \beta_{1} X_{1} + \beta_{2} X_{2}$. Our estimation space is the plane determined by the vectors $X_{1}$ and $X_{2}$, and the problem is to find coordinates $(\beta_{1}, \beta_{2})$ which will describe the vector $\hat{Y}$, the least squares projection of $Y$ onto that plane. Now suppose $X_{1} = 2 X_{2}$, i.e. they're collinear. Then the subspace determined by $X_{1}$ and $X_{2}$ is just a line and we have only one degree of freedom. So we can't determine the two values $\beta_{1}$ and $\beta_{2}$ as we were asked.
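A quick way to see this degeneracy numerically (a sketch with simulated data): with $X_{1} = 2 X_{2}$, the design matrix is rank-deficient, and R's lm() signals the non-identifiability by returning NA for the aliased coefficient.

```r
# Sketch: X1 = 2*X2, so the column space is a line; one coefficient
# is not identified and lm() reports it as NA.
set.seed(1)
x2  <- rnorm(50)
x1  <- 2 * x2
y   <- 1 + x1 + rnorm(50)
fit <- lm(y ~ x1 + x2)
coef(fit)   # the coefficient of x2 comes back NA (aliased)
```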
1,762
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
Two people are pushing a boulder up a hill. You want to know how hard each of them is pushing. Suppose that you watch them push together for ten minutes and the boulder moves 10 feet. Did the first guy do all the work and the second just fake it? Or vice versa? Or 50-50? Since both forces are working at the exact same time, you can't separate the strength of either one separately. All that you can say is that their combined force moves the boulder 1 foot per minute. Now imagine that the first guy pushes for a minute by himself, then nine minutes with the second guy, and a final minute is just the second guy pushing. Now you can use estimates of the forces in the first and last minutes to figure out each person's force separately. Even though they are still largely working at the same time, the fact that there is a bit of difference lets you get estimates of the force for each. If you saw each man pushing independently for a full ten minutes, that would give you more precise estimates of the forces than if there is a large overlap in the forces. I leave as an exercise for the reader to extend this case to one man pushing uphill and the other pushing downhill (it still works). Perfect multicollinearity prevents you from estimating the forces separately; near multicollinearity gives you larger standard errors.
1,763
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
The way I think about this really is in terms of information. Say each of $X_{1}$ and $X_{2}$ has some information about $Y$. The more correlated $X_{1}$ and $X_{2}$ are with each other, the more similar or overlapping their information content about $Y$ is, to the point that for perfectly correlated $X_{1}$ and $X_{2}$, it really is the same information content. If we now put $X_{1}$ and $X_{2}$ in the same (regression) model to explain $Y$, the model tries to "apportion" the information that ($X_{1}$,$X_{2}$) contains about $Y$ to each of $X_{1}$ and $X_{2}$, in a somewhat arbitrary manner. There is no really good way to apportion this, since any split of the information still keeps the total information from ($X_{1}$,$X_{2}$) in the model (for perfectly correlated $X$'s, this really is a case of non-identifiability). This leads to unstable individual estimates for the coefficients of $X_{1}$ and $X_{2}$, though if you look at the predicted values $b_{1}X_{1}+b_{2}X_{2}$ over many runs and many estimates of $b_{1}$ and $b_{2}$, these will be quite stable.
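The claim that the individual coefficients are unstable while the fitted values stay stable can be checked with a small simulation (a sketch; the data-generating model $y = x_1 + x_2 + e$ with nearly identical $x_1, x_2$ is made up for illustration):

```r
# Sketch: over repeated samples, the individual coefficients of two
# highly correlated predictors vary wildly, but the fitted value at a
# fixed point (here x1 = x2 = 1) stays stable.
set.seed(1)
b1s <- b2s <- preds <- numeric(500)
for (i in 1:500) {
  x1 <- rnorm(100)
  x2 <- x1 + rnorm(100, sd = 0.05)   # nearly collinear
  y  <- x1 + x2 + rnorm(100)
  b  <- coef(lm(y ~ x1 + x2))
  b1s[i]   <- b["x1"]
  b2s[i]   <- b["x2"]
  preds[i] <- b["(Intercept)"] + b["x1"] + b["x2"]  # prediction at x1 = x2 = 1
}
c(sd(b1s), sd(b2s), sd(preds))   # coefficient SDs dwarf the prediction SD
```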
1,764
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
My (very) layman intuition for this is that the OLS model needs a certain level of "signal" in an X variable in order to detect that it is a "good" predictor of Y. If the same "signal" is spread over many X's (because they are correlated), then none of the correlated X's can provide enough "proof" (statistical significance) that it is a real predictor. The previous (wonderful) answers do a great job of explaining why that is the case.
1,765
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
Assume that two people collaborated on and accomplished a scientific discovery. It is easy to tell their unique contributions apart (who did what) when the two are totally different persons (one is the theory guy and the other is good at experiments), while it is difficult to distinguish their unique influences (coefficients in regression) when they are twins acting similarly.
1,766
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
If two regressors are perfectly correlated, their coefficients will be impossible to calculate; it's helpful to consider why they would be difficult to interpret if we could calculate them. In fact, this explains why it's difficult to interpret variables that are not perfectly correlated but that are also not truly independent. Suppose that our dependent variable is the daily supply of fish in New York, and our independent variables include one for whether it rains on that day and one for the amount of bait purchased on that day. What we don't realize when we collect our data is that every time it rains, fishermen purchase no bait, and every time it doesn't, they purchase a constant amount of bait. So Bait and Rain are perfectly correlated, and when we run our regression, we can't calculate their coefficients. In reality, Bait and Rain are probably not perfectly correlated, but we wouldn't want to include them both as regressors without somehow cleaning them of their endogeneity.
1,767
Is there an intuitive explanation why multicollinearity is a problem in linear regression?
I think the dummy variable trap provides another useful possibility to illustrate why multicollinearity is a problem. Recall that it arises when we have a constant and a full set of dummies in the model. Then the sum of the dummies adds up to one, the constant, so we have perfect multicollinearity. E.g., a dummy for men and one for women: $$y_i=\beta_0+\beta_1Man_i+\beta_2Woman_i+u_i$$ The standard interpretation of $\beta_1$ is the expected change in $Y$ that arises from changing $Man_i$ from 0 to 1. Likewise, $\beta_2$ is the expected change in $Y$ that arises from changing $Woman_i$ from 0 to 1. But what is $\beta_0$ then supposed to represent? It is $E(y_i|Man_i=0,Woman_i=0)$, the expected outcome for persons who are neither a man nor a woman. So if everybody in the dataset identifies as either man or woman, $\beta_0$ represents nobody.
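The trap is easy to reproduce (a sketch with toy data; the numbers are made up): forcing both dummies alongside the intercept makes one coefficient inestimable, whereas R's default factor coding avoids the trap by dropping one level.

```r
# Sketch: man + woman = 1 = the intercept column, so the design is
# rank-deficient and one dummy's coefficient is aliased (NA).
sex   <- factor(c("man", "woman", "man", "woman", "man"))
y     <- c(1, 2, 1.5, 2.5, 1)
man   <- as.numeric(sex == "man")
woman <- as.numeric(sex == "woman")
fit_trap <- lm(y ~ man + woman)
coef(fit_trap)      # one of the two dummy coefficients comes back NA
coef(lm(y ~ sex))   # default coding: one dummy dropped, intercept = baseline mean
```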
1,768
How scared should we be about convergence warnings in lme4
Be afraid. Be very afraid.

Last year, I interviewed John Nash, the author of optim and optimx, for an article on IBM's DeveloperWorks site. We talked about how optimizers work and why they fail when they fail. He seemed to take it for granted that they often do. That's why the diagnostics are included in the package. He also thought that you need to "understand your problem", and understand your data. All of which means that warnings should be taken seriously, and are an invitation to look at your data in other ways.

Typically, an optimizer stops searching when it can no longer improve the loss function by a meaningful amount. It doesn't know where to go next, basically. If the gradient of the loss function is not zero at that point, you haven't reached an extremum of any kind. If the Hessian is not positive definite, but the gradient is zero, you haven't found a minimum, but possibly you did find a maximum or a saddle point. Depending on the optimizer, though, results about the Hessian might not be supplied. In optimx, if you want the KKT conditions evaluated, you have to ask for them -- they are not evaluated by default. (These conditions look at the gradient and Hessian to see if you really have a minimum.)

The problem with mixed models is that the variance estimates for the random effects are constrained to be positive, thus placing a boundary within the optimization region. But suppose a particular random effect is not really needed in your model -- i.e. the variance of the random effect is 0. Your optimizer will head into that boundary, be unable to proceed, and stop with a non-zero gradient. If removing that random effect improves convergence, you will know that was the problem. As an aside, note that asymptotic maximum likelihood theory assumes the MLE is found at an interior point (i.e. not on the boundary of licit parameter values) -- so likelihood ratio tests for variance components may not work when the null hypothesis of zero variance is indeed true. Testing can be done using simulation tests, as implemented in the package RLRsim.

To me, I suspect that optimizers run into problems when there is too little data for the number of parameters, or when the proposed model is really not suitable. Think glass slipper and ugly step-sister: you can't shoehorn your data into the model, no matter how hard you try, and something has to give. Even if the data happen to fit the model, they may not have the power to estimate all the parameters.

A funny thing happened to me along those lines. I simulated some mixed models to answer a question about what happens if you don't allow the random effects to be correlated when fitting a mixed-effects model. I simulated data with a strong correlation between the two random effects, then fit the model both ways with lmer: positing 0 correlations and free correlations. The correlation model fit better than the uncorrelated model, but interestingly, in 1000 simulations, I had 13 errors when fitting the true model and 0 errors when fitting the simpler model. I don't fully understand why this happened (and I repeated the sims with similar results). I suspect that the correlation parameter is fairly useless and the optimizer can't find the value (because it doesn't matter).

You asked about what to do when different optimizers give different results. John and I discussed this point. Some optimizers, in his opinion, are just not that good! And all of them have points of weakness -- i.e., data sets that will cause them to fail. This is why he wrote optimx, which includes a variety of optimizers. You can run several on the same data set. If two optimizers give the same parameters, but different diagnostics -- and those parameters make real-world sense -- then I would be inclined to trust the parameter values. The difficulty could lie with the diagnostics, which are not fool-proof. If you have not explicitly supplied the gradient function and/or Hessian matrix, the optimizer will need to estimate these from the loss function and the data, which is just something else that can go wrong.

If you are also getting different parameter values, then you might want to try different starting values and see what happens. Some optimizers and some problems are very sensitive to the starting values. You want to start in the ball park.
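To make the boundary problem concrete, here is a sketch (with simulated data in which the group-level variance is truly zero, so none of the behavior below is guaranteed for your data): the variance estimate tends to collapse to the boundary, which lme4 can flag via isSingular(), and you can rerun the same model under a different optimizer to see whether the estimates agree.

```r
library(lme4)

# Simulate data with NO true group effect, then fit a random intercept anyway.
set.seed(1)
d   <- data.frame(g = factor(rep(1:10, each = 10)), x = rnorm(100))
d$y <- d$x + rnorm(100)
fit <- lmer(y ~ x + (1 | g), data = d)

isSingular(fit)   # often TRUE here: the variance estimate sits at the 0 boundary
VarCorr(fit)      # look at the (near-)zero random-effect variance

# Refit with another optimizer and compare the fixed-effect estimates:
fit2 <- update(fit, control = lmerControl(optimizer = "bobyqa"))
all.equal(fixef(fit), fixef(fit2), tolerance = 1e-4)
```

If the two fits agree on the parameters and the only complaint is a singular or boundary fit, simplifying the random-effects structure is usually the next thing to try.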
1,769
How scared should we be about convergence warnings in lme4
I just want to supplement @Placidia's great answer. You may want to check out "Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects" by James Hodges (2014). It discusses what we do not know about mixed models and at the same time attempts to offer a broad theory, as well as practical tips for fitting complex models. An often scared modeler myself, I find Hodges's discussions of "puzzles" priceless. He explains strange cases that arise from fitting mixed-effects models, including "A random effect competing with a fixed effect" and "Competition between Random Effects". Sounds familiar?
1,770
Conditional inference trees vs traditional decision trees
For what it's worth: both rpart and ctree recursively perform univariate splits of the dependent variable based on values on a set of covariates. rpart and related algorithms usually employ information measures (such as the Gini coefficient) for selecting the current covariate. ctree, according to its authors (see chl's comments) avoids the following variable selection bias of rpart (and related methods): They tend to select variables that have many possible splits or many missing values. Unlike the others, ctree uses a significance test procedure in order to select variables instead of selecting the variable that maximizes an information measure (e.g. Gini coefficient). The significance test, or better: the multiple significance tests computed at each start of the algorithm (select covariate - choose split - recurse) are permutation tests, that is, the "the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points." (from the wikipedia article). Now for the test statistic: it is computed from transformations (including identity, that is, no transform) of the dependent variable and the covariates. You can choose any of a number of transformations for both variables. For the DV (Dependant Variable), the transformation is called the influence function you were asking about. Examples (taken from the paper): if both DV and covariates are numeric, you might select identity transforms and calculate correlations between the covariate and all possible permutations of the values of the DV. Then, you calculate the p-value from this permutation test and compare it with p-values for other covariates. if both DV and the covariates are nominal (unordered categorical), the test statistic is computed from a contingency table. 
you can easily make up other kinds of test statistics from any kind of transformations (including identity transforms) from this general scheme.
A small example of a permutation test in R:
require(gtools)
dv <- c(1,3,4,5,5); covariate <- c(2,2,5,4,5)
# all possible permutations of dv, length(120):
perms <- permutations(5,5,dv,set=FALSE)
# now calculate correlations for all perms with covariate:
cors <- apply(perms, 1, function(perms_row) cor(perms_row,covariate))
cors <- cors[order(cors)]
# now p-value: compare cor(dv,covariate) with the
# sorted vector of all permutation correlations
length(cors[cors>=cor(dv,covariate)])/length(cors)
# result: [1] 0.1, i.e. a p-value of .1
# note that this is a one-sided test
Now suppose you have a set of covariates, not only one as above. Then calculate p-values for each of the covariates as in the above scheme, and select the one with the smallest p-value. You want to calculate p-values instead of the correlations directly, because you could have covariates of different kinds (e.g. numeric and categorical). Once you have selected a covariate, explore all possible splits (or, often, a somewhat restricted number of all possible splits, e.g. by requiring a minimal number of elements of the DV before splitting), again evaluating a permutation-based test. ctree comes with a number of possible transformations for both DV and covariates (see the help for Transformations in the party package). So generally the main difference seems to be that ctree uses a covariate selection scheme that is based on statistical theory (i.e. selection by permutation-based significance tests) and thereby avoids a potential bias in rpart; otherwise they seem similar, e.g. conditional inference trees can be used as base learners for Random Forests. This is about as far as I can get. For more information, you really need to read the papers.
Note that I strongly recommend that you really know what you're doing when you want to apply any kind of statistical analysis.
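For readers who don't use R, here is a rough Python translation of the permutation test above (my own sketch, not part of the original answer). It enumerates all 5! = 120 arrangements of the DV, counting duplicates arising from the tied 5s, just as gtools::permutations(..., set=FALSE) does:

```python
from itertools import permutations
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation, to avoid depending on numpy/scipy."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

dv = [1, 3, 4, 5, 5]
covariate = [2, 2, 5, 4, 5]
obs = pearson(dv, covariate)

# Correlations for all 5! = 120 arrangements of dv (duplicates included).
cors = [pearson(list(p), covariate) for p in permutations(dv)]
# One-sided p-value; the tiny slack makes tied arrangements count despite
# floating-point rounding.
p_value = sum(c >= obs - 1e-9 for c in cors) / len(cors)
print(p_value)  # 0.1, matching the R example
```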
Conditional inference trees vs traditional decision trees
For what it's worth: both rpart and ctree recursively perform univariate splits of the dependent variable based on values on a set of covariates. rpart and related algorithms usually employ informatio
Conditional inference trees vs traditional decision trees For what it's worth: both rpart and ctree recursively perform univariate splits of the dependent variable based on values on a set of covariates. rpart and related algorithms usually employ information measures (such as the Gini coefficient) for selecting the current covariate. ctree, according to its authors (see chl's comments) avoids the following variable selection bias of rpart (and related methods): They tend to select variables that have many possible splits or many missing values. Unlike the others, ctree uses a significance test procedure in order to select variables instead of selecting the variable that maximizes an information measure (e.g. Gini coefficient). The significance test, or better: the multiple significance tests computed at each start of the algorithm (select covariate - choose split - recurse) are permutation tests, that is, the "the distribution of the test statistic under the null hypothesis is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points." (from the wikipedia article). Now for the test statistic: it is computed from transformations (including identity, that is, no transform) of the dependent variable and the covariates. You can choose any of a number of transformations for both variables. For the DV (Dependant Variable), the transformation is called the influence function you were asking about. Examples (taken from the paper): if both DV and covariates are numeric, you might select identity transforms and calculate correlations between the covariate and all possible permutations of the values of the DV. Then, you calculate the p-value from this permutation test and compare it with p-values for other covariates. if both DV and the covariates are nominal (unordered categorical), the test statistic is computed from a contingency table. 
you can easily make up other kinds of test statistics from any kind of transformations (including identity transform) from this general scheme. small example for a permutation test in R: require(gtools) dv <- c(1,3,4,5,5); covariate <- c(2,2,5,4,5) # all possible permutations of dv, length(120): perms <- permutations(5,5,dv,set=FALSE) # now calculate correlations for all perms with covariate: cors <- apply(perms, 1, function(perms_row) cor(perms_row,covariate)) cors <- cors[order(cors)] # now p-value: compare cor(dv,covariate) with the # sorted vector of all permutation correlations length(cors[cors>=cor(dv,covariate)])/length(cors) # result: [1] 0.1, i.e. a p-value of .1 # note that this is a one-sided test Now suppose you have a set of covariates, not only one as above. Then calculate p-values for each of the covariates like in the above scheme, and select the one with the smallest p-value. You want to calculate p-values instead of the correlations directly, because you could have covariates of different kinds (e.g. numeric and categorical). Once you have selected a covariate, now explore all possible splits (or often a somehow restricted number of all possible splits, e.g. by requiring a minimal number of elements of the DV before splitting) again evaluating a permutation-based test. ctree comes with a number of possible transformations for both DV and covariates (see the help for Transformations in the party package). so generally the main difference seems to be that ctree uses a covariate selection scheme that is based on statistical theory (i.e. selection by permutation-based significance tests) and thereby avoids a potential bias in rpart, otherwise they seem similar; e.g. conditional inference trees can be used as base learners for Random Forests. This is about as far as I can get. For more information, you really need to read the papers. 
Note that I strongly recommend that you really know what you're doing when you want to apply any kind of statistical analysis.
Conditional inference trees vs traditional decision trees For what it's worth: both rpart and ctree recursively perform univariate splits of the dependent variable based on values on a set of covariates. rpart and related algorithms usually employ informatio
1,771
What are examples where a "naive bootstrap" fails?
If the quantity of interest, usually a functional of a distribution, is reasonably smooth and your data are i.i.d., you're usually in pretty safe territory. Of course, there are other circumstances when the bootstrap will work as well.
What it means for the bootstrap to "fail"
Broadly speaking, the purpose of the bootstrap is to construct an approximate sampling distribution for the statistic of interest. It's not about actual estimation of the parameter. So, if the statistic of interest (under some rescaling and centering) is $\newcommand{\Xhat}{\hat{X}_n}\Xhat$ and $\Xhat \to X_\infty$ in distribution, we'd like our bootstrap distribution to converge to the distribution of $X_\infty$. If we don't have this, then we can't trust the inferences made. The canonical example of when the bootstrap can fail, even in an i.i.d. framework, is when trying to approximate the sampling distribution of an extreme order statistic. Below is a brief discussion.
Maximum order statistic of a random sample from a $\;\mathcal{U}[0,\theta]$ distribution
Let $X_1, X_2, \ldots$ be a sequence of i.i.d. uniform random variables on $[0,\theta]$. Let $\newcommand{\Xmax}{X_{(n)}} \Xmax = \max_{1\leq k \leq n} X_k$. The distribution of $\Xmax$ is $$ \renewcommand{\Pr}{\mathbb{P}}\Pr(\Xmax \leq x) = (x/\theta)^n \>. $$ (Note that by a very simple argument, this actually also shows that $\Xmax \to \theta$ in probability, and even almost surely, if the random variables are all defined on the same space.) An elementary calculation yields $$ \Pr( n(\theta - \Xmax) \leq x ) = 1 - \Big(1 - \frac{x}{\theta n}\Big)^n \to 1 - e^{-x/\theta} \>, $$ or, in other words, $n(\theta - \Xmax)$ converges in distribution to an exponential random variable with mean $\theta$.
Now, we form a (naive) bootstrap estimate of the distribution of $n(\theta - \Xmax)$ by resampling $X_1, \ldots, X_n$ with replacement to get $X_1^\star,\ldots,X_n^\star$ and using the distribution of $n(\Xmax - \Xmax^\star)$ conditional on $X_1,\ldots,X_n$. But, observe that $\Xmax^\star = \Xmax$ with probability $1 - (1-1/n)^n \to 1 - e^{-1}$, and so the bootstrap distribution has a point mass at zero even asymptotically despite the fact that the actual limiting distribution is continuous. More explicitly, though the true limiting distribution is exponential with mean $\theta$, the limiting bootstrap distribution places a point mass at zero of size $1 - e^{-1} \approx 0.632$ independent of the actual value of $\theta$. By taking $\theta$ sufficiently large, we can make the probability of the true limiting distribution arbitrarily small for any fixed interval $[0,\varepsilon)$, yet the bootstrap will (still!) report that there is at least probability 0.632 in this interval! From this it should be clear that the bootstrap can behave arbitrarily badly in this setting. In summary, the bootstrap fails (miserably) in this case. Things tend to go wrong when dealing with parameters at the edge of the parameter space.
An example from a sample of normal random variables
There are other similar examples of the failure of the bootstrap in surprisingly simple circumstances. Consider a sample $X_1, X_2, \ldots$ from $\mathcal{N}(\mu,1)$ where the parameter space for $\mu$ is restricted to $[0,\infty)$. The MLE in this case is $\newcommand{\Xbar}{\bar{X}}\Xhat = \max(\bar{X},0)$. Again, we use the bootstrap estimate $\Xhat^\star = \max(\Xbar^\star, 0)$. Again, it can be shown that the distribution of $\sqrt{n}(\Xhat^\star - \Xhat)$ (conditional on the observed sample) does not converge to the same limiting distribution as $\sqrt{n}(\Xhat - \mu)$.
Exchangeable arrays
Perhaps one of the most dramatic examples is for an exchangeable array.
Let $\newcommand{\bm}[1]{\mathbf{#1}}\bm{Y} = (Y_{ij})$ be an array of random variables such that, for every pair of permutation matrices $\bm{P}$ and $\bm{Q}$, the arrays $\bm{Y}$ and $\bm{P} \bm{Y} \bm{Q}$ have the same joint distribution. That is, permuting rows and columns of $\bm{Y}$ keeps the distribution invariant. (You can think of a two-way random effects model with one observation per cell as an example, though the model is much more general.) Suppose we wish to estimate a confidence interval for the mean $\mu = \mathbb{E}(Y_{ij}) = \mathbb{E}(Y_{11})$ (due to the exchangeability assumption described above, the means of all the cells must be the same). McCullagh (2000) considered two different natural (i.e., naive) ways of bootstrapping such an array. Neither of them gets the asymptotic variance for the sample mean correct. He also considers some examples of a one-way exchangeable array and linear regression.
References
Unfortunately, the subject matter is nontrivial, so none of these are particularly easy reads.
P. Bickel and D. Freedman, Some asymptotic theory for the bootstrap. Ann. Stat., vol. 9, no. 6 (1981), 1196–1217.
D. W. K. Andrews, Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space, Econometrica, vol. 68, no. 2 (2000), 399–405.
P. McCullagh, Resampling and exchangeable arrays, Bernoulli, vol. 6, no. 2 (2000), 285–301.
E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses, 3rd ed., Springer (2005). [Chapter 15: General Large Sample Methods]
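The point mass in the uniform-maximum example is easy to see in a quick simulation. The following Python sketch (my own illustration, not part of the original answer) resamples a uniform sample with replacement and records how often the bootstrap maximum coincides with the observed maximum; the theoretical limit is $1 - e^{-1} \approx 0.632$, regardless of $\theta$:

```python
import random

random.seed(0)
n, B = 1000, 2000
x = [random.uniform(0, 1) for _ in range(n)]  # theta = 1 here
x_max = max(x)

# Fraction of bootstrap resamples whose maximum equals the sample maximum.
# This is exactly the bootstrap point mass at zero of n(X_max - X_max*).
hits = sum(max(random.choices(x, k=n)) == x_max for _ in range(B))
point_mass = hits / B

print(round(point_mass, 3))  # close to 1 - 1/e ~ 0.632
```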
1,772
What are examples where a "naive bootstrap" fails?
The following book has a chapter (Ch. 9) devoted to "When Bootstrapping Fails Along with Remedies for Failures": M. R. Chernick, Bootstrap methods: A guide for practitioners and researchers, 2nd ed. Hoboken N.J.: Wiley-Interscience, 2008. The topics are:
Too Small of a Sample Size
Distributions with Infinite Moments
Estimating Extreme Values
Survey Sampling
Data Sequences that Are M-Dependent
Unstable Autoregressive Processes
Long-Range Dependence
1,773
What are examples where a "naive bootstrap" fails?
The naive bootstrap depends on the sample size being large, so that the empirical CDF of the data is a good approximation to the "true" CDF. This ensures that sampling from the empirical CDF is very much like sampling from the "true" CDF. The extreme case is when you have only sampled one data point - bootstrapping achieves nothing here. It will become more and more useless as it approaches this degenerate case. Bootstrapping naively will not necessarily fail in time series analysis (although it may be inefficient) - if you model the series using basis functions of continuous time (such as Legendre polynomials) for a trend component, and sine and cosine functions of continuous time for cyclical components (plus a normal noise error term). Then you just put whatever times you happen to have sampled into the likelihood function. No disaster for bootstrapping here. Any auto-correlation or ARIMA model has a representation in the format above - this model is just easier to use and, I think, to understand and interpret (it is easy to understand cycles in sine and cosine functions, hard to understand the coefficients of an ARIMA model). For example, the auto-correlation function is the inverse Fourier transform of the power spectrum of a time series.
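The degenerate case mentioned above can be sketched in a few lines of Python (my own illustration, not from the original answer): with a single observation every bootstrap resample is identical, so the bootstrap reports zero uncertainty no matter the true spread, while for a reasonably large i.i.d. sample the bootstrap standard error of the mean approaches $\sigma/\sqrt{n}$:

```python
import random
from statistics import mean, pstdev

random.seed(1)

def boot_se_of_mean(sample, B=2000):
    """Bootstrap standard error of the sample mean."""
    means = [mean(random.choices(sample, k=len(sample))) for _ in range(B)]
    return pstdev(means)

# Degenerate case: one observation -> every resample is identical,
# so the bootstrap SE is exactly zero.
print(boot_se_of_mean([3.7]))  # 0.0

# A larger sample: the bootstrap SE approximates sigma/sqrt(n).
sample = [random.gauss(0, 1) for _ in range(400)]
print(boot_se_of_mean(sample))  # close to 1/sqrt(400) = 0.05
```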
1,774
How to annoy a statistical referee?
What particularly irritates me personally is people who clearly used user-written packages for statistical software but don't cite them properly, or at all, thereby failing to give any credit to the authors. Doing so is particularly important when the authors are in academia and their jobs depend on publishing papers that get cited. (Perhaps I should add that, in my field, many of the culprits are not statisticians.)
1,775
How to annoy a statistical referee?
Goodness me, so many things come to mind...
Stepwise regression
Splitting continuous data into groups
Giving p-values but no measure of effect size
Describing data using the mean and the standard deviation without indicating whether the data were more-or-less symmetric and unimodal
Figures without clear captions (are those error bars standard errors of the mean, or standard deviations within groups, or what?)
1,776
How to annoy a statistical referee?
Irene Stratton and colleague published a short paper about a closely related question: Stratton IM, Neil A. How to ensure your paper is rejected by the statistical reviewer. Diabetic Medicine 2005; 22(4):371-373.
1,777
How to annoy a statistical referee?
The code used to generate the simulated results is not provided. After asking for the code, it takes additional work to get it to run on a referee-generated dataset.
1,778
How to annoy a statistical referee?
Plagiarism (theoretical or methodological). My first review was indeed for a paper featuring many unreferenced copy/pastes from a well-established methodological paper published 10 years ago. Just found a couple of interesting papers on this topic: Authorship and plagiarism in science. In the same vein, I find falsification (of data or results) the worst of all.
1,779
How to annoy a statistical referee?
When we ask the authors (1) for a minor comment about an idea we have (in this sense, not as a reason for rejecting the paper, but just to make sure the authors are able to discuss another point of view), or (2) about unclear or contradictory results, and the authors don't really answer in case (1), or the incriminated results in case (2) simply disappear from the manuscript.
1,780
How to annoy a statistical referee?
Confusing p-values and effect size (i.e. stating my effect is large because I have a really tiny p-value). Slightly different than Stephan's answer of excluding effect sizes but giving p-values. I agree you should give both (and hopefully understand the difference!)
1,781
How to annoy a statistical referee?
Not including effect sizes.
P-ing all over the research (I have to credit my favorite grad school professor for that line).
Giving a preposterous number of digits (males gained 3.102019 pounds more than females).
Not including page numbers (that makes it harder to review).
Misnumbering figures and tables.
(As already mentioned - stepwise regression and categorizing continuous variables.)
1,782
How to annoy a statistical referee?
When they don't sufficiently explain their analysis and/or include simple errors that make it difficult to work out what actually was done. This often includes throwing around a lot of jargon, by way of explanation, which is more ambiguous than the author seems to realize and also may be misused.
1,783
How to annoy a statistical referee?
Using causal language to describe associations in observational data when omitted variables are almost certainly a serious concern.
1,784
How to annoy a statistical referee?
Coming up with new words for existing concepts, or, vice versa, using existing terms to denote something different. Some of these terminology differences have long been settled in the literature: longitudinal data in biostatistics vs. panel data in econometrics; cause and effect indicators in sociology vs. formative and reflective indicators in psychology; etc. I still hate them, but at least you can find a few thousand references to each of them in their respective literatures. The most recent one is this whole strand of work on directed acyclic graphs in the causal literature: most, if not all, of the theory of identification and estimation in these was developed by econometricians in the 1950s under the name of simultaneous equations. The term that has a double, if not triple, meaning is "robust", and the different meanings are often contradictory. "Robust" standard errors are not robust to far outliers; moreover, they are not robust against anything except the assumed deviation from the model, and often have dismal small-sample performance. White's standard errors are not robust against serial or cluster correlations; "robust" standard errors in SEM are not robust against misspecifications of the model structure (omitted paths or variables). Just like with the idea of null hypothesis significance testing, it is impossible to point a finger at anybody and say: "You are responsible for confusing several generations of researchers by coining this concept that does not really stand for its name".
1,785
How to annoy a statistical referee?
When authors use the one statistical test they know (in my field, usually a t-test or an ANOVA), ad infinitum, regardless of whether it's appropriate. I recently reviewed a paper where the authors wanted to compare a dozen different treatment groups, so they had done a two-sample t-test for every possible pair of treatments...
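A rough simulation makes the problem concrete. This is only a sketch in Python (with large-n z-tests and known unit variance standing in for t-tests, for simplicity): across all 66 pairwise comparisons among a dozen identical groups, the chance of at least one "significant" result at α = .05 far exceeds the nominal 5%.

```python
import math
import random

random.seed(0)

# With 12 identical groups there are 66 pairwise comparisons; testing each
# at alpha = .05 inflates the family-wise false-positive rate enormously.
def pairwise_false_positive(n_groups=12, n=50, alpha=0.05):
    groups = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_groups)]
    means = [sum(g) / n for g in groups]
    se = math.sqrt(2 / n)  # known sd = 1 in both groups (simplifying assumption)
    for i in range(n_groups):
        for j in range(i + 1, n_groups):
            z = abs(means[i] - means[j]) / se
            p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided
            if p < alpha:
                return True  # at least one false positive in this experiment
    return False

experiments = 400
fwer = sum(pairwise_false_positive() for _ in range(experiments)) / experiments
print(round(fwer, 2))  # far above the nominal 0.05
```

Even though every group is drawn from the same distribution, the large majority of such "experiments" turn up at least one spuriously significant pair, which is exactly why an ANOVA (or a multiplicity correction) is expected here.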
1,786
How to annoy a statistical referee?
Zero consideration of missing data. Many practical applications use data for which there are at least some missing values. This is certainly very true in epidemiology. Missing data presents problems for many statistical methods - including linear models. Missing data with linear models is often dealt with through deletion of cases with any missing data on any covariates. This is a problem, unless data are missing under an assumption that data are Missing Completely At Random (MCAR). Perhaps 10 years ago, it was reasonable to publish results from linear models with no further consideration of missingness. I am certainly guilty of this. However, very good advice on how to deal with missing data with multiple imputation is now widely available, as are statistical packages/models/libraries/etc. to facilitate more appropriate analyses under more reasonable assumptions when missingness is present.
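A small simulation can illustrate the danger. This Python sketch uses an assumed missingness mechanism (the outcome y depends on x, and y is more likely to be missing when x is large, i.e. MAR but not MCAR); complete-case analysis then biases the estimated mean of y downward.

```python
import random

random.seed(1)

# y depends on x; y goes missing preferentially when x is large (MAR, not MCAR),
# so the complete cases are not a random subsample of the population.
n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2 + xi + random.gauss(0, 1) for xi in x]

# Drop y with probability 0.8 whenever x > 0
observed = [yi for xi, yi in zip(x, y) if not (xi > 0 and random.random() < 0.8)]

full_mean = sum(y) / n                   # close to the true mean, 2.0
cc_mean = sum(observed) / len(observed)  # complete-case mean, biased downward
print(round(full_mean, 2), round(cc_mean, 2))
```

The complete-case estimate lands well below the true population mean, even with an enormous sample, which is the kind of result multiple imputation (under reasonable assumptions) is designed to avoid.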
1,787
How to annoy a statistical referee?
Reporting effects that "approached significance" (p < .10, for example) and then writing about them as though they had attained significance at a more stringent and acceptable level. Running multiple Structural Equation Models that were not nested and then writing about them as though they were nested. Taking a well-established analytic strategy and presenting it as though no one had ever thought of using it before. Perhaps this qualifies as plagiarism to the nth degree.
1,788
How to annoy a statistical referee?
I recommend the following two articles: Martin Bland: How to Upset the Statistical Referee This is based on a series of talks given by Martin Bland, along with data from other statistical referees (‘a convenience sample with a low response rate’). It ends with an 11-point list of ‘[h]ow to avoid upsetting the statistical referee’. Stian Lydersen: Statistical review: frequently given comments This recent paper (published 2014/2015) lists the author’s 14 most common review comments, based on approx. 200 statistical reviews of scientific papers (in a particular journal). Each comment has a brief explanation of the problem and instructions on how to properly do the analysis/reporting. The list of cited references is a treasure trove of interesting papers.
1,789
How to annoy a statistical referee?
I'm most (and most frequently) annoyed by "validation" aiming at generalization error of predictive models where the test data is not independent (e.g. typically multiple measurements per patient in the data, with out-of-bootstrap or cross validation splitting measurements, not patients). Even more annoying, papers that give such flawed cross validation results plus an independent test set that demonstrates the overoptimistic bias of the cross validation but not a single word that the design of the cross validation is wrong ... (I'd be perfectly happy if the same data were presented with "we know the cross validation should split patients, but we're stuck with software that doesn't allow this. Therefore we tested a truly independent set of test patients in addition") (I'm also aware that bootstrapping = resampling with replacement usually performs better than cross validation = resampling without replacement. However, we found for spectroscopic data (simulated spectra and slightly artificial model setup but real spectra) that repeated/iterated cross validation and out-of-bootstrap had similar overall uncertainty; oob had more bias but less variance - for reviewing, I'm looking at this from a very pragmatic perspective: repeated cross validation vs. out-of-bootstrap does not matter as long as many papers neither split patient-wise nor report/discuss/mention random uncertainty due to limited test sample size.) Besides being wrong, this also has the side effect that people who do a proper validation often have to defend why their results are so much worse than all those other results in the literature.
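To make the leakage concrete, here is a hedged Python sketch with simulated "patients": the features carry a patient fingerprint but no class information whatsoever, so honest accuracy should be near 50%. Measurement-wise cross validation nevertheless reports near-perfect accuracy, because sibling measurements of each patient land in the test folds.

```python
import random

random.seed(0)

# 20 patients, 10 measurements each. Class labels are random per patient,
# and features contain only a patient "fingerprint" plus noise - no class info.
n_patients, n_per = 20, 10
patients, X, y = [], [], []
for p in range(n_patients):
    center = [random.gauss(0, 1) for _ in range(5)]
    label = random.randint(0, 1)
    for _ in range(n_per):
        patients.append(p)
        X.append([c + random.gauss(0, 0.1) for c in center])
        y.append(label)

def knn_accuracy(train_idx, test_idx):
    # plain 1-nearest-neighbour classifier
    correct = 0
    for i in test_idx:
        nearest = min(train_idx,
                      key=lambda j: sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
        correct += (y[nearest] == y[i])
    return correct / len(test_idx)

idx = list(range(len(X)))

# Naive CV: measurements split at random -> same patient in train AND test
random.shuffle(idx)
folds = [idx[k::5] for k in range(5)]
naive = sum(knn_accuracy([i for i in idx if i not in f], f) for f in folds) / 5

# Patient-wise CV: all measurements of a patient stay in one fold
pfolds = [[i for i in range(len(X)) if patients[i] % 5 == k] for k in range(5)]
grouped = sum(knn_accuracy([i for i in range(len(X)) if i not in set(f)], f)
              for f in pfolds) / 5

print(round(naive, 2), round(grouped, 2))  # naive is wildly overoptimistic
```

The measurement-wise split "validates" a model that has learned nothing about the classes, which is exactly the overoptimistic bias an independent patient-wise test set would expose.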
1,790
How to annoy a statistical referee?
Using Microsoft Word rather than LaTeX.
1,791
How to annoy a statistical referee?
Using "data" in a singular sense. Data ARE, they never is.
1,792
How to annoy a statistical referee?
For me, by far the worst is attributing cause without any proper causal analysis, or when there is otherwise improper causal inference. I also hate it when zero attention is given to how missing data were handled. I see so many papers, too, where the authors simply perform complete-case analysis and make no mention of whether or not the results are generalizable to the population with missing values, or how the population with missing values might be systematically different from the population with complete data.
1,793
Relationship between poisson and exponential distribution
I will use the following notation to be as consistent as possible with the wiki (in case you want to go back and forth between my answer and the wiki definitions for the Poisson and exponential).
$N_t$: the number of arrivals during time period $t$
$X_t$: the time it takes for one additional arrival to arrive, assuming that someone arrived at time $t$
By definition, the following conditions are equivalent:
$(X_t > x) \equiv (N_t = N_{t+x})$
The event on the left captures the event that no one has arrived in the time interval $[t,t+x]$, which implies that our count of the number of arrivals at time $t+x$ is identical to the count at time $t$, which is the event on the right.
By the complement rule, we also have:
$P(X_t \le x) = 1 - P(X_t > x)$
Using the equivalence of the two events that we described above, we can re-write the above as:
$P(X_t \le x) = 1 - P(N_{t+x} - N_t = 0)$
But,
$P(N_{t+x} - N_t = 0) = P(N_x = 0)$
Using the Poisson pmf, where $\lambda$ is the average number of arrivals per time unit and $x$ a quantity of time units, the above simplifies to:
$P(N_{t+x} - N_t = 0) = \frac{(\lambda x)^0}{0!}e^{-\lambda x}$
i.e.
$P(N_{t+x} - N_t = 0) = e^{-\lambda x}$
Substituting into our original equation, we have:
$P(X_t \le x) = 1 - e^{-\lambda x}$
The above is the CDF of an exponential distribution.
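This result can be checked numerically. The sketch below (Python, with an assumed discretization step dt) builds the Poisson process from its defining property, a hit in each tiny interval with probability λ·dt, and compares the empirical probability of the waiting time falling below x with 1 − e^{−λx}.

```python
import math
import random

random.seed(0)

# Discretize time into steps dt; in each step a hit occurs with probability
# lam*dt. The waiting time to the first hit should then satisfy
#   P(X <= x) = 1 - exp(-lam * x)
lam, dt, x, trials = 2.0, 0.001, 0.7, 10_000

def waiting_time():
    t = 0.0
    while random.random() >= lam * dt:  # no hit in this tiny interval
        t += dt
    return t

empirical = sum(waiting_time() <= x for _ in range(trials)) / trials
theory = 1 - math.exp(-lam * x)
print(round(empirical, 3), round(theory, 3))  # should be close
```

Shrinking dt makes the geometric waiting time of the discrete scheme converge to the exponential distribution derived above.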
1,794
Relationship between poisson and exponential distribution
For a Poisson process, hits occur at random independent of the past, but with a known long term average rate $\lambda$ of hits per time unit. The Poisson distribution would let us find the probability of getting some particular number of hits. Now, instead of looking at the number of hits, we look at the random variable $L$ (for Lifetime), the time you have to wait for the first hit. The probability that the waiting time is more than a given time value is $P(L \gt t) = P(\text{no hits in time t})=\frac{\Lambda^0e^{-\Lambda}}{0!}=e^{-\lambda t}$ (by the Poisson distribution, where $\Lambda = \lambda t$). $P(L \le t) = 1 - e^{-\lambda t}$ (the cumulative distribution function). We can get the density function by taking the derivative of this: $$f(t) = \begin{cases} \lambda e^{-\lambda t} & \mbox{for } t \ge 0 \\ 0 & \mbox{for } t \lt 0 \end{cases}$$ Any random variable that has a density function like this is said to be exponentially distributed.
1,795
Relationship between poisson and exponential distribution
The other answers do a good job of explaining the math. I think it helps to consider a physical example. When I think about a Poisson process, I always come back to the idea of cars passing on a road. Lambda is the average number of cars that pass per unit of time, let's say 60/hour (lambda = 60). We know, however, that the actual number will vary - some days more, some days less. The Poisson Distribution allows us to model this variability. Now, an average of 60 cars per hour equates to an average of 1 car passing by each minute. Again though, we know there's going to be variability in the amount of time between arrivals: Sometimes more than 1 minute; other times less. The Exponential Distribution allows us to model this variability. All that being said, cars passing by on a road won't always follow a Poisson Process. If there's a traffic signal just around the corner, for example, arrivals are going to be bunched up instead of steady. On an open highway, a slow tractor-trailer may hold up a long line of cars, again causing bunching. In these cases, the Poisson Distribution may still work okay for longer time periods, but the exponential will fail badly in modeling arrival times. Note also that there is huge variability based on time of day: busier during commuting times; much slower at 3am. Make sure that your lambda is reflective of the specific time period you are considering.
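This picture is easy to simulate. The Python sketch below assumes the idealized Poisson setting (rate 60 cars/hour, exponential gaps with mean one minute, no traffic lights or bunching): summing gaps over many hours and counting cars should give counts with mean ≈ 60 and, as the Poisson distribution predicts, variance ≈ 60 as well.

```python
import random

random.seed(0)

# Cars at an average rate of 60/hour: sum exponential gaps (mean 1/60 hour)
# until the hour is full, and count the cars that fit.
rate, hours = 60.0, 5_000
counts = []
for _ in range(hours):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)  # gap to the next car, in hours
        if t > 1.0:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / hours
var = sum((c - mean) ** 2 for c in counts) / hours
print(round(mean, 1), round(var, 1))  # both should be near 60
```

With bunched arrivals (traffic lights, slow trucks) the variance would drift away from the mean, which is one quick diagnostic for whether the Poisson model is plausible.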
1,796
Relationship between poisson and exponential distribution
The Poisson Distribution is normally derived from the Binomial Distribution (both discrete). This you'll find on Wiki. However, the Poisson distribution (discrete) can also be derived from the Exponential Distribution (continuous). I've added the proof to Wiki (link below): https://en.wikipedia.org/wiki/Talk:Poisson_distribution/Archive_1#Derivation_of_the_Poisson_Distribution_from_the_Exponential_Distribution
1,797
Relationship between poisson and exponential distribution
While the other answers here go into more explanatory detail, I am going to give you a simple summary of the equation relating a set of IID exponential random variables and a generated Poisson random variable. A Poisson random variable with parameter $\lambda > 0$ can be generated by counting the number of sequential events occurring in time $\lambda/\eta$ where the times between the events are independent exponential random variables with rate $\eta$. (Setting $\eta=1$ gives you a simple way to generate a Poisson random variable from a series of IID unit exponential random variables.) This means that if $E_1,E_2,E_3,... \sim \text{Exp}(\eta)$ with rate parameter $\eta>0$, and $K \sim \text{Pois}(\lambda)$ with rate parameter $\lambda>0$ then you have: $$\mathbb{P}(K \geqslant k) = \mathbb{P} \Big( E_1+\cdots+E_k \leqslant \frac{\lambda}{\eta} \Big).$$
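A minimal Python sketch of this recipe (taking η = 1, so the window has length λ/η = λ): count unit-rate exponential gaps until their running sum exceeds λ; the resulting count's mean and variance should both be close to λ.

```python
import random

random.seed(0)

# Generate Poisson(lam) variates by counting how many unit-rate exponential
# gaps fit into a window of length lam (i.e. K = max k with E1+...+Ek <= lam).
def poisson(lam):
    k, total = 0, random.expovariate(1.0)
    while total <= lam:
        k += 1
        total += random.expovariate(1.0)
    return k

lam, n = 4.0, 50_000
draws = [poisson(lam) for _ in range(n)]
mean = sum(draws) / n
var = sum((d - mean) ** 2 for d in draws) / n
print(round(mean, 2), round(var, 2))  # both should be close to lam = 4
```

The equidispersion (mean ≈ variance ≈ λ) is the Poisson signature, and the counting construction is exactly the displayed identity with η = 1.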
1,798
Generate a random variable with a defined correlation to an existing variable(s)
Here's another one: for vectors with mean 0, their correlation equals the cosine of their angle. So one way to find a vector $x$ with exactly the desired correlation $r$, corresponding to an angle $\theta$:
get fixed vector $x_1$ and a random vector $x_2$
center both vectors (mean 0), giving vectors $\dot{x}_{1}$, $\dot{x}_{2}$
make $\dot{x}_{2}$ orthogonal to $\dot{x}_{1}$ (projection onto orthogonal subspace), giving $\dot{x}_{2}^{\perp}$
scale $\dot{x}_{1}$ and $\dot{x}_{2}^{\perp}$ to length 1, giving $\bar{x}_{1}$ and $\bar{x}_{2}^{\perp}$
$\bar{x}_{2}^{\perp} + (1/\tan(\theta)) \cdot \bar{x}_{1}$ is the vector whose angle to $\bar{x}_{1}$ is $\theta$, and whose correlation with $\bar{x}_{1}$ thus is $r$. This is also the correlation to $x_1$ since linear transformations leave the correlation unchanged.
Here is the code:

n     <- 20                    # length of vector
rho   <- 0.6                   # desired correlation = cos(angle)
theta <- acos(rho)             # corresponding angle
x1    <- rnorm(n, 1, 1)        # fixed given data
x2    <- rnorm(n, 2, 0.5)      # new random data
X     <- cbind(x1, x2)         # matrix
Xctr  <- scale(X, center=TRUE, scale=FALSE)   # centered columns (mean 0)
Id    <- diag(n)                              # identity matrix
Q     <- qr.Q(qr(Xctr[ , 1, drop=FALSE]))     # QR-decomposition, just matrix Q
P     <- tcrossprod(Q)                        # = Q Q'; projection onto space defined by x1
x2o   <- (Id-P) %*% Xctr[ , 2]                # x2ctr made orthogonal to x1ctr
Xc2   <- cbind(Xctr[ , 1], x2o)               # bind to matrix
Y     <- Xc2 %*% diag(1/sqrt(colSums(Xc2^2))) # scale columns to length 1
x <- Y[ , 2] + (1 / tan(theta)) * Y[ , 1]     # final new vector
cor(x1, x)                                    # check correlation = rho

For the orthogonal projection $P$, I used the $QR$-decomposition to improve numerical stability, since then simply $P = Q Q'$.
1,799
Generate a random variable with a defined correlation to an existing variable(s)
I will describe the most general possible solution. Solving the problem in this generality allows us to achieve a remarkably compact software implementation: just two short lines of R code suffice. At the end is a generalization to multiple $Y$ vectors, with working code.

Pick a vector $X$, of the same length as $Y$, according to any distribution you like. Let $Y^\perp$ be the residuals of the least squares regression of $X$ against $Y$: this removes the $Y$ component from $X,$ producing a vector orthogonal to $Y.$ By adding back a suitable multiple of $Y$ to $Y^\perp$, we may produce a vector having any desired correlation $\rho$ with $Y$ (except $\rho=\pm 1$, but then $\pm Y$ works). Up to an arbitrary additive constant and positive multiplicative constant--which you are free to choose in any way--the solution is

$$X_{Y;\rho} = \rho\, \operatorname{SD}(Y^\perp)Y + \sqrt{1-\rho^2}\,\operatorname{SD}(Y)Y^\perp.$$

("$\operatorname{SD}$" stands for any calculation proportional to a standard deviation.)

Here is working R code. If you don't supply $X$, the code will draw its values randomly from the multivariate standard Normal distribution.

    complement <- function(y, rho, x) {
      if (missing(x)) x <- rnorm(length(y)) # Optional: supply a default if `x` is not given
      y.perp <- residuals(lm(x ~ y))
      rho * sd(y.perp) * y + y.perp * sd(y) * sqrt(1 - rho^2)
    }

To illustrate, I generated a vector $Y$ with $50$ components and produced various $Z=X_{Y;\rho}$ having specified correlations $\rho$ with this $Y$. They were all created with the same starting vector $X=(1,2,\ldots, 50)$. Here are their $(Y,Z)$ scatterplots. The "rugplots" at the bottom of each panel show the common $Y$ vector. There's a remarkable similarity among the plots, isn't there :-).

If you would like to experiment, modify this code that produced the data and the figure. (I didn't bother to use the freedom to shift and scale the results, which are easy operations.)

    y <- rnorm(50, sd=10)
    x <- 1:50 # Optional
    rho <- seq(0, 1, length.out=6) * rep(c(-1,1), 3)
    X <- data.frame(z=as.vector(sapply(rho, function(rho) complement(y, rho, x))),
                    rho=ordered(rep(signif(rho, 2), each=length(y))),
                    y=rep(y, length(rho)))

    library(ggplot2)
    ggplot(X, aes(y, z, group=rho)) +
      geom_smooth(method="lm", color="Black") +
      geom_rug(sides="b") +
      geom_point(aes(fill=rho), alpha=1/2, shape=21) +
      facet_wrap(~ rho, scales="free")

BTW, this method readily generalizes to more than one $Y$: if it's mathematically possible, it will find an $X_{Y_1,Y_2,\ldots,Y_k;\rho_1,\rho_2,\ldots,\rho_k}$ having specified correlations with an entire set of $Y_i$. Just use ordinary least squares to take out the effects of all the $Y_i$ from $X$ and form a suitable linear combination of the $Y_i$ and the residuals. (It helps to do this in terms of a dual basis for $Y$, which is obtained by computing a pseudo-inverse. The following code uses the SVD of $Y$ to accomplish that.)

Here's a sketch of the algorithm in R, where the $Y_i$ are given as columns of a matrix y:

    y <- scale(y)             # Makes computations simpler
    e <- residuals(lm(x ~ y)) # Take out the columns of matrix `y`
    y.dual <- with(svd(y), (n-1)*u %*% diag(ifelse(d > 0, 1/d, 0)) %*% t(v))
    sigma2 <- c((1 - rho %*% cov(y.dual) %*% rho) / var(e))
    return(y.dual %*% rho + sqrt(sigma2)*e)

Another thread provides a detailed explanation of each line of code. The following is a more complete implementation for those who would like to experiment.

    complement <- function(y, rho, x, threshold=1e-12) {
      #
      # Process the arguments.
      #
      if(!is.matrix(y)) y <- matrix(y, ncol=1)
      d <- ncol(y)
      n <- nrow(y)
      y <- scale(y, center=FALSE) # Makes computations simpler
      if (missing(x)) x <- rnorm(n)
      #
      # Remove the effects of `y` on `x`.
      #
      e <- residuals(lm(x ~ y))
      #
      # Calculate the coefficient `sigma` of `e` so that the correlation of
      # `y` with the linear combination y.dual %*% rho + sigma*e is the desired
      # vector.
      #
      y.dual <- with(svd(y), (n-1)*u %*% diag(ifelse(d > threshold, 1/d, 0)) %*% t(v))
      sigma2 <- c((1 - rho %*% cov(y.dual) %*% rho) / var(e))
      #
      # Return this linear combination.
      #
      if (sigma2 >= 0) {
        sigma <- sqrt(sigma2)
        z <- y.dual %*% rho + sigma*e
      } else {
        warning("Correlations are impossible.")
        z <- rep(0, n)
      }
      return(z)
    }
    #
    # Set up the problem.
    #
    d <- 3    # Number of given variables
    n <- 50   # Dimension of all vectors
    x <- 1:n  # Optionally: specify `x` or draw from any distribution
    y <- matrix(rnorm(d*n), ncol=d) # Create `d` original variables in any way
    rho <- c(0.5, -0.5, 0)          # Specify the correlations
    #
    # Verify the results.
    #
    z <- complement(y, rho, x)
    cbind('Actual correlations' = cor(y, z),
          'Target correlations' = rho)
    #
    # Display them.
    #
    colnames(y) <- paste0("y.", 1:d)
    colnames(z) <- "z"
    pairs(cbind(z, y))
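The single-$Y$ formula is easy to check outside R as well. Below is a minimal pure-Python sketch (my own translation, not from the answer; the name `complement_py` is hypothetical, and plain simple-regression residuals stand in for R's `lm`):

```python
import math
import random

def sd(v):
    # sample standard deviation
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

def pearson(a, b):
    # sample Pearson correlation
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def complement_py(y, rho, x):
    # residuals of the least-squares regression of x on y (with intercept):
    # this removes the y component of x, leaving a vector orthogonal to y
    n = len(y)
    my, mx = sum(y) / n, sum(x) / n
    b = (sum((yi - my) * (xi - mx) for yi, xi in zip(y, x))
         / sum((yi - my) ** 2 for yi in y))
    e = [xi - (mx + b * (yi - my)) for yi, xi in zip(y, x)]
    # add back a multiple of y: rho*SD(e)*y + sqrt(1-rho^2)*SD(y)*e
    se, sy, k = sd(e), sd(y), math.sqrt(1 - rho ** 2)
    return [rho * se * yi + k * sy * ei for yi, ei in zip(y, e)]

random.seed(2)
y = [random.gauss(0, 10) for _ in range(50)]
x = list(range(1, 51))
for rho in (-0.8, 0.0, 0.5):
    z = complement_py(y, rho, x)
    print(rho, abs(pearson(y, z) - rho) < 1e-9)  # exact up to rounding
```

Because `e` is exactly orthogonal to the centered `y`, the achieved correlation matches each target exactly, mirroring the behavior of the two-line R function.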
1,800
Generate a random variable with a defined correlation to an existing variable(s)
Here's another computational approach (the solution is adapted from a forum post by Enrico Schumann). According to Wolfgang (see comments), this is computationally identical to the solution proposed by ttnphns. In contrast to caracal's solution, it does not produce a sample with the exact correlation of $\rho$, but two vectors whose population correlation is equal to $\rho$.

The following function computes a bivariate sample distribution drawn from a population with a given $\rho$. It either computes two random variables, or it takes one existing variable (passed as parameter x) and creates a second variable with the desired correlation:

    # returns a data frame of two variables which correlate with a population correlation of rho
    # If desired, one of both variables can be fixed to an existing variable by specifying x
    getBiCop <- function(n, rho, mar.fun=rnorm, x = NULL, ...) {
      if (!is.null(x)) {X1 <- x} else {X1 <- mar.fun(n, ...)}
      if (!is.null(x) & length(x) != n) warning("Variable x does not have the same length as n!")

      C <- matrix(rho, nrow = 2, ncol = 2)
      diag(C) <- 1

      C <- chol(C)

      X2 <- mar.fun(n)
      X  <- cbind(X1, X2)

      # induce correlation (does not change X1)
      df <- X %*% C

      ## if desired: check results
      #all.equal(X1, X[,1])
      #cor(X)

      return(df)
    }

The function can also use non-normal marginal distributions by adjusting the parameter mar.fun. Note, however, that fixing one variable only seems to work with a normally distributed variable x! (which might relate to Macro's comment).

Also note that the "small correction factor" from the original post was removed, as it seems to bias the resulting correlations, at least in the case of Gaussian distributions and Pearson correlations (also see comments).
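For the bivariate case the Cholesky step can be done by hand: the upper-triangular factor of $\begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}$ makes the second output column $\rho X_1 + \sqrt{1-\rho^2}\,X_2$. A rough pure-Python check of this reduction (illustrative only; `get_bicop` is my own name, and the sample correlation only approximates the population value $\rho$):

```python
import math
import random

def pearson(a, b):
    # sample Pearson correlation
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def get_bicop(n, rho, rng=random):
    # independent standard-normal draws (x1, x2) are mapped to
    # (x1, rho*x1 + sqrt(1-rho^2)*x2), whose population correlation is rho
    x1 = [rng.gauss(0, 1) for _ in range(n)]
    x2 = [rng.gauss(0, 1) for _ in range(n)]
    k = math.sqrt(1 - rho ** 2)
    z = [rho * a + k * b for a, b in zip(x1, x2)]
    return x1, z

random.seed(3)
x1, z = get_bicop(20000, 0.6)
r = pearson(x1, z)
print(r)  # near 0.6, but not exact: only the population correlation is fixed
```

With $n = 20000$ the sampling standard deviation of $r$ is roughly $(1-\rho^2)/\sqrt{n} \approx 0.005$, which is why the check below uses a loose tolerance rather than exact equality.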