Dataset schema: source_id (int64, 1 to 4.64M), question (string, length 0 to 28.4k), response (string, length 0 to 28.8k), metadata (dict)
2,253
I would like to reopen the discussion regarding CV and the Data Science beta. This question is related to this previous one: Data Science SE, but now with a better view of where Data Science seems to be going. I was inspired to make this post because of https://stats.stackexchange.com/questions/126403/crossvalidated-vs-datascience-what-is-different . The difference between CV and Data Science appears to be that CV focuses on data analysis theory (statistics, machine learning and, to a lesser extent, math) while Data Science focuses on (big) data analysis in practice (software frameworks, databases, languages). At least on paper.

CV's mission statement: Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.

Data Science's mission statement: Data Science Stack Exchange is a question and answer site for Data science professionals, Machine Learning specialists, and those interested in learning more about the field.

These mission statements are pretty vague, but even there one can immediately see tremendous overlap. I think historically the need for Data Science arose because CV rejected 'implementation-related' questions. That may have been a mistake. I am not convinced that these should be separate sites, especially considering the evolution of data science. Maybe CV focuses too much on keeping statistics independent of implementation.

Data Science is getting a lot of theoretical questions which should probably have ended up here (most are in fact already answered here). Some examples:

- Consequence of feature scaling
- Please enlighten me with Platt's SMO algorithm for SVM
- Where to start on neural networks
- Skewed multi-class data
- Choosing a learning rate
- K-means clustering for mixed numeric and categorical data
- Advantages of AUC vs standard accuracy

The list goes on. Judging by the list given at Data Science meta, such questions would fit on both venues. The idea that (a non-trivial number of) questions may well end up on either site is in direct contradiction to the overall mission of Stack Exchange sites (e.g. to provide a single place to answer questions that cannot be found elsewhere). On CV we are (fairly) consistently closing questions that belong on Data Science, while they appear not to be doing the same. Essentially this boils down to 'when in doubt, ask at Data Science'. This is just an observation; don't consider it a complaint or accusation. It seems to me that CV needs better PR, at least.

My question: is having two small, heavily intertwined sites better than one large one related to 'data analysis' in all its forms? Stack Overflow has shown that a single go-to point for programming questions has worked tremendously well, so maybe the equivalent for data analysis has its merits? From a new user's perspective, it would make a lot more sense.
My question: is having two small, heavily intertwined sites better than one large one related to 'data analysis' in all its forms?

For what it's worth, my answer is 'no'. If anything, I think Data Science should be merged into Cross Validated. I can respect that some people would want to keep the engineering and theory separate (I'll let them make that case); however:

- It is possible to have a single site which deals with different aspects of the same area. Tags have a role to play.
- It's kind of annoying having to check two sites, have two sets of rep etc.
- There will be wasteful duplication and dilution of conversations.
- Computational tractability is often driving the choice of theories to pursue. IMO there is a complementarity between theoretical questions and engineering issues.
- It's going to get dull seeing 'is this one for data science?' in comments fields.
- One could argue that it is inconsistent that on CV we will sometimes provide R/matlab code, but seem to shy away from larger systems engineering questions or less familiar software.
{ "source": [ "https://stats.meta.stackexchange.com/questions/2253", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/25433/" ] }
2,468
For the past couple of months I've had access to the review queue and haven't really got to grips with it yet. I find the design rather unintuitive and clunky - when looking at a post in the review queue, most of the context of whatever you are looking at is cut out, and if editing a post I can either see only the preview of the rendered output, or only the text entry box. So most of the time I just leave the review screen and go to the thread directly, where it seems easier to see what is going on and then take whatever action I feel appropriate. I'm sure this isn't the "right" way of doing it - no review action gets recorded against my account, and I wonder whether for several of the actions this even removes things from the queue. I know that actions for suggested edits work, but these seem to disappear by the time I've made a decision on them! Can anybody give me hints on how to use the tool more effectively? I notice, for instance, that regular patrollers of first or low quality posts often leave a stock comment (e.g. regarding a question that is entirely coding, or is self-study but not tagged as such or not showing an attempt, or an answer which is just a link with no description). Are these produced by template, through the review mechanism itself, or is there simply a list to copy and paste from?
A compendium of comments I've found useful. Feel free to add more. Questions Self-study Is this a question from a course or textbook? If so, please add the self-study tag & read its wiki . Is this a question from a course or textbook? If so, please add the [tag:self-study] tag & read its [wiki](https://stats.stackexchange.com/tags/self-study/info). Please add the self-study tag & read its wiki . Please add the [tag:self-study] tag & read its [wiki](https://stats.stackexchange.com/tags/self-study/info). Please add the self-study tag & read its wiki . Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. Please add the [tag:self-study] tag & read its [wiki](https://stats.stackexchange.com/tags/self-study/info). Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. Please type your question as text, do not just post a photograph or screenshot (see here ). When you re-type the question, add the self-study tag & read its wiki . Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. Please type your question as text, do not just post a photograph or screenshot (see [here](https://stats.meta.stackexchange.com/a/3176/)). When you retype the question, add the [tag:self-study] tag & read [its wiki](https://stats.stackexchange.com/tags/self-study/info). Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. Reproducible examples (Link is specifically for R-related questions) Please add a reproducible example for people to work with. Please add a [reproducible example](https://stackoverflow.com/q/5963269/) for people to work with. I'm voting to close this question as off-topic because it is about how to use R without a reproducible example. I'm voting to close this question as [off-topic](https://stats.stackexchange.com/help/on-topic) because it is about how to use R without a reproducible example. (For coding questions other than R) Please add a reproducible example for people to work with. Please add a [reproducible example](https://stackoverflow.com/help/mcve) for people to work with. I'm voting to close this question as off-topic because it is about how to use without a reproducible example. I'm voting to close this question as [off-topic](https://stats.stackexchange.com/help/on-topic) because it is about how to use <!--REPLACE_ME--> without a reproducible example. Ambiguous code / statistical question Questions solely about how software works are off-topic here, but you may have a real statistical question buried here. You may want to edit your question to clarify the underlying statistical issue. You may find that when you understand the statistical concepts involved, the software-specific elements are self-evident or at least easy to get from the documentation. 
Questions solely about how software works are [off-topic](https://stats.stackexchange.com/help/on-topic) here, but you may have a real statistical question buried here. You may want to edit your question to clarify the underlying statistical issue. You may find that when you understand the statistical concepts involved, the software-specific elements are self-evident or at least easy to get from the documentation. No need to sign off/give thanks etc Welcome to CV. Note that your username, identicon, & a link to your user page are automatically added to every post you make, so there is no need to sign your posts. In fact, we prefer you don't. Welcome to CV. Note that your username, identicon, & a link to your user page are automatically added to every post you make, so there is no need to sign your posts. In fact, we prefer you don't. On this site there's no need to say "thank you" at the end of your post - it might seem rude at first, but it's part of the philosophy of this site to "Ask questions, get answers, no distractions", and it means future readers of your question don't need to read through the pleasantries. On this site there's no need to say "thank you" at the end of your post - it might seem rude at first, but it's part of [the philosophy of this site](https://stats.stackexchange.com/help/behavior) to "Ask questions, get answers, no distractions", and it means future readers of your question don't need to read through the pleasantries. New information in comments Please do not give new information only in comments, edit your question to add the new information. We want posts to be self-contained, comments can be deleted, and anyhow, information in comments are not well organized. Also, many people do not read comments. Please do not give new information only in comments, edit your question to add the new information. We want posts to be self-contained, comments can be deleted, and anyhow, information in comments are not well organized. Also, many people do not read comments. Demands fast answer Please don't say your question is urgent or ask people to answer quickly. Remember that you are asking strangers to volunteer their time to help you for free. People will respond at the rate that is comfortable for them. Please don't say your question is urgent or ask people to answer quickly. Remember that you are asking strangers to volunteer their time to help you for free. People will respond at the rate that is comfortable for them. Critical information for question behind link Please paste in whatever context is necessary to understand & answer your question. We want this thread to remain valuable even if the link goes dead. Please paste in whatever context is necessary to understand & answer your question. We want this thread to remain valuable even if the link goes dead. Too broad Questions in the SE system are supposed to be narrow & concrete such that they can be given a definitively correct, factual answer in at most a few paragraphs. This isn't a site for discussions or opinions. Questions in the SE system are supposed to be narrow & concrete such that they can be given a definitively correct, factual answer in at most a few paragraphs. This isn't a site for discussions or opinions. Refer to textbook This question is very broad, and I believe you would profit from reading an introductory level textbook. We have a helpful list of free statistical textbooks. If afterwards you still have more specific questions, then please do ask them here. 
If you already have read such a textbook, please edit your question to make it more specific. Thank you! This question is very broad, and I believe you would profit from reading an introductory level textbook. We have a helpful list of [free statistical textbooks.](https://stats.stackexchange.com/q/170/). If afterwards you still have more specific questions, then please do ask them here. If you already *have* read such a textbook, please edit your question to make it more specific. Thank you! Specifically for forecasting: This question is very broad, and I believe you would profit from reading an introductory level textbook, e.g., the free online Forecasting: Principles and Practice by Hyndman & Athanasopoulos . If after reading this you still have more specific questions, then please do ask them here. If you already have read such a textbook, please edit your question to make it more specific. Thank you! This question is very broad, and I believe you would profit from reading an introductory level textbook, e.g., the free online [*Forecasting: Principles and Practice* by Hyndman & Athanasopoulos](https://otexts.org/fpp2/). If after reading this you still have more specific questions, then please do ask them here. If you already *have* read such a textbook, please edit your question to make it more specific. Thank you! ( More reasoning behind this comment can be found in this Meta answer. ) Specifically for neural networks: This question is very broad, and I believe you would profit from reading an introductory level textbook. We have a helpful list of textbooks and courses about neural networks. If afterwards you still have more specific questions, then please do ask them here. If you already have read such a textbook, please edit your question to make it more specific. Thank you! This question is very broad, and I believe you would profit from reading an introductory level textbook. We have a helpful list of [textbooks and courses](https://stats.stackexchange.com/q/226911/) about neural networks. If afterwards you still have more specific questions, then please do ask them here. If you already *have* read such a textbook, please edit your question to make it more specific. Thank you! Has been discussed extensively before Similar questions have been discussed multiple times before. Please search the site , noting the tips on advanced search options, and tell us what you found and why it didn’t meet your needs. This demonstrates that you’ve taken the time to try to help yourself, it saves us from reiterating obvious answers, and above all, it helps you get a more specific and relevant answer! Similar questions have been discussed multiple times before. Please [search the site](https://stats.stackexchange.com/search), noting the [tips](https://stats.stackexchange.com/help/searching) on advanced search options, and tell us what you found and why it didn’t meet your needs. This demonstrates that you’ve taken the time to try to help yourself, it saves us from reiterating obvious answers, and above all, it helps you get a more specific and relevant answer! Duplicate question I think you will find the information you need in the linked thread. Please read it. If it isn't what you want / you still have a question afterwards, come back here & edit your question to state what you learned & what you still need to know. Then we can provide the information you need without just duplicating material elsewhere that already didn't help you. 
I think you will find the information you need in the linked thread. Please read it. If it isn't what you want / you still have a question afterwards, come back here & edit your question to state what you learned & what you still need to know. Then we can provide the information you need without just duplicating material elsewhere that already didn't help you. Off-topic (only about software) Questions that are only about software (e.g. error messages, code or packages, etc.) are generally off topic here. If you have a substantive machine learning or statistical question, please edit to clarify. Questions that are only about software (e.g. error messages, code or packages, etc.) are generally off topic here. If you have a substantive machine learning or statistical question, please edit to clarify. Vandalism Please do not vandalize your question. When you posted on SE, you gave up exclusive ownership of the content under CC BY-SA 4.0 . If there are no answers, you may delete your own question (see here ): just click the faint gray 'delete' at lower left (your account needs to be registered for this). Otherwise, the thread will remain according to SE's rules. Please do not vandalize your question. When you posted on SE, you gave up ownership of the content under [CC BY-SA 4.0](https://stats.stackexchange.com/help/licensing). If there are no answers, you may delete your own question (see [here](https://stats.stackexchange.com/help/what-to-do-instead-of-deleting-question) ): just click the faint gray 'delete' at lower left (your account needs to be registered for this). Otherwise, the thread will remain according to SE's rules. XY Problem It sounds to me like the problem you're trying to solve is <X> , and you're wondering if <Y> is a good way to go about it. Is that fair? Because if your real question is " <X> ?" then I would suggest only asking about that. As its written right now, the question appears to be an XY Problem . It sounds to me like the problem you're trying to solve is `<X>`, and you're wondering if `<Y>` is a good way to go about it. Is that fair? Because if your real question is "`<X>`?" then I would suggest only asking about that. As its written right now, the question appears to be an [XY Problem](http://xyproblem.info/). Cross posted question Please don't cross post on multiple SE sites. Cross posting is against SE policy & wastes a lot of people's time. Decide which site you want your question on & only post there (or delete the copies elsewhere, as appropriate). Please don't cross post on multiple SE sites. Cross posting is [against SE policy](https://meta.stackexchange.com/q/64068/) & wastes a lot of people's time. Decide which site you want your question on & only post there (or delete the copies elsewhere, as appropriate). Screenshot of equation, but not self-study Hi, there are blind and visually impaired users of this site who interact with it using screen readers. The screen readers can't handle the equation in your screenshot. Please edit the post to include the equation as TeX. If it helps, we have some resources on using LaTeX on Cross Validated . Hi, there are blind and visually impaired users of this site who interact with it using screen readers. The screen readers can't handle the equation in your screenshot. Please edit the post to include the equation as LaTeX. If it helps, we have some [resources on using LaTeX on Cross Validated](https://stats.meta.stackexchange.com/a/1605/155836). (The Answers section moved to a new post)
{ "source": [ "https://stats.meta.stackexchange.com/questions/2468", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/22228/" ] }
2,855
For the fourth year running , the Stack Exchange team is organizing a "Winter Bash". Users earn "hats" for their gravatars by completing novel tasks (analogous to badges). Certain specific actions will trigger access to a (graphical) hat, which their gravatar can then "wear" at the user's option. Users will be able to see all the hats they've earned on http://winterbash2015.stackexchange.com . That site will also have an FAQ to explain how things work. This event will run from 14 December 2015 to 03 January 2016. Individuals who don’t want to participate, don’t want to see hats, or are generally anti-hat will have an "I hate hats" option available (which will cause you not to see hats at all). The only visual change to the site itself will be the presence of the hats and the "I hate hats" button in the footer. Participation on one site does not affect accounts on other SE sites. Two answers aim to collect votes for a community poll: Please, indicate whether you think Cross Validated should participate in this event or not (1 vote per user). Responses from the community are due by December 10. Moderators will inform the SE team of our collective decision. Since results are effectively "due" about now (in fact this year we only need to say anything if the answer is no), I'm removing the "featured" tag now. People are still free to vote but I think the opinion is clear.
Yes, Cross Validated should participate in Winter Bash 2015.
{ "source": [ "https://stats.meta.stackexchange.com/questions/2855", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/805/" ] }
3,175
This is intended to be a "humorous and friendly thread on how to ask a horrible question" ( amoeba ) which can help new users understand how to write good questions. We hope you will contribute one "suggestion" per reply. Include (if you wish) its rank in the top ten reasons to close questions. But please add some explanation of why your suggestion is bad and (also if you wish) how to improve such questions. One possible use of the replies in this thread will be targets of links provided in comments to closed questions. Example The title is "Statistics question." On a statistics site, this is meaningless. Good titles attract good readers. Make yours count. Use key words that clearly indicate what your question is about and how it might differ from similar ones. (Often such titles reflect carelessness and lack of thought: they tend to be accompanied by questions elsewhere on our Top Ten list, such as copy-pasted homework. Such posts will usually be closed within minutes without further explanation.) References See our "TenFold" chat thread beginning at http://chat.stackexchange.com/transcript/message/30673850#30673850 for the first incarnation of this list. Thanks to Amoeba, Scortchi, user777 , er, GeneralAbrial , um, Sycorax $\checkmark$ , NickCox, Silverfish, Glen_b, usεr11852, Matthew, (and any others I may have overlooked) for your suggestions. Feel free to borrow from and expand on those ideas.
Questions which are just photographs of your homework sheet or old exam paper

Why these questions are closed

We have a policy on self-study questions, which says that:

It is okay to ask about homework. Homework is included under the self-study tag. This site exists to help people learn and provide a standard repository for questions in statistics and machine learning, both simple and complex, and this includes helping students.

However, we ask that you fulfil certain conditions, including:

- Make a good faith attempt to solve the problem yourself first. If you don't seem to be making a genuine attempt, your question might be voted down or closed.
- Be honest about the source of the question. Do this by adding the self-study tag and mentioning whether it is for some class in the question text.

Just showing us a picture of the question doesn't show us what you've tried or where you're stuck, or indeed that you've made a good faith attempt at the question at all. Also, if you haven't typed anything, then we won't know what the original source of your question is. If your homework comes from a textbook, for instance, you should give a full reference (citation) to acknowledge its authorship.

Photographs of your question are also inconvenient for other reasons. They can be difficult to read, particularly if the quality of the photograph is poor. The text is inaccessible to users using screen readers. Moreover, the text is not searchable, either through our site search or external search engines — this goes against our objective of producing a high quality repository of statistics questions and answers that future readers can learn from. If your question can't be found, then other people will not be able to learn from it!

Sometimes people resort to taking a photograph of their homework because they do not feel able to copy the formatting. Our site has typesetting features for text and equations (using LaTeX) — see our editing help for further information. There are situations where photographs are appropriate, for instance if you want to ask what an unclear piece of notation means (and because it's not clear to you what it is, you can't typeset it), you are struggling with how to use LaTeX for an equation or table (some of our helpful users may edit your question to assist you with this), or you need to show us a graph or diagram. However, even in these cases it's best for your question not to just consist of a photograph! Type the question you have, and show us a picture of what you can't type.
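As a minimal illustration of how little typesetting effort is usually needed (the specific equation below is an arbitrary example, not taken from any particular question), the following MathJax/LaTeX source typed directly into a post:

```latex
The OLS estimator is $\hat{\beta} = (X^\top X)^{-1} X^\top y$, and under
homoskedasticity its covariance matrix is
$$
\operatorname{Var}(\hat{\beta}) = \sigma^2 (X^\top X)^{-1}.
$$
```

renders as properly formatted mathematics, is far more usable with screen readers than an image, and, unlike a photograph, can be found by the site search and external search engines.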
{ "source": [ "https://stats.meta.stackexchange.com/questions/3175", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/919/" ] }
4,477
All but one of my questions since September have gone unanswered. I am not complaining, of course: there's no reason why my questions should be answered. It's just that it seems to me that the style and type of questions hasn't strongly changed with respect to older questions. Thus I wonder, is there a general trend on CV, where the ratio of unanswered to answered questions is growing? Or am I wrong and my newer questions are less interesting and/or well written than the older ones? Can I do something to improve the likelihood of an answer? Other than putting bounties on each question, of course :)
The proportion of unanswered questions is indeed slowly growing because (among other things):

- the number of people answering questions is growing more slowly than the number of people asking questions;
- the proportion of very poor questions is increasing rapidly, diverting the time of the very people who tend to answer questions into clarifying or closing questions instead.

I doubt this relatively slow process is the cause of a sudden drop in answers for you, however. Part of it may be a matter of luck, but you can certainly improve your chances of an answer by working on your questions. Several questions already on this site deal with such issues.

Looking at your recent questions, at least some of them seem unlikely to be of broad interest (for all they are sure to be useful to you personally); that will reduce interest in answering them; it's fine to ask, but they won't necessarily be priorities. You can consider ways to make some of those questions more generally useful, which may help make them interesting to answerers.

You can also make it easier for people to answer some of your questions. For example, the most recent one, asking about references on probability, links to three references but mentions neither the authors nor the year of publication. Since many books on probability have similar or even identical titles, the titles alone don't really identify them. So when I see a question like that, I have to ponder whether I have time to start loading page after page on Amazon just to find out what books you're even talking about. I'm kind of busy (and then when I am here there's a lot of tasks to take care of besides answering questions) and even without that, there's more questions a day than I can even read, let alone give a worthwhile answer to, so if my connection is slow enough to make it take a few seconds each page, it's likely to be "Sorry, next question". I imagine at least some other answerers may be in a similar position. Clicking through now, I only know one of those books. So even if I did decide I have time to click through to find the books, I would disqualify myself from offering a choice; again it would be a case of "Sorry, next question". [A rephrase of the question may make it more likely to draw at least partial answers.]

Lastly, a number of your questions are outside my area of knowledge, or are just about within it but so specific I doubt I have enough specialized knowledge to give a useful answer. This, too, may be the case for several other potential answerers.
{ "source": [ "https://stats.meta.stackexchange.com/questions/4477", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/58675/" ] }
4,520
I have noticed that when I look at a newly posted question on CV, it may already have, say, 2 or 3 upvotes in one case or 1 or 2 downvotes in another. The question is completely new and has only been up for a few minutes. There are no edits, no comments and no answers. In that situation I would never vote on the question, even if after reading it (1) I think it is a good question, or (2) I think it is a bad question. In case 1 I may be inclined to upvote, but no one has commented and I may be overlooking something. In case 2 I may be inclined to downvote, but maybe I am just not familiar with terminology that the OP uses, which a commenter may clarify later. I realize that users are free to vote when a question is brand new and this can be a judgment call. But in my case I am not confident that I know enough to vote when the question is brand new. I think upvotes and downvotes deserve a little time for consideration, and the decision is not urgent. Do you agree with my position, or do you see a reason why a very early vote (up or down) on the question could be justified?
TL;DR Vote early and often. Deploy your daily votes constructively to help people use our site effectively and well. I'm sure people have different systems for reading posts and voting on them. Please bear in mind the constructive role played by voting, which I think is the concern being expressed here: Upvotes, when they are merited, encourage people to participate and reward good contributions. Upvotes that are not merited can be confusing and potentially elevate poor posts to undue prominence. We have to trust that this occurs relatively little and will usually be corrected by the community. Downvotes are inherently negative. They create bad feelings. Use them when they can have the constructive effect of encouraging a poster to improve a particular post. This implies that most downvotes are wasted if they are not accompanied by an effective, actionable comment. (There are exceptions: some posts are so obviously poor that little needs to be said.) Downvotes that are unmerited are even more negative and provocative. They can reflect badly on the downvoter, too. For this reason, it's wise to hesitate before applying any downvote--to make sure you have a good reason for it and are not just reacting emotionally--and then to pause again after applying it, to reflect on what you just did. You have a few minutes to change your mind. Note, too, that upvotes are the driver of our "reputation economy": virtually all reputation arises from the five or ten points each upvote creates in the system. The more votes you supply, the more reputation there will be to go around, the more things people will be able to do on the site, and the happier they will feel in continuing their participation. Please: before you leave any page you have been reading, get in the habit of pausing for a second and asking whether you have voted yet. You will usually be glad you did vote. If you wait and tell yourself you will come back, maybe you will--but likely you won't. Use every opportunity to make a positive impact on the site.
{ "source": [ "https://stats.meta.stackexchange.com/questions/4520", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/11032/" ] }
4,576
In a thread on the main site, someone said very elementary questions are off topic [screenshot of the comment not reproduced here]. It looks like someone upvoted his comment, so maybe he's right. Around that same time @gung told me to be cautious answering the question, so I asked him/her about this matter, and the response was [screenshot not reproduced here]. So apparently there's some disagreement on this. Are very elementary questions off topic and, if so, what is the definition of very elementary?

Edit: Here is a follow-up comment from Dr. Michael Chernick in response to what I said above: [screenshot not reproduced here]
To my mind, elementary is too slippery a criterion for closing. I don't doubt there are many questions that are elementary for @MichaelChernick that would be over my head, and I don't necessarily think they should be closed. Below the pictured comment, he replies, Elementary questions are likely to have been answered many times in possible duplicates. That seems true, but in my opinion it is incumbent on him to find those duplicates and close the question in that more specific, and more specifically relevant, manner. I am sympathetic that duplicates can be hard to find. I often suspect there should be a duplicate, but can't find one and don't end up voting to close, or even post another answer that may cover material already covered elsewhere. It is hard to say that that strategy is always better. Nonetheless, I don't think 'too elementary' is a justifiable close reason.
{ "source": [ "https://stats.meta.stackexchange.com/questions/4576", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/117710/" ] }
4,586
Questions about R packages are off-topic on SO, and since programming questions are off-topic on CV, a fortiori package questions, which are about programming (in R), are off-topic on CV. That's why I always ask questions about algorithms (on SO), even if I often would much rather be pointed to a package. However, learning about packages is an integral part of doing statistics with R. Is there somewhere on Stack Exchange where I can ask questions about which R package to use for a specific statistical inference problem? If not, as you guys are statistical experts, and many of you use/have used R, where do you go when you need advice about packages? Other than the R-help mailing list.
The answer is, of course, "it depends", but I think you're asking about something different from what you think you're asking about. I believe we can all agree that a question like Is there an R package for fitting Random Forests? not only is off-topic but also shows a lack of research effort. And likewise we can all agree that randomForest::randomForest(...) gives me an error. How can I fix it? is also off-topic, and at best belongs on StackOverflow. Now consider a question like: How do the rpart and party packages differ? Which one is preferred off-the-shelf for classification? I would say this question is a good question for CV , because while, yes, its focus is restricted to R packages, it is actually a question about statistical computing. It is not about programming as such (i.e. it doesn't belong on StackOverflow), and while it is asking for a software recommendation I would say it's still topical enough to not kick it over to SoftwareRecs, where I imagine there is much less domain expertise floating around anyway. There might be a gray area here with a question like: What is the difference between caret and mlr ? How do they compare to scikit-learn in Python? What are the strengths and weaknesses of each? because it's not really about statistical computing. But I (as an opinionated poster without moderator privileges) would still say it's on-topic, because it's about statistics-specific methods and workflows. (Note that this question would also probably be closed as "too broad", but I still think it gets the right idea across.) Finally, on to a question that I hope captures the spirit of the one you linked to in the comments ( https://stackoverflow.com/q/42022917/2954547 ): What R package should I use for outlier detection? As stated, this question would rightfully be closed as either "opinion-based" or "too broad". But what is the question really trying to ask? I would argue that the question could have been phrased as What are the current popular outlier detection techniques, and which of them are readily available in R? and that this would be on-topic here . The question isn't really about R, it's about statistical modeling, with an added requirement that the model have an R implementation. This, I think, is what you are asking in your question on StackOverflow: Of the MCMC samplers available in R, which one should I use for this problem and why? which has more or less the same flavor.
{ "source": [ "https://stats.meta.stackexchange.com/questions/4586", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/58675/" ] }
4,798
Sometimes I spend a lot of time writing an answer, and such posts become valuable to me. However, I am afraid that StackExchange may go down at some point in the future (although this is very unlikely), e.g., by going out of business or being acquired. Should I "export" some of my notes? If so, how can I do it?
Given that metaoptimize disappeared overnight without any warning, and that - in my opinion - Stack Exchange doesn't care much about the perennity of user content, that's a fair question. We should all be worried about such an eventuality, and be prepared for it.

Stack Exchange provides back-ups, which are made infrequently, typically once every few months. These backups do not include images, which is a huge limitation. It would be nice if there were some initiative to create a dump of all images.

If you just care about your questions and answers, you can run this query: What is the easiest way for me to download all my questions+answers across all Stack Exchange sites?

It is very disappointing that Stack Exchange doesn't provide an easy way for users to export the entirety of their content (but it's still better than Quora or Reddit, which don't even provide a dump).

FYI:

- How can I download all images I have uploaded to Stack Exchange?
- Why has the closed beta site "Big Data" no dump available?
- API or library to obtain a mirror of a link
- Google Chrome extension to archive a web page
- Hosting all Stack Exchange data dumps on archive.org
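If you prefer a scripted export to the SEDE query mentioned above, the public Stack Exchange API can return your own posts, including their (HTML) bodies. Here is a minimal sketch in Python; the user ID, site name and output filename are placeholders, and quota/filter details may change over time, so treat it as a starting point rather than a finished backup tool:

```python
import json
import time

import requests

API = "https://api.stackexchange.com/2.3"
USER_ID = 12345   # placeholder: your numeric user ID on the target site
SITE = "stats"    # "stats" is Cross Validated in the API's site naming


def fetch_all_answers(user_id, site):
    """Page through /users/{id}/answers and return a list of answer objects."""
    answers, page = [], 1
    while True:
        resp = requests.get(
            f"{API}/users/{user_id}/answers",
            params={
                "site": site,
                "page": page,
                "pagesize": 100,
                "filter": "withbody",  # built-in filter that adds the post body
            },
        )
        resp.raise_for_status()
        data = resp.json()
        answers.extend(data.get("items", []))
        # Respect the API's throttling hint, if any, before the next request.
        time.sleep(data.get("backoff", 0) + 0.1)
        if not data.get("has_more"):
            break
        page += 1
    return answers


if __name__ == "__main__":
    items = fetch_all_answers(USER_ID, SITE)
    with open("my_answers.json", "w", encoding="utf-8") as f:
        json.dump(items, f, ensure_ascii=False, indent=2)
    print(f"Saved {len(items)} answers to my_answers.json")
```

The same pattern works for the /users/{id}/questions endpoint; note that the anonymous request quota is limited, and images still have to be archived separately, as discussed above.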
{ "source": [ "https://stats.meta.stackexchange.com/questions/4798", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/113777/" ] }
4,963
I understand the need to make questions clear when asking them, and I agree that telling the OP to make the question clearer, if it is not, is a good thing. But can't this be done without locking the ability to post potential answers? Say, for example, a question seemed unclear to the mod but was understood by someone who potentially knew the answer: all that putting the question on hold achieves is stopping that answer from being submitted. And if the question seemed unclear to the mod and was also unclear to someone who did know the answer, then no answer would be submitted anyway, and there would be no need to put the question on hold. Furthermore, as an extension of the first scenario, where a potential answer is blocked: if the OP did not edit his question, or forgot to, then the question could never be answered for the potential benefit of all others who might face the same problem, which in my opinion is a gross failure of the philosophy of this website and of what it tries to do in helping people through knowledge sharing. So my question is: what advantage does putting unclear questions on hold actually achieve over just asking the OP to make the question clearer?
This is misleading in so far as it implies that closing is always and only done by a moderator. On the contrary, closing as unclear is often done by non-moderators without any moderator intervening . Then a consensus of five high-reputation users is needed. The point otherwise is well taken: "appears unclear to me" does not mean "is unclear to everyone" : it could be sufficient that one competent person understands it and can answer. And judgments are fallible. I would argue, however, that present principles and practices are about right. If anything, we need to work harder at identifying unclear questions (more generally, questions that aren't good enough) and either getting them improved or closing them down. First, deciding that a question is unclear is often much easier than understanding what the question is . Without trying to be exhaustive, here are some common examples. The question boils down to "What should I do with my dataset?" but that and the underlying problem are too vaguely described to allow good advice. The question is too brief and cryptic to be clear. Presentation is just too messy to attract all but the most noble readers. Second, the gentle model -- just leave it there; someone may understand -- is not nearly as kind as it seems . It is in the original poster's best interests to get prompt and decisive feedback on the question. They can then try to improve it. If they don't, and/or if they get a negative impression of CV, that's unfortunate in some senses, but keep reading. If they can, a much better question is much more likely to get a good answer. It is in nobody else's interests that CV is cluttered with questions that no one understands -- or wants to answer. Indeed, this is true already! The fallacy here is that CV is a helpline in which ideally every questioner and every question deserves an answer. Not so; the ideal is to build up a site with an archive of good, distinct, well-answered questions. Every question is a candidate for that status, but many questions fail. Third, there are enough checks and balances to correct over-zealous voting to close . Needing 5 non-moderator votes has already been mentioned. The SE model does tend to mean that people with vote to close privilege have done enough around the site to have read lots of questions and answered many. That doesn't rule out puzzling votes to close, but no system is perfect. Answering a question is the best way to show that someone understands it -- so long as they do understand it! Guessing wildly and guessing wrong won't help. Do your bit by editing a question, or proposing edits, that make it clearer. Sometimes just a little care and attention can tip the balance. It could be something as simple as a better title. Comment in threads where you think voting to close is wrong. A vote to close brings a thread to the attention of high-reputation users who are happy to disagree with each other if some see merit in a question. In this sense a vote to close as unclear from a non-moderator is no more than "this seems unclear to me; do others agree?".
{ "source": [ "https://stats.meta.stackexchange.com/questions/4963", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/176173/" ] }
5,514
I found that quite a few tags are present on each of these sites in the SE universe:

- Artificial Intelligence (AI)
- Data Science (DS)
- Cross Validated (CV)

For example:

- reinforcement-learning (AI, DS, CV)
- neural-network(s) (AI, DS, CV)
- deep-learning (AI, DS, CV)
- etc.

Considering the descriptions of these SE sites doesn't help in drawing clear boundaries:

- [Artificial Intelligence] Q&A for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environment.
- [Data Science] Q&A for Data science professionals, Machine Learning specialists, and those interested in learning more about the field.
- [Cross Validated] Q&A for people interested in statistics, machine learning, data analysis, data mining, and data visualization.

And neither do the tag descriptions, for example reinforcement-learning:

- [Artificial Intelligence] For questions related to learning controlled by external positive reinforcement or negative feedback signal or both, where learning and use of what has been thus far learned occur concurrently.
- [Data Science] Area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
- [Cross Validated] A set of dynamic strategies by which an algorithm can learn the structure of an environment online by adaptively taking actions associated with different rewards so as to maximize the rewards earned.

While the site description for AI seems to invite only conceptual questions rather than specific or detailed ones, surveying the posted questions reveals a different trend. For example, questions tagged reinforcement-learning at Artificial Intelligence include:

- DQN it's not working properly
- Why are all the actions converging to the same index?
- Q-Learning fails to converge even after 50K iterations for a simple board game - What could be the reason for this?

These questions are quite specific and not conceptual (in the sense that they ask about implementation details or aspects specific to a problem). Surveying the first few pages of the reinforcement-learning tag at Data Science, on the other hand, created the impression of an excess of conceptual questions on that site, and the same is observed for Cross Validated. All these sites support MathJax rendering, in favor of conceptual questions.

Questions

Now the obvious dilemma (actually "trilemma") is, if I have a question about any of these shared-tag topics, ...

- ... which SE site should I prefer for posting that question?
- ... is there a preference regarding conceptual / (implementation-)specific questions?
- ... is cross-posting considered bad manners?

Also, are there any attempts at merging these SE sites into a single one, in order to merge the corresponding knowledge and competences?

Note: I posted this question here, at Cross Validated Meta, because Cross Validated appeared topmost at SE sites, hosting most of the questions among the above three candidates.
TL;DR Machine learning, deep learning and reinforcement learning are all on-topic here, but we ask that questions not be primarily concerned with programming. Preamble I'm not active on DSCI.SE or AI.SE so I have no deep understanding of what is on-topic on those fora. However, I have been active on stats.SE for several years and I regularly participate in the review queues and meta.stats.SE. Moreover, my specialization is machine learning, neural networks and I've started learning about reinforcement learning, so I feel that I can speak to your question with respect to stats.SE. Machine learning is a subfield of statistics, and this is a statistics forum I've said it before and I'll say it again, "machine learning" and "data science" are terms that you use in place of "statistics" when you want to double the fees on your rate card. Machine learning is just another kind of statistical specialization, among other specializations such as time series analysis or mixed effects modeling. Despite its novelty, machine learning is concerned with core statistical concepts such as random variables, quantifying uncertainty, prediction and forecasts. In this light, questions about machine learning are squarely on-topic here . Some words about code The main caveat to keep in mind is that questions which are primarily concerned with how to accomplish some task using a programming language are not on-topic here. This is because Stack Overflow already exists, and there would be no point in duplicating all of the "how do I __ in R?" and "how do I __ in sklearn ?" questions here. These questions already have high-quality answers from programmers; why would you want another answer from a statistician? Our core competency is statistics, not programming. Most modern statistics involves some level of computer programming, so sometimes it is unclear where to draw the line, or the quoted code is purely incidental, or there are other reasons that a question that includes code is more about statistics than it is about programming. It's a subtle issue, and I encourage other reviewers to be thoughtful when considering whether a question that contains a code block is better open than closed. It's not clear where on the programming-statistics continuum your question falls. If you're trying to fix a bug, or improve the efficiency of code, that's probably off-topic at stats.SE, but if you have a question about statistics and code is more-or-less incidental or expository to your statistical question, then it's probably on-topic. One test I use when questions in the Close queue is whether the question is completely answered with some computer code; if it is, then the question is probably more about programming than statistics. Cross-posting is frowned upon Asking the exact same question in two or more fora is frowned upon because it duplicates effort. The only reason to ask similar questions in two places is because you want to have answers from clearly distinct perspectives. Glen_b came up with a nice example for this: suppose you ask a question about building a fence in your yard on DIY.SE (because you learn about home improvement), but you're not sure about the legal implications (because you don't want to annex some of your neighbor's yard); you ask a similar question on law.SE. This is fine, but do make it clear that you're interested in two distinct answers , one about home improvement and one about minimizing legal risk. Why do users face this trilemma? 
The creation of three distinct fora which largely overlap is very confusing to new users. It's the direct result of the Area51 process, which allows new SE websites to be created without the intervention of common sense. Simultaneously, hapless undergraduates are convinced that because "machine learning" and "data science" are courses taught in the computer science department of their universities, they must be wholly unique fields of study with only a historical relationship to the math or statistics departments; when these students are loosed upon the world, they carry with them their policing of disciplinary boundaries and zealously enforce their misconceptions when they graduate. It's also confusing to veteran users like me, because SE was designed around creating a durable repository of high-quality questions and answers; duplicate questions are addressed by closing questions as duplicates of the existing, high-quality content. Creating nearly-identical fora directly undermines that objective. My view is that SE websites should strive to be as mutually exclusive as possible. The SE team even built a feature to accommodate confused users: if users post a question in the wrong place, it can easily be migrated to the correct forum. AI.SE and DSCI.SE would be valuable additions to the SE network if their focus were on the topics not covered by stats.SE . I'm hard-pressed to find a question that is on-topic here but not on-topic at DSCI.SE; does this imply that we should duplicate every question and every answer on DSCI.SE? If not , why does it exist? If so , why does it exist? As it stands, the only reason for DSCI.SE to exist is to accommodate users unfamiliar with the term "statistics," but we can easily fix that by redirecting DSCI.SE.com to stats.SE.com. Occasionally, this topic has been discussed on meta.DSCI.SE and meta.AI.SE, and it is common to see users of those websites claim that machine learning, neural networks, deep learning and reinforcement learning are not on-topic on stat.SE. These users are simply wrong. Machine learning is statistics, and stat.SE is a statistics forum. (Examples of such meta threads are easy enough to find; I don't really want to provoke any specific users by naming them here.) When considering arguments in favor of DSCI.SE, one quickly realizes that the same arguments can argue in favor of creating time-series.SE, spatial-data.SE, bayes.SE, missing-data.SE, regression.SE, probability.SE and a specialized SE website for every other statistical topic. I can see no useful purpose for such fine-grained distinctions. Moreover, time-series.SE and spatial-data.SE leave no obvious home for spatio-temporal questions, which can find a home in neither because it includes both topics. Users will be fragmented across the statistical archipelago; total traffic to each website will be much smaller; more questions will be posted to the wrong fora to the consternation of all involved; all users will chafe under the pointless friction of this system and depart for Reddit, Twitter, Quora or Yahoo! Answers.
{ "source": [ "https://stats.meta.stackexchange.com/questions/5514", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/150976/" ] }
5,765
Additional update: My diamond was removed something in the ballpark of an hour ago, as per my request from last week. Thanks for your understanding, everyone. Some updates: (1) I signed this letter . It has been signed by a very large number of moderators and ex-moderators (2) David Fullerton, CTO of StackOverflow [1] has posted a considerably better apology that appears to recognize many of those points. Its an encouraging sign that there's an actual recognition of the issues and some preparedness to address them. (3) I have also signed this letter ; in particular I agree with each one of the three specific requests for action in it. [1] (which I'll continue to think of and speak of as "StackExchange", since it better represents the network of sites I participate in) I've been agonizing over this for quite a few days now. I hate to do it - and I certainly don't do it lightly - but I no longer see my position as a moderator as tenable. A number of things in recent times have concerned me about StackExchange, but recent events have brought matters to a head. Meanwhile, Stack Exchange Inc has gone to some trouble to make it hard for people to know what is actually going on, including changing the algorithm by which hot meta posts will appear on your page. For example this meta post Firing mods and forced relicensing blew up, and you should have been notified about it in the "Hot Meta Posts" section of the sidebar but SE made sure that it and posts like it would not appear in the Hot Meta Posts notifications. I found it by pure accident. Nice. If the StackExchange community is bothered by something, you no longer get to know that unless you're checking meta.SE carefully. So what has happened? Skipping a lot of details (but you can find them; the above link has many further links in it, for starters, see later posts in meta and their answers and comments for further information, and check the posts in the various site metas, such Writing, for resigning mods across the network), here's a rough summary. SE is going to introduce a new Code of Conduct which (among other things) is attempting to make the site more accepting of LGBTQI+ users. A summary of this Code was posted to a private moderator-only chat room. This is not an unusual eventuality and open robust discussion of proposed changes is also not unusual, such things have been discussed with mods before and their views taken into consideration (usually leading to much better outcomes). One of the mods - one of the most respected mods on the entire SE network, and someone I have held a deep respect for since before StackOverflow was thought of - asked some questions about a particular aspect of this policy and how it would work in practice. This is all to be expected. After a while this discussion went to email. A Stack Exchange employee appears to have accused this moderator of bigotry. Stack Exchange have removed the moderator (Monica Cellio). This removal occurred hours before a major religious holiday for this moderator (and during which she would not be able to respond) The official position is that she violated the current code of conduct. Monica's position is that she has not violated any part of the current code and has repeatedly asked for clarification about what it is that she supposedly did. The moderator was not even told removal would happen. There was no warning. She thought she was participating in an amicable email conversation and only discovered her removal when she noticed she had gained badges moderators can't get. 
She attempted to reach out to achieve some resolution . These have been ignored. SE staff appear to have attempted to paint her in a bad light in the media at a time when it was clear she could not respond , while also telling the community here they couldn't talk about it. The " apology " offered by Sara Chipps (overnight my time) was really not adequate. It does not reassure me. To me it does nothing to right the wrongs here and doesn't seem to really recognize that what was done was wrong or seeming to be actually correcting what was wrong about it. (I recommend reading comments under the top answer at that link.) This is not quite the whole story, indeed we probably won't have it all since some parts of the story are not public; there's some only moderators can see and we'll probably never know the exact content of the email exchange. Don't trust my take on it though, go read for yourself (keeping in mind that we don't have all the information - but even a very generous take on it doesn't leave Stack Exchange looking good). Explaining these points in some additional detail (as far as I am aware of them) and my views on them: I support attempts to make the site more accepting of LGBTQI+ people. I support a policy of asking users to use other people's desired pronouns each time you use a pronoun to refer to them. Apparently, so does Monica " I would never knowingly misgender with wrong pronouns " Monica's issue seems largely to have been somewhat more estoeric, which related to something which she sought additional clarification on (and which I think would be difficult for moderators to administer). It led to a discussion about whether it was really a requirement in the policy to use preferred pronouns extended to situations in which you did not use pronouns . Now there's definitely some scope for discussion/disagreement here in specific examples, and some mods have explained that they have a problem with a position she took (while not agreeing with what eventuated ). This is something that we should expect to be discussed when a new policy is to be introduced -- particularly by the people being expected to police it. Not everyone will be on the same page and discussions are absolutely necessary for us to understand each other clearly. I am completely okay with Monica asking "It's okay if I do X, right?" and I am completely okay with Cyn and some of the other mods or members of the community more broadly saying to Monica "We really don't think that it's okay if you do that". This is not anybody trying to hurt anyone on either side of a conversation. (You might like to read Cyn's post on Writing's meta ) I think the policy definitely needs more discussion, some clarification and some polishing/refomulating. Not everyone will agree with everyone on all the details, or see them all as workable (in particular, for example, there can be religious restrictions on what some people are even permitted to do -- there should be some attempt to try to find some reasonable compromise in such cases). I could have easily asked similar questions to what Monica seems to have asked. This is not incompatible with wishing to be inclusive and welcoming. I feel I could easily have been dismissed in her place, had I been part of the same discussion (I would have been if I was aware of it happening at the time). I see nothing in what has come since that would suggest I couldn't have very easily found myself in a similar position. 
I think it's ludicrous to remove anyone over a Code of Conduct we don't even have yet, which is what seems to have triggered the whole thing; as far as I can see she has not done anything that would constitute a violation of either the spirit or the wording of the current Code. I think it's bad faith to remove a community-elected volunteer without several other steps in between (indeed there was already a policy for removal, but it doesn't actually seem to have been followed as far as I can tell) and some opportunity for further discussion. There was no review. No appeal. Just gone, without even being given the courtesy of being informed that it would happen. Monica still doesn't seem to know exactly what she's supposed to have done (which is weird -- what if it turns out someone hacked her email account, for example? How could she even know, if what she supposedly did has never been made explicit?). Some people have raised the possibility that it was perhaps to do with not using some people's preferred pronoun (singular they) in a chat room, but that would surely not be handled this way; any discussion and potential consequences for doing so in a site's chat room would surely be up to the moderators for that site, collectively; there's a formal procedure for dealing with site moderators who misbehave, clearly laid out. It would take something really extreme for it to make sense to remove someone so quickly (or else it could wait a few days for more deliberate action); if there was some danger of acting improperly as a mod in the meantime (seriously?) then a temporary suspension from being a mod could be used. Something extreme enough that it couldn't sit for a weekend (that the mod had to be inactive for anyway) would likely require the involvement of police. If anyone understands what it is to be part of a marginalized and abused minority, it's Monica (I'd provide links that might give you some sense of just how directly, but I don't want to give away such specific details without her explicit okay, and she has better things to worry about). Worse, the sacking came just hours before the beginning of important religious holidays for her. On top of everything else, the present situation has seemingly displayed incredible insensitivity and intolerance toward Monica. If Monica were not as gracious as she is, it could very reasonably have been called antisemitic. Even if Monica had done what she's been accused of, her removal in this fashion would be egregious, but it seems pretty clear that her actual position is not consistent with how it has been officially portrayed. The present circumstances muddy the waters and seem to hand actual opponents of tolerance and acceptance of LGBTQI+ people ammunition with which to oppose the original aim. Indeed, I am convinced that as it stands it sounds far worse for LGBTQI+ people than anything Monica seems to have done, and even worse than what she's been accused of doing. You don't introduce a policy of tolerance with a purge. Many moderators have resigned or stood aside. I pondered my response for a while, then I stood aside from moderation while I considered further -- all the while hoping Stack Exchange would come clean, admit they made an error in handling it, apologize and reinstate Monica (or at least wind it back to a more proper handling, in a way that justice could be seen to be done). Monica maintains that she doesn't know what it is that they think she has done wrong. Instead they have doubled down. 
They have thrown shade in the media and they've acted to make it harder for people to find out it even happened. They've offered a weak apology which doesn't fix what happened and had done nothing to make me feel confident they understand what people think was wrong nor how wrong it was. The assurances that they'd do it better next time really concern me, since it needs to be better this time. Monica has been one of the mods for whom I have the very greatest of respect - a moderator who I would aspire to be as good as. She appears to have been very shabbily treated. How does any of this improve Stack Exchange? Every single employee of Stack Exchange, at every level, needs to think very carefully about one simple fact: the content that pays their salaries is generated for free by its users, and curated by the high reputation users and diamond moderators who donate huge amounts of time and expertise to make this a good place to find answers. Your income depends in very large part on the goodwill of the communities that generate content for you. If mods can be terminated so casually, no matter the feelings of the communities who chose them, then they really should be paid for it. I think this could have been solved a week ago, with a few emails and an actual apology or perhaps, two, if it were necessary to apologize both ways. Or failing that, with a simple, considered review of what went on, and then backing up before applying something nearer to proper procedure. How much has this policy of doubling down cost Stack Exchange already? For a company needing to make money they're acting like the sources of that money are both expendable and fungible. Stack Exchange keeps saying they want to listen to moderators and to the community, but increasingly it seems like they do not (they actually did it pretty well, once). Stack Exchange talks of respect and tolerance but throughout this they don't seem to have shown a lot of either. I continue to hope that SE will take real steps to undo the harm they've done to the goodwill of moderators and to Monica in particular (and indeed to any perception that they care about being welcoming and tolerant) but I see nothing to indicate that they will. At this stage an independent review of what happened and mediation would be a reasonable first step on that path, but given the response so far, I can wait no longer. I can't give the implied support to Stack Exchange Inc that continuing as a moderator would suggest, and I can't be responsible for enforcing a future Code of Conduct that I (seemingly) can't completely understand (and apparently could risk removal for asking serious questions about), for all that I support its underlying aims. I must, with the deepest of regret, step down. I have to take care of some urgent work shortly after I post this, but I plan to notify Stack Exchange officially as soon as I get some time (assuming they don't see this and remove me before I get to it, which is fine, since my intent is 100% clear). [Edit this official notification has occurred; it might still take a while before I actually lose the diamond.] As things stand at the moment I expect this will be my last post as a moderator, though I may add some details and will continue to edit as needed. I'd like to thank my fellow CrossValidated mods for their kindness, professionalism, support and much else besides, throughout my term as a moderator and before. I have the deepest respect for each of them. 
I'd like to thank the wider moderation community on the SE network, who have helped me many times during my term as moderator. I'd also like to thank our many knowledgeable and giving users who make our site work so wonderfully. I think our community here in CrossValidated is amazing, and I hope to continue to be an active part of it in the future as an ordinary user, though for a time I will be stepping back somewhat from that as well, reducing my participation as an ordinary user for the present; I'll still be around hoping that Stack Exchange show a clearer recognition of their responsibilities in this relationship. Just in case anyone is inclined to misunderstand the implications of any of the above, let me be clear: I have no trouble whatever with singular they . In fact quite the opposite -- I have been arguing for it as a nongendered pronoun since approximately the mid 80s - useful, for example when you don't know a gender for whoever you're referring to - and have used it in my writing, sometimes in the face of quite a bit of opposition from editors. I didn't always win those arguments (in that my text got changed over my objections). More generally I have no trouble with referring to people by a preferred pronoun[2]. None of this is in any way a stance by me against calling people what they want, especially not people from marginalized and abused communities. [2] (except for people who don't take it seriously and want to try to claim they should be referred to using "helicopter" as a pronoun or something; I think that ends up harming the very people it should be helping by making a mockery of it. That's not okay.)
Anyone who visits CrossValidated for more than 10 minutes will notice your ubiquitous great contributions to this site, and anyone who visits longer will recognize your enormously valuable insight and skill as a statistician. When high-profile moderators like yourself across the SE network make this decision it might finally get the message through to those in charge that they are really taking steps in the wrong direction. I really hope that this brings about a positive change, and that all these sites do not truly lose their most valuable members as moderators this way... but perhaps I am being too optimistic. Regardless, I think it is a very noble thing to do, after the tremendous effort you must have put in to get where you are now.
{ "source": [ "https://stats.meta.stackexchange.com/questions/5765", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/805/" ] }
5,799
I have seen the question asked by S. Kolassa describing Monica Cellio and the fact that she was mysteriously removed from Stack Overflow. I have also learned from that question that several CV users have changed their avatar to "Reinstate Monica". Could anyone tell me the following: Has she been reinstated? Was there a reason why she was removed or was it just a glitch? Why have so many users including those with very high reputation gotten on the bandwagon? If I want to go along, how do I change my avatar?
Much of this information is in, or can be found from, the question you refer to. If you want to read everything, or just warm your feet by the dumpster fire, you can go to meta.SE . The place to start, with much of the information indexed, is Firing mods and forced relicensing: is Stack Exchange still interested in cooperating with the community? She has not been reinstated. SE has dug in its heels, out of obstinacy, as far as I can tell. They ignored her and the growing storm for over a month. At this point the lawyers are involved. Monica has initiated a lawsuit against SE. SE has decided they want to try to fight it out rather than compromise. The official claim is that she misgendered trans users. This supposedly happened in moderator-only spaces. To the extent mods can see, that did not occur. Trans mods have spoken up for Monica as well. Changing usernames and avatars is one way that people can advocate for her. Some users know her, others are appalled by the rank injustice on display. On your user page, you would go to the "edit profile and settings" tab, and change whatever fields you like (e.g., username, about me, and avatar).
{ "source": [ "https://stats.meta.stackexchange.com/questions/5799", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/11032/" ] }
5,926
Currently, it takes 5 close votes from users with the vote-to-close privilege ( $\ge3000$ reputation ), or 1 vote from a moderator, to close a question. If the question is being closed as a duplicate of an existing thread, it can also be closed by a single user with a gold tag badge on one of the tags on the question, if the OP had put that tag on the question originally. On the other hand, if there are 3 leave-open votes on a question in the queue, it will not be closed and will exit the queue. However, right now (6/25/20) we have 47 threads sitting in the close vote queue. That seems to be typical. Over the past few years, the queue has occasionally gotten into triple digits; it has not gotten to 0 and stayed there long enough to be recorded. If neither voting threshold is reached, the thread will 'age out' of the close vote queue after 4 or 14 days (depending on whether the question has $\ge 100$ or $<100$ views, respectively). I believe there have been many threads that have been unresolved and simply aged away, but I don't know the number. To the extent that threads have been closed or left open consciously, it has been primarily due to action by voters with the power to unilaterally resolve the issue. To a large extent, threads have been resolved due to the efforts of Peter Flom, who has completed more than 15 thousand reviews (thanks, Peter!). This state of affairs seems suboptimal to me: many threads are not resolved, those that are often take too long to reach resolution, and too much of the burden has fallen on the shoulders of too few. I am now wondering how we should move forward. I see a couple of options: We could continue with the status quo. This does not seem like a great option to me, but I don't know if others share that opinion. We could try to somehow get users with sufficient reputation to engage with the close vote queue more. How? A third possibility is to petition SE to lower the threshold for closing questions from 5 votes (presumably to 3). Stack Overflow did this, and while their queue remains very long, it did improve. Another 14 of the smaller SE sites (I don't have the list) have had this policy change. I gather they have had improvements as well. One thing to note about this option is that, if we elect to do this, it is unlikely that the change will be implemented very quickly; the SE staff who do this sort of thing have been reduced, even though the network has grown. Nonetheless, it can be done, if we choose to move forward. Update: Something to bear in mind is that we are discussing a very specific proposed change: lowering the threshold for closing questions from 5 to 3 (I don't believe that even changing from 5 to 4 is an option). This change is on the table because it is something SE has decided they will do (eventually) for sites that want it. It may be that some other changes will come along with this (e.g., @StephanKolassa's observation that the number of close votes a user can cast was increased on SO), but even that is speculative. People have proffered lots of other ideas that may help if implemented in addition to or in place of changing the threshold, but those would ultimately have to be the subject of other discussions on meta.SE, approved by SE's hierarchy, assigned to developers' queues, etc. It is fine to spitball additional options here, but we need to be clear on what we're discussing and what changes are realistic.
I'm all in favor of option 3: ask SE to lower the threshold for closing questions from 5 to 3 votes. (Importantly, the same should apply to the threshold for reopening a question. This is also the case at SO. ) In addition, at SO, qualified users of 3000 rep minimum can cast up to 50 close or reopen votes per day . Here, we only get 24 close (and presumably reopen) votes per day . So I would also propose to raise these limits to 50 per day. My personal impression is that a large majority of questions that need closing are really clear-cut cases. We don't need a consensus of five sufficiently experienced users. Three is plenty. And if something gets closed that shouldn't, then the same logic applies. I can't recall a question where a larger consensus would have been useful, and if so, we can always open a question on Meta. On the other hand, I don't see a good way to increase participation in the review queues, although that would certainly be a worthwhile goal. I'm as guilty as the next user of not looking at them enough, simply because it's a thankless job. By definition, you don't see the kind of interesting and fun questions that draw you to CV, but the exact opposite. So, as long as it's a thankless and boring job, let's at least increase our effectiveness by 67% (if 3 votes can close a question instead of requiring 5, then each vote is $\frac{5}{3}$ as effective).
{ "source": [ "https://stats.meta.stackexchange.com/questions/5926", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/7290/" ] }
6,066
Over time, a large number of machine learning questions have been asked and answered on Stack Overflow, amassing a significant amount of useful information. However, these questions are unfortunately off-topic for Stack Overflow, and as they come to members' attention, they are being closed (SO is for implementation questions, not theory). For example: What is the role of the bias in neural networks? What is the difference between supervised learning and unsupervised learning? Which machine learning classifier to choose, in general? What are advantages of Artificial Neural Networks over Support Vector Machines? Difference between classification and clustering in data mining? Why must a nonlinear activation function be used in a backpropagation neural network? There are many more similar questions that are likely to be on the chopping block as mods get to them. And I'd really hate to lose the ability to have experts add new answers to them (or for them to be silently deleted if someone got overzealous without anyone noticing). Unfortunately, there's a 60 day limit on migrating questions between sites. Moderators can't even override this, only SE employees can. This is for good reasons, smaller stacks didn't want StackOverflow dumping a backlog of terrible questions on small sites with even fewer moderation resources, or for bad questions to get passed around like White Elephants from stack to stack. Additionally, high-rep users on smaller stacks were rightfully upset that one migrated question from a big stack could shoot a member that never participated in their stack to the top of their rep ladders (and occasionally, vice versa). This leaves SO in a difficult situation. Close the questions and let these questions stagnate? Delete them and hope the vacuum promotes similar questions to be asked on the correct stacks? Leave them open and set a bad precedent for newer theory questions (that should probably be asked here?) I have two other possible solutions, but either would require the buy-in of the receiving stack. And I'm pretty sure the rightful home of most of these questions is here on Cross Validated. Nominate a large block of well-maintained theory questions , link them here on stats.meta, get approval from CV members, then bring the list to se.meta and beg for migration. Pros/Cons: easier on mods, users, and google algorithms (which would be directed here instead of SO) high-rep questions getting migrated might distort the rep landscape of CV consensus on many questions is difficult to get simultaneously after all that effort, se.meta might just say no or leave it in limbo forever. Manually copy "good" ML theory questions and answers from SO to CV when they are closed, and then ask for SO mods to add a redirect link here. Pros/Cons: Many of these old questions are asked/answered by users that may have not been active for years, so requesting reposts from them is likely a fool's errand. Manually "cross-posting" would be quite a lot of manual work, so it wouldn't be undertaken for anything but the most worthy Q&A's CV users can judge their worth for themselves and up/down/close vote as appropriate If I were to do this, I'd ask the mods to make both question and answer Community Wiki, because honestly I don't want to leech rep from work I didn't do. That is a controversial method , but personally, I'd rather err on the side of not taking credit for other's work. 
I personally lean towards 2, although 1 would put most of the work on SE employees' shoulders (which the lazy slob in me likes) and bring more traffic here from Google (which may be a dubious benefit). But either would depend on buy-in from CV members and mods. So what do you think? EDIT: After some delay, I've started closing questions on SO for this purpose. It would be helpful to have CV mods chime in on the MSO answer here for clarification on disputed questions (whether they are on-topic here, or may be on-topic elsewhere, like Data Science or AI).
They should be migrated here! Two comments already prefer option 1, so that looks like a good one.
{ "source": [ "https://stats.meta.stackexchange.com/questions/6066", "https://stats.meta.stackexchange.com", "https://stats.meta.stackexchange.com/users/156974/" ] }
1
How should I elicit prior distributions from experts when fitting a Bayesian model?
John Cook gives some interesting recommendations. Basically, get percentiles/quantiles (not means or obscure scale parameters!) from the experts, and fit them with the appropriate distribution. http://www.johndcook.com/blog/2010/01/31/parameters-from-percentiles/
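To make this concrete, here is a minimal R sketch of the fit-to-quantiles idea. The elicited numbers (a 25th percentile of 0.20 and a 75th percentile of 0.40 for a proportion) and the choice of a Beta family are purely hypothetical illustrations, not part of Cook's recommendation.

```r
# Hypothetical elicited judgements about a proportion
elicited <- c(p25 = 0.20, p75 = 0.40)

# Squared distance between the Beta quantiles and the elicited quantiles
objective <- function(par) {
  a <- exp(par[1]); b <- exp(par[2])               # keep shape parameters positive
  sum((qbeta(c(0.25, 0.75), a, b) - elicited)^2)
}

fit    <- optim(c(0, 0), objective)                # crude but serviceable optimiser
shapes <- exp(fit$par)
shapes                                             # fitted Beta(a, b) prior
qbeta(c(0.25, 0.75), shapes[1], shapes[2])         # check: close to 0.20 and 0.40
```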
{ "source": [ "https://stats.stackexchange.com/questions/1", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8/" ] }
2
In many different statistical methods there is an "assumption of normality". What is "normality" and how do I know if there is normality?
The assumption of normality is just the supposition that the underlying random variable of interest is distributed normally, or approximately so. Intuitively, normality may be understood as the result of the sum of a large number of independent random events. More specifically, normal distributions are defined by the following function: $$ f(x) =\frac{1}{\sqrt{2\pi\sigma^2}}e^{ -\frac{(x-\mu)^2}{2\sigma^2} },$$ where $\mu$ and $\sigma^2$ are the mean and the variance, respectively; its graph is the familiar bell curve. Normality can be checked in multiple ways, which may be more or less suited to your problem depending on its features, such as the sample size $n$. Basically, they all test for features expected if the distribution were normal (e.g. the expected quantile distribution).
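As one illustration of "checked in multiple ways", here is a small R sketch using a normal quantile plot and the Shapiro-Wilk test; the simulated sample is just a stand-in for your own data.

```r
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)  # simulated sample; substitute your own data

qqnorm(x); qqline(x)               # points hugging the line suggest approximate normality
shapiro.test(x)                    # a small p-value is evidence against normality
```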
{ "source": [ "https://stats.stackexchange.com/questions/2", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24/" ] }
3
What are some valuable Statistical Analysis open source projects available right now? Edit: as pointed out by Sharpie, valuable could mean helping you get things done faster or more cheaply.
The R project: http://www.r-project.org/ R is valuable and significant because it was the first widely accepted open-source alternative to big-box packages. It's mature, well supported, and a standard within many scientific communities, and there are plenty of good tutorials available for getting started.
{ "source": [ "https://stats.stackexchange.com/questions/3", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/18/" ] }
6
Last year, I read a blog post from Brendan O'Connor entitled "Statistics vs. Machine Learning, fight!" that discussed some of the differences between the two fields. Andrew Gelman responded favorably to this : Simon Blomberg: From R's fortunes package: To paraphrase provocatively, 'machine learning is statistics minus any checking of models and assumptions'. -- Brian D. Ripley (about the difference between machine learning and statistics) useR! 2004, Vienna (May 2004) :-) Season's Greetings! Andrew Gelman: In that case, maybe we should get rid of checking of models and assumptions more often. Then maybe we'd be able to solve some of the problems that the machine learning people can solve but we can't! There was also the "Statistical Modeling: The Two Cultures" paper by Leo Breiman in 2001 which argued that statisticians rely too heavily on data modeling, and that machine learning techniques are making progress by instead relying on the predictive accuracy of models. Has the statistics field changed over the last decade in response to these critiques? Do the two cultures still exist or has statistics grown to embrace machine learning techniques such as neural networks and support vector machines?
I think the answer to your first question is simply in the affirmative. Take any issue of Statistical Science, JASA, Annals of Statistics of the past 10 years and you'll find papers on boosting, SVM, and neural networks, although this area is less active now. Statisticians have appropriated the work of Valiant and Vapnik, but on the other side, computer scientists have absorbed the work of Donoho and Talagrand. I don't think there is much difference in scope and methods any more. I have never bought Breiman's argument that CS people were only interested in minimizing loss using whatever works. That view was heavily influenced by his participation in Neural Networks conferences and his consulting work; but PAC, SVMs, Boosting have all solid foundations. And today, unlike 2001, Statistics is more concerned with finite-sample properties, algorithms and massive datasets. But I think that there are still three important differences that are not going away soon. Methodological Statistics papers are still overwhelmingly formal and deductive, whereas Machine Learning researchers are more tolerant of new approaches even if they don't come with a proof attached; The ML community primarily shares new results and publications in conferences and related proceedings, whereas statisticians use journal papers. This slows down progress in Statistics and identification of star researchers. John Langford has a nice post on the subject from a while back; Statistics still covers areas that are (for now) of little concern to ML, such as survey design, sampling, industrial Statistics etc.
{ "source": [ "https://stats.stackexchange.com/questions/6", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
7
I've been working on a new method for analyzing and parsing datasets to identify and isolate subgroups of a population without foreknowledge of any subgroup's characteristics. While the method works well enough with artificial data samples (i.e. datasets created specifically for the purpose of identifying and segregating subsets of the population), I'd like to try testing it with live data. What I'm looking for is a freely available (i.e. non-confidential, non-proprietary) data source. Preferably one containing bimodal or multimodal distributions or being obviously comprised of multiple subsets that cannot be easily pulled apart via traditional means. Where would I go to find such information?
Also see the UCI machine learning Data Repository. http://archive.ics.uci.edu/ml/
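Many of the UCI files are plain text and can be pulled straight into R. The URL below points at the classic iris file and is given only as an illustration (check the repository for its current layout); the column names are supplied by hand.

```r
# Read one UCI dataset directly from the repository (URL is illustrative)
url <- "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris_raw <- read.csv(url, header = FALSE,
                     col.names = c("sepal.length", "sepal.width",
                                   "petal.length", "petal.width", "species"))
str(iris_raw)  # quick look at the structure and the class variable
```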
{ "source": [ "https://stats.stackexchange.com/questions/7", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/38/" ] }
10
Many studies in the social sciences use Likert scales. When is it appropriate to use Likert data as ordinal and when is it appropriate to use it as interval data?
Maybe too late but I add my answer anyway... It depends on what you intend to do with your data: If you are interested in showing that scores differ when considering different group of participants (gender, country, etc.), you may treat your scores as numeric values, provided they fulfill usual assumptions about variance (or shape) and sample size. If you are rather interested in highlighting how response patterns vary across subgroups, then you should consider item scores as discrete choice among a set of answer options and look for log-linear modeling, ordinal logistic regression, item-response models or any other statistical model that allows to cope with polytomous items. As a rule of thumb, one generally considers that having 11 distinct points on a scale is sufficient to approximate an interval scale (for interpretation purpose, see @xmjx's comment)). Likert items may be regarded as true ordinal scale, but they are often used as numeric and we can compute their mean or SD. This is often done in attitude surveys, although it is wise to report both mean/SD and % of response in, e.g. the two highest categories. When using summated scale scores (i.e., we add up score on each item to compute a "total score"), usual statistics may be applied, but you have to keep in mind that you are now working with a latent variable so the underlying construct should make sense! In psychometrics, we generally check that (1) unidimensionnality of the scale holds, (2) scale reliability is sufficient. When comparing two such scale scores (for two different instruments), we might even consider using attenuated correlation measures instead of classical Pearson correlation coefficient. Classical textbooks include: 1. Nunnally, J.C. and Bernstein, I.H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill Series in Psychology. 2. Streiner, D.L. and Norman, G.R. (2008). Health Measurement Scales. A practical guide to their development and use (4th ed.). Oxford. 3. Rao, C.R. and Sinharay, S., Eds. (2007). Handbook of Statistics, Vol. 26: Psychometrics . Elsevier Science B.V. 4. Dunn, G. (2000). Statistics in Psychiatry . Hodder Arnold. You may also have a look at Applications of latent trait and latent class models in the social sciences , from Rost & Langeheine, and W. Revelle's website on personality research . When validating a psychometric scale, it is important to look at so-called ceiling/floor effects (large asymmetry resulting from participants scoring at the lowest/highest response category), which may seriously impact on any statistics computed when treating them as numeric variable (e.g., country aggregation, t-test). This raises specific issues in cross-cultural studies since it is known that overall response distribution in attitude or health surveys differ from one country to the other (e.g. chinese people vs. those coming from western countries tend to highlight specific response pattern, the former having generally more extreme scores at the item level, see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models , Lee, S.-Y. (Ed.), pp 279-302, North-Holland). More generally, you should look at the psychometric-related literature which makes extensive use of Likert items if you are interested with measurement issue. Various statistical models have been developed and are currently headed under the Item Response Theory framework.
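As a rough sketch of the "treat the scores as ordinal" route, here is what a proportional-odds (ordinal logistic) model might look like in R; the simulated responses, cut points and group effect are all invented for illustration.

```r
library(MASS)  # for polr()

set.seed(42)
n      <- 300
group  <- factor(sample(c("A", "B"), n, replace = TRUE))
latent <- rnorm(n) + 0.8 * (group == "B")           # group B shifted upward
item   <- cut(latent, breaks = c(-Inf, -1, -0.3, 0.3, 1, Inf),
              labels = 1:5, ordered_result = TRUE)  # 5-point Likert item

# Ordinal treatment: proportional-odds logistic regression
fit <- polr(item ~ group, Hess = TRUE)
summary(fit)

# Numeric treatment (common, but an approximation)
t.test(as.numeric(item) ~ group)
```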
{ "source": [ "https://stats.stackexchange.com/questions/10", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/24/" ] }
22
How would you describe in plain English the characteristics that distinguish Bayesian from Frequentist reasoning?
Here is how I would explain the basic difference to my grandma: I have misplaced my phone somewhere in the home. I can use the phone locator on the base of the instrument to locate the phone and when I press the phone locator the phone starts beeping. Problem: Which area of my home should I search? Frequentist Reasoning I can hear the phone beeping. I also have a mental model which helps me identify the area from which the sound is coming. Therefore, upon hearing the beep, I infer the area of my home I must search to locate the phone. Bayesian Reasoning I can hear the phone beeping. Now, apart from a mental model which helps me identify the area from which the sound is coming from, I also know the locations where I have misplaced the phone in the past. So, I combine my inferences using the beeps and my prior information about the locations I have misplaced the phone in the past to identify an area I must search to locate the phone.
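For anyone who wants to see the same idea in numbers, here is a small R sketch (a coin-flipping stand-in rather than the phone example) of how prior information and the data combine into a posterior; the Beta(4, 4) prior and the 7-heads-in-10-flips data are arbitrary illustrative choices.

```r
heads <- 7; flips <- 10               # the observed data
theta <- seq(0, 1, length.out = 200)  # possible values of P(heads)

prior     <- dbeta(theta, 4, 4)                          # prior: probably near fair
posterior <- dbeta(theta, 4 + heads, 4 + flips - heads)  # conjugate Bayesian update

matplot(theta, cbind(prior, posterior), type = "l", lty = 1,
        col = c("grey60", "black"), xlab = expression(theta), ylab = "density")
legend("topleft", c("prior", "posterior"), lty = 1, col = c("grey60", "black"))
```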
{ "source": [ "https://stats.stackexchange.com/questions/22", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/66/" ] }
23
How can I find the PDF (probability density function) of a distribution given the CDF (cumulative distribution function)?
As user28 said in comments above, the pdf is the first derivative of the cdf for a continuous random variable, and the difference for a discrete random variable. In the continuous case, wherever the cdf has a discontinuity the pdf has an atom. Dirac delta "functions" can be used to represent these atoms.
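A small R sketch of the continuous case, using the exponential CDF $F(x) = 1 - e^{-x}$ as the example: differentiating it recovers the familiar density $f(x) = e^{-x}$.

```r
# Symbolic differentiation of a CDF to recover the pdf
f_expr <- D(expression(1 - exp(-x)), "x")
f_expr                              # exp(-x), i.e. the exponential(1) density

# Numerical check against R's built-in density
x <- seq(0.1, 5, by = 0.1)
max(abs(eval(f_expr) - dexp(x)))    # essentially zero
```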
{ "source": [ "https://stats.stackexchange.com/questions/23", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/69/" ] }
26
What is a standard deviation, how is it calculated and what is its use in statistics?
Standard deviation is a number that represents the "spread" or "dispersion" of a set of data. There are other measures for spread, such as range and variance. Here are some example sets of data, and their standard deviations: [1,1,1] standard deviation = 0 (there's no spread) [-1,1,3] standard deviation = 1.6 (some spread) [-99,1,101] standard deviation = 82 (big spread) The above data sets have the same mean. Deviation means "distance from the mean". "Standard" here means "standardized", meaning the standard deviation and mean are in the same units, unlike variance. For example, if the mean height is 2 meters, the standard deviation might be 0.3 meters, whereas the variance would be 0.09 meters squared. It is convenient to know that at least 75% of the data points always lie within 2 standard deviations of the mean (or around 95% if the distribution is Normal). For example, if the mean is 100, and the standard deviation is 15, then at least 75% of the values are between 70 and 130. If the distribution happens to be Normal, then 95% of the values are between 70 and 130. Generally speaking, IQ test scores are normally distributed and have an average of 100. Someone who is "very bright" is two standard deviations above the mean, meaning an IQ test score of 130.
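A quick R check of the example sets above. Note that the answer's numbers use the population form (divide by $n$), whereas R's built-in sd() divides by $n - 1$, so the two differ slightly on such tiny sets.

```r
pop_sd <- function(x) sqrt(mean((x - mean(x))^2))  # divide by n, as in the answer

pop_sd(c(1, 1, 1))      # 0     (no spread)
pop_sd(c(-1, 1, 3))     # ~1.63 (some spread)
pop_sd(c(-99, 1, 101))  # ~81.6 (big spread)

sd(c(-1, 1, 3))         # 2: the sample SD, dividing by n - 1
```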
{ "source": [ "https://stats.stackexchange.com/questions/26", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/75/" ] }
31
After taking a statistics course and then trying to help fellow students, I noticed one subject that inspires much head-desk banging is interpreting the results of statistical hypothesis tests. It seems that students easily learn how to perform the calculations required by a given test but get hung up on interpreting the results. Many computerized tools report test results in terms of "p values" or "t values". How would you explain the following points to college students taking their first course in statistics: What does a "p-value" mean in relation to the hypothesis being tested? Are there cases when one should be looking for a high p-value or a low p-value? What is the relationship between a p-value and a t-value?
Understanding $p$ -value Suppose, that you want to test the hypothesis that the average height of male students at your University is $5$ ft $7$ inches. You collect heights of $100$ students selected at random and compute the sample mean (say it turns out to be $5$ ft $9$ inches). Using an appropriate formula/statistical routine you compute the $p$ -value for your hypothesis and say it turns out to be $0.06$ . In order to interpret $p=0.06$ appropriately, we should keep several things in mind: The first step under classical hypothesis testing is the assumption that the hypothesis under consideration is true. (In our context, we assume that the true average height is $5$ ft $7$ inches.) Imagine doing the following calculation: Compute the probability that the sample mean is greater than $5$ ft $9$ inches assuming that our hypothesis is in fact correct (see point 1). In other words, we want to know $$\mathrm{P}(\mathrm{Sample\: mean} \ge 5 \:\mathrm{ft} \:9 \:\mathrm{inches} \:|\: \mathrm{True\: value} = 5 \:\mathrm{ft}\: 7\: \mathrm{inches}).$$ The calculation in step 2 is what is called the $p$ -value. Therefore, a $p$ -value of $0.06$ would mean that if we were to repeat our experiment many, many times (each time we select $100$ students at random and compute the sample mean) then $6$ times out of $100$ we can expect to see a sample mean greater than or equal to $5$ ft $9$ inches. Given the above understanding, should we still retain our assumption that our hypothesis is true (see step 1)? Well, a $p=0.06$ indicates that one of two things have happened: (A) Either our hypothesis is correct and an extremely unlikely event has occurred (e.g., all $100$ students are student athletes) or (B) Our assumption is incorrect and the sample we have obtained is not that unusual. The traditional way to choose between (A) and (B) is to choose an arbitrary cut-off for $p$ . We choose (A) if $p > 0.05$ and (B) if $p < 0.05$ .
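Here is a minimal R sketch of the height example. The standard deviation of the sample is invented (the explanation never specifies one), so the p-value will not come out to exactly $0.06$; the point is only to show where such a number comes from.

```r
set.seed(7)
heights <- rnorm(100, mean = 69, sd = 13)  # hypothetical sample of 100 heights, in inches

# H0: the true mean height is 67 inches (5 ft 7 in); one-sided alternative,
# matching P(sample mean >= observed | H0 true) in the explanation above
t.test(heights, mu = 67, alternative = "greater")
```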
{ "source": [ "https://stats.stackexchange.com/questions/31", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/13/" ] }
50
What do they mean when they say "random variable"?
A random variable is a variable whose value depends on unknown events. We can summarize the unknown events as "state", and then the random variable is a function of the state. Example: Suppose we have three dice rolls ($D_{1}$,$D_{2}$,$D_{3}$). Then the state $S=(D_{1},D_{2},D_{3})$. One random variable $X$ is the number of 5s. This is: $$ X=(D_{1}=5?)+(D_{2}=5?)+(D_{3}=5?)$$ Another random variable $Y$ is the sum of the dice rolls. This is: $$ Y=D_{1}+D_{2}+D_{3} $$
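In R, the dice example might look like this: the "state" is one draw of three dice, $X$ and $Y$ are functions of that state, and repeating the experiment traces out their distributions.

```r
set.seed(1)
state <- sample(1:6, 3, replace = TRUE)  # one realisation of (D1, D2, D3)

X <- sum(state == 5)  # number of fives
Y <- sum(state)       # sum of the three rolls

# The distribution of Y, approximated by repeating the experiment many times
Y_many <- replicate(10000, sum(sample(1:6, 3, replace = TRUE)))
round(prop.table(table(Y_many)), 3)
```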
{ "source": [ "https://stats.stackexchange.com/questions/50", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/62/" ] }
53
What are the main differences between performing principal component analysis (PCA) on the correlation matrix and on the covariance matrix? Do they give the same results?
You tend to use the covariance matrix when the variable scales are similar and the correlation matrix when variables are on different scales. Using the correlation matrix is equivalent to standardizing each of the variables (to mean 0 and standard deviation 1). In general, PCA with and without standardizing will give different results. Especially when the scales are different. As an example, take a look at this R heptathlon data set. Some of the variables have an average value of about 1.8 (the high jump), whereas other variables (run 800m) are around 120. library(HSAUR) heptathlon[,-8] # look at heptathlon data (excluding 'score' variable) This outputs: hurdles highjump shot run200m longjump javelin run800m Joyner-Kersee (USA) 12.69 1.86 15.80 22.56 7.27 45.66 128.51 John (GDR) 12.85 1.80 16.23 23.65 6.71 42.56 126.12 Behmer (GDR) 13.20 1.83 14.20 23.10 6.68 44.54 124.20 Sablovskaite (URS) 13.61 1.80 15.23 23.92 6.25 42.78 132.24 Choubenkova (URS) 13.51 1.74 14.76 23.93 6.32 47.46 127.90 ... Now let's do PCA on covariance and on correlation: # scale=T bases the PCA on the correlation matrix hep.PC.cor = prcomp(heptathlon[,-8], scale=TRUE) hep.PC.cov = prcomp(heptathlon[,-8], scale=FALSE) biplot(hep.PC.cov) biplot(hep.PC.cor) Notice that PCA on covariance is dominated by run800m and javelin : PC1 is almost equal to run800m (and explains $82\%$ of the variance) and PC2 is almost equal to javelin (together they explain $97\%$ ). PCA on correlation is much more informative and reveals some structure in the data and relationships between variables (but note that the explained variances drop to $64\%$ and $71\%$ ). Notice also that the outlying individuals (in this data set) are outliers regardless of whether the covariance or correlation matrix is used.
{ "source": [ "https://stats.stackexchange.com/questions/53", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17/" ] }
97
I have some ordinal data gained from survey questions. In my case they are Likert style responses (Strongly Disagree-Disagree-Neutral-Agree-Strongly Agree). In my data they are coded as 1-5. I don't think means would mean much here, so what basic summary statistics are considered usefull?
A frequency table is a good place to start. You can do the count, and relative frequency for each level. Also, the total count, and number of missing values may be of use. You can also use a contingency table to compare two variables at once. Can display using a mosaic plot too.
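A small R sketch of those suggestions with made-up Likert data: a frequency table, relative frequencies, and a two-way contingency table displayed as a mosaic plot.

```r
set.seed(3)
resp  <- factor(sample(1:5, 200, replace = TRUE, prob = c(.10, .20, .30, .25, .15)),
                levels = 1:5)                              # simulated 1-5 responses
group <- factor(sample(c("A", "B"), 200, replace = TRUE))

table(resp)                        # counts per level
round(prop.table(table(resp)), 2)  # relative frequencies

tab <- table(group, resp)          # contingency table of two variables at once
tab
mosaicplot(tab, main = "Responses by group")
```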
{ "source": [ "https://stats.stackexchange.com/questions/97", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/114/" ] }
118
In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the data? The number is going to be different from square method (the absolute-value method will be smaller), but it should still show the spread of data. Anybody know why we take this square approach as a standard? The definition of standard deviation: $\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$ Can't we just take the absolute value instead and still be a good measurement? $\sigma = E\left[|X - \mu|\right]$
If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread. The benefits of squaring include: Squaring always gives a non-negative value, so the sum will always be zero or higher. Squaring emphasizes larger differences, a feature that turns out to be both good and bad (think of the effect outliers have). Squaring however does have a problem as a measure of spread and that is that the units are all squared, whereas we might prefer the spread to be in the same units as the original data (think of squared pounds, squared dollars, or squared apples). Hence the square root allows us to return to the original units. I suppose you could say that absolute difference assigns equal weight to the spread of data whereas squaring emphasises the extremes. Technically though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution) It is important to note however that there's no reason you couldn't take the absolute difference if that is your preference on how you wish to view 'spread' (sort of how some people see 5% as some magical threshold for $p$ -values, when in fact it is situation dependent). Indeed, there are in fact several competing methods for measuring spread. My view is to use the squared values because I like to think of how it relates to the Pythagorean Theorem of Statistics: $c = \sqrt{a^2 + b^2}$ …this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference which I mostly only use as a memory aid, feel free to ignore this paragraph. An interesting analysis can be read here: Revisiting a 90-year-old debate: the advantages of the mean deviation - Stephen Gorard (Department of Educational Studies, University of York); Paper presented at the British Educational Research Association Annual Conference, University of Manchester, 16-18 September 2004
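To see the "emphasizes larger differences" point numerically, here is a short R comparison of the standard deviation with the mean absolute deviation about the mean (written out by hand, since R's built-in mad() is the median absolute deviation); the data are arbitrary.

```r
mad_mean <- function(x) mean(abs(x - mean(x)))  # mean absolute deviation about the mean

x <- c(2, 4, 4, 4, 5, 5, 7, 9)
c(sd = sd(x), mad = mad_mean(x))

# Squaring reacts much more strongly to an extreme value
y <- c(x, 50)
c(sd = sd(y), mad = mad_mean(y))
```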
{ "source": [ "https://stats.stackexchange.com/questions/118", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83/" ] }
125
Which is the best introductory textbook for Bayesian statistics? One book per answer, please.
John Kruschke released a book in mid 2011 called Doing Bayesian Data Analysis: A Tutorial with R and BUGS . (A second edition was released in Nov 2014: Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan .) It is truly introductory. If you want to walk from frequentist stats into Bayes though, especially with multilevel modelling, I recommend Gelman and Hill. John Kruschke also has a website for the book that has all the examples in the book in BUGS and JAGS. His blog on Bayesian statistics also links in with the book.
{ "source": [ "https://stats.stackexchange.com/questions/125", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
130
I had a plan of learning R in the near future. Reading another question I found out about Clojure. Now I don't know what to do. I think a big advantage of R for me is that some people in Economics use it, including one of my supervisors (though the other said: stay away from R!). One advantage of Clojure is that it is Lisp-based, and as I have started learning Emacs and I am keen on writing my own customisations, it would be helpful (yeah, I know Clojure and Elisp are different dialects of Lisp, but they are both Lisp and thus similar I would imagine). I can't ask which one is better, because I know this is very personal, but could someone give me the advantages (or advantages) of Clojure x R, especially in practical terms? For example, which one should be easier to learn, which one is more flexible or more powerful, which one has more libraries, more support, more users, etc? My intended use : The bulk of my estimation should be done using Matlab, so I am not looking for anything too deep in terms of statistical analysis, but rather a software to substitute Excel for the initial data manipulation and visualisation, summary statistics and charting, but also some basic statistical analysis or the initial attempts at my estimation.
Let me start by saying that I love both languages: you can't go wrong with either, and they are certainly better than something like C++ or Java for doing data analysis. For basic data analysis I would suggest R (especially with plyr). IMO, R is a little easier to learn than Clojure, although this isn't completely obvious since Clojure is based on Lisp and there are numerous fantastic Lisp resources available (such as SICP ). There are less keywords in Clojure, but the libraries are much more difficult to install and work with. Also, keep in mind that R (or S) is largely derived from Scheme, so you would benefit from Lisp knowledge when using it. In general: The main advantage of R is the community on CRAN (over 2461 packages and counting). Nothing will compare with this in the near future, not even a commercial application like matlab. Clojure has the big advantage of running on the JVM which means that it can use any Java based library immediately. I would add that I gave a talk relating Clojure/Incanter to R a while ago, so you may find it of interest. In my experience around creating this, Clojure was generally slower than R for simple operations.
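For the "substitute Excel" use case described in the question, the everyday workflow in base R might look something like this; the built-in mtcars data are purely a stand-in for your own spreadsheet.

```r
data(mtcars)

aggregate(mpg ~ cyl, data = mtcars, FUN = mean)  # grouped summary, pivot-table style
summary(mtcars$mpg)                              # quick summary statistics
boxplot(mpg ~ cyl, data = mtcars,
        xlab = "Cylinders", ylab = "Miles per gallon")
```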
{ "source": [ "https://stats.stackexchange.com/questions/130", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90/" ] }
138
I'm interested in learning R on the cheap. What's the best free resource/book/tutorial for learning R?
Some useful R links (find out the link that suits you): Intro: for R basics http://cran.r-project.org/doc/contrib/usingR.pdf for data manipulation http://had.co.nz/plyr/plyr-intro-090510.pdf http://portal.stats.ox.ac.uk/userdata/ruth/APTS2012/APTS.html Interactive intro to R programming language https://www.datacamp.com/courses/introduction-to-r Application focused R tutorial https://www.teamleada.com/tutorials/introduction-to-statistical-programming-in-r In-browser learning for R http://tryr.codeschool.com/ with a focus on economics: lecture notes with R code http://www.econ.uiuc.edu/~econ472/e-Tutorial.html A brief guide to R and Economics http://people.su.se/~ma/R_intro/R_intro.pdf Graphics: plots, maps, etc.: tutorial with info on plots http://cran.r-project.org/doc/contrib/Rossiter-RIntro-ITC.pdf a graph gallery of R plots and charts with supporting code http://addictedtor.free.fr/graphiques/ A tutorial for Lattice http://osiris.sunderland.ac.uk/~cs0her/Statistics/UsingLatticeGraphicsInR.htm Ggplot R graphics http://had.co.nz/ggplot2/ Ggplot Vs Lattice @ http://had.co.nz/ggplot/vs-lattice.html Multiple tutorials for using ggplot2 and Lattice http://learnr.wordpress.com/tag/ggplot2/ Google Charts with R http://www.iq.harvard.edu/blog/sss/archives/2008/04/google_charts_f_1.shtml Introduction to using RGoogleMaps @ http://cran.r-project.org/web/packages/RgoogleMaps/vignettes/RgoogleMaps-intro.pdf Thematic Maps with R https://stackoverflow.com/questions/1260965/developing-geographic-thematic-maps-with-r geographic maps in R http://smartdatacollective.com/Home/22052 GUIs: Poor Man GUI for R http://wiener.math.csi.cuny.edu/pmg/ R Commander is a robust GUI for R http://socserv.mcmaster.ca/jfox/Misc/Rcmdr/installation-notes.html JGR is a Java-based GUI for R http://jgr.markushelbig.org/Screenshots.html Time series & finance: a good beginner’s tutorial for Time Series http://www.stat.pitt.edu/stoffer/tsa2/index.html Interesting time series packages in R http://robjhyndman.com/software advanced time series in R http://www.wise.xmu.edu.cn/2007summerworkshop/download/Advanced%20Topics%20in%20Time%20Series%20Econometrics%20Using%20R1_ZongwuCAI.pdf provides a great analysis and visualization framework for quantitative trading http://www.quantmod.com/ Guide to Credit Scoring using R http://cran.r-project.org/doc/contrib/Sharma-CreditScoring.pdf an Open Source framework for Financial Analysis http://www.rmetrics.org/ Data / text mining: A Data Mining tool in R http://rattle.togaware.com/ An online e-book for Data Mining with R http://www.liaad.up.pt/~ltorgo/DataMiningWithR/ Introduction to the Text Mining package in R http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf Other statistical techniques: Quick-R http://www.statmethods.net/ annotated guides for a variety of models http://www.ats.ucla.edu/stat/r/dae/default.htm Social Network Analysis http://www.r-project.org/conferences/useR-2008/slides/Bojanowski.pdf Editors: Komodo Edit R editor http://www.sciviews.org/SciViews-K/index.html Tinn-R makes for a good R editor http://www.sciviews.org/Tinn-R/ An Eclipse plugin for R @ http://www.walware.de/goto/statet Instructions to install StatET in Eclipse http://www.splusbook.com/Rintro/R_Eclipse_StatET.pdf RStudio http://rstudio.org/ Emacs Speaks Statistics, a statistical language package for Emacs http://ess.r-project.org/ Interfacing w/ other languages / software: to embed R data frames in Excel via multiple approaches 
http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/ provides a tool to make R usable from Excel http://www.statconn.com/ Connect to MySQL from R http://erikvold.com/blog/index.cfm/2008/8/20/how-to-connect-to-mysql-with-r-in-wndows-using-rmysql info about pulling data from SAS, STATA, SPSS, etc. http://www.statmethods.net/input/importingdata.html Latex http://www.stat.uni-muenchen.de/~leisch/Sweave/ R2HTML http://www.feferraz.net/en/P/R2HTML Blogs, newsletters, etc.: A very informative blog http://blog.revolutionanalytics.com/ A blog aggregator for posts about R http://www.r-bloggers.com/ R mailing lists http://www.r-project.org/mail.html R newsletter (old) http://cran.r-project.org/doc/Rnews/ R journal (current) http://journal.r-project.org/ Other / uncategorized: (as of yet) Web Scraping in R http://www.programmingr.com/content/webscraping-using-readlines-and-rcurl a very interesting list of packages that is seriously worth a look http://www.omegahat.org/ Commercial versions of R @ http://www.revolutionanalytics.com/ Red R for R tasks http://code.google.com/p/r-orange/ KNIME for R (worth a serious look) http://www.knime.org/introduction/screenshots R Tutorial for Titanic https://statsguys.wordpress.com/
{ "source": [ "https://stats.stackexchange.com/questions/138", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/142/" ] }
165
Maybe the concept, why it's used, and an example.
First, we need to understand what is a Markov chain. Consider the following weather example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we know the following: $P(\text{Next day is Sunny}\,\vert \,\text{Given today is Rainy)}=0.50$ Since, the next day's weather is either sunny or rainy it follows that: $P(\text{Next day is Rainy}\,\vert \,\text{Given today is Rainy)}=0.50$ Similarly, let: $P(\text{Next day is Rainy}\,\vert \,\text{Given today is Sunny)}=0.10$ Therefore, it follows that: $P(\text{Next day is Sunny}\,\vert \,\text{Given today is Sunny)}=0.90$ The above four numbers can be compactly represented as a transition matrix which represents the probabilities of the weather moving from one state to another state as follows: $P = \begin{bmatrix} & S & R \\ S& 0.9 & 0.1 \\ R& 0.5 & 0.5 \end{bmatrix}$ We might ask several questions whose answers follow: Q1: If the weather is sunny today then what is the weather likely to be tomorrow? A1: Since, we do not know what is going to happen for sure, the best we can say is that there is a $90\%$ chance that it is likely to be sunny and $10\%$ that it will be rainy. Q2: What about two days from today? A2: One day prediction: $90\%$ sunny, $10\%$ rainy. Therefore, two days from now: First day it can be sunny and the next day also it can be sunny. Chances of this happening are: $0.9 \times 0.9$. Or First day it can be rainy and second day it can be sunny. Chances of this happening are: $0.1 \times 0.5$. Therefore, the probability that the weather will be sunny in two days is: $P(\text{Sunny 2 days from now} = 0.9 \times 0.9 + 0.1 \times 0.5 = 0.81 + 0.05 = 0.86$ Similarly, the probability that it will be rainy is: $P(\text{Rainy 2 days from now} = 0.1 \times 0.5 + 0.9 \times 0.1 = 0.05 + 0.09 = 0.14$ In linear algebra (transition matrices) these calculations correspond to all the permutations in transitions from one step to the next (sunny-to-sunny ($S_2S$), sunny-to-rainy ($S_2R$), rainy-to-sunny ($R_2S$) or rainy-to-rainy ($R_2R$)) with their calculated probabilities: On the lower part of the image we see how to calculate the probability of a future state ($t+1$ or $t+2$) given the probabilities (probability mass function, $PMF$) for every state (sunny or rainy) at time zero (now or $t_0$) as simple matrix multiplication. If you keep forecasting weather like this you will notice that eventually the $n$-th day forecast, where $n$ is very large (say $30$), settles to the following 'equilibrium' probabilities: $P(\text{Sunny}) = 0.833$ and $P(\text{Rainy}) = 0.167$ In other words, your forecast for the $n$-th day and the $n+1$-th day remain the same. In addition, you can also check that the 'equilibrium' probabilities do not depend on the weather today. You would get the same forecast for the weather if you start of by assuming that the weather today is sunny or rainy. The above example will only work if the state transition probabilities satisfy several conditions which I will not discuss here. But, notice the following features of this 'nice' Markov chain (nice = transition probabilities satisfy conditions): Irrespective of the initial starting state we will eventually reach an equilibrium probability distribution of states. Markov Chain Monte Carlo exploits the above feature as follows: We want to generate random draws from a target distribution. 
We then identify a way to construct a 'nice' Markov chain such that its equilibrium probability distribution is our target distribution. If we can construct such a chain then we arbitrarily start from some point and iterate the Markov chain many times (like how we forecast the weather $n$ times). Eventually, the draws we generate would appear as if they are coming from our target distribution. We then approximate the quantities of interest (e.g. mean) by taking the sample average of the draws after discarding a few initial draws which is the Monte Carlo component. There are several ways to construct 'nice' Markov chains (e.g., Gibbs sampler, Metropolis-Hastings algorithm).
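Here is a small R sketch of the weather chain above: iterating the transition matrix shows the forecast settling to the equilibrium (about 0.833/0.167), and simply simulating the chain for a long time and tabulating the visits recovers the same distribution, which is the "Monte Carlo" part in miniature.

```r
P <- matrix(c(0.9, 0.1,
              0.5, 0.5), nrow = 2, byrow = TRUE,
            dimnames = list(c("Sunny", "Rainy"), c("Sunny", "Rainy")))

# Forecast n days ahead, starting from a sunny day
forecast <- c(Sunny = 1, Rainy = 0)
for (i in 1:30) forecast <- forecast %*% P
forecast                           # approaches the equilibrium (0.833, 0.167)

# The same equilibrium from simulating the chain for a long time
set.seed(1)
state  <- "Sunny"
visits <- character(50000)
for (i in seq_along(visits)) {
  state     <- sample(colnames(P), 1, prob = P[state, ])
  visits[i] <- state
}
prop.table(table(visits))          # Monte Carlo estimate of the equilibrium
```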
{ "source": [ "https://stats.stackexchange.com/questions/165", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ] }
170
Are there any free statistical textbooks available?
Online books include http://davidmlane.com/hyperstat/ http://vassarstats.net/textbook/ http://www.psychstat.missouristate.edu/multibook2/mlt.htm http://bookboon.com/uk/student/statistics http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html Update: I can now add my own forecasting textbook Forecasting: principles and practice (Hyndman & Athanasopoulos, 2012)
{ "source": [ "https://stats.stackexchange.com/questions/170", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/8/" ] }
181
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
I realize this question has been answered, but I don't think the extant answer really engages the question beyond pointing to a link generally related to the question's subject matter. In particular, the link describes one technique for programmatic network configuration, but that is not a " [a] standard and accepted method " for network configuration. By following a small set of clear rules, one can programmatically set a competent network architecture (i.e., the number and type of neuronal layers and the number of neurons comprising each layer). Following this schema will give you a competent architecture but probably not an optimal one. But once this network is initialized, you can iteratively tune the configuration during training using a number of ancillary algorithms; one family of these works by pruning nodes based on (small) values of the weight vector after a certain number of training epochs--in other words, eliminating unnecessary/redundant nodes (more on this below). So every NN has three types of layers: input , hidden , and output . Creating the NN architecture, therefore, means coming up with values for the number of layers of each type and the number of nodes in each of these layers. The Input Layer Simple--every NN has exactly one of them--no exceptions that I'm aware of. With respect to the number of neurons comprising this layer, this parameter is completely and uniquely determined once you know the shape of your training data. Specifically, the number of neurons comprising that layer is equal to the number of features (columns) in your data . Some NN configurations add one additional node for a bias term. The Output Layer Like the Input layer, every NN has exactly one output layer. Determining its size (number of neurons) is simple; it is completely determined by the chosen model configuration. Is your NN going to run in Machine Mode or Regression Mode (the ML convention of using a term that is also used in statistics but assigning a different meaning to it is very confusing)? Machine mode: returns a class label (e.g., "Premium Account"/"Basic Account"). Regression Mode returns a value (e.g., price). If the NN is a regressor, then the output layer has a single node. If the NN is a classifier, then it also has a single node unless softmax is used in which case the output layer has one node per class label in your model. The Hidden Layers So those few rules set the number of layers and size (neurons/layer) for both the input and output layers. That leaves the hidden layers. How many hidden layers? Well, if your data is linearly separable (which you often know by the time you begin coding a NN), then you don't need any hidden layers at all. Of course, you don't need an NN to resolve your data either, but it will still do the job. Beyond that, as you probably know, there's a mountain of commentary on the question of hidden layer configuration in NNs (see the insanely thorough and insightful NN FAQ for an excellent summary of that commentary). One issue within this subject on which there is a consensus is the performance difference from adding additional hidden layers: the situations in which performance improves with a second (or third, etc.) hidden layer are very few. One hidden layer is sufficient for the large majority of problems. So what about the size of the hidden layer(s)--how many neurons? 
There are some empirically derived rules of thumb; of these, the most commonly relied on is ' the optimal size of the hidden layer is usually between the size of the input and size of the output layers '. Jeff Heaton, the author of Introduction to Neural Networks in Java , offers a few more. In sum, for most problems, one could probably get decent performance (even without a second optimization step) by setting the hidden layer configuration using just two rules: (i) the number of hidden layers equals one; and (ii) the number of neurons in that layer is the mean of the neurons in the input and output layers. Optimization of the Network Configuration Pruning describes a set of techniques to trim network size (by nodes, not layers) to improve computational performance and sometimes resolution performance. The gist of these techniques is removing nodes from the network during training by identifying those nodes which, if removed from the network, would not noticeably affect network performance (i.e., resolution of the data). (Even without using a formal pruning technique, you can get a rough idea of which nodes are not important by looking at your weight matrix after training; look at weights very close to zero--it's the nodes on either end of those weights that are often removed during pruning.) Obviously, if you use a pruning algorithm during training, then begin with a network configuration that is more likely to have excess (i.e., 'prunable') nodes--in other words, when deciding on network architecture, err on the side of more neurons, if you add a pruning step. Put another way, by applying a pruning algorithm to your network during training, you can approach optimal network configuration; whether you can do that in a single "up-front" (such as a genetic-algorithm-based algorithm), I don't know, though I do know that for now, this two-step optimization is more common.
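As a small illustration of those two rules in R (the nnet package and the iris data are assumptions made just for this example; any single-hidden-layer implementation would do the same job):

library(nnet)
# rule (i): one hidden layer; rule (ii): hidden size = mean of input and output node counts
n_inputs  <- 4                                  # four features in iris
n_outputs <- 3                                  # three class labels (softmax output)
n_hidden  <- round((n_inputs + n_outputs) / 2)  # = 4 under the rule of thumb
set.seed(1)
fit <- nnet(Species ~ ., data = iris, size = n_hidden, maxit = 200, trace = FALSE)
table(predicted = predict(fit, iris, type = "class"), actual = iris$Species)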
{ "source": [ "https://stats.stackexchange.com/questions/181", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ] }
206
What is the difference between discrete data and continuous data?
Discrete data can only take particular values. There may potentially be an infinite number of those values, but each is distinct and there's no grey area in between. Discrete data can be numeric -- like numbers of apples -- but it can also be categorical -- like red or blue, or male or female, or good or bad. Continuous data are not restricted to defined separate values, but can occupy any value over a continuous range. Between any two continuous data values, there may be an infinite number of others. Continuous data are always essentially numeric. It sometimes makes sense to treat discrete data as continuous and the other way around: For example, something like height is continuous, but often we don't really care too much about tiny differences and instead group heights into a number of discrete bins (e.g. measuring only to the nearest centimetre). Conversely, if we're counting large amounts of some discrete entity -- e.g. grains of rice, or termites, or pennies in the economy -- we may choose not to think of 2,000,006 and 2,000,008 as crucially different values but instead as nearby points on an approximate continuum. It can also sometimes be useful to treat numeric data as categorical, e.g. underweight, normal, obese. This is usually just another kind of binning. It seldom makes sense to consider categorical data as continuous.
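A small R sketch of both directions (the break points and the Poisson mean are arbitrary, purely for illustration):

set.seed(1)
heights <- rnorm(100, mean = 170, sd = 10)                # continuous measurements
bins <- cut(heights, breaks = c(-Inf, 160, 175, Inf),
            labels = c("short", "medium", "tall"))        # treated as ordinal/discrete
table(bins)
rice <- rpois(100, lambda = 2e6)   # a discrete count, but enormous ...
hist(rice)                         # ... so it is usually analysed as if continuous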
{ "source": [ "https://stats.stackexchange.com/questions/206", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/188/" ] }
220
If $X_1, ..., X_n$ are independent identically-distributed random variables, what can be said about the distribution of $\min(X_1, ..., X_n)$ in general?
If the cdf of $X_i$ is denoted by $F(x)$, then the cdf of the minimum is given by $1-[1-F(x)]^n$. The reasoning: the minimum exceeds $x$ exactly when every $X_i$ exceeds $x$, so by independence $P(\min > x) = [1-F(x)]^n$, and hence $P(\min \le x) = 1-[1-F(x)]^n$.
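A quick simulation check of the formula in R (exponential variables chosen arbitrarily for the example):

set.seed(1)
n <- 5
mins <- replicate(20000, min(rexp(n)))   # minima of n iid Exp(1) samples
x <- 0.3
mean(mins <= x)                          # empirical P(min <= x)
1 - (1 - pexp(x))^n                      # theoretical 1 - [1 - F(x)]^n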
{ "source": [ "https://stats.stackexchange.com/questions/220", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85/" ] }
222
What are principal component scores (PC scores, PCA scores)?
First, let's define a score. John, Mike and Kate get the following percentages for exams in Maths, Science, English and Music as follows: Maths Science English Music John 80 85 60 55 Mike 90 85 70 45 Kate 95 80 40 50 In this case there are 12 scores in total. Each score represents the exam results for each person in a particular subject. So a score in this case is simply a representation of where a row and column intersect. Now let's informally define a Principal Component. In the table above, can you easily plot the data in a 2D graph? No, because there are four subjects (which means four variables: Maths, Science, English, and Music), i.e.: You could plot two subjects in the exact same way you would with $x$ and $y$ co-ordinates in a 2D graph. You could even plot three subjects in the same way you would plot $x$ , $y$ and $z$ in a 3D graph (though this is generally bad practice, because some distortion is inevitable in the 2D representation of 3D data). But how would you plot 4 subjects? At the moment we have four variables which each represent just one subject. So a method around this might be to somehow combine the subjects into maybe just two new variables which we can then plot. This is known as Multidimensional scaling . Principal Component analysis is a form of multidimensional scaling. It is a linear transformation of the variables into a lower dimensional space which retain maximal amount of information about the variables. For example, this would mean we could look at the types of subjects each student is maybe more suited to. A principal component is therefore a combination of the original variables after a linear transformation. In R, this is: DF <- data.frame(Maths=c(80, 90, 95), Science=c(85, 85, 80), English=c(60, 70, 40), Music=c(55, 45, 50)) prcomp(DF, scale = FALSE) Which will give you something like this (first two Principal Components only for sake of simplicity): PC1 PC2 Maths 0.27795606 0.76772853 Science -0.17428077 -0.08162874 English -0.94200929 0.19632732 Music 0.07060547 -0.60447104 The first column here shows coefficients of linear combination that defines principal component #1, and the second column shows coefficients for principal component #2. So what is a Principal Component Score? It's a score from the table at the end of this post (see below). The above output from R means we can now plot each person's score across all subjects in a 2D graph as follows. First, we need to center the original variables by subtracting column means: Maths Science English Music John -8.33 1.66 3.33 5 Mike 1.66 1.66 13.33 -5 Kate 6.66 -3.33 -16.66 0 And then to form linear combinations to get PC1 and PC2 scores : x y John -0.28*8.33 + -0.17*1.66 + -0.94*3.33 + 0.07*5 -0.77*8.33 + -0.08*1.66 + 0.19*3.33 + -0.60*5 Mike 0.28*1.66 + -0.17*1.66 + -0.94*13.33 + -0.07*5 0.77*1.66 + -0.08*1.66 + 0.19*13.33 + -0.60*5 Kate 0.28*6.66 + 0.17*3.33 + 0.94*16.66 + 0.07*0 0.77*6.66 + 0.08*3.33 + -0.19*16.66 + -0.60*0 Which simplifies to: x y John -5.39 -8.90 Mike -12.74 6.78 Kate 18.13 2.12 There are six principal component scores in the table above. You can now plot the scores in a 2D graph to get a sense of the type of subjects each student is perhaps more suited to. The same output can be obtained in R by typing prcomp(DF, scale = FALSE)$x . EDIT 1: Hmm, I probably could have thought up a better example, and there is more to it than what I've put here, but I hope you get the idea. EDIT 2: full credit to @drpaulbrewer for his comment in improving this answer.
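If you want to check the hand calculation directly in R, the following sketch reproduces the principal component scores from the centred data and the loadings returned by prcomp (it matches the table above up to rounding and possible sign flips of the components):

DF <- data.frame(Maths   = c(80, 90, 95),
                 Science = c(85, 85, 80),
                 English = c(60, 70, 40),
                 Music   = c(55, 45, 50))
p  <- prcomp(DF, scale = FALSE)
Xc <- scale(as.matrix(DF), center = TRUE, scale = FALSE)  # centred variables
Xc %*% p$rotation   # scores = centred data times the loading (rotation) matrix
p$x                 # the same scores, as returned by prcomp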
{ "source": [ "https://stats.stackexchange.com/questions/222", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/191/" ] }
256
What is the easiest way to understand boosting? Why doesn't it boost very weak classifiers "to infinity" (perfection)?
In plain English: If your classifier misclassifies some data, train another copy of it mainly on this misclassified part in the hope that it will discover something subtle. And then, as usual, iterate. On the way there are some voting schemes that allow you to combine all those classifiers' predictions in a sensible way. As to why it cannot boost very weak classifiers to perfection: because sometimes that is impossible (the noise is just hiding some of the information, or it is not even present in the data); on the other hand, boosting too much may lead to overfitting.
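To make the "reweight the misclassified points and refit" idea concrete, here is a minimal AdaBoost-style sketch in R using decision stumps from rpart; the simulated data and the choice of rpart are illustrative assumptions, not part of the original answer:

library(rpart)
set.seed(1)
n  <- 200
x1 <- runif(n); x2 <- runif(n)
y  <- factor(ifelse(x1 + x2 + rnorm(n, sd = 0.2) > 1, 1, -1))
dat <- data.frame(x1, x2, y)
M <- 20                       # number of boosting rounds
w <- rep(1 / n, n)            # observation weights, initially uniform
stumps <- vector("list", M); alpha <- numeric(M)
for (m in 1:M) {
  stumps[[m]] <- rpart(y ~ x1 + x2, data = dat, weights = w,
                       control = rpart.control(maxdepth = 1))   # a weak learner
  miss <- as.numeric(predict(stumps[[m]], dat, type = "class") != y)
  err  <- sum(w * miss) / sum(w)
  alpha[m] <- 0.5 * log((1 - err) / err)           # this learner's vote
  w <- w * exp(alpha[m] * miss); w <- w / sum(w)   # up-weight the mistakes
}
votes <- sapply(1:M, function(m)
  alpha[m] * ifelse(predict(stumps[[m]], dat, type = "class") == "1", 1, -1))
mean(ifelse(rowSums(votes) > 0, "1", "-1") == as.character(y))   # training accuracy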
{ "source": [ "https://stats.stackexchange.com/questions/256", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/217/" ] }
269
What is the difference between a population and a sample? What common variables and statistics are used for each one, and how do those relate to each other?
The population is the set of entities under study. For example, suppose we are interested in the mean height of men. This is a hypothetical population because it includes all men that have lived, are alive, and will live in the future. I like this example because it drives home the point that we, as analysts, choose the population that we wish to study. Typically it is impossible to survey/measure the entire population because not all members are observable (e.g. men who will exist in the future). If it is possible to enumerate the entire population it is often costly to do so and would take a great deal of time. In the example above we have a population "men" and a parameter of interest, their height. Instead, we could take a subset of this population called a sample and use this sample to draw inferences about the population under study, given some conditions. Thus we could measure the mean height of men in a sample of the population, which we call a statistic, and use this to draw inferences about the parameter of interest in the population. It is an inference because there will be some uncertainty and inaccuracy involved in drawing conclusions about the population based upon a sample. This should be obvious: we have fewer members in our sample than in our population, so we have lost some information. There are many ways to select a sample, and the study of this is called sampling theory. A commonly used method is called Simple Random Sampling (SRS). In SRS each member of the population has an equal probability of being included in the sample, hence the term "random". There are many other sampling methods, e.g. stratified sampling, cluster sampling, etc., which all have their advantages and disadvantages. It is important to remember that the sample we draw from the population is only one of a large number of potential samples. If ten researchers were all studying the same population, each drawing their own sample, then they may obtain different answers. Returning to our earlier example, each of the ten researchers may come up with a different mean height of men, i.e. the statistic in question (mean height) varies from sample to sample -- it has a distribution, called a sampling distribution. We can use this distribution to understand the uncertainty in our estimate of the population parameter. The sampling distribution of the sample mean is approximately normal (exactly normal if the population itself is normal), with a standard deviation equal to the population standard deviation divided by the square root of the sample size; in practice we estimate this using the sample standard deviation. Because this could easily be confused with the standard deviation of the sample, it is more common to call the standard deviation of the sampling distribution the standard error .
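A short simulation makes the sampling-distribution point concrete (the population mean of 178 cm and standard deviation of 7 cm are made up for the example):

set.seed(1)
sample_means <- replicate(10000, mean(rnorm(30, mean = 178, sd = 7)))
hist(sample_means)   # the sampling distribution of the mean for samples of size 30
sd(sample_means)     # close to the theoretical standard error ...
7 / sqrt(30)         # ... sigma / sqrt(n), about 1.28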
{ "source": [ "https://stats.stackexchange.com/questions/269", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/62/" ] }
283
What is meant when we say we have a saturated model?
A saturated model is one in which there are as many estimated parameters as data points. By definition, this will lead to a perfect fit, but will be of little use statistically, as you have no data left to estimate variance. For example, if you have 6 data points and fit a 5th-order polynomial to the data, you would have a saturated model (one parameter for each of the 5 powers of your independent variable plus one for the constant term).
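A quick R illustration of the 6-point example (the data values are arbitrary):

set.seed(1)
x <- 1:6
y <- rnorm(6)
fit <- lm(y ~ poly(x, 5))   # six estimated parameters for six data points
round(resid(fit), 10)       # residuals are (numerically) zero: a perfect fit
summary(fit)                # no residual degrees of freedom left to estimate variance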
{ "source": [ "https://stats.stackexchange.com/questions/283", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ] }
288
Suppose that I culture cancer cells in $n$ different dishes $g_1, g_2, \ldots, g_n$ and observe the number of cells $n_i$ in each dish that look different from normal. The total number of cells in dish $g_i$ is $t_i$. There are individual differences between individual cells, but also differences between the populations in different dishes, because each dish has a slightly different temperature, amount of liquid, and so on. I model this as a beta-binomial distribution: $n_i \sim \text{Binomial}(p_i, t_i)$ where $p_i \sim \text{Beta}(\alpha, \beta)$. Given a number of observations of $n_i$ and $t_i$, how can I estimate $\alpha$ and $\beta$?
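One standard route, sketched here purely as an illustration (the code and the made-up dish counts are not from the original thread), is to maximise the beta-binomial likelihood numerically:

# negative log-likelihood of the beta-binomial, with alpha and beta on the log scale
negloglik <- function(par, n, t) {
  a <- exp(par[1]); b <- exp(par[2])
  -sum(lchoose(t, n) + lbeta(n + a, t - n + b) - lbeta(a, b))
}
n_i <- c(12, 5, 20, 8, 15)       # abnormal-looking cells per dish (made-up numbers)
t_i <- c(100, 80, 120, 90, 110)  # total cells per dish (made-up numbers)
fit <- optim(c(0, 0), negloglik, n = n_i, t = t_i)
exp(fit$par)                     # estimates of alpha and beta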
{ "source": [ "https://stats.stackexchange.com/questions/288", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/220/" ] }
298
Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else?
I always hesitate to jump into a thread with as many excellent responses as this, but it strikes me that few of the answers provide any reason to prefer the logarithm to some other transformation that "squashes" the data, such as a root or reciprocal. Before getting to that, let's recapitulate the wisdom in the existing answers in a more general way. Some non-linear re-expression of the dependent variable is indicated when any of the following apply: The residuals have a skewed distribution. The purpose of a transformation is to obtain residuals that are approximately symmetrically distributed (about zero, of course). The spread of the residuals changes systematically with the values of the dependent variable ("heteroscedasticity"). The purpose of the transformation is to remove that systematic change in spread, achieving approximate "homoscedasticity." To linearize a relationship. When scientific theory indicates. For example, chemistry often suggests expressing concentrations as logarithms (giving activities or even the well-known pH). When a more nebulous statistical theory suggests the residuals reflect "random errors" that do not accumulate additively. To simplify a model. For example, sometimes a logarithm can simplify the number and complexity of "interaction" terms. (These indications can conflict with one another; in such cases, judgment is needed.) So, when is a logarithm specifically indicated instead of some other transformation? The residuals have a "strongly" positively skewed distribution. In his book on EDA, John Tukey provides quantitative ways to estimate the transformation (within the family of Box-Cox, or power, transformations) based on rank statistics of the residuals. It really comes down to the fact that if taking the log symmetrizes the residuals, it was probably the right form of re-expression; otherwise, some other re-expression is needed. When the SD of the residuals is directly proportional to the fitted values (and not to some power of the fitted values). When the relationship is close to exponential. When residuals are believed to reflect multiplicatively accumulating errors. You really want a model in which marginal changes in the explanatory variables are interpreted in terms of multiplicative (percentage) changes in the dependent variable. Finally, some non - reasons to use a re-expression : Making outliers not look like outliers. An outlier is a datum that does not fit some parsimonious, relatively simple description of the data. Changing one's description in order to make outliers look better is usually an incorrect reversal of priorities: first obtain a scientifically valid, statistically good description of the data and then explore any outliers. Don't let the occasional outlier determine how to describe the rest of the data! Because the software automatically did it. (Enough said!) Because all the data are positive. (Positivity often implies positive skewness, but it does not have to. Furthermore, other transformations can work better. For example, a root often works best with counted data.) To make "bad" data (perhaps of low quality) appear well behaved. To be able to plot the data. (If a transformation is needed to be able to plot the data, it's probably needed for one or more good reasons already mentioned. If the only reason for the transformation truly is for plotting, go ahead and do it--but only to plot the data. Leave the data untransformed for analysis.)
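As a small illustration of the heteroscedasticity and multiplicative-error indications above (simulated data, chosen so that the log is the "right" re-expression):

set.seed(1)
x <- runif(200, 1, 10)
y <- exp(1 + 0.5 * x + rnorm(200, sd = 0.4))   # errors accumulate multiplicatively
par(mfrow = c(1, 2))
plot(fitted(lm(y ~ x)), resid(lm(y ~ x)), main = "raw y: spread grows with fit")
plot(fitted(lm(log(y) ~ x)), resid(lm(log(y) ~ x)), main = "log y: roughly constant spread")
# MASS::boxcox(lm(y ~ x)) should put the profile peak near lambda = 0, i.e. the log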
{ "source": [ "https://stats.stackexchange.com/questions/298", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/125/" ] }
305
It seems like when the assumption of homogeneity of variance is met that the results from a Welch adjusted t-test and a standard t-test are approximately the same. Why not simply always use the Welch adjusted t?
I would like to oppose the other two answers based on a paper (in German) by Kubinger, Rasch and Moder (2009) . They argue, based on "extensive" simulations from distributions either meeting or not meeting the assumptions imposed by a t-test (normality and homogeneity of variance), that the Welch test performs equally well when the assumptions are met (i.e., basically the same probability of committing alpha and beta errors) but outperforms the t-test if the assumptions are not met, especially in terms of power. Therefore, they recommend always using the Welch test if the sample size exceeds 30. As a meta-comment: For people interested in statistics (like me and probably most others here) an argument based on data (as mine is) should count at least as much as arguments based solely on theoretical grounds (as the others here). Update: After thinking about this topic again, I found two further recommendations, of which the newer one supports my point. Look at the original papers (which are both, at least for me, freely available) for the arguments that led to these recommendations. The first recommendation comes from Graeme D. Ruxton in 2006: " If you want to compare the central tendency of 2 populations based on samples of unrelated data, then the unequal variance t-test should always be used in preference to the Student's t-test or Mann–Whitney U test. " In: Ruxton, G.D., 2006. The unequal variance t-test is an underused alternative to Student’s t-test and the Mann–Whitney U test . Behav. Ecol . 17, 688–690. The second (older) recommendation is from Coombs et al. (1996, p. 148): " In summary, the independent samples t test is generally acceptable in terms of controlling Type I error rates provided there are sufficiently large equal-sized samples, even when the equal population variance assumption is violated. For unequal-sized samples, however, an alternative that does not assume equal population variances is preferable. Use the James second-order test when distributions are either short-tailed symmetric or normal. Promising alternatives include the Wilcox H and Yuen trimmed means tests, which provide broader control of Type I error rates than either the Welch test or the James test and have greater power when data are long-tailed." (emphasis added) In: Coombs WT, Algina J, Oltman D. 1996. Univariate and multivariate omnibus hypothesis tests selected to control type I error rates when population variances are not necessarily equal . Rev Educ Res 66:137–79.
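For what it's worth, R's t.test() already follows this recommendation: the Welch correction is the default and you must ask explicitly for the classical equal-variance test (the simulated samples below are arbitrary):

set.seed(1)
x <- rnorm(40, mean = 0, sd = 1)
y <- rnorm(60, mean = 0.5, sd = 3)     # unequal variances, unequal sample sizes
t.test(x, y)                           # Welch: var.equal = FALSE is the default
t.test(x, y, var.equal = TRUE)         # classical Student t-test, for comparison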
{ "source": [ "https://stats.stackexchange.com/questions/305", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ] }
362
What is the difference between the Shapiro–Wilk test of normality and the Kolmogorov–Smirnov test of normality? When will results from these two methods differ?
You can't really even compare the two since the Kolmogorov-Smirnov is for a completely specified distribution (so if you're testing normality, you must specify the mean and variance; they can't be estimated from the data*), while the Shapiro-Wilk is for normality, with unspecified mean and variance. * you also can't standardize by using estimated parameters and test for standard normal; that's actually the same thing. One way to compare would be to supplement the Shapiro-Wilk with a test for specified mean and variance in a normal (combining the tests in some manner), or by having the KS tables adjusted for the parameter estimation (but then it's no longer distribution-free). There is such a test (equivalent to the Kolmogorov-Smirnov with estimated parameters) - the Lilliefors test; the normality-test version could be validly compared to the Shapiro-Wilk (and will generally have lower power). More competitive is the Anderson-Darling test (which must also be adjusted for parameter estimation for a comparison to be valid). As for what they test - the KS test (and the Lilliefors) looks at the largest difference between the empirical CDF and the specified distribution, while the Shapiro Wilk effectively compares two estimates of variance; the closely related Shapiro-Francia can be regarded as a monotonic function of the squared correlation in a Q-Q plot; if I recall correctly, the Shapiro-Wilk also takes into account covariances between the order statistics. Edited to add: While the Shapiro-Wilk nearly always beats the Lilliefors test on alternatives of interest, an example where it doesn't is the $t_{30}$ in medium-large samples ( $n>60$ -ish). There the Lilliefors has higher power. [It should be kept in mind that there are many more tests for normality that are available than these.]
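A small R sketch of the distinction (the simulated sample is arbitrary): shapiro.test() needs no parameters, whereas ks.test() against "pnorm" is only valid when the mean and sd are supplied a priori rather than estimated from the same data:

set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)
shapiro.test(x)                       # composite null: normal, mean and variance unspecified
ks.test(x, "pnorm", 5, 2)             # valid: parameters specified a priori
ks.test(x, "pnorm", mean(x), sd(x))   # tempting but invalid: parameters estimated from x
# nortest::lillie.test(x) is the KS variant adjusted for estimated parameters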
{ "source": [ "https://stats.stackexchange.com/questions/362", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ] }
396
I usually make my own idiosyncratic choices when preparing plots. However, I wonder if there are any best practices for generating plots. Note: Rob's comment to an answer to this question is very relevant here.
The Tufte principles are very good practices when preparing plots. See also his book Beautiful Evidence The principles include: Keep a high data-ink ratio Remove chart junk Give graphical element multiple functions Keep in mind the data density The term to search for is Information Visualization
{ "source": [ "https://stats.stackexchange.com/questions/396", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
414
What is a good introduction to statistics for a mathematician who is already well-versed in probability? I have two distinct motivations for asking, which may well lead to different suggestions: I'd like to better understand the statistics motivation behind many problems considered by probabilists. I'd like to know how to better interpret the results of Monte Carlo simulations which I sometimes do to form mathematical conjectures. I'm open to the possibility that the best way to go is not to look for something like "Statistics for Probabilists" and just go to a more introductory source.
As you said, it's not necessarily the case that a mathematician may want a rigorous book. Maybe the goal is to get some intuition of the concepts quickly, and then fill in the details. I recommend two books from CMU professors, both published by Springer: " All of Statistics " by Larry Wasserman is quick and informal. " Theory of Statistics " by Mark Schervish is rigorous and relatively complete. It has decision theory, finite sample, some asymptotics and sequential analysis. Added 7/28/10: There is one additional reference that is orthogonal to the other two: very rigorous, focused on learning theory, and short. It's by Smale (Steven Smale!) and Cucker, " On the Mathematical Foundations of Learning ". Not easy read, but the best crash course on the theory.
{ "source": [ "https://stats.stackexchange.com/questions/414", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/89/" ] }
423
Data analysis cartoons can be useful for many reasons: they help communicate; they show that quantitative people have a sense of humor too; they can instigate good teaching moments; and they can help us remember important principles and lessons. This is one of my favorites: As a service to those who value this kind of resource, please share your favorite data analysis cartoon. They probably don't need any explanation (if they do, they're probably not good cartoons!) As always, one entry per answer . (This is in the vein of the Stack Overflow question What’s your favorite “programmer” cartoon? .) P.S. Do not hotlink the cartoon without the site's permission please.
Was XKCD, so time for Dilbert: Source: http://dilbert.com/strip/2001-10-25
{ "source": [ "https://stats.stackexchange.com/questions/423", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
486
I have calculated AIC and AICc to compare two general linear mixed models; The AICs are positive with model 1 having a lower AIC than model 2. However, the values for AICc are both negative (model 1 is still < model 2). Is it valid to use and compare negative AICc values?
All that matters is the difference between two AIC (or, better, AICc) values, representing the fit to two models. The actual value of the AIC (or AICc), and whether it is positive or negative, means nothing. If you simply changed the units the data are expressed in, the AIC (and AICc) would change dramatically. But the difference between the AIC of the two alternative models would not change at all. Bottom line: Ignore the actual value of AIC (or AICc) and whether it is positive or negative. Ignore also the ratio of two AIC (or AICc) values. Pay attention only to the difference.
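A quick R illustration of why only the difference matters: rescaling the response (e.g. changing its units) shifts both AIC values by the same amount and leaves the difference untouched (simulated data, just for the example):

set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- 1 + 2 * x1 + rnorm(100)
m1 <- lm(y ~ x1); m2 <- lm(y ~ x1 + x2)
AIC(m2) - AIC(m1)                       # only this difference is meaningful
m1k <- lm(I(y * 1000) ~ x1); m2k <- lm(I(y * 1000) ~ x1 + x2)
AIC(m1k); AIC(m2k)                      # both values jump when the units change ...
AIC(m2k) - AIC(m1k)                     # ... but the difference does not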
{ "source": [ "https://stats.stackexchange.com/questions/486", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/266/" ] }
507
What is your preferred method of checking for convergence when using Markov chain Monte Carlo for Bayesian inference, and why?
I use the Gelman-Rubin convergence diagnostic as well. A potential problem with Gelman-Rubin is that it may mis-diagnose convergence if the shrink factor happens to be close to 1 by chance, in which case you can use a Gelman-Rubin-Brooks plot. See the "General Methods for Monitoring Convergence of Iterative Simulations" paper for details. This is supported in the coda package in R (for "Output analysis and diagnostics for Markov Chain Monte Carlo simulations"). coda also includes other functions (such as Geweke's convergence diagnostic). You can also have a look at "boa: An R Package for MCMC Output Convergence Assessment and Posterior Inference".
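A minimal coda sketch (the two "chains" below are just iid draws standing in for real MCMC output, so the shrink factors should come out close to 1):

library(coda)
set.seed(1)
chain1 <- mcmc(matrix(rnorm(2000), ncol = 2, dimnames = list(NULL, c("a", "b"))))
chain2 <- mcmc(matrix(rnorm(2000), ncol = 2, dimnames = list(NULL, c("a", "b"))))
chains <- mcmc.list(chain1, chain2)
gelman.diag(chains)    # potential scale reduction (shrink) factors, ~1 here
gelman.plot(chains)    # the Gelman-Rubin-Brooks plot mentioned above
geweke.diag(chains)    # Geweke's diagnostic, also in coda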
{ "source": [ "https://stats.stackexchange.com/questions/507", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ] }
517
In the context of machine learning, what is the difference between unsupervised learning, supervised learning, and semi-supervised learning? And what are some of the main algorithmic approaches to look at?
Generally, the problems of machine learning may be considered variations on function estimation for classification, prediction or modeling. In supervised learning one is furnished with inputs ( $x_1$ , $x_2$ , ...) and outputs ( $y_1$ , $y_2$ , ...) and is challenged with finding a function that approximates this behavior in a generalizable fashion. The output could be a class label (in classification) or a real number (in regression) -- these are the "supervision" in supervised learning. In the case of unsupervised learning , in the base case, you receive inputs $x_1$ , $x_2$ , ..., but neither target outputs nor rewards from the environment are provided. Based on the problem (classify, or predict) and your background knowledge of the space sampled, you may use various methods: density estimation (estimating some underlying PDF for prediction), k-means clustering (classifying unlabeled real-valued data), k-modes clustering (classifying unlabeled categorical data), etc. Semi-supervised learning involves function estimation on labeled and unlabeled data. This approach is motivated by the fact that labeled data is often costly to generate, whereas unlabeled data is generally not. The challenge here mostly involves the technical question of how to treat data mixed in this fashion. See this Semi-Supervised Learning Literature Survey for more details on semi-supervised learning methods. In addition to these kinds of learning, there are others, such as reinforcement learning , whereby the learning method interacts with its environment by producing actions $a_1$ , $a_2$ , ... that produce rewards or punishments $r_1$ , $r_2$ , ...
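A tiny R contrast of the supervised and unsupervised settings on the iris data (purely illustrative): the clustering algorithm never sees the labels, while the supervised classifier is trained on them:

km <- kmeans(iris[, 1:4], centers = 3, nstart = 20)   # unsupervised: labels withheld
table(cluster = km$cluster, species = iris$Species)   # clusters vs the hidden labels
library(MASS)
fit <- lda(Species ~ ., data = iris)                  # supervised: labels are the training signal
table(predicted = predict(fit)$class, actual = iris$Species)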
{ "source": [ "https://stats.stackexchange.com/questions/517", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68/" ] }
534
We all know the mantra "correlation does not imply causation" which is drummed into all first-year statistics students. There are some nice examples here to illustrate the idea. But sometimes correlation does imply causation. The following example is taken from this Wikipedia page : For example, one could run an experiment on identical twins who were known to consistently get the same grades on their tests. One twin is sent to study for six hours while the other is sent to the amusement park. If their test scores suddenly diverged by a large degree, this would be strong evidence that studying (or going to the amusement park) had a causal effect on test scores. In this case, correlation between studying and test scores would almost certainly imply causation. Are there other situations where correlation implies causation?
Correlation is not sufficient for causation. One can get around the Wikipedia example by imagining that those twins always cheated in their tests by having a device that gives them the answers. The twin that goes to the amusement park loses the device, hence the low grade. A good way to get this stuff straight is to think of the structure of Bayesian network that may be generating the measured quantities, as done by Pearl in his book Causality . His basic point is to look for hidden variables. If there is a hidden variable that happens not to vary in the measured sample, then the correlation would not imply causation. Expose all hidden variables and you have causation.
{ "source": [ "https://stats.stackexchange.com/questions/534", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ] }
539
In answering this question on discrete and continuous data I glibly asserted that it rarely makes sense to treat categorical data as continuous. On the face of it that seems self-evident, but intuition is often a poor guide for statistics, or at least mine is. So now I'm wondering: is it true? Or are there established analyses for which a transform from categorical data to some continuum is actually useful? Would it make a difference if the data were ordinal?
I will assume that a "categorical" variable actually stands for an ordinal variable; otherwise it doesn't make much sense to treat it as a continuous one, unless it's a binary variable (coded 0/1) as pointed by @Rob. Then, I would say that the problem is not that much the way we treat the variable, although many models for categorical data analysis have been developed so far--see e.g., The analysis of ordered categorical data: An overview and a survey of recent developments from Liu and Agresti--, than the underlying measurement scale we assume. My response will focus on this second point, although I will first briefly discuss the assignment of numerical scores to variable categories or levels. By using a simple numerical recoding of an ordinal variable, you are assuming that the variable has interval properties (in the sense of the classification given by Stevens, 1946). From a measurement theory perspective (in psychology), this may often be a too strong assumption, but for basic study (i.e. where a single item is used to express one's opinion about a daily activity with clear wording) any monotone scores should give comparable results. Cochran (1954) already pointed that any set of scores gives a valid test, provided that they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used. (p. 436) (Many thanks to @whuber for reminding me about this throughout one of his comments, which led me to re-read Agresti's book, from which this citation comes.) Actually, several tests treat implicitly such variables as interval scales: for example, the $M^2$ statistic for testing a linear trend (as an alternative to simple independence) is based on a correlational approach ( $M^2=(n-1)r^2$ , Agresti, 2002, p. 87). Well, you can also decide to recode your variable on an irregular range, or aggregate some of its levels, but in this case strong imbalance between recoded categories may distort statistical tests, e.g. the aforementioned trend test. A nice alternative for assigning distance between categories was already proposed by @Jeromy, namely optimal scaling. Now, let's discuss the second point I made, that of the underlying measurement model. I'm always hesitating about adding the "psychometrics" tag when I see this kind of question, because the construction and analysis of measurement scales come under Psychometric Theory (Nunnally and Bernstein, 1994, for a neat overview). I will not dwell on all the models that are actually headed under the Item Response Theory , and I kindly refer the interested reader to I. Partchev's tutorial, A visual guide to item response theory , for a gentle introduction to IRT, and to references (5-8) listed at the end for possible IRT taxonomies. Very briefly, the idea is that rather than assigning arbitrary distances between variable categories, you assume a latent scale and estimate their location on that continuum, together with individuals' ability or liability. A simple example is worth much mathematical notation, so let's consider the following item (coming from the EORTC QLQ-C30 health-related quality of life questionnaire): Did you worry? which is coded on a four-point scale, ranging from "Not at all" to "Very much". 
Raw scores are computed by assigning a score of 1 to 4. Scores on items belonging to the same scale can then be added together to yield a so-called scale score, which denotes one's rank on the underlying construct (here, a mental health component). Such summated scale scores are very practical because of scoring easiness (for the practitioner or nurse), but they are nothing more than a discrete (ordered) scale. We can also consider that the probability of endorsing a given response category obeys some kind of a logistic model, as described in I. Partchev's tutorial, referred above. Basically, the idea is that of a kind of threshold model (which lead to equivalent formulation in terms of the proportional or cumulative odds models) and we model the odds of being in one response category rather the preceding one or the odds of scoring above a certain category, conditional on subjects' location on the latent trait. In addition, we may impose that response categories are equally spaced on the latent scale (this is the Rating Scale model)--which is the way we do by assigning regularly spaced numerical scores-- or not (this is the Partial Credit model). Clearly, we are not adding very much to Classical Test Theory, where ordinal variable are treated as numerical ones. However, we introduce a probabilistic model, where we assume a continuous scale (with interval properties) and where specific errors of measurement can be accounted for, and we can plug these factorial scores in any regression model. References S S Stevens. On the theory of scales of measurement. Science , 103 : 677-680, 1946. W G Cochran. Some methods of strengthening the common $\chi^2$ tests. Biometrics , 10 : 417-451, 1954. J Nunnally and I Bernstein. Psychometric Theory . McGraw-Hill, 1994 Alan Agresti. Categorical Data Analysis . Wiley, 1990. C R Rao and S Sinharay, editors. Handbook of Statistics, Vol. 26: Psychometrics . Elsevier Science B.V., The Netherlands, 2007. A Boomsma, M A J van Duijn, and T A B Snijders. Essays on Item Response Theory . Springer, 2001. D Thissen and L Steinberg. A taxonomy of item response models. Psychometrika , 51(4) : 567–577, 1986. P Mair and R Hatzinger. Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R . Journal of Statistical Software , 20(9) , 2007.
{ "source": [ "https://stats.stackexchange.com/questions/539", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174/" ] }
555
ANOVA is equivalent to linear regression with the use of suitable dummy variables. The conclusions remain the same irrespective of whether you use ANOVA or linear regression. In light of their equivalence, is there any reason why ANOVA is used instead of linear regression? Note: I am particularly interested in hearing about technical reasons for the use of ANOVA instead of linear regression. Edit Here is one example using one-way ANOVA. Suppose you want to know if the average height of males and females is the same. To test your hypothesis you would collect data from a random sample of males and females (say 30 each) and perform the ANOVA analysis (i.e., sum of squares for sex and error) to decide whether an effect exists. You could also use linear regression to test for this as follows: Define: $\text{Sex} = 1$ if respondent is a male and $0$ otherwise. $$ \text{Height} = \text{Intercept} + \beta * \text{Sex} + \text{error} $$ where: $\text{error}\sim\mathcal N(0,\sigma^2)$ Then a test of whether $\beta = 0$ is an equivalent test for your hypothesis.
As an economist, I was taught, and usually understand, the analysis of variance (ANOVA) in relation to linear regression (e.g. in Arthur Goldberger's A Course in Econometrics ). Economists/Econometricians typically view ANOVA as uninteresting and prefer to move straight to regression models. From the perspective of linear (or even generalised linear) models, ANOVA assigns coefficients into batches, with each batch corresponding to a "source of variation" in ANOVA terminology. Generally you can replicate the inferences you would obtain from ANOVA using regression, but not always with OLS regression. Multilevel models are needed for analysing hierarchical data structures such as "split-plot designs," where between-group effects are compared to group-level errors, and within-group effects are compared to data-level errors. Gelman's paper [1] goes into great detail about this problem and effectively argues that ANOVA is an important statistical tool that should still be taught for its own sake. In particular Gelman argues that ANOVA is a way of understanding and structuring multilevel models. Therefore ANOVA is not an alternative to regression but a tool for summarizing complex high-dimensional inferences and for exploratory data analysis. Gelman is a well-respected statistician and some credence should be given to his view. However, almost all of the empirical work that I do would be equally well served by linear regression, and so I firmly fall into the camp of viewing it as a little bit pointless. Some disciplines with complex study designs (e.g. psychology) may find ANOVA useful. [1] Gelman, A. (2005). Analysis of variance: why it is more important than ever (with discussion). Annals of Statistics 33, 1–53. doi:10.1214/009053604000001048
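The equivalence in the question's height example is easy to check in R (simulated heights with invented numbers): aov() and lm() fit the same model, and the one-way ANOVA F statistic is the square of the t statistic for the sex dummy:

set.seed(1)
sex    <- rep(c(0, 1), each = 30)                 # 0 = female, 1 = male
height <- 165 + 12 * sex + rnorm(60, sd = 7)
summary(lm(height ~ sex))            # t-test on the coefficient of sex
anova(lm(height ~ sex))              # one-way ANOVA table: F equals that t squared
summary(aov(height ~ factor(sex)))   # same F and p-value via the ANOVA interface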
{ "source": [ "https://stats.stackexchange.com/questions/555", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
563
Instrumental variables are becoming increasingly common in applied economics and statistics. For the uninitiated, can we have some non-technical answers to the following questions: What is an instrumental variable? When would one want to employ an instrumental variable? How does one find or choose an instrumental variable?
[The following perhaps seems a little technical because of the use of equations but it builds mainly on the arrow charts to provide the intuition which only requires very basic understanding of OLS - so don't be repulsed.] Suppose you want to estimate the causal effect of $x_i$ on $y_i$ given by the estimated coefficient for $\beta$, but for some reason there is a correlation between your explanatory variable and the error term: $$\begin{matrix}y_i &=& \alpha &+& \beta x_i &+& \epsilon_i & \\ & && & & \hspace{-1cm}\nwarrow & \hspace{-0.8cm} \nearrow \\ & & & & & corr & \end{matrix}$$ This might have happened because we forgot to include an important variable that also correlates with $x_i$. This problem is known as omitted variable bias and then your $\widehat{\beta}$ will not give you the causal effect (see here for the details). This is a case when you would want to use an instrument because only then can you find the true causal effect. An instrument is a new variable $z_i$ which is uncorrelated with $\epsilon_i$, but that correlates well with $x_i$ and which only influences $y_i$ through $x_i$ - so our instrument is what is called "exogenous". It's like in this chart here: $$\begin{matrix} z_i & \rightarrow & x_i & \rightarrow & y_i \newline & & \uparrow & \nearrow & \newline & & \epsilon_i & \end{matrix}$$ So how do we use this new variable? Maybe you remember the ANOVA type idea behind regression where you split the total variation of a dependent variable into an explained and an unexplained component. For example, if you regress your $x_i$ on the instrument, $$\underbrace{x_i}_{\text{total variation}} = \underbrace{a \quad + \quad \pi z_i}_{\text{explained variation}} \quad + \underbrace{\eta_i}_{\text{unexplained variation}}$$ then you know that the explained variation here is exogenous to our original equation because it depends on the exogenous variable $z_i$ only. So in this sense, we split our $x_i$ up into a part that we can claim is certainly exogenous (that's the part that depends on $z_i$) and some unexplained part $\eta_i$ that keeps all the bad variation which correlates with $\epsilon_i$. Now we take the exogenous part of this regression, call it $\widehat{x_i}$, $$x_i \quad = \underbrace{a \quad + \quad \pi z_i}_{\text{good variation} \: = \: \widehat{x}_i } \quad + \underbrace{\eta_i}_{\text{bad variation}}$$ and put this into our original regression: $$y_i = \alpha + \beta \widehat{x}_i + \epsilon_i$$ Now since $\widehat{x}_i$ is not correlated anymore with $\epsilon_i$ (remember, we "filtered out" this part from $x_i$ and left it in $\eta_i$), we can consistently estimate our $\beta$ because the instrument has helped us to break the correlation between the explanatory variably and the error. This was one way how you can apply instrumental variables. This method is actually called 2-stage least squares, where our regression of $x_i$ on $z_i$ is called the "first stage" and the last equation here is called the "second stage". In terms of our original picture (I leave out the $\epsilon_i$ to not make a mess but remember that it is there!), instead of taking the direct but flawed route between $x_i$ to $y_i$ we took an intermediate step via $\widehat{x}_i$ $$\begin{matrix} & & & & & \widehat{x}_i \newline & & & & \nearrow & \downarrow \newline & z_i & \rightarrow & x_i & \rightarrow & y_i \end{matrix}$$ Thanks to this slight diversion of our road to the causal effect we were able to consistently estimate $\beta$ by using the instrument. 
The cost of this diversion is that instrumental variables models are generally less precise, meaning that they tend to have larger standard errors. How do we find instruments? That's not an easy question because you need to make a good case as to why your $z_i$ would not be correlated with $\epsilon_i$ - this cannot be tested formally because the true error is unobserved. The main challenge is therefore to come up with something that can be plausibly seen as exogenous such as natural disasters, policy changes, or sometimes you can even run a randomized experiment. The other answers had some very good examples for this so I won't repeat this part.
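Here is a small simulation of the two-stage procedure described above (all names and numbers invented for the example); in practice one would use a dedicated 2SLS routine such as ivreg in the AER package, because the manual second stage below gives correct point estimates but incorrect standard errors:

set.seed(1)
n <- 5000
z <- rnorm(n)                         # the instrument
u <- rnorm(n)                         # unobserved confounder (ends up in the error)
x <- 0.8 * z + 0.8 * u + rnorm(n)     # x is endogenous: correlated with u
y <- 1 + 2 * x + 2 * u + rnorm(n)     # true causal effect of x on y is 2
coef(lm(y ~ x))                       # OLS is biased upwards here
xhat <- fitted(lm(x ~ z))             # first stage: keep only the exogenous variation
coef(lm(y ~ xhat))                    # second stage: close to the true value of 2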
{ "source": [ "https://stats.stackexchange.com/questions/563", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ] }
564
Difference in differences has long been popular as a non-experimental tool, especially in economics. Can somebody please provide a clear and non-technical answer to the following questions about difference-in-differences. What is a difference-in-difference estimator? Why is a difference-in-difference estimator any use? Can we actually trust difference-in-difference estimates?
What is a difference in differences estimator Difference in differences (DiD) is a tool to estimate treatment effects comparing the pre- and post-treatment differences in the outcome of a treatment and a control group. In general, we are interested in estimating the effect of a treatment $D_i$ (e.g. union status, medication, etc.) on an outcome $Y_i$ (e.g. wages, health, etc.) as in $$Y_{it} = \alpha_i + \lambda_t + \rho D_{it} + X'_{it}\beta + \epsilon_{it}$$ where $\alpha_i$ are individual fixed effects (characteristics of individuals that do not change over time), $\lambda_t$ are time fixed effects, $X_{it}$ are time-varying covariates like individuals' age, and $\epsilon_{it}$ is an error term. Individuals and time are indexed by $i$ and $t$, respectively. If there is a correlation between the fixed effects and $D_{it}$ then estimating this regression via OLS will be biased given that the fixed effects are not controlled for. This is the typical omitted variable bias . To see the effect of a treatment we would like to know the difference between a person in a world in which she received the treatment and one in which she does not. Of course, only one of these is ever observable in practice. Therefore we look for people with the same pre-treatment trends in the outcome. Suppose we have two periods $t = 1, 2$ and two groups $s = A,B$. Then, under the assumption that the trends in the treatment and control groups would have continued the same way as before in the absence of treatment, we can estimate the treatment effect as $$\rho = (E[Y_{ist}|s=A,t=2] - E[Y_{ist}|s=A,t=1]) - (E[Y_{ist}|s=B,t=2] - E[Y_{ist}|s=B,t=1])$$ Graphically this would look something like this: You can simply calculate these means by hand, i.e. obtain the mean outcome of group $A$ in both periods and take their difference. Then obtain the mean outcome of group $B$ in both periods and take their difference. Then take the difference in the differences and that's the treatment effect. However, it is more convenient to do this in a regression framework because this allows you to control for covariates to obtain standard errors for the treatment effect to see if it is significant To do this, you can follow either of two equivalent strategies. Generate a control group dummy $\text{treat}_i$ which is equal to 1 if a person is in group $A$ and 0 otherwise, generate a time dummy $\text{time}_t$ which is equal to 1 if $t=2$ and 0 otherwise, and then regress $$Y_{it} = \beta_1 + \beta_2 (\text{treat}_i) + \beta_3 (\text{time}_t) + \rho (\text{treat}_i \cdot \text{time}_t) + \epsilon_{it}$$ Or you simply generate a dummy $T_{it}$ which equals one if a person is in the treatment group AND the time period is the post-treatment period and is zero otherwise. Then you would regress $$Y_{it} = \beta_1 \gamma_s + \beta_2 \lambda_t + \rho T_{it} + \epsilon_{it}$$ where $\gamma_s$ is again a dummy for the control group and $\lambda_t$ are time dummies. The two regressions give you the same results for two periods and two groups. The second equation is more general though as it easily extends to multiple groups and time periods. In either case, this is how you can estimate the difference in differences parameter in a way such that you can include control variables (I left those out from the above equations to not clutter them up but you can simply include them) and obtain standard errors for inference. Why is the difference in differences estimator useful? 
As stated before, DiD is a method to estimate treatment effects with non-experimental data. That's the most useful feature. DiD is also a version of fixed effects estimation. Whereas the fixed effects model assumes $E(Y_{0it}|i,t) = \alpha_i + \lambda_t$, DiD makes a similar assumption but at the group level, $E(Y_{0it}|s,t) = \gamma_s + \lambda_t$. So the expected value of the outcome here is the sum of a group and a time effect. So what's the difference? For DiD you don't necessarily need panel data as long as your repeated cross sections are drawn from the same aggregate unit $s$. This makes DiD applicable to a wider array of data than the standard fixed effects models that require panel data. Can we trust difference in differences? The most important assumption in DiD is the parallel trends assumption (see the figure above). Never trust a study that does not graphically show these trends! Papers in the 1990s might have gotten away with this but nowadays our understanding of DiD is much better. If there is no convincing graph that shows the parallel trends in the pre-treatment outcomes for the treatment and control groups, be cautious. If the parallel trends assumption holds and we can credibly rule out any other time-variant changes that may confound the treatment, then DiD is a trustworthy method. Another word of caution should be applied when it comes to the treatment of standard errors. With many years of data you need to adjust the standard errors for autocorrelation. In the past, this has been neglected but since Bertrand et al. (2004) "How Much Should We Trust Differences-In-Differences Estimates?" we know that this is an issue. In the paper they provide several remedies for dealing with autocorrelation. The easiest is to cluster on the individual panel identifier which allows for arbitrary correlation of the residuals among individual time series. This corrects for both autocorrelation and heteroscedasticity. For further references see these lecture notes by Waldinger and Pischke .
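A minimal simulated example of the two-group, two-period regression above (numbers invented); the coefficient on the interaction term is the DiD estimate:

set.seed(1)
d <- expand.grid(rep = 1:250, treat = c(0, 1), time = c(0, 1))
d$y <- 2 + 1 * d$treat + 0.5 * d$time + 1.5 * d$treat * d$time + rnorm(nrow(d))
coef(summary(lm(y ~ treat * time, data = d)))   # the 'treat:time' row is the DiD estimate, ~1.5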
{ "source": [ "https://stats.stackexchange.com/questions/564", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/215/" ] }
577
The AIC and BIC are both methods of assessing model fit penalized for the number of estimated parameters. As I understand it, BIC penalizes models more for free parameters than does AIC. Beyond a preference based on the stringency of the criteria, are there any other reasons to prefer AIC over BIC or vice versa?
Your question implies that AIC and BIC try to answer the same question, which is not true. The AIC tries to select the model that most adequately describes an unknown, high-dimensional reality. This means that reality is never in the set of candidate models that are being considered. On the contrary, BIC tries to find the TRUE model among the set of candidates. I find the assumption that reality is instantiated in one of the models that the researchers built along the way quite odd. This is a real issue for BIC. Nevertheless, there are a lot of researchers who say BIC is better than AIC, using model recovery simulations as an argument. These simulations consist of generating data from models A and B, and then fitting both datasets with the two models. Overfitting occurs when the wrong model fits the data better than the generating model. The point of these simulations is to see how well AIC and BIC correct these overfits. Usually, the results point to the fact that AIC is too liberal and still frequently prefers a more complex, wrong model over a simpler, true model. At first glance these simulations seem to be really good arguments, but the problem with them is that they are meaningless for AIC. As I said before, AIC does not consider that any of the candidate models being tested is actually true. According to AIC, all models are approximations to reality, and reality should never have low dimensionality (at least, not lower than some of the candidate models). My recommendation is to use both AIC and BIC. Most of the time they will agree on the preferred model; when they don't, just report it. If you are unhappy with both AIC and BIC and have free time to invest, look up Minimum Description Length (MDL), a totally different approach that overcomes the limitations of AIC and BIC. There are several measures stemming from MDL, like normalized maximum likelihood or the Fisher Information approximation. The problem with MDL is that it is mathematically demanding and/or computationally intensive. Still, if you want to stick to simple solutions, a nice way of assessing model flexibility (especially when the numbers of parameters are equal, rendering AIC and BIC useless) is the parametric bootstrap, which is quite easy to implement. Here is a link to a paper on it. Some people here advocate the use of cross-validation. I personally have used it and don't have anything against it, but the issue with it is that the choice of sample-cutting rule (leave-one-out, K-fold, etc.) is an unprincipled one.
{ "source": [ "https://stats.stackexchange.com/questions/577", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ] }
581
I am currently using Viterbi training for an image segmentation problem. I wanted to know what the advantages/disadvantages are of using the Baum-Welch algorithm instead of Viterbi training.
The Baum-Welch algorithm and the Viterbi algorithm calculate different things. If you know the transition probabilities for the hidden part of your model, and the emission probabilities for the visible outputs of your model, then the Viterbi algorithm gives you the most likely complete sequence of hidden states conditional on both your outputs and your model specification. The Baum-Welch algorithm gives you both the most likely hidden transition probabilities as well as the most likely set of emission probabilities given only the observed states of the model (and, usually, an upper bound on the number of hidden states). You also get the "pointwise" highest likelihood points in the hidden states, which is often slightly different from the single hidden sequence that is overall most likely. If you know your model and just want the latent states, then there is no reason to use the Baum-Welch algorithm. If you don't know your model, then you can't be using the Viterbi algorithm. Edited to add: See Peter Smit's comment; there's some overlap/vagueness in nomenclature. Some poking around led me to a chapter by Luis Javier Rodrıguez and Ines Torres in "Pattern Recognition and Image Analysis" (ISBN 978-3-540-40217-6, pp 845-857) which discusses the speed versus accuracy trade-offs of the two algorithms. Briefly, the Baum-Welch algorithm is essentially the Expectation-Maximization (EM) algorithm applied to an HMM; as a strict EM-type algorithm you're guaranteed to converge to at least a local maximum, and so for unimodal problems find the MLE. It requires two passes over your data for each step, though, and the complexity gets very big in the length of the data and number of training samples. However, you do end up with the full conditional likelihood for your hidden parameters. The Viterbi training algorithm (as opposed to the "Viterbi algorithm") approximates the MLE to achieve a gain in speed at the cost of accuracy. It segments the data and then applies the Viterbi algorithm (as I understood it) to get the most likely state sequence in the segment, then uses that most likely state sequence to re-estimate the hidden parameters. This, unlike the Baum-Welch algorithm, doesn't give the full conditional likelihood of the hidden parameters, and so ends up reducing the accuracy while saving significant (the chapter reports 1 to 2 orders of magnitude) computational time.
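For a hands-on feel, the HMM package in R exposes all three procedures discussed here; the dishonest-casino setup below is only an illustration, and the exact function signatures are an assumption worth checking against the package documentation:

library(HMM)
set.seed(1)
hmm <- initHMM(States  = c("Fair", "Loaded"),
               Symbols = as.character(1:6),
               transProbs    = matrix(c(0.95, 0.05, 0.10, 0.90), nrow = 2, byrow = TRUE),
               emissionProbs = rbind(rep(1/6, 6), c(rep(0.1, 5), 0.5)))
obs <- simHMM(hmm, 300)$observation   # a sequence simulated from the known model
viterbi(hmm, obs)                     # most likely hidden path, model parameters known
baumWelch(hmm, obs)$hmm               # EM (Baum-Welch) re-estimation of the parameters
viterbiTraining(hmm, obs)$hmm         # the faster, approximate Viterbi training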
{ "source": [ "https://stats.stackexchange.com/questions/581", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/99/" ] }
612
I have tried to reproduce some research (using PCA) from SPSS in R. In my experience, principal() function from package psych was the only function that came close (or if my memory serves me right, dead on) to match the output. To match the same results as in SPSS, I had to use parameter principal(..., rotate = "varimax") . I have seen papers talk about how they did PCA, but based on the output of SPSS and use of rotation, it sounds more like Factor analysis. Question: Is PCA, even after rotation (using varimax ), still PCA? I was under the impression that this might be in fact Factor analysis... In case it's not, what details am I missing?
This question is largely about definitions of PCA/FA, so opinions might differ. My opinion is that PCA+varimax should not be called either PCA or FA, but rather explicitly referred to e.g. as "varimax-rotated PCA". I should add that this is quite a confusing topic. In this answer I want to explain what a rotation actually is; this will require some mathematics. A casual reader can skip directly to the illustration. Only then can we discuss whether PCA+rotation should or should not be called "PCA". One reference is Jolliffe's book "Principal Component Analysis", section 11.1 "Rotation of Principal Components", but I find it could be clearer. Let $\mathbf X$ be a $n \times p$ data matrix which we assume is centered. PCA amounts (see my answer here) to a singular-value decomposition: $\mathbf X=\mathbf{USV}^\top$. There are two equivalent but complementary views on this decomposition: a more PCA-style "projection" view and a more FA-style "latent variables" view. According to the PCA-style view, we found a bunch of orthogonal directions $\mathbf V$ (these are eigenvectors of the covariance matrix, also called "principal directions" or "axes"), and "principal components" $\mathbf{US}$ (also called principal component "scores") are projections of the data on these directions. Principal components are uncorrelated, the first one has the maximal possible variance, etc. We can write: $$\mathbf X = \mathbf{US}\cdot \mathbf V^\top = \text{Scores} \cdot \text{Principal directions}.$$ According to the FA-style view, we found some uncorrelated unit-variance "latent factors" that give rise to the observed variables via "loadings". Indeed, $\widetilde{\mathbf U}=\sqrt{n-1}\mathbf{U}$ are standardized principal components (uncorrelated and with unit variance), and if we define loadings as $\mathbf L = \mathbf{VS}/\sqrt{n-1}$, then $$\mathbf X= \sqrt{n-1}\mathbf{U}\cdot (\mathbf{VS}/\sqrt{n-1})^\top =\widetilde{\mathbf U}\cdot \mathbf L^\top = \text{Standardized scores} \cdot \text{Loadings}.$$ (Note that $\mathbf{S}^\top=\mathbf{S}$.) Both views are equivalent. Note that loadings are eigenvectors scaled by the square roots of the respective eigenvalues (the diagonal entries of $\mathbf{S}/\sqrt{n-1}$ are the square roots of the eigenvalues of the covariance matrix). (I should add in brackets that PCA$\ne$FA; FA explicitly aims at finding latent factors that are linearly mapped to the observed variables via loadings; it is more flexible than PCA and yields different loadings. That is why I prefer to call the above "FA-style view on PCA" and not FA, even though some people take it to be one of the FA methods.) Now, what does a rotation do? E.g. an orthogonal rotation, such as varimax. First, it considers only $k<p$ components, i.e.: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_k \mathbf L^\top_k.$$ Then it takes a square orthogonal $k \times k$ matrix $\mathbf T$, and plugs $\mathbf T\mathbf T^\top=\mathbf I$ into this decomposition: $$\mathbf X \approx \mathbf U_k \mathbf S_k \mathbf V_k^\top = \mathbf U_k \mathbf T \mathbf T^\top \mathbf S_k \mathbf V_k^\top = \widetilde{\mathbf U}_\mathrm{rot} \mathbf L^\top_\mathrm{rot},$$ where rotated loadings are given by $\mathbf L_\mathrm{rot} = \mathbf L_k \mathbf T$, and rotated standardized scores are given by $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U}_k \mathbf T$. (The purpose of this is to find $\mathbf T$ such that $\mathbf L_\mathrm{rot}$ becomes as close to being sparse as possible, to facilitate its interpretation.)
Note that what is rotated are: (1) standardized scores, (2) loadings. But not the raw scores and not the principal directions! So the rotation happens in the latent space, not in the original space. This is absolutely crucial. From the FA-style point of view, nothing much happened. (A) The latent factors are still uncorrelated and standardized. (B) They are still mapped to the observed variables via (rotated) loadings. (C) The amount of variance captured by each component/factor is given by the sum of squared values of the corresponding loadings column in $\mathbf L_\mathrm{rot}$. (D) Geometrically, loadings still span the same $k$-dimensional subspace in $\mathbb R^p$ (the subspace spanned by the first $k$ PCA eigenvectors). (E) The approximation to $\mathbf X$ and the reconstruction error did not change at all. (F) The covariance matrix is still approximated equally well: $$\boldsymbol \Sigma \approx \mathbf L_k\mathbf L_k^\top = \mathbf L_\mathrm{rot}\mathbf L_\mathrm{rot}^\top.$$ But the PCA-style point of view has practically collapsed. Rotated loadings do not correspond to orthogonal directions/axes in $\mathbb R^p$ anymore, i.e. columns of $\mathbf L_\mathrm{rot}$ are not orthogonal! Worse, if you [orthogonally] project the data onto the directions given by the rotated loadings, you will get correlated (!) projections and will not be able to recover the scores. [Instead, to compute the standardized scores after rotation, one needs to multiply the data matrix with the pseudo-inverse of the loadings, $\widetilde{\mathbf U}_\mathrm{rot} = \mathbf X (\mathbf L_\mathrm{rot}^+)^\top$. Alternatively, one can simply rotate the original standardized scores with the rotation matrix: $\widetilde{\mathbf U}_\mathrm{rot} = \widetilde{\mathbf U} \mathbf T$.] Also, the rotated components do not successively capture the maximal amount of variance: the variance gets redistributed among the components (even though all $k$ rotated components capture exactly as much variance as all $k$ original principal components). Here is an illustration. The data form a 2D ellipse stretched along the main diagonal. The first principal direction is the main diagonal; the second one is orthogonal to it. PCA loading vectors (eigenvectors scaled by the square roots of the eigenvalues) are shown in red -- pointing in both directions and also stretched by a constant factor for visibility. Then I applied an orthogonal rotation by $30^\circ$ to the loadings. The resulting loading vectors are shown in magenta. Note how they are not orthogonal (!). An FA-style intuition here is as follows: imagine a "latent space" where points fill a small circle (come from a 2D Gaussian with unit variances). This distribution of points is then stretched along the PCA loadings (red) to become the data ellipse that we see in this figure. However, the same distribution of points can be rotated and then stretched along the rotated PCA loadings (magenta) to become the same data ellipse. [To actually see that an orthogonal rotation of loadings is a rotation, one needs to look at a PCA biplot; there the vectors/rays corresponding to the original variables will simply rotate.] Let us summarize. After an orthogonal rotation (such as varimax), the "rotated-principal" axes are not orthogonal, and orthogonal projections on them do not make sense. So one should rather drop this whole axes/projections point of view. It would be weird to still call it PCA (which is all about projections with maximal variance etc.).
From the FA-style point of view, we simply rotated our (standardized and uncorrelated) latent factors, which is a valid operation. There are no "projections" in FA; instead, latent factors generate the observed variables via loadings. This logic is still preserved. However, we started with principal components, which are not actually factors (as PCA is not the same as FA). So it would be weird to call it FA as well. Instead of debating whether one "should" rather call it PCA or FA, I would suggest being meticulous in specifying the exact procedure used: "PCA followed by a varimax rotation". Postscriptum. It is possible to consider an alternative rotation procedure, where $\mathbf{TT}^\top$ is inserted between $\mathbf{US}$ and $\mathbf V^\top$. This would rotate raw scores and eigenvectors (instead of standardized scores and loadings). The biggest problem with this approach is that after such a "rotation", the scores will not be uncorrelated anymore, which is pretty fatal for PCA. One can do it, but it is not how rotations are usually understood and applied.
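A small numerical check of these points -- a sketch in base R with simulated data (varimax() is called with normalize = FALSE so that the rotation is applied to the loadings exactly as in the algebra above):

set.seed(1)
X <- scale(matrix(rnorm(200 * 5), 200, 5) %*% matrix(runif(25), 5, 5), scale = FALSE)

pca <- prcomp(X)
k   <- 2
L   <- pca$rotation[, 1:k] %*% diag(pca$sdev[1:k])  # loadings = eigenvectors * sqrt(eigenvalues)
rot   <- varimax(L, normalize = FALSE)
L_rot <- L %*% rot$rotmat

crossprod(L)      # diagonal: unrotated loading columns are orthogonal
crossprod(L_rot)  # not diagonal: rotated loading columns are no longer orthogonal
max(abs(L %*% t(L) - L_rot %*% t(L_rot)))  # ~0: the rank-k covariance approximation is unchanged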
{ "source": [ "https://stats.stackexchange.com/questions/612", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/144/" ] }
622
I see these terms being used and I keep getting them mixed up. Is there a simple explanation of the differences between them?
The likelihood function usually depends on many parameters. Depending on the application, we are usually interested in only a subset of these parameters. For example, in linear regression, interest typically lies in the slope coefficients and not in the error variance. Denote the parameters we are interested in as $\beta$ and the parameters that are not of primary interest as $\theta$ . The standard way to approach the estimation problem is to maximize the likelihood function so that we obtain estimates of $\beta$ and $\theta$ . However, since the primary interest lies in $\beta$ , the partial, profile and marginal likelihoods offer alternative ways to estimate $\beta$ without estimating $\theta$ . In order to see the difference, denote the standard likelihood by $L(\beta, \theta|\mathrm{data})$ . Maximum Likelihood Find $\beta$ and $\theta$ that maximize $L(\beta, \theta|\mathrm{data})$ . Partial Likelihood If we can write the likelihood function as: $$L(\beta, \theta|\mathrm{data}) = L_1(\beta|\mathrm{data}) L_2(\theta|\mathrm{data})$$ then we simply maximize $L_1(\beta|\mathrm{data})$ . Profile Likelihood If we can express $\theta$ as a function of $\beta$ then we replace $\theta$ with the corresponding function. Say, $\theta = g(\beta)$ . Then, we maximize: $$L(\beta, g(\beta)|\mathrm{data})$$ Marginal Likelihood We integrate $\theta$ out of the likelihood equation by exploiting the fact that we can identify the probability distribution of $\theta$ conditional on $\beta$ .
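As a small concrete sketch in R: the profile log-likelihood of a normal mean $\mu$ (playing the role of $\beta$), with the nuisance variance $\sigma^2$ (playing the role of $\theta$) replaced by its conditional MLE $g(\mu) = \frac{1}{n}\sum_i (x_i - \mu)^2$:

set.seed(1)
x <- rnorm(50, mean = 2, sd = 3)

profile_loglik <- function(mu, x) {
  sigma2_hat <- mean((x - mu)^2)   # MLE of sigma^2 for this fixed mu, i.e. g(mu)
  sum(dnorm(x, mean = mu, sd = sqrt(sigma2_hat), log = TRUE))
}

mu_grid <- seq(0, 4, length.out = 200)
pl <- sapply(mu_grid, profile_loglik, x = x)
mu_grid[which.max(pl)]   # maximiser of the profile likelihood
mean(x)                  # ...coincides with the sample mean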
{ "source": [ "https://stats.stackexchange.com/questions/622", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ] }
631
What is an estimator of standard deviation of standard deviation if normality of data can be assumed?
Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread , the standard deviation of the sample standard deviation, $$ s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$ is $$ {\rm SD}(s) = \sqrt{ E \left( [E(s)- s]^2 \right) } = \sigma \sqrt{ 1 - \frac{2}{n-1} \cdot \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$ where $\Gamma(\cdot)$ is the gamma function , $n$ is the sample size and $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is the sample mean. Since $s$ is a consistent estimator of $\sigma$, this suggests replacing $\sigma$ with $s$ in the equation above to get a consistent estimator of ${\rm SD}(s)$. If it is an unbiased estimator you seek, we see in this thread that $ E(s) = \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $, which, by linearity of expectation, suggests $$ s \cdot \sqrt{ \frac{n-1}{2} } \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } $$ as an unbiased estimator of $\sigma$. All of this together with linearity of expectation gives an unbiased estimator of ${\rm SD}(s)$: $$ s \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } \cdot \sqrt{\frac{n-1}{2} - \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$
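A quick Monte Carlo check of these formulas in R (a sketch; the values of n and sigma are arbitrary):

n <- 10; sigma <- 2
c_n <- sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)  # E(s) = sigma * c_n

sd_s_exact <- sigma * sqrt(1 - c_n^2)   # SD(s) from the formula above

# Plug-in estimator of SD(s) from a single sample, matching the last display:
estimate_sd_s <- function(x) {
  n <- length(x)
  c_n <- sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
  sd(x) / c_n * sqrt(1 - c_n^2)
}

set.seed(1)
s_vals <- replicate(1e5, sd(rnorm(n, 0, sigma)))
c(exact = sd_s_exact, simulated = sd(s_vals))   # the two should agree closely
estimate_sd_s(rnorm(n, 0, sigma))               # estimate from one sample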
{ "source": [ "https://stats.stackexchange.com/questions/631", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
665
What's the difference between probability and statistics, and why are they studied together?
The short answer to this I've heard from Persi Diaconis is the following: The problems considered by probability and statistics are inverse to each other. In probability theory we consider some underlying process which has some randomness or uncertainty modeled by random variables, and we figure out what happens. In statistics we observe something that has happened, and try to figure out what underlying process would explain those observations.
{ "source": [ "https://stats.stackexchange.com/questions/665", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/327/" ] }
672
What are the main ideas, that is, concepts related to Bayes' theorem ? I am not asking for any derivations of complex mathematical notation.
Bayes' theorem is a relatively simple, but fundamental result of probability theory that allows for the calculation of certain conditional probabilities. Conditional probabilities are just those probabilities that reflect the influence of one event on the probability of another. Simply put, in its most famous form, it states that the probability of a hypothesis given new data ( P(H|D) ; called the posterior probability) is equal to the following equation: the probability of the observed data given the hypothesis ( P(D|H) ; called the conditional probability), times the probability of the theory being true prior to new evidence ( P(H) ; called the prior probability of H), divided by the probability of seeing that data, period ( P(D) ; called the marginal probability of D). Formally, the equation looks like this: $$P(H|D) = \frac{P(D|H)\,P(H)}{P(D)}$$ The significance of Bayes' theorem is largely due to its proper use being a point of contention between schools of thought on probability. To a subjective Bayesian (who interprets probability as subjective degrees of belief) Bayes' theorem provides the cornerstone for theory testing, theory selection and other practices, by plugging their subjective probability judgments into the equation, and running with it. To a frequentist (who interprets probability as limiting relative frequencies), this use of Bayes' theorem is an abuse, and they strive to instead use meaningful (non-subjective) priors (as do objective Bayesians under yet another interpretation of probability).
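As a small worked example with made-up numbers (a screening-test setting), plugging into the equation in R:

prior      <- 0.01   # P(H): 1% of the population has the condition
likelihood <- 0.95   # P(D|H): test is positive in 95% of true cases
false_pos  <- 0.05   # P(D|not H): test is positive in 5% of non-cases

marginal  <- likelihood * prior + false_pos * (1 - prior)  # P(D)
posterior <- likelihood * prior / marginal                 # P(H|D)
posterior  # about 0.16: even after a positive result the hypothesis remains unlikely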
{ "source": [ "https://stats.stackexchange.com/questions/672", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/333/" ] }
726
What is your favorite statistical quote? This is community wiki, so please one quote per answer.
All models are wrong, but some are useful. (George E. P. Box) Reference: Box & Draper (1987), Empirical model-building and response surfaces , Wiley, p. 424. Also: G.E.P. Box (1979), "Robustness in the Strategy of Scientific Model Building" in Robustness in Statistics (Launer & Wilkinson eds.), p. 202.
{ "source": [ "https://stats.stackexchange.com/questions/726", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/223/" ] }
798
I'm interested in finding as optimal a method as I can for determining how many bins I should use in a histogram. My data should range from 30 to 350 objects at most, and in particular I'm trying to apply thresholding (like Otsu's method) where "good" objects, which I should have fewer of and which should be more spread out, are separated from "bad" objects, which should be more dense in value. A concrete value would have a score of 1-10 for each object. I'd have 5-10 objects with scores 6-10, and 20-25 objects with scores 1-4. I'd like to find a histogram binning pattern that generally allows something like Otsu's method to threshold off the low-scoring objects. However, in the implementation of Otsu's I've seen, the bin size was 256, and often I have many fewer data points than 256, which to me suggests that 256 is not a good bin number. With so few data, what approaches should I take to calculating the number of bins to use?
The Freedman-Diaconis rule is very robust and works well in practice. The bin-width is set to $h=2\times\text{IQR}\times n^{-1/3}$ . So the number of bins is $(\max-\min)/h$ , where $n$ is the number of observations, max is the maximum value and min is the minimum value. In base R, you can use: hist(x, breaks="FD") For other plotting libraries without this option (e.g., ggplot2 ), you can calculate binwidth as: bw <- 2 * IQR(x) / length(x)^(1/3) ### for example ##### ggplot() + geom_histogram(aes(x), binwidth = bw)
{ "source": [ "https://stats.stackexchange.com/questions/798", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/476/" ] }
866
Say I want to estimate a large number of parameters, and I want to penalize some of them because I believe they should have little effect compared to the others. How do I decide what penalization scheme to use? When is ridge regression more appropriate? When should I use lasso?
Keep in mind that ridge regression can't zero out coefficients; thus, you either end up including all the coefficients in the model, or none of them. In contrast, the LASSO does both parameter shrinkage and variable selection automatically. If some of your covariates are highly correlated, you may want to look at the Elastic Net [3] instead of the LASSO. I'd personally recommend using the Non-Negative Garotte (NNG) [1] as it's consistent in terms of estimation and variable selection [2]. Unlike LASSO and ridge regression, NNG requires an initial estimate that is then shrunk towards the origin. In the original paper, Breiman recommends the least-squares solution for the initial estimate (you may however want to start the search from a ridge regression solution and use something like GCV to select the penalty parameter). In terms of available software, I've implemented the original NNG in MATLAB (based on Breiman's original FORTRAN code). You can download it from: http://www.emakalic.org/blog/wp-content/uploads/2010/04/nngarotte.zip BTW, if you prefer a Bayesian solution, check out [4,5]. References: [1] Breiman, L. Better Subset Regression Using the Nonnegative Garrote Technometrics, 1995, 37, 373-384 [2] Yuan, M. & Lin, Y. On the non-negative garrotte estimator Journal of the Royal Statistical Society (Series B), 2007, 69, 143-161 [3] Zou, H. & Hastie, T. Regularization and variable selection via the elastic net Journal of the Royal Statistical Society (Series B), 2005, 67, 301-320 [4] Park, T. & Casella, G. The Bayesian Lasso Journal of the American Statistical Association, 2008, 103, 681-686 [5] Kyung, M.; Gill, J.; Ghosh, M. & Casella, G. Penalized Regression, Standard Errors, and Bayesian Lassos Bayesian Analysis, 2010, 5, 369-412
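For the ridge/LASSO/elastic-net part of this, a minimal sketch with the glmnet package (assuming its usual matrix interface; the data are simulated):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- drop(x[, 1:3] %*% c(2, -1, 0.5)) + rnorm(100)

ridge   <- cv.glmnet(x, y, alpha = 0)    # alpha = 0: ridge (shrinks, never zeroes out)
lasso   <- cv.glmnet(x, y, alpha = 1)    # alpha = 1: LASSO (shrinks and selects)
elastic <- cv.glmnet(x, y, alpha = 0.5)  # in between: elastic net

coef(lasso, s = "lambda.min")   # note the exact zeros
coef(ridge, s = "lambda.min")   # every coefficient kept, just shrunk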
{ "source": [ "https://stats.stackexchange.com/questions/866", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/455/" ] }
871
I realize this is pedantic and trite, but as a researcher in a field outside of statistics, with limited formal education in statistics, I always wonder if I'm writing "p-value" correctly. Specifically: Is the "p" supposed to be capitalized? Is the "p" supposed to be italicized? (Or in mathematical font, in TeX?) Is there supposed to be a hyphen between "p" and "value"? Alternatively, is there no "proper" way of writing "p-value" at all, and any dolt will understand what I mean if I just place "p" next to "value" in some permutation of these options?
There do not appear to be "standards". For example: The Nature style guide refers to "P value" This APA style guide refers to " p value" The Blood style guide says: Capitalize and italicize the P that introduces a P value Italicize the p that represents the Spearman rank correlation test Wikipedia uses " p -value" (with hyphen and italicized "p") My brief, unscientific survey suggests that the most common combination is lower-case, italicized p without a hyphen.
{ "source": [ "https://stats.stackexchange.com/questions/871", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/520/" ] }
897
What is the difference between offline and online learning ? Is it just a matter of learning over the entire dataset (offline) vs. learning incrementally (one instance at a time)? What are examples of algorithms used in both?
Online learning means that you are doing it as the data comes in. Offline means that you have a static dataset. So, for online learning, you (typically) have more data, but you have time constraints. Another wrinkle that can affect online learning is that your concepts might change through time. Let's say you want to build a classifier to recognize spam. You can acquire a large corpus of e-mail, label it, and train a classifier on it. This would be offline learning. Or, you can take all the e-mail coming into your system, and continuously update your classifier (labels may be a bit tricky). This would be online learning.
{ "source": [ "https://stats.stackexchange.com/questions/897", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/284/" ] }
1,016
I've got a linear regression model with the sample and variable observations and I want to know: whether a specific variable is significant enough to remain included in the model; and whether another variable (with observations) ought to be included in the model. Which statistics can help me out? How can I get them most efficiently?
Statistical significance is not usually a good basis for determining whether a variable should be included in a model. Statistical tests were designed to test hypotheses, not select variables. I know a lot of textbooks discuss variable selection using statistical tests, but this is generally a bad approach. See Harrell's book Regression Modelling Strategies for some of the reasons why. These days, variable selection based on the AIC (or something similar) is usually preferred.
{ "source": [ "https://stats.stackexchange.com/questions/1016", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/614/" ] }
1,112
I want to represent a variable as a number between 0 and 1. The variable is a non-negative integer with no inherent bound. I map 0 to 0 but what can I map to 1 or numbers between 0 and 1? I could use the history of that variable to provide the limits. This would mean I have to restate old statistics if the maximum increases. Do I have to do this or are there other tricks I should know about?
A very common trick to do so (e.g., in connectionist modeling) is to use the hyperbolic tangent tanh as the "squashing function". It automatically fits all numbers into the interval between -1 and 1; since your values are non-negative, this restricts the range to between 0 and 1. In R and MATLAB you get it via tanh() . Another squashing function is the logistic function (thanks to Simon for the name), given by $ f(x) = 1 / (1 + e ^{-x} ) $, which restricts the range to between 0 and 1 (with 0 mapped to .5); so you would have to multiply the result by 2 and subtract 1 to fit your data into the interval between 0 and 1. Here is some simple R code which plots both functions (tanh in red, logistic in blue) so you can see how both squash: x <- seq(0,20,0.001) plot(x,tanh(x),pch=".", col="red", ylab="y") points(x,(1 / (1 + exp(-x)))*2-1, pch=".",col="blue")
{ "source": [ "https://stats.stackexchange.com/questions/1112", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/652/" ] }
1,142
I am working with a large number of time series. These time series are basically network measurements coming every 10 minutes, and some of them are periodic (e.g. the bandwidth), while some others aren't (e.g. the amount of routing traffic). I would like a simple algorithm for doing an online "outlier detection". Basically, I want to keep in memory (or on disk) the whole historical data for each time series, and I want to detect any outlier in a live scenario (each time a new sample is captured). What is the best way to achieve these results? I'm currently using a moving average in order to remove some noise, but then what next? Simple things like standard deviation, MAD, ... against the whole data set don't work well (I can't assume the time series are stationary), and I would like something more "accurate", ideally a black box like: double outlier_detection(double* vector, double value); where vector is the array of doubles containing the historical data, and the return value is the anomaly score for the new sample "value".
Here is a simple R function that will find time series outliers (and optionally show them in a plot). It will handle seasonal and non-seasonal time series. The basic idea is to find robust estimates of the trend and seasonal components and subtract them. Then find outliers in the residuals. The test for residual outliers is the same as for the standard boxplot -- points greater than 1.5IQR above or below the upper and lower quartiles are assumed outliers. The number of IQRs above/below these thresholds is returned as an outlier "score". So the score can be any positive number, and will be zero for non-outliers. I realise you are not implementing this in R, but I often find an R function a good place to start. Then the task is to translate this into whatever language is required.

tsoutliers <- function(x, plot=FALSE)
{
  x <- as.ts(x)
  if(frequency(x) > 1)
    resid <- stl(x, s.window="periodic", robust=TRUE)$time.series[,3]
  else
  {
    tt <- 1:length(x)
    resid <- residuals(loess(x ~ tt))
  }
  resid.q <- quantile(resid, prob=c(0.25, 0.75))
  iqr <- diff(resid.q)
  limits <- resid.q + 1.5*iqr*c(-1, 1)
  score <- abs(pmin((resid - limits[1])/iqr, 0) + pmax((resid - limits[2])/iqr, 0))
  if(plot)
  {
    plot(x)
    x2 <- ts(rep(NA, length(x)))
    x2[score > 0] <- x[score > 0]
    tsp(x2) <- tsp(x)
    points(x2, pch=19, col="red")
    return(invisible(score))
  }
  else
    return(score)
}
{ "source": [ "https://stats.stackexchange.com/questions/1142", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/667/" ] }
1,149
The wiki discusses the problems that arise when multicollinearity is an issue in linear regression. The basic problem is multicollinearity results in unstable parameter estimates which makes it very difficult to assess the effect of independent variables on dependent variables. I understand the technical reasons behind the problems (may not be able to invert $X' X$, ill-conditioned $X' X$ etc) but I am searching for a more intuitive (perhaps geometric?) explanation for this issue. Is there a geometric or perhaps some other form of easily understandable explanation as to why multicollinearity is problematic in the context of linear regression?
Consider the simplest case where $Y$ is regressed against $X$ and $Z$ and where $X$ and $Z$ are highly positively correlated. Then the effect of $X$ on $Y$ is hard to distinguish from the effect of $Z$ on $Y$ because any increase in $X$ tends to be associated with an increase in $Z$. Another way to look at this is to consider the equation. If we write $Y = b_0 + b_1X + b_2Z + e$, then the coefficient $b_1$ is the increase in $Y$ for every unit increase in $X$ while holding $Z$ constant. But in practice, it is often impossible to hold $Z$ constant, and the positive correlation between $X$ and $Z$ means that a unit increase in $X$ is usually accompanied by some increase in $Z$ at the same time. A similar but more complicated explanation holds for other forms of multicollinearity.
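A quick simulation of this in R (the numbers are arbitrary): when $Z$ is nearly a copy of $X$, the individual coefficients become very unstable even though their sum is well determined.

set.seed(1)
n <- 100
x <- rnorm(n)
z <- x + rnorm(n, sd = 0.05)           # z is almost a copy of x
y <- 1 + 2 * x + 3 * z + rnorm(n)

summary(lm(y ~ x + z))$coefficients    # large standard errors on both slopes
summary(lm(y ~ x))$coefficients        # x alone: a tight estimate of the combined effect (about 5)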
{ "source": [ "https://stats.stackexchange.com/questions/1149", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
1,164
When solving business problems using data, it's common that at least one key assumption that under-pins classical statistics is invalid. Most of the time, no one bothers to check those assumptions so you never actually know. For instance, that so many of the common web metrics are "long-tailed" (relative to the normal distribution) is, by now, so well documented that we take it for granted. Another example, online communities--even in communities with thousands of members, it's well-documented that by far the largest share of contribution to/participation in many of these community is attributable to a minuscule group of 'super-contributors.' (E.g., a few months ago, just after the SO API was made available in beta, a StackOverflow member published a brief analysis from data he collected through the API; his conclusion-- less than one percent of the SO members account for most of the activity on SO (presumably asking questions, and answering them), another 1-2% accounted for the rest, and the overwhelming majority of the members do nothing). Distributions of that sort--again more often the rule rather than the exception--are often best modeled with a power law density function. For these type of distributions, even the central limit theorem is problematic to apply. So given the abundance of populations like this of interest to analysts, and given that classical models perform demonstrably poorly on these data, and given that robust and resistant methods have been around for a while (at least 20 years, I believe)--why are they not used more often? (I am also wondering why I don't use them more often, but that's not really a question for CrossValidated .) Yes I know that there are textbook chapters devoted entirely to robust statistics and I know there are (a few) R Packages ( robustbase is the one I am familiar with and use), etc. And yet given the obvious advantages of these techniques, they are often clearly the better tools for the job-- why are they not used much more often ? Shouldn't we expect to see robust (and resistant) statistics used far more often (perhaps even presumptively) compared with the classical analogs? The only substantive (i.e., technical) explanation I have heard is that robust techniques (likewise for resistant methods) lack the power/sensitivity of classical techniques. I don't know if this is indeed true in some cases, but I do know it is not true in many cases. A final word of preemption: yes I know this question does not have a single demonstrably correct answer; very few questions on this Site do. Moreover, this question is a genuine inquiry; it's not a pretext to advance a point of view--I don't have a point of view here, just a question for which i am hoping for some insightful answers.
Researchers want small p-values, and you can get smaller p-values if you use methods that make stronger distributional assumptions. In other words, non-robust methods let you publish more papers. Of course more of these papers may be false positives, but a publication is a publication. That's a cynical explanation, but it's sometimes valid.
{ "source": [ "https://stats.stackexchange.com/questions/1164", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/438/" ] }
1,174
I know of normality tests, but how do I test for "Poisson-ness"? I have sample of ~1000 non-negative integers, which I suspect are taken from a Poisson distribution, and I would like to test that.
First of all, my advice is to refrain from simply fitting a Poisson distribution to the data as it is. I suggest you first form a theory as to why a Poisson distribution should fit a particular dataset or phenomenon. Once you have established this, the next question is whether the distribution is homogeneous or not -- that is, whether all parts of the data are handled by the same Poisson distribution, or whether there is variation based on some aspect like time or space. Once you have convinced yourself of these aspects, try the following three tests: (1) the likelihood ratio test using a chi-square variable; (2) the conditional chi-square statistic, also called the Poisson dispersion test or variance test; (3) the Neyman-Scott statistic, which is based on a variance-stabilizing transformation of the Poisson variable. Search for these and you will find them easily on the net.
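For the second of these (the dispersion or variance test), a minimal sketch in base R -- under the Poisson null hypothesis, $(n-1)\,s^2/\bar{x}$ is approximately chi-squared with $n-1$ degrees of freedom:

poisson_dispersion_test <- function(x) {
  n    <- length(x)
  stat <- (n - 1) * var(x) / mean(x)
  p    <- 2 * min(pchisq(stat, df = n - 1), pchisq(stat, df = n - 1, lower.tail = FALSE))
  c(statistic = stat, p.value = p)
}

set.seed(1)
poisson_dispersion_test(rpois(1000, lambda = 4))           # consistent with Poisson
poisson_dispersion_test(rnbinom(1000, mu = 4, size = 2))   # overdispersed: clearly rejected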
{ "source": [ "https://stats.stackexchange.com/questions/1174", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/634/" ] }
1,194
Back in April, I attended a talk at the UMD Math Department Statistics group seminar series called "To Explain or To Predict?". The talk was given by Prof. Galit Shmueli who teaches at UMD's Smith Business School. Her talk was based on research she did for a paper titled "Predictive vs. Explanatory Modeling in IS Research" , and a follow up working paper titled "To Explain or To Predict?" . Dr. Shmueli's argument is that the terms predictive and explanatory in a statistical modeling context have become conflated, and that statistical literature lacks a a thorough discussion of the differences. In the paper, she contrasts both and talks about their practical implications. I encourage you to read the papers. The questions I'd like to pose to the practitioner community are: How do you define a predictive exercise vs an explanatory/descriptive one? It would be useful if you could talk about the specific application. Have you ever fallen into the trap of using one when meaning to use the other? I certainly have. How do you know which one to use?
In one sentence Predictive modelling is all about "what is likely to happen?", whereas explanatory modelling is all about "what can we do about it?" In many sentences I think the main difference is what is intended to be done with the analysis. I would suggest explanation is much more important for intervention than prediction. If you want to do something to alter an outcome, then you had best be looking to explain why it is the way it is. Explanatory modelling, if done well, will tell you how to intervene (which input should be adjusted). However, if you simply want to understand what the future will be like, without any intention (or ability) to intervene, then predictive modelling is more likely to be appropriate. As an incredibly loose example, using "cancer data". Predictive modelling using "cancer data" would be appropriate (or at least useful) if you were funding the cancer wards of different hospitals. You don't really need to explain why people get cancer, rather you only need an accurate estimate of how much services will be required. Explanatory modelling probably wouldn't help much here. For example, knowing that smoking leads to higher risk of cancer doesn't on its own tell you whether to give more funding to ward A or ward B. Explanatory modelling of "cancer data" would be appropriate if you wanted to decrease the national cancer rate - predictive modelling would be fairly obsolete here. The ability to accurately predict cancer rates is hardly likely to help you decide how to reduce it. However, knowing that smoking leads to higher risk of cancer is valuable information - because if you decrease smoking rates (e.g. by making cigarettes more expensive), this leads to more people with less risk, which (hopefully) leads to an expected decrease in cancer rates. Looking at the problem this way, I would think that explanatory modelling would mainly focus on variables which are in control of the user, either directly or indirectly. There may be a need to collect other variables, but if you can't change any of the variables in the analysis, then I doubt that explanatory modelling will be useful, except maybe to give you the desire to gain control or influence over those variables which are important. Predictive modelling, crudely, just looks for associations between variables, whether controlled by the user or not. You only need to know the inputs/features/independent variables/etc.. to make a prediction, but you need to be able to modify or influence the inputs/features/independent variables/etc.. in order to intervene and change an outcome.
{ "source": [ "https://stats.stackexchange.com/questions/1194", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/11/" ] }
1,207
This post is the continuation of another post related to a generic method for outlier detection in time series . Basically, at this point I'm interested in a robust way to discover the periodicity/seasonality of a generic time series affected by a lot of noise. From a developer point of view, I would like a simple interface such as: unsigned int discover_period(vector<double> v); Where v is the array containing the samples, and the return value is the period of the signal. The main point is that, again, I can't make any assumption regarding the analyzed signal. I already tried an approach based on the signal autocorrelation (detecting the peaks of a correlogram), but it's not robust as I would like.
If you really have no idea what the periodicity is, probably the best approach is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first. The following R function should do the job for most series. It is far from perfect, but I've tested it on a few dozen examples and it seems to work ok. It will return 1 for data that have no strong periodicity, and the length of period otherwise. Update: Version 2 of function. This is much faster and seems to be more robust.

find.freq <- function(x)
{
  n <- length(x)
  spec <- spec.ar(c(x), plot=FALSE)
  if(max(spec$spec) > 10)  # Arbitrary threshold chosen by trial and error.
  {
    period <- round(1/spec$freq[which.max(spec$spec)])
    if(period==Inf)  # Find next local maximum
    {
      j <- which(diff(spec$spec) > 0)
      if(length(j) > 0)
      {
        nextmax <- j[1] + which.max(spec$spec[j[1]:500])
        period <- round(1/spec$freq[nextmax])
      }
      else
        period <- 1
    }
  }
  else
    period <- 1
  return(period)
}
{ "source": [ "https://stats.stackexchange.com/questions/1207", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/667/" ] }
1,252
I was wondering if there is a statistical model "cheat sheet(s)" that lists any or more information: when to use the model when not to use the model required and optional inputs expected outputs has the model been tested in different fields (policy, bio, engineering, manufacturing, etc)? is it accepted in practice or research? expected variation / accuracy / precision caveats scalability deprecated model, avoid or don't use etc .. I've seen hierarchies before on various websites, and some simplistic model cheat sheets in various textbooks; however, it'll be nice if there is a larger one that encompasses various types of models based on different types of analysis and theories.
I have previously found UCLA's "Choosing the Correct Statistical Test" to be helpful: https://stats.idre.ucla.edu/other/mult-pkg/whatstat/ It also gives examples of how to do the analysis in SAS, Stata, SPSS and R.
{ "source": [ "https://stats.stackexchange.com/questions/1252", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/59/" ] }
1,292
Decision trees seems to be a very understandable machine learning method. Once created it can be easily inspected by a human which is a great advantage in some applications. What are the practical weak sides of Decision Trees?
Here are a couple I can think of: They can be extremely sensitive to small perturbations in the data: a slight change can result in a drastically different tree. They can easily overfit. This can be negated by validation methods and pruning, but this is a grey area. They can have problems with out-of-sample prediction (this is related to them being non-smooth). Some of these are related to the problem of multicollinearity: when two variables both explain the same thing, a decision tree will greedily choose the best one, whereas many other methods will use them both. Ensemble methods such as random forests can negate this to a certain extent, but you lose the ease of understanding. However, the biggest problem, from my point of view at least, is the lack of a principled probabilistic framework. Many other methods have things like confidence intervals, posterior distributions etc., which give us some idea of how good a model is. A decision tree is ultimately an ad hoc heuristic, which can still be very useful (they are excellent for finding the sources of bugs in data processing), but there is the danger of people treating the output as "the" correct model (from my experience, this happens a lot in marketing).
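To see the sensitivity to small perturbations concretely, here is a small sketch using the rpart package (assuming its usual formula interface): refit the tree on bootstrap resamples and record which variable is chosen at the root.

library(rpart)

set.seed(1)
root_split <- replicate(20, {
  boot <- iris[sample(nrow(iris), replace = TRUE), ]  # a slightly perturbed dataset
  fit  <- rpart(Species ~ ., data = boot)
  as.character(fit$frame$var[1])                      # variable used at the root node
})
table(root_split)  # resamples may pick different root splits, illustrating the instability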
{ "source": [ "https://stats.stackexchange.com/questions/1292", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/217/" ] }
1,337
Well, we've got favourite statistics quotes. What about statistics jokes?
A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the statistician. "Baptize one. We'll keep the other as a control." STATS: The Magazine For Students of Statistics, Winter 1996, Number 15
{ "source": [ "https://stats.stackexchange.com/questions/1337", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/521/" ] }
1,432
In answering this question John Christie suggested that the fit of logistic regression models should be assessed by evaluating the residuals. I'm familiar with how to interpret residuals in OLS, they are in the same scale as the DV and very clearly the difference between y and the y predicted by the model. However for logistic regression, in the past I've typically just examined estimates of model fit, e.g. AIC, because I wasn't sure what a residual would mean for a logistic regression. After looking into R's help files a little bit I see that in R there are five types of glm residuals available, c("deviance", "pearson", "working","response", "partial") . The help file refers to: Davison, A. C. and Snell, E. J. (1991) Residuals and diagnostics. In: Statistical Theory and Modelling. In Honour of Sir David Cox, FRS , eds. Hinkley, D. V., Reid, N. and Snell, E. J., Chapman & Hall. I do not have a copy of that. Is there a short way to describe how to interpret each of these types? In a logistic context will sum of squared residuals provide a meaningful measure of model fit or is one better off with an Information Criterion?
The easiest residuals to understand are the deviance residuals: when squared, these sum to -2 times the log-likelihood. At its simplest, logistic regression can be understood as fitting the function $p = \text{logit}^{-1}(X\beta)$ for known $X$ in such a way as to minimise the total deviance, which is the sum of the squared deviance residuals of all the data points. The (squared) deviance of each data point is equal to (-2 times) the logarithm of the difference, in absolute terms, between its predicted probability $\text{logit}^{-1}(X\beta)$ and the complement of its actual value (1 for a control; 0 for a case). A perfect fit of a point (which never occurs) gives a deviance of zero, as log(1) is zero. A poorly fitting point has a large residual deviance, as -2 times the log of a very small value is a large number. Doing logistic regression is akin to finding a beta value such that the sum of squared deviance residuals is minimised. This can be illustrated with a plot, but I don't know how to upload one.
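A quick numerical check of that relationship in R, using a built-in dataset:

fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)

dev_res <- residuals(fit, type = "deviance")
c(sum_of_squares = sum(dev_res^2),
  deviance       = deviance(fit),
  minus2_loglik  = -2 * as.numeric(logLik(fit)))
# All three numbers coincide: the squared deviance residuals sum to -2 times the log-likelihood.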
{ "source": [ "https://stats.stackexchange.com/questions/1432", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/196/" ] }
1,444
If I have highly skewed positive data I often take logs. But what should I do with highly skewed non-negative data that include zeros? I have seen two transformations used: $\log(x+1)$ which has the neat feature that 0 maps to 0. $\log(x+c)$ where c is either estimated or set to be some very small positive value. Are there any other approaches? Are there any good reasons to prefer one approach over the others?
It seems to me that the most appropriate choice of transformation is contingent on the model and the context. The '0' point can arise from several different reasons each of which may have to be treated differently: Truncation (as in Robin's example): Use appropriate models (e.g., mixtures, survival models etc) Missing data: Impute data / Drop observations if appropriate. Natural zero point (e.g., income levels; an unemployed person has zero income): Transform as needed Sensitivity of measuring instrument: Perhaps, add a small amount to data? I am not really offering an answer as I suspect there is no universal, 'correct' transformation when you have zeros.
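For the transformation route specifically, a small sketch (simulated skewed data with exact zeros) showing that the choice of the added constant matters:

set.seed(1)
x <- rpois(1000, lambda = 0.5) * rlnorm(1000)  # skewed, with many exact zeros

y1 <- log1p(x)         # log(x + 1): maps 0 to 0 and is numerically stable
y2 <- log(x + 0.001)   # a "small c": the zeros become a spike far in the left tail

summary(y1); summary(y2)  # the smaller c is, the more the zeros dominate the left tail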
{ "source": [ "https://stats.stackexchange.com/questions/1444", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/159/" ] }
1,447
I want to fully grasp the notion of $r^2$ describing the amount of variation between variables. Every web explanation is a bit mechanical and obtuse. I want to "get" the concept, not just mechanically use the numbers. E.g.: Hours studied vs. test score $r$ = .8 $r^2$ = .64 So, what does this mean? 64% of the variability of test scores can be explained by hours? How do we know that just by squaring?
Start with the basic idea of variation. Your beginning model is the sum of the squared deviations from the mean. The R^2 value is the proportion of that variation that is accounted for by using an alternative model. For example, R-squared tells you how much of the variation in Y you can get rid of by summing up the squared distances from a regression line, rather than the mean. I think this is made perfectly clear if we think about the simple regression problem plotted out. Consider a typical scatterplot where you have a predictor X along the horizontal axis and a response Y along the vertical axis. The mean is a horizontal line on the plot where Y is constant. The total variation in Y is the sum of squared differences between the mean of Y and each individual data point. It's the distance between the mean line and every individual point squared and added up. You can also calculate another measure of variability after you have the regression line from the model. This is the difference between each Y point and the regression line. Rather than each (Y - the mean) squared we get (Y - the point on the regression line) squared. If the regression line is anything but horizontal, we're going to get less total distance when we use this fitted regression line rather than the mean--that is there is less unexplained variation. The ratio between the extra variation explained and the original variation is your R^2. It's the proportion of the original variation in your response that is explained by fitting that regression line. Here is some R code for a graph with the mean, the regression line, and segments from the regression line to each point to help visualize: library(ggplot2) data(faithful) plotdata <- aggregate( eruptions ~ waiting , data = faithful, FUN = mean) linefit1 <- lm(eruptions ~ waiting, data = plotdata) plotdata$expected <- predict(linefit1) plotdata$sign <- residuals(linefit1) > 0 p <- ggplot(plotdata, aes(y=eruptions, x=waiting, xend=waiting, yend=expected) ) p + geom_point(shape = 1, size = 3) + geom_smooth(method=lm, se=FALSE) + geom_segment(aes(y=eruptions, x=waiting, xend=waiting, yend=expected, colour = sign), data = plotdata) + theme(legend.position="none") + geom_hline(yintercept = mean(plotdata$eruptions), size = 1)
{ "source": [ "https://stats.stackexchange.com/questions/1447", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6967/" ] }
1,525
Say I have eaten hamburgers every Tuesday for years. You could say that I eat hamburgers 14% of the time, or that the probability of me eating a hamburger in a given week is 14%. What are the main differences between probabilities and proportions? Is a probability an expected proportion? Are probabilities uncertain and proportions are guaranteed?
I have hesitated to wade into this discussion, but because it seems to have gotten sidetracked over a trivial issue concerning how to express numbers, maybe it's worthwhile refocusing it. A point of departure for your consideration is this: A probability is a hypothetical property. Proportions summarize observations. A frequentist might rely on laws of large numbers to justify statements like "the long-run proportion of an event [is] its probability." This supplies meaning to statements like "a probability is an expected proportion," which otherwise might appear merely tautological. Other interpretations of probability also lead to connections between probabilities and proportions but they are less direct than this one. In our models we usually take probabilities to be definite but unknown. Due to the sharp contrasts among the meanings of "probable," "definite," and "unknown" I am reluctant to apply the term "uncertain" to describe that situation. However, before we conduct a sequence of observations, the [eventual] proportion, like any future event, is indeed "uncertain". After we make those observations, the proportion is both definite and known. (Perhaps this is what is meant by "guaranteed" in the OP.) Much of our knowledge about the [hypothetical] probability is mediated through these uncertain observations and informed by the idea that they might have turned out otherwise. In this sense--that uncertainty about the observations is transmitted back to uncertain knowledge of the underlying probability--it seems justifiable to refer to the probability as "uncertain." In any event it is apparent that probabilities and proportions function differently in statistics, despite their similarities and intimate relationships. It would be a mistake to take them to be the same thing. Reference Huber, WA Ignorance is Not Probability . Risk Analysis Volume 30, Issue 3, pages 371–376, March 2010.
{ "source": [ "https://stats.stackexchange.com/questions/1525", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ] }
1,576
It seems that a number of the statistical packages that I use wrap these two concepts together. However, I'm wondering if there are different assumptions or data 'formalities' that must be true to use one over the other. A real example would be incredibly useful.
Principal component analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors. In psychology these two techniques are often applied in the construction of multi-scale tests to determine which items load on which scales. They typically yield similar substantive conclusions (for a discussion see Comrey (1988) Factor-Analytic Methods of Scale Development in Personality and Clinical Psychology). This helps to explain why some statistics packages seem to bundle them together. I have also seen situations where "principal component analysis" is incorrectly labelled "factor analysis". In terms of a simple rule of thumb , I'd suggest that you: Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables. Run principal component analysis If you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables.
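In base R the two analyses use different functions, which makes the distinction concrete (a sketch on a built-in dataset; the number of factors is arbitrary):

data(swiss)

# PCA: linear composites of the observed (standardised) variables.
pca <- prcomp(swiss, scale. = TRUE)
summary(pca)   # variance accounted for by each component

# FA: a latent-variable model, here with 2 factors estimated by maximum likelihood.
fa <- factanal(swiss, factors = 2, rotation = "varimax")
fa$loadings    # loadings of the observed variables on the latent factors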
{ "source": [ "https://stats.stackexchange.com/questions/1576", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/776/" ] }
1,580
Imagine You run a linear regression with four numeric predictors (IV1, ..., IV4) When only IV1 is included as a predictor the standardised beta is +.20 When you also include IV2 to IV4 the sign of the standardised regression coefficient of IV1 flips to -.25 (i.e., it's become negative). This gives rise to a few questions: With regards to terminology, do you call this a "suppressor effect"? What strategies would you use to explain and understand this effect? Do you have any examples of such effects in practice and how did you explain and understand these effects?
Multicollinearity is the usual suspect as JoFrhwld mentioned. Basically, if your variables are positively correlated, then the coefficients will be negatively correlated, which can lead to a wrong sign on one of the coefficients. One check would be to perform a principal components regression or ridge regression. This reduces the dimensionality of the regression space, handling the multicollinearity. You end up with biased estimates but a possibly lower MSE and corrected signs. Whether you go with those particular results or not, it's a good diagnostic check. If you still get sign changes, it may be theoretically interesting. UPDATE Following from the comment in John Christie's answer, this might be interesting. Reversals in association (in magnitude or direction) are examples of Simpson's paradox, Lord's paradox and suppression effects. The differences essentially relate to the type of variable. It's more useful to understand the underlying phenomenon rather than think in terms of a particular "paradox" or effect. For a causal perspective, the paper below does a good job of explaining why, and I'll quote at length their introduction and conclusion to whet your appetite. The role of causal reasoning in understanding Simpson's paradox, Lord's paradox, and the suppression effect: covariate selection in the analysis of observational studies Tu et al present an analysis of the equivalence of three paradoxes, concluding that all three simply reiterate the unsurprising change in the association of any two variables when a third variable is statistically controlled for. I call this unsurprising because reversal or change in magnitude is common in conditional analysis. To avoid either, we must avoid conditional analysis altogether. What is it about Simpson's and Lord's paradoxes or the suppression effect, beyond their pointing out the obvious, that attracts the intermittent and sometimes alarmist interests seen in the literature? [...] In conclusion, it cannot be overemphasized that although Simpson's and related paradoxes reveal the perils of using statistical criteria to guide causal analysis, they hold neither the explanations of the phenomenon they purport to depict nor the pointers on how to avoid them. The explanations and solutions lie in causal reasoning which relies on background knowledge, not statistical criteria. It is high time we stopped treating misinterpreted signs and symptoms ('paradoxes'), and got on with the business of handling the disease ('causality'). We should rightly turn our attention to the perennial problem of covariate selection for causal analysis using non-experimental data.
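Here is a small simulation of such a sign flip (base R; the coefficients are arbitrary):

set.seed(1)
n <- 1000
z <- rnorm(n)
x <- 0.9 * z + rnorm(n, sd = sqrt(1 - 0.9^2))  # x and z highly positively correlated
y <- -1 * x + 2 * z + rnorm(n)

coef(lm(y ~ x))       # x alone: positive coefficient (x acts as a proxy for z)
coef(lm(y ~ x + z))   # with z included: the coefficient on x flips to about -1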
{ "source": [ "https://stats.stackexchange.com/questions/1580", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/183/" ] }
1,595
Lots of people use a main tool like Excel or another spreadsheet, SPSS, Stata, or R for their statistics needs. They might turn to some specific package for very special needs, but a lot of things can be done with a simple spreadsheet or a general stats package or stats programming environment. I've always liked Python as a programming language, and for simple needs, it's easy to write a short program that calculates what I need. Matplotlib allows me to plot it. Has anyone switched completely from, say R, to Python? R (or any other statistics package) has a lot of functionality specific to statistics, and it has data structures that allow you to think about the statistics you want to perform and less about the internal representation of your data. Python (or some other dynamic language) has the benefit of allowing me to program in a familiar, high-level language, and it lets me programmatically interact with real-world systems in which the data resides or from which I can take measurements. But I haven't found any Python package that would allow me to express things with "statistical terminology" – from simple descriptive statistics to more complicated multivariate methods. What can you recommend if I wanted to use Python as a "statistics workbench" to replace R, SPSS, etc.? What would I gain and lose, based on your experience?
It's hard to ignore the wealth of statistical packages available in R/CRAN. That said, I spend a lot of time in Python land and would never dissuade anyone from having as much fun as I do. :) Here are some libraries/links you might find useful for statistical work. NumPy/Scipy You probably know about these already. But let me point out the Cookbook where you can read about many statistical facilities already available and the Example List which is a great reference for functions (including data manipulation and other operations). Another handy reference is John Cook's Distributions in Scipy . pandas This is a really nice library for working with statistical data -- tabular data, time series, panel data. Includes many builtin functions for data summaries, grouping/aggregation, pivoting. Also has a statistics/econometrics library. larry Labeled array that plays nice with NumPy. Provides statistical functions not present in NumPy and good for data manipulation. python-statlib A fairly recent effort which combined a number of scattered statistics libraries. Useful for basic and descriptive statistics if you're not using NumPy or pandas. statsmodels Statistical modeling: Linear models, GLMs, among others. scikits Statistical and scientific computing packages -- notably smoothing, optimization and machine learning. PyMC For your Bayesian/MCMC/hierarchical modeling needs. Highly recommended. PyMix Mixture models. Biopython Useful for loading your biological data into python, and provides some rudimentary statistical/ machine learning tools for analysis. If speed becomes a problem, consider Theano -- used with good success by the deep learning people. There's plenty of other stuff out there, but this is what I find the most useful along the lines you mentioned.
{ "source": [ "https://stats.stackexchange.com/questions/1595", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/890/" ] }
1,610
I'm not a statistician by education, I'm a software engineer. Yet statistics comes up a lot. In fact, questions specifically about Type I and Type II error are coming up a lot in the course of my studying for the Certified Software Development Associate exam (mathematics and statistics are 10% of the exam). I'm having trouble always coming up with the right definitions for Type I and Type II error - although I'm memorizing them now (and can remember them most of the time), I really don't want to freeze up on this exam trying to remember what the difference is. I know that Type I Error is a false positive, or when you reject the null hypothesis and it's actually true and a Type II error is a false negative, or when you accept the null hypothesis and it's actually false. Is there an easy way to remember what the difference is, such as a mnemonic? How do professional statisticians do it - is it just something that they know from using or discussing it often? (Side Note: This question can probably use some better tags. One that I wanted to create was "terminology", but I don't have enough reputation to do it. If someone could add that, it would be great. Thanks.)
Since type two means "False negative" or sort of "false false", I remember it as the number of falses.

Type I: "I falsely think the alternate hypothesis is true" (one false)

Type II: "I falsely think the alternate hypothesis is false" (two falses)
{ "source": [ "https://stats.stackexchange.com/questions/1610", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/110/" ] }
1,637
I'm sure I've got this completely wrapped round my head, but I just can't figure it out. The t-test compares two normal distributions using the Z distribution. That's why there's an assumption of normality in the DATA. ANOVA is equivalent to linear regression with dummy variables, and uses sums of squares, just like OLS. That's why there's an assumption of normality of RESIDUALS. It's taken me several years, but I think I've finally grasped those basic facts. So why is it that the t-test is equivalent to ANOVA with two groups? How can they be equivalent if they don't even assume the same things about the data?
The t-test with two groups assumes that each group is normally distributed with the same variance (although the means may differ under the alternative hypothesis). That is equivalent to a regression with a dummy variable as the regression allows the mean of each group to differ but not the variance. Hence the residuals (equal to the data with the group means subtracted) have the same distribution --- that is, they are normally distributed with zero mean. A t-test with unequal variances is not equivalent to a one-way ANOVA.
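A quick numerical check of this equivalence (simulated data, not part of the original answer): with a pooled-variance t-test, the one-way ANOVA F statistic equals the squared t statistic and the p-values coincide.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
g1 = rng.normal(loc=0.0, scale=1.0, size=30)
g2 = rng.normal(loc=0.5, scale=1.0, size=25)

t, p_t = stats.ttest_ind(g1, g2, equal_var=True)   # pooled-variance t-test
F, p_F = stats.f_oneway(g1, g2)                    # one-way ANOVA with two groups

print(f"t^2 = {t**2:.6f},  F = {F:.6f}")                    # identical
print(f"p (t-test) = {p_t:.6f},  p (ANOVA) = {p_F:.6f}")    # identical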
{ "source": [ "https://stats.stackexchange.com/questions/1637", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/199/" ] }
1,645
So far, I've been using the Shapiro-Wilk statistic in order to test normality assumptions in small samples. Could you please recommend another technique?
The fBasics package in R (part of Rmetrics) includes several normality tests, covering many of the popular frequentist tests -- Kolmogorov-Smirnov, Shapiro-Wilk, Jarque–Bera, and D'Agostino -- along with a wrapper for the normality tests in the nortest package -- Anderson–Darling, Cramer–von Mises, Lilliefors (Kolmogorov-Smirnov), Pearson chi–square, and Shapiro–Francia. The package documentation also provides all the important references. Here is a demo that shows how to use the tests from nortest.

One approach, if you have the time, is to use more than one test and check for agreement. The tests vary in a number of ways, so it isn't entirely straightforward to choose "the best". What do other researchers in your field use? This can vary, and it may be best to stick with the accepted methods so that others will accept your work. I frequently use the Jarque-Bera test, partly for that reason, and Anderson–Darling for comparison.

You can look at "Comparison of Tests for Univariate Normality" (Seier 2002) and "A comparison of various tests of normality" (Yazici; Yolacan 2007) for a comparison and discussion of the issues.

It's also trivial to test these methods for comparison in R, thanks to all the distribution functions. Here's a simple example with simulated data (I won't print out the results to save space), although a fuller exposition would be required:

library(fBasics); library(ggplot2)
set.seed(1)

# normal distribution
x1 <- rnorm(1e+06)
x1.samp <- sample(x1, 200)
qplot(x1.samp, geom="histogram")
jbTest(x1.samp)
adTest(x1.samp)

# Cauchy distribution
x2 <- rcauchy(1e+06)
x2.samp <- sample(x2, 200)
qplot(x2.samp, geom="histogram")
jbTest(x2.samp)
adTest(x2.samp)

Once you have the results from the various tests over different distributions, you can compare which were the most effective. For instance, the p-value for the Jarque-Bera test above was 0.276 for the normal sample (not rejecting the null of normality) and < 2.2e-16 for the Cauchy sample (rejecting the null hypothesis).
{ "source": [ "https://stats.stackexchange.com/questions/1645", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/1356/" ] }
1,736
I am wondering if there are any packages for Python that are capable of performing survival analysis. I have been using the survival package in R but would like to port my work to Python.
AFAIK, there aren't any survival analysis packages in Python. As mbq comments above, the only route available would be to use Rpy. Even if there were a pure Python package available, I would be very careful in using it; in particular, I would look at:

How often does it get updated?
Does it have a large user base?
Does it have advanced techniques?

One of the benefits of R is that these standard packages get a massive amount of testing and user feedback. When dealing with real data, unexpected edge cases can creep in.
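A rough sketch of that Rpy route, using rpy2 to drive R's survival package from Python. This assumes R, the survival package and rpy2 are all installed, and the exact rpy2 API can differ between versions, so check its documentation; 'aml' is just an example data set shipped with the survival package.

import rpy2.robjects as ro

ro.r("library(survival)")
ro.r("fit <- survfit(Surv(time, status) ~ x, data = aml)")   # Kaplan-Meier curves
print(ro.r("summary(fit)"))

# Cox proportional hazards model on the same data
print(ro.r("coxph(Surv(time, status) ~ x, data = aml)"))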
{ "source": [ "https://stats.stackexchange.com/questions/1736", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/172/" ] }
1,805
I was taught to only apply Fisher's Exact Test in contingency tables that were 2x2.

Questions:

1. Did Fisher himself ever envision this test being used in tables larger than 2x2? (I am aware of the tale of him devising the test while trying to guess whether an old woman could tell if milk was added to tea or tea was added to milk.)
2. Stata allows me to use Fisher's exact test on any contingency table. Is this valid?
3. Is it preferable to use FET when expected cell counts in a contingency table are < 5?
The only problem with applying Fisher's exact test to tables larger than 2x2 is that the calculations become much more difficult to do. The 2x2 version is the only one which is even feasible by hand, and so I doubt that Fisher ever imagined the test in larger tables, because the computations would have been beyond anything he would have envisaged. Nevertheless, the test can be applied to any m x n table, and some software, including Stata and SPSS, provides the facility. Even so, the calculation is often approximated using a Monte Carlo approach. Yes, if the expected cell counts are small, it is better to use an exact test, as the chi-squared test is no longer a good approximation in such cases.
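To illustrate the Monte Carlo idea (this sketch is mine, not part of the original answer): expand the table to individual-level labels, shuffle one of the two classifications, and rebuild the table. Shuffling keeps both sets of margins fixed, so the resampled tables come from the same conditional null distribution that the exact test uses. Here the chi-square statistic orders the tables (as R's chisq.test with simulate.p.value = TRUE does); Fisher's test orders them by table probability instead, so the p-values are close but not identical. The table below is made up.

import numpy as np

observed = np.array([[8, 3, 1],
                     [2, 6, 5],
                     [1, 2, 9]])          # invented 3x3 table
n = observed.sum()
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n

rows, cols = np.indices(observed.shape)
row_labels = np.repeat(rows.ravel(), observed.ravel())
col_labels = np.repeat(cols.ravel(), observed.ravel())

def chi_sq(r, c):
    table = np.zeros(observed.shape, dtype=int)
    np.add.at(table, (r, c), 1)            # rebuild the contingency table
    return ((table - expected) ** 2 / expected).sum()

rng = np.random.default_rng(0)
obs_stat = chi_sq(row_labels, col_labels)
n_sims = 10_000
exceed = sum(chi_sq(row_labels, rng.permutation(col_labels)) >= obs_stat
             for _ in range(n_sims))
print("Monte Carlo p-value:", (exceed + 1) / (n_sims + 1))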
{ "source": [ "https://stats.stackexchange.com/questions/1805", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/561/" ] }
1,826
How would you describe cross-validation to someone without a data analysis background?
Consider the following situation: I want to catch the subway to go to my office. My plan is to take my car, park at the subway and then take the train to go to my office. My goal is to catch the train at 8.15 am every day so that I can reach my office on time. I need to decide the following: (a) the time at which I need to leave from my home and (b) the route I will take to drive to the station.

In the above example, I have two parameters (i.e., time of departure from home and route to take to the station) and I need to choose these parameters such that I reach the station by 8.15 am. In order to solve the above problem I may try out different sets of 'parameters' (i.e., different combinations of departure time and route) on Mondays, Wednesdays, and Fridays, to see which combination is the 'best' one. The idea is that once I have identified the best combination I can use it every day so that I achieve my objective.

Problem of Overfitting

The problem with the above approach is that I may overfit, which essentially means that the best combination I identify may in some sense be unique to Mon, Wed and Fridays, and that combination may not work for Tue and Thu. Overfitting may happen if, in my search for the best combination of times and routes, I exploit some aspect of the traffic situation on Mon/Wed/Fri which does not occur on Tue and Thu.

One Solution to Overfitting: Cross-Validation

Cross-validation is one solution to overfitting. The idea is that once we have identified our best combination of parameters (in our case, time and route) we test the performance of that set of parameters in a different context. Therefore, we may want to test on Tue and Thu as well to ensure that our choices work for those days as well.

Extending the analogy to statistics

In statistics, we have a similar issue. We often use a limited set of data to estimate the unknown parameters we do not know. If we overfit, then our parameter estimates will work very well for the existing data but not as well for when we use them in another context. Thus, cross-validation helps in avoiding the above issue of overfitting by providing us some reassurance that the parameter estimates are not unique to the data we used to estimate them.

Of course, cross-validation is not perfect. Going back to our example of the subway, it can happen that even after cross-validation, our best choice of parameters may not work one month down the line because of various issues (e.g., construction, traffic volume changes over time, etc.).
{ "source": [ "https://stats.stackexchange.com/questions/1826", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
1,829
I usually hear about "ordinary least squares". Is that the most widely used algorithm for linear regression? Are there reasons to use a different one?
Regarding the question in the title, about what is the algorithm that is used:

From a linear algebra perspective, the linear regression algorithm is the way to solve a linear system $\mathbf{A}x=b$ with more equations than unknowns. In most cases there is no solution to this problem, because the vector $b$ doesn't belong to the column space of $\mathbf{A}$, $C(\mathbf{A})$. The best straight line is the one that makes the overall error $e=\mathbf{A}x-b$ as small as possible. It is convenient to take "small" to mean the squared length, $\lVert e \rVert^2$, because it is non-negative and equals 0 only when $b\in C(\mathbf{A})$.

Projecting the vector $b$ (orthogonally) onto the nearest point in the column space of $\mathbf{A}$ gives the vector $b^*$ that solves the system with the minimum error (its components lie on the best straight line):

$\mathbf{A}^T\mathbf{A}\hat{x}=\mathbf{A}^Tb \Rightarrow \hat{x}=(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$

and the projected vector $b^*$ is given by:

$b^*=\mathbf{A}\hat{x}=\mathbf{A}(\mathbf{A}^T\mathbf{A})^{-1}\mathbf{A}^Tb$

Perhaps the least squares method is not used exclusively because squaring overcompensates for outliers.

Let me give a simple example in R that solves the regression problem using this algorithm:

library(fBasics)

reg.data <- read.table(textConnection("
   b x
  12 0
  10 1
   8 2
  11 3
   6 4
   7 5
   2 6
   3 7
   3 8
"), header = T)

attach(reg.data)

A <- model.matrix(b ~ x)

# intercept and slope
inv(t(A) %*% A) %*% t(A) %*% b

# fitted values - the projected vector b in the C(A)
A %*% inv(t(A) %*% A) %*% t(A) %*% b

# The projection is easier if the orthogonal matrix Q is used,
# because t(Q) %*% Q = I
Q <- qr.Q(qr(A))
R <- qr.R(qr(A))

# intercept and slope
best.line <- inv(R) %*% t(Q) %*% b

# fitted values
Q %*% t(Q) %*% b

plot(x, b, pch = 16)
abline(best.line[1], best.line[2])
{ "source": [ "https://stats.stackexchange.com/questions/1829", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/988/" ] }
1,850
For an effect size analysis, I am noticing that there are differences between Cohen's d, Hedges' g and Hedges' g*. Are these three metrics normally very similar? What would be a case where they would produce different results? Also, is it a matter of preference which one I use and report?
Both Cohen's d and Hedges' g pool variances on the assumption of equal population variances, but g pools using n - 1 for each sample instead of n, which provides a better estimate, especially the smaller the sample sizes. Both d and g are somewhat positively biased, but only negligibly so for moderate or larger sample sizes. The bias is reduced by using g*. Glass's d does not assume equal variances, so it uses the SD of a control group or baseline comparison group as the standardizer for the difference between the two means. These effect sizes, along with Cliff's and other nonparametric effect sizes, are discussed in detail in my book: Grissom, R. J., & Kim, J. J. (2005). Effect sizes for research: A broad practical approach. Mahwah, NJ: Erlbaum.
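A rough sketch of these distinctions in code, following one common set of conventions; the exact formulas for the pooled SDs and the small-sample correction vary slightly across the literature, so treat the expressions below as illustrative rather than definitive.

import numpy as np

def effect_sizes(x1, x2):
    n1, n2 = len(x1), len(x2)
    diff = np.mean(x1) - np.mean(x2)

    # Cohen's d: pooled SD built from variances computed with n
    s_d = np.sqrt((n1 * np.var(x1, ddof=0) + n2 * np.var(x2, ddof=0)) / (n1 + n2))
    d = diff / s_d

    # Hedges' g: pooled SD built from variances computed with n - 1
    s_g = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1))
                  / (n1 + n2 - 2))
    g = diff / s_g

    # Bias-corrected g* (one common approximation to the correction factor)
    g_star = g * (1 - 3 / (4 * (n1 + n2) - 9))

    # Glass's d: standardize by the control group's SD only
    glass = diff / np.std(x2, ddof=1)
    return d, g, g_star, glass

rng = np.random.default_rng(3)
treated, control = rng.normal(0.5, 1, 20), rng.normal(0.0, 1, 20)
print(effect_sizes(treated, control))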
{ "source": [ "https://stats.stackexchange.com/questions/1850", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/559/" ] }
2,038
I came across this nice tutorial, A Handbook of Statistical Analyses Using R, Chapter 13: Principal Component Analysis: The Olympic Heptathlon, on how to do PCA in the R language. I don't understand the interpretation of Figure 13.3: I am plotting the first eigenvector against the second eigenvector. What does that mean? Suppose the eigenvalue corresponding to the first eigenvector explains 60% of the variation in the data set and the second eigenvalue/eigenvector pair explains 20% of the variation. What does it mean to plot these against each other?
PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when data are projected onto a line (which stands for a direction in the $p$-dimensional space, assuming you have $p$ variables), and the second one is orthogonal to it and still maximizes the remaining variance. This is the reason why using the first two axes should yield the best approximation of the original variables space (say, a matrix $X$ of dim $n \times p$) when it is projected onto a plane.

Principal components are just linear combinations of the original variables. Therefore, plotting individual factor scores (defined as $Xu$, where $u$ is the vector of loadings of any principal component) may help to highlight groups of homogeneous individuals, for example, or to interpret one's overall scoring when considering all variables at the same time. In other words, this is a way to summarize one's location with respect to one's values on the $p$ variables, or a combination thereof. In your case, Fig. 13.3 in HSAUR shows that Joyner-Kersee (Jy-K) has a high (negative) score on the 1st axis, suggesting she performed quite well overall on all events. The same line of reasoning applies for interpreting the second axis. I took only a very quick look at the figure, so I will not go into details and my interpretation is certainly superficial. I assume that you will find further information in the HSAUR textbook.

Here it is worth noting that both variables and individuals are shown on the same diagram (this is called a biplot), which helps to interpret the factorial axes while looking at individuals' locations. Usually, we plot the variables in a so-called correlation circle, where the angle formed by any two variables (represented here as vectors) reflects their actual pairwise correlation, since the cosine of the angle between pairs of vectors amounts to the correlation between the variables.

I think, however, you'd better start reading some introductory book on multivariate analysis to get deep insight into PCA-based methods. For example, B. S. Everitt wrote an excellent textbook on this topic, An R and S-Plus® Companion to Multivariate Analysis, and you can check the companion website for illustrations. There are other great R packages for applied multivariate data analysis, like ade4 and FactoMineR.
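Not the heptathlon data from HSAUR, but a generic sketch of the same kind of plot with scikit-learn and matplotlib: individuals' scores on the first two principal components, with the variable loadings overlaid as arrows (a biplot). The data and variable names are simulated for illustration.

import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))   # some correlated columns
names = [f"var{j + 1}" for j in range(X.shape[1])]

Z = StandardScaler().fit_transform(X)        # i.e., work with the correlation matrix
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)                # individuals' coordinates
loadings = pca.components_.T                 # variables' contributions

plt.scatter(scores[:, 0], scores[:, 1], s=10)
for j, name in enumerate(names):
    plt.arrow(0, 0, 3 * loadings[j, 0], 3 * loadings[j, 1], color="red")
    plt.annotate(name, (3 * loadings[j, 0], 3 * loadings[j, 1]))
plt.xlabel(f"PC1 ({pca.explained_variance_ratio_[0]:.0%} of variance)")
plt.ylabel(f"PC2 ({pca.explained_variance_ratio_[1]:.0%} of variance)")
plt.show()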
{ "source": [ "https://stats.stackexchange.com/questions/2038", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/862/" ] }
2,077
Besides taking differences, what are other techniques for making a non-stationary time series stationary? Ordinarily one refers to a series as "integrated of order p" if it can be made stationary by applying the operator $(1-L)^p$, i.e., if $(1-L)^p X_t$ is stationary.
De-trending is fundamental. This includes regressing against covariates other than time. Seasonal adjustment is a version of taking differences but could be construed as a separate technique. Transformation of the data implicitly converts a difference operator into something else; e.g., differences of the logarithms are actually ratios. Some EDA smoothing techniques (such as removing a moving median) could be construed as non-parametric ways of detrending. They were used as such by Tukey in his book on EDA. Tukey continued by detrending the residuals and iterating this process for as long as necessary (until he achieved residuals that appeared stationary and symmetrically distributed around zero).
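As a rough illustration of a few of these techniques on a simulated monthly series (the numbers are invented; the point is only the mechanics): fit and subtract a linear trend, take a seasonal difference, and difference the logarithms, which turns levels into approximate growth rates.

import numpy as np

rng = np.random.default_rng(7)
t = np.arange(120)                                   # ten years of monthly data
season = 5 * np.sin(2 * np.pi * t / 12)
y = 50 + 0.3 * t + season + rng.normal(scale=2, size=t.size)

# 1. De-trend by regressing on time and keeping the residuals
slope, intercept = np.polyfit(t, y, 1)
detrended = y - (intercept + slope * t)

# 2. Seasonal adjustment via a seasonal (lag-12) difference
seasonally_differenced = y[12:] - y[:-12]

# 3. Differences of the logarithms ~ period-over-period growth rates
log_diff = np.diff(np.log(y))

print(detrended[:5], seasonally_differenced[:5], log_diff[:5], sep="\n")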
{ "source": [ "https://stats.stackexchange.com/questions/2077", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
2,092
The waiting times of a Poisson process follow an exponential distribution with parameter lambda. But I don't understand it. The Poisson distribution models the number of arrivals per unit of time, for example. How is this related to the exponential distribution? Let's say the probability of k arrivals in a unit of time is P(k) (modeled by a Poisson distribution) and the probability of k+1 arrivals is P(k+1); how does the exponential distribution model the waiting time between them?
I will use the following notation to be as consistent as possible with the wiki (in case you want to go back and forth between my answer and the wiki definitions for the Poisson and exponential distributions).

$N_t$: the number of arrivals during time period $t$

$X_t$: the time it takes for one additional arrival to arrive, assuming that someone arrived at time $t$

By definition, the following conditions are equivalent:

$(X_t > x) \equiv (N_t = N_{t+x})$

The event on the left captures the event that no one has arrived in the time interval $[t,t+x]$, which implies that our count of the number of arrivals at time $t+x$ is identical to the count at time $t$, which is the event on the right.

By the complement rule, we also have:

$P(X_t \le x) = 1 - P(X_t > x)$

Using the equivalence of the two events that we described above, we can re-write the above as:

$P(X_t \le x) = 1 - P(N_{t+x} - N_t = 0)$

But

$P(N_{t+x} - N_t = 0) = P(N_x = 0)$

Using the Poisson pmf, where $\lambda$ is the average number of arrivals per unit of time and $x$ is a quantity of time units, the above simplifies to:

$P(N_{t+x} - N_t = 0) = \frac{(\lambda x)^0}{0!}e^{-\lambda x}$

i.e.

$P(N_{t+x} - N_t = 0) = e^{-\lambda x}$

Substituting into our original equation, we have:

$P(X_t \le x) = 1 - e^{-\lambda x}$

The above is the cdf of an exponential distribution.
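A quick simulation check of this result (not needed for the derivation, just a sanity check): generate exponential inter-arrival times with rate lambda, and verify that the counts per unit of time behave like a Poisson(lambda) variable, with mean and variance both close to lambda.

import numpy as np

lam = 3.0
rng = np.random.default_rng(0)
gaps = rng.exponential(scale=1 / lam, size=200_000)   # waiting times between arrivals
arrival_times = np.cumsum(gaps)

# Count arrivals in each unit-length interval
horizon = int(arrival_times[-1])
counts = np.histogram(arrival_times, bins=np.arange(horizon + 1))[0]

print("mean waiting time:", gaps.mean(), "(expect", 1 / lam, ")")
print("mean count per unit time:", counts.mean(), "(expect", lam, ")")
print("variance of counts:", counts.var(), "(expect", lam, ")")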
{ "source": [ "https://stats.stackexchange.com/questions/2092", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/862/" ] }
2,125
In particular, I am referring to the Pearson product-moment correlation coefficient.
What's the difference between the correlation between $X$ and $Y$ and a linear regression predicting $Y$ from $X$?

First, some similarities:

- The standardised regression coefficient is the same as Pearson's correlation coefficient.
- The square of Pearson's correlation coefficient is the same as the $R^2$ in simple linear regression.
- The sign of the unstandardized coefficient (i.e., whether it is positive or negative) will be the same as the sign of the correlation coefficient.
- Neither simple linear regression nor correlation answers questions of causality directly. This point is important, because I've met people who think that simple regression can magically allow an inference that $X$ causes $Y$.
- Standard tests of the null hypothesis (i.e., "correlation = 0" or, equivalently, "slope = 0" for the regression in either order), such as those carried out by lm and cor.test in R, will yield identical p-values.

Second, some differences:

- The regression equation (i.e., $a + bX$) can be used to make predictions of $Y$ based on values of $X$.
- While correlation typically refers to the linear relationship, it can refer to other forms of dependence, such as polynomial or truly nonlinear relationships.
- While correlation typically refers to Pearson's correlation coefficient, there are other types of correlation, such as Spearman's.
- The correlation between $X$ and $Y$ is the same as the correlation between $Y$ and $X$. In contrast, the unstandardized coefficient typically changes when moving from a model predicting $Y$ from $X$ to a model predicting $X$ from $Y$.
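A small numerical illustration of the first two similarities, on made-up data: the standardised slope equals Pearson's r, and the squared correlation equals the regression R².

import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(size=500)

r = np.corrcoef(x, y)[0, 1]
coefs = np.polyfit(x, y, 1)                    # [slope, intercept]
std_slope = coefs[0] * np.std(x) / np.std(y)   # standardised coefficient

residuals = y - np.polyval(coefs, x)
r_squared = 1 - residuals.var() / y.var()

print(f"r = {r:.4f}, standardised slope = {std_slope:.4f}")   # equal
print(f"r^2 = {r**2:.4f}, R^2 = {r_squared:.4f}")             # equal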
{ "source": [ "https://stats.stackexchange.com/questions/2125", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74/" ] }
2,149
A particle filter and a Kalman filter are both recursive Bayesian estimators. I often encounter Kalman filters in my field, but very rarely see particle filters used. When would one be used over the other?
From Dan Simon's "Optimal State Estimation":

In a linear system with Gaussian noise, the Kalman filter is optimal. In a system that is nonlinear, the Kalman filter can be used for state estimation, but the particle filter may give better results at the price of additional computational effort. In a system that has non-Gaussian noise, the Kalman filter is the optimal linear filter, but again the particle filter may perform better. The unscented Kalman filter (UKF) provides a balance between the low computational effort of the Kalman filter and the high performance of the particle filter.

The particle filter has some similarities with the UKF in that it transforms a set of points via known nonlinear equations and combines the results to estimate the mean and covariance of the state. However, in the particle filter the points are chosen randomly, whereas in the UKF the points are chosen on the basis of a specific algorithm*. Because of this, the number of points used in a particle filter generally needs to be much greater than the number of points in a UKF. Another difference between the two filters is that the estimation error in a UKF does not converge to zero in any sense, but the estimation error in a particle filter does converge to zero as the number of particles (and hence the computational effort) approaches infinity.

*The unscented transformation is a method for calculating the statistics of a random variable which undergoes a nonlinear transformation and uses the intuition (which also applies to the particle filter) that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. See also this as an example of how the points are chosen in the UKF.
{ "source": [ "https://stats.stackexchange.com/questions/2149", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/5/" ] }
2,151
In other words, instead of having a two-class problem, I am dealing with four classes and would still like to assess performance using AUC.
It seems you are looking for multi-class ROC analysis, which is a kind of multi-objective optimization covered in a tutorial at ICML'04. As in several multi-class problems, the idea is generally to carry out pairwise comparison (one class vs. all other classes, one class vs. another class; see (1) or the Elements of Statistical Learning), and there is a recent paper by Landgrebe and Duin on that topic: Approximating the multiclass ROC by pairwise analysis, Pattern Recognition Letters 2007 28: 1747-1758.

Now, for visualization purposes, I've seen some papers some time ago, most of them turning around the volume under the ROC surface (VUS) or the cobweb diagram. I don't know, however, if there exists an R implementation of these methods, although I think the stars() function might be used for a cobweb plot. I just ran across a Matlab toolbox that seems to offer multi-class ROC analysis, PRSD Studio.

Other papers that may also be useful as a first start for visualization/computation:

- Visualisation of multi-class ROC surfaces
- A simplified extension of the Area under the ROC to the multiclass domain

References:

1. Allwein, E. L., Schapire, R. E. and Singer, Y. (2000). Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141.
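The original answer predates it, but more recent versions of scikit-learn expose the pairwise averaging ideas described above directly through roc_auc_score's multi_class argument ('ovr' averages one-vs-rest AUCs, 'ovo' averages over class pairs). Treat this as a hedged pointer, shown on a toy classifier over simulated data, rather than part of the original answer; check the scikit-learn documentation for the exact options available in your version.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_classes=4, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                # class probabilities, shape (n, 4)

print("one-vs-rest macro AUC:", roc_auc_score(y_te, proba, multi_class="ovr"))
print("one-vs-one  macro AUC:", roc_auc_score(y_te, proba, multi_class="ovo"))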
{ "source": [ "https://stats.stackexchange.com/questions/2151", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }