source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
2,179 | How to obtain a variable (attribute) importance using SVM? | If you use an L1 penalty on the weight vector, it does automatic feature selection, as the weights corresponding to irrelevant attributes are automatically set to zero. See this paper. The (absolute) magnitude of each non-zero weight gives an idea of the importance of the corresponding attribute. Also look at this paper, which uses criteria derived from SVMs to guide attribute selection. | {
"source": [
"https://stats.stackexchange.com/questions/2179",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
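A minimal R sketch of the idea above (my own addition, not part of the original answer): for a linear SVM the primal weight vector can be recovered from the fitted model, and the absolute weights give a rough importance ranking. It uses the e1071 package and simulated toy data, and fits an ordinary (L2-regularized) linear SVM rather than the L1-penalized variant the answer cites.
library(e1071)
set.seed(1)
n <- 50
x <- cbind(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))            # x3 is pure noise
y <- factor(ifelse(x[, "x1"] + 0.5 * x[, "x2"] + rnorm(n, sd = 0.5) > 0, "A", "B"))
fit <- svm(x, y, kernel = "linear", scale = FALSE)
w <- t(fit$coefs) %*% fit$SV           # primal weights: w = sum_i alpha_i y_i x_i
sort(abs(drop(w)), decreasing = TRUE)  # larger |w_j| suggests a more important attribute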
2,213 | What is the difference between a feed-forward and recurrent neural network? Why would you use one over the other? Do other network topologies exist? | Feed-forward ANNs allow signals to travel one way only: from input to output. There are no feedback loops; i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down. Feedback (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedforward neural networks are ideally suited to modeling relationships between a set of predictor or input variables and one or more response or output variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. The multilayer feedforward neural networks, also called multi-layer perceptrons (MLP), are the most widely studied and used neural network model in practice. As an example of a feedback network, I can recall Hopfield’s network. The main use of Hopfield’s network is as associative memory. An associative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the associative memory is to recall the corresponding stored pattern, and then produce a clear version of the pattern at the output. Hopfield networks are typically used for problems with binary pattern vectors, where the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded as the weights of the network. Kohonen’s self-organizing maps (SOM) represent another neural network type that is markedly different from the feedforward multilayer networks. Unlike training in the feedforward MLP, the SOM training or learning is often called unsupervised because there are no known target outputs associated with each input pattern in SOM; during the training process, the SOM processes the input patterns and learns to cluster or segment the data through adjustment of weights (which makes it an important neural network model for dimension reduction and data clustering). A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be visually determined based on the output distribution generated by the training process. With only input variables in the training sample, SOM aims to learn or discover the underlying structure of the data. (The diagrams are from Dana Vrajitoru's C463 / B551 Artificial Intelligence web site.) | {
"source": [
"https://stats.stackexchange.com/questions/2213",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
]
} |
2,230 | I've never really grokked the difference between these two measures of convergence. (Or, in fact, any of the different types of convergence, but I mention these two in particular because of the Weak and Strong Laws of Large Numbers.) Sure, I can quote the definition of each and give an example where they differ, but I still don't quite get it. What's a good way to understand the difference? Why is the difference important? Is there a particularly memorable example where they differ? | From my point of view the difference is important, but largely for philosophical reasons. Assume you have some device that improves with time. So, every time you use the device the probability of it failing is less than before. Convergence in probability says that the chance of failure goes to zero as the number of usages goes to infinity. So, after using the device a large number of times, you can be very confident of it working correctly; it still might fail, it's just very unlikely. Convergence almost surely is a bit stronger. It says that the total number of failures is finite. That is, if you count the number of failures as the number of usages goes to infinity, you will get a finite number. The impact of this is as follows: As you use the device more and more, you will, after some finite number of usages, exhaust all failures. From then on the device will work perfectly. As Srikant points out, you don't actually know when you have exhausted all failures, so from a purely practical point of view, there is not much difference between the two modes of convergence. However, personally I am very glad that, for example, the strong law of large numbers exists, as opposed to just the weak law. Because now, a scientific experiment to obtain, say, the speed of light, is justified in taking averages. At least in theory, after obtaining enough data, you can get arbitrarily close to the true speed of light. There won't be any failures (however improbable) in the averaging process. Let me clarify what I mean by ''failures (however improbable) in the averaging process''. Choose some $\delta > 0$ arbitrarily small. You obtain $n$ estimates $X_1,X_2,\dots,X_n$ of the speed of light (or some other quantity) that has some 'true' value, say $\mu$. You compute the average
$$S_n = \frac{1}{n}\sum_{k=1}^n X_k.$$
As we obtain more data ($n$ increases) we can compute $S_n$ for each $n = 1,2,\dots$. The weak law says (under some assumptions about the $X_n$) that the probability
$$P(|S_n - \mu| > \delta) \rightarrow 0$$
as $n$ goes to $\infty$. The strong law says that the number of times that $|S_n - \mu|$ is larger than $\delta$ is finite (with probability 1). That is, if we define the indicator function $I(|S_n - \mu| > \delta)$ that returns one when $|S_n - \mu| > \delta$ and zero otherwise, then
$$\sum_{n=1}^{\infty}I(|S_n - \mu| > \delta)$$
converges. This gives you considerable confidence in the value of $S_n$, because it guarantees (i.e. with probability 1) the existence of some finite $n_0$ such that $|S_n - \mu| < \delta$ for all $n > n_0$ (i.e. the average never fails for $n > n_0$). Note that the weak law gives no such guarantee. | {
"source": [
"https://stats.stackexchange.com/questions/2230",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1106/"
]
} |
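To make the running-average picture concrete, here is a small R simulation (my own addition, not part of the answer): it tracks $S_n$ for Bernoulli(0.5) observations and counts how often the average leaves a $\delta$-band around $\mu$. A finite simulation cannot, of course, verify almost-sure convergence; it only illustrates that the exceedance count stops growing.
set.seed(42)
mu <- 0.5; delta <- 0.05
x <- rbinom(1e4, 1, mu)                 # observations X_1, ..., X_n
S <- cumsum(x) / seq_along(x)           # running averages S_n
sum(abs(S - mu) > delta)                # how many n have |S_n - mu| > delta (strong-law flavour: finite)
mean(abs(tail(S, 1000) - mu) > delta)   # late exceedance frequency (weak-law flavour: small)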
2,234 | I would like as many algorithms as possible that perform the same task as logistic regression. That is, algorithms/models that can give a prediction for a binary response (Y) from some explanatory variable (X). I would be glad if, after you name the algorithm, you would also show how to implement it in R. Here is code that can be updated with other models: set.seed(55)
n <- 100
x <- c(rnorm(n), 1+rnorm(n))
y <- c(rep(0,n), rep(1,n))
r <- glm(y~x, family=binomial)
plot(y~x)
abline(lm(y~x), col='red', lty=2)
xx <- seq(min(x), max(x), length=100)
yy <- predict(r, data.frame(x=xx), type='response')
lines(xx, yy, col='blue', lwd=5, lty=2)
title(main='Logistic regression with the "glm" function') | Popular right now are randomForest and gbm (called MART or Gradient Boosting in machine learning literature), rpart for simple trees. Also popular is bayesglm, which uses MAP with priors for regularization. install.packages(c("randomForest", "gbm", "rpart", "arm"))
library(randomForest)
library(gbm)
library(rpart)
library(arm)
r1 <- randomForest(y~x)   # with numeric 0/1 y this fits a regression forest; use factor(y) for classification
r2 <- gbm(y~x, distribution="bernoulli")   # make the 0/1 loss explicit
r3 <- rpart(y~x)
r4 <- bayesglm(y ~ x, family=binomial)
yy1 <- predict(r1, data.frame(x=xx))
yy2 <- predict(r2, data.frame(x=xx), n.trees=100, type="response")   # supply n.trees (100 fitted by default); "response" returns probabilities
yy3 <- predict(r3, data.frame(x=xx))
yy4 <- predict(r4, data.frame(x=xx), type="response") | {
"source": [
"https://stats.stackexchange.com/questions/2234",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
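As a possible continuation (my own sketch, not part of the answer): the fitted curves can be overlaid on the question's scatter plot for a visual comparison. It assumes the question's objects (x, y, xx) and the fits above are still in the workspace.
plot(y ~ x)
lines(xx, yy1, col = "darkgreen")       # randomForest
lines(xx, yy2, col = "orange")          # gbm
lines(xx, yy3, col = "purple")          # rpart
lines(xx, yy4, col = "blue", lwd = 2)   # bayesglm
legend("topleft", legend = c("randomForest", "gbm", "rpart", "bayesglm"),
       col = c("darkgreen", "orange", "purple", "blue"), lty = 1)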
2,245 | In his 1986 paper "Statistics and Causal Inference", Paul Holland raised one of the most fundamental questions in statistics: What can a statistical model say about
causation? This led to his motto: NO CAUSATION WITHOUT MANIPULATION, which emphasized the importance of restrictions around experiments that consider causation.
Andrew Gelman makes a similar point: "To find out what happens when you change something, it is necessary to change it."...There are things you learn from perturbing a system that you'll never find out from any amount of passive observation. His ideas are summarized in this article. What considerations should be made when making a causal inference from a statistical model? | This is a broad question, but given the Box, Hunter and Hunter quote is true I think what it comes down to is: (1) the quality of the experimental design: randomization, sample sizes, control of confounders, ...; (2) the quality of the implementation of the design: adherence to protocol, measurement error, data handling, ...; (3) the quality of the model to accurately reflect the design: blocking structures are accurately represented, proper degrees of freedom are associated with effects, estimators are unbiased, ... At the risk of stating the obvious I'll try to hit on the key points of each: Design is a large sub-field of statistics, but in its most basic form I think it comes down to the fact that when making causal inference we ideally start with identical units that are monitored in identical environments other than being assigned to a treatment. Any systematic differences between groups after assignment are then logically attributable to the treatment (we can infer cause). But the world isn't that nice: units differ prior to treatment, and environments during experiments are not perfectly controlled. So we "control what we can and randomize what we can't", which helps to ensure that there won't be systematic bias due to the confounders that we controlled or randomized. One problem is that experiments tend to be difficult (to impossible) and expensive, and a large variety of designs have been developed to efficiently extract as much information as possible in as carefully controlled a setting as possible, given the costs. Some of these are quite rigorous (e.g. in medicine the double-blind, randomized, placebo-controlled trial) and others less so (e.g. various forms of 'quasi-experiments'). Implementation is also a big issue and one that statisticians generally don't think about... though we should. In applied statistical work I can recall incidences where 'effects' found in the data were spurious results of inconsistency of data collection or handling. I also wonder how often information on true causal effects of interest is lost due to these issues (I believe students in the applied sciences generally have little-to-no training about ways that data can become corrupted - but I'm getting off topic here...). Modeling is another large technical subject, and another necessary step in objective causal inference. To a certain degree this is taken care of because the design crowd develop designs and models together (since inference from a model is the goal, the attributes of the estimators drive design). But this only gets us so far because in the 'real world' we end up analysing experimental data from non-textbook designs, and then we have to think hard about things like the appropriate controls, how they should enter the model, what the associated degrees of freedom should be, whether assumptions are met and, if not, how to adjust for violations, and how robust the estimators are to any remaining violations and... Anyway, hopefully some of the above helps in thinking about considerations in making causal inference from a model. Did I forget anything big? | {
"source": [
"https://stats.stackexchange.com/questions/2245",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5/"
]
} |
2,272 | Joris and Srikant's exchange here got me wondering (again) if my internal explanations for the difference between confidence intervals and credible intervals were the correct ones. How would you explain the difference? | I agree completely with Srikant's explanation. To give a more heuristic spin on it: Classical approaches generally posit that the world is one way (e.g., a parameter has one particular true value), and try to conduct experiments whose resulting conclusion -- no matter the true value of the parameter -- will be correct with at least some minimum probability. As a result, to express uncertainty in our knowledge after an experiment, the frequentist approach uses a "confidence interval" -- a range of values designed to include the true value of the parameter with some minimum probability, say 95%. A frequentist will design the experiment and 95% confidence interval procedure so that out of every 100 experiments run start to finish, at least 95 of the resulting confidence intervals will be expected to include the true value of the parameter. The other 5 might be slightly wrong, or they might be complete nonsense -- formally speaking that's ok as far as the approach is concerned, as long as 95 out of 100 inferences are correct. (Of course we would prefer them to be slightly wrong, not total nonsense.) Bayesian approaches formulate the problem differently. Instead of saying the parameter simply has one (unknown) true value, a Bayesian method says the parameter's value is fixed but has been chosen from some probability distribution -- known as the prior probability distribution. (Another way to say that is that before taking any measurements, the Bayesian assigns a probability distribution, which they call a belief state, on what the true value of the parameter happens to be.) This "prior" might be known (imagine trying to estimate the size of a truck, if we know the overall distribution of truck sizes from the DMV) or it might be an assumption drawn out of thin air. The Bayesian inference is simpler -- we collect some data, and then calculate the probability of different values of the parameter GIVEN the data. This new probability distribution is called the "a posteriori probability" or simply the "posterior." Bayesian approaches can summarize their uncertainty by giving a range of values on the posterior probability distribution that includes 95% of the probability -- this is called a "95% credibility interval." A Bayesian partisan might criticize the frequentist confidence interval like this: "So what if 95 out of 100 experiments yield a confidence interval that includes the true value? I don't care about 99 experiments I DIDN'T DO; I care about this experiment I DID DO. Your rule allows 5 out of the 100 to be complete nonsense [negative values, impossible values] as long as the other 95 are correct; that's ridiculous." A frequentist die-hard might criticize the Bayesian credibility interval like this: "So what if 95% of the posterior probability is included in this range? What if the true value is, say, 0.37? If it is, then your method, run start to finish, will be WRONG 75% of the time. Your response is, 'Oh well, that's ok because according to the prior it's very rare that the value is 0.37,' and that may be so, but I want a method that works for ANY possible value of the parameter. I don't care about 99 values of the parameter that IT DOESN'T HAVE; I care about the one true value IT DOES HAVE. 
Oh also, by the way, your answers are only correct if the prior is correct. If you just pull it out of thin air because it feels right, you can be way off." In a sense both of these partisans are correct in their criticisms of each other's methods, but I would urge you to think mathematically about the distinction -- as Srikant explains. Here's an extended example from that talk that shows the difference precisely in a discrete example. When I was a child my mother used to occasionally surprise me by ordering a jar of chocolate-chip cookies to be delivered by mail. The delivery company stocked four different kinds of cookie jars -- type A, type B, type C, and type D, and they were all on the same truck and you were never sure what type you would get. Each jar had exactly 100 cookies, but the feature that distinguished the different cookie jars was their respective distributions of chocolate chips per cookie. If you reached into a jar and took out a single cookie uniformly at random, these are the probability distributions you would get on the number of chips: A type-A cookie jar, for example, has 70 cookies with two chips each, and no cookies with four chips or more! A type-D cookie jar has 70 cookies with one chip each. Notice how each vertical column is a probability mass function -- the conditional probability of the number of chips you'd get, given that the jar = A, or B, or C, or D, and each column sums to 100. I used to love to play a game as soon as the deliveryman dropped off my new cookie jar. I'd pull one single cookie at random from the jar, count the chips on the cookie, and try to express my uncertainty -- at the 70% level -- of which jars it could be. Thus it's the identity of the jar (A, B, C or D) that is the value of the parameter being estimated. The number of chips (0, 1, 2, 3 or 4) is the outcome or the observation or the sample. Originally I played this game using a frequentist, 70% confidence interval. Such an interval needs to make sure that no matter the true value of the parameter, meaning no matter which cookie jar I got, the interval would cover that true value with at least 70% probability. An interval, of course, is a function that relates an outcome (a row) to a set of values of the parameter (a set of columns). But to construct the confidence interval and guarantee 70% coverage, we need to work "vertically" -- looking at each column in turn, and making sure that 70% of the probability mass function is covered so that 70% of the time, that column's identity will be part of the interval that results. Remember that it's the vertical columns that form a p.m.f. So after doing that procedure, I ended up with these intervals: For example, if the number of chips on the cookie I draw is 1, my confidence interval will be {B,C,D}. If the number is 4, my confidence interval will be {B,C}. Notice that since each column sums to 70% or greater, then no matter which column we are truly in (no matter which jar the deliveryman dropped off), the interval resulting from this procedure will include the correct jar with at least 70% probability. Notice also that the procedure I followed in constructing the intervals had some discretion. In the column for type-B, I could have just as easily made sure that the intervals that included B would be 0,1,2,3 instead of 1,2,3,4. That would have resulted in 75% coverage for type-B jars (12+19+24+20), still meeting the lower bound of 70%. My sister Bayesia thought this approach was crazy, though. 
"You have to consider the deliveryman as part of the system," she said. "Let's treat the identity of the jar as a random variable itself, and let's assume that the deliveryman chooses among them uniformly -- meaning he has all four on his truck, and when he gets to our house he picks one at random, each with uniform probability." "With that assumption, now let's look at the joint probabilities of the whole event -- the jar type and the number of chips you draw from your first cookie," she said, drawing the following table: Notice that the whole table is now a probability mass function -- meaning the whole table sums to 100%. "Ok," I said, "where are you headed with this?" "You've been looking at the conditional probability of the number of chips, given the jar," said Bayesia. "That's all wrong! What you really care about is the conditional probability of which jar it is, given the number of chips on the cookie! Your 70% interval should simply include the list of jars that, in total, have 70% probability of being the true jar. Isn't that a lot simpler and more intuitive?" "Sure, but how do we calculate that?" I asked. "Let's say we know that you got 3 chips. Then we can ignore all the other rows in the table, and simply treat that row as a probability mass function. We'll need to scale up the probabilities proportionately so each row sums to 100, though." She did: "Notice how each row is now a p.m.f., and sums to 100%. We've flipped the conditional probability from what you started with -- now it's the probability of the man having dropped off a certain jar, given the number of chips on the first cookie." "Interesting," I said. "So now we just circle enough jars in each row to get up to 70% probability?" We did just that, making these credibility intervals: Each interval includes a set of jars that, a posteriori, sum to 70% probability of being the true jar. "Well, hang on," I said. "I'm not convinced. Let's put the two kinds of intervals side-by-side and compare them for coverage and, assuming that the deliveryman picks each kind of jar with equal probability, credibility." Here they are: Confidence intervals: Credibility intervals: "See how crazy your confidence intervals are?" said Bayesia. "You don't even have a sensible answer when you draw a cookie with zero chips! You just say it's the empty interval. But that's obviously wrong -- it has to be one of the four types of jars. How can you live with yourself, stating an interval at the end of the day when you know the interval is wrong? And ditto when you pull a cookie with 3 chips -- your interval is only correct 41% of the time. Calling this a '70%' confidence interval is bullshit." "Well, hey," I replied. "It's correct 70% of the time, no matter which jar the deliveryman dropped off. That's a lot more than you can say about your credibility intervals. What if the jar is type B? Then your interval will be wrong 80% of the time, and only correct 20% of the time!" "This seems like a big problem," I continued, "because your mistakes will be correlated with the type of jar. If you send out 100 'Bayesian' robots to assess what type of jar you have, each robot sampling one cookie, you're telling me that on type-B days, you will expect 80 of the robots to get the wrong answer, each having >73% belief in its incorrect conclusion! That's troublesome, especially if you want most of the robots to agree on the right answer." "PLUS we had to make this assumption that the deliveryman behaves uniformly and selects each type of jar at random," I said. 
"Where did that come from? What if it's wrong? You haven't talked to him; you haven't interviewed him. Yet all your statements of a posteriori probability rest on this statement about his behavior. I didn't have to make any such assumptions, and my interval meets its criterion even in the worst case." "It's true that my credibility interval does perform poorly on type-B jars," Bayesia said. "But so what? Type B jars happen only 25% of the time. It's balanced out by my good coverage of type A, C, and D jars. And I never publish nonsense." "It's true that my confidence interval does perform poorly when I've drawn a cookie with zero chips," I said. "But so what? Chipless cookies happen, at most, 27% of the time in the worst case (a type-D jar). I can afford to give nonsense for this outcome because NO jar will result in a wrong answer more than 30% of the time." "The column sums matter," I said. "The row sums matter," Bayesia said. "I can see we're at an impasse," I said. "We're both correct in the mathematical statements we're making, but we disagree about the appropriate way to quantify uncertainty." "That's true," said my sister. "Want a cookie?" | {
"source": [
"https://stats.stackexchange.com/questions/2272",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/71/"
]
} |
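The "flip the conditional" step Bayesia describes takes only a few lines of R. This is my own sketch: the answer's actual tables are images that are not reproduced here, so the matrix below uses made-up stand-in counts (rows = jar types, columns = chips per cookie, each row summing to 100) purely to show the mechanics under a uniform prior on jars.
cond <- matrix(c(10, 15, 70,  5,  0,    # hypothetical P(chips | jar) x 100 -- NOT the answer's table
                 12, 19, 24, 20, 25,
                 20, 30, 30, 15,  5,
                 27, 70,  3,  0,  0),
               nrow = 4, byrow = TRUE,
               dimnames = list(jar = c("A", "B", "C", "D"), chips = 0:4))
prior <- rep(1/4, 4)                      # deliveryman picks each jar uniformly
joint <- cond / 100 * prior               # joint P(jar, chips)
post  <- joint[, "3"] / sum(joint[, "3"]) # posterior P(jar | chips = 3): renormalise that column
post[order(post, decreasing = TRUE)]      # include jars until the cumulative probability reaches 70%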
2,275 | I do not study statistics but engineering, but this is a statistics question, and I hope you can lead me to what I need to learn to solve this problem. I have this situation where I calculate probabilities of 1000's of things happening in like 30 days. If in 30 days I see what actually happened, how can I test to see how accurately I predicted? These calculations result in probabilities and in actual values (ft). What is the method for doing this?
Thanks,
CP | | {
"source": [
"https://stats.stackexchange.com/questions/2275",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1137/"
]
} |
2,306 | I am getting a bit confused about feature selection and machine learning
and I was wondering if you could help me out. I have a microarray dataset that is
classified into two groups and has 1000s of features. My aim is to get a small number of genes (my features) (10-20) in a signature that I will in theory be able to apply to
other datasets to optimally classify those samples. As I do not have that many samples (<100), I am not using a test and training set but using Leave-one-out cross-validation to help
determine the robustness. I have read that one should perform feature selection for each split of the samples, i.e.: (1) select one sample as the test set; (2) on the remaining samples, perform feature selection; (3) apply the machine learning algorithm to the remaining samples using the features selected; (4) test whether the test set is correctly classified; (5) go to 1. If you do this, you might get different genes each time, so how do you
get your "final" optimal gene classifier? i.e. what is step 6. What I mean by optimal is the collection of genes that any further studies
should use. For example, say I have a cancer/normal dataset and I want
to find the top 10 genes that will classify the tumour type according to
an SVM. I would like to know the set of genes plus SVM parameters that
could be used in further experiments to see if it could be used as a
diagnostic test. | Whether you use LOO or K-fold CV, you'll end up with different features, since the cross-validation iteration must be the outermost loop, as you said. You can think of some kind of voting scheme which would rate the n vectors of features you got from your LOO-CV (can't remember the paper, but it is worth checking the work of Harald Binder or Antoine Cornuéjols). In the absence of a new test sample, what is usually done is to re-apply the ML algorithm to the whole sample once you have found its optimal cross-validated parameters. But proceeding this way, you cannot ensure that there is no overfitting (since the sample was already used for model optimization). Or, alternatively, you can use embedded methods which provide you with a feature ranking through a measure of variable importance, e.g. like in Random Forests (RF). As cross-validation is included in RFs, you don't have to worry about the $n\ll p$ case or the curse of dimensionality. Here are nice papers on their applications in gene expression studies: Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods, in High-Dimensional Data Analysis in Cancer Research, Li, X. and Xu, R. (eds.), pp. 83-101, Springer. Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19): 2507-2517. Díaz-Uriarte, R., Alvarez de Andrés, S. (2006). Gene selection and classification of microarray data using random forest. BMC Bioinformatics, 7: 3. Diaz-Uriarte, R. (2007). GeneSrF and varSelRF: a web-based tool and R package for gene selection and classification using random forest. BMC Bioinformatics, 8: 328. Since you are talking about SVM, you can look for penalized SVM. | {
"source": [
"https://stats.stackexchange.com/questions/2306",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1150/"
]
} |
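A rough illustration of the Random Forest route suggested above (my own sketch, not code from the answer): fit an RF on all features, rank them by permutation importance, and read the OOB error off the fitted object. The simulated "expression matrix" below is a stand-in for a real dataset.
library(randomForest)
set.seed(123)
n <- 60; p <- 1000
x <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("gene", 1:p)))
y <- factor(rep(c("tumour", "normal"), each = n / 2))
x[y == "tumour", 1:5] <- x[y == "tumour", 1:5] + 2       # five truly informative genes
rf <- randomForest(x, y, importance = TRUE, ntree = 2000)
imp <- importance(rf, type = 1)                          # mean decrease in accuracy
head(sort(imp[, 1], decreasing = TRUE), 10)              # candidate gene signature
rf$err.rate[rf$ntree, "OOB"]                             # built-in OOB error estimate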
2,344 | I am using the random forest algorithm as a robust classifier of two groups in a microarray study with 1000s of features. What is the best way to present the random forest so that there is enough information to make it
reproducible in a paper? Is there a plot method in R to actually plot the tree, if there are a small number of features? Is the OOB estimate of error rate the best statistic to quote? | Regarding making it reproducible, the best way is to provide reproducible research (i.e. code and data) along with the paper. Make it available on your website, or on a hosting site (like github). Regarding visualization, Leo Breiman has done some interesting work on this (see his homepage, in particular the section on graphics). But if you're using R, then the randomForest package has some useful functions: library(randomForest)
data(mtcars)
mtcars.rf <- randomForest(mpg ~ ., data=mtcars, ntree=1000, keep.forest=FALSE,
                          importance=TRUE)
plot(mtcars.rf, log="y")
varImpPlot(mtcars.rf) And set.seed(1)
data(iris)
iris.rf <- randomForest(Species ~ ., iris, proximity=TRUE,
keep.forest=FALSE)
MDSplot(iris.rf, iris$Species) I'm not aware of a simple way to actually plot a tree, but you can use the getTree function to retrieve the tree and plot that separately. getTree(randomForest(iris[,-5], iris[,5], ntree=10), 3, labelVar=TRUE) The Strobl/Zeileis presentation on "Why and how to use random forest variable importance measures (and how you shouldn’t)" has examples of trees which must have been produced in this way. This blog post on tree models has some nice examples of CART tree plots which you can use for example. As @chl commented, a single tree isn't especially meaningful in this context, so short of using it to explain what a random forest is, I wouldn't include this in a paper. | {
"source": [
"https://stats.stackexchange.com/questions/2344",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1150/"
]
} |
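On the OOB question, a small addition of mine (not from the answer): for a classification forest such as iris.rf above, printing the fitted object reports the OOB estimate of the error rate together with the confusion matrix, which is one natural statistic to quote alongside the settings used (ntree, mtry).
print(iris.rf)                           # shows "OOB estimate of error rate" and the confusion matrix
iris.rf$err.rate[iris.rf$ntree, "OOB"]   # the same OOB error as a plain number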
2,356 | A recent question on the difference between confidence and credible intervals led me to start re-reading Edwin Jaynes' article on that topic: Jaynes, E. T., 1976. `Confidence Intervals vs Bayesian Intervals,' in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, W. L. Harper and C. A. Hooker (eds.), D. Reidel, Dordrecht, p. 175; (pdf) In the abstract, Jaynes writes: ...we exhibit the Bayesian and orthodox solutions to six common statistical problems involving confidence intervals (including significance tests based on the same reasoning). In every case, we find the situation is exactly the opposite, i.e. the Bayesian method is easier to apply and yields the same or better results. Indeed, the orthodox results are satisfactory only when they agree closely (or exactly) with the Bayesian results. No contrary example has yet been produced. (emphasis mine) The paper was published in 1976, so perhaps things have moved on. My question is, are there examples where the frequentist confidence interval is clearly superior to the Bayesian credible interval (as per the challenge implicitly made by Jaynes)? Examples based on incorrect prior assumptions are not acceptable as they say nothing about the internal consistency of the different approaches. | I said earlier that I would have a go at answering the question, so here goes... Jaynes was being a little naughty in his paper in that a frequentist confidence interval isn't defined as an interval where we might expect the true value of the statistic to lie with high (specified) probability, so it isn't unduly surprising that contradictions arise if they are interpreted as if they were. The problem is that this is often the way confidence intervals are used in practice, as an interval highly likely to contain the true value (given what we can infer from our sample of data) is what we often want. The key issue for me is that when a question is posed, it is best to have a direct answer to that question. Whether Bayesian credible intervals are worse than frequentist confidence intervals depends on what question was actually asked. If the question asked was: (a) "Give me an interval where the true value of the statistic lies with probability p", then it appears a frequentist cannot actually answer that question directly (and this introduces the kind of problems that Jaynes discusses in his paper), but a Bayesian can, which is why a Bayesian credible interval is superior to the frequentist confidence interval in the examples given by Jaynes. But this is only because it is the "wrong question" for the frequentist. (b) "Give me an interval where, were the experiment repeated a large number of times, the true value of the statistic would lie within p*100% of such intervals" then the frequentist answer is just what you want. The Bayesian may also be able to give a direct answer to this question (although it may not simply be the obvious credible interval). Whuber's comment on the question suggests this is the case. So essentially, it is a matter of correctly specifying the question and properly interpreting the answer. If you want to ask question (a) then use a Bayesian credible interval; if you want to ask question (b) then use a frequentist confidence interval. | {
"source": [
"https://stats.stackexchange.com/questions/2356",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/887/"
]
} |
2,358 | Are multiple and multivariate regression really different? What is a variate anyways? | Very quickly, I would say: 'multiple' applies to the number of predictors that enter the model (or equivalently the design matrix) with a single outcome (Y response), while 'multivariate' refers to a matrix of response vectors. I cannot remember the author who starts his introductory section on multivariate modeling with that consideration, but I think it is Brian Everitt in his textbook An R and S-Plus Companion to Multivariate Analysis. For a thorough discussion of this, I would suggest looking at his latest book, Multivariable Modeling and Multivariate Analysis for the Behavioral Sciences. For 'variate', I would say this is a common way to refer to any random variable that follows a known or hypothesized distribution, e.g. we speak of Gaussian variates $X_i$ as a series of observations drawn from a normal distribution (with parameters $\mu$ and $\sigma^2$). In probabilistic terms, we say that these are random realizations of $X$, with mathematical expectation $\mu$, and about 95% of them are expected to lie in the range $[\mu-2\sigma;\mu+2\sigma]$. | {
"source": [
"https://stats.stackexchange.com/questions/2358",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74/"
]
} |
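A quick R illustration of the distinction (my own addition): 'multiple' regression has several predictors and one response, while 'multivariate' regression has a matrix of responses, which R's lm() handles when given cbind() on the left-hand side.
set.seed(1)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- 1 + 2 * x1 - x2 + rnorm(100)           # a single response
y2 <- 0.5 * x1 + rnorm(100)                  # a second response
multiple     <- lm(y ~ x1 + x2)              # one outcome, several predictors
multivariate <- lm(cbind(y, y2) ~ x1 + x2)   # a matrix of outcomes (an "mlm" fit)
class(multivariate)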
2,374 | I used to analyse items from a psychometric point of view. But now I am trying to analyse other types of questions on motivation and other topics. These questions are all on Likert scales. My initial thought was to use factor analysis, because the questions are hypothesised to reflect some underlying dimensions. But is factor analysis appropriate? Is it necessary to validate each question regarding its dimensionality? Is there a problem with performing factor analysis on Likert items? Are there any good papers and methods on how to conduct factor analysis on Likert and other categorical items? | From what I've seen so far, FA is used for attitude items as it is for other kinds of rating scales. The problem arising from the metric used (that is, "are Likert scales really to be treated as numeric scales?" is a long-standing debate, but provided you check for the bell-shaped response distribution you may handle them as continuous measurements, otherwise check for non-linear FA models or optimal scaling) may be handled by polytomous IRT models, like the Graded Response, Rating Scale, or Partial Credit Model. The latter two may be used as a rough check of whether the threshold distances, as used in Likert-type items, are a characteristic of the response format (RSM) or of the particular item (PCM). Regarding your second point, it is known, for example, that response distributions in attitude or health surveys differ from one country to the other (e.g. Chinese people tend to highlight 'extreme' response patterns compared to those coming from Western countries, see e.g. Song, X.-Y. (2007) Analysis of multisample structural equation models with applications to Quality of Life data, in Handbook of Latent Variable and Related Models, Lee, S.-Y. (Ed.), pp 279-302, North-Holland). Some methods to handle such situations, off the top of my head: use of log-linear models (marginal approach) to highlight strong between-groups imbalance at the item level (coefficients are then interpreted as relative risks instead of odds); the multi-sample SEM method from Song cited above (I don't know whether they did further work on that approach, though). Now, the point is that most of these approaches focus on the item level (ceiling/floor effect, decreased reliability, bad item fit statistics, etc.), but when one is interested in how people deviate from what would be expected from an ideal set of observers/respondents, I think we must focus on person fit indices instead. Such $\chi^2$ statistics are readily available for IRT models, like INFIT or OUTFIT mean square, but generally they apply to the whole questionnaire. Moreover, since estimation of item parameters relies in part on person parameters (e.g., in the marginal likelihood framework, we assume a Gaussian distribution), the presence of outlying individuals may lead to potentially biased estimates and poor model fit. As proposed by Eid and Zickar (2007), combining a latent class model (to isolate groups of respondents, e.g. those always answering on the extreme categories vs. the others) and an IRT model (to estimate item parameters and person locations on the latent trait in both groups) appears to be a nice solution. Other modeling strategies are described in their paper (e.g. HYBRID model, see also Holden and Book, 2009). Likewise, unfolding models may be used to cope with response style, which is defined as a consistent and content-independent pattern of response categories (e.g. tendency to agree with all statements). 
In the social sciences or psychological literature, this is known as Extreme Response Style (ERS). References (1–3) may be useful to get an idea of how it manifests and how it may be measured. Here is a short list of papers that may help to make progress on this subject: Hamilton, D.L. (1968). Personality attributes associated with extreme response style. Psychological Bulletin, 69(3): 192–203. Greenleaf, E.A. (1992). Measuring extreme response style. Public Opinion Quarterly, 56(3): 328-351. de Jong, M.G., Steenkamp, J.-B.E.M., Fox, J.-P., and Baumgartner, H. (2008). Using Item Response Theory to Measure Extreme Response Style in Marketing Research: A Global Investigation. Journal of Marketing Research, 45(1): 104-115. Morren, M., Gelissen, J., and Vermunt, J.K. (2009). Dealing with extreme response style in cross-cultural research: A restricted latent class factor analysis approach. Moors, G. (2003). Diagnosing Response Style Behavior by Means of a Latent-Class Factor Approach. Socio-Demographic Correlates of Gender Role Attitudes and Perceptions of Ethnic Discrimination Reexamined. Quality & Quantity, 37(3): 277-302. Javaras, K.N. and Ripley, B.D. (2007). An “Unfolding” Latent Variable Model for Likert Attitude Data. JASA, 102(478): 454-463. Slides from Moustaki, Knott and Mavridis, Methods for detecting outliers in latent variable models. Eid, M. and Zickar, M.J. (2007). Detecting response styles and faking in personality and organizational assessments by Mixed Rasch Models. In von Davier, M. and Carstensen, C.H. (Eds.), Multivariate and Mixture Distribution Rasch Models, pp. 255–270, Springer. Holden, R.R. and Book, A.S. (2009). Using hybrid Rasch-latent class modeling to improve the detection of fakers on a personality inventory. Personality and Individual Differences, 47(3): 185-190. | {
"source": [
"https://stats.stackexchange.com/questions/2374",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1154/"
]
} |
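For a concrete starting point with polytomous IRT in R (my own sketch, not part of the answer): the ltm package fits the Graded Response Model mentioned above. The Science data shipped with ltm (Likert-type attitude items) and the four item names below are used purely as an example.
library(ltm)
data(Science)                                                    # Likert-type items on attitudes to science
fit <- grm(Science[c("Comfort", "Work", "Future", "Benefit")])   # graded response model on four items
summary(fit)
coef(fit)                                                        # discrimination and threshold parameters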
2,379 | Mathematics has its famous Millennium Problems (and, historically, Hilbert's 23), questions that helped to shape the direction of the field. I have little idea, though, what the Riemann Hypotheses and P vs. NP's of statistics would be. So, what are the overarching open questions in statistics? Edited to add: As an example of the general spirit (if not quite specificity) of answer I'm looking for, I found a "Hilbert's 23"-inspired lecture by David Donoho at a "Math Challenges of the 21st Century" conference: High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality So a potential answer could talk about big data and why it's important, the types of statistical challenges high-dimensional data poses, and methods that need to be developed or questions that need to be answered in order to help solve the problem. | A big question should involve key issues of statistical methodology or, because statistics is entirely about applications, it should concern how statistics is used with problems important to society. This characterization suggests the following should be included in any consideration of big problems: (1) How best to conduct drug trials. Currently, classical hypothesis testing requires many formal phases of study. In later (confirmatory) phases, the economic and ethical issues loom large. Can we do better? Do we have to put hundreds or thousands of sick people into control groups and keep them there until the end of a study, for example, or can we find better ways to identify treatments that really work and deliver them to members of the trial (and others) sooner? (2) Coping with scientific publication bias. Negative results are published much less simply because they just don't attain a magic p-value. All branches of science need to find better ways to bring scientifically important, not just statistically significant, results to light. (The multiple comparisons problem and coping with high-dimensional data are subcategories of this problem.) (3) Probing the limits of statistical methods and their interfaces with machine learning and machine cognition. Inevitable advances in computing technology will make true AI accessible in our lifetimes. How are we going to program artificial brains? What role might statistical thinking and statistical learning have in creating these advances? How can statisticians help in thinking about artificial cognition, artificial learning, in exploring their limitations, and making advances? (4) Developing better ways to analyze geospatial data. It is often claimed that the majority, or vast majority, of databases contain locational references. Soon many people and devices will be located in real time with GPS and cell phone technologies. Statistical methods to analyze and exploit spatial data are really just in their infancy (and seem to be relegated to GIS and spatial software which is typically used by non-statisticians). | {
"source": [
"https://stats.stackexchange.com/questions/2379",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1106/"
]
} |
2,391 | Suppose that I have three populations with four mutually exclusive characteristics. I take random samples from each population and construct a crosstab or frequency table for the characteristics that I am measuring. Am I correct in saying that: If I wanted to test whether there is any relationship between the populations and the characteristics (e.g. whether one population has a higher frequency of one of the characteristics), I should run a chi-squared test and see whether the result is significant. If the chi-squared test is significant, it only shows me that there is some relationship between the populations and characteristics, but not how they are related. Furthermore, not all of the characteristics need to be related to the population. For example, if the different populations have significantly different distributions of characteristics A and B, but not of C and D, then the chi-squared test may still come back as significant. If I wanted to measure whether or not a specific characteristic is affected by the population, then I can run a test for equal proportions (I have seen this called a z-test, or as prop.test() in R) on just that characteristic. In other words, is it appropriate to use the prop.test() to more accurately determine the nature of a relationship between two sets of categories when the chi-squared test says that there is a significant relationship? | Very short answer: The chi-squared test (chisq.test() in R) compares the observed frequencies in each category of a contingency table with the expected frequencies (computed as the product of the marginal frequencies). It is used to determine whether the deviations between the observed and the expected counts are too large to be attributed to chance. Departure from independence is easily checked by inspecting residuals (try ?mosaicplot or ?assocplot, but also look at the vcd package). Use fisher.test() for an exact test (relying on the hypergeometric distribution). The prop.test() function in R allows you to test whether proportions are comparable between groups or do not differ from theoretical probabilities. It is referred to as a $z$-test because the test statistic looks like this: $$
z=\frac{\hat p_1-\hat p_2}{\sqrt{\hat p \left(1-\hat p \right) \left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}
$$ where $\hat p_1$ and $\hat p_2$ are the two observed proportions, $\hat p=(x_1+x_2)/(n_1+n_2)$ is the pooled proportion ($x_1$ and $x_2$ being the success counts), and the indices $(1,2)$ refer to the first and second line of your table.
In a two-way contingency table where $H_0:\; p_1=p_2$, this should yield comparable results to the ordinary $\chi^2$ test: > tab <- matrix(c(100, 80, 20, 10), ncol = 2)
> chisq.test(tab)
Pearson's Chi-squared test with Yates' continuity correction
data: tab
X-squared = 0.8823, df = 1, p-value = 0.3476
> prop.test(tab)
2-sample test for equality of proportions with continuity correction
data: tab
X-squared = 0.8823, df = 1, p-value = 0.3476
alternative hypothesis: two.sided
95 percent confidence interval:
-0.15834617 0.04723506
sample estimates:
prop 1 prop 2
0.8333333 0.8888889 For analysis of discrete data with R, I highly recommend R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002) , from Laura Thompson. | {
"source": [
"https://stats.stackexchange.com/questions/2391",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1195/"
]
} |
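A short R sketch complementing the answer above: it re-uses the same 2 x 2 table and shows the residual inspection (mosaicplot/assocplot) and the exact test (fisher.test) that the answer mentions; everything below is base R. tab <- matrix(c(100, 80, 20, 10), ncol = 2)
chisq.test(tab)$stdres        # standardized residuals: which cells drive any departure?
mosaicplot(tab, shade = TRUE) # shading encodes the Pearson residuals
assocplot(tab)                # Cohen-Friendly association plot
fisher.test(tab)              # exact test based on the hypergeometric distribution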
2,457 | I would just like someone to confirm my understanding or if I'm missing something. The definition of a markov process says the next step depends on the current state only and no past states. So, let's say we had a state space of a,b,c,d and we go from a->b->c->d. That means that the transition to d could only depend on the fact that we were in c. However, is it true that you could just make the model more complex and kind of "get around" this limitation? In other words, if your state space were now aa, ab, ac, ad, ba, bb, bc, bd, ca, cb, cc, cd, da, db, dc, dd, meaning that your new state space becomes the previous state combined with the current state, then the above transition would be *a->ab->bc->cd and so the transition to cd (equivalent in the previous model to d) is now "dependent" on a state which, if modeled differently, is a previous state (I refer to it as a sub-state below). Am I correct in that one can make it "depend on previous states (sub-state)" (I know technically it doesn't in the new model since the sub-state is no longer a real state) maintain the markov property by expanding the state space as I did? So, one could in effect create a markov process that could depend on any number of previous sub-states. | Technically, both the processes you describe are markov chains. The difference is that the first one is a first order markov chain whereas the second one is a second order markov chain. And yes, you can transform a second order markov chain to a first order markov chain by a suitable change in state space definition. Let me explain via an example. Suppose that we want to model the weather as a stochastic process and suppose that on any given day the weather can be rainy, sunny or cloudy. Let $W_t$ be the weather in any particular day and let us denote the possible states by the symbols $R$ (for rainy), $S$ for (sunny) and $C$ (for cloudy). First Order Markov Chain $P(W_t = w | W_{t-1}, W_{t-2},W_{t-3} ..) = P(W_t = w | W_{t-1})$ Second Order Markov Chain $P(W_t = w | W_{t-1}, W_{t-2},W_{t-3} ..) = P(W_t = w | W_{t-1},W_{t-2})$ The second order markov chain can be transformed into a first order markov chain be re-defining the state space as follows. Define: $Z_{t-1,t}$ as the weather on two consecutive days. In other words, the state space can take one of the following values: $RR$, $RC$, $RS$, $CR$, $CC$, $CS$, $SR$, $SC$ and $SS$. With this re-defined state space we have the following: $P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1}, Z_{t-3,t-2}, ..) = P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1})$ The above is clearly a first order markov chain on the re-defined state space. The one difference from the second order markov chain is that your redefined markov chain needs to be specified with two initial starting states i.e., the chain must be started with some assumptions about the weather on day 1 and on day 2. | {
"source": [
"https://stats.stackexchange.com/questions/2457",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1208/"
]
} |
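A minimal R sketch of the pair-state construction described in the answer above, for the weather example with states R, S and C. The transition matrix P2 is made up (uniform rows, just to keep the sketch short); any row-stochastic 9 x 3 matrix would do. set.seed(1)
states <- c("R", "S", "C")                               # rainy, sunny, cloudy
pair_states <- as.vector(outer(states, states, paste0))  # "RR", "SR", ..., "CC"
# P(W_t | W_{t-2}, W_{t-1}): one row per pair of previous days
P2 <- matrix(1/3, nrow = 9, ncol = 3, dimnames = list(pair_states, states))
w <- c("R", "S")                                         # the two required starting states
for (t in 3:15) w[t] <- sample(states, 1, prob = P2[paste0(w[t - 2], w[t - 1]), ])
# The re-defined first-order chain lives on consecutive pairs Z_{t-1,t}
z <- paste0(head(w, -1), tail(w, -1))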
2,476 | To cluster (text) documents you need a way of measuring similarity between pairs of documents. Two alternatives are: Compare documents as term vectors using Cosine Similarity - and TF/IDF as the weightings for terms. Compare each documents probability distribution using f-divergence e.g. Kullback-Leibler divergence Is there any intuitive reason to prefer one method to the other (assuming average document sizes of 100 terms)? | For text documents, the feature vectors can be very high dimensional and sparse under any of the standard representations (bag of words or TF-IDF etc). Measuring distances directly under such a representation may not be reliable since it is a known fact that in very high dimensions, distance between any two points starts to look the same. One way to deal with this is to reduce the data dimensionality by using PCA or LSA ( Latent Semantic Analysis ; also known as Latent Semantic Indexing ) and then measure the distances in the new space. Using something like LSA over PCA is advantageous since it can give a meaningful representation in terms of "semantic concepts", apart from measuring distances in a lower dimensional space. Comparing documents based on the probability distributions is usually done by first computing the topic distribution of each document (using something like Latent Dirichlet Allocation ), and then computing some sort of divergence (e.g., KL divergence) between the topic distributions of pair of documents. In a way, it's actually kind of similar to doing LSA first and then measuring distances in the LSA space using KL-divergence between the vectors (instead of cosine similarity). KL-divergence is a distance measure for comparing distributions so it may be preferable if the document representation is in terms of some distribution (which is often actually the case -- e.g., documents represented as distribution over topics, as in LDA). Also note that under such a representation, the entries in the feature vector would sum to one (since you are basically treating the document as a distribution over topics or semantic concepts). Also see a related thread here . | {
"source": [
"https://stats.stackexchange.com/questions/2476",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1212/"
]
} |
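A tiny numeric illustration of the two similarity notions discussed above, on two made-up term-count vectors; real documents would be far longer and sparser, and the 0.5 smoothing constant is an arbitrary choice to keep the KL divergence finite. d1 <- c(2, 0, 1, 3); d2 <- c(1, 1, 0, 4)     # made-up term counts for two short documents
cosine <- sum(d1 * d2) / sqrt(sum(d1^2) * sum(d2^2))
p <- (d1 + 0.5) / sum(d1 + 0.5)              # smoothed term distributions
q <- (d2 + 0.5) / sum(d2 + 0.5)
kl_pq  <- sum(p * log(p / q))                # KL(p || q); note it is not symmetric
kl_sym <- kl_pq + sum(q * log(q / p))        # a simple symmetrised version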
2,492 | A former colleague once argued to me as follows: We usually apply normality tests to the results of processes that,
under the null, generate random variables that are only asymptotically or nearly normal (with the 'asymptotically' part dependent on some quantity which we cannot make large); In the era of
cheap memory, big data, and fast processors, normality tests should always reject the null of normal distribution for large (though not insanely large) samples. And so, perversely, normality tests should
only be used for small samples, when they presumably have lower power
and less control over type I rate. Is this a valid argument? Is this a well-known argument? Are there well known tests for a 'fuzzier' null hypothesis than normality? | It's not an argument. It is a (a bit strongly stated) fact that formal normality tests always reject on the huge sample sizes we work with today. It's even easy to prove that when n gets large, even the smallest deviation from perfect normality will lead to a significant result. And as every dataset has some degree of randomness, no single dataset will be a perfectly normally distributed sample. But in applied statistics the question is not whether the data/residuals ... are perfectly normal, but normal enough for the assumptions to hold. Let me illustrate with the Shapiro-Wilk test . The code below constructs a set of distributions that approach normality but aren't completely normal. Next, we test with shapiro.test whether a sample from these almost-normal distributions deviate from normality. In R: x <- replicate(100, { # generates 100 different tests on each distribution
c(shapiro.test(rnorm(10)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(100)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(1000)+c(1,0,2,0,1))$p.value, #$
shapiro.test(rnorm(5000)+c(1,0,2,0,1))$p.value) #$
} # rnorm gives a random draw from the normal distribution
)
rownames(x) <- c("n10","n100","n1000","n5000")
rowMeans(x<0.05) # the proportion of significant deviations
n10 n100 n1000 n5000
0.04 0.04 0.20 0.87 The last line checks which fraction of the simulations for every sample size deviates significantly from normality. So in 87% of the cases, a sample of 5000 observations deviates significantly from normality according to the Shapiro-Wilk test. Yet, if you look at the qq plots, you would never ever decide on a deviation from normality. Below you see as an example the qq-plots for one set of random samples with p-values n10 n100 n1000 n5000
0.760 0.681 0.164 0.007 | {
"source": [
"https://stats.stackexchange.com/questions/2492",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/795/"
]
} |
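A small companion to the simulation above, for a single "almost normal" sample of size 5000: the Shapiro-Wilk p-value is often well below 0.05 (87% of the time in the simulation), yet the qq-plot looks essentially straight. The seed is arbitrary. set.seed(4)
x5000 <- rnorm(5000) + c(1, 0, 2, 0, 1)   # same "almost normal" construction as above
shapiro.test(x5000)$p.value               # frequently "significant"
qqnorm(x5000); qqline(x5000)              # yet visually very close to normal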
2,499 | There are several distinct usages: kernel density estimation kernel trick kernel smoothing Please explain what the "kernel" in them means, in plain English, in your own words. | There appear to be at least two different meanings of "kernel": one more commonly used in statistics; the other in machine learning. In statistics "kernel" is most commonly used to refer to kernel density estimation and kernel smoothing . A straightforward explanation of kernels in density estimation can be found ( here ). In machine learning "kernel" is usually used to refer to the kernel trick , a method of using a linear classifier to solve a non-linear problem "by mapping the original non-linear observations into a higher-dimensional space". A simple visualisation might be to imagine that all of class $0$ are within radius $r$ of the origin in an x, y plane (class $0$: $x^2 + y^2 < r^2$); and all of class $1$ are beyond radius $r$ in that plane (class $1$: $x^2 + y^2 > r^2$). No linear separator is possible, but clearly a circle of radius $r$ will perfectly separate the data. We can transform the data into three dimensional space by calculating three new variables $x^2$, $y^2$ and $\sqrt{2}xy$. The two classes will now be separable by a plane in this 3 dimensional space. The equation of that optimally separating hyperplane where $z_1 = x^2, z_2 = y^2$ and $z_3 = \sqrt{2}xy$ is $z_1 + z_2 = 1$, and in this case omits $z_3$. (If the circle is off-set from the origin, the optimal separating hyperplane will vary in $z_3$ as well.) The kernel is the mapping function which calculates the value of the 2-dimensional data in 3-dimensional space. In mathematics, there are other uses of "kernels" , but these seem to be the main ones in statistics. | {
"source": [
"https://stats.stackexchange.com/questions/2499",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74/"
]
} |
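A small R sketch of the circular example in the answer above: points are labelled by whether they fall inside or outside a circle of radius r, and the map (x, y) -> (x^2, y^2, sqrt(2)xy) makes the two classes linearly separable; the sample size and radius are arbitrary. set.seed(1)
n <- 200; r <- 1
x <- runif(n, -2, 2); y <- runif(n, -2, 2)
class <- as.integer(x^2 + y^2 > r^2)          # 0 inside the circle, 1 outside
z1 <- x^2; z2 <- y^2; z3 <- sqrt(2) * x * y   # the mapping from the answer
# With the circle centred at the origin, the plane z1 + z2 = r^2 separates the classes
table(class, beyond_plane = z1 + z2 > r^2)    # z3 is not needed in this centred case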
2,516 | In a recent article of Amstat News , the authors (Mark van der Laan and Sherri Rose) stated that "We know that for large enough sample sizes, every study—including ones in which the null hypothesis of no effect is true — will declare a statistically significant effect.". Well, I for one didn't know that. Is this true? Does it mean that hypothesis testing is worthless for large data sets? | It is not true. If the null hypothesis is true then it will not be rejected more frequently at large sample sizes than small. There is an erroneous rejection rate that's usually set to 0.05 (alpha) but it is independent of sample size. Therefore, taken literally the statement is false. Nevertheless, it's possible that in some situations (even whole fields) all nulls are false and therefore all will be rejected if N is high enough. But is this a bad thing? What is true is that trivially small effects can be found to be "significant" with very large sample sizes. That does not suggest that you shouldn't have such large samples sizes. What it means is that the way you interpret your finding is dependent upon the effect size and sensitivity of the test. If you have a very small effect size and highly sensitive test you have to recognize that the statistically significant finding may not be meaningful or useful. Given some people don't believe that a test of the null hypothesis, when the null is true , always has an error rate equal to the cutoff point selected for any sample size, here's a simple simulation in R proving the point. Make N as large as you like and the rate of Type I errors will remain constant. # number of subjects in each condition
n <- 100
# number of replications of the study in order to check the Type I error rate
nsamp <- 10000
ps <- replicate(nsamp, {
#population mean = 0, sd = 1 for both samples, therefore, no real effect
y1 <- rnorm(n, 0, 1)
y2 <- rnorm(n, 0, 1)
tt <- t.test(y1, y2, var.equal = TRUE)
tt$p.value
})
sum(ps < .05) / nsamp
# ~ .05 no matter how big n is. Note in particular that the error rate does not grow with n: large samples do not end up always finding effects when the null is true.
"source": [
"https://stats.stackexchange.com/questions/2516",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/666/"
]
} |
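A short companion to the simulation above, illustrating the other half of the answer: with a genuinely tiny but non-zero effect (a mean difference of 0.01 standard deviations, chosen arbitrarily), a huge sample almost guarantees statistical significance even though the effect may be practically meaningless. set.seed(1)
n <- 1e6                                                    # per group
t.test(rnorm(n, mean = 0.01), rnorm(n, mean = 0))$p.value   # almost always < .05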
2,541 | I have read/heard many times that the sample size of at least 30 units is considered as "large sample" (normality assumptions of means usually approximately holds due to the CLT, ...). Therefore, in my experiments, I usually generate samples of 30 units. Can you please give me some reference which should be cited when using sample size 30? | The choice of n = 30 for a boundary between small and large samples is a rule of thumb, only. There is a large number of books that quote (around) this value, for example, Hogg and Tanis' Probability and Statistical Inference (7e) says "greater than 25 or 30". That said, the story told to me was that the only reason 30 was regarded as a good boundary was because it made for pretty Student's t tables in the back of textbooks to fit nicely on one page. That, and the critical values (between Student's t and Normal) are only off by approximately up to 0.25, anyway, from df = 30 to df = infinity. For hand computation the difference didn't really matter. Nowadays it is easy to compute critical values for all sorts of things to 15 decimal places. On top of that we have resampling and permutation methods for which we aren't even restricted to parametric population distributions. In practice I never rely on n = 30. Plot the data. Superimpose a normal distribution, if you like. Visually assess whether a normal approximation is appropriate (and ask whether an approximation is even really needed). If generating samples for research and an approximation is obligatory, generate enough of a sample size to make the approximation as close as desired (or as close as computationally feasible). | {
"source": [
"https://stats.stackexchange.com/questions/2541",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1215/"
]
} |
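A quick numerical check, in R, of the point made above about Student's t versus normal critical values; the probability levels below are simply the usual textbook table columns. p <- c(0.95, 0.975, 0.99, 0.995, 0.999)
round(qt(p, df = 30) - qnorm(p), 3)             # differences at df = 30
round(qt(0.975, df = c(10, 30, 100, 1000)), 3)  # how quickly the t quantile approaches 1.96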
2,547 | If you look at Wolfram Alpha Or this Wikipedia page List of countries by median age Clearly median seems to be the statistic of choice when it comes to ages. I am not able to explain to myself why arithmetic mean would be a worse statistic. Why is it so? Originally posted here because I did not know this site existed. | Statistics does not provide a good answer to this question, in my opinion. A mean can be relevant in mortality studies for example, but ages are not as easy to measure as you might think. Older people, illiterate people, and people in some third-world countries tend to round their ages to a multiple of 5 or 10, for instance. The median is more resistant to such errors than the mean. Moreover, median ages are typically 20 – 40, but people can live to 100 and more (an increasing and noticeable proportion of the population of modern countries now lives beyond 100). People of such age have 1.5 to 4 times the influence on the mean than they do on the median compared to very young people. Thus, the median is a bit more up-to-date statistic concerning a country's age distribution and is a little more independent of mortality rates and life expectancy than the mean is. Finally, the median gives us a slightly better picture of what the age distribution itself looks like: when you see a median of 35, for example, you know that half the population is older than 35 and you can infer some things about birth rates, ages of parents, and so on; but if the mean is 35, you can't say as much, because that 35 could be influenced by a large population bulge at age 70, for example, or perhaps a population gap in some age range due to an old war or epidemic. Thus, for demographic, not statistical, reasons, a median appears more worthy of the role of an omnibus value for summarizing the ages of relatively large populations of people. | {
"source": [
"https://stats.stackexchange.com/questions/2547",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/285/"
]
} |
2,573 | If we want to visualize the distribution of continuous data, which one among the histogram and the pdf should be used? What are the differences, not formula-wise, between a histogram and a pdf? | To clarify Dirk's point: Say your data is a sample from a normal distribution. You could construct the following plot: The red line is the empirical density estimate, the blue line is the theoretical pdf of the underlying normal distribution. Note that the histogram is expressed in densities and not in frequencies here. This is done for plotting purposes; in general, frequencies are used in histograms. So to answer your question: you use the empirical distribution (i.e. the histogram) if you want to describe your sample, and the pdf if you want to describe the hypothesized underlying distribution. The plot is generated by the following code in R: x <- rnorm(100)
y <- seq(-4,4,length.out=200)
hist(x,freq=F,ylim=c(0,0.5))
lines(density(x),col="red",lwd=2)
lines(y,dnorm(y),col="blue",lwd=2) | {
"source": [
"https://stats.stackexchange.com/questions/2573",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
2,592 | After performing principal component analysis (PCA), I want to project a new vector onto PCA space (i.e. find its coordinates in the PCA coordinate system). I have calculated PCA in R language using prcomp . Now I should be able to multiply my vector by the PCA rotation matrix. Should principal components in this matrix be arranged in rows or columns? | Well, @Srikant already gave you the right answer since the rotation (or loadings) matrix contains eigenvectors arranged column-wise, so that you just have to multiply (using %*% ) your vector or matrix of new data with e.g. prcomp(X)$rotation . Be careful, however, with any extra centering or scaling parameters that were applied when computing PCA EVs. In R, you may also find useful the predict() function, see ?predict.prcomp . BTW, you can check how projection of new data is implemented by simply entering: getS3method("predict", "prcomp") | {
"source": [
"https://stats.stackexchange.com/questions/2592",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1260/"
]
} |
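A minimal sketch of the projection described above, on made-up data, showing both the manual multiplication by prcomp(X)$rotation (with the stored centering/scaling re-applied) and the predict() shortcut. set.seed(1)
X <- matrix(rnorm(100 * 5), ncol = 5, dimnames = list(NULL, paste0("v", 1:5)))
pca <- prcomp(X, center = TRUE, scale. = TRUE)
new <- matrix(rnorm(5), nrow = 1, dimnames = list(NULL, paste0("v", 1:5)))
manual <- scale(new, center = pca$center, scale = pca$scale) %*% pca$rotation
auto   <- predict(pca, newdata = new)
all.equal(unname(manual), unname(auto))   # TRUE: both give the new coordinates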
2,628 | Imagine you have to do reporting on the number of candidates who take a given test each year. It seems rather difficult to generalize the observed % of success, for instance, to a wider population, due to the specificity of the target population. So you may consider that these data represent the whole population. Are results of tests indicating that the proportions of males and females are different really correct? Does a test comparing observed and theoretical proportions appear to be a correct one, since you consider a whole population (and not a sample)? | There may be varying opinions on this, but I would treat the population data as a sample and assume a hypothetical population, then make inferences in the usual way. One way to think about this is that there is an underlying data generating process responsible for the collected data, the "population" distribution. In your particular case, this might make even more sense since you will have cohorts in the future. Then your population is really all the cohorts who take the test, including future ones. In this way, you could account for time-based variations if you have data for more than a year, or try to account for latent factors through your error model. In short, you can develop richer models with greater explanatory power.
"source": [
"https://stats.stackexchange.com/questions/2628",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1154/"
]
} |
2,641 | The wikipedia page claims that likelihood and probability are distinct concepts. In non-technical parlance, "likelihood" is usually a synonym for "probability," but in statistical usage there is a clear distinction in perspective: the number that is the probability of some observed outcomes given a set of parameter values is regarded as the likelihood of the set of parameter values given the observed outcomes. Can someone give a more down-to-earth description of what this means? In addition, some examples of how "probability" and "likelihood" disagree would be nice. | The answer depends on whether you are dealing with discrete or continuous random variables. So, I will split my answer accordingly. I will assume that you want some technical details and not necessarily an explanation in plain English. Discrete Random Variables Suppose that you have a stochastic process that takes discrete values (e.g., outcomes of tossing a coin 10 times, number of customers who arrive at a store in 10 minutes etc). In such cases, we can calculate the probability of observing a particular set of outcomes by making suitable assumptions about the underlying stochastic process (e.g., probability of coin landing heads is $p$ and that coin tosses are independent). Denote the observed outcomes by $O$ and the set of parameters that describe the stochastic process as $\theta$ . Thus, when we speak of probability we want to calculate $P(O|\theta)$ . In other words, given specific values for $\theta$ , $P(O|\theta)$ is the probability that we would observe the outcomes represented by $O$ . However, when we model a real life stochastic process, we often do not know $\theta$ . We simply observe $O$ and the goal then is to arrive at an estimate for $\theta$ that would be a plausible choice given the observed outcomes $O$ . We know that given a value of $\theta$ the probability of observing $O$ is $P(O|\theta)$ . Thus, a 'natural' estimation process is to choose that value of $\theta$ that would maximize the probability that we would actually observe $O$ . In other words, we find the parameter values $\theta$ that maximize the following function: $L(\theta|O) = P(O|\theta)$ $L(\theta|O)$ is called the likelihood function. Notice that by definition the likelihood function is conditioned on the observed $O$ and that it is a function of the unknown parameters $\theta$ . Continuous Random Variables In the continuous case the situation is similar with one important difference. We can no longer talk about the probability that we observed $O$ given $\theta$ because in the continuous case $P(O|\theta) = 0$ . Without getting into technicalities, the basic idea is as follows: Denote the probability density function (pdf) associated with the outcomes $O$ as: $f(O|\theta)$ . Thus, in the continuous case we estimate $\theta$ given observed outcomes $O$ by maximizing the following function: $L(\theta|O) = f(O|\theta)$ In this situation, we cannot technically assert that we are finding the parameter value that maximizes the probability that we observe $O$ as we maximize the PDF associated with the observed outcomes $O$ . | {
"source": [
"https://stats.stackexchange.com/questions/2641",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/386/"
]
} |
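A small numeric illustration of the distinction drawn above, using a made-up discrete example: O is "7 heads in 10 tosses", and the same dbinom() value is read either as the probability of O given theta, or as the likelihood of theta given O. heads <- 7; n <- 10                               # the observed outcomes O
lik <- function(theta) dbinom(heads, size = n, prob = theta)
lik(0.5)                                          # P(O | theta = 0.5)
theta_grid <- seq(0, 1, by = 0.001)
theta_grid[which.max(lik(theta_grid))]            # maximises L(theta | O); equals 0.7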
2,691 | In today's pattern recognition class my professor talked about PCA, eigenvectors and eigenvalues. I understood the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a machine. But I didn't understand it. I didn't get the purpose of it. I didn't get the feel of it. I strongly believe in the following quote: You do not really understand something unless you can explain it to your grandmother. -- Albert Einstein Well, I can't explain these concepts to a layman or grandma. Why PCA, eigenvectors & eigenvalues? What was the need for these concepts? How would you explain these to a layman? | Imagine a big family dinner where everybody starts asking you about PCA. First, you explain it to your great-grandmother; then to your grandmother; then to your mother; then to your spouse; finally, to your daughter (a mathematician). Each time the next person is less of a layman. Here is how the conversation might go. Great-grandmother: I heard you are studying "Pee-See-Ay". I wonder what that is... You: Ah, it's just a method of summarizing some data. Look, we have some wine bottles standing here on the table. We can describe each wine by its colour, how strong it is, how old it is, and so on. Visualization originally found here . We can compose a whole list of different characteristics of each wine in our cellar. But many of them will measure related properties and so will be redundant. If so, we should be able to summarize each wine with fewer characteristics! This is what PCA does. Grandmother: This is interesting! So this PCA thing checks what characteristics are redundant and discards them? You: Excellent question, granny! No, PCA is not selecting some characteristics and discarding the others. Instead, it constructs some new characteristics that turn out to summarize our list of wines well. Of course, these new characteristics are constructed using the old ones; for example, a new characteristic might be computed as wine age minus wine acidity level or some other combination (we call them linear combinations ). In fact, PCA finds the best possible characteristics, the ones that summarize the list of wines as well as only possible (among all conceivable linear combinations). This is why it is so useful. Mother: Hmmm, this certainly sounds good, but I am not sure I understand. What do you actually mean when you say that these new PCA characteristics "summarize" the list of wines? You: I guess I can give two different answers to this question. The first answer is that you are looking for some wine properties (characteristics) that strongly differ across wines. Indeed, imagine that you come up with a property that is the same for most of the wines - like the stillness of wine after being poured. This would not be very useful, would it? Wines are very different, but your new property makes them all look the same! This would certainly be a bad summary. Instead, PCA looks for properties that show as much variation across wines as possible. The second answer is that you look for the properties that would allow you to predict, or "reconstruct", the original wine characteristics. Again, imagine that you come up with a property that has no relation to the original characteristics - like the shape of a wine bottle; if you use only this new property, there is no way you could reconstruct the original ones! This, again, would be a bad summary. So PCA looks for properties that allow reconstructing the original characteristics as well as possible. 
Surprisingly, it turns out that these two aims are equivalent and so PCA can kill two birds with one stone. Spouse: But darling, these two "goals" of PCA sound so different! Why would they be equivalent? You: Hmmm. Perhaps I should make a little drawing (takes a napkin and starts scribbling) . Let us pick two wine characteristics, perhaps wine darkness and alcohol content -- I don't know if they are correlated, but let's imagine that they are. Here is what a scatter plot of different wines could look like: Each dot in this "wine cloud" shows one particular wine. You see that the two properties ( $x$ and $y$ on this figure) are correlated. A new property can be constructed by drawing a line through the centre of this wine cloud and projecting all points onto this line. This new property will be given by a linear combination $w_1 x + w_2 y$ , where each line corresponds to some particular values of $w_1$ and $w_2$ . Now, look here very carefully -- here is what these projections look like for different lines (red dots are projections of the blue dots): As I said before, PCA will find the "best" line according to two different criteria of what is the "best". First, the variation of values along this line should be maximal. Pay attention to how the "spread" (we call it "variance") of the red dots changes while the line rotates; can you see when it reaches maximum? Second, if we reconstruct the original two characteristics (position of a blue dot) from the new one (position of a red dot), the reconstruction error will be given by the length of the connecting red line. Observe how the length of these red lines changes while the line rotates; can you see when the total length reaches minimum? If you stare at this animation for some time, you will notice that "the maximum variance" and "the minimum error" are reached at the same time, namely when the line points to the magenta ticks I marked on both sides of the wine cloud. This line corresponds to the new wine property that will be constructed by PCA. By the way, PCA stands for "principal component analysis", and this new property is called "first principal component". And instead of saying "property" or "characteristic", we usually say "feature" or "variable". Daughter: Very nice, papa! I think I can see why the two goals yield the same result: it is essentially because of the Pythagoras theorem, isn't it? Anyway, I heard that PCA is somehow related to eigenvectors and eigenvalues; where are they in this picture? You: Brilliant observation. Mathematically, the spread of the red dots is measured as the average squared distance from the centre of the wine cloud to each red dot; as you know, it is called the variance . On the other hand, the total reconstruction error is measured as the average squared length of the corresponding red lines. But as the angle between red lines and the black line is always $90^\circ$ , the sum of these two quantities is equal to the average squared distance between the centre of the wine cloud and each blue dot; this is precisely Pythagoras theorem. Of course, this average distance does not depend on the orientation of the black line, so the higher the variance, the lower the error (because their sum is constant). This hand-wavy argument can be made precise ( see here ). By the way, you can imagine that the black line is a solid rod, and each red line is a spring. 
The energy of the spring is proportional to its squared length (this is known in physics as Hooke's law), so the rod will orient itself such as to minimize the sum of these squared distances. I made a simulation of what it will look like in the presence of some viscous friction: Regarding eigenvectors and eigenvalues. You know what a covariance matrix is; in my example it is a $2\times 2$ matrix that is given by $$\begin{pmatrix}1.07 &0.63\\0.63 & 0.64\end{pmatrix}.$$ What this means is that the variance of the $x$ variable is $1.07$ , the variance of the $y$ variable is $0.64$ , and the covariance between them is $0.63$ . As it is a square symmetric matrix, it can be diagonalized by choosing a new orthogonal coordinate system, given by its eigenvectors (incidentally, this is called spectral theorem ); corresponding eigenvalues will then be located on the diagonal. In this new coordinate system, the covariance matrix is diagonal and looks like that: $$\begin{pmatrix}1.52 &0\\0 & 0.19\end{pmatrix},$$ meaning that the correlation between points is now zero. It becomes clear that the variance of any projection will be given by a weighted average of the eigenvalues (I am only sketching the intuition here). Consequently, the maximum possible variance ( $1.52$ ) will be achieved if we simply take the projection on the first coordinate axis. It follows that the direction of the first principal component is given by the first eigenvector of the covariance matrix. ( More details here. ) You can see this on the rotating figure as well: there is a gray line there orthogonal to the black one; together, they form a rotating coordinate frame. Try to notice when the blue dots become uncorrelated in this rotating frame. The answer, again, is that it happens precisely when the black line points at the magenta ticks. Now I can tell you how I found them (the magenta ticks): they mark the direction of the first eigenvector of the covariance matrix, which in this case is equal to $(0.81, 0.58)$ . Per popular request, I shared the Matlab code to produce the above animations . | {
"source": [
"https://stats.stackexchange.com/questions/2691",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/851/"
]
} |
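The numbers quoted in the answer above can be reproduced in a few lines of R; eigen() may return the eigenvector with the opposite sign, which is immaterial. C <- matrix(c(1.07, 0.63, 0.63, 0.64), nrow = 2)  # the wine covariance matrix from the answer
e <- eigen(C)
round(e$values, 2)        # 1.52 and 0.19
round(e$vectors[, 1], 2)  # approximately (0.81, 0.58), possibly times -1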
2,715 | I like G van Belle's book on Statistical Rules of Thumb , and to a lesser extent Common Errors in Statistics (and How to Avoid Them) from Phillip I Good and James W. Hardin. They address common pitfalls when interpreting results from experimental and observational studies and provide practical recommendations for statistical inference, or exploratory data analysis. But I feel that "modern" guidelines are somewhat lacking, especially with the ever growing use of computational and robust statistics in various fields, or the introduction of techniques from the machine learning community in, e.g. clinical biostatistics or genetic epidemiology. Apart from computational tricks or common pitfalls in data visualization which could be addressed elsewhere, I would like to ask: What are the top rules of thumb you would recommend for efficient data analysis? ( one rule per answer, please ). I am thinking of guidelines that you might provide to a colleague, a researcher without strong background in statistical modeling, or a student in intermediate to advanced course. This might pertain to various stages of data analysis, e.g. sampling strategies, feature selection or model building, model comparison, post-estimation, etc. | Don't forget to do some basic data checking before you start the analysis. In particular, look at a scatter plot of every variable you intend to analyse against ID number, date / time of data collection or similar. The eye can often pick up patterns that reveal problems when summary statistics don't show anything unusual. And if you're going to use a log or other transformation for analysis, also use it for the plot. | {
"source": [
"https://stats.stackexchange.com/questions/2715",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/930/"
]
} |
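A minimal sketch of the check described above, on a made-up data frame with an id column; in practice you would loop over all variables you intend to analyse and also plot them on the transformed scale you plan to use. d <- data.frame(id = 1:200, x = rnorm(200), y = rlnorm(200))
for (v in setdiff(names(d), "id")) {
  plot(d$id, d[[v]], xlab = "observation id", ylab = v)         # look for drift, jumps, outliers
}
plot(d$id, log(d$y), xlab = "observation id", ylab = "log(y)")  # same check on the analysis scale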
2,717 | I have a (symmetric) matrix M that represents the distance between each pair of nodes. For example, A B C D E F G H I J K L
A 0 20 20 20 40 60 60 60 100 120 120 120
B 20 0 20 20 60 80 80 80 120 140 140 140
C 20 20 0 20 60 80 80 80 120 140 140 140
D 20 20 20 0 60 80 80 80 120 140 140 140
E 40 60 60 60 0 20 20 20 60 80 80 80
F 60 80 80 80 20 0 20 20 40 60 60 60
G 60 80 80 80 20 20 0 20 60 80 80 80
H 60 80 80 80 20 20 20 0 60 80 80 80
I 100 120 120 120 60 40 60 60 0 20 20 20
J 120 140 140 140 80 60 80 80 20 0 20 20
K 120 140 140 140 80 60 80 80 20 20 0 20
L 120 140 140 140 80 60 80 80 20 20 20 0 Is there any method to extract clusters from M (if needed, the number of clusters can be fixed), such that each cluster contains nodes with small distances between them. In the example, the clusters would be (A, B, C, D) , (E, F, G, H) and (I, J, K, L) . I've already tried UPGMA and k -means but the resulting clusters are very bad. The distances are the average steps a random walker would take to go from node A to node B ( != A ) and go back to node A . It's guaranteed that M^1/2 is a metric. To run k -means, I don't use the centroid. I define the distance between node n cluster c as the average distance between n and all nodes in c . Thanks a lot :) | There are a number of options. k-medoids clustering First, you could try partitioning around medoids (pam) instead of using k-means clustering. This one is more robust, and could give better results. Van der Laan reworked the algorithm. If you're going to implement it yourself, his article is worth a read. There is a specific k-medoids clustering algorithm for large datasets. The algorithm is called Clara in R, and is described in chapter 3 of Finding Groups in Data: An Introduction to Cluster Analysis. by Kaufman, L and Rousseeuw, PJ (1990). hierarchical clustering Instead of UPGMA, you could try some other hierarchical clustering options. First of all, when you use hierarchical clustering, be sure you define the partitioning method properly. This partitioning method is essentially how the distances between observations and clusters are calculated. I mostly use Ward's method or complete linkage, but other options might be the choice for you. Don't know if you tried it yet, but the single linkage method or neighbour joining is often preferred above UPGMA in phylogenetic applications. If you didn't try it yet, you could give it a shot as well, as it often gives remarkably good results. In R you can take a look at the package cluster . All described algorithms are implemented there. See ?pam, ?clara, ?hclust, ... Check also the different implementation of the algorithm in ?kmeans. Sometimes chosing another algorithm can improve the clustering substantially. EDIT : Just thought about something: If you work with graphs and nodes and the likes, you should take a look at the markov clustering algorithm as well. That one is used for example in grouping sequences based on blast similarities, and performs incredibly well. It can do the clustering for you, or give you some ideas on how to solve the research problem you're focusing on. Without knowing anything about it in fact, I guess his results are definitely worth looking at. If I may say so, I still consider this method of Stijn van Dongen one of the nicest results in clustering I've ever encountered. http://www.micans.org/mcl/ | {
"source": [
"https://stats.stackexchange.com/questions/2717",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1316/"
]
} |
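A short sketch of the suggestions above (hierarchical clustering with an explicit linkage, and pam) applied to a precomputed distance matrix; a small simulated matrix stands in for the 12 x 12 matrix in the question, and the cluster package is assumed to be installed. library(cluster)                      # provides pam()
set.seed(1)
pts <- rbind(matrix(rnorm(8, mean = 0), ncol = 2),
             matrix(rnorm(8, mean = 5), ncol = 2),
             matrix(rnorm(8, mean = 10), ncol = 2))
M <- as.matrix(dist(pts))             # stand-in for the symmetric distance matrix above
D <- as.dist(M)                       # hand the precomputed distances straight to the methods
hc <- hclust(D, method = "complete")  # or "ward.D2", "single", ...
cutree(hc, k = 3)                     # hierarchical cluster labels
pam(D, k = 3)$clustering              # k-medoids (pam) labels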
2,819 | I have p values from a lot of tests and would like to know whether there is actually something significant after correcting for multiple testing. The complication: my tests are not independent. The method I am thinking about (a variant of Fisher's Product Method, Zaykin et al., Genet Epidemiol , 2002) needs the correlation between the p values. In order to estimate this correlation, I am currently thinking about bootstrapping cases, running the analyses and correlating the resulting vectors of p values. Does anyone have a better idea? Or even a better idea for my original problem (correcting for multiple testing in correlated tests)? Background: I am logistically regressing whether or not my subjects are suffering from a particular disease on the interaction between their genotype (AA, Aa or aa) and a covariate. However, the genotype is actually a lot (30-250) of Single Nucleotide Polymorphisms (SNPs), which are certainly not independent but in Linkage Disequilibrium. | This is actually a hot topic in Genomewide analysis studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some authors, but in a different context (replication studies or meta-analysis, see e.g. (1) for a recent review). Combining SNP p-values by Fisher's method is generally used when one wants to derive an unique p-value for a given gene; this allows to work at the gene level, and reduce the amount of dimensionality of subsequent testing, but as you said the non-independence between markers (arising from spatial colocation or linkage disiquilibrium, LD) introduce a bias. More powerful alternatives rely on resampling procedures, for example the use of maxT statistics for combining p-value and working at the gene level or when one is interested in pathway-based approaches, see e.g. (2) (§2.4 p. 93 provides details on their approach). My main concerns with bootstraping (with replacement) would be that you are introducing an artificial form of relatedness, or in other words you create virtual twins, hence altering Hardy-Weinberg equilibrium (but also minimum allele frequency and call rate). This would not be the case with a permutation approach where you permute individual labels and keep the genotyping data as is. Usually, the plink software can give you raw and permuted p-values, although it uses (by default) an adaptive testing strategy with a sliding window that allows to stop running all permutations (say 1000 per SNP) if it appears that the SNP under consideration is not "interesting"; it also has option for computing maxT, see the online help . But given the low number of SNPs you are considering, I would suggest relying on FDR-based or maxT tests as implemented in the multtest R package (see mt.maxT ), but the definitive guide to resampling strategies for genomic application is Multiple Testing Procedures with Applications to Genomics , from Dudoit & van der Laan (Springer, 2008). See also Andrea Foulkes's book on genetics with R , which is reviewed in the JSS. She has great material on multiple testing procedures. Further Notes Many authors have pointed to the fact that simple multiple testing correcting methods such as the Bonferroni or Sidak are too stringent for adjusting the results for the individual SNPs. Moreover, neither of these methods take into account the correlation that exists between SNPs due to LD which tags the genetic variation across gene regions. 
Other alternatives have been proposed, like a derivative of Holm's method for multiple comparison (3), Hidden Markov Models (4), conditional or positive FDR (5) or derivatives thereof (6), to name a few. So-called gap statistics or sliding windows have proved successful in some cases, but you'll find a good review in (7) and (8). I've also heard of methods that make effective use of the haplotype structure or LD, e.g. (9), but I never used them. They seem, however, more related to estimating the correlation between markers, not p-values as you meant. But in fact, you might better think in terms of the dependency structure between successive test statistics than between correlated p-values. References Cantor, RM, Lange, K and Sinsheimer, JS. Prioritizing GWAS Results: A Review of Statistical Methods and Recommendations for Their Application. Am J Hum Genet. 2010 86(1): 6–22. Corley, RP, Zeiger, JS, Crowley, T et al. Association of candidate genes with antisocial drug dependence in adolescents. Drug and Alcohol Dependence 2008 96: 90–98. Dalmasso, C, Génin, E and Trégouet DA. A Weighted-Holm Procedure Accounting for Allele Frequencies in Genomewide Association Studies. Genetics 2008 180(1): 697–702. Wei, Z, Sun, W, Wang, K, and Hakonarson, H. Multiple Testing in Genome-Wide Association Studies via Hidden Markov Models. Bioinformatics 2009 25(21): 2802-2808. Broberg, P. A comparative review of estimates of the proportion unchanged genes and the false discovery rate. BMC Bioinformatics 2005 6: 199. Need, AC, Ge, D, Weale, ME, et al. A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia. PLoS Genet. 2009 5(2): e1000373. Han, B, Kang, HM, and Eskin, E. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers. PLoS Genetics 2009. Liang, Y and Kelemen, A. Statistical advances and challenges for analyzing correlated high dimensional snp data in genomic study for complex diseases. Statistics Surveys 2008 2: 43–60. -- the best recent review ever Nyholt, DR. A Simple Correction for Multiple Testing for Single-Nucleotide Polymorphisms in Linkage Disequilibrium with Each Other. Am J Hum Genet. 2004 74(4): 765–769. Nicodemus, KK, Liu, W, Chase, GA, Tsai, Y-Y, and Fallin, MD. Comparison of type I error for multiple test corrections in large single-nucleotide polymorphism studies using principal components versus haplotype blocking algorithms. BMC Genetics 2005; 6(Suppl 1): S78. Peng, Q, Zhao, J, and Xue, F. PCA-based bootstrap confidence interval tests for gene-disease association involving multiple SNPs. BMC Genetics 2010, 11:6. Li, M, Romero, R, Fu, WJ, and Cui, Y (2010). Mapping Haplotype-haplotype Interactions with Adaptive LASSO. BMC Genetics 2010, 11:79 -- although not directly related to the question, it covers haplotype-based analysis/epistatic effect | {
"source": [
"https://stats.stackexchange.com/questions/2819",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1352/"
]
} |
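For completeness, a small base-R sketch of standard adjustments on made-up p-values; note that p.adjust() does not use the correlation between tests at all (BH is justified under a form of positive dependence, BY under arbitrary dependence), so the permutation-based maxT approach recommended above remains the way to actually exploit the LD structure. set.seed(1)
p <- c(runif(95), runif(5, 0, 0.001))          # 100 made-up p-values, a handful of them small
adj <- sapply(c("bonferroni", "holm", "BH", "BY"), function(m) p.adjust(p, method = m))
colSums(adj < 0.05)                            # how many tests survive each correction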
2,910 | We often hear of project management and design patterns in computer science, but less frequently in statistical analysis. However, it seems that a decisive step toward designing an effective and durable statistical project is to keep things organized. I often advocate the use of R and a consistent organization of files in separate folders (raw data file, transformed data file, R scripts, figures, notes, etc.). The main reason for this approach is that it may be easier to run your analysis later (when you forgot how you happened to produce a given plot, for instance). What are the best practices for statistical project management , or the recommendations you would like to give from your own experience? Of course, this applies to any statistical software. ( one answer per post, please ) | I am compiling a quick series of guidelines I found on SO (as suggested by @Shane), Biostar (hereafter, BS), and this SE. I tried my best to acknowledge ownership for each item, and to select first or highly upvoted answer. I also added things of my own, and flagged items that are specific to the [R] environment. Data management Create a project structure for keeping all things at the right place (data, code, figures, etc., giovanni /BS) Never modify raw data files (ideally, they should be read-only), copy/rename to new ones when making transformations, cleaning, etc. Check data consistency ( whuber /SE) Manage script dependencies and data flow with a build automation tool, like GNU make ( Karl Broman / Zachary Jones ) Coding organize source code in logical units or building blocks ( Josh Reich / hadley / ars /SO; giovanni / Khader Shameer /BS) separate source code from editing stuff, especially for large project -- partly overlapping with previous item and reporting Document everything, with e.g. [R]oxygen ( Shane /SO) or consistent self-annotation in the source file -- a good discussion on Medstats, Documenting analyses and data edits Options [R] Custom functions can be put in a dedicated file (that can be sourced when necessary), in a new environment (so as to avoid populating the top-level namespace, Brendan OConnor /SO), or a package ( Dirk Eddelbuettel / Shane /SO) Analysis Don't forget to set/record the seed you used when calling RNG or stochastic algorithms (e.g. k-means) For Monte Carlo studies, it may be interesting to store specs/parameters in a separate file ( sumatra may be a good candidate, giovanni /BS) Don't limit yourself to one plot per variable, use multivariate (Trellis) displays and interactive visualization tools (e.g. GGobi) Versioning Use some kind of revision control for easy tracking/export, e.g. Git ( Sharpie / VonC / JD Long /SO) -- this follows from nice questions asked by @Jeromy and @Tal Backup everything, on a regular basis ( Sharpie / JD Long /SO) Keep a log of your ideas, or rely on an issue tracker, like ditz ( giovanni /BS) -- partly redundant with the previous item since it is available in Git Editing/Reporting [R] Sweave ( Matt Parker /SO) or the more up-to-date knitr [R] Brew ( Shane /SO) [R] R2HTML or ascii As a side note, Hadley Wickham offers a comprehensive overview of R project management , including reproducible exemplification and an unified philosophy of data . Finally, in his R-oriented Workflow of statistical data analysis Oliver Kirchkamp offers a very detailed overview of why adopting and obeying a specific workflow will help statisticians collaborate with each other, while ensuring data integrity and reproducibility of results. 
It further includes some discussion of using a weaving and version control system. Stata users might find J. Scott Long's The Workflow of Data Analysis Using Stata useful too. | {
"source": [
"https://stats.stackexchange.com/questions/2910",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/930/"
]
} |
2,948 | I'm wondering if someone could suggest good starting points when it comes to performing community detection/graph partitioning/clustering on a graph that has weighted, undirected edges. The graph in question has approximately 3 million edges and each edge expresses the degree of similarity between the two vertices it connects. In particular, in this dataset vertices are individuals and edges are a measure of the similarity of their observed behavior. In the past I followed a suggestion I got here on stats.stackexchange.com and used igraph's implementation of Newman's modularity clustering and was satisfied with the results, but that was on an unweighted dataset. Are there any specific algorithms I should be looking at? | The igraph implementation of Newman's modularity clustering (the fastgreedy function) can be used with weighted edges as well. Just add a weight attribute to the edges and analyse as usual. In my experience, it runs even faster with weights as there are fewer ties.
"source": [
"https://stats.stackexchange.com/questions/2948",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1007/"
]
} |
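A minimal sketch of the weighted workflow described above, on a small random graph standing in for the real 3-million-edge one; note that the function is called cluster_fast_greedy() in recent igraph releases (fastgreedy.community() in older ones), and it picks up the weight edge attribute automatically. library(igraph)
set.seed(1)
g <- sample_gnp(100, 0.05)               # toy undirected graph
E(g)$weight <- runif(ecount(g))          # edge weights = similarity scores
cm <- cluster_fast_greedy(g)             # uses E(g)$weight since it is present
table(membership(cm))                    # community sizes
modularity(cm)                           # weighted modularity of the found partition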
3,024 | I understand that for certain datasets such as voting it performs better. Why is Poisson regression used over ordinary linear regression or logistic regression? What is the mathematical motivation for it? | Poisson distributed data is intrinsically integer-valued, which makes sense for count data. Ordinary Least Squares (OLS, which you call "linear regression") assumes that true values are normally distributed around the expected value and can take any real value, positive or negative, integer or fractional, whatever. Finally, logistic regression only works for data that is 0-1-valued (TRUE-FALSE-valued), like "has a disease" versus "doesn't have the disease". Thus, the Poisson distribution makes the most sense for count data. That said, a normal distribution is often a rather good approximation to a Poisson one for data with a mean above 30 or so. And in a regression framework, where you have predictors influencing the count, an OLS with its normal distribution may be easier to fit and would actually be more general, since the Poisson distribution and regression assume that the mean and the variance are equal, while OLS can deal with unequal means and variances - for a count data model with different means and variances, one could use a negative binomial distribution , for instance. | {
"source": [
"https://stats.stackexchange.com/questions/3024",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1392/"
]
} |
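A compact sketch fitting the three models discussed above to simulated data, so the differences are visible side by side; the coefficients and sample size are arbitrary, and the commented-out negative binomial line refers to MASS::glm.nb. set.seed(1)
x <- rnorm(500)
counts <- rpois(500, lambda = exp(0.5 + 0.8 * x))   # count outcome
yes_no <- rbinom(500, 1, plogis(-0.5 + 0.8 * x))    # 0/1 outcome
coef(glm(counts ~ x, family = poisson))             # Poisson regression (log link)
coef(glm(yes_no ~ x, family = binomial))            # logistic regression
coef(lm(counts ~ x))                                # OLS, a rough approximation for large means
# MASS::glm.nb(counts ~ x) would relax the Poisson mean = variance restriction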
3,051 | I have a vector of values for which I would like to report the average over sliding windows, where the slide is smaller than the window. For example, for a vector of the following values: 4, 5, 7, 3, 9, 8
(7+3+9)/3 = 6.33
(9+8)/3 = 5.67 And return a vector of these values: 5.33, 6.33, 5.67 Is there a simple function that will do this for me? If it also returned the indices of the window starts that would be an added bonus. In this example that would be 1,3,5 | Function rollapply in package zoo gets you close: > require(zoo)
> TS <- zoo(c(4, 5, 7, 3, 9, 8))
> rollapply(TS, width = 3, by = 2, FUN = mean, align = "left")
1 3
5.333333 6.333333 It just won't compute the last value for you as it doesn't contain 3 observations. Maybe this will be sufficient for your real problem? Also, note that the returned object has the indices you want as the names of the returned vector. Your example is making an assumption that there is an unobserved 0 in the last window. It might be more useful or realistic to pad with an NA to represent the missing information and tell mean to handle missing values. In this case we will have (8+9)/2 as our final windowed value. > TS <- zoo(c(4, 5, 7, 3, 9, 8, NA))
> rollapply(TS, width = 3, by = 2, FUN = mean, na.rm = TRUE, align = "left")
1 3 5
5.333333 6.333333 8.500000 | {
"source": [
"https://stats.stackexchange.com/questions/3051",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1024/"
]
} |
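A base-R alternative to the rollapply() call above which also returns the window start indices asked for in the question; the last, incomplete window is averaged over the values actually present, matching the NA-padded rollapply result (8.5) rather than the question's divide-by-three convention. x <- c(4, 5, 7, 3, 9, 8)
width <- 3; slide <- 2
starts <- seq(1, length(x) - 1, by = slide)     # 1, 3, 5 for this example
means  <- sapply(starts, function(i) mean(x[i:min(i + width - 1, length(x))]))
rbind(start = starts, mean = means)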
3,069 | Among Matlab and Python, which language is good for general statistical data analysis? What are the pros and cons, other than accessibility, for each? | As a diehard Matlab user for the last 10+ years, I recommend you learn Python. Once you are sufficiently skilled in a language, when you work in a language you are learning, it will seem like you are not being productive enough, and you will fall back to using your default best language. At the very least, I would suggest you try to become equally proficient in a number of languages (I would suggest R as well). What I like about Matlab: I am proficient in it. It is the lingua franca among numerical analysts. the profiling tool is very good. This is the only reason I use Matlab instead of octave. There is a freeware clone, octave, which has good compliance with the reference implementation. What I do not like about Matlab: There is not a good system to manage third party (free or otherwise) packages and scripts. Mathworks controls the 'central file exchange', and installation of add-on packages seems very clunky, nothing like the excellent system that R has. Furthermore, Mathworks has no incentive to improve this situation, because they make money on selling toolboxes, which compete with freeware packages; Licenses for parallel computation in Matlab are insanely expensive; Much of the m-code, including many of the toolbox functions, and some builtins, were designed to be obviously correct, at the expense of efficiency and/or usability. The most glaring example of this is Matlab's median function, which performs a sort of the data, then takes the middle value . This has been the wrong algorithm since the 70's. saving graphs to file is dodgy at best in Matlab. I have not found my user experience to have improved over the last 5 years (when I started using Matlab instead of octave), even though Mathworks continues to add bells and whistles. This indicates that I am not their target customer, rather they are looking to expand market share by making things worse for power users. There are now 2 ways to do object-oriented programming in Matlab, which is confusing at best. Legacy code using the old style will persist for some time. The Matlab UI is written in Java, which has unpleasant ideas about memory management. | {
"source": [
"https://stats.stackexchange.com/questions/3069",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
3,136 | I have a data set with the following structure: a word | number of occurrences of a word in a document | a document id How can I perform a test for normal distribution in R? Probably it is an easy question but I am an R newbie. | If I understand your question correctly, then to test whether word occurrences in a set of documents follow a Normal distribution you can just use a Shapiro-Wilk test and some qqplots. For example, ## Generate two data sets
## First Normal, second from a t-distribution
words1 = rnorm(100); words2 = rt(100, df=3)
## Have a look at the densities
plot(density(words1));plot(density(words2))
## Perform the test
shapiro.test(words1); shapiro.test(words2)
## Plot using a qqplot
qqnorm(words1);qqline(words1, col = 2)
qqnorm(words2);qqline(words2, col = 2) The qqplot commands give: You can see that the second data set is clearly not Normal from the heavy tails ( More Info ). In the Shapiro-Wilk normality test, the p-value is large for the first data set (>.9) but very small for the second data set (<.01). This will lead you to reject the null hypothesis of normality for the second. | {
"source": [
"https://stats.stackexchange.com/questions/3136",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1389/"
]
} |
3,181 | If you think back to when you first started with time series analysis: what tools, R packages and internet resources do you wish you had known about? What I'm trying to ask is, where should one start? Specifically, are there any resources for R that really boil it down for someone who is "new" to time series analysis with R? | There is a Time Series Task View that aims to summarize all the time series packages for R. It highlights some core packages that provide some essential functionality. I would also recommend the book by Shumway and Stoffer and the associated website, although it is not so good for forecasting. My blog post on "Econometrics and R" provides a few other references that are useful. Then there is my own book on forecasting using R: Forecasting principles and practice .
"source": [
"https://stats.stackexchange.com/questions/3181",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/776/"
]
} |
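To complement the pointers in the answer above, here is a minimal getting-started sketch in R. It is illustrative only: it assumes the forecast package is installed and uses the built-in AirPassengers series, neither of which is mentioned in the original answer.
library(forecast)                     # install.packages("forecast") if needed
y <- AirPassengers                    # monthly airline passenger totals, ships with R
plot(stl(y, s.window = "periodic"))   # decompose into trend / seasonal / remainder
fit <- auto.arima(y)                  # automatic ARIMA model selection
fc <- forecast(fit, h = 24)           # forecast two years ahead
plot(fc)
accuracy(fit)                         # in-sample accuracy measures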
3,200 | Lets assume you are a social science researcher/econometrician trying to find relevant predictors of demand for a service. You have 2 outcome/dependent variables describing the demand (using the service yes/no, and the number of occasions). You have 10 predictor/independent variables that could theoretically explain the demand (e.g., age, sex, income, price, race, etc). Running two separate multiple regressions will yield 20 coefficients estimations and their p-values. With enough independent variables in your regressions you would sooner or later find at least one variable with a statistically significant correlation between the dependent and independent variables. My question: is it a good idea to correct the p-values for multiple tests if I want to include all independent variables in the regression? Any references to prior work are much appreciated. | It seems your question more generally addresses the problem of identifying good predictors. In this case, you should consider using some kind of penalized regression (methods dealing with variable or feature selection are relevant too), with e.g. L1, L2 (or a combination thereof, the so-called elasticnet ) penalties (look for related questions on this site, or the R penalized and elasticnet package, among others). Now, about correcting p-values for your regression coefficients (or equivalently your partial correlation coefficients) to protect against over-optimism (e.g. with Bonferroni or, better, step-down methods), it seems this would only be relevant if you are considering one model and seek those predictors that contribute a significant part of explained variance, that is if you don't perform model selection (with stepwise selection, or hierarchical testing). This article may be a good start: Bonferroni Adjustments in Tests for Regression Coefficients . Be aware that such correction won't protect you against multicollinearity issue, which affects the reported p-values. Given your data, I would recommend using some kind of iterative model selection techniques. In R for instance, the stepAIC function allows to perform stepwise model selection by exact AIC. You can also estimate the relative importance of your predictors based on their contribution to $R^2$ using boostrap (see the relaimpo package). I think that reporting effect size measure or % of explained variance are more informative than p-value, especially in a confirmatory model. It should be noted that stepwise approaches have also their drawbacks (e.g., Wald tests are not adapted to conditional hypothesis as induced by the stepwise procedure), or as indicated by Frank Harrell on R mailing , "stepwise variable selection based on AIC has all the problems of stepwise variable selection based on P-values. AIC is just a restatement of the P-Value" (but AIC remains useful if the set of predictors is already defined); a related question -- Is a variable significant in a linear regression model? -- raised interesting comments ( @Rob , among others) about the use of AIC for variable selection. I append a couple of references at the end (including papers kindly provided by @Stephan ); there is also a lot of other references on P.Mean . Frank Harrell authored a book on Regression Modeling Strategy which includes a lot of discussion and advices around this problem (§4.3, pp. 56-60). He also developed efficient R routines to deal with generalized linear models (See the Design or rms packages). 
So, I think you definitely have to take a look at it (his handouts are available on his homepage). References Whittingham, MJ, Stephens, P, Bradbury, RB, and Freckleton, RP (2006). Why do we still use stepwise modelling in ecology and behaviour? Journal of Animal Ecology , 75 , 1182-1189. Austin, PC (2008). Bootstrap model selection had similar performance for selecting authentic and noise variables compared to backward variable elimination: a simulation study . Journal of Clinical Epidemiology , 61(10) , 1009-1017. Austin, PC and Tu, JV (2004). Automated variable selection methods for logistic regression produced unstable models for predicting acute myocardial infarction mortality . Journal of Clinical Epidemiology , 57 , 1138–1146. Greenland, S (1994). Hierarchical regression for epidemiologic analyses of multiple exposures . Environmental Health Perspectives , 102(Suppl 8) , 33–39. Greenland, S (2008). Multiple comparisons and association selection in general epidemiology . International Journal of Epidemiology , 37(3) , 430-434. Beyene, J, Atenafu, EG, Hamid, JS, To, T, and Sung L (2009). Determining relative importance of variables in developing and validating predictive models . BMC Medical Research Methodology , 9 , 64. Bursac, Z, Gauss, CH, Williams, DK, and Hosmer, DW (2008). Purposeful selection of variables in logistic regression . Source Code for Biology and Medicine , 3 , 17. Brombin, C, Finos, L, and Salmaso, L (2007). Adjusting stepwise p-values in generalized linear models . International Conference on Multiple Comparison Procedures . -- see step.adj() in the R someMTP package. Wiegand, RE (2010). Performance of using multiple stepwise algorithms for variable selection . Statistics in Medicine , 29(15), 1647–1659. Moons KG, Donders AR, Steyerberg EW, and Harrell FE (2004). Penalized Maximum Likelihood Estimation to predict binary outcomes. Journal of Clinical Epidemiology , 57(12) , 1262–1270. Tibshirani, R (1996). Regression shrinkage and selection via the lasso . Journal of The Royal Statistical Society B , 58(1) , 267–288. Efron, B, Hastie, T, Johnstone, I, and Tibshirani, R (2004). Least Angle Regression . Annals of Statistics , 32(2) , 407-499. Flom, PL and Cassell, DL (2007). Stopping Stepwise: Why stepwise and similar selection methods are bad, and what you should use . NESUG 2007 Proceedings . Shtatland, E.S., Cain, E., and Barton, M.B. (2001). The perils of stepwise logistic regression and how to escape them using information criteria and the Output Delivery System . SUGI 26 Proceedings (pp. 222–226). | {
"source": [
"https://stats.stackexchange.com/questions/3200",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1458/"
]
} |
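To make the penalized-regression suggestion above concrete, here is a minimal sketch with the glmnet package on simulated data; the sample size, variable roles and coefficients are invented purely for illustration and are not taken from the question.
library(glmnet)
set.seed(1)
n <- 200; p <- 10
x <- matrix(rnorm(n * p), n, p)                          # 10 candidate predictors
y <- rbinom(n, 1, plogis(0.8 * x[, 1] - 0.5 * x[, 3]))   # only x1 and x3 truly matter
cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1) # lasso (L1) with cross-validation
coef(cvfit, s = "lambda.1se")                            # irrelevant coefficients shrunk to zero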
3,238 | I have a set of time series data. Each series covers the same period, although the actual dates in each time series may not all 'line up' exactly. That is to say, if the Time series were to be read into a 2D matrix, it would look something like this: date T1 T2 T3 .... TN
1/1/01 100 59 42 N/A
2/1/01 120 29 N/A 42.5
3/1/01 110 N/A 12 36.82
4/1/01 N/A 59 40 61.82
5/1/01 05 99 42 23.68
...
31/12/01 100 59 42 N/A
etc I want to write an R script that will segregate the time series {T1, T2, ... TN} into 'families' where a family is defined as a set of series which "tend to move in sympathy" with each other. For the 'clustering' part, I will need to select/define a kind of distance measure. I am not quite sure how to go about this, since I am dealing with time series, and a pair of series that may move in sympathy over one interval, may not do so in a subsequent interval. I am sure there are far more experienced/clever people than me on here, so I would be grateful for any suggestions, ideas on what algorithm/heuristic to use for the distance measure and how to use that in clustering the time series. My guess is that there is NOT an established robust statistic method for doing this, so I would be very interested to see how people approach/solve this problem - thinking like a statistician. | In data streaming and mining of time series databases, a common approach is to transform the series to a symbolic representation, then use a similarity metric, such as Euclidean distance, to cluster the series. The most popular representations are SAX (Keogh & Lin) or the newer iSAX (Shieh & Keogh): Symbolic Aggregate approXimation iSAX: Indexing and Mining Terabyte Sized Time Series The pages above also contain references to distance metrics and clustering. Keogh and crew are into reproducible research and pretty receptive to releasing their code. So you could email them and ask. I believe they tend to work in MATLAB/C++ though. There was a recent effort to produce a Java and R implementation: jmotif I don't know how far along it is -- it's geared towards motif finding, but, depending on how far they've gotten, it should have the necessary bits you need to put something together for your needs (iSAX and distance metrics: since this part is common to clustering and motif finding). | {
"source": [
"https://stats.stackexchange.com/questions/3238",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1216/"
]
} |
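Before reaching for SAX/iSAX, a simple baseline in the spirit of the answer above can be sketched in base R: standardise each series so that only its shape matters, use a correlation-based dissimilarity, and feed it into hierarchical clustering. This is a minimal sketch on a made-up matrix with the series in columns, not code from the cited authors.
set.seed(1)
X <- matrix(rnorm(100 * 8), nrow = 100, ncol = 8)        # 100 dates, 8 toy series
Z <- scale(X)                                            # z-score each series (column-wise)
D <- as.dist(1 - cor(Z, use = "pairwise.complete.obs"))  # 1 - correlation as dissimilarity; tolerates NAs
hc <- hclust(D, method = "ward.D2")
plot(hc)                                                 # inspect the dendrogram
cutree(hc, k = 3)                                        # candidate "families"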
3,242 | I have some data to which I am trying to fit a trendline. I believe the data to follow a power law, and so have plotted the data on log-log axes looking for a straight line. This has resulted in an (almost) straight line and so in Excel I have added a trendline for a power law. Being a stats newb, my question is, what is now the best way for me to go from "well the line looks like it fits pretty well" to "numeric property $x$ proves that this graph is fitted appropriately by a power law"? In Excel I can get an r-squared value, though given my limited knowledge of statistics, I don't even know whether this is actually appropriate under my specific circumstances. I have included an image below showing the plot of the data I am working with in Excel. I have a little bit of experience with R, so if my analysis is being limited by my tools, I am open to suggestions on how to go about improving it using R. | See Aaron Clauset's page: Power-law Distributions in Empirical Data which has links to code for fitting power laws (Matlab, R, Python, C++) as well as a paper by Clauset and Shalizi you should read first. You might want to read Clauset's and Shalizi's blogs posts on the paper first: Power laws and all that jazz So You Think You Have a Power Law — Well Isn't That Special? A summary of the last link could be: Lots of distributions give you straight-ish lines on a log-log plot. Abusing linear regression makes the baby Gauss cry. Fitting a line to your log-log plot by least squares is a bad idea. Use maximum likelihood to estimate the scaling exponent. Use goodness of fit to estimate where the scaling region begins. Use a goodness-of-fit test to check goodness of fit. Use Vuong's test to check alternatives, and be prepared for disappointment. | {
"source": [
"https://stats.stackexchange.com/questions/3242",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/870/"
]
} |
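To make the "use maximum likelihood" point above concrete: for a continuous power law $p(x) \propto x^{-\alpha}$ for $x \ge x_{\min}$, the Clauset-Shalizi-Newman estimate of the exponent has the closed form $\hat\alpha = 1 + n/\sum_i \ln(x_i/x_{\min})$. A minimal R sketch on simulated data, with $x_{\min}$ fixed by hand (their full method also estimates $x_{\min}$ and runs the goodness-of-fit test):
set.seed(1)
alpha <- 2.5; xmin <- 1
x <- xmin * runif(5000)^(-1 / (alpha - 1))       # simulate a continuous power law by inverse CDF
alpha_hat <- 1 + length(x) / sum(log(x / xmin))  # maximum likelihood estimate of the exponent
se_hat <- (alpha_hat - 1) / sqrt(length(x))      # asymptotic standard error
c(alpha_hat = alpha_hat, se = se_hat)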
3,331 | I have sales data for a series of outlets, and want to categorise them based on the shape of their curves over time. The data looks roughly like this (but obviously isn't random, and has some missing data): n.quarters <- 100
n.stores <- 20
if (exists("test.data")){
rm(test.data)
}
for (i in 1:n.stores){
interval <- runif(1, 1, 200)
new.df <- data.frame(
var0 = interval + c(0, cumsum(runif(49, -5, 5))),
date = seq.Date(as.Date("1990-03-30"), by="3 month", length.out=n.quarters),
store = rep(paste("Store", i, sep=""), n.quarters))
if (exists("test.data")){
test.data <- rbind(test.data, new.df)
} else {
test.data <- new.df
}
}
test.data$store <- factor(test.data$store) I would like to know how I can cluster based on the shape of the curves in R. I had considered the following approach: Create a new column by linearly transforming each store's var0 to a value between 0.0 and 1.0 for the entire time series. Cluster these transformed curves using the kml package in R. I have two questions: Is this a reasonable exploratory approach? How can I transform my data into the longitudinal data format that kml will understand? Any R snippets would be much appreciated! | Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you to read them carefully, especially those on functional data analysis. Try googling for "Functional Clustering of Longitudinal Data", or the PACE Matlab toolbox which is specifically concerned with model-based clustering of irregularly sampled trajectories (Peng and Müller, Distance-based clustering of sparsely observed stochastic processes, with applications to online auctions , Annals of Applied Statistics 2008 2: 1056). I can imagine that there may be a good statistical framework for financial time series, but I don't know about that. The kml package basically relies on k-means, working (by default) on euclidean distances between the $t$ measurements observed on $n$ individuals. What is called a trajectory is just the series of observed values for individual $i$, $y_i=(y_{i1},y_{i2},\dots,y_{it})$, and $d(y_i,y_j)=\sqrt{t^{-1}\sum_{k=1}^t(y_{ik}-y_{jk})^2}$. Missing data are handled through a slight modification of the preceding distance measure (Gower adjustment) associated to a nearest neighbor-like imputation scheme (for computing Calinski criterion). As I don't represent myself what you real data would look like, I cannot say if it will work. At least, it work with longitudinal growth curves, "polynomial" shape, but I doubt it will allow you to detect very specific patterns (like local minima/maxima at specific time-points with time-points differing between clusters, by a translation for example). If you are interested in clustering possibly misaligned curves, then you definitively have to look at other solutions; Functional clustering and alignment , from Sangalli et al., and references therein may provide a good starting point. Below, I show you some code that may help to experiment with it (my seed is generally set at 101, if you want to reproduce the results). Basically, for using kml you just have to construct a clusterizLongData object (an id number for the first column, and the $t$ measurements in the next columns). library(lattice)
xyplot(var0 ~ date, data=test.data, groups=store, type=c("l","g"))
tw <- reshape(test.data, timevar="date", idvar="store", direction="wide")
parallel(tw[,-1], horizontal.axis=F,
scales=list(x=list(rot=45,
at=seq(1,ncol(tw)-1,by=2),
labels=substr(names(tw[,-1])[seq(1,ncol(tw)-1,by=2)],6,100),
cex=.5)))
library(kml)
names(tw) <- c("id", paste("t", 1:(ncol(tw)-1)))
tw.cld <- as.cld(tw)
cld.res <- kml(tw.cld,nbRedrawing=5)
plot(tw.cld) The next two figures are the raw simulated data and the five-cluster solution (according to Calinski criterion, also used in the fpc package). I don't show the scaled version . | {
"source": [
"https://stats.stackexchange.com/questions/3331",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/179/"
]
} |
3,372 | I must clarify immediately that I am a practicing software developer, not a statistician, and that my college stats class was a very long time ago… That said, I would like to know if there is a method for accumulating a set of descriptive statistics that could then be used to produce a boxplot, and that does not entail storing a bunch of individual samples. What I am trying to do is produce a graphical summary of queue service times within a complex multi-queue process. I have in the past used a package called tnftools that allowed large samples to be accumulated and then post-processed into a nice graph of response times and outliers… But tnftools is not available for my current platform. Ideally I would like to be able to accumulate a set of descriptive statistics "on the fly" as the process runs, and then extract the data for analysis on demand. But I cannot simply have the process accumulate samples, as the memory/IO involved in doing so would have an unacceptable impact on the performance of the system. | For an 'on the fly' boxplot, you will need 'on the fly' min/max (trivial) as well as 'on the fly' quartiles (0.25, 0.5 = median, and 0.75). A lot of work has been going on recently on the problem of online (or 'on the fly') algorithms for median computation. A recent development is binmedian . As a side benefit, it also enjoys better worst-case complexity than quickselect (which is neither online nor single-pass). You can find the associated paper as well as C and FORTRAN code online here . You may have to check the licensing details with the authors. You will also need a single-pass algorithm for the quartiles, for which you can use the approach above and the following recursive characterization of the quartiles in terms of medians: $Q_{0.75}(x) \approx Q_{0.5}(x_i:x_i > Q_{0.5}(x))$ and $Q_{0.25}(x) \approx Q_{0.5}(x_i:x_i < Q_{0.5}(x))$ i.e. the upper (lower) quartile is very close to the median of those observations that are larger (smaller) than the median. Addendum: There exists a host of older multi-pass methods for computing quantiles. A popular approach is to maintain/update a deterministically sized reservoir of observations randomly selected from the stream and recursively compute quantiles (see this review) on this reservoir. This approach (and related ones) is superseded by the one proposed above. | {
"source": [
"https://stats.stackexchange.com/questions/3372",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1515/"
]
} |
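The reservoir idea mentioned in the addendum above is easy to prototype. Below is a minimal R sketch of standard reservoir sampling (Vitter's Algorithm R): keep a fixed-size random sample of the stream and compute the five-number summary for the boxplot from it on demand. The function name and the simulated "service times" are invented for illustration; exact min/max should still be tracked separately, as the answer notes.
update_reservoir <- function(res, x, n_seen, k = length(res)) {
  # res: current reservoir; x: new observation; n_seen: observations seen so far, including x
  if (n_seen <= k) { res[n_seen] <- x; return(res) }
  j <- sample.int(n_seen, 1)   # keep x with probability k / n_seen
  if (j <= k) res[j] <- x
  res
}
set.seed(1)
k <- 1000; res <- numeric(k); n_seen <- 0
for (x in rexp(1e5, rate = 1 / 50)) {   # stand-in for the stream of queue service times
  n_seen <- n_seen + 1
  res <- update_reservoir(res, x, n_seen, k)
}
fivenum(res)                            # min, Q1, median, Q3, max for an approximate boxplot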
3,392 | It seems that lots of people (including me) like to do exploratory data analysis in Excel. Some limitations, such as the number of rows allowed in a spreadsheet, are a pain but in most cases don't make it impossible to use Excel to play around with data. A paper by McCullough and Heiser , however, practically screams that you will get your results all wrong -- and probably burn in hell as well -- if you try to use Excel. Is this paper correct or is it biased? The authors do sound like they hate Microsoft. | Use the right tool for the right job and exploit the strengths of the tools you are familiar with. In Excel's case there are some salient issues: Please don't use a spreadsheet to manage data, even if your data will fit into one. You're just asking for trouble, terrible trouble. There is virtually no protection against typographical errors, wholesale mixing up of data, truncating data values, etc., etc. Many of the statistical functions indeed are broken. The t distribution is one of them. The default graphics are awful. It is missing some fundamental statistical graphics, especially boxplots and histograms. The random number generator is a joke (but despite that is still effective for educational purposes). Avoid the high-level functions and most of the add-ins; they're c**p. But this is just a general principle of safe computing: if you're not sure what a function is doing, don't use it. Stick to the low-level ones (which include arithmetic functions, ranking, exp, ln, trig functions, and--within limits--the normal distribution functions). Never use an add-in that produces a graphic: it's going to be terrible. (NB: it's dead easy to create your own probability plots from scratch. They'll be correct and highly customizable.) In its favor, though, are the following: Its basic numerical calculations are as accurate as double precision floats can be. They include some useful ones, such as log gamma. It's quite easy to wrap a control around input boxes in a spreadsheet, making it possible to create dynamic simulations easily. If you need to share a calculation with non-statistical people, most will have some comfort with a spreadsheet and none at all with statistical software, no matter how cheap it may be. It's easy to write effective numerical macros, including porting old Fortran code, which is quite close to VBA. Moreover, the execution of VBA is reasonably fast. (For example, I have code that accurately computes non-central t distributions from scratch and three different implementations of Fast Fourier Transforms.) It supports some effective simulation and Monte-Carlo add-ons like Crystal Ball and @Risk. (They use their own RNGs, by the way--I checked.) The immediacy of interacting directly with (a small set of) data is unparalleled: it's better than any stats package, Mathematica, etc. When used as a giant calculator with loads of storage, a spreadsheet really comes into its own. Good EDA, using robust and resistant methods, is not easy, but after you have done it once, you can set it up again quickly. With Excel you can effectively reproduce all the calculations (although only some of the plots) in Tukey's EDA book, including median polish of n-way tables (although it's a bit cumbersome). In direct answer to the original question, there is a bias in that paper: it focuses on the material that Excel is weakest at and that a competent statistician is least likely to use. That's not a criticism of the paper, though, because warnings like this need to be broadcast. | {
"source": [
"https://stats.stackexchange.com/questions/3392",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/666/"
]
} |
3,425 | I am not sure how this should be termed, so please correct me if you know a better term. I've got two lists. One of 55 items (e.g: a vector of strings), the other of 92. The item names are similar but not identical. I wish to find the best candidate s in the 92 list to the items in the 55 list (I will then go through it and pick the correct fitting). How can it be done? Ideas I had where to: See all the ones that match (using something list ?match) Try a distance matrix between the strings vectors, but I am not sure how to best define it (number of identical letters, what about order of strings?) So what package/functions/field-of-research deals with such a task, and how? Update: Here is an example of the vectors I wish to match vec55 <- c("Aeropyrum pernix", "Archaeoglobus fulgidus", "Candidatus_Korarchaeum_cryptofilum",
"Candidatus_Methanoregula_boonei_6A8", "Cenarchaeum_symbiosum",
"Desulfurococcus_kamchatkensis", "Ferroplasma acidarmanus", "Haloarcula_marismortui_ATCC_43049",
"Halobacterium sp.", "Halobacterium_salinarum_R1", "Haloferax volcanii",
"Haloquadratum_walsbyi", "Hyperthermus_butylicus", "Ignicoccus_hospitalis_KIN4",
"Metallosphaera_sedula_DSM_5348", "Methanobacterium thermautotrophicus",
"Methanobrevibacter_smithii_ATCC_35061", "Methanococcoides_burtonii_DSM_6242"
)
vec91 <- c("Acidilobus saccharovorans 345-15", "Aciduliprofundum boonei T469",
"Aeropyrum pernix K1", "Archaeoglobus fulgidus DSM 4304", "Archaeoglobus profundus DSM 5631",
"Caldivirga maquilingensis IC-167", "Candidatus Korarchaeum cryptofilum OPF8",
"Candidatus Methanoregula boonei 6A8", "Cenarchaeum symbiosum A",
"Desulfurococcus kamchatkensis 1221n", "Ferroglobus placidus DSM 10642",
"Halalkalicoccus jeotgali B3", "Haloarcula marismortui ATCC 43049",
"Halobacterium salinarum R1", "Halobacterium sp. NRC-1", "Haloferax volcanii DS2",
"Halomicrobium mukohataei DSM 12286", "Haloquadratum walsbyi DSM 16790",
"Halorhabdus utahensis DSM 12940", "Halorubrum lacusprofundi ATCC 49239",
"Haloterrigena turkmenica DSM 5511", "Hyperthermus butylicus DSM 5456",
"Ignicoccus hospitalis KIN4/I", "Ignisphaera aggregans DSM 17230",
"Metallosphaera sedula DSM 5348", "Methanobrevibacter ruminantium M1",
"Methanobrevibacter smithii ATCC 35061", "Methanocaldococcus fervens AG86",
"Methanocaldococcus infernus ME", "Methanocaldococcus jannaschii DSM 2661",
"Methanocaldococcus sp. FS406-22", "Methanocaldococcus vulcanius M7",
"Methanocella paludicola SANAE", "Methanococcoides burtonii DSM 6242",
"Methanococcus aeolicus Nankai-3", "Methanococcus maripaludis C5",
"Methanococcus maripaludis C6", "Methanococcus maripaludis C7",
"Methanococcus maripaludis S2", "Methanococcus vannielii SB",
"Methanococcus voltae A3", "Methanocorpusculum labreanum Z",
"Methanoculleus marisnigri JR1", "Methanohalobium evestigatum Z-7303",
"Methanohalophilus mahii DSM 5219", "Methanoplanus petrolearius DSM 11571",
"Methanopyrus kandleri AV19", "Methanosaeta thermophila PT",
"Methanosarcina acetivorans C2A", "Methanosarcina barkeri str. Fusaro",
"Methanosarcina mazei Go1", "Methanosphaera stadtmanae DSM 3091",
"Methanosphaerula palustris E1-9c", "Methanospirillum hungatei JF-1",
"Methanothermobacter marburgensis str. Marburg", "Methanothermobacter thermautotrophicus str. Delta H",
"Nanoarchaeum equitans Kin4-M", "Natrialba magadii ATCC 43099",
"Natronomonas pharaonis DSM 2160", "Nitrosopumilus maritimus SCM1",
"Picrophilus torridus DSM 9790", "Pyrobaculum aerophilum str. IM2",
"Pyrobaculum arsenaticum DSM 13514", "Pyrobaculum calidifontis JCM 11548",
"Pyrobaculum islandicum DSM 4184", "Pyrococcus abyssi GE5", "Pyrococcus furiosus DSM 3638",
"Pyrococcus horikoshii OT3", "Staphylothermus hellenicus DSM 12710",
"Staphylothermus marinus F1", "Sulfolobus acidocaldarius DSM 639",
"Sulfolobus islandicus L.D.8.5", "Sulfolobus islandicus L.S.2.15",
"Sulfolobus islandicus M.14.25", "Sulfolobus islandicus M.16.27",
"Sulfolobus islandicus M.16.4", "Sulfolobus islandicus Y.G.57.14",
"Sulfolobus islandicus Y.N.15.51", "Sulfolobus solfataricus P2",
"Sulfolobus tokodaii str. 7", "Thermococcus gammatolerans EJ3",
"Thermococcus kodakarensis KOD1", "Thermococcus onnurineus NA1",
"Thermococcus sibiricus MM 739", "Thermofilum pendens Hrk 5",
"Thermoplasma acidophilum DSM 1728", "Thermoplasma volcanium GSS1",
"Thermoproteus neutrophilus V24Sta", "Thermosphaera aggregans DSM 11486",
"Vulcanisaeta distributa DSM 14429", "uncultured methanogenic archaeon RC-I"
) | I've had similar problems (seen here: https://stackoverflow.com/questions/2231993/merging-two-data-frames-using-fuzzy-approximate-string-matching-in-r ). Most of the recommendations that I received centred on pmatch() , agrep() , grep() and grepl() : functions that, if you take the time to look through them, will give you some insight into approximate string matching, either by approximate string or by approximate regex. Without seeing the strings, it's hard to provide you with a concrete example of how to match them. If you could provide us with some example data I'm sure we could come to a solution. Another option that I found works well is to flatten the strings with tolower() , looking at the first letter of each word within the string and then comparing. Sometimes that works without a hitch. Then there are more complicated things like the distances mentioned in other answers. Sometimes these work, sometimes they're horrible - it really depends on the strings. Can we see them? Update It looks like agrep() will do the trick for most of these. Note that agrep() is just R's implementation of approximate matching based on the Levenshtein distance. agrep(vec55[1],vec91,value=T) Some don't match, though; I'm not even sure if Ferroplasma acidarmanus is the same as Ferroglobus placidus DSM 10642, for example: agrep(vec55[7],vec91,value=T) I think you may be a bit SOL for some of these, and perhaps creating an index from scratch is the best bet. I.e., create a table with id numbers for vec55, and then manually create a reference to the ids of vec55 in vec91. Painful, I know, but a lot of it can be done with agrep(). | {
"source": [
"https://stats.stackexchange.com/questions/3425",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
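As a follow-up to the agrep() suggestions above, base R's adist() returns a full generalized Levenshtein distance matrix, which makes it easy to pull out the best candidate in vec91 for every element of vec55 in one pass. This is only an illustrative sketch: the normalisation (lower-casing and replacing underscores) is an arbitrary choice, and the resulting matches still need manual review.
d <- adist(tolower(gsub("_", " ", vec55)),   # normalise case and underscores in the queries
           tolower(vec91))                   # 55 x 91 matrix of edit distances
best <- apply(d, 1, which.min)               # index of the closest vec91 entry for each query
candidates <- data.frame(query = vec55,
                         match = vec91[best],
                         dist  = d[cbind(seq_along(best), best)])
head(candidates[order(candidates$dist), ])   # review the most confident matches first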
3,458 | I am looking for an alternative to Classification Trees which might yield better predictive power. The data I am dealing with has factors for both the explanatory and the explained variables. I remember coming across random forests and neural networks in this context, although never tried them before, are there another good candidate for such a modeling task (in R, obviously)? | I think it would be worth giving a try to Random Forests ( randomForest ); some references were provided in response to related questions: Feature selection for “final” model when performing cross-validation in machine learning ; Can CART models be made robust? . Boosting/bagging render them more stable than a single CART which is known to be very sensitive to small perturbations. Some authors argued that it performed as well as penalized SVM or Gradient Boosting Machines (see, e.g. Cutler et al., 2009). I think they certainly outperform NNs. Boulesteix and Strobl provides a nice overview of several classifiers in Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction (BMC MRM 2009 9: 85). I've heard of another good study at the IV EAM meeting , which should be under review in Statistics in Medicine , João Maroco , Dina Silva, Manuela Guerreiro, Alexandre de Mendonça. Do
Random Forests Outperform Neural Networks, Support Vector Machines and Discriminant Analysis classifiers? A case study in the evolution to dementia in elderly patients with
cognitive complaints I also like the caret package: it is well documented and allows to compare predictive accuracy of different classifiers on the same data set. It takes care of managing training /test samples, computing accuracy, etc in few user-friendly functions. The glmnet package, from Friedman and coll., implements penalized GLM (see the review in the Journal of Statistical Software ), so you remain in a well-known modeling framework. Otherwise, you can also look for association rules based classifiers (see the CRAN Task View on Machine Learning or the Top 10 algorithms in data mining for a gentle introduction to some of them). I'd like to mention another interesting approach that I plan to re-implement in R (actually, it's Matlab code) which is Discriminant Correspondence Analysis from Hervé Abdi. Although initially developed to cope with small-sample studies with a lot of explanatory variables (eventually grouped into coherent blocks), it seems to efficiently combine classical DA with data reduction techniques. References Cutler, A., Cutler, D.R., and Stevens, J.R. (2009). Tree-Based Methods , in High-Dimensional Data Analysis in Cancer Research , Li, X. and Xu, R. (eds.), pp. 83-101, Springer. Saeys, Y., Inza, I., and Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics . Bioinformatics, 23(19): 2507-2517. | {
"source": [
"https://stats.stackexchange.com/questions/3458",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
3,460 | For some of us, refereeing papers is part of the job. When refereeing statistical methodology papers, I think advice from other subject areas is fairly useful, i.e. computer science and Maths . This question concerns reviewing more applied statistical papers. By this I mean, the paper is submitted to a non-statistical/mathematical journal and statistics is just mentioned in the "methods" section. Some particular questions: How much effort should we put in to understand the application area? How much time should I spend on a report? How picky are you when looking at figures/tables. How do you cope with the data not being available. Do you try and rerun the analysis used. What's the maximum number of papers your would review in a year? Have a missed any questions? Feel free to edit or add a comment. Edit I coming to this question as a statistician reviewing a biology paper, but I'm interested in the statistical review of any non-mathematical discipline. I'm not sure if this should be a CW. On one hand it's a bit open, but on the other I can see myself accepting an answer. Also, answers will probably be fairly long. | I am not sure about which area of science you are referring to (I'm sure the answer would be really different if dealing with biology vs physics for instance...) Anyway, as a biologist, I will answer from a "biological" point of view: How much effort should we put in to understand the application area? I tend at least to read the previous papers from the same authors and look for a few review on the subject if I am not too familiar with it. This is especially true when dealing with new techniques I don't know, because I need to understand if they did all the proper controls etc. How much time should I spend on a report? As much as needed (OK, dumb answer, I know! :P)
In general I would not like someone reviewing my paper to do an approximative job just because he/she has other things to do, so I try not to do it myself. How picky are you when looking at figures/tables. Quite picky. Figures are the first thing you look at when browsing through a paper. They need to be consistent (e.g. right titles on the axes, correct legend etc.). On occasion I have suggested to use a different kind of plot to show data when I thought the one used was not the best. This happens a lot in biology, a field that is dominated by the "barplot +/- SEM" type of graph.
I'm also quite picky about the "materials and methods" section: a perfect statistical analysis on an inherently wrong biological model is completely useless. How do you cope with the data not being available? You just do, and trust the authors, I guess. In many cases in biology there's not much you can do, especially when dealing with things like imaging or animal behaviour. Unless you want people to publish tons of images, videos, etc. (which you most likely would not go through anyway), that may be very impractical. If you think the data are really necessary, ask the authors to provide them as supplementary data/figures. Do you try and rerun the analysis used? Only if I have serious doubts about the conclusions drawn by the authors.
In biology there's often a difference between what is (or is not) "statistically significant" and what is "biologically significant". I prefer a leaner statistical analysis with good biological reasoning to the other way around. But again, in the very unlikely event that I were to review a biostatistics paper (ahah, that would be some fun!!!), I would probably pay much more attention to the stats than to the biology in there. | {
"source": [
"https://stats.stackexchange.com/questions/3460",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
]
} |
3,466 | Imagine the following common design: 100 participants are randomly allocated to either a treatment or a control group the dependent variable is numeric and measured pre- and post- treatment Three obvious options for analysing such data are: Test the group by time interaction effect in mixed ANOVA Do an ANCOVA with condition as the IV and the pre- measure as the covariate and post measure as the DV Do a t-test with condition as the IV and pre-post change scores as the DV Question: What is the best way to analyse such data? Are there reasons to prefer one approach over another? | There is a huge literature around this topic (change/gain scores), and I think the best references come from the biomedical domain, e.g. Senn, S (2007). Statistical issues in
drug development . Wiley (chap. 7 pp.
96-112) In biomedical research, interesting work has also been done in the study of cross-over trials (esp. in relation to carry-over effects, although I don't know how applicable it is to your study). From Gain Score t to ANCOVA F (and vice versa) , from Knapp & Schaffer, provides an interesting review of ANCOVA vs. t approach (the so-called Lord's Paradox). The simple analysis of change scores is not the recommended way for pre/post design according to Senn in his article Change from baseline and analysis of covariance revisited (Stat. Med. 2006 25(24)). Moreover, using a mixed-effects model (e.g. to account for the correlation between the two time points) is not better because you really need to use the "pre" measurement as a covariate to increase precision (through adjustment). Very briefly: The use of change scores (post $-$ pre, or outcome $-$ baseline) does not solve the problem of imbalance; the correlation between pre and post measurement is < 1, and the correlation between pre and (post $-$ pre) is generally negative -- it follows that if the treatment (your group allocation) as measured by raw scores happens to be an unfair disadvantage compared to control, it will have an unfair advantage with change scores. The variance of the estimator used in ANCOVA is generally lower than that for raw or change scores (unless correlation between pre and post equals 1). If the pre/post relationships differ between the two groups (slope), it is not as much of a problem than for any other methods (the change scores approach also assumes that the relationship is identical between the two groups -- the parallel slope hypothesis). Under the null hypothesis of equality of treatment (on the outcome), no interaction treatment x baseline is expected; it is dangerous to fit such a model, but in this case one must use centered baselines (otherwise, the treatment effect is estimated at the covariate origin). I also like Ten Difference Score Myths from Edwards, although it focuses on difference scores in a different context; but here is an annotated bibliography on the analysis of pre-post change (unfortunately, it doesn't cover very recent work). Van Breukelen also compared ANOVA vs. ANCOVA in randomized and non-randomized setting, and his conclusions support the idea that ANCOVA is to be preferred, at least in randomized studies (which prevent from regression to the mean effect). | {
"source": [
"https://stats.stackexchange.com/questions/3466",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/183/"
]
} |
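A minimal R sketch of two of the analyses compared above, on simulated data; the sample size, effect sizes and variable names are invented for illustration only.
set.seed(1)
n <- 100
group <- factor(rep(c("control", "treatment"), each = n / 2))
pre   <- rnorm(n, 50, 10)
post  <- 0.7 * pre + 5 * (group == "treatment") + rnorm(n, 15, 8)
d     <- data.frame(group, pre, post, change = post - pre)
## ANCOVA: post on baseline plus condition (the approach recommended above)
summary(lm(post ~ pre + group, data = d))
## t-test on change scores; with only two time points this is equivalent to
## testing the group-by-time interaction in the mixed ANOVA
t.test(change ~ group, data = d)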
3,476 | Python matplotlib has a boxplot command . Normally, all the parts of the graph are numerically ticked. How can I change the ticks to names instead of positions? For illustration, I mean the Mon Tue Wed labels like in this boxplot: | Use the second argument of xticks to set the labels: import numpy as np
import matplotlib.pyplot as plt
data = [[np.random.rand(100)] for i in range(3)]
plt.boxplot(data)
plt.xticks([1, 2, 3], ['mon', 'tue', 'wed']) This example deliberately avoids pylab, because pylab is a convenience module that bulk-imports matplotlib.pyplot (for plotting) and numpy (for mathematics and working with arrays) into a single namespace. Although many examples use pylab , it is no longer recommended . | {
"source": [
"https://stats.stackexchange.com/questions/3476",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/190/"
]
} |
3,539 | Which inter-rater reliability methods are most appropriate for ordinal or interval data? I believe that "Joint probability of agreement" or "Kappa" are designed for nominal data. Whilst "Pearson" and "Spearman" can be used, they are mainly used for two raters (although they can be used for more than two raters). What other measures are suitable for ordinal or interval data, i.e. more than two raters? | The Kappa ( $\kappa$ ) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up). Extensions for the case of multiple raters exist (2, pp. 284–291). In the case of ordinal data , you can use the weighted $\kappa$ , which basically reads as usual $\kappa$ with off-diagonal elements contributing to the measure of agreement. Fleiss (3) provided guidelines to interpret $\kappa$ values but these are merely rules of thumbs. The $\kappa$ statistic is asymptotically equivalent to the ICC estimated from a two-way random effects ANOVA, but significance tests and SE coming from the usual ANOVA framework are not valid anymore with binary data. It is better to use bootstrap to get confidence interval (CI). Fleiss (8) discussed the connection between weighted kappa and the intraclass correlation (ICC). It should be noted that some psychometricians don't very much like $\kappa$ because it is affected by the prevalence of the object of measurement much like predictive values are affected by the prevalence of the disease under consideration, and this can lead to paradoxical results. Inter-rater reliability for $k$ raters can be estimated with Kendall’s coefficient of concordance, $W$ . When the number of items or units that are rated $n > 7$ , $k(n − 1)W \sim \chi^2(n − 1)$ . (2, pp. 269–270). This asymptotic approximation is valid for moderate value of $n$ and $k$ (6), but with less than 20 items $F$ or permutation tests are more suitable (7). There is a close relationship between Spearman’s $\rho$ and Kendall’s $W$ statistic: $W$ can be directly calculated from the mean of the pairwise Spearman correlations (for untied observations only). Polychoric (ordinal data) correlation may also be used as a measure of inter-rater agreement. Indeed, they allow to estimate what would be the correlation if ratings were made on a continuous scale, test marginal homogeneity between raters. In fact, it can be shown that it is a special case of latent trait modeling, which allows to relax distributional assumptions (4). About continuous (or so assumed) measurements, the ICC which quantifies the proportion of variance attributable to the between-subject variation is fine. Again, bootstraped CIs are recommended. As @ars said, there are basically two versions -- agreement and consistency -- that are applicable in the case of agreement studies (5), and that mainly differ on the way sum of squares are computed; the “consistency” ICC is generally estimated without considering the Item×Rater interaction. The ANOVA framework is useful with specific block design where one wants to minimize the number of ratings ( BIBD ) -- in fact, this was one of the original motivation of Fleiss's work. It is also the best way to go for multiple raters . The natural extension of this approach is called the Generalizability Theory . A brief overview is given in Rater Models: An Introduction , otherwise the standard reference is Brennan's book, reviewed in Psychometrika 2006 71(3) . 
As for general references, I recommend chapter 3 of Statistics in Psychiatry , from Graham Dunn (Hodder Arnold, 2000). For a more complete treatment of reliability studies, the best reference to date is Dunn, G (2004). Design and Analysis of
Reliability Studies . Arnold. See the
review in the International Journal
of Epidemiology . A good online introduction is available on John Uebersax's website, Intraclass Correlation and Related Methods ; it includes a discussion of the pros and cons of the ICC approach, especially with respect to ordinal scales. Relevant R packages for two-way assessment (ordinal or continuous measurements) are found in the Psychometrics Task View; I generally use either the psy , psych , or irr packages. There's also the concord package but I never used it. For dealing with more than two raters, the lme4 package is the way to go for it allows to easily incorporate random effects, but most of the reliability designs can be analysed using the aov() because we only need to estimate variance components. References J Cohen. Weighted kappa: Nominal scale agreement with provision for scales disagreement of partial credit. Psychological Bulletin , 70 , 213–220, 1968. S Siegel and Jr N John Castellan. Nonparametric Statistics for the Behavioral
Sciences . McGraw-Hill, Second edition, 1988. J L Fleiss. Statistical Methods for Rates and Proportions . New York: Wiley, Second
edition, 1981. J S Uebersax. The tetrachoric and polychoric correlation coefficients . Statistical Methods for Rater Agreement web site, 2006. Available at: http://john-uebersax.com/stat/tetra.htm . Accessed February 24, 2010. P E Shrout and J L Fleiss. Intraclass correlation: Uses in assessing rater reliability . Psychological Bulletin , 86 , 420–428, 1979. M G Kendall and B Babington Smith. The problem of m rankings . Annals of Mathematical Statistics , 10 , 275–287, 1939. P Legendre. Coefficient of concordance . In N J Salkind, editor, Encyclopedia of Research Design . SAGE Publications, 2010. J L Fleiss. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability . Educational and Psychological Measurement , 33 , 613-619, 1973. | {
"source": [
"https://stats.stackexchange.com/questions/3539",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1564/"
]
} |
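A minimal sketch of the two-rater ordinal case with the irr package mentioned above, on simulated ratings; the rating scale and noise model are made up, and the exact argument names should be checked against the package documentation.
library(irr)
set.seed(1)
true <- sample(1:5, 40, replace = TRUE)                           # 40 subjects on a 5-point scale
r1 <- pmin(pmax(true + sample(-1:1, 40, replace = TRUE), 1), 5)   # two noisy raters
r2 <- pmin(pmax(true + sample(-1:1, 40, replace = TRUE), 1), 5)
ratings <- cbind(r1, r2)
kappa2(ratings, weight = "squared")   # quadratically weighted kappa for ordinal ratings
icc(ratings, model = "twoway", type = "agreement", unit = "single")   # ICC(A,1)
kendall(ratings, correct = TRUE)      # Kendall's W, corrected for ties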
3,542 | I've seen various theoretical treatments of graphics, such as the Grammar of Graphics . But I have seen nothing equivalent with regard to tables. Over time I have developed an informal model of good practice in table design.
However, I'd like to be able to provide a good reference to students.
The APA Style Manual has a few tips on table design, but it is only a starting point. Question:
What is a good resource that provides theoretical and practical advice on the presentation of numeric results in tables? UPDATE: It would be particularly useful to have a good free online resource. Note: I'm not sure if this should be community wiki. I feel as if there might be a correct answer. | Ed Tufte has a few pages on this in his classic "The Visual Display of Quantitative Information" . For a much more detailed treatment, there is Jane Miller's Chicago Guide to Writing about Numbers . I've never seen anything else like it. It has a whole chapter on "Creating Effective Tables". | {
"source": [
"https://stats.stackexchange.com/questions/3542",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/183/"
]
} |
3,549 | In a multiple linear regression, why is it possible to have a highly significant F statistic (p<.001) but have very high p-values on all the regressor's t tests? In my model, there are 10 regressors. One has a p-value of 0.1 and the rest are above 0.9 For dealing with this problem see the follow-up question . | As Rob mentions, this occurs when you have highly correlated variables. The standard example I use is predicting weight from shoe size. You can predict weight equally well with the right or left shoe size. But together it doesn't work out. Brief simulation example RSS = 3:10 #Right shoe size
LSS = rnorm(RSS, RSS, 0.1) #Left shoe size - similar to RSS
cor(LSS, RSS) #correlation ~ 0.99
weights = 120 + rnorm(RSS, 10*RSS, 10)
##Fit a joint model
m = lm(weights ~ LSS + RSS)
## The p-value of the overall F-test is very small, but neither LSS nor RSS is individually significant
summary(m)
##Fitting RSS or LSS separately gives a significant result.
summary(lm(weights ~ LSS)) | {
"source": [
"https://stats.stackexchange.com/questions/3549",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1077/"
]
} |
3,559 | I have SPSS output for a logistic regression model. The output reports two measures for the model fit, Cox & Snell and Nagelkerke . So as a rule of thumb, which of these $R^²$ measures would you report as the model fit? Or, which of these fit indices is the one that is usually reported in journals? Some Background: The regression tries to predict the presence or absence of a bird (capercaillie) from some environmental variables (e.g., steepness, vegetation cover, ...). Unfortunately, the bird did not appear very often (35 hits to 468 misses) so the regression performs rather poorly. Cox & Snell is .09, Nagelkerke, .23. The subject is environmental sciences or ecology. | Normally I wouldn't report $R^2$ at all. Hosmer and Lemeshow, in their textbook Applied Logistic Regression (2nd Ed.), explain why: In general, [$R^2$ measures] are based on various comparisons of the predicted values from the fitted model to those from [the base model], the no data or intercept only model and, as a result, do not assess goodness-of-fit. We think that a true measure of fit is one based strictly on a comparison of observed to predicted values from the fitted model. [At p. 164.] Concerning various ML versions of $R^2$, the "pseudo $R^2$" stat, they mention that it is not "recommended for routine use, as it is not as intuitively easy to explain," but they feel obliged to describe it because various software packages report it. They conclude this discussion by writing, ...low $R^2$ values in logistic regression are the norm and this presents a problem when reporting their values to an audience accustomed to seeing linear regression values. ... Thus [arguing by reference to running examples in the text] we do not recommend routine publishing of $R^2$ values with results from fitted logistic models. However, they may be helpful in the model building state as a statistic to evaluate competing models. [At p. 167.] My experience with some large logistic models (100k to 300k records, 100 - 300 explanatory variables) has been exactly as H & L describe. I could achieve relatively high $R^2$ with my data, up to about 0.40. These corresponded to classification error rates between 3% and 15% (false negatives and false positives, balanced, as confirmed using 50% hold-out datasets). As H & L hinted, I had to spend a lot of time disabusing the client (a sophisticated consultant himself, who was familiar with $R^2$) concerning $R^2$ and getting him to focus on what mattered in the analysis (the classification error rates). I can warmly recommend describing the results of your analysis without reference to $R^2$, which is more likely to mislead than not. | {
"source": [
"https://stats.stackexchange.com/questions/3559",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/442/"
]
} |
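For completeness, both statistics reported by SPSS can be reproduced from a fitted glm with the usual formulas; in this minimal sketch the data frame birds and its variables are placeholders, not the questioner's data.
fit  <- glm(present ~ steepness + cover, family = binomial, data = birds)  # placeholder model and data
null <- update(fit, . ~ 1)
n <- nobs(fit)
cox_snell  <- 1 - exp((deviance(fit) - deviance(null)) / n)
nagelkerke <- cox_snell / (1 - exp(-deviance(null) / n))
c(cox_snell = cox_snell, nagelkerke = nagelkerke)
As the answer stresses, though, a classification table on held-out data is usually more informative than either number, especially with only 35 presences against 468 absences.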
3,685 | Hierarchical clustering can be represented by a dendrogram. Cutting a dendrogram at a certain level gives a set of clusters. Cutting at another level gives another set of clusters. How would you pick where to cut the dendrogram? Is there something we could consider an optimal point? If I look at a dendrogram across time as it changes, should I cut at the same point? | There is no definitive answer since cluster analysis is essentially an exploratory approach; the interpretation of the resulting hierarchical structure is context-dependent and often several solutions are equally good from a theoretical point of view. Several clues were given in a related question, What stop-criteria for agglomerative hierarchical clustering are used in practice? I generally use visual criteria, e.g. silhouette plots, and some kind of numerical criteria, like Dunn’s validity index, Hubert's gamma, G2/G3 coefficient, or the corrected Rand index. Basically, we want to know how well the original distance matrix is approximated in the cluster space, so a measure of the cophenetic correlation is also useful. I also use k-means, with several starting values, and the gap statistic ( mirror ) to determine the number of clusters that minimize the within-SS. The concordance with Ward hierarchical clustering gives an idea of the stability of the cluster solution (You can use matchClasses() in the e1071 package for that). You will find useful resources in the CRAN Task View Cluster , including pvclust , fpc , clv , among others. Also worth to give a try is the clValid package ( described in the Journal of Statistical Software ). Now, if your clusters change over time, this is a bit more tricky; why choosing the first cluster-solution rather than another? Do you expect that some individuals move from one cluster to another as a result of an underlying process evolving with time? There are some measure that try to match clusters that have a maximum absolute or relative overlap, as was suggested to you in your preceding question. Look at Comparing Clusterings - An Overview from Wagner and Wagner. | {
"source": [
"https://stats.stackexchange.com/questions/3685",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1250/"
]
} |
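A minimal sketch of the silhouette heuristic mentioned above, using base R and the cluster package on a built-in data set: cut the tree at several values of k and keep the cut with the largest average silhouette width.
library(cluster)
d  <- dist(scale(USArrests))          # any dissimilarity matrix will do
hc <- hclust(d, method = "ward.D2")
avg_sil <- sapply(2:8, function(k) {
  cl <- cutree(hc, k = k)
  mean(silhouette(cl, d)[, "sil_width"])
})
names(avg_sil) <- 2:8
avg_sil                               # the k with the largest value suggests where to cut
best_k <- as.integer(names(which.max(avg_sil)))
plot(silhouette(cutree(hc, k = best_k), d))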
3,695 | When plotting a boxplot with Python matplotlib, the line halfway up each box is the median of the distribution. Is there a way to instead have the line at the average? Or to plot the average next to it in a different style? Also, because it is common for the line to be the median, will it really confuse my readers if I make it the average (of course I will add a note saying what the middle line is)? | This code makes the boxplots then places a circle marking the mean for each box. You can use a different symbol by specifying the marker argument in the call to scatter. import numpy as np
import pylab
# 3 boxes
data = [[np.random.rand(100)] for i in range(3)]
pylab.boxplot(data)
# mark the mean
means = [np.mean(x) for x in data]
pylab.scatter([1, 2, 3], means) | {
"source": [
"https://stats.stackexchange.com/questions/3695",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/190/"
]
} |
3,713 | When using cluster analysis on a data set to group similar cases, one needs to choose among a large number of clustering methods and measures of distance. Sometimes, one choice might influence the other, but there are many possible combinations of methods. Does anyone have any recommendations on how to choose among the various clustering algorithms / methods and distance measures ? How is this related to the nature of the variables (e.g., categorical or numerical) and the clustering problem? Is there an optimal technique? | There is no definitive answer to your question, as even within the same method the choice of the distance to represent individuals (dis)similarity may yield different result, e.g. when using euclidean vs. squared euclidean in hierarchical clustering. As an other example, for binary data, you can choose the Jaccard index as a measure of similarity and proceed with classical hierarchical clustering; but there are alternative approaches, like the Mona ( Monothetic Analysis ) algorithm which only considers one variable at a time, while other hierarchical approaches (e.g. classical HC, Agnes, Diana) use all variables at each step. The k-means approach has been extended in various way, including partitioning around medoids (PAM) or representative objects rather than centroids (Kaufman and Rousseuw, 1990), or fuzzy clustering (Chung and Lee, 1992). For instance, the main difference between the k-means and PAM is that PAM minimizes a sum of dissimilarities rather than a sum of squared euclidean distances; fuzzy clustering allows to consider "partial membership" (we associate to each observation a weight reflecting class membership). And for methods relying on a probabilistic framework, or so-called model-based clustering (or latent profile analysis for the psychometricians), there is a great package: Mclust . So definitively, you need to consider how to define the resemblance of individuals as well as the method for linking individuals together (recursive or iterative clustering, strict or fuzzy class membership, unsupervised or semi-supervised approach, etc.). Usually, to assess cluster stability, it is interesting to compare several algorithm which basically "share" some similarity (e.g. k-means and hierarchical clustering, because euclidean distance work for both). For assessing the concordance between two cluster solutions, some pointers were suggested in response to this question, Where to cut a dendrogram? (see also the cross-references for other link on this website). If you are using R, you will see that several packages are already available in Task View on Cluster Analysis, and several packages include vignettes that explain specific methods or provide case studies. Cluster Analysis: Basic Concepts and Algorithms provides a good overview of several techniques used in Cluster Analysis.
As for a good recent book with R illustrations, I would recommend chapter 12 of Izenman, Modern Multivariate Statistical Techniques (Springer, 2008). A couple of other standard references is given below: Cormack, R., 1971. A review of classification. Journal of the Royal Statistical Society, A 134, 321–367. Everitt, B., 1974. Cluster analysis . London: Heinemann Educ. Books. Gordon, A., 1987. A review of hierarchical classification. Journal of the Royal Statistical Society, A 150, 119–137. Gordon, A., 1999. Classification , 2nd Edition. Chapman and Hall. Kaufman, L., Rousseuw, P., 1990. Finding Groups in Data: An Introduction to Cluster Analysis . New York, Wiley. | {
"source": [
"https://stats.stackexchange.com/questions/3713",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/485/"
]
} |
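For the mixed categorical/numerical case raised in the question, one common concrete recipe (a minimal sketch built on the cluster package, i.e. Kaufman and Rousseeuw's methods cited above, with invented data) is Gower dissimilarity followed by partitioning around medoids:
library(cluster)
set.seed(1)
df <- data.frame(income = rlnorm(100, 10, 0.5),        # numeric
                 age    = rnorm(100, 40, 12),          # numeric
                 region = factor(sample(c("N", "S", "E", "W"), 100, replace = TRUE)))  # categorical
d   <- daisy(df, metric = "gower")   # Gower dissimilarity handles mixed variable types
fit <- pam(d, k = 3)                 # partitioning around medoids
table(fit$clustering)
plot(silhouette(fit$clustering, d))  # silhouette widths help judge the choice of k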
3,730 | I get this question frequently enough in my statistics consulting work, that I thought I'd post it here. I have an answer, which is posted below, but I was keen to hear what others have to say. Question: If you have two variables that are not normally distributed, should you use Spearman's rho for the correlation? | Pearson's correlation is a measure of the linear relationship between two continuous random variables. It does not assume normality although it does assume finite variances and finite covariance. When the variables are bivariate normal, Pearson's correlation provides a complete description of the association. Spearman's correlation applies to ranks and so provides a measure of a monotonic relationship between two continuous random variables. It is also useful with ordinal data and is robust to outliers (unlike Pearson's correlation). The distribution of either correlation coefficient will depend on the underlying distribution, although both are asymptotically normal because of the central limit theorem. | {
"source": [
"https://stats.stackexchange.com/questions/3730",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/183/"
]
} |
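As a quick R illustration of the answer above, a monotone but non-linear relationship on skewed, non-normal data shows the two coefficients diverging; the simulated variables are assumptions made only for this illustration:
set.seed(42)
x <- rexp(200)                       # skewed, non-normal
y <- exp(x) + rnorm(200, sd = 0.1)   # monotone in x, heavily skewed
cor(x, y, method = "pearson")        # attenuated by the non-linearity and extreme values
cor(x, y, method = "spearman")       # close to 1, since only the monotone ordering matters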
3,734 | In several different contexts we invoke the central limit theorem to justify whatever statistical method we want to adopt (e.g., approximate the binomial distribution by a normal distribution). I understand the technical details as to why the theorem is true but it just now occurred to me that I do not really understand the intuition behind the central limit theorem. So, what is the intuition behind the central limit theorem? Layman explanations would be ideal. If some technical detail is needed please assume that I understand the concepts of a pdf, cdf, random variable etc but have no knowledge of convergence concepts, characteristic functions or anything to do with measure theory. | I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own. Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things. Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket. In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$ , all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$ , is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$ ) would appear with various frequencies--proportions of the total. (See the histograms below.) Now one would expect--and it's true--that for very large values of $n$ , all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$ ", we would conclude correctly that all frequencies reduce to $0$ . But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero. These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles. The insight here is to draw the histogram first and label its axes later . 
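These histograms are easy to reproduce; a small simulation sketch in R follows (a box with equal proportions of zeros and ones is an assumption made for the illustration, not de Moivre's exact setup):
set.seed(1)
par(mfrow = c(1, 3))
for (n in c(10, 100, 1000)) {
  sums <- replicate(10000, sum(sample(c(0, 1), n, replace = TRUE)))
  hist(sums, breaks = 50, main = paste(n, "trials"), xlab = "sum of tickets")
}
Rescaling the horizontal axis of each histogram by its spread, as described next, is what makes the shapes comparable.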
With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$ . Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$ . The CLT asserts several things: No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large. The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$ , the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.) Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$ : this is the formula of that universal limiting histogram. The first generalization of the CLT adds, When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement). The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on. Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude). These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example, What is special about the sum ? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box). 
The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median , for instance.) Why does the SD appear in such an essential way? Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this? I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$ . Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.) This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version. At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$ , where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$ . But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that $$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$ $$\text{equals } \frac{n-k+1}{k}$$ $$\text{times the number of ways to get } k-1 \text{ positive and } n-k+1 \text { negative values.}$$ (That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$ . 
Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ( $j \ge 0$ ) is estimated by the product $$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$ $$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$ 135 years before de Moivre was writing, John Napier invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation $$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$ we find that the log of the relative frequency is approximately $$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$ Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$ , the approximation ought to work well provided $j^4$ is small relative to $m^3$ . That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$ .) Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above. Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery. | {
"source": [
"https://stats.stackexchange.com/questions/3734",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
3,787 | For a unimodal distribution that is moderately skewed, we have the following empirical relationship between the mean, median and mode:
$$
\text{(Mean - Mode)}\sim 3\,\text{(Mean - Median)}
$$
How was this relationship derived? Did Karl Pearson plot thousands of these relationships before forming this conclusion, or is there a logical line of reasoning behind this relationship? | Denote $\mu$ the mean ($\neq$ average), $m$ the median, $\sigma$ the standard deviation and $M$ the mode. Finally, let $X$ be the sample, a realization of a continuous unimodal distribution $F$ for which the first two moments exist. It's well known that $$|\mu-m|\leq\sigma\label{d}\tag{1}$$ This is a frequent textbook exercise: \begin{eqnarray}
|\mu-m| &=& |E(X-m)| \\
&\leq& E|X-m| \\
&\leq& E|X-\mu| \\
&=& E\sqrt{(X-\mu)^2} \\
&\leq& \sqrt{E(X-\mu)^2} \\
&=& \sigma
\end{eqnarray}
The first equality derives from the definition of the mean, the third comes about because the median is the unique minimiser (among all $c$'s) of $E|X-c|$ and the fourth from Jensen's inequality (i.e. the definition of a convex function). Actually, this inequality can be made tighter. In fact, for any $F$, satisfying the conditions above, it can be shown [3] that $$|m-\mu|\leq \sqrt{0.6}\sigma\label{f}\tag{2}$$ Even though it is in general not true ( Abadir, 2005 ) that any unimodal distribution must satisfy either one of
$$M\leq m\leq\mu\textit{ or }M\geq m\geq \mu$$
it can still be shown that the inequality $$|\mu-M|\leq\sqrt{3}\sigma\label{e}\tag{3}$$ holds for any unimodal, square integrable distribution (regardless of skew). This is proven formally in Johnson and Rogers (1951) though the proof depends on many auxiliary lemmas that are hard to fit here. Go see the original paper. A sufficient condition for a distribution $F$ to satisfy $\mu\leq m\leq M$ is given in [2]. If $F$: $$F(m−x)+F(m+x)\geq 1 \text{ for all }x\label{g}\tag{4}$$ then $\mu\leq m\leq M$. Furthermore, if $\mu\neq m$, then the inequality is strict. The Pearson Type I to XII distributions are one example of family of distributions satisfying $(4)$ [4] (for example, the Weibull is one common distribution for which $(4)$ does not hold, see [5]). Now assuming that $(4)$ holds strictly and w.l.o.g. that $\sigma=1$, we have that
$$3(m-\mu)\in(0,3\sqrt{0.6}] \mbox{ and } M-\mu\in(m-\mu,\sqrt{3}]$$ and since the second of these two ranges is not empty, it's certainly possible to find distributions for which the assertion is true (e.g. when $0<m-\mu<\frac{\sqrt{3}}{3}<\sigma=1$) for some range of values of the distribution's parameters but it is not true for all distributions and not even for all distributions satisfying $(4)$. [0]: The Moment Problem for Unimodal Distributions.
N. L. Johnson and C. A. Rogers. The Annals of Mathematical Statistics, Vol. 22, No. 3 (Sep., 1951), pp. 433-439 [1]: The Mean-Median-Mode Inequality: Counterexamples
Karim M. Abadir
Econometric Theory, Vol. 21, No. 2 (Apr., 2005), pp. 477-482 [2]: W. R. van Zwet, Mean, median, mode II, Statist. Neerlandica, 33 (1979), pp. 1--5. [3]: The Mean, Median, and Mode of Unimodal Distributions:A Characterization. S. Basu and A. DasGupta (1997). Theory Probab. Appl., 41(2), 210–223. [4]: Some Remarks On The Mean, Median, Mode And Skewness. Michikazu Sato. Australian Journal of Statistics. Volume 39, Issue 2, pages 219–224, June 1997 [5]: P. T. von Hippel (2005). Mean, Median, and Skew: Correcting a Textbook Rule. Journal of Statistics Education Volume 13, Number 2. | {
"source": [
"https://stats.stackexchange.com/questions/3787",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1636/"
]
} |
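A purely numerical check of the rule of thumb in the record above (not a proof) can be run in R on a moderately skewed Gamma distribution; the shape value 4 and the kernel-density mode estimate are arbitrary choices for illustration:
set.seed(1)
x  <- rgamma(1e6, shape = 4, rate = 1)   # right-skewed, unimodal
mn <- mean(x)                            # about 4
md <- median(x)                          # about 3.67
de <- density(x)
mo <- de$x[which.max(de$y)]              # crude mode estimate, about 3
c(mean_minus_mode = mn - mo, three_times_mean_minus_median = 3 * (mn - md))
Both quantities come out close to 1 here, in line with the empirical relationship.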
3,814 | I recently asked a question regarding general principles around reviewing statistics in papers . What I would now like to ask, is what particularly irritates you when reviewing a paper, i.e. what's the best way to really annoy a statistical referee! One example per answer, please. | What particularly irritates me personally is people who clearly used user-written packages for statistical software but don't cite them properly, or at all, thereby failing to give any credit to the authors. Doing so is particularly important when the authors are in academia and their jobs depend on publishing papers that get cited . (Perhaps I should add that, in my field, many of the culprits are not statisticians.) | {
"source": [
"https://stats.stackexchange.com/questions/3814",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8/"
]
} |
3,931 | I was asked today in class why you divide the sum of square error by $n-1$ instead of with $n$, when calculating the standard deviation. I said I am not going to answer it in class (since I didn't wanna go into unbiased estimators), but later I wondered - is there an intuitive explanation for this?! | The standard deviation calculated with a divisor of $n-1$ is a standard deviation calculated from the sample as an estimate of the standard deviation of the population from which the sample was drawn. Because the observed values fall, on average, closer to the sample mean than to the population mean, the standard deviation which is calculated using deviations from the sample mean underestimates the desired standard deviation of the population. Using $n-1$ instead of $n$ as the divisor corrects for that by making the result a little bit bigger. Note that the correction has a larger proportional effect when $n$ is small than when it is large, which is what we want because when n is larger the sample mean is likely to be a good estimator of the population mean. When the sample is the whole population we use the standard deviation with $n$ as the divisor because the sample mean is population mean. (I note parenthetically that nothing that starts with "second moment recentered around a known, definite mean" is going to fulfil the questioner's request for an intuitive explanation.) | {
"source": [
"https://stats.stackexchange.com/questions/3931",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
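The underestimation described in the answer above is easy to see by simulation; a short R sketch (samples of size 5 from a standard normal, so the true variance is 1, is an arbitrary choice):
set.seed(1)
n <- 5
sims <- replicate(50000, {
  x <- rnorm(n)
  c(div_n = mean((x - mean(x))^2),  # divide by n
    div_n_minus_1 = var(x))         # divide by n - 1 (R's default)
})
rowMeans(sims)
The divisor-n average comes out near (n - 1)/n = 0.8, i.e. too small, while the divisor-(n - 1) average is near the true value of 1.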
3,943 | In which cases should one prefer the one over the other? I found someone who claims an advantage for Kendall, for pedagogical reasons , are there other reasons? | I found that Spearman correlation is mostly used in place of usual linear correlation when working with integer valued scores on a measurement scale, when it has a moderate number of possible scores or when we don't want to make rely on assumptions about the bivariate relationships. As compared to Pearson coefficient, the interpretation of Kendall's tau seems to me less direct than that of Spearman's rho, in the sense that it quantifies the difference between the % of concordant and discordant pairs among all possible pairwise events. In my understanding, Kendall's tau more closely resembles Goodman-Kruskal Gamma . I just browsed an article from Larry Winner in the J. Statistics Educ. (2006) which discusses the use of both measures, NASCAR Winston Cup Race Results for 1975-2003 . I also found @onestop answer about Pearson's or Spearman's correlation with non-normal data interesting in this respect. Of note, Kendall's tau (the a version) has connection to Somers' D (and Harrell's C) used for predictive modelling (see e.g., Interpretation of Somers’ D under four simple models by RB Newson and reference 6 therein, and articles by Newson published in the Stata Journal 2006). An overview of rank-sum tests is provided in Efficient Calculation of Jackknife Confidence Intervals for Rank Statistics , that was published in the JSS (2006). | {
"source": [
"https://stats.stackexchange.com/questions/3943",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
3,944 | Reading about methods and results of statistical analysis, especially in epidemiology, I very often hear about adjustment or controlling of the models. How would you explain, to a non-statistician, the purpose of that? How do you interpret your results after controlling for a certain variable? A small walk-through in Stata or R, or a pointer to one online, would be a true gem. | Easiest to explain by way of an example: Imagine a study finds that people who watched the World Cup final were more likely to suffer a heart attack during the match or in the subsequent 24 hours than those who didn't watch it. Should the government ban football from TV? But men are more likely to watch football than women, and men are also more likely to have a heart attack than women. So the association between football-watching and heart attacks might be explained by a third factor such as sex that affects both. (Sociologists would distinguish here between gender , a cultural construct that is associated with football-watching, and sex , a biological category that is associated with heart-attack incidence, but the two are clearly very strongly correlated so I'm going to ignore that distinction for simplicity.) Statisticians, and especially epidemiologists, call such a third factor a confounder , and the phenomenon confounding . The most obvious way to remove the problem is to look at the association between football-watching and heart-attack incidence in men and women separately, or in the jargon, to stratify by sex. If we find that the association (if there still is one) is similar in both sexes, we may then choose to combine the two estimates of the association across the two sexes. The resulting estimate of the association between football-watching and heart-attack incidence is then said to be adjusted or controlled for sex. We would probably also wish to control for other factors in the same way. Age is another obvious one (in fact epidemiologists either stratify or adjust/control almost every association by age and sex). Socio-economic class is probably another. Others can get trickier, e.g. should we adjust for beer consumption while watching the match? Maybe yes, if we're interested in the effect of the stress of watching the match alone; but maybe no, if we're considering banning broadcasting of World Cup football and that would also reduce beer consumption. Whether a given variable is a confounder or not depends on precisely what question we wish to address, and this can require very careful thought and get quite tricky and even contentious. Clearly then, we may wish to adjust/control for several factors, some of which may be measured in several categories (e.g. social class) while others may be continuous (e.g. age). We could deal with the continuous ones by splitting into (age-)groups, thereby turning them into categorical ones. So say we have 2 sexes, 5 social class groups and 7 age groups. We can now look at the association between football-watching and heart-attack incidence in 2×5×7 = 70 strata. But if our study is fairly small, so some of those strata contain very few people, we're going to run into problems with this approach. And in practice we may wish to adjust for a dozen or more variables. An alternative way of adjusting/controlling for variables that is particularly useful when there are many of them is provided by regression analysis with multiple independent (explanatory) variables, sometimes known as multivariable regression analysis.
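Since the question asks for a small walk-through in R, here is a minimal sketch on simulated (entirely invented) data for the football and heart-attack story, where sex drives both the exposure and the outcome; all variable names and numbers are assumptions for illustration only:
set.seed(1)
n      <- 10000
male   <- rbinom(n, 1, 0.5)
watch  <- rbinom(n, 1, 0.2 + 0.4 * male)         # watching depends on sex
attack <- rbinom(n, 1, plogis(-4 + 1.5 * male))  # risk depends on sex only
# crude (unadjusted) association: watching appears to "matter"
summary(glm(attack ~ watch, family = binomial))$coef["watch", ]
# adjusted / controlled for sex: the watch coefficient shrinks towards 0
summary(glm(attack ~ watch + male, family = binomial))$coef["watch", ]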
(There are different types of regression models depending on the type of outcome variable: least squares regression, logistic regression, proportional hazards (Cox) regression...). In observational studies, as opposed to experiments, we nearly always want to adjust for many potential confounders, so in practice adjustment/control for confounders is often done by regression analysis, though there are other alternatives too though, such as standardization, weighting, propensity score matching... | {
"source": [
"https://stats.stackexchange.com/questions/3944",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/22/"
]
} |
4,075 | I have two populations, One with N=38,704 (number of observations) and other with N=1,313,662. These data sets have ~25 variables, all continuous. I took mean of each in each data set and computed the test statistic using the formula t=mean difference/std error The problem is of the degree of freedom. By formula of df=N1+N2-2 we'll have more freedom than the table can handle. Any suggestions on this? How to check the t statistic here. I know that the t-test is used for handling samples but what if we apply this on large samples. | chl already mentioned the trap of multiple comparisons when conducting simultaneously 25 tests with the same data set. An easy way to handle that is to adjust the p value threshold by dividing them by the number of tests (in this case 25). The more precise formula is: Adjusted p value = 1 - (1 - p value)^(1/n). However, the two different formulas derive almost the same adjusted p value. There is another major issue with your hypothesis testing exercise. You will most certainly run into a Type I error (false positive) whereby you will uncover some really trivial differences that are extremely significant at the 99.9999% level. This is because when you deal with a sample of such a large size (n = 1,313,662), you will get a standard error that is very close to 0. That's because the square root of 1,313,662 = 1,146. So, you will divide the standard deviation by 1,146. In short, you will capture minute differences that may be completely immaterial. I would suggest you move away from this hypothesis testing framework and instead conduct an Effect Size type analysis. Within this framework the measure of statistical distance is the standard deviation. Unlike the standard error, the standard deviation is not artificially shrunk by the size of the sample. And, this approach will give you a better sense of the material differences between your data sets. Effect Size is also much more focused on confidence interval around the mean average difference which is much more informative than the hypothesis testing focus on statistical significance that often is not significant at all. Hope that helps. | {
"source": [
"https://stats.stackexchange.com/questions/4075",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1763/"
]
} |
4,165 | I wish to decide if I should take a course called "INTRODUCTION TO STOCHASTIC PROCESSES" which will be held next semester in my University. I asked the lecturer how studying such a course would help me as a statistician, he said that since he comes from probability, he knows very little of statistics and doesn't know how to answer my question. I can make an un-educated guess that stochastic processes are important in statistics. But I am also curious to know how.
That is, in what fields/methods, will basic understanding in "stochastic processes" will help me do better statistics? | Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways: Enable you to develop models for situations of interest to you. An exposure to such a course, may enable you to identify a standard stochastic process that works given your problem context. You can then modify the model as needed to accommodate the idiosyncrasies of your specific context. Enable you to better understand the nuances of the statistical methodology that uses stochastic processes. There are several key ideas in stochastic processes such as convergence, stationarity that play an important role when we want to analyze a stochastic process. It is my belief that a course in stochastic process will let you appreciate better the need for caring about these issues and why they are important. Can you be a statistician without taking a course in stochastic processes? Sure. You can always use the software that is available to perform whatever statistical analysis you want. However, a basic understanding of stochastic processes is very helpful in order to make a correct choice of methodology, in order to understand what is really happening in the black box etc. Obviously, you will not be able to contribute to the theory of stochastic processes with a basic course but in my opinion it will make you a better statistician. My general rule of thumb for coursework: The more advanced course you take the better off you will be in the long-run. By way of analogy: You can perform a t-test without knowing any probability theory or statistics testing methodology. But, a knowledge of probability theory and statistical testing methodology is extremely useful in understanding the output correctly and in choosing the correct statistical test. | {
"source": [
"https://stats.stackexchange.com/questions/4165",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
4,172 | I am new to the area of statistics and I am hoping you can suggest methods I may use. Sorry if this is long but I might as well be as clear as possible on my first post :) What I am worried most is that I may miss out on assumptions and draw conclusions based on statistical tests that, in fact, cannot be applied to my situation. In a nutshell: We are replacing a measurement tool + methodology with another tool and a similar methodology and I would like to prove that the new tool & methodology provide the same "results". The data reported : Each tool reports 1) the GPS position, 2) a category of measurement (type 1, type 2, type 3) (the categories are the same for both measurement tools and relate to what is being measured, they should report the same thing), and 3) a quantized value of a continuous value. The measurement tools probably quantize the value with different algorithms but, according to spec, they should provide the same value. Given what we're measuring the measurements are definately not stationnary and, since we're measuring a physical quantity, I assume the time series are autocorrelated. How the setups differ: Setup 1 (historical setup) : uses tool "A", takes a measurement 3 times a minute and reports the GPS position, the category of the measurement and the discrete value Setup 2 (new setup) : uses tool "B", takes a measurement up to every second (but not necessarily based on distance criteria between measurements) and report the GPS position, the category and discrete value too Our experiment: We put both tools in a car and traveled enough to gather over 100.000 data points for setup 1. What I would like to prove: the categories reported by setup 1 and 2 do not significantly differ the discrete value measurements do not significantly differ either if the new setup and a bias or skew compared to the other one What I have done so far: I have matched each data point of setup 1 to a single data point in setup 2 (the one that is "closest geographically" in a 4 minute-time window). Is this even statistically sound ? 1) Regarding the discrete value reported, I drew a scatter plot of the discrete values for matched data points with bubble sizes corresponding to the count for each (x,y) : the data clusters along a 45° angle line as expected but I can see there is some bias. There is also some spread a round that line I drew a Bland-Altman/Tukey diagram of the same data and I now see that the average difference depends on the average mean. That's interesting to know I computed the pearson correlation for matches that are in the same category : I get 0.87 which seems to be high enough to look good. Can Pearson be applied given I have no idea if the distribution is normal and since the measurements are definalty not independent inside the time series ? Would the U test be better ? I tried to compute a t test but I'm getting t values in the "80" range because SQRT(N) is huge I would like to use all the data collected in setup 2 rather than only the data that was matched 1 to 1. There is about 4 times more data reported by setup 2 than setup 1. I've been looking into non-parametric tests and I believe that is what applies to my case as well as the whole notion of inter-rater agreement. So it seems like my next steps will be to use R to compute Cohen's Kappa and KrippenDorff's alpha. Would computing these and finding high correlations be enough to make my point ? 
2) Regarding categories reported, again the data reported in the time series are correlated because if category 1 is reported then the chance of the next category being reported being 1 is higher than if category 2 had been reported. Given that there are three categories, what kind of tests could I apply ? thanks for your suggestions | Stochastic processes underlie many ideas in statistics such as time series, markov chains, markov processes, bayesian estimation algorithms (e.g., Metropolis-Hastings) etc. Thus, a study of stochastic processes will be useful in two ways: Enable you to develop models for situations of interest to you. An exposure to such a course, may enable you to identify a standard stochastic process that works given your problem context. You can then modify the model as needed to accommodate the idiosyncrasies of your specific context. Enable you to better understand the nuances of the statistical methodology that uses stochastic processes. There are several key ideas in stochastic processes such as convergence, stationarity that play an important role when we want to analyze a stochastic process. It is my belief that a course in stochastic process will let you appreciate better the need for caring about these issues and why they are important. Can you be a statistician without taking a course in stochastic processes? Sure. You can always use the software that is available to perform whatever statistical analysis you want. However, a basic understanding of stochastic processes is very helpful in order to make a correct choice of methodology, in order to understand what is really happening in the black box etc. Obviously, you will not be able to contribute to the theory of stochastic processes with a basic course but in my opinion it will make you a better statistician. My general rule of thumb for coursework: The more advanced course you take the better off you will be in the long-run. By way of analogy: You can perform a t-test without knowing any probability theory or statistics testing methodology. But, a knowledge of probability theory and statistical testing methodology is extremely useful in understanding the output correctly and in choosing the correct statistical test. | {
"source": [
"https://stats.stackexchange.com/questions/4172",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1784/"
]
} |
4,220 | On the Wikipedia page about naive Bayes classifiers , there is this line: $p(\mathrm{height}|\mathrm{male}) = 1.5789$ (A probability distribution over 1 is OK. It is the area under the bell curve that is equal to 1.) How can a value $>1$ be OK? I thought all probability values were expressed in the range $0 \leq p \leq 1$. Furthermore, given that it is possible to have such a value, how is that value obtained in the example shown on the page? | That Wiki page is abusing language by referring to this number as a probability. You are correct that it is not. It is actually a probability per foot . Specifically, the value of 1.5789 (for a height of 6 feet) implies that the probability of a height between, say, 5.99 and 6.01 feet is close to the following unitless value: $$1.5789\, [1/\text{foot}] \times (6.01 - 5.99)\, [\text{feet}] = 0.0316$$ This value must not exceed 1, as you know. (The small range of heights (0.02 in this example) is a crucial part of the probability apparatus. It is the "differential" of height, which I will abbreviate $d(\text{height})$.) Probabilities per unit of something are called densities by analogy to other densities, like mass per unit volume. Bona fide probability densities can have arbitrarily large values, even infinite ones. This example shows the probability density function for a Gamma distribution (with shape parameter of $3/2$ and scale of $1/5$). Because most of the density is less than $1$, the curve has to rise higher than $1$ in order to have a total area of $1$ as required for all probability distributions. This density (for a beta distribution with parameters $1/2, 1/10$) becomes infinite at $0$ and at $1$. The total area still is finite (and equals $1$)! The value of 1.5789 /foot is obtained in that example by estimating that the heights of males have a normal distribution with mean 5.855 feet and variance 3.50e-2 square feet. (This can be found in a previous table.) The square root of that variance is the standard deviation, 0.18717 feet. We re-express 6 feet as the number of SDs from the mean: $$z = (6 - 5.855) / 0.18717 = 0.7747$$ The division by the standard deviation produces a relation $$dz = d(\text{height})/0.18717$$ The Normal probability density, by definition, equals $$\frac{1}{\sqrt{2 \pi}}\exp(-z^2/2)dz = 0.29544\ d(\text{height}) / 0.18717 = 1.5789\ d(\text{height}).$$ (Actually, I cheated: I simply asked Excel to compute NORMDIST(6, 5.855, 0.18717, FALSE). But then I really did check it against the formula, just to be sure.) When we strip the essential differential $d(\text{height})$ from the formula only the number $1.5789$ remains, like the Cheshire Cat's smile. We, the readers, need to understand that the number has to be multiplied by a small difference in heights in order to produce a probability. | {
"source": [
"https://stats.stackexchange.com/questions/4220",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/226/"
]
} |
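A quick check of the calculation in the record above, assuming R is at hand:
dnorm(6, mean = 5.855, sd = 0.18717)          # about 1.5789, in units of 1/foot: a density, not a probability
dnorm(6, mean = 5.855, sd = 0.18717) * 0.02   # about 0.0316, the probability of a height within a 0.02-foot window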
4,272 | In what circumstances should one consider using regularization methods (ridge, lasso or least angles regression) instead of OLS? In case this helps steer the discussion, my main interest is improving predictive accuracy. | Short answer: Whenever you are facing one of these situations: large number of variables or low ratio of no. observations to no. variables (including the $n\ll p$ case), high collinearity, seeking for a sparse solution (i.e., embed feature selection when estimating model parameters), or accounting for variables grouping in high-dimensional data set. Ridge regression generally yields better predictions than OLS solution, through a better compromise between bias and variance. Its main drawback is that all predictors are kept in the model, so it is not very interesting if you seek a parsimonious model or want to apply some kind of feature selection. To achieve sparsity, the lasso is more appropriate but it will not necessarily yield good results in presence of high collinearity (it has been observed that if predictors are highly correlated, the prediction performance of the lasso is dominated by ridge regression). The second problem with L1 penalty is that the lasso solution is not uniquely determined when the number of variables is greater than the number of subjects (this is not the case of ridge regression). The last drawback of lasso is that it tends to select only one variable among a group of predictors with high pairwise correlations. In this case, there are alternative solutions like the group (i.e., achieve shrinkage on block of covariates, that is some blocks of regression coefficients are exactly zero) or fused lasso. The Graphical Lasso also offers promising features for GGMs (see the R glasso package). But, definitely, the elasticnet criteria, which is a combination of L1 and L2 penalties achieve both shrinkage and automatic variable selection, and it allows to keep $m>p$ variables in the case where $n\ll p$. Following Zou and Hastie (2005), it is defined as the argument that minimizes (over $\beta$) $$
L(\lambda_1,\lambda_2,\mathbf{\beta}) = \|Y-X\beta\|^2 + \lambda_2\|\beta\|^2 + \lambda_1\|\beta\|_1
$$ where $\|\beta\|^2=\sum_{j=1}^p\beta_j^2$ and $\|\beta\|^1=\sum_{j=1}^p|\beta_j |$. The lasso can be computed with an algorithm based on coordinate descent as described in the recent paper by Friedman and coll., Regularization Paths for Generalized Linear Models via Coordinate Descent (JSS, 2010) or the LARS algorithm. In R, the penalized , lars or biglars , and glmnet packages are useful packages; in Python, there's the scikit.learn toolkit, with extensive documentation on the algorithms used to apply all three kind of regularization schemes. As for general references, the Lasso page contains most of what is needed to get started with lasso regression and technical details about L1-penalty, and this related question features essential references, When should I use lasso vs ridge? | {
"source": [
"https://stats.stackexchange.com/questions/4272",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/439/"
]
} |
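A small sketch of the elastic-net fit discussed above, using the glmnet package on simulated data (the package availability, the n much smaller than p setting, and alpha = 0.5 are assumptions made for the illustration):
library(glmnet)
set.seed(1)
n <- 50; p <- 200
X    <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))   # only 5 truly non-zero coefficients
y    <- drop(X %*% beta + rnorm(n))
# alpha = 1 is the lasso, alpha = 0 is ridge, values in between give the elastic net
cvfit <- cv.glmnet(X, y, alpha = 0.5)
coef(cvfit, s = "lambda.min")         # sparse coefficient vector at the selected lambda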
4,284 | I am looking for an intuitive explanation of the bias-variance tradeoff, both in general and specifically in the context of linear regression. | Imagine some 2D data--let's say height versus weight for students at a high school--plotted on a pair of axes. Now suppose you fit a straight line through it. This line, which of course represents a set of predicted values, has zero statistical variance. But the bias is (probably) high--i.e., it doesn't fit the data very well. Next, suppose you model the data with a high-degree polynomial spline. You're not satisfied with the fit, so you increase the polynomial degree until the fit improves (and it will, to arbitrary precision, in fact). Now you have a situation with bias that tends to zero, but the variance is very high. Note that the bias-variance trade-off doesn't describe a proportional relationship--i.e., if you plot bias versus variance you won't necessarily see a straight line through the origin with slope -1. In the polynomial spline example above, reducing the degree almost certainly increases the variance much less than it decreases the bias. The bias-variance tradeoff is also embedded in the sum-of-squares error function. Below, I have rewritten (but not altered) the usual form of this equation to emphasize this: $$
E\left(\left(y - \dot{f}(x)\right)^2\right) = \sigma^2 + \left[f(x) - \frac{1}{\kappa}\sum_{i=0}^nf(x_i)\right]^2+\frac{\sigma^2}{\kappa}
$$ On the right-hand side, there are three terms: the first of these is just the irreducible error (the variance in the data itself); this is beyond our control so ignore it. The second term is the square of the bias ; and the third is the variance . It's easy to see that as one goes up the other goes down--they can't both vary together in the same direction. Put another way, you can think of least-squares regression as (implicitly) finding the optimal combination of bias and variance from among candidate models. | {
"source": [
"https://stats.stackexchange.com/questions/4284",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/439/"
]
} |
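The polynomial example in the answer above can be made concrete with a small R simulation; the true curve, noise level and degrees tried are arbitrary choices:
set.seed(1)
f     <- function(x) sin(2 * pi * x)
x     <- runif(100);  y     <- f(x) + rnorm(100, sd = 0.3)
xtest <- runif(1000); ytest <- f(xtest) + rnorm(1000, sd = 0.3)
for (d in c(1, 3, 15)) {
  fit  <- lm(y ~ poly(x, d))
  pred <- predict(fit, newdata = data.frame(x = xtest))
  cat("degree", d, "test MSE:", mean((ytest - pred)^2), "\n")
}
Degree 1 suffers mostly from bias, degree 15 mostly from variance, and an intermediate degree typically does best on the held-out data.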
4,364 | A standardized Gaussian distribution on $\mathbb{R}$ can be defined by giving explicitly its density:
$$ \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$$ or its characteristic function. As recalled in this question it is also the only distribution for which the sample mean and variance are independent. What are other surprising alternative characterization of Gaussian measures that you know ? I will accept the most surprising answer | My personal most surprising is the one about the sample mean and variance, but here is another (maybe) surprising characterization: if $X$ and $Y$ are IID with finite variance with $X+Y$ and $X-Y$ independent, then $X$ and $Y$ are normal. Intuitively, we can usually identify when variables are not independent with a scatterplot. So imagine a scatterplot of $(X,Y)$ pairs that looks independent. Now rotate by 45 degrees and look again: if it still looks independent, then the $X$ and $Y$ coordinates individually must be normal (this is all speaking loosely, of course). To see why the intuitive bit works, take a look at $$
\left[
\begin{array}{cc}
\cos45^{\circ} & -\sin45^{\circ} \newline
\sin45^{\circ} & \cos45^{\circ}
\end{array}
\right]
\left[
\begin{array}{c}
x \newline
y
\end{array}
\right]= \frac{1}{\sqrt{2}}
\left[
\begin{array}{c}
x-y \newline
x+y
\end{array}
\right]
$$ | {
"source": [
"https://stats.stackexchange.com/questions/4364",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/223/"
]
} |
4,498 | They all seem to represent random variables by the nodes and (in)dependence via the (possibly directed) edges. I'm esp interested in a bayesian's point-of-view. | A Bayesian network is a type of graphical model. The other "big" type of graphical model is a Markov Random Field (MRF). Graphical models are used for inference, estimation and in general, to model the world. The term hierarchical model is used to mean many things in different areas. While neural networks come with "graphs" they generally don't encode dependence information, and the nodes don't represent random variables. NNs are different because they are discriminative. Popular neural networks are used for classification and regression. Kevin Murphy has an excellent introduction to these topics available here . | {
"source": [
"https://stats.stackexchange.com/questions/4498",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1795/"
]
} |
4,517 | Is it possible to have a (multiple) regression equation with two or more dependent variables? Sure, you could run two separate regression equations, one for each DV, but that doesn't seem like it would capture any relationship between the two DVs? | Yes, it is possible. What you're interested in is called "Multivariate Multiple Regression" or just "Multivariate Regression". I don't know what software you are using, but you can do this in R. Here's a link that provides examples . | {
"source": [
"https://stats.stackexchange.com/questions/4517",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1977/"
]
} |
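Because the answer above only links to examples, here is a minimal sketch of the R syntax on simulated data (lm() accepts a matrix of responses built with cbind(); the variable names are invented):
set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y1 <- 1 + 2 * dat$x1 + rnorm(100)
dat$y2 <- 3 - 1 * dat$x2 + rnorm(100)
fit <- lm(cbind(y1, y2) ~ x1 + x2, data = dat)   # one multivariate fit, two responses
summary(fit)                                     # a set of coefficients per response
anova(fit)                                       # MANOVA-type tests across both responses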
4,528 | I have been trying to discern what exactly the "coef" and "(exp)coef" output of coxph signify. It seems that the "(exp)coef" are comparisons of the first variable in the model according to the group assigned in the command. How does the coxph function arrive at the values for "coef" and "(exp)coef"? Additionally, how does coxph determine these values when there is censoring involved? | If you have a single explanatory variable, say treatment group, a Cox's regression model is fitted with coxph() ; the coefficient ( coef ) reads as a regression coefficient (in the context of the Cox model, described hereafter) and its exponential gives you the hazard in the treatment group (compared to the control or placebo group). For example, if $\hat\beta=-1.80$, then the hazard is $\exp(-1.80)=0.165$, that is 16.5%. As you may know, the hazard function is modeled as $$
h(t)=h_0(t)\exp(\beta'x)
$$ where $h_0(t)$ is the baseline hazard. The hazards depend multiplicatively on the covariates, and $\exp(\beta_1)$ is the ratio of the hazards between two individuals whose values of $x_1$ differ by one unit when all other covariates are held constant. The ratio of the hazards of any two individuals $i$ and $j$ is $\exp\big(\beta'(x_i-x_j)\big)$, and is called the hazard ratio (or incidence rate ratio). This ratio is assumed to be constant over time, hence the name of proportional hazard . To echo your preceding question about survreg , here the form of $h_0(t)$ is left unspecified; more precisely, this is a semi-parametric model in that only the effects of covariates are parametrized, and not the hazard function. In other words, we don't make any distribution assumption about survival times. The regression parameters are estimated by maximizing the partial log-likelihood defined by $$
\ell=\sum_f\log\left(\frac{\exp(\beta'x_f)}{\sum_{r(f)}\exp(\beta'x_r)}\right)
$$ where the first summation is over all deaths or failures $f$, and the second summation is over all subjects $r(f)$ still alive (but at risk) at the time of failure -- this is known as the risk set . In other words, $\ell$ can be interpreted as the log profile likelihood for $\beta$ after eliminating $h_0(t)$ (or in other words, the LL where the $h_0(t)$ have been replaced by functions of $\beta$ that maximize the likelihood with respect to $h_0(t)$ for a fixed vector $\beta$). About censoring, it is not clear whether you refer to left censoring (as might be the case if we consider an origin for the time scale that is earlier than the time when observation began, also called delayed entry ), or right-censoring. In any case, more details about the computation of the regression coefficients and how the survival package handles censoring can be found in Therneau and Grambsch, Modeling Survival Data (Springer, 2000). Terry Therneau is the author of the former S package. An online tutorial is available. Survival Analysis in R , by David Diez, provides a good introduction to Survival Analysis in R. A brief overview of $\chi^2$ tests for regression parameters is given on p. 10. Hopefully, this should help clarify the on-line help quoted by @onestop , "coefficients the coefficients of the linear predictor, which multiply the columns of the model matrix." For an applied textbook, I recommend Analyzing Medical Data Using S-PLUS , by Everitt and Rabe-Hesketh (Springer, 2001, chap. 16 and 17), from which most of the above comes.
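As a concrete illustration of reading coef and exp(coef), here is a short sketch using the lung data shipped with the survival package (a single binary covariate, sex, is used; this is an added illustration, not part of the original answer):
library(survival)
fit <- coxph(Surv(time, status) ~ sex, data = lung)
summary(fit)     # 'coef' is the log hazard ratio for sex
exp(coef(fit))   # 'exp(coef)' is the hazard ratio itself, i.e. the multiplicative effect on the baseline hazard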
Another useful reference is John Fox's appendix on Cox Proportional-Hazards Regression for Survival Data . | {
"source": [
"https://stats.stackexchange.com/questions/4528",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1862/"
]
} |
4,544 | Please provide R code which allows one to conduct a between-subjects ANOVA with -3, -1, 1, 3 contrasts. I understand there is a debate regarding the appropriate Sum of Squares (SS) type for such an analysis. However, the default type of SS used in SAS and SPSS (Type III) is considered the standard in my area, so I would like the results of this analysis to match perfectly what is generated by those statistics programs. To be accepted an answer must directly call aov(), but other answers may be voted up (especially if they are easy to understand/use). sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep(c(-3,-3,1,3),each=20)+rnorm(80)) Edit: Please note, the contrast I am requesting is not a simple linear or polynomial contrast but is a contrast derived from a theoretical prediction, i.e. the type of contrasts discussed by Rosenthal and Rosnow. | Type III sums of squares for ANOVA are readily available through the Anova() function from the car package. Contrast coding can be done in several ways, using C() , the contr.* family (as indicated by @nico), or directly the contrasts() function/argument. This is detailed in §6.2 (pp. 144-151) of Modern Applied Statistics with S (Springer, 2002, 4th ed.). Note that aov() is just a wrapper function for the lm() function. It is interesting when one wants to control the error term of the model (like in a within-subject design), but otherwise they both yield the same results (and whichever way you fit your model, you can still output ANOVA or LM-like summaries with summary.aov or summary.lm ). I don't have SPSS to compare the two outputs, but something like
> sample.data <- data.frame(IV=factor(rep(1:4,each=20)),
DV=rep(c(-3,-3,1,3),each=20)+rnorm(80))
> Anova(lm1 <- lm(DV ~ IV, data=sample.data,
contrasts=list(IV=contr.poly)), type="III")
Anova Table (Type III tests)
Response: DV
Sum Sq Df F value Pr(>F)
(Intercept) 18.08 1 21.815 1.27e-05 ***
IV 567.05 3 228.046 < 2.2e-16 ***
Residuals 62.99 76
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 is worth trying in the first instance. About factor coding in R vs. SAS: R considers the baseline or reference level as the first level in lexicographic order, whereas SAS considers the last one. So, to get comparable results, you have to either use contr.SAS() or relevel() your R factor. | {
"source": [
"https://stats.stackexchange.com/questions/4544",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
]
} |
4,551 | I'm a grad student in psychology, and as I pursue more and more independent studies in statistics, I am increasingly amazed by the inadequacy of my formal training. Both personal and second hand experience suggests that the paucity of statistical rigor in undergraduate and graduate training is rather ubiquitous within psychology. As such, I thought it would be useful for independent learners like myself to create a list of "Statistical Sins", tabulating statistical practices taught to grad students as standard practice that are in fact either superseded by superior (more powerful, or flexible, or robust, etc.) modern methods or shown to be frankly invalid. Anticipating that other fields might also experience a similar state of affairs, I propose a community wiki where we can collect a list of statistical sins across disciplines. Please, submit one "sin" per answer. | Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed; a fact that, in my opinion, calls into question the standard approaches to the teaching of hypothesis tests and tests of significance. Haller and Krause have found that statistical instructors are almost as likely as students to misinterpret p-values. (Take the test in their paper and see how you do.) Steve Goodman makes a good case for discarding the conventional (mis-)use of the p -value in favor of likelihoods. The Hubbard paper is also worth a look. Haller and Krauss. Misinterpretations of significance: A problem students share with their teachers . Methods of Psychological Research (2002) vol. 7 (1) pp. 1-20 ( PDF ) Hubbard and Bayarri. Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing . The American Statistician (2003) vol. 57 (3) Goodman. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med (1999) vol. 130 (12) pp. 995-1004 ( PDF ) Also see: Wagenmakers, E-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804. for some clear cut cases where even the nominally "correct" interpretation of a p-value has been made incorrect due to the choices made by the experimenter. Update (2016) : In 2016, American Statistical Association issued a statement on p-values, see here . This was, in a way, a response to the "ban on p-values" issued by a psychology journal about a year earlier. | {
"source": [
"https://stats.stackexchange.com/questions/4551",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/364/"
]
} |
4,608 | I'm trying to implement basic gradient descent and I'm testing it with a hinge loss function i.e. $l_{\text{hinge}} = \max(0,1-y\ \boldsymbol{x}\cdot\boldsymbol{w})$. However, I'm confused about the gradient of the hinge loss. I'm under the impression that it is $$
\frac{\partial }{\partial w}l_{\text{hinge}} =
\begin{cases}
-y\ \boldsymbol{x} &\text{if } y\ \boldsymbol{x}\cdot\boldsymbol{w} < 1 \\
0&\text{if } y\ \boldsymbol{x}\cdot\boldsymbol{w} \geq 1
\end{cases}
$$ But doesn't this return a matrix the same size as $\boldsymbol{x}$? I thought we were looking to return a vector of length $\boldsymbol{w}$? Clearly, I've got something confused somewhere. Can someone point in the right direction here? I've included some basic code in case my description of the task was not clear #Run standard gradient descent
gradient_descent<-function(fw, dfw, n, lr=0.01)
{
#Date to be used
x<-t(matrix(c(1,3,6,1,4,2,1,5,4,1,6,1), nrow=3))
y<-c(1,1,-1,-1)
w<-matrix(0, nrow=ncol(x))
print(sprintf("loss: %f,x.w: %s",sum(fw(w,x,y)),paste(x%*%w, collapse=',')))
#update the weights 'n' times
for (i in 1:n)
{
w<-w-lr*dfw(w,x,y)
print(sprintf("loss: %f,x.w: %s",sum(fw(w,x,y)),paste(x%*%w,collapse=',')))
}
}
#Hinge loss
hinge<-function(w,x,y) max(1-y%*%x%*%w, 0)
d_hinge<-function(w,x,y){ dw<-t(-y%*%x); dw[y%*%x%*%w>=1]<-0; dw}
gradient_descent(hinge, d_hinge, 100, lr=0.01) Update:
While the answer below helped my understanding of the problem, the output of this algorithm is still incorrect for the given data. The loss function reduces by 0.25 each time but converges too fast and the resulting weights do not result in a good classification. Currently the output looks like #y=1,1,-1,-1
"loss: 1.000000, x.w: 0,0,0,0"
"loss: 0.750000, x.w: 0.06,-0.1,-0.08,-0.21"
"loss: 0.500000, x.w: 0.12,-0.2,-0.16,-0.42"
"loss: 0.250000, x.w: 0.18,-0.3,-0.24,-0.63"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
"loss: 0.000000, x.w: 0.24,-0.4,-0.32,-0.84"
... | To get the gradient we differentiate the loss with respect to the $i$th component of $w$. Rewrite the hinge loss in terms of $w$ as $f(g(w))$ where $f(z)=\max(0,1-y\ z)$ and $g(w)=\mathbf{x}\cdot \mathbf{w}$. Using the chain rule we get $$\frac{\partial}{\partial w_i} f(g(w))=\frac{\partial f}{\partial z} \frac{\partial g}{\partial w_i} $$ The first factor, evaluated at $g(w)=\mathbf{x}\cdot \mathbf{w}$, equals $-y$ when $y\ \mathbf{x}\cdot \mathbf{w}<1$ and 0 when $y\ \mathbf{x}\cdot \mathbf{w}>1$. The second factor equals $x_i$. So in the end you get
$$
\frac{\partial f(g(w))}{\partial w_i} =
\begin{cases}
-y\ x_i &\text{if } y\ \mathbf{x}\cdot \mathbf{w} < 1 \\
0&\text{if } y\ \mathbf{x}\cdot \mathbf{w} > 1
\end{cases}
$$ Since $i$ ranges over the components of $x$, you can view the above as a vector quantity, and write $\frac{\partial}{\partial w}$ as shorthand for $(\frac{\partial}{\partial w_1},\frac{\partial}{\partial w_2},\ldots)$ | {
"source": [
"https://stats.stackexchange.com/questions/4608",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2023/"
]
} |
4,659 | I'm more of a programmer than a statistician, so I hope this question isn't too naive. It happens in sampling program executions at random times. If I take N=10 random-time samples of the program's state, I could see function Foo being executed on, for example, I=3 of those samples. I'm interested in what that tells me about the actual fraction of time F that Foo is in execution. I understand that I is binomially distributed with mean F*N. I also know that, given I and N, F follows a beta distribution. In fact I've verified by program the relationship between those two distributions, which is cdfBeta(I, N-I+1, F) + cdfBinomial(N, F, I-1) = 1 The problem is I don't have an intuitive feel for the relationship. I can't "picture" why it works. EDIT: All the answers were challenging, especially @whuber's, which I still need to grok, but bringing in order statistics was very helpful. Nevertheless I've realized I should have asked a more basic question: Given I and N, what is the distribution for F? Everyone has pointed out that it's Beta, which I knew. I finally figured out from Wikipedia ( Conjugate prior ) that it appears to be Beta(I+1, N-I+1) . After exploring it with a program, it appears to be the right answer. So, I would like to know if I'm wrong. And, I'm still confused about the relationship between the two cdfs shown above, why they sum to 1, and if they even have anything to do with what I really wanted to know. | Consider the order statistics $x_{[0]} \le x_{[1]} \le \cdots \le x_{[n]}$ of $n+1$ independent draws from a uniform distribution. Because order statistics have Beta distributions , the chance that $x_{[k]}$ does not exceed $p$ is given by the Beta integral $$\Pr[x_{[k]} \le p] = \frac{1}{B(k+1, n-k+1)} \int_0^p{x^k(1-x)^{n-k}dx}.$$ (Why is this? Here is a non-rigorous but memorable demonstration. The chance that $x_{[k]}$ lies between $p$ and $p + dp$ is the chance that out of $n+1$ uniform values, $k$ of them lie between $0$ and $p$, at least one of them lies between $p$ and $p + dp$, and the remainder lie between $p + dp$ and $1$. To first order in the infinitesimal $dp$ we only need to consider the case where exactly one value (namely, $x_{[k]}$ itself) lies between $p$ and $p + dp$ and therefore $n - k$ values exceed $p + dp$. Because all values are independent and uniform, this probability is proportional to $p^k (dp) (1 - p - dp)^{n-k}$. To first order in $dp$ this equals $p^k(1-p)^{n-k}dp$, precisely the integrand of the Beta distribution. The term $\frac{1}{B(k+1, n-k+1)}$ can be computed directly from this argument as the multinomial coefficient ${n+1}\choose{k,1, n-k}$ or derived indirectly as the normalizing constant of the integral.) By definition, the event $x_{[k]} \le p$ is that the $k+1^\text{st}$ value does not exceed $p$. Equivalently, at least $k+1$ of the values do not exceed $p$: this simple (and I hope obvious) assertion provides the intuition you seek. 
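A quick numerical sanity check of this equivalence can be run in R (the particular n, k and p below are arbitrary illustrative choices): n <- 10; k <- 3; p <- 0.37
pbeta(p, k + 1, n - k + 1)              # P(x_[k] <= p), the Beta cdf of the (k+1)-st smallest of n+1 uniforms
pbinom(k, n + 1, p, lower.tail = FALSE) # P(at least k+1 of the n+1 values <= p), the Binomial tail probability
# Both lines print the same value; relabelling I = k + 1 and N = n + 1 recovers the identity
# stated in the question: pbeta(p, I, N - I + 1) + pbinom(I - 1, N, p) == 1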
The probability of the equivalent statement is given by the Binomial distribution, $$\Pr[\text{at least }k+1\text{ of the }x_i \le p] = \sum_{j=k+1}^{n+1}{{n+1}\choose{j}} p^j (1-p)^{n+1-j}.$$ In summary , the Beta integral breaks the calculation of an event into a series of calculations: finding at least $k+1$ values in the range $[0, p]$, whose probability we normally would compute with a Binomial cdf, is broken down into mutually exclusive cases where exactly $k$ values are in the range $[0, x]$ and 1 value is in the range $[x, x+dx]$ for all possible $x$, $0 \le x \lt p$, and $dx$ is an infinitesimal length. Summing over all such "windows" $[x, x+dx]$--that is, integrating--must give the same probability as the Binomial cdf. | {
"source": [
"https://stats.stackexchange.com/questions/4659",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1270/"
]
} |
4,689 | What are the differences between generative and discriminative (discriminant) models (in the context of Bayesian learning and inference)? and what it is concerned with prediction, decision theory or unsupervised learning? | Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Bayes) explicitly models the joint probability distribution $p(x,y)$ and then uses the Bayes rule to compute $p(y|x)$. On the other hand, a discriminative model (e.g., logistic regression) directly models $p(y|x)$. Some people argue that the discriminative model is better in the sense that it directly models the quantity you care about $(y)$, so you don't have to spend your modeling efforts on the input x (you need to compute $p(x|y)$ as well in a generative model). However, the generative model has its own advantages such as the capability of dealing with missing data, etc. For some comparison, you can take a look at this paper: On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes There can be cases when one model is better than the other (e.g., discriminative models usually tend to do better if you have lots of data; generative models may be better if you have some extra unlabeled data). In fact, there exists hybird models too that try to bring in the best of both worlds. See this paper for an example: Principled hybrids of generative and discriminative models | {
"source": [
"https://stats.stackexchange.com/questions/4689",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2046/"
]
} |
4,695 | Suppose I have time series observations from distributions drawn from some population. That is, I observe $X_{t,i}$ for $t=1,2,...,T,$ and $i=1,2,...,n$, where I believe that $X_{t,i}$ have pdf $f(\theta_i)$. (I have some idea about the distribution of the $\theta_i$, but that may not be important here.) I have some sample statistic which is a good estimator of $\theta_i$ given some observations. However, there is the suspicion that, in fact, the $\theta_i$ are not stationary, rather the observations come from $f(\theta_{t,i})$, where the $\theta_{t,i}$ are changing slowly over time. How can I test this, either by a formal hypothesis test or an 'eyeball' test? The amount of data available in the time domain is not so great ( i.e. $T$ is not so large), thus partitioning the time domain and computing the sample estimate on each partition would only be advisable for a small number (say 5) of partitions (because otherwise the standard error of the estimate is too great). However, the number of series, $n$, is largeish, say 10,000. I realize there are a number of gaps in this question, e.g. how the $\theta_{t,i}$ might be varying with time, the standard error of the parameter estimator, etc. However, any hints would be appreciated. To be concrete, one could think of the $X_{t,i}$ as being normally distributed with mean $\theta$ and standard deviation $1$, and the sample statistic is the sample mean. | Both are used in supervised learning where you want to learn a rule that maps input x to output y, given a number of training examples of the form $\{(x_i,y_i)\}$. A generative model (e.g., naive Bayes) explicitly models the joint probability distribution $p(x,y)$ and then uses the Bayes rule to compute $p(y|x)$. On the other hand, a discriminative model (e.g., logistic regression) directly models $p(y|x)$. Some people argue that the discriminative model is better in the sense that it directly models the quantity you care about $(y)$, so you don't have to spend your modeling efforts on the input x (you need to compute $p(x|y)$ as well in a generative model). However, the generative model has its own advantages such as the capability of dealing with missing data, etc. For some comparison, you can take a look at this paper: On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes There can be cases when one model is better than the other (e.g., discriminative models usually tend to do better if you have lots of data; generative models may be better if you have some extra unlabeled data). In fact, there exists hybird models too that try to bring in the best of both worlds. See this paper for an example: Principled hybrids of generative and discriminative models | {
"source": [
"https://stats.stackexchange.com/questions/4695",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/795/"
]
} |
4,700 | In simple terms, how would you explain (perhaps with simple examples) the difference between fixed effect, random effect and mixed effect models? | Statistician Andrew Gelman says that the terms 'fixed effect' and 'random effect' have variable meanings depending on who uses them. Perhaps you can pick out which one of the 5 definitions applies to your case. In general it may be better to either look for equations which describe the probability model the authors are using (when reading) or write out the full probability model you want to use (when writing). Here we outline five definitions that we have seen: Fixed effects are constant across individuals, and random effects vary. For example, in a growth study, a model with random intercepts $a_i$ and fixed slope $b$ corresponds to parallel lines for different individuals $i$ , or the model $y_{it} = a_i + b t$ . Kreft and De Leeuw (1998) thus distinguish between fixed and random coefficients. Effects are fixed if they are interesting in themselves or random if there is interest in the underlying population. Searle, Casella, and McCulloch (1992, Section 1.4) explore this distinction in depth. “When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small (i.e., negligible) part of the population the corresponding variable is random.” (Green and Tukey, 1960) “If an effect is assumed to be a realized value of a random variable, it is called a random effect.” (LaMotte, 1983) Fixed effects are estimated using least squares (or, more generally, maximum likelihood) and random effects are estimated with shrinkage (“linear unbiased prediction” in the terminology of Robinson, 1991). This definition is standard in the multilevel modeling literature (see, for example, Snijders and Bosker, 1999, Section 4.2) and in econometrics. [ Gelman, 2004, Analysis of variance—why it is more important than ever. The Annals of Statistics. ] | {
"source": [
"https://stats.stackexchange.com/questions/4700",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1991/"
]
} |
4,713 | I have used the following r code to estimate the confidence intervals of a binomial proportion because I understand that that substitutes for a "power calculation" when designing receiver operating characteristic curve designs looking at detection of diseases in a population. n is 150, and the disease, we believe, is 25% prevalent in the population. I have calculated the values for 75% sensitivity and 90% specificity (because that's what people seem to do). binom.test(c(29,9), p=0.75, alternative=c("t"), conf.level=0.95)
binom.test(c(100, 12), p=0.90, alternative=c("t"), conf.level=0.95) I have also visited this site: http://statpages.org/confint.html Which is a java page which calculates binomial confidence intervals, and it gives the same answer. Anyway, after that lengthy set-up, I want to ask why the confidence intervals are not symmetric, e.g. sensitivity is 95 percent confidence interval:
0.5975876 0.8855583
sample estimate probability: 0.7631579 Sorry if this is a basic question, but everywhere I look seems to suggest that they will be symmetric, and a colleague of mine seems to think they will be too. | They're believed to be symmetric because quite often a normal approximation is used, which works well enough when p lies around 0.5. binom.test , on the other hand, reports "exact" Clopper-Pearson intervals, which are based on the F distribution (see here for the exact formulas of both approaches). If we were to implement the Clopper-Pearson interval in R, it would look something like this (see note ): Clopper.Pearson <- function(x, n, conf.level){
alpha <- (1 - conf.level) / 2
QF.l <- qf(1 - alpha, 2*n - 2*x + 2, 2*x)
QF.u <- qf(1 - alpha, 2*x + 2, 2*n - 2*x)
ll <- if (x == 0){
0
} else { x / ( x + (n-x+1)*QF.l ) }
uu <- if (x == n){
1
} else { (x+1)*QF.u / ( n - x + (x+1)*QF.u ) }
return(c(ll, uu))
} You see both in the link and in the implementation that the formula for the upper and the lower limit are completely different. The only case of a symmetric confidence interval is when p=0.5. Using the formulas from the link and taking into account that in this case $n = 2\times x$ it's easy to derive yourself how it comes. I personally understood it better looking at the confidence intervals based on a logistic approach. Binomial data is generally modeled using a logit link function, defined as: $${\rm logit}(x) = \log\! \bigg( \frac{x}{1-x} \bigg)$$ This link function "maps" the error term in a logistic regression to a normal distribution. As a consequence, confidence intervals in the logistic framework are symmetric around the logit values, much like in the classic linear regression framework. The logit transformation is used exactly to allow for using the whole normality-based theory around the linear regression. After doing the inverse transformation: $${\rm logit}^{-1}(x) = \frac{e^x}{1+e^{x}}$$ You get an asymmetric interval again. Now these confidence intervals are actually biased. Their coverage is not what you would expect, especially at the boundaries of the binomial distribution. Yet, as an illustration they show you why it is logic that a binomial distribution has asymmetric confidence intervals. An example in R: logit <- function(x){ log(x/(1-x)) }
inv.logit <- function(x){ exp(x)/(1+exp(x)) }
x <- c(0.2, 0.5, 0.8)
lx <- logit(x)
upper <- lx + 2
lower <- lx - 2
logxtab <- cbind(lx, upper, lower)
logxtab # the confidence intervals are symmetric by construction
xtab <- inv.logit(logxtab)
xtab # back transformation gives asymmetric confidence intervals note : In fact, R uses the beta distribution, but this is completely equivalent and computationally a bit more efficient. The implementation in R is thus different from what I show here, but it gives exactly the same result. | {
"source": [
"https://stats.stackexchange.com/questions/4713",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/199/"
]
} |
4,756 | I have a random sample of Bernoulli random variables $X_1 ... X_N$, where $X_i$ are i.i.d. r.v. and $P(X_i = 1) = p$, and $p$ is an unknown parameter. Obviously, one can find an estimate for $p$: $\hat{p}:=(X_1+\dots+X_N)/N$. My question is how can I build a confidence interval for $p$? | If the average, $\hat{p}$, is not near $1$ or $0$, and the sample size $n$ is sufficiently large (i.e. $n\hat{p}>5$ and $n(1-\hat{p})>5$), the confidence interval can be estimated by a normal distribution and the confidence interval constructed thus: $$\hat{p}\pm z_{1-\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$ If $\hat{p} = 0$ and $n>30$, the $95\%$ confidence interval is approximately $[0,\frac{3}{n}]$ (Jovanovic and Levy, 1997) ; the opposite holds for $\hat{p}=1$. The reference also discusses using $n+1$ and $n+b$ (the latter to incorporate prior information). Else Wikipedia provides a good overview and points to Agresti and Coull (1998) and Ross (2003) for details about the use of estimates other than the normal approximation, the Wilson score, Clopper-Pearson, or Agresti-Coull intervals. These can be more accurate when the above assumptions about $n$ and $\hat{p}$ are not met. R provides the functions binconf {Hmisc} and binom.confint {binom} , which can be used in the following manner: set.seed(0)
p <- runif(1,0,1)
X <- sample(c(0,1), size = 100, replace = TRUE, prob = c(1-p, p))
library(Hmisc)
binconf(sum(X), length(X), alpha = 0.05, method = 'all')
library(binom)
binom.confint(sum(X), length(X), conf.level = 0.95, method = 'all') Agresti, Alan; Coull, Brent A. (1998). "Approximate is better than 'exact' for interval estimation of binomial proportions". The American Statistician 52: 119–126. Jovanovic, B. D. and P. S. Levy, 1997. A Look at the Rule of Three. The American Statistician Vol. 51, No. 2, pp. 137-139 Ross, T. D. (2003). "Accurate confidence intervals for binomial proportion and Poisson rate estimation". Computers in Biology and Medicine 33: 509–531. | {
"source": [
"https://stats.stackexchange.com/questions/4756",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
4,768 | I was asked this question during an interview for a trading position with a proprietary trading firm. I would very much like to know the answer to this question and the intuition behind it. Amoeba Question:
A population of amoebas starts with 1. After 1 period that amoeba can divide into 1, 2, 3, or 0 (it can die) with equal probability. What is the probability that the entire population dies out eventually? | Cute problem. This is the kind of stuff that probabilists do in their heads for fun. The technique is to assume that there is such a probability of extinction, call it $P$. Then, looking at a one-deep decision tree for the possible outcomes we see--using the Law of Total Probability--that $P=\frac{1}{4} + \frac{1}{4}P + \frac{1}{4}P^2 + \frac{1}{4}P^3$ assuming that, in the cases of 2 or 3 "offspring" their extinction probabilities are IID. This equation has two feasible roots, $1$ and $\sqrt{2}-1$. Someone smarter than me might be able to explain why the $1$ isn't plausible. (In branching-process terms: the mean number of offspring is $(0+1+2+3)/4=1.5>1$, and in that supercritical case standard theory says the extinction probability is the smallest non-negative root of the fixed-point equation, here $\sqrt{2}-1\approx 0.414$.) Jobs must be getting tight -- what kind of interviewer expects you to solve cubic equations in your head? | {
"source": [
"https://stats.stackexchange.com/questions/4768",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2079/"
]
} |
4,775 | I was wondering if it is possible to do symbolic computation in R? For example, I was hoping to get the inverse of a symbolic covariance matrix of a 3D Gaussian distribution. Also, can I do symbolic integration and differentiation in R? | Yes. There is the Ryacas package which is hosted on Google Code here . Ryacas has recently been expanded/converted to the rMathpiper package which is hosted here . I have used Ryacas and it is straightforward, but you will need to install Yacas in order for it to work (Yacas does all the heavy lifting; Ryacas is just an R interface to Yacas). There is also the rSymPy project hosted on Google Code here . I haven't tried this one. The idea is similar, though: it links to the SymPy CAS, which does the symbolic work. | {
"source": [
"https://stats.stackexchange.com/questions/4775",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1005/"
]
} |
4,920 | Using Pearson's Correlation Coefficient, I have several variables that are highly correlated ($\rho = 0.978$ and $\rho = 0.989$ for 2 pairs of variables that are in my model). The reason some of the variables are highly correlated is because one variable is used in the calculation for another variable. Example: $B = V / 3000$ and
$E = V * D$ $B$ and $E$ have $\rho = 0.989$ Is it possible for me to just "throw away" one of the variables? | Both B and E are derived from V. B and E are clearly not truly "independent" variables from each other. The underlying variable that really matters here is V. You should probably discard both B and E in this case and keep V only. In a more general situation, when you have two independent variables that are very highly correlated, you definitely should remove one of them because you run into the multicollinearity conundrum and your regression model's regression coefficients related to the two highly correlated variables will be unreliable. Also, in plain English, if two variables are so highly correlated they will obviously impart nearly exactly the same information to your regression model. But, by including both you are actually weakening the model. You are not adding incremental information. Instead, you are infusing your model with noise. Not a good thing. One way you could keep highly correlated variables within your model is to use a Principal Component Analysis (PCA) model instead of regression. PCA models are made to get rid of multicollinearity. The trade off is that you end up with two or three principal components within your model that are often just mathematical constructs and are pretty much incomprehensible in logical terms. PCA is therefore frequently abandoned as a method whenever you have to present your results to an outside audience such as management, regulators, etc... PCA models create cryptic black boxes that are very challenging to explain. | {
"source": [
"https://stats.stackexchange.com/questions/4920",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1894/"
]
} |
4,949 | If two classes $w_1$ and $w_2$ have normal distribution with known parameters ($M_1$, $M_2$ as their means and $\Sigma_1$,$\Sigma_2$ are their covariances) how we can calculate error of the Bayes classifier for them theorically? Also suppose the variables are in N-dimensional space. Note: A copy of this question is also available at https://math.stackexchange.com/q/11891/4051 that is still unanswered. If any of these question get answered, the other one will be deleted. | There's no closed form, but you could do it numerically. As a concrete example, consider two Gaussians with following parameters $$\mu_1=\left(\begin{matrix}
-1\\\\
-1
\end{matrix}\right),
\mu_2=\left(\begin{matrix}
1\\\\
1
\end{matrix}\right)$$ $$\Sigma_1=\left(\begin{matrix}
2&1/2\\\\
1/2&2
\end{matrix}\right),\ \Sigma_2=\left(\begin{matrix}
1&0\\\\
0&1
\end{matrix}\right)$$ The Bayes-optimal classifier boundary corresponds to the set of points where the two densities are equal. Since your classifier will pick the most likely class at every point, you need to integrate the density that is not the highest one at each point. For the problem above, this corresponds to the volumes of the following regions. You can integrate the two pieces separately using some numerical integration package. For the problem above I get 0.253579 using the following Mathematica code: dens1[x_, y_] = PDF[MultinormalDistribution[{-1, -1}, {{2, 1/2}, {1/2, 2}}], {x, y}];
dens2[x_, y_] = PDF[MultinormalDistribution[{1, 1}, {{1, 0}, {0, 1}}], {x, y}];
piece1 = NIntegrate[dens2[x, y] Boole[dens1[x, y] > dens2[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
piece2 = NIntegrate[dens1[x, y] Boole[dens2[x, y] > dens1[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
piece1 + piece2 | {
"source": [
"https://stats.stackexchange.com/questions/4949",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2148/"
]
} |
4,959 | If $X_i$ is exponentially distributed $(i=1,...,n)$ with parameter $\lambda$ and $X_i$'s are mutually independent, what is the expectation of $$ \left(\sum_{i=1}^n {X_i} \right)^2$$ in terms of $n$ and $\lambda$ and possibly other constants? Note: This question has gotten a mathematical answer on https://math.stackexchange.com/q/12068/4051 . The readers would take a look at it too. | If $x_i \sim Exp(\lambda)$, then (under independence), $y = \sum x_i \sim Gamma(n, 1/\lambda)$, so $y$ is gamma distributed (see wikipedia ). So, we just need $E[y^2]$. Since $Var[y] = E[y^2] - E[y]^2$, we know that $E[y^2] = Var[y] + E[y]^2$. Therefore, $E[y^2] = n/\lambda^2 + n^2/\lambda^2 = n(1+n)/\lambda^2$ (see wikipedia for the expectation and variance of the gamma distribution). | {
"source": [
"https://stats.stackexchange.com/questions/4959",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2148/"
]
} |
4,961 | Unlike other articles, I found the wikipedia entry for this subject unreadable for a non-math person (like me). I understood the basic idea, that you favor models with fewer rules. What I don't get is how do you get from a set of rules to a 'regularization score' which you can use to sort the models from least to most overfit. Can you describe a simple regularization method? I'm interested in the context of analyzing statistical trading systems. It would be great if you could describe if/how I can apply regularization to analyze the following two predictive models: Model 1 - price going up when: exp_moving_avg(price, period=50) > exp_moving_avg(price, period=200) Model 2 - price going up when: price[n] < price[n-1] 10 times in a row exp_moving_avg(price, period=200) going up But I'm more interested in getting a feeling for how you do regularization. So if you know better models for explaining it please do. | In simple terms, regularization is tuning or selecting the preferred level of model complexity so your models are better at predicting (generalizing). If you don't do this your models may be too complex and overfit or too simple and underfit, either way giving poor predictions. If you least-squares fit a complex model to a small set of training data you will probably overfit, this is the most common situation. The optimal complexity of the model depends on the sort of process you are modeling and the quality of the data, so there is no a-priori correct complexity of a model. To regularize you need 2 things: A way of testing how good your models are at prediction, for example using cross-validation or a set of validation data (you can't use the fitting error for this). A tuning parameter which lets you change the complexity or smoothness of the model, or a selection of models of differing complexity/smoothness. Basically you adjust the complexity parameter (or change the model) and find the value which gives the best model predictions. Note that the optimized regularization error will not be an accurate estimate of the overall prediction error so after regularization you will finally have to use an additional validation dataset or perform some additional statistical analysis to get an unbiased prediction error. An alternative to using (cross-)validation testing is to use Bayesian Priors or other methods to penalize complexity or non-smoothness, but these require more statistical sophistication and knowledge of the problem and model features. | {
"source": [
"https://stats.stackexchange.com/questions/4961",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/749/"
]
} |
5,007 | I have a plot I'm making in ggplot2 to summarize data that are from a 2 x 4 x 3 celled dataset. I have been able to make panels for the 2-leveled variable using facet_grid(. ~ Age) and to set the x and y axes using aes(x=4leveledVariable, y=DV) . I used aes(group=3leveledvariable, lty=3leveledvariable) to produce the plot so far. This gives me a visualization that is paneled by the 2-leveled variable, with the X axis representing the 4 leveled variable and different lines plotted within the panels for the 3-leveled variable. But the key for the 3-leveled variable is titled with the 3-leveled variable's name and I want it to be a title that has a character space in it. How can I rename the title of the legend? Things I've tried that don't seem to work (where abp is my ggplot2 object): abp <- abp + opts(legend.title="Town Name")
abp <- abp + scale_fill_continuous("Town Name")
abp <- abp + opts(group="Town Name")
abp <- abp + opts(legend.title="Town Name") Example data: ex.data <- data.frame(DV=rnorm(2*4*3), V2=rep(1:2,each=4*3), V4=rep(1:4,each=3), V3=1:3) | Another option is to use p + labs(aesthetic='custom text') For example, Chase's example would look like: library(ggplot2)
ex.data <- data.frame(DV=rnorm(2*4*3),V2=rep(1:2,each=4*3),V4=rep(1:4,each=3),V3=1:3)
p <- qplot(V4, DV, data=ex.data, geom="line", group=V3, linetype=factor(V3)) + facet_grid(. ~ V2)
p + labs(linetype='custom title') and yield the figure: | {
"source": [
"https://stats.stackexchange.com/questions/5007",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
]
} |
5,015 | I am interested in learning (and implementing) an alternative to polynomial interpolation. However, I am having trouble finding a good description of how these methods work, how they relate, and how they compare. I would appreciate your input on the pros/cons/conditions under which these methods or alternatives would be useful, but some good references to texts, slides, or podcasts would be sufficient. | Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a straight line that is constant for the entire possible range of $X$. This may not be appropriate for a given situation. For instance, data sometimes show a curvilinear relationship. This can be dealt with by means of regressing $Y$ onto a transformation of $X$, $f(X)$. Different transformations are possible. In situations where the relationship between $X$ and $Y$ is monotonic , but continually tapers off, a log transform can be used. Another popular choice is to use a polynomial where new terms are formed by raising $X$ to a series of powers (e.g., $X^2$, $X^3$, etc.). This strategy is easy to implement, and you can interpret the fit as telling you how many 'bends' exist in your data (where the number of bends is equal to the highest power needed minus 1). However, regressions based on the logarithm or an exponent of the covariate will fit optimally only when that is the exact nature of the true relationship. It is quite reasonable to imagine that there is a curvilinear relationship between $X$ and $Y$ that is different from the possibilities those transformations afford. Thus, we come to two other strategies. The first approach is loess , a series of weighted linear regressions computed over a moving window. This approach is older, and better suited to exploratory data analysis . The other approach is to use splines. At it's simplest, a spline is a new term that applies to only a portion of the range of $X$. For example, $X$ might range from 0 to 1, and the spline term might only range from .7 to 1. In this instance, .7 is the knot . A simple, linear spline term would be computed like this:
$$
X_{\rm spline} = \begin{cases} 0\quad &\text{if } X\le{.7} \\
X-.7\quad &\text{if } X>.7 \end{cases}
$$ and would be added to your model, in addition to the original $X$ term. The fitted model will show a sharp break at .7 with a straight line from 0 to .7, and the line continuing on with a different slope from .7 to 1. However, a spline term need not be linear. Specifically, it has been determined that cubic splines are especially useful (i.e., $X_{\rm spline}^3$). The sharp break needn't be there, either. Algorithms have been developed that constrain the fitted parameters such that the first and second derivatives match at the knots, which makes the knots impossible to detect in the output. The end result of all this is that just a few knots (usually 3-5) in choice locations (which software can determine for you) can reproduce pretty much any curve. Moreover, the degrees of freedom are calculated correctly, so you can trust the results, which is not true when you look at your data first and then decide to fit a squared term because you saw a bend. In addition, all of this is just another (albeit more complicated) version of the basic linear model. Thus, everything that we get with linear models comes with this (e.g., predictions, residuals, confidence bands, tests, etc.). These are substantial advantages. The simplest introduction to these topics that I know of is: Fox, J. (2000). Nonparametric Simple Regression: Smoothing Scatterplots , Sage. | {
"source": [
"https://stats.stackexchange.com/questions/5015",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1381/"
]
} |
5,026 | What is the difference between data mining, statistics, machine learning and AI? Would it be accurate to say that they are 4 fields attempting to solve very similar problems but with different approaches? What exactly do they have in common and where do they differ? If there is some kind of hierarchy between them, what would it be? Similar questions have been asked previously but I still don't get it: Data Mining and Statistical Analysis The Two Cultures: statistics vs. machine learning? | There is considerable overlap among these, but some distinctions can be made. Of necessity, I will have to over-simplify some things or give short-shrift to others, but I will do my best to give some sense of these areas. Firstly, Artificial Intelligence is fairly distinct from the rest. AI is the study of how to create intelligent agents. In practice, it is how to program a computer to behave and perform a task as an intelligent agent (say, a person) would. This does not have to involve learning or induction at all, it can just be a way to 'build a better mousetrap'. For example, AI applications have included programs to monitor and control ongoing processes (e.g., increase aspect A if it seems too low). Notice that AI can include darn-near anything that a machine does, so long as it doesn't do it 'stupidly'. In practice, however, most tasks that require intelligence require an ability to induce new knowledge from experiences. Thus, a large area within AI is machine learning . A computer program is said to learn some task from experience if its performance at the task improves with experience, according to some performance measure. Machine learning involves the study of algorithms that can extract information automatically (i.e., without on-line human guidance). It is certainly the case that some of these procedures include ideas derived directly from, or inspired by, classical statistics, but they don't have to be. Similarly to AI, machine learning is very broad and can include almost everything, so long as there is some inductive component to it. An example of a machine learning algorithm might be a Kalman filter. Data mining is an area that has taken much of its inspiration and techniques from machine learning (and some, also, from statistics), but is put to different ends . Data mining is carried out by a person , in a specific situation, on a particular data set, with a goal in mind. Typically, this person wants to leverage the power of the various pattern recognition techniques that have been developed in machine learning. Quite often, the data set is massive , complicated , and/or may have special problems (such as there are more variables than observations). Usually, the goal is either to discover / generate some preliminary insights in an area where there really was little knowledge beforehand, or to be able to predict future observations accurately. Moreover, data mining procedures could be either 'unsupervised' (we don't know the answer--discovery) or 'supervised' (we know the answer--prediction). Note that the goal is generally not to develop a more sophisticated understanding of the underlying data generating process. Common data mining techniques would include cluster analyses, classification and regression trees, and neural networks. I suppose I needn't say much to explain what statistics is on this site, but perhaps I can say a few things. Classical statistics (here I mean both frequentist and Bayesian) is a sub-topic within mathematics. 
I think of it as largely the intersection of what we know about probability and what we know about optimization. Although mathematical statistics can be studied as simply a Platonic object of inquiry, it is mostly understood as more practical and applied in character than other, more rarefied areas of mathematics. As such (and notably in contrast to data mining above), it is mostly employed towards better understanding some particular data generating process. Thus, it usually starts with a formally specified model , and from this are derived procedures to accurately extract that model from noisy instances (i.e., estimation--by optimizing some loss function) and to be able to distinguish it from other possibilities (i.e., inferences based on known properties of sampling distributions). The prototypical statistical technique is regression. | {
"source": [
"https://stats.stackexchange.com/questions/5026",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2199/"
]
} |
5,056 | Given the support vectors of a linear SVM, how can I compute the equation of the decision boundary? | The Elements of Statistical Learning , from Hastie et al., has a complete chapter on support vector classifiers and SVMs (in your case, start page 418 on the 2nd edition). Another good tutorial is Support Vector Machines in R , by David Meyer. Unless I misunderstood your question, the decision boundary (or hyperplane) is defined by $x^T\beta + \beta_0=0$ (with $\|\beta\|=1$, and $\beta_0$ the intercept term), or as @ebony said a linear combination of the support vectors. The margin is then $2/\|\beta\|$, following Hastie et al. notations. From the on-line help of ksvm() in the kernlab R package, but see also kernlab – An S4 Package for Kernel Methods in R , here is a toy example: set.seed(101)
x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2))
y <- matrix(c(rep(1,60),rep(-1,60)))
svp <- ksvm(x,y,type="C-svc")
plot(svp,data=x) Note that for the sake of clarity, we don't consider train and test samples.
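As an aside, ksvm() defaults to a Gaussian RBF kernel; if you want a strictly linear SVM, so that the fitted boundary really is the hyperplane $x^T\beta+\beta_0=0$ discussed above, you can ask kernlab for its linear kernel explicitly. A minimal variation on the call above ( svp.lin is just an illustrative name): svp.lin <- ksvm(x, y, type="C-svc", kernel="vanilladot")
plot(svp.lin, data=x) # same kind of plot, now with a linear decision boundary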
Results are shown below, where color shading helps visualizing the fitted decision values; values around 0 are on the decision boundary. Calling attributes(svp) gives you attributes that you can access, e.g. alpha(svp) # support vectors whose indices may be
# found with alphaindex(svp)
b(svp) # (negative) intercept So, to display the decision boundary, with its corresponding margin, let's try the following (in the rescaled space), which is largely inspired from a tutorial on SVM made some time ago by Jean-Philippe Vert : plot(scale(x), col=y+2, pch=y+2, xlab="", ylab="")
w <- colSums(coef(svp)[[1]] * x[unlist(alphaindex(svp)),])
b <- b(svp)
abline(b/w[1],-w[2]/w[1])
abline((b+1)/w[1],-w[2]/w[1],lty=2)
abline((b-1)/w[1],-w[2]/w[1],lty=2) And here it is: | {
"source": [
"https://stats.stackexchange.com/questions/5056",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2221/"
]
} |
5,115 | What are the most important statisticians, and what is it that made them famous? (Reply just one scientist per answer please.) | Ronald Fisher for his fundamental contributions to the way we analyze data, whether it be the analysis of variance framework, maximum likelihood, permutation tests, or any number of other ground-breaking discoveries. | {
"source": [
"https://stats.stackexchange.com/questions/5115",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1808/"
]
} |
5,135 | The help pages in R assume I know what those numbers mean, but I don't.
I'm trying to really intuitively understand every number here. I will just post the output and comment on what I found out. There might (will) be mistakes, as I'll just write what I assume. Mainly I'd like to know what the t-value in the coefficients mean, and why they print the residual standard error. Call:
lm(formula = iris$Sepal.Width ~ iris$Petal.Width)
Residuals:
Min 1Q Median 3Q Max
-1.09907 -0.23626 -0.01064 0.23345 1.17532 This is a 5-point-summary of the residuals (their mean is always 0, right?). The numbers can be used (I'm guessing here) to quickly see if there are any big outliers. Also you can already see it here if the residuals are far from normally distributed (they should be normally distributed). Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.30843 0.06210 53.278 < 2e-16 ***
iris$Petal.Width -0.20936 0.04374 -4.786 4.07e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Estimates $\hat{\beta_i}$, computed by least squares regression. Also, the standard error is $\sigma_{\beta_i}$. I'd like to know how this is calculated. I have no idea where the t-value and the corresponding p-value come from. I know $\hat{\beta}$ should be normal distributed, but how is the t-value calculated? Residual standard error: 0.407 on 148 degrees of freedom $\sqrt{ \frac{1}{n-p} \epsilon^T\epsilon }$, I guess. But why do we calculate that, and what does it tell us? Multiple R-squared: 0.134, Adjusted R-squared: 0.1282 $ R^2 = \frac{s_\hat{y}^2}{s_y^2} $, which is $ \frac{\sum_{i=1}^n (\hat{y_i}-\bar{y})^2}{\sum_{i=1}^n (y_i-\bar{y})^2} $. The ratio is close to 1 if the points lie on a straight line, and 0 if they are random. What is the adjusted R-squared? F-statistic: 22.91 on 1 and 148 DF, p-value: 4.073e-06 F and p for the whole model, not only for single $\beta_i$s as previous. The F value is $ \frac{s^2_{\hat{y}}}{\sum\epsilon_i} $. The bigger it grows, the more unlikely it is that the $\beta$'s do not have any effect at all. | Five point summary yes, the idea is to give a quick summary of the distribution. It should be roughly symmetrical about mean, the median should be close to 0, the 1Q and 3Q values should ideally be roughly similar values. Coefficients and $\hat{\beta_i}s$ Each coefficient in the model is a Gaussian (Normal) random variable. The $\hat{\beta_i}$ is the estimate of the mean of the distribution of that random variable, and the standard error is the square root of the variance of that distribution. It is a measure of the uncertainty in the estimate of the $\hat{\beta_i}$ . You can look at how these are computed (well the mathematical formulae used) on Wikipedia . Note that any self-respecting stats programme will not use the standard mathematical equations to compute the $\hat{\beta_i}$ because doing them on a computer can lead to a large loss of precision in the computations. $t$ -statistics The $t$ statistics are the estimates ( $\hat{\beta_i}$ ) divided by their standard errors ( $\hat{\sigma_i}$ ), e.g. $t_i = \frac{\hat{\beta_i}}{\hat{\sigma_i}}$ . Assuming you have the same model in object mod as your Q: > mod <- lm(Sepal.Width ~ Petal.Width, data = iris) then the $t$ values R reports are computed as: > tstats <- coef(mod) / sqrt(diag(vcov(mod)))
(Intercept) Petal.Width
53.277950 -4.786461 Where coef(mod) are the $\hat{\beta_i}$ , and sqrt(diag(vcov(mod))) gives the square roots of the diagonal elements of the covariance matrix of the model parameters, which are the standard errors of the parameters ( $\hat{\sigma_i}$ ). The p-value is the probability of achieving a $|t|$ as large as or larger than the observed absolute t value if the null hypothesis ( $H_0$ ) was true, where $H_0$ is $\beta_i = 0$ . They are computed as (using tstats from above): > 2 * pt(abs(tstats), df = df.residual(mod), lower.tail = FALSE)
(Intercept) Petal.Width
1.835999e-98 4.073229e-06 So we compute the upper tail probability of achieving the $t$ values we did from a $t$ distribution with degrees of freedom equal to the residual degrees of freedom of the model. This represents the probability of achieving a $t$ value greater than the absolute values of the observed $t$ s. It is multiplied by 2, because of course $t$ can be large in the negative direction too. Residual standard error The residual standard error is an estimate of the parameter $\sigma$ . The assumption in ordinary least squares is that the residuals are individually described by a Gaussian (normal) distribution with mean 0 and standard deviation $\sigma$ . The $\sigma$ relates to the constant variance assumption; each residual has the same variance and that variance is equal to $\sigma^2$ . Adjusted $R^2$ Adjusted $R^2$ is computed as: $$1 - (1 - R^2) \frac{n - 1}{n - p - 1}$$ The adjusted $R^2$ is the same thing as $R^2$ , but adjusted for the complexity (i.e. the number of parameters) of the model. Given a model with a single parameter, with a certain $R^2$ , if we add another parameter to this model, the $R^2$ of the new model has to increase, even if the added parameter has no statistical power. The adjusted $R^2$ accounts for this by including the number of parameters in the model. $F$ -statistic The $F$ is the ratio of two variances ( $SSR/SSE$ ), the variance explained by the parameters in the model (sum of squares of regression, SSR) and the residual or unexplained variance (sum of squares of error, SSE). You can see this better if we get the ANOVA table for the model via anova() : > anova(mod)
Analysis of Variance Table
Response: Sepal.Width
Df Sum Sq Mean Sq F value Pr(>F)
Petal.Width 1 3.7945 3.7945 22.91 4.073e-06 ***
Residuals 148 24.5124 0.1656
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 The $F$ s are the same in the ANOVA output and the summary(mod) output. The Mean Sq column contains the two variances and $3.7945 / 0.1656 = 22.91$ . We can compute the probability of achieving an $F$ that large under the null hypothesis of no effect, from an $F$ -distribution with 1 and 148 degrees of freedom. This is what is reported in the final column of the ANOVA table. In the simple case of a single, continuous predictor (as per your example), $F = t_{\mathrm{Petal.Width}}^2$ , which is why the p-values are the same. This equivalence only holds in this simple case. | {
"source": [
"https://stats.stackexchange.com/questions/5135",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2091/"
]
} |
5,158 | I understand that when sampling from a finite population and our sample size is more than 5% of the population, we need to make a correction on the sample's mean and standard error using this formula: $\hspace{10mm} FPC=\sqrt{\frac{N-n}{N-1}}$ Where $N$ is the population size and $n$ is the sample size. I have 3 questions about this formula: Why is the threshold set at 5%? How was the formula derived? Are there other online resources that comprehensively explain this formula besides this paper? | The threshold is chosen such that it ensures convergence of the hypergeometric distribution ($\sqrt{\frac{N-n}{N-1}}$ is the factor by which its SD is reduced relative to the binomial SD), instead of a binomial distribution (for sampling with replacement), to a normal distribution (this is the Central Limit Theorem, see e.g., The Normal Curve, the Central Limit Theorem, and Markov's and Chebychev's Inequalities for Random Variables ). In other words, when $n/N\leq 0.05$ (i.e., $n$ is not 'too large' compared to $N$), the FPC can safely be ignored; it is easy to see how the correction factor evolves with varying $n$ for a fixed $N$: with $N=10,000$, we have $\text{FPC}=.9995$ when $n=10$ while $\text{FPC}=.3162$ when $n=9,000$. When $N\to\infty$, the FPC approaches 1 and we are close to the situation of sampling with replacement (i.e., like with an infinite population). To understand these results, a good starting point is to read some online tutorials on sampling theory where sampling is done without replacement ( simple random sampling ). This online tutorial on Nonparametric statistics has an illustration on computing the expectation and variance for a total. You will notice that some authors use $N$ instead of $N-1$ in the denominator of the FPC; in fact, it depends on whether you work with the sample or population statistic: for the variance, it will be $N$ instead of $N-1$ if you are interested in $S^2$ rather than $\sigma^2$. As for online references, I can suggest: Estimation and statistical inference ; A new look at inference for the Hypergeometric Distribution ; Finite Population Sampling with Application to the Hypergeometric Distribution ; and Simple random sampling | {
"source": [
"https://stats.stackexchange.com/questions/5158",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1636/"
]
} |
5,235 | In multiple linear regression, I can understand the correlations between residual and predictors are zero, but what is the expected correlation between residual and the criterion variable? Should it expected to be zero or highly correlated? What's the meaning of that? | In the regression model: $$y_i=\mathbf{x}_i'\beta+u_i$$ the usual assumption is that $(y_i,\mathbf{x}_i,u_i)$, $i=1,...,n$ is an iid sample. Under assumptions that $E\mathbf{x}_iu_i=0$ and $E(\mathbf{x}_i\mathbf{x}_i')$ has full rank, the ordinary least squares estimator: $$\widehat{\beta}=\left(\sum_{i=1}^n\mathbf{x}_i\mathbf{x}_i'\right)^{-1}\sum_{i=1}\mathbf{x}_iy_i$$ is consistent and asymptotically normal. The expected covariance between a residual and the response variable then is: $$Ey_iu_i=E(\mathbf{x}_i'\beta+u_i)u_i=Eu_i^2$$ If we furthermore assume that $E(u_i|\mathbf{x}_1,...,\mathbf{x}_n)=0$ and $E(u_i^2|\mathbf{x}_1,...,\mathbf{x}_n)=\sigma^2$, we can calculate the expected covariance between $y_i$ and its regression residual: $$\begin{align*}
Ey_i\widehat{u}_i&=Ey_i(y_i-\mathbf{x}_i'\widehat{\beta})\\\\
&=E(\mathbf{x}_i'\beta+u_i)(u_i-\mathbf{x}_i(\widehat{\beta}-\beta))\\\\
&=E(u_i^2)\left(1-E\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i\right)
\end{align*}$$ Now to get the correlation we need to calculate $\text{Var}(y_i)$ and $\text{Var}(\hat{u}_i)$. It turns out that $$\text{Var}(\hat u_i)=E(y_i\hat{u}_i),$$ hence $$\text{Corr}(y_i,\hat u_i)=\sqrt{1-E\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i}$$ Now the term $\mathbf{x}_i' \left(\sum_{j=1}^n\mathbf{x}_j\mathbf{x}_j'\right)^{-1}\mathbf{x}_i$ comes from diagonal of the hat matrix $H=X(X'X)^{-1}X'$, where $X=[\mathbf{x}_i,...,\mathbf{x}_N]'$. The matrix $H$ is idempotent, hence it satisfies a following property $$\text{trace}(H)=\sum_{i}h_{ii}=\text{rank}(H),$$ where $h_{ii}$ is the diagonal term of $H$. The $\text{rank}(H)$ is the number of linearly independent variables in $\mathbf{x}_i$, which is usually the number of variables. Let us call it $p$. The number of $h_{ii}$ is the sample size $N$. So we have $N$ nonnegative terms which should sum up to $p$. Usually $N$ is much bigger than $p$, hence a lot of $h_{ii}$ would be close to the zero, meaning that the correlation between the residual and the response variable would be close to 1 for the bigger part of observations. The term $h_{ii}$ is also used in various regression diagnostics for determining influential observations. | {
"source": [
"https://stats.stackexchange.com/questions/5235",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/400/"
]
} |
5,253 | After reading a dataset: dataset <- read.csv("forR.csv") How can I get R to give me the number of cases it contains? Also, will the returned value include or exclude cases omitted with na.omit(dataset) ? | dataset will be a data frame. As I don't have forR.csv , I'll make up a small data frame for illustration: set.seed(1)
dataset <- data.frame(A = sample(c(NA, 1:100), 1000, rep = TRUE),
B = rnorm(1000))
> head(dataset)
A B
1 26 0.07730312
2 37 -0.29686864
3 57 -1.18324224
4 91 0.01129269
5 20 0.99160104
6 90 1.59396745 To get the number of cases, count the number of rows using nrow() or NROW() : > nrow(dataset)
[1] 1000
> NROW(dataset)
[1] 1000 To count the data after omitting the NA , use the same tools, but wrap dataset in na.omit() : > NROW(na.omit(dataset))
[1] 993 The difference between NROW() and NCOL() and their lowercase variants ( ncol() and nrow() ) is that the lowercase versions will only work for objects that have dimensions (arrays, matrices, data frames). The uppercase versions will work with vectors, which are treated as if they were a 1 column matrix, and are robust if you end up subsetting your data such that R drops an empty dimension. Alternatively, use complete.cases() and sum it ( complete.cases() returns a logical vector [ TRUE or FALSE ] indicating, for each row, whether it is free of NA values): > sum(complete.cases(dataset))
[1] 993 | {
"source": [
"https://stats.stackexchange.com/questions/5253",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1950/"
]
} |
5,278 | What way (ways?) is there to visually explain what is ANOVA? Any references, link(s) (R packages?) will be welcomed. | Personally, I like introducing linear regression and ANOVA by showing that it is all the same and that linear models amount to partition the total variance: We have some kind of variance in the outcome that can be explained by the factors of interest, plus the unexplained part (called the 'residual'). I generally use the following illustration (gray line for total variability, black lines for group or individual specific variability) : I also like the heplots R package, from Michael Friendly and John Fox, but see also Visual Hypothesis Tests in Multivariate Linear Models: The heplots Package for R . Standard ways to explain what ANOVA actually does, especially in the Linear Model framework, are really well explained in Plane answers to complex questions , by Christensen, but there are very few illustrations. Saville and Wood's Statistical methods: The geometric approach has some examples, but mainly on regression. In Montgomery's Design and Analysis of Experiments , which mostly focused on DoE, there are illustrations that I like, but see below (these are mine :-) But I think you have to look for textbooks on Linear Models if you want to see how sum of squares, errors, etc. translates into a vector space, as shown on Wikipedia . Estimation and Inference in Econometrics , by Davidson and MacKinnon, seems to have nice illustrations (the 1st chapter actually covers OLS geometry) but I only browse the French translation (available here ). The Geometry of Linear Regression has also some good illustrations. Edit : Ah, and I just remember this article by Robert Pruzek, A new graphic for one-way ANOVA . Edit 2 And now, the granova package (mentioned by @gd047 and associated to the above paper) has been ported to ggplot, see granovaGG with an illustration for one-way ANOVA below. | {
"source": [
"https://stats.stackexchange.com/questions/5278",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/253/"
]
} |
5,292 | Is there any GUI for R that makes it easier for a beginner to start learning and programming in that language? | You can also try the brand-new RStudio . Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice. Update I now like RStudio even more. They actively implement feature requests, and it shows in the little things getting better and better. It also includes Git support (including remote syncing so Github integration is seamless). A bunch of big names just joined so hopefully things will continue getting even better. Update again And indeed things have only gotten better, in rapid fashion. Package build-check cycles are now point-and-click, and the little stuff continues to improve as well. It now comes with an integrated debugging environment , too. | {
"source": [
"https://stats.stackexchange.com/questions/5292",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1808/"
]
} |
5,304 | Dear everyone - I've noticed something strange that I can't explain, can you? In summary: the manual approach to calculating a confidence interval in a logistic regression model, and the R function confint() give different results. I've been going through Hosmer & Lemeshow's Applied logistic regression (2nd edition). In the 3rd chapter there is an example of calculating the odds ratio and 95% confidence interval. Using R, I can easily reproduce the model: Call:
glm(formula = dataset$CHD ~ as.factor(dataset$dich.age), family = "binomial")
Deviance Residuals:
Min 1Q Median 3Q Max
-1.734 -0.847 -0.847 0.709 1.549
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.8408 0.2551 -3.296 0.00098 ***
as.factor(dataset$dich.age)1 2.0935 0.5285 3.961 7.46e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 136.66 on 99 degrees of freedom
Residual deviance: 117.96 on 98 degrees of freedom
AIC: 121.96
Number of Fisher Scoring iterations: 4 However, when I calculate the confidence intervals of the parameters, I get a different interval to the one given in the text: > exp(confint(model))
Waiting for profiling to be done...
2.5 % 97.5 %
(Intercept) 0.2566283 0.7013384
as.factor(dataset$dich.age)1 3.0293727 24.7013080 Hosmer & Lemeshow suggest the following formula: $$
e^{[\hat\beta_1\pm z_{1-\alpha/2}\times\hat{\text{SE}}(\hat\beta_1)]}
$$ and they calculate the confidence interval for as.factor(dataset$dich.age)1 to be (2.9, 22.9). This seems straightforward to do in R: # upper CI for beta
exp(summary(model)$coefficients[2,1]+1.96*summary(model)$coefficients[2,2])
# lower CI for beta
exp(summary(model)$coefficients[2,1]-1.96*summary(model)$coefficients[2,2]) gives the same answer as the book. However, any thoughts on why confint() seems to give different results? I've seen lots of examples of people using confint() . | After having fetched the data from the accompanying website , here is how I would do it: chdage <- read.table("chdage.dat", header=F, col.names=c("id","age","chd"))
chdage$aged <- ifelse(chdage$age>=55, 1, 0)
mod.lr <- glm(chd ~ aged, data=chdage, family=binomial)
summary(mod.lr) The 95% CIs based on profile likelihood are obtained with require(MASS)
exp(confint(mod.lr)) This is the default behaviour of confint() for a GLM once the MASS package is loaded. In this case, I get 2.5 % 97.5 %
(Intercept) 0.2566283 0.7013384
aged 3.0293727 24.7013080 Now, if I wanted to compare with 95% Wald CIs (based on asymptotic normality) like the one you computed by hand, I would use confint.default() instead; this yields 2.5 % 97.5 %
(Intercept) 0.2616579 0.7111663
aged 2.8795652 22.8614705 Wald CIs are good in most situations, although profile-likelihood-based intervals may be useful with complex sampling strategies. If you want to grasp the idea of how they work, here is a brief overview of the main principles: Confidence intervals by the profile likelihood method, with applications in veterinary epidemiology . You can also take a look at Venables and Ripley's MASS book, §8.4, pp. 220-221.
"source": [
"https://stats.stackexchange.com/questions/5304",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1991/"
]
} |
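To make the contrast explicit, here is a short follow-up sketch (added here, not part of the original answer) that reuses the mod.lr object fitted above; the hand-computed Wald limits should match confint.default(), while confint() profiles the likelihood:

# Wald 95% CI "by hand" from the coefficient table: exp(beta_hat +/- z * SE)
est <- coef(summary(mod.lr))["aged", "Estimate"]
se  <- coef(summary(mod.lr))["aged", "Std. Error"]
exp(est + c(-1, 1) * qnorm(0.975) * se)   # close to the book's (2.9, 22.9)

exp(confint.default(mod.lr))   # Wald intervals, same formula as above
exp(confint(mod.lr))           # profile-likelihood intervals (MASS), slightly different and asymmetric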
5,344 | I have fit a few mixed-effects models (particularly longitudinal models) using lme4 in R but would like to really master the models and the code that goes with them. However, before diving in with both feet (and buying some books) I want to be sure that I am learning the right library. I have used lme4 up to now because I just found it easier than nlme, but if nlme is better for my purposes then I feel I should use that. I'm sure neither is "better" in a simplistic way, but I would value some opinions or thoughts. My main criteria are: (1) easy to use (I'm a psychologist by training, and not particularly versed in statistics or coding, but I'm learning); (2) good features for fitting longitudinal data (if there is a difference here, but this is what I mainly use them for); (3) good (easy to interpret) graphical summaries; again, not sure if there is a difference here, but I often produce graphs for people even less technical than I am, so nice clear plots are always good (I'm very fond of the xyplot() function in the lattice package for this reason). | Both packages use Lattice as the backend, but nlme has some nice features like groupedData() and lmList() that are lacking in lme4 (IMO). From a practical perspective, however, the two most important differences seem to be that (1) lme4 extends nlme with other link functions: in nlme you cannot fit outcomes whose distribution is not Gaussian, whereas lme4 can be used to fit, for example, mixed-effects logistic regression; and (2) in nlme it is possible to specify the variance-covariance matrix for the random effects (e.g. an AR(1)), which is not possible in lme4. Now, lme4 can easily handle a very large number of random effects (and hence a large number of individuals in a given study) thanks to its C code and the use of sparse matrices. The nlme package has been somewhat superseded by lme4, so I don't expect people to spend much time developing add-ons on top of nlme. Personally, when I have a continuous response in my model I tend to use both packages, but I'm now used to the lme4 way of fitting GLMMs. Rather than buying a book, take a look first at Doug Bates' draft book on R-forge: lme4: Mixed-effects Modeling with R . | {
"source": [
"https://stats.stackexchange.com/questions/5344",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/199/"
]
} |
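For reference, a minimal sketch (added here, not from the original answer) of the same random-intercept-and-slope longitudinal model in both packages; y, time, subject and df are placeholder names, and the corAR1() line illustrates the kind of correlation structure (here for the within-subject errors) that nlme can specify but lme4 cannot:

library(nlme)
library(lme4)

# df is assumed to hold one row per measurement: outcome y, covariate time, grouping factor subject
m.nlme <- lme(y ~ time, random = ~ time | subject, data = df)
m.nlme.ar1 <- lme(y ~ time, random = ~ time | subject,
                  correlation = corAR1(form = ~ 1 | subject), data = df)   # AR(1) within subject

m.lme4 <- lmer(y ~ time + (time | subject), data = df)

# lme4 also covers non-Gaussian outcomes, e.g. a mixed-effects logistic regression:
# glmer(ybin ~ time + (1 | subject), data = df, family = binomial)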
5,347 | I am modeling a random variable ($Y$) which is the sum of some ~15-40k independent Bernoulli random variables ($X_i$), each with a different success probability ($p_i$). Formally, $Y=\sum X_i$ where $\Pr(X_i=1)=p_i$ and $\Pr(X_i=0)=1-p_i$. I am interested in quickly answering queries such as $\Pr(Y\leq k)$ (where $k$ is given). Currently, I use random simulations to answer such queries. I randomly draw each $X_i$ according to its $p_i$, then sum all $X_i$ values to get $Y'$. I repeat this process a few thousand times and return the fraction of runs in which $Y'\leq k$. Obviously, this is not totally accurate (although accuracy greatly increases as the number of simulations increases). Also, it seems I have enough information about the distribution to avoid the use of simulations. Can you think of a reasonable way to get the exact probability $\Pr(Y\leq k)$? p.s. I use Perl & R. EDIT Following the responses, I thought some clarifications might be needed. I will briefly describe the setting of my problem. Given is a circular genome with circumference c and a set of n ranges mapped to it. For example, c=3*10^9 and ranges={[100,200],[50,1000],[3*10^9-1,1000],...} . Note all ranges are closed (both ends are inclusive). Also note that we only deal with integers (whole units). I am looking for regions on the circle that are undercovered by the given n mapped ranges. So to test whether a given range of length x on the circle is undercovered, I test the hypothesis that the n ranges are mapped randomly. The probability that a mapped range of length q>x will fully cover the given range of length x is (q-x)/c . This probability becomes quite small when c is large and/or q is small. What I'm interested in is the number of ranges (out of n ) which cover x . This is how Y is formed. I test my null hypothesis against a one-sided alternative (undercoverage). Also note that I am testing multiple hypotheses (different x lengths) and make sure to correct for this. | If it often resembles a Poisson , have you tried approximating it by a Poisson with parameter $\lambda = \sum p_i$ ? EDIT : I've found a theoretical result to justify this, as well as a name for the distribution of $Y$: it's called the Poisson binomial distribution . Le Cam's inequality tells you how closely its distribution is approximated by the distribution of a Poisson with parameter $\lambda = \sum p_i$. It tells you that the quality of this approximation is governed by the sum of the squares of the $p_i$s, to paraphrase Steele (1994) . So if all your $p_i$s are reasonably small, as it now appears they are, it should be a pretty good approximation. EDIT 2 : How small is 'reasonably small'? Well, that depends on how good you need the approximation to be! The Wikipedia article on Le Cam's theorem gives the precise form of the result I referred to above: the sum of the absolute differences between the probability mass function (pmf) of $Y$ and the pmf of the above Poisson distribution is no more than twice the sum of the squares of the $p_i$s. Another result from Le Cam (1960) may be easier to use: this sum is also no more than 18 times the largest $p_i$. There are quite a few more such results... see Serfling (1978) for one review. | {
"source": [
"https://stats.stackexchange.com/questions/5347",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/634/"
]
} |
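If an exact answer is wanted rather than the Poisson approximation, the distribution of $Y$ can also be computed directly by convolving the Bernoulli distributions one at a time; the sketch below (added here, not from the original answer) does O(nk) work, which is manageable for n around 15-40k:

# Exact Pr(Y <= k) for Y = sum of independent Bernoulli(p_i)
ppoisbinom <- function(p, k) {
  dp <- c(1, numeric(k))                            # dp[j + 1] = Pr(current partial sum = j)
  for (pr in p) {
    dp <- dp * (1 - pr) + c(0, dp[-(k + 1)]) * pr   # mass pushed beyond k is dropped; the sum can never decrease
  }
  sum(dp)                                           # = Pr(Y <= k)
}

# Sanity check against the Poisson approximation and the simulation approach from the question
set.seed(1)
p <- runif(20000, 0, 0.01)                          # many small p_i, as in the genome setting
k <- 120
ppoisbinom(p, k)
ppois(k, lambda = sum(p))                           # Poisson approximation discussed above
mean(replicate(1000, sum(rbinom(length(p), 1, p)) <= k))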