source_id | question | response | metadata
---|---|---|---|
36,247 | I'm completely new to neural networks but highly interested in understanding them. However it's not easy at all to get started. Could anyone recommend a good book or any other kind of resource? Is there a must-read? I'm thankful for any kind of tip. | Neural networks have been around for a while, and they've changed dramatically over the years. If you only poke around on the web, you might end up with the impression that "neural network" means multi-layer feedforward network trained with back-propagation. Or, you might come across any of the dozens of rarely used, bizarrely named models and conclude that neural networks are more of a zoo than a research project. Or that they're a novelty. Or... I could go on. If you want a clear explanation, I'd listen to Geoffrey Hinton. He has been around forever and (therefore?) does a great job weaving all the disparate models he's worked on into one cohesive, intuitive (and sometimes theoretical) historical narrative. On his homepage, there are links to Google Tech Talks and Videolectures.net lectures he has done (on RBMs and Deep Learning, among others). The way I see it, here's a historical and pedagogical road map to understanding neural networks, from their inception to the state of the art:
- Perceptrons: easy to understand, but severely limited.
- Multi-layer networks trained by back-propagation: many resources to learn these; they don't generally do as well as SVMs.
- Boltzmann machines: an interesting way of thinking about the stability of a recurrent network in terms of "energy". Look at Hopfield networks if you want an easy-to-understand (but not very practical) example of recurrent networks with "energy". Theoretically interesting, useless in practice (training at about the same speed as continental drift).
- Restricted Boltzmann Machines: useful! They build off of the theory of Boltzmann machines, and there are some good introductions on the web.
- Deep Belief Networks: so far as I can tell, this is a class of multi-layer RBMs for doing semi-supervised learning. Some resources. | {
"source": [
"https://stats.stackexchange.com/questions/36247",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14062/"
]
} |
37,370 | I am using the party package in R with 10,000 rows and 34 features, and some factor features have more than 300 levels. The computing time is too long. (It has taken 3 hours so far and it hasn't finished yet.) I want to know what elements have a big effect on the computing time of a random forest. Is it having factors with too many levels? Are there any optimized methods to improve the RF computing time? | The overall complexity of RF is something like $\text{ntree}\cdot\text{mtry}\cdot(\text{number of objects})\cdot\log(\text{number of objects})$; if you want to speed your computations up, you can try the following:
- Use randomForest instead of party, or, even better, ranger or Rborist (although neither is battle-tested yet).
- Don't use the formula interface, i.e. call randomForest(predictors, decision) instead of randomForest(decision~., data=input).
- Use the do.trace argument to see the OOB error in real time; this way you may detect that you can lower ntree.
- About factors: RF (and all tree methods) tries to find an optimal subset of levels, thus scanning $2^{(\text{number of levels})-1}$ possibilities; given that, it is rather naive to expect this one factor to give you so much information -- not to mention that randomForest won't eat factors with more than 32 levels. Maybe you can simply treat it as an ordered variable (and thus equivalent to a normal, numeric variable for RF) or cluster it into some groups, splitting this one attribute into several?
- Check whether your computer has run out of RAM and is using swap space. If so, buy a bigger computer.
- Finally, you can extract some random subset of objects and make some initial experiments on this. | {
"source": [
"https://stats.stackexchange.com/questions/37370",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14123/"
]
} |
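The complexity formula quoted in this answer can be seen directly by timing fits while varying the number of trees and mtry. The sketch below is my own illustration in Python using scikit-learn's RandomForestClassifier rather than the party/randomForest packages from the question (so it ignores the factor-level issue entirely); the synthetic data size roughly mirrors the question and all other values are arbitrary.

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 34))                      # roughly the size from the question
y = (X[:, 0] + rng.normal(size=10_000) > 0).astype(int)

# Fit time should grow roughly with ntree * mtry * n * log(n)
for n_trees in (50, 100, 200):
    for mtry in (3, 10, 34):
        start = time.perf_counter()
        RandomForestClassifier(n_estimators=n_trees, max_features=mtry,
                               n_jobs=1, random_state=0).fit(X, y)
        print(f"ntree={n_trees:3d} mtry={mtry:2d} "
              f"time={time.perf_counter() - start:.2f}s")
```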
37,405 | Can somebody explain what is the natural interpretation for LDA hyperparameters? ALPHA and BETA are parameters of Dirichlet distributions for (per document) topic and (per topic) word distributions respectively. However can someone explain what it means to choose larger values of these hyperparameters versus smaller values? Does that mean putting any prior beliefs in terms of topic sparsity in documents and mutual exclusiveness of topics in terms of words? This question is about latent Dirichlet allocation, but the comment by BGReene immediately below refers to linear discriminant analysis, which confusingly is also abbreviated LDA. | The answer depends on whether you are assuming the symmetric or asymmetric dirichlet distribution (or, more technically, whether the base measure is uniform). Unless something else is specified, most implementations of LDA assume the distribution is symmetric. For the symmetric distribution, a high alpha-value means that each document is likely to contain a mixture of most of the topics, and not any single topic specifically. A low alpha value puts less such constraints on documents and means that it is more likely that a document may contain mixture of just a few, or even only one, of the topics. Likewise, a high beta-value means that each topic is likely to contain a mixture of most of the words, and not any word specifically, while a low value means that a topic may contain a mixture of just a few of the words. If, on the other hand, the distribution is asymmetric, a high alpha-value means that a specific topic distribution (depending on the base measure) is more likely for each document. Similarly, high beta-values means each topic is more likely to contain a specific word mix defined by the base measure. In practice, a high alpha-value will lead to documents being more similar in terms of what topics they contain. A high beta-value will similarly lead to topics being more similar in terms of what words they contain. So, yes, the alpha-parameters specify prior beliefs about topic sparsity/uniformity in the documents. I'm not entirely sure what you mean by "mutual exclusiveness of topics in terms of words" though. More generally, these are concentration parameters for the dirichlet distribution used in the LDA model. To gain some intuitive understanding of how this works, this presentation contains some nice illustrations, as well as a good explanation of LDA in general. An additional comment I'll put here, since I can't comment on your original question: From what I've seen, the alpha- and beta-parameters can somewhat confusingly refer to several different parameterizations. The underlying dirichlet distribution is usually parameterized with the vector $(\alpha_1, \alpha_2, ... ,\alpha_K)$ , but this can be decomposed into the base measure $u = (u_1, u_2, ..., u_K)$ and the concentration parameter $\alpha$, such that $\alpha * \textbf{u} = (\alpha_1, \alpha_2, ... ,\alpha_K)$ . In the case where the alpha parameter is a scalar, it is usually meant the concentration parameter $\alpha$, but it can also mean the values of $(\alpha_1, \alpha_2, ... ,\alpha_K)$, since these will be equal under the symmetrical dirichlet distribution. If it's a vector, it usually refers to $(\alpha_1, \alpha_2, ... ,\alpha_K)$. I'm not sure which parametrization is most common, but in my reply I assume you meant the alpha- and beta-values as the concentration parameters. | {
"source": [
"https://stats.stackexchange.com/questions/37405",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13915/"
]
} |
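A quick way to build intuition for the symmetric case described in this answer is to draw document-topic mixtures from a symmetric Dirichlet at a few concentration values. The sketch below uses NumPy directly rather than any LDA implementation, and the alpha values and topic count are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_topics = 5

for alpha in (0.1, 1.0, 10.0):
    # Each row is one document's topic mixture, theta ~ Dirichlet(alpha, ..., alpha)
    theta = rng.dirichlet([alpha] * n_topics, size=3)
    print(f"alpha = {alpha}")
    print(np.round(theta, 2))

# Low alpha: most of the mass sits on one or two topics (sparse mixtures).
# High alpha: mass is spread across most topics, so documents look more alike.
```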
37,461 | I recently found it necessary to derive a pdf for the square of a normal random variable with mean 0. For whatever reason, I chose not to normalise the variance beforehand. If I did this correctly then this pdf is as follows: $$
N^2(x; \sigma^2) = \frac{1}{\sigma \sqrt{2 \pi} \sqrt{x}} e^{\frac{-x}{2\sigma^2}}
$$ I noticed this was in fact just a parametrisation of a gamma distribution: $$
N^2(x; \sigma^2) = \operatorname{Gamma}(x; \frac{1}{2}, 2 \sigma^2)
$$ And then, from the fact that the sum of two gammas (with the same scale parameter) equals another gamma, it follows that the gamma is equivalent to the sum of $k$ squared normal random variables. $$
N^2_\Sigma(x; k, \sigma^2) = \operatorname{Gamma}(x; \frac{k}{2}, 2 \sigma^2)
$$ This was a bit surprising to me. Even though I knew the $\chi^2$ distribution -- a distribution of the sum of squared standard normal RVs -- was a special case of the gamma, I didn't realise the gamma was essentially just a generalisation allowing for the sum of normal random variables of any variance. This also leads to other characterisations I had not come across before, such as the exponential distribution being equivalent to the sum of two squared normal distributions. This is all somewhat mysterious to me. Is the normal distribution fundamental to the derivation of the gamma distribution, in the manner I outlined above? Most resources I checked make no mention that the two distributions are intrinsically related like this, or even for that matter describe how the gamma is derived. This makes me think some lower-level truth is at play that I have simply highlighted in a convoluted way? | As Prof. Sarwate's comment noted, the relation between the squared normal and the chi-square is a very widely disseminated fact -- as should also be the fact that a chi-square is just a special case of the Gamma distribution: $$X \sim N(0,\sigma^2) \Rightarrow X^2/\sigma^2 \sim \mathcal \chi^2_1 \Rightarrow X^2 \sim \sigma^2\mathcal \chi^2_1= \text{Gamma}\left(\frac 12, 2\sigma^2\right)$$ the last equality following from the scaling property of the Gamma. As regards the relation with the exponential, to be accurate it is the sum of two squared zero-mean normals, each scaled by the variance of the other, that leads to the Exponential distribution: $$X_1 \sim N(0,\sigma^2_1),\;\; X_2 \sim N(0,\sigma^2_2) \Rightarrow \frac{X_1^2}{\sigma^2_1}+\frac{X_2^2}{\sigma^2_2} \sim \mathcal \chi^2_2 \Rightarrow \frac{\sigma^2_2X_1^2+ \sigma^2_1X_2^2}{\sigma^2_1\sigma^2_2} \sim \mathcal \chi^2_2$$ $$ \Rightarrow \sigma^2_2X_1^2+ \sigma^2_1X_2^2 \sim \sigma^2_1\sigma^2_2\mathcal \chi^2_2 = \text{Gamma}\left(1, 2\sigma^2_1\sigma^2_2\right) = \text{Exp}\left(\frac{1}{2\sigma^2_1\sigma^2_2}\right)$$ But the suspicion that there is "something special" or "deeper" in the sum of two squared zero-mean normals that "makes them a good model for waiting time" is unfounded:
First of all, what is special about the Exponential distribution that makes it a good model for "waiting time"? Memorylessness of course, but is there something "deeper" here, or just the simple functional form of the Exponential distribution function, and the properties of $e$? Unique properties are scattered around all over Mathematics, and most of the time, they don't reflect some "deeper intuition" or "structure" - they just exist (thankfully). Second, the square of a variable has very little relation with its level. Just consider $f(x) = x$ in, say, $[-2,\,2]$: ...or graph the standard normal density against the chi-square density: they reflect and represent totally different stochastic behaviors, even though they are so intimately related, since the second is the density of a variable that is the square of the first. The normal may be a very important pillar of the mathematical system we have developed to model stochastic behavior - but once you square it, it becomes something totally else. | {
"source": [
"https://stats.stackexchange.com/questions/37461",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14161/"
]
} |
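The identity $X^2 \sim \text{Gamma}(1/2,\,2\sigma^2)$ for $X \sim N(0,\sigma^2)$ is easy to confirm by simulation. The following sketch compares simulated squared normals with the corresponding Gamma distribution using SciPy; the value of sigma and the sample size are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma = 1.7
x_sq = rng.normal(0, sigma, size=100_000) ** 2

# Gamma(shape=1/2, scale=2*sigma^2), i.e. sigma^2 times a chi-square with 1 df
gamma_ref = stats.gamma(a=0.5, scale=2 * sigma**2)

print("KS statistic vs Gamma(1/2, 2*sigma^2):",
      stats.kstest(x_sq, gamma_ref.cdf).statistic)
print("simulated mean:", x_sq.mean(), " theoretical mean:", gamma_ref.mean())
```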
37,647 | I'm using a mixed model in R (lme4) to analyze some repeated measures data. I have a response variable (fiber content of feces) and 3 fixed effects (body mass, etc.). My study only has 6 participants, with 16 repeated measures for each one (though two only have 12 repeats). The subjects are lizards that were given different combinations of food in different 'treatments'. My question is: can I use subject ID as a random effect? I know this is the usual course of action in longitudinal mixed effects models, to take account of the randomly sampled nature of the subjects and the fact that observations within subjects will be more closely correlated than those between subjects. But, treating subject ID as a random effect involves estimating a mean and variance for this variable. Since I have only 6 subjects (6 levels of this factor), is this enough to get an accurate characterization of the mean and variance? Does the fact that I have quite a few repeated measurements for each subject help in this regard (I don't see how it matters)? Finally, if I can't use subject ID as a random effect, will including it as a fixed effect allow me to control for the fact that I have repeated measures? Edit: I'd just like to clarify that when I say "can I" use subject ID as a random effect, I mean "is it a good idea to". I know I can fit the model with a factor with just 2 levels, but surely this would be indefensible? I'm asking at what point does it become sensible to think about treating subjects as random effects? It seems like the literature advises that 5-6 levels is a lower bound. It seems to me that the estimates of the mean and variance of the random effect would not be very precise until there were 15+ factor levels. | Short answer: Yes, you can use ID as a random effect with 6 levels. Slightly longer answer: @BenBolker's GLMM FAQ says (among other things) the following under the headline "Should I treat factor xxx as fixed or random?": One point of particular relevance to 'modern' mixed model estimation
(rather than 'classical' method-of-moments estimation) is that, for
practical purposes, there must be a reasonable number of
random-effects levels (e.g. blocks) — more than 5 or 6 at a minimum. So you are at the lower bound, but on the right side of it. | {
"source": [
"https://stats.stackexchange.com/questions/37647",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14216/"
]
} |
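To get a feel for how noisy a variance component estimated from only 6 subjects can be, a small simulation helps. The sketch below is my own illustration and sidesteps lme4 entirely: it only looks at the sampling variability of the between-subject variance when it is estimated from 6 versus 30 subject effects, ignoring within-subject measurement error (so it is an optimistic best case). The true variance and number of replications are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_sd = 1.0          # assumed true between-subject standard deviation
n_sims = 5_000

for n_subjects in (6, 30):
    # Draw subject-level effects and estimate their variance many times over
    est_var = np.array([rng.normal(0, true_sd, n_subjects).var(ddof=1)
                        for _ in range(n_sims)])
    lo, hi = np.percentile(est_var, [2.5, 97.5])
    print(f"{n_subjects:2d} subjects: variance estimate typically in "
          f"[{lo:.2f}, {hi:.2f}] (true value 1.00)")
```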
37,865 | I am wondering if there is a simple way of detecting outliers. For one of my projects, which was basically a correlation between the number of times respondents participate in physical activity in a week and the number of times they eat outside the home (fast food) in a week, I drew a scatterplot and literally removed the data points that were extreme. (The scatterplot showed a negative correlation.) This was based on value judgement (based on the scatterplot where these data points were clearly extreme). I did not do any statistical tests. I am just wondering if this is a sound way of dealing with outliers. I have data from 350 people so loss of (say) 20 data points is not a worry to me. | There is no simple sound way to remove outliers. Outliers can be of two kinds: 1) Data entry errors. These are often the easiest to spot and always the easiest to deal with. If you can find the right data, correct it; if not, delete it. 2) Legitimate data that is unusual. This is much trickier. For bivariate data like yours, the outlier could be univariate or bivariate. a) Univariate. First, "unusual" depends on the distribution and the sample size. You give us the sample size of 350, but what is the distribution? It clearly isn't normal, since it's a relatively small integer. What is unusual under a Poisson would not be under a negative binomial. I'd kind of suspect a zero-inflated negative binomial relationship. But even when you have the distribution, the (possible) outliers will affect the parameters. You can look at "leave one out" distributions, where you check if data point q would be an outlier if the data had all points but q. Even then, though, what if there are multiple outliers? b) Bivariate. This is where neither variable's value is unusual in itself, but together they are odd. There is a possibly apocryphal report that the census once said there were 20,000 12 year old widows in the USA. 12 year olds aren't unusual, widows aren't either, but 12 year old widows are. Given all this, it might be simpler to report a robust measure of relationship. | {
"source": [
"https://stats.stackexchange.com/questions/37865",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9342/"
]
} |
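The closing suggestion to "report a robust measure of relationship" can be illustrated with a short sketch: Spearman's rank correlation is far less affected by a single high-leverage bivariate point than Pearson's. The data below are synthetic stand-ins for the activity/fast-food example, and the extreme respondent (40, 40) is made up purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
activity = rng.poisson(4, size=350)                         # times active per week
fastfood = np.clip(6 - activity + rng.poisson(1, 350), 0, None)

def report(x, y, label):
    print(f"{label:12s} Pearson={stats.pearsonr(x, y)[0]:+.2f} "
          f"Spearman={stats.spearmanr(x, y)[0]:+.2f}")

report(activity, fastfood, "clean")

# Add one extreme (but possibly legitimate) respondent
x_out = np.append(activity, 40)
y_out = np.append(fastfood, 40)
report(x_out, y_out, "with outlier")
# A single high-leverage point can swamp Pearson's r; Spearman barely moves.
```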
37,993 | I'm currently working on a quasi-experimental research paper. I only have a sample size of 15 due to low population within the chosen area and that only 15 fit my criteria. Is 15 the minimum sample size to compute for t-test and F-test? If so, where can I get an article or book to support this small sample size? This paper was already defended last Monday and one of the panel asked to have a supporting reference because my sample size is too low. He said it should've been at least 40 respondents. | There is no minimum sample size for the t test to be valid other than it be large enough to calculate the test statistic. Validity requires that the assumptions for the test statistic hold approximately. Those assumptions are in the one sample case that the data are iid normal (or approximately normal) with mean 0 under the null hypothesis and a variance that is unknown but estimated from the sample. In the two sample case it is that both samples are independent of each other and each sample consists of iid normal variables with the two samples having the same mean and a common unknown variance under the null hypothesis. A pooled estimate of variance is used for the statistic. In the one sample case the distribution under the null hypothesis is a central t with n-1 degrees of freedom. In the two sample cases with sample sizes n and m not necessarily equal the null distribution of the test statistics is t with n+m-2 degrees of freedom. The increased variability due to low sample size is accounted for in the distribution which has heavier tails when the degrees of freedom is low which corresponds to a low sample size. So critical values can be found for the test statistic to have a given significance level for any sample size (well, at least of size 2 or larger). The problem with low sample size is with regard to the power of the test. The reviewer may have felt that 15 per group was not a large enough sample size to have high power of detecting a meaningful difference say delta between the two means or a mean greater than delta in absolute value for a one sample problem. Needing 40 would require a specification of a certain power at a particular delta that would be achieved with n equal 40 but not lower than 40. I should add that for the t test to be performed the sample must be large enough to estimate the variance or variances. | {
"source": [
"https://stats.stackexchange.com/questions/37993",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14383/"
]
} |
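The distinction drawn in this answer -- validity versus power -- can be made concrete with a power calculation. The sketch below uses statsmodels' TTestIndPower for a two-sample t test; the standardized effect size of 0.8 and the 5% significance level are assumptions made for illustration, not values taken from the question.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.8   # assumed standardized difference between means (Cohen's d)

for n_per_group in (15, 40):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0, alternative='two-sided')
    print(f"n = {n_per_group} per group -> power ~ {power:.2f}")

# Smallest n per group needed for 80% power at this assumed effect size
n_needed = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
print("n per group for 80% power:", round(n_needed, 1))
```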
38,001 | Following my question here , I am wondering if there are strong views for or against the use of standard deviation to detect outliers (e.g. any datapoint that is more than 2 standard deviation is an outlier). I know this is dependent on the context of the study, for instance a data point, 48kg, will certainly be an outlier in a study of babies' weight but not in a study of adults' weight. Outliers are the result of a number of factors such as data entry mistakes. In my case, these processes are robust. I guess the question I am asking is: Is using standard deviation a sound method for detecting outliers? | Some outliers are clearly impossible . You mention 48 kg for baby weight. This is clearly an error. That's not a statistical issue, it's a substantive one. There are no 48 kg human babies. Any statistical method will identify such a point. Personally, rather than rely on any test (even appropriate ones, as recommended by @Michael) I would graph the data. Showing that a certain data value (or values) are unlikely under some hypothesized distribution does not mean the value is wrong and therefore values shouldn't be automatically deleted just because they are extreme. In addition, the rule you propose (2 SD from the mean) is an old one that was used in the days before computers made things easy. If N is 100,000, then you certainly expect quite a few values more than 2 SD from the mean, even if there is a perfect normal distribution. But what if the distribution is wrong? Suppose, in the population, the variable in question is not normally distributed but has heavier tails than that? | {
"source": [
"https://stats.stackexchange.com/questions/38001",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9342/"
]
} |
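Both points above -- that a large sample is expected to contain many values beyond 2 SD even under perfect normality, and that heavy tails change the picture -- are easy to quantify. The sketch below uses synthetic data with an arbitrary sample size and an arbitrary choice of t(3) as the heavy-tailed comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

normal = rng.normal(size=n)
z_norm = (normal - normal.mean()) / normal.std()
print("expected count beyond 2 SD if truly normal:", round(n * 2 * stats.norm.sf(2)))
print("observed count beyond 2 SD (normal sample):", int(np.sum(np.abs(z_norm) > 2)))

# With heavier tails (t with 3 df) the most extreme, perfectly legitimate
# values sit much further from the mean when measured in SD units.
heavy = rng.standard_t(3, size=n)
z_heavy = (heavy - heavy.mean()) / heavy.std()
print("largest |z| in the normal sample:", round(np.abs(z_norm).max(), 1))
print("largest |z| in the t(3) sample  :", round(np.abs(z_heavy).max(), 1))
```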
38,370 | There's a distinction that's tripping me up with mixed models, and I'm wondering if I could get some clarity on it. Let's assume you've got a mixed model of count data. There's a variable you know you want as a fixed effect (A) and another variable for time (T), grouped by say a "Site" variable. As I understand it: glmer(counts ~ A + T, data=data, family="Poisson") is a fixed effects model. glmer(counts ~ (A + T | Site), data=data, family="Poisson") is a random effect model. My question is when you have something like: glmer(counts ~ A + T + (T | Site), data=data, family="Poisson") what is T? Is it a random effect? A fixed effect? What's actually being accomplished by putting T in both places? When should something only appear in the random effects section of the model formula? | This may become clearer by writing out the model formula for each of these three models. Let $Y_{ij}$ be the observation for person $i$ in site $j$ in each model and define $A_{ij}, T_{ij}$ analogously to refer to the variables in your model. glmer(counts ~ A + T, data=data, family="Poisson") is the model $$ \log \big( E(Y_{ij}) \big) = \beta_0 + \beta_1 A_{ij} + \beta_2 T_{ij} $$ which is just an ordinary poisson regression model. glmer(counts ~ (A + T|Site), data=data, family="Poisson") is the model $$ \log \big( E(Y_{ij}) \big) = \alpha_0 + \eta_{j0} + \eta_{j1} A_{ij} + \eta_{j2} T_{ij} $$ where $\eta_{j} = (\eta_{j0}, \eta_{j1}, \eta_{j2}) \sim N(0, \Sigma)$ are random effects that are shared by each observation made by individuals from site $j$. These random effects are allowed to be freely correlated (i.e. no restrictions are made on $\Sigma$) in the model you specified. To impose independence, you have to place them inside different brackets, e.g. (A-1|Site) + (T-1|Site) + (1|Site) would do it. This model assumes that $\log \big( E(Y_{ij}) \big)$ is $\alpha_0$ for all sites but each site has a random offset ($\eta_{j0}$) and has a random linear relationship with both $A_{ij}, T_{ij}$. glmer(counts ~ A + T + (T|Site), data=data, family="Poisson") is the model $$ \log \big( E(Y_{ij}) \big) = (\theta_0 + \gamma_{j0}) + \theta_1 A_{ij} + (\theta_2 + \gamma_{j1}) T_{ij} $$ So now $\log \big( E(Y_{ij}) \big)$ has some "average" relationship with $A_{ij}, T_{ij}$, given by the fixed effects $\theta_0, \theta_1, \theta_2$ but that relationship is different for each site and those differences are captured by the random effects, $\gamma_{j0}, \gamma_{j1}, \gamma_{j2}$. That is, the baseline is random shifted and the slopes of the two variables are randomly shifted and everyone from the same site shares the same random shift. what is T? Is it a random effect? A fixed effect? What's actually being accomplished by putting T in both places? $T$ is one of your covariates. It is not a random effect - Site is a random effect. There is a fixed effect of $T$ that is different depending on the random effect conferred by Site - $\gamma_{j1}$ in the model above. What is accomplished by including this random effect is to allow for heterogeneity between sites in the relationship between $T$ and $\log \big( E(Y_{ij}) \big)$. When should something only appear in the random effects section of the model formula? This is a matter of what makes sense in the context of the application. Regarding the intercept - you should keep the fixed intercept in there for a lot of reasons (see, e.g., here ); re: the random intercept, $\gamma_{j0}$, this primarily acts to induce correlation between observations made at the same site. 
If it doesn't make sense for such correlation to exist, then the random effect should be excluded. Regarding the random slopes, a model with only random slopes and no fixed slopes reflects a belief that, for each site, there is some relationship between $\log \big( E(Y_{ij}) \big)$ and your covariates for each site, but if you average those effects over all sites, then there is no relationship. For example, if you had a random slope in $T$ but no fixed slope, this would be like saying that time, on average, has no effect (e.g. no secular trends in the data) but each Site is heading in a random direction over time, which could make sense. Again, it depends on the application. Note that you can fit the model with and without random effects to see if this is happening - you should see no effect in the fixed model but significant random effects in the subsequent model. I must caution you that decisions like this are often better made based on an understanding of the application rather than through model selection. | {
"source": [
"https://stats.stackexchange.com/questions/38370",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5836/"
]
} |
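The third model formula in this answer can be made concrete by simulating data from it. The sketch below generates Poisson counts with a site-level random intercept and random slope in T using NumPy only; it does not fit the model (so it is not a substitute for glmer), and every parameter value is an arbitrary assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per_site = 12, 50

theta0, theta1, theta2 = 0.5, 0.3, -0.2     # assumed fixed effects
sd_int, sd_slope = 0.4, 0.25                # assumed SDs of the site random effects

# gamma_j0 (random intercept) and gamma_j1 (random slope in T), one pair per site
gamma0 = rng.normal(0, sd_int, n_sites)
gamma1 = rng.normal(0, sd_slope, n_sites)

site = np.repeat(np.arange(n_sites), n_per_site)
A = rng.normal(size=site.size)
T = np.tile(np.linspace(0, 1, n_per_site), n_sites)

log_mu = (theta0 + gamma0[site]) + theta1 * A + (theta2 + gamma1[site]) * T
counts = rng.poisson(np.exp(log_mu))

# Every observation from site j shares the same gamma_j0 and gamma_j1, so the
# T-slope (theta2 + gamma_j1) differs by site but equals theta2 "on average".
print(counts[:10], counts.mean())
```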
38,420 | In his book "All of Statistics", Prof. Larry Wasserman presents the following Example (11.10, page 188). Suppose that we have a density $f$ such that $f(x)=c\,g(x)$, where $g$ is a known (nonnegative, integrable) function, and the normalization constant $c>0$ is unknown . We are interested in those cases where we can't compute $c=1/\int g(x)\,dx$. For example, it may be the case that $f$ is a pdf over a very high-dimensional sample space. It is well known that there are simulation techniques that allow us to sample from $f$, even though $c$ is unknown. Hence, the puzzle is: How could we estimate $c$ from such a sample? Prof. Wasserman describes the following Bayesian solution: let $\pi$ be some prior for $c$. The likelihood is
$$
L_x(c) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n \left(c\,g(x_i)\right) = c^n \prod_{i=1}^n g(x_i) \propto c^n \, .
$$
Therefore, the posterior
$$
\pi(c\mid x) \propto c^n \pi(c)
$$
does not depend on the sample values $x_1,\dots,x_n$. Hence, a Bayesian can't use the information contained in the sample to make inferences about $c$. Prof. Wasserman points out that "Bayesians are slaves of the likelihood function. When the likelihood goes awry, so will Bayesian inference". My question for my fellow stackers is: Regarding this particular example, what went wrong (if anything) with Bayesian methodology? P.S. As Prof. Wasserman kindly explained in his answer, the example is due to Ed George. | The proposed statistical model may be described as follows: You have a known nonnegative integrable function $g:\mathbb{R}\to\mathbb{R}$, and a nonnegative random variable $C$. The random variables $X_1,\dots,X_n$ are supposed to be conditionally independent and identically distributed, given that $C=c$, with conditional density $f_{X_i\mid C}(x_i\mid c)=c\,g(x_i)$, for $c>0$. Unfortunately, in general, this is not a valid description of a statistical model. The problem is that, by definition, $f_{X_i\mid C}(\,\cdot\mid c)$ must be a probability density for almost every possible value of $c$, which is, in general, clearly false. In fact, it is true just for the single value $c=\left(\int_{-\infty}^\infty g(x)\,dx\right)^{-1}$. Therefore, the model is correctly specified only in the trivial case when the distribution of $C$ is concentrated at this particular value. Of course, we are not interested in this case. What we want is the distribution of $C$ to be dominated by Lebesgue measure, having a nice pdf $\pi$. Hence, defining $x=(x_1,\dots,x_n)$, the expression
$$
L_x(c) = \prod_{i=1}^n \left(c\,g(x_i)\right) \, ,
$$
taken as a function of $c$, for fixed $x$, does not correspond to a genuine likelihood function. Everything after that inherits from this problem. In particular, the posterior computed with Bayes's Theorem is bogus. It's easy to see that: suppose that you have a proper prior
$$
\pi(c) = \frac{1}{c^2} \,I_{[1,\infty)}(c) \, .
$$
Note that $\int_0^\infty \pi(c)\,dc=1$. According to the computation presented in the example, the posterior should be
$$
\pi(c\mid x) \propto \frac{1}{c^{2-n}}\, I_{[1,\infty)}(c) \, .
$$
But if that is right, this posterior would be always improper, because
$$
\int_0^\infty \frac{1}{c^{2-n}}\,I_{[1,\infty)}(c)\,dc
$$
diverges for every sample size $n\geq 1$. This is impossible: we know that if we start with a proper prior, our posterior can't be improper for every possible sample (it may be improper inside a set of null prior predictive probability). | {
"source": [
"https://stats.stackexchange.com/questions/38420",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9394/"
]
} |
38,856 | I'm sorry if this seems a bit too basic, but I guess I'm just looking to confirm understanding here. I get the sense I'd have to do this in two steps, and I've started trying to grok correlation matrices, but it's just starting to seem really involved. I'm looking for a concise explanation (ideally with hints towards a pseudocode solution) of a good, ideally quick way to generate correlated random numbers. Given two pseudorandom variables height and weight with known means and variances, and a given correlation, I think I'm basically trying to understand what this second step should look like: height = gaussianPdf(height.mean, height.variance)
weight = gaussianPdf(correlated_mean(height.mean, correlation_coefficient),
correlated_variance(height.variance,
correlation_coefficient)) How do I calculate the correlated mean and variance? But I want to confirm that's really the relevant problem here. Do I need to resort to matrix manipulation? Or do I have something else very wrong in my basic approach to this problem? | To answer your question on "a good, ideally quick way to generate correlated random numbers":
Given a desired variance-covariance matrix $C$ that is by definition positive definite, its Cholesky decomposition is $C = LL^T$, with $L$ a lower triangular matrix. If you now use this matrix $L$ to project an uncorrelated random variable vector $X$, the resulting projection $Y = LX$ will be a vector of correlated random variables. You can find a concise explanation of why this happens here. | {
"source": [
"https://stats.stackexchange.com/questions/38856",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14728/"
]
} |
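As a numerical check of the Cholesky construction described above, here is a short NumPy sketch for the height/weight example; the means, standard deviations, and correlation of 0.6 are made-up illustration values. Note that np.linalg.cholesky returns the lower-triangular factor $L$, and the mean vector is added after the projection.

```python
import numpy as np

rng = np.random.default_rng(0)

mean = np.array([170.0, 70.0])          # assumed mean height (cm) and weight (kg)
sd = np.array([10.0, 12.0])             # assumed standard deviations
rho = 0.6                               # assumed height-weight correlation

# Build the covariance matrix C and its Cholesky factor L (so that C = L L^T)
corr = np.array([[1.0, rho], [rho, 1.0]])
C = np.outer(sd, sd) * corr
L = np.linalg.cholesky(C)

# Project uncorrelated standard normals: Y = mean + L X
X = rng.normal(size=(2, 10_000))
Y = mean[:, None] + L @ X

print("sample correlation:", np.corrcoef(Y)[0, 1])   # close to 0.6
print("sample SDs:", Y.std(axis=1))                  # close to [10, 12]
```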
38,870 | A certain auditorium has 30 rows of seats. Row 1 has 11 seats, while Row 2 has 12 seats, Row 3 has 13 seats, and so on to the back of the auditorium where Row 30 has 40 seats. A door prize is to be given away by randomly selecting a row (with equal probability of selecting any of the 30 rows) and then randomly selecting a seat within that row (with each seat in the row equally likely to be selected).
Now,
1) Find the probability that Seat 15 was selected given that row 20 was selected ?
2) Find the probability that Row 20 was selected given that Seat 15 was selected ? To answer the first, given that Row 20 was selected, there are 30 possible seats in Row 20 that are equally likely to be selected. Hence Pr(Seat 15 | Row 20) = 1/30.
The same kind of argument can be given to answer the second : given that Seat 15 was selected, there are 30 possible rows that are equally likely to be selected. Hence Pr(Row 20 | Seat 15) = 1/30. Now, it turns out that the first answer is correct whereas the second answer is incorrect. My question is where am I making mistakes in computing the second answer ? | | {
"source": [
"https://stats.stackexchange.com/questions/38870",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2516/"
]
} |
38,962 | I understand that the Jeffreys prior is invariant under re-parameterization. However, what I don't understand is why this property is desired. Why wouldn't you want the prior to change under a change of variables? | Suppose that you and a friend are analyzing the same set of data using a normal model. You adopt the usual parameterization of the normal model using the mean and the variance as parameters, but your friend prefers to parameterize the normal model with the coefficient of variation and the precision as parameters (which is perfectly "legal"). If both of you use Jeffreys' priors, your posterior distribution will be your friend's posterior distribution properly transformed (don't forget that Jacobian) from his parameterization to yours. It is in this sense that the Jeffreys' prior is "invariant" (By the way, "invariant" is a horrible word; what we really mean is that it is "covariant" in the same sense of tensor calculus/differential geometry, but, of course, this term already has a well established probabilistic meaning, so we can't use it.) Why is this consistency property desired? Because, if Jeffreys' prior has any chance of representing ignorance about the value of the parameters in an absolute sense (actually, it doesn't, but for other reasons not related to "invariance"), and not ignorance relatively to a particular parameterization of the model, it must be the case that, no matter which parameterizations we arbitrarily choose to start with, our posteriors should "match" after transformation. Jeffreys himself violated this "invariance" property routinely when constructing his priors. | {
"source": [
"https://stats.stackexchange.com/questions/38962",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14514/"
]
} |
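The "transform-and-compare" property described above can be checked symbolically for a simple model. The sketch below is my own illustration (the Bernoulli model and the use of SymPy are not part of the original answer): it compares the Jeffreys prior computed directly in the log-odds parameterization with the probability-scale Jeffreys prior pushed through the change of variables.

```python
import sympy as sp

p, phi = sp.symbols('p phi', positive=True)

# Fisher information and (unnormalised) Jeffreys prior for a Bernoulli(p) model
info_p = 1 / (p * (1 - p))
jeffreys_p = sp.sqrt(info_p)

# Reparameterise by the log-odds: phi = log(p/(1-p)), i.e. p = exp(phi)/(1+exp(phi))
p_of_phi = sp.exp(phi) / (1 + sp.exp(phi))
jacobian = sp.diff(p_of_phi, phi)

# (a) push the p-scale Jeffreys prior through the change of variables
transformed = sp.simplify(jeffreys_p.subs(p, p_of_phi) * jacobian)
# (b) compute the Jeffreys prior directly on the phi scale
info_phi = sp.simplify(info_p.subs(p, p_of_phi) * jacobian**2)
direct = sp.simplify(sp.sqrt(info_phi))

print(transformed, direct, sep="\n")
# Numerical spot check that the two expressions agree
for val in (0.5, 1.0, 3.0):
    print(float(transformed.subs(phi, val)), float(direct.subs(phi, val)))
```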
39,002 | Apart from some unique circumstances where we absolutely must understand the conditional mean relationship, what are the situations where a researcher should pick OLS over Quantile Regression? I don't want the answer to be "if there is no use in understanding the tail relationships", as we could just use median regression as the OLS substitute. | If you are interested in the mean, use OLS; if in the median, use quantile regression. One big difference is that the mean is more affected by outliers and other extreme data. Sometimes, that is what you want. One example is if your dependent variable is the social capital in a neighborhood. The presence of a single person with a lot of social capital may be very important for the whole neighborhood. | {
"source": [
"https://stats.stackexchange.com/questions/39002",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
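The "single person with a lot of social capital" point is exactly the kind of situation where mean and median regression diverge. Below is a small statsmodels sketch with made-up data and one deliberately extreme observation; QuantReg with q=0.5 is median regression. The data-generating values are arbitrary assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2 + 0.5 * x + rng.normal(0, 1, n)
x[0], y[0] = 9.5, 200.0            # one extreme (but real) observation

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
med_fit = sm.QuantReg(y, X).fit(q=0.5)

print("true slope    : 0.5")
print("OLS slope     :", round(ols_fit.params[1], 2))   # pulled up by the extreme point
print("median slope  :", round(med_fit.params[1], 2))   # stays close to 0.5
```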
39,243 | I am trying to interpret the variable weights given by fitting a linear SVM. (I'm using scikit-learn ): from sklearn import svm
svm = svm.SVC(kernel='linear')
svm.fit(features, labels)
svm.coef_ I cannot find anything in the documentation that specifically states how these weights are calculated or interpreted. Does the sign of the weight have anything to do with class? | For a general kernel it is difficult to interpret the SVM weights; however, for the linear SVM there actually is a useful interpretation: 1) Recall that in linear SVM, the result is a hyperplane that separates the classes as best as possible. The weights represent this hyperplane, by giving you the coordinates of a vector which is orthogonal to the hyperplane - these are the coefficients given by svm.coef_. Let's call this vector w. 2) What can we do with this vector? Its direction gives us the predicted class, so if you take the dot product of any point with the vector, you can tell on which side it is: if the dot product is positive, it belongs to the positive class; if it is negative, it belongs to the negative class. 3) Finally, you can even learn something about the importance of each feature. This is my own interpretation so convince yourself first. Let's say the SVM found only one feature useful for separating the data; then the hyperplane would be orthogonal to that axis. So, you could say that the absolute size of the coefficient relative to the other ones gives an indication of how important the feature was for the separation. For example, if only the first coordinate is used for separation, w will be of the form (x,0) where x is some nonzero number (|x|>0). | {
"source": [
"https://stats.stackexchange.com/questions/39243",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12576/"
]
} |
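Point 3 above is easy to verify on synthetic data in which only the first feature separates the classes: the fitted weight vector should then be approximately of the form (x, 0). The sketch below mirrors the scikit-learn snippet from the question; the data are made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200

# Only feature 0 carries class information; feature 1 is pure noise.
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)
X[:, 0] += np.where(y == 1, 1.0, -1.0)      # widen the margin along feature 0

clf = SVC(kernel='linear').fit(X, y)
w = clf.coef_[0]
print("weights  :", np.round(w, 3))          # |w[0]| large, w[1] near 0
print("intercept:", np.round(clf.intercept_, 3))

# The sign of the decision function gives the predicted side of the hyperplane
print("decision value for a clearly positive point:",
      clf.decision_function([[3.0, 0.0]]))
```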
40,384 | I'm looking for a way to generate random numbers that appear to be uniform distributed -- and every test will show them to be uniform -- except that they are more evenly distributed than true uniform data . The problem I have with the "true" uniform randoms is that they will occasionally cluster. This effect is stronger at a low sample size. Roughly said: when I draw two Uniform randoms in U[0;1], chances are around 10% that they are within a range of 0.1, and 1% that they are within 0.01. So I'm looking for a good way to generate random numbers that are more evenly distributed than uniform randoms . Use case example: say I'm doing a computer game, and I want to place treasure randomly on a map (not caring about any other thing). I don't want the treasure to be all in one place, it should be all over the map. With uniform randoms, if I place, say, 10 objects, the chances are not that low that there are 5 or so really close to each other. This may give one player an advantage over another. Think of minesweeper, chances (albeit low, if there are enough mines) are that you are really lucky and win with a single click. A very naive approach for my problem is to divide the data into a grid. As long as the number is large enough (and has factors), one can enforce extra uniformness this way. So instead of drawing 12 random variables from U[0;1], I can draw 6 from U[0;.5] and 6 from U[0.5;1], or 4 from U[0;1/3] + 4 from U[1/3;2/3] + 4 from U[2/3; 1]. Is there any better way to get this extra evenness into the uniform? It probably only works for batch randoms (when drawing a single random, I obviously have to consider the whole range). In particular, I can shuffle the records again afterwards (so it's not the first four from the first third). How about doing it incrementally? So the first is on U[0;1], then two from each halves, one from each third, one from each fourth? Has this been investigated, and how good is it?
I might have to be careful to use different generators for x and y to not get them correlated (the first xy would always be in the bottom half, the second in the left half and bottom third, the third in the center third and top third...), so at least some random bin permutation is also needed. And in the long run, it will be too even, I guess. As a side note, is there a well-known test of whether some distribution is too evenly distributed to be truly uniform? So testing "true uniform" vs. "someone messed with the data and distributed the items more evenly". If I recall correctly, the Hopkins statistic can measure this, but can it be used for testing, too? Also, somewhat of an inverse K-S test: if the largest deviation is below a certain expected threshold, the data is too evenly distributed? | Yes, there are many ways to produce a sequence of numbers that are more evenly distributed than random uniforms. In fact, there is a whole field dedicated to this question; it is the backbone of quasi-Monte Carlo (QMC). Below is a brief tour of the absolute basics. Measuring uniformity: There are many ways to do this, but the most common way has a strong, intuitive, geometric flavor. Suppose we are concerned with generating $n$ points $x_1,x_2,\ldots,x_n$ in $[0,1]^d$ for some positive integer $d$. Define
$$\newcommand{\I}{\mathbf 1}
D_n := \sup_{R \in \mathcal R}\,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>,
$$
where $R$ is a rectangle $[a_1, b_1] \times \cdots \times [a_d, b_d]$ in $[0,1]^d$ such that $0 \leq a_i \leq b_i \leq 1$ and $\mathcal R$ is the set of all such rectangles. The first term inside the modulus is the "observed" proportion of points inside $R$ and the second term is the volume of $R$, $\mathrm{vol}(R) = \prod_i (b_i - a_i)$. The quantity $D_n$ is often called the discrepancy or extreme discrepancy of the set of points $(x_i)$. Intuitively, we find the "worst" rectangle $R$ where the proportion of points deviates the most from what we would expect under perfect uniformity. This is unwieldy in practice and difficult to compute. For the most part, people prefer to work with the star discrepancy ,
$$
D_n^\star = \sup_{R \in \mathcal A} \,\left|\frac{1}{n}\sum_{i=1}^n \I_{(x_i \in R)} - \mathrm{vol}(R)\right| \>.
$$
The only difference is the set $\mathcal A$ over which the supremum is taken. It is the set of anchored rectangles (at the origin), i.e., where $a_1 = a_2 = \cdots = a_d = 0$. Lemma : $D_n^\star \leq D_n \leq 2^d D_n^\star$ for all $n$, $d$. Proof . The left hand bound is obvious since $\mathcal A \subset \mathcal R$. The right-hand bound follows because every $R \in \mathcal R$ can be composed via unions, intersections and complements of no more than $2^d$ anchored rectangles (i.e., in $\mathcal A$). Thus, we see that $D_n$ and $D_n^\star$ are equivalent in the sense that if one is small as $n$ grows, the other will be too. Here is a (cartoon) picture showing candidate rectangles for each discrepancy. Examples of "good" sequences Sequences with verifiably low star discrepancy $D_n^\star$ are often called, unsurprisingly, low discrepancy sequences . van der Corput . This is perhaps the simplest example. For $d=1$, the van der Corput sequences are formed by expanding the integer $i$ in binary and then "reflecting the digits" around the decimal point. More formally, this is done with the radical inverse function in base $b$,
$$\newcommand{\rinv}{\phi}
\rinv_b(i) = \sum_{k=0}^\infty a_k b^{-k-1} \>,
$$
where $i = \sum_{k=0}^\infty a_k b^k$ and $a_k$ are the digits in the base $b$ expansion of $i$. This function forms the basis for many other sequences as well. For example, $41$ in binary is $101001$ and so $a_0 = 1$, $a_1 = 0$, $a_2 = 0$, $a_3 = 1$, $a_4 = 0$ and $a_5 = 1$. Hence, the 41st point in the van der Corput sequence is $x_{41} = \rinv_2(41) = 0.100101\,\text{(base 2)} = 37/64$. Note that because the least significant bit of $i$ oscillates between $0$ and $1$, the points $x_i$ for odd $i$ are in $[1/2,1)$, whereas the points $x_i$ for even $i$ are in $(0,1/2)$. Halton sequences . Among the most popular of classical low-discrepancy sequences, these are extensions of the van der Corput sequence to multiple dimensions. Let $p_j$ be the $j$th smallest prime. Then, the $i$th point $x_i$ of the $d$-dimensional Halton sequence is
$$
x_i = (\rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_d}(i)) \>.
$$
For low $d$ these work quite well, but have problems in higher dimensions . Halton sequences satisfy $D_n^\star = O(n^{-1} (\log n)^d)$. They are also nice because they are extensible in that the construction of the points does not depend on an a priori choice of the length of the sequence $n$. Hammersley sequences . This is a very simple modification of the Halton sequence. We instead use
$$
x_i = (i/n, \rinv_{p_1}(i), \rinv_{p_2}(i),\ldots,\rinv_{p_{d-1}}(i)) \>.
$$
Perhaps surprisingly, the advantage is that they have better star discrepancy $D_n^\star = O(n^{-1}(\log n)^{d-1})$. Here is an example of the Halton and Hammersley sequences in two dimensions. Faure-permuted Halton sequences . A special set of permutations (fixed as a function of $i$) can be applied to the digit expansion $a_k$ for each $i$ when producing the Halton sequence. This helps remedy (to some degree) the problems alluded to in higher dimensions. Each of the permutations has the interesting property of keeping $0$ and $b-1$ as fixed points. Lattice rules . Let $\beta_1, \ldots, \beta_{d-1}$ be integers. Take
$$
x_i = (i/n, \{i \beta_1 / n\}, \ldots, \{i \beta_{d-1}/n\}) \>,
$$
where $\{y\}$ denotes the fractional part of $y$. Judicious choice of the $\beta$ values yields good uniformity properties. Poor choices can lead to bad sequences. They are also not extensible. Here are two examples. $(t,m,s)$ nets . $(t,m,s)$ nets in base $b$ are sets of points such that every rectangle of volume $b^{t-m}$ in $[0,1]^s$ contains $b^t$ points. This is a strong form of uniformity. Small $t$ is your friend, in this case. Halton, Sobol' and Faure sequences are examples of $(t,m,s)$ nets. These lend themselves nicely to randomization via scrambling. Random scrambling (done right) of a $(t,m,s)$ net yields another $(t,m,s)$ net. The MinT project keeps a collection of such sequences. Simple randomization: Cranley-Patterson rotations . Let $x_i \in [0,1]^d$ be a sequence of points. Let $U \sim \mathcal U(0,1)$. Then the points $\hat x_i = \{x_i + U\}$ are uniformly distributed in $[0,1]^d$. Here is an example with the blue dots being the original points and the red dots being the rotated ones with lines connecting them (and shown wrapped around, where appropriate). Completely uniformly distributed sequences . This is an even stronger notion of uniformity that sometimes comes into play. Let $(u_i)$ be the sequence of points in $[0,1]$ and now form overlapping blocks of size $d$ to get the sequence $(x_i)$. So, if $s = 3$, we take $x_1 = (u_1,u_2,u_3)$ then $x_2 = (u_2,u_3,u_4)$, etc. If, for every $s \geq 1$, $D_n^\star(x_1,\ldots,x_n) \to 0$, then $(u_i)$ is said to be completely uniformly distributed . In other words, the sequence yields a set of points of any dimension that have desirable $D_n^\star$ properties. As an example, the van der Corput sequence is not completely uniformly distributed since for $s = 2$, the points $x_{2i}$ are in the square $(0,1/2) \times [1/2,1)$ and the points $x_{2i-1}$ are in $[1/2,1) \times (0,1/2)$. Hence there are no points in the square $(0,1/2) \times (0,1/2)$ which implies that for $s=2$, $D_n^\star \geq 1/4$ for all $n$. Standard references The Niederreiter (1992) monograph and the Fang and Wang (1994) text are places to go for further exploration. | {
"source": [
"https://stats.stackexchange.com/questions/40384",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7828/"
]
} |
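Several of the constructions described in this answer (Halton, Sobol', scrambling, and discrepancy measures) are available in scipy.stats.qmc, so the "more even than uniform" effect is easy to see directly. The comparison below of discrepancies for pseudorandom versus Halton points is a sketch and assumes a reasonably recent SciPy (1.7 or later); the point count and dimension are arbitrary.

```python
import numpy as np
from scipy.stats import qmc

n, d = 256, 2
rng = np.random.default_rng(0)

pseudo = rng.uniform(size=(n, d))                            # ordinary uniform randoms
halton = qmc.Halton(d=d, scramble=True, seed=0).random(n)    # low-discrepancy points

for name, pts in [("pseudorandom", pseudo), ("Halton", halton)]:
    disc = qmc.discrepancy(pts, method="L2-star")
    print(f"{name:12s} L2-star discrepancy: {disc:.2e}")

# The Halton set should show a noticeably smaller discrepancy, i.e. it fills the
# unit square more evenly -- useful for the "spread treasure on a map" use case.
```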
40,454 | I have a database table of data transfers between different nodes. This is a huge database (with nearly 40 million transfers). One of the attributes is the number of bytes (nbytes) transferred, which ranges from 0 bytes to 2 terabytes. I would like to cluster the nbytes such that given k clusters some x1 transfers belong to cluster k1, x2 transfers to k2, etc. From the terminology that I used you might have guessed what I was going with: K-means. This is 1d data since nbytes is the only feature I care about. When I was searching for different methods to this I saw that EM was mentioned a couple of times along with a non-clustering approach. I would like to know about your views on how to approach this problem (specifically whether to cluster or not to cluster). Thanks! | For one-dimensional data, don't use cluster analysis. Cluster analysis is usually a multivariate technique. Or let me better put it the other way around: for one-dimensional data -- which is completely ordered -- there are much better techniques. Using k-means and similar techniques here is a total waste, unless you put in enough effort to actually optimize them for the 1-d case. Just to give you an example: for k-means it is common to use k random objects as initial seeds. For one-dimensional data, it's fairly easy to do better by just using the appropriate quantiles (1/2k, 3/2k, 5/2k etc.), after sorting the data once, and then optimize from this starting point. However, 2D data cannot be sorted completely. And in a grid, there likely will be empty cells. I also wouldn't call them clusters; I would call them intervals. What you really want to do is to optimize the interval borders. If you do k-means, it will test for each object whether it should be moved to another cluster. That does not make sense in 1D: only the objects at the interval borders need to be checked. That obviously is much faster, as there are only ~2k objects there. If they do not already prefer other intervals, more central objects will not either. You may want to look into techniques such as Jenks Natural Breaks optimization, for example. Or you can do a kernel density estimation and look for local minima of the density to split there. The nice thing is that you do not need to specify k for this! See this answer for an example of how to do this in Python (green markers are the cluster modes; red markers are points where the data is cut; the y axis is a log-likelihood of the density). P.S. Please use the search function. Here are some questions on 1-d data clustering that you missed: Clustering 1D data https://stackoverflow.com/questions/7869609/cluster-one-dimensional-data-optimally https://stackoverflow.com/questions/11513484/1d-number-array-clustering | {
"source": [
"https://stats.stackexchange.com/questions/40454",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/15957/"
]
} |
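The "cut at local minima of a kernel density estimate" idea from this answer can be sketched in a few lines of Python. Everything below -- the synthetic three-group data, the default bandwidth, and the grid size -- is an arbitrary illustration; for raw transfer sizes you would more plausibly work on log(bytes) first, and the KDE/argrelmin combination is just one possible toolchain.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelmin

rng = np.random.default_rng(0)
# Synthetic 1-d data with three groups (think of log10 of transfer sizes)
x = np.concatenate([rng.normal(2, 0.3, 500),
                    rng.normal(5, 0.5, 300),
                    rng.normal(9, 0.8, 200)])

kde = gaussian_kde(x)
grid = np.linspace(x.min(), x.max(), 1000)
density = kde(grid)

# Cut points = local minima of the estimated density
cuts = grid[argrelmin(density)[0]]
print("cut points:", np.round(cuts, 2))

# Assign each value to the interval between consecutive cut points
labels = np.searchsorted(cuts, x)
print("interval sizes:", np.bincount(labels))
```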
40,769 | I have found discordant information on the question: " If one constructs a 95% confidence interval (CI) of a difference in means or a difference in proportions, are all values within the CI equally likely? Or, is the point estimate the most likely, with values near the "tails" of the CI less likely than those in the middle of the CI? For instance, if a randomized clinical trial report states that the relative risk of mortality with a particular treatment is 1.06 (95% CI 0.96 to 1.18), is the likelihood of 0.96 being the correct value the same as 1.06? I found many references to this concept online, but the following two examples reflect the uncertainty therein: Lisa Sullivan's module about Confidence Intervals states: The confidence intervals for the difference in means provide a range of likely values for ($μ_1-μ_2$). It is important to note that all values in the confidence interval are equally likely estimates of the true value of ($μ_1-μ_2$). This blogpost, titled Within the Margin of Error , states: What I have in mind is misunderstanding about “margin of error” that treats all points within the confidence interval as equally likely, as if the central limit theorem implied a bounded uniform distribution instead of a t distribution. [...] The thing that talk about “margin of error” misses is that possibilities that are close to the point estimate are much more likely than possibilities that are at the edge of the margin". These seem contradictory, so which is correct? | One question that needs to be answered is what does "likely" mean in this context? If it means probability (as it is sometimes used as a synonym of) and we are using strict frequentist definitions then the true parameter value is a single value that does not change, so the probability (likelihood) of that point is 100% and all other values are 0%. So almost all are equally likely at 0%, but if the interval contains the true value, then it is different from the others. If we use a Bayesian approach then the CI (Credible Interval) comes from the posterior distribution and you can compare the likelihood at the different points within the interval. Unless the posterior is perfectly uniform within the interval (theoretically possible I guess, but that would be a strange circumstance) then the values have different likelihoods. If we use likely to be similar to confidence then think about it this way: Compute a 95% confidence interval, a 90% confidence interval, and an 85% confidence interval. We would be 5% confident that the true value lies in the region inside of the 95% interval but outside of the 90% interval, we could say that the true value is 5% likely to fall in that region. The same is true for the region that is inside the 90% interval but outside the 85% interval. So if every value is equally likely, then the size of the above 2 regions would need to be exactly the same and the same would hold true for the region inside a 10% confidence interval but outside a 5% confidence interval. None of the standard distributions that intervals are constructed using have this property (except special cases with 1 draw from a uniform). You could further prove this to yourself by simulating a large number of datasets from known populations, computing the confidence interval of interest, then comparing how often the true parameter is closer to the point estimate than to each of the end points. | {
"source": [
"https://stats.stackexchange.com/questions/40769",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/561/"
]
} |
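The simulation suggested in the last sentence of this answer is short to write. The sketch below repeatedly draws normal samples, forms the usual t-based 95% CI for the mean, and tabulates whether the true mean lands in the middle fifth or in the outer two fifths of its interval; the sample size, true parameters, and number of repetitions are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, n, reps = 10.0, 30, 20_000
middle, outer = 0, 0

for _ in range(reps):
    x = rng.normal(true_mean, 3.0, size=n)
    se = x.std(ddof=1) / np.sqrt(n)
    half = stats.t.ppf(0.975, df=n - 1) * se
    lo, hi = x.mean() - half, x.mean() + half
    if lo < true_mean < hi:
        # position of the true mean inside its interval, scaled to [0, 1]
        pos = (true_mean - lo) / (hi - lo)
        if 0.4 < pos < 0.6:
            middle += 1
        elif pos < 0.2 or pos > 0.8:
            outer += 1

print("true mean in the middle fifth of its CI:", middle)
print("true mean in the outer two fifths      :", outer)
# If every value inside the interval were "equally likely", the middle count
# would be about half the outer count (one fifth vs two fifths of the width);
# instead the middle fifth captures the true mean far more often.
```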
40,808 | I'm starting to want to advance my own skillset and I've always been fascinated by machine learning. However, six years ago instead of pursuing this I decided to take a completely unrelated degree to computer science. I have been developing software and applications for about 8-10 years now, so I have a good handle but I just can't seem to penetrate the maths side of machine learning/probabilities/statistics. I start looking at learning material and on the first page it might include something which confuses me and immediately sets up a barrier in my learning. Is a strong background in maths a total requisite for ML? Should I try and fill in the blanks of my maths before continuing with ML? Can self learning really work for just a developer without any hard computer science background? Related question: Book for reading before Elements of Statistical Learning? | Stanford (Ng) and Caltech (Abu-Mostafa) have put machine learning classes on YouTube. You don't get to see the assignments, but the lectures don't rely on those. I recommend trying to watch those first, as those will help you to find out what math you need to learn. I believe a very similar class with assignments is taught by Andrew Ng on Coursera, which Ng helped to create. One exception: If I recall correctly, early in the Stanford lectures, Ng does some calculations involving derivatives of traces of products of matrices. Those are rather isolated, so don't worry if you don't follow those calculations. I don't even know what course would cover those. You do want to have some familiarity with probability, linear algebra, linear programming, and multivariable calculus. However, you need a lot less than what is contained in many complete college classes on those subjects. | {
"source": [
"https://stats.stackexchange.com/questions/40808",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16101/"
]
} |
40,815 | I am aware of the frequentist and bayesian interpretations of statistics. I prefer Bayesian because I think it's closer to how people think, and because we in practice often can't rerun a trial a million times to estimate the probability. But are there any other interpretations besides those two? | | {
"source": [
"https://stats.stackexchange.com/questions/40815",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12615/"
]
} |
40,876 | What's the difference between terms 'link function' and 'canonical link function'? Also, are there any (theoretical) advantages of using one over the other? For example, a binary response variable can be modeled using many link functions such as logit , probit , etc. But, logit here is considered the "canonical" link function. | The above answers are more intuitive, so I try more rigor. What is a GLM? Let $Y=(y,\mathbf{x})$ denote a set of a response $y$ and $p$ -dimensional covariate vector $\mathbf{x}=(x_1,\dots,x_p)$ with expected value $E(y)=\mu$ . For $i=1,\dots,n$ independent observations, the distribution of each $y_i$ is an exponential family with density $$
f(y_i;\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-\gamma(\theta_i)}{\phi}+\tau(y_i,\phi)\right) = \alpha(y_i, \phi)\exp\left(\frac{y_i\theta_i-\gamma(\theta_i)}{\phi}\right)
$$ Here, the parameter of interest (natural or canonical parameter) is $\theta_i$ , $\phi$ is a scale parameter (known or seen as a nuisance) and $\gamma$ and $\tau$ are known functions. The $n$ -dimensional vectors of fixed input values for the $p$ explanatory variables are denoted by $\mathbf{x}_1,\dots,\mathbf{x}_p$ . We assume that the input vectors influence the density above only via a linear function, the linear predictor, $$
\eta_i=\beta_0+\beta_1x_{i1}+\dots+\beta_px_{ip}
$$ upon which $\theta_i$ depends. As it can be shown that $\theta=(\gamma')^{-1}(\mu)$ , this dependency is established by connecting the linear predictor $\eta$ and $\theta$ via the mean. More specifically, the mean $\mu$ is seen as an invertible and smooth function of the linear predictor, i.e. $$
g(\mu)=\eta\ \textrm{or}\ \mu=g^{-1}(\eta)
$$ Now to answer your question: The function $g(\cdot)$ is called the link function. If the function connects $\mu$ , $\eta$ and $\theta$ such that $\eta \equiv\theta$ , then this link is called canonical and has the form $g=(\gamma')^{-1}$ . That's it. Then there are a number of desirable statistical properties of using the canonical link, e.g., the sufficient statistic is $X'y$ with
components $\sum_i x_{ij} y_i$ for $j = 1, \dots, p$ , the Newton Method and Fisher scoring for finding the ML estimator coincide, these links simplify the derivation of the MLE, they ensure that some properties of linear regression (e.g., the sum of the residuals is 0) hold up or they ensure that $\mu$ stays within the range of the outcome variable. Hence they tend to be used by default. Note however, that there is no a priori reason why the effects in the model should be additive on the scale given by this or any other link. | {
"source": [
"https://stats.stackexchange.com/questions/40876",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12786/"
]
} |
41,145 | We need an early warning system. I am dealing with a server that is known to have performance issues under load. Errors are recorded in a database along with a timestamp. There are some manual intervention steps that can be taken to decrease the server load, but only if someone is aware of the issue... Given a set of times that errors occurred, how can I identify the beginning of a spike in errors (in real time)? We can calculate periodically or on each error occurrence. We are unconcerned about occasional errors, but don't have a specific threshold. I could just notify someone any time we get, say, three errors in five minutes, but I'm sure there's a better way... I'd like to be able to adjust the sensitivity of the algorithm based on feedback from the sysadmins. For now, they'd like it to be fairly sensitive, even though we know we can expect some false positives. I am not a statistician, which I'm sure is obvious, and implementing this needs to be relatively simple with our existing tools: SQL Server and old-school ASP JScript. I'm not looking for an answer in code, but if it requires additional software, it probably won't work for us (though I welcome impractical but ideal solutions as a comment, for my own curiosity). | It has been 5 months since you asked this question, and hopefully you figured something out. I'm going to make a few different suggestions here, hoping that you find some use for them in other scenarios. For your use-case I don't think you need to look at spike-detection algorithms. So here goes:
Let's start with a picture of the errors occurring on a timeline: What you want is a numerical indicator, a "measure" of how fast the errors are coming. And this measure should be amenable to thresholding - your sysadmins should be able to set limits which control with what sensitivity errors turn into warnings. Measure 1 You mentioned "spikes", the easiest way to get a spike is to draw a histogram over every 20-minute interval: Your sysadmins would set the sensitivity based on the heights of the bars i.e. the most errors tolerable in a 20-minute interval. (At this point you may be wondering if that 20-minute window length can't be adjusted. It can, and you can think of the window length as defining the word together in the phrase errors appearing together .) What's the problem with this method for your particular scenario? Well, your variable is an integer, probably less than 3. You wouldn't set your threshold to 1, since that just means "every error is a warning" which doesn't require an algorithm. So your choices for the threshold are going to be 2 and 3. This doesn't give your sysadmins a whole lot of fine-grained control. Measure 2 Instead of counting errors in a time window, keep track of the number of minutes between the current and last errors. When this value gets too small, it means your errors are getting too frequent and you need to raise a warning. Your sysadmins will probably set the limit at 10 (i.e. if errors are happening less than 10 minutes apart, it's a problem) or 20 minutes. Maybe 30 minutes for a less mission-critical system. This measure provides more flexibility. Unlike Measure 1, for which there was a small set of values you could work with, now you have a measure which provides a good 20-30 values. Your sysadmins will therefore have more scope for fine-tuning. Friendly Advice There is another way to approach this problem. Rather than looking at the error frequencies, it may be possible to predict the errors before they occur. You mentioned that this behavior was occurring on a single server, which is known to have performance issues. You could monitor certain Key Performance Indicators on that machine, and have them tell you when an error is going to happen. Specifically, you would look at CPU usage, Memory usage, and KPIs relating to Disk I/O. If your CPU usage crosses 80%, the system's going to slow down. (I know you said you didn't want to install any software, and it's true that you could do this using PerfMon. But there are free tools out there which will do this for you, like Nagios and Zenoss .) And for people who came here hoping to find something about spike detection in a time-series: Spike Detection in a Time-Series The simplest thing you should start by doing is to compute a moving average of your input values. If your series is $x_1, x_2,...$, then you would compute a moving average after each observation as: $M_k = (1 - \alpha) M_{k-1} + \alpha x_k$ where the $\alpha$ would determine how much weight give the latest value of $x_k$. If your new value has moved too far away from the moving average, for example $\frac{x_k - M_k}{M_k} > 20\%$ then you raise a warning. Moving averages are nice when working with real-time data. But suppose you already have a bunch of data in a table, and you just want to run SQL queries against it to find the spikes. 
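Before that, here is a minimal sketch of the moving-average check above, written in R for concreteness (the simulated error counts, the smoothing weight and the 20% threshold are only placeholders to make it runnable):
# Flag intervals whose error count jumps well above the running (exponentially weighted) average.
ewma_alerts <- function(x, alpha = 0.3, threshold = 0.2) {
  m <- x[1]                         # initialize the moving average at the first observation
  alerts <- logical(length(x))
  for (k in 2:length(x)) {
    m <- (1 - alpha) * m + alpha * x[k]                   # M_k = (1 - alpha) * M_{k-1} + alpha * x_k
    alerts[k] <- (m > 0) && ((x[k] - m) / m > threshold)  # relative jump above the moving average
  }
  alerts
}
set.seed(1)
errs <- c(rpois(50, 1), rpois(10, 8))   # a quiet period followed by a burst of errors
which(ewma_alerts(errs))                # intervals that would trigger a warning
Only the previous value of the average has to be stored, so the same update is easy to reproduce in SQL or JScript.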
I would suggest: Compute the mean value of your time-series Compute the standard deviation $\sigma$ Isolate those values which are more than $2\sigma$ above the mean (you may need to adjust that factor of "2") More fun stuff about time series Many real-world time-series exhibit cyclic behavior. There is a model called ARIMA which helps you extract these cycles from your time-series. Moving averages which take into account cyclic behavior: Holt and Winters | {
"source": [
"https://stats.stackexchange.com/questions/41145",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16227/"
]
} |
41,208 | The situation Some researchers would like to put you to sleep. Depending on the secret toss of a fair coin, they will briefly awaken you either once (Heads) or twice (Tails). After each waking, they will put you back to sleep with a drug that makes you forget that awakening. When you are awakened, to what degree should you believe that the outcome of the coin toss was Heads? (OK, maybe you don’t want to be the subject of this experiment! Suppose instead that Sleeping Beauty (SB) agrees to it (with the full approval of the Magic Kingdom’s Institutional Review Board, of course). She’s about to go to sleep for one hundred years, so what are one or two more days, anyway?) [Detail of a Maxfield Parrish illustration.] Are you a Halfer or a Thirder? The Halfer position. Simple! The coin is fair--and SB knows it--so she should believe there's a one-half chance of heads. The Thirder position. Were this experiment to be repeated many times, then the coin will be heads only one third of the time SB is awakened. Her probability for heads will be one third. Thirders have a problem Most, but not all, people who have written about this are thirders. But: On Sunday evening, just before SB falls asleep, she must believe the chance of heads is one-half: that’s what it means to be a fair coin. Whenever SB awakens, she has learned absolutely nothing she did not know Sunday night. What rational argument can she give, then, for stating that her belief in heads is now one-third and not one-half? Some attempted explanations SB would necessarily lose money if she were to bet on heads with any odds other than 1/3. (Vineberg, inter alios ) One-half really is correct: just use the Everettian “many-worlds” interpretation of Quantum Mechanics! (Lewis). SB updates her belief based on self-perception of her “temporal location” in the world. (Elga, i.a. ) SB is confused: “[It] seems more plausible to say that her epistemic state upon waking up should not include a definite degree of belief in heads. … The real issue is how one deals with known, unavoidable, cognitive malfunction.” [Arntzenius] The question Accounting for what has already been written on this subject (see the references as well as a previous post ), how can this paradox be resolved in a statistically rigorous way? Is this even possible? References Arntzenius, Frank (2002). Reflections on Sleeping Beauty Analysis 62.1 pp 53-62. Bradley, DJ (2010). Confirmation in a Branching World: The Everett Interpretation and Sleeping Beauty . Brit. J. Phil. Sci. 0 (2010), 1–21. Elga, Adam (2000). Self-locating belief and the Sleeping Beauty Problem. Analysis 60 pp 143-7. Franceschi, Paul (2005). Sleeping Beauty and the Problem of World Reduction . Preprint. Groisman, Berry (2007). The end of Sleeping Beauty’s nightmare . Preprint. Lewis, D (2001). Sleeping Beauty: reply to Elga . Analysis 61.3 pp 171-6. Papineau, David and Victor Dura-Vila (2008). A Thirder and an Everettian: a reply to Lewis’s ‘Quantum Sleeping Beauty’ . Pust, Joel (2008). Horgan on Sleeping Beauty . Synthese 160 pp 97-101. Vineberg, Susan (undated, perhaps 2003). Beauty’s Cautionary Tale . | Strategy I would like to apply rational decision theory to the analysis, because that is one well-established way to attain rigor in solving a statistical decision problem. In trying to do so, one difficulty emerges as special: the alteration of SB’s consciousness. Rational decision theory has no mechanism to handle altered mental states. 
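Before the formal argument, here is a quick numerical sanity check, in R, of the long-run frequency to which the Thirder position appeals: a minimal simulation that just records the coin face at every awakening over an arbitrary number of runs.
# One awakening per Heads run, two awakenings per Tails run.
set.seed(2024)
n_runs <- 1e5
heads  <- runif(n_runs) < 0.5
awakenings_heads <- sum(heads)                    # awakenings at which the coin shows Heads
awakenings_total <- sum(heads) + 2 * sum(!heads)  # all awakenings
awakenings_heads / awakenings_total               # close to 1/3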
In asking SB for her credence in the coin flip, we are simultaneously treating her in a somewhat self-referential manner both as subject (of the SB experiment) and experimenter (concerning the coin flip). Let’s alter the experiment in an inessential way: instead of administering the memory-erasure drug, prepare a stable of Sleeping Beauty clones just before the experiment begins. (This is the key idea, because it helps us resist distracting--but ultimately irrelevant and misleading--philosophical issues.) The clones are like her in all respects, including memory and thought. SB is fully aware this will happen. We can clone, in principle. E. T. Jaynes replaces the question "how can we build a mathematical model of human common sense"--something we need in order to think through the Sleeping Beauty problem--by "How could we build a machine which would carry out useful plausible reasoning, following clearly defined principles expressing an idealized common sense?" Thus, if you like, replace SB by Jaynes' thinking robot, and clone that. (There have been, and still are, controversies about "thinking" machines. "They will never make a machine to replace the human mind—it does many things which no machine could ever do." You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!” --J. von Neumann, 1948. Quoted by E. T. Jaynes in Probability Theory: The Logic of Science , p. 4.) --Rube Goldberg The Sleeping Beauty experiment restated Prepare $n \ge 2$ identical copies of SB (including SB herself) on Sunday evening. They all go to sleep at the same time, potentially for 100 years. Whenever you need to awaken SB during the experiment, randomly select a clone who has not yet been awakened. Any awakenings will occur on Monday and, if needed, on Tuesday. I claim that this version of the experiment creates exactly the same set of possible results, right down to SB's mental states and awareness, with exactly the same probabilities. This potentially is one key point where philosophers might choose to attack my solution. I claim it's the last point at which they can attack it, because the remaining analysis is routine and rigorous. Now we apply the usual statistical machinery. Let's begin with the sample space (of possible experimental outcomes). Let $M$ mean "awakens Monday" and $T$ mean "awakens Tuesday." Similarly, let $h$ mean "heads" and $t$ mean "tails". Subscript the clones with integers $1, 2, \ldots, n$ . Then the possible experimental outcomes can be written (in what I hope is a transparent, self-evident notation) as the set $$\eqalign{
\{&hM_1, hM_2, \ldots, hM_n, \\
&(tM_1, tT_2), (tM_1, tT_3), \ldots, (tM_1, tT_n), \\
&(tM_2, tT_1), (tM_2, tT_3), \ldots, (tM_2, tT_n), \\
&\cdots, \\
&(tM_n, tT_1), (tM_n, tT_2), \ldots, (tM_n, tT_{n-1}) & \}.
}$$ Monday probabilities As one of the SB clones, you figure your chance of being awakened on Monday during a heads-up experiment is ( $1/2$ chance of heads) times ( $1/n$ chance I’m picked to be the clone who is awakened). In more technical terms: The set of heads outcomes is $h = \{hM_j, j=1,2, \ldots,n\}$ . There are $n$ of them. The event where you are awakened with heads is $h(i) = \{hM_i\}$ . The chance of any particular SB clone $i$ being awakened with the coin showing heads equals $$\Pr[h(i)] = \Pr[h] \times \Pr[h(i)|h] = \frac{1}{2} \times
\frac{1}{n} = \frac{1}{2n}.$$ Tuesday probabilities The set of tails outcomes is $t = \{(tM_j, tT_k): j \ne k\}$ . There are $n(n-1)$ of them. All are equally likely, by design. You, clone $i$ , are awakened in $(n-1) + (n-1) = 2(n-1)$ of these cases; namely, the $n-1$ ways you can be awakened on Monday (there are $n-1$ remaining clones to be awakened Tuesday) plus the $n-1$ ways you can be awakened on Tuesday (there are $n-1$ possible Monday clones). Call this event $t(i)$ . Your chance of being awakened during a tails-up experiment equals $$\Pr[t(i)] = \Pr[t] \times P[t(i)|t] = \frac{1}{2} \times \frac{2(n-1)}{n(n-1)} = \frac{1}{n}.$$ Bayes' Theorem Now that we have come this far, Bayes' Theorem --a mathematical tautology beyond dispute--finishes the work. Any clone's chance of heads is therefore $$\Pr[h | t(i) \cup h(i)] = \frac{\Pr[h]\Pr[h(i)|h]}{\Pr[h]\Pr[h(i)|h] + \Pr[t]\Pr[t(i)|t]} = \frac{1/(2n)}{1/n + 1/(2n)} = \frac{1}{3}.$$ Because SB is indistinguishable from her clones--even to herself!--this is the answer she should give when asked for her degree of belief in heads. Interpretations The question "what is the probability of heads" has two reasonable interpretations for this experiment: it can ask for the chance a fair coin lands heads, which is $\Pr[h] = 1/2$ (the Halfer answer), or it can ask for the chance the coin lands heads, conditioned on the fact that you were the clone awakened. This is $\Pr[h|t(i) \cup h(i)] = 1/3$ (the Thirder answer). In the situation in which SB (or rather any one of a set of identically prepared Jaynes thinking machines) finds herself, this analysis--which many others have performed (but I think less convincingly, because they did not so clearly remove the philosophical distractions in the experimental descriptions)--supports the Thirder answer. The Halfer answer is correct, but uninteresting, because it is not relevant to the situation in which SB finds herself. This resolves the paradox. This solution is developed within the context of a single well-defined experimental setup. Clarifying the experiment clarifies the question. A clear question leads to a clear answer. Comments I guess that, following Elga (2000), you could legitimately characterize our conditional answer as "count[ing] your own temporal location as relevant to the truth of h," but that characterization adds no insight to the problem: it only detracts from the mathematical facts in evidence. To me it appears to be just an obscure way of asserting that the "clones" interpretation of the probability question is the correct one. This analysis suggests that the underlying philosophical issue is one of identity : What happens to the clones who are not awakened? What cognitive and noetic relationships hold among the clones?--but that discussion is not a matter of statistical analysis; it belongs on a different forum . | {
"source": [
"https://stats.stackexchange.com/questions/41208",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/919/"
]
} |
41,259 | What is the best way to compute the singular value decomposition (SVD) of a very large positive matrix (65M x 3.4M) where the data is extremely sparse? Less than 0.1% of the matrix is non-zero. I need a way that: will fit into memory (I know that online methods exist); will be computed in a reasonable time: 3-4 days; will be accurate enough, although accuracy is not my main concern and I would like to be able to control how many resources I put into it. It would be great to have a Haskell, Python, C#, etc. library which implements it. I am not using Matlab or R, but if necessary I can go with R. | If it fits into memory, construct a sparse matrix in R using the Matrix package , and try irlba for the SVD. You can specify how many singular vectors you want in the result, which is another way to limit the computation. That's a pretty big matrix, but I've had very good results with this method in the past. irlba is pretty state-of-the-art. It uses the implicitly restarted Lanczos bi-diagonalization algorithm . It can chew through the Netflix Prize dataset (480,189 rows by 17,770 columns, 100,480,507 non-zero entries) in milliseconds. Your dataset is ~ 200,000 times bigger than the Netflix dataset, so it will take significantly longer than that. It might be reasonable to expect that it could do the computation in a couple of days. | {
"source": [
"https://stats.stackexchange.com/questions/41259",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10402/"
]
} |
41,286 | PSY's music video "Gangnam style" is popular, after a little more than 2 months it has about 540 million viewers. I learned this from my preteen children at dinner last week and soon the discussion went in the direction of whether it was possible to do some kind of prediction of how many viewers there will be in 10-12 days and when(/if) the song will pass 800 million viewers or 1 billion viewers. Here is the picture of the number of viewers since it was posted: Here is the picture of the number of viewers of the No1 "Justin Bieber - Baby" and No2 "Eminem - Love the way you lie" music videos, which have both been around for a much longer time. My first attempt to reason about the model was that it should be an S-curve, but this doesn't seem to fit the No1 and No2 songs and also doesn't fit the fact that there is no limit on how many views the music video can have, only a slower growth. So my question is: what kind of model should I use to predict the number of viewers of the music video? | Aha, excellent question!! I would also have naively proposed an S-shaped logistic curve, but this is obviously a poor fit. As far as I know, the constant increase is an approximation because YouTube counts the unique views (one per IP address), so there cannot be more views than computers. We could use an epidemiological model where people have different susceptibility. To make it simple, we could divide the population into the high risk group (say the children) and the low risk group (say the adults). Let's call $x(t)$ the proportion of "infected" children and $y(t)$ the proportion of "infected" adults at time $t$. I will call $X$ the (unknown) number of individuals in the high risk group and $Y$ the (also unknown) number of individuals in the low risk group. $$\dot{x}(t) = r_1(x(t)+y(t))(X-x(t))$$
$$\dot{y}(t) = r_2(x(t)+y(t))(Y-y(t)),$$ where $r_1 > r_2$. I don't know how to solve that system (maybe @EpiGrad would), but looking at your graphs, we could make a couple of simplifying assumptions. Because the growth does not saturate, we can assume that $Y$ is very large and $y$ is small, or $$\dot{x}(t) = r_1x(t)(X-x(t))$$
$$\dot{y}(t) = r_2x(t),$$ which predicts linear growth once the high risk group is completely infected. Note that with this model there is no reason to assume $r_1 > r_2$, quite the contrary because the large term $Y-y(t)$ is now subsumed in $r_2$. This system solves to $$x(t) = X \frac{C_1e^{Xr_1t}}{1 + C_1e^{Xr_1t}}$$
$$y(t) = r_2 \int x(t)dt + C_2 = \frac{r_2}{r_1} \log(1+C_1e^{Xr_1t})+C_2,$$ where $C_1$ and $C_2$ are integration constants. The total "infected" population is then
$x(t) + y(t)$, which has 3 parameters and 2 integration constants (initial conditions). I don't know how easy it would be to fit... Update: playing around with the parameters, I could not reproduce the shape of the top curve with this model, the transition from $0$ to $600,000,000$ is always sharper than above. Continuing with the same idea, we could again assume that there are two kinds of Internet users: the "sharers" $x(t)$ and the "loners" $y(t)$. The sharers infect each other, the loners bump into the video by chance. The model is $$\dot{x}(t) = r_1x(t)(X-x(t))$$
$$\dot{y}(t) = r_2,$$ and solves to $$x(t) = X \frac{C_1e^{Xr_1t}}{1 + C_1e^{Xr_1t}}$$
$$y(t) = r_2 t+C_2.$$ We could assume that $x(0) = 1$, i.e. that there is only patient 0 at $t=0$, which yields $C_1 = \frac{1}{X-1} \approx \frac{1}{X}$ because $X$ is a large number. $C_2 = y(0)$ so we can assume that $C_2 = 0$. Now only the 3 parameters $X$, $r_1$ and $r_2$ determine the dynamics. Even with this model, it seems that the inflection is very sharp, it is not a good fit so the model must be wrong. That makes the problem very interesting actually. As an example, the figure below was built with $X = 600,000,000$, $r_1 = 3.667 \cdot 10^{-10}$ and $r_2 = 1,000,000$. Update: From the comments I gathered that Youtube counts views (in its secret way) and not unique IPs, which makes a big difference. Back to the drawing board. To keep it simple, let's assume that the viewers are "infected" by the video. They come back to watch it regularly, until they clear the infection. One of the simplest models is the SIR (Susceptible-Infected-Resistant) which is the following: $$\dot{S}(t) = -\alpha S(t)I(t)$$
$$\dot{I}(t) = \alpha S(t)I(t) - \beta I(t)$$
$$\dot{R}(t) = \beta I(t)$$ where $\alpha$ is the rate of infection and $\beta$ is the rate of clearance. The total view count $x(t)$ is such that $\dot{x}(t) = kI(t)$, where $k$ is the average views per day per infected individual. In this model, the view count starts increasing abruptly some time after the onset of the infection, which is not the case in the original data, perhaps because videos also spread in a non viral (or meme) way. I am no expert in estimating the parameters of the SIR model. Just playing with different values, here is what I came up with (in R). S0 = 1e7; a = 5e-8; b = 0.01 ; k = 1.2
views = 0; S = S0; I = 1;
# Extrapolate 1 year after the onset.
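# One forward Euler step of the SIR equations per day; k converts currently infected viewers into daily views.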
for (i in 1:365) {
dS = -a*I*S;
dI = a*I*S - b*I;
S = S+dS;
I = I+dI;
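# each currently infected viewer contributes k views per day to the cumulative count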
views[i+1] = views[i] + k*I
}
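# Top panel: the first ~95 days (roughly the observed window); bottom panel: the same curve plus the dashed one-year extrapolation.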
par(mfrow=c(2,1))
plot(views[1:95], type='l', lwd=2, ylim=c(0,6e8))
plot(views, type='n', lwd=2)
lines(views[1:95], type='l', lwd=2)
lines(96:365, views[96:365], type='l', lty=2) The model is obviously not perfect, and could be complemented in many sound ways. This very rough sketch predicts a billion views somewhere around March 2013, let's see... | {
"source": [
"https://stats.stackexchange.com/questions/41286",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13201/"
]
} |
41,289 | It seems that it is possible to get similar results to a neural network with a multivariate linear regression in some cases, and multivariate linear regression is super fast and easy. Under what circumstances can neural networks give better results than multivariate linear regression? | Neural networks can in principle model nonlinearities automatically (see the universal approximation theorem ), which you would need to explicitly model using transformations (splines etc.) in linear regression. The caveat: the temptation to overfit can be (even) stronger in neural networks than in regression, since adding hidden layers or neurons looks harmless. So be extra careful to look at out-of-sample prediction performance. | {
"source": [
"https://stats.stackexchange.com/questions/41289",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10278/"
]
} |
41,306 | What is the meaning of the tilde when specifying probability distributions? For example: $$Z \sim \mbox{Normal}(0,1).$$ | The ~ (tilde) used in that way means "is distributed as". Why? To ask why doesn't make much sense to me; it's just a convention. To cite Brian Ripley: Mathematical conventions are just that, conventions. They differ by
field of mathematics. Don't ask us why matrix rows are numbered down
but graphs are numbered up the y axis, nor why x comes before y but
row before column. But the matrix layout has always seemed illogical
to me. -- Brian D. Ripley (answering a question why print(x) and
image(x) are layouted differently)
R-help (August 2004) | {
"source": [
"https://stats.stackexchange.com/questions/41306",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16018/"
]
} |
41,394 | There have been many debates within statistics between Bayesians and frequentists. I generally find these rather off-putting (although I think it has died down). On the other hand, I've met several people who take an entirely pragmatic view of the issue, saying that sometimes it is more convenient to conduct a frequentist analysis and sometimes it's easier to run a Bayesian analysis. I find this perspective practical and refreshing. It occurs to me that it would be helpful to have a list of such cases. Because there are too many statistical analyses, and because I assume that it is ordinarily more practical to conduct a frequentist analysis (coding a t-test in WinBUGS is considerably more involved than the single function call required to perform the frequentist-based version in R, for example), it would be nice to have a list of the situations where a Bayesian approach is simpler, more practical, and / or more convenient than a frequentist approach. (Two answers that I have no interest in are: 'always', and 'never'. I understand people have strong opinions, but please don't air them here. If this thread becomes a venue for petty squabbling, I will probably delete it. My goal here is to develop a resource that will be useful for an analyst with a job to do, not an axe to grind.) People are welcome to suggest more than one case, but please use separate answers to do so, so that each situation can be evaluated (voted / discussed) individually. Answers should list: (1) what the nature of the situation is, and (2) why the Bayesian approach is simpler in this case. Some code (say, in WinBUGS) demonstrating how the analysis would be done and why the Bayesian version is more practical would be ideal, but I expect will be too cumbersome. If it can be done easily I would appreciate it, but please include why either way. Finally, I recognize that I have not defined what it means for one approach to be 'simpler' than another. The truth is, I'm not entirely sure what it should mean for one approach to be more practical than the other. I'm open to different suggestions, just specify your interpretation when you explain why a Bayesian analysis is more convenient in the situation you discuss. | (1) In contexts where the likelihood function is intractable (at least numerically), the use of the Bayesian approach, by means of Approximate Bayesian Computation (ABC), has gained ground over some frequentist competitors such as composite likelihoods ( 1 , 2 ) or the empirical likelihood because it tends to be easier to implement (not necessarily correct). Due to this, the use of ABC has become popular in areas where it is common to come across intractable likelihoods such as biology , genetics , and ecology . Here, we could mention an ocean of examples. Some examples of intractable likelihoods are Superposed processes. Cox and Smith (1954) proposed a model in the context of neurophysiology which consists of $N$ superposed point processes. For example consider the times between the electrical pulses observed at some part of the brain that were emited by several neurones during a certain period. This sample contains non iid observations which makes difficult to construct the corresponding likelihood, complicating the estimation of the corresponding parameters. A (partial)frequentist solution was recently proposed in this paper . The implementation of the ABC approach has also been recently studied and it can be found here . 
Population genetics is another example of models leading to intractable likelihoods. In this case the intractability has a different nature: the likelihood is expressed in terms of a multidimensional integral (sometimes of dimension $1000+$) which would take a couple of decades just to evaluate it at a single point. This area is probably ABC's headquarters. | {
"source": [
"https://stats.stackexchange.com/questions/41394",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7290/"
]
} |
41,443 | Anyone got library or code suggestions on how to actually plot a couple of sample trees from: getTree(rfobj, k, labelVar=TRUE) (Yes I know you're not supposed to do this operationally, RF is a blackbox, etc etc. I want to visually sanity-check a tree to see if any variables are behaving counterintuitively, need tweaking/combining/discretization/transformation, check how well my encoded factors are working, etc.) Prior questions without a decent answer: How to make Random Forests more interpretable? Also Obtaining knowledge from a random forest I actually want to plot a sample tree . So don't argue with me about that, already. I'm not asking about varImpPlot (Variable Importance Plot) or partialPlot or MDSPlot , or these other plots , I already have those, but they're not a substitute for seeing a sample tree.
Yes I can visually inspect the output of getTree(...,labelVar=TRUE) . (I guess a plot.rf.tree() contribution would be very-well-received.) | First (and easiest) solution: If you are not keen to stick with classical RF, as implemented in Andy Liaw's randomForest , you can try the party package which provides a different implementation of the original RF algorithm (use of conditional trees and an aggregation scheme based on units weight average). Then, as reported on this R-help post , you can plot a single member of the list of trees. It seems to run smoothly, as far as I can tell. Below is a plot of one tree generated by cforest(Species ~ ., data=iris, controls=cforest_control(mtry=2, mincriterion=0)) . Second (almost as easy) solution: Most of the tree-based techniques in R ( tree , rpart , TWIX , etc.) offer a tree -like structure for printing/plotting a single tree. The idea would be to convert the output of randomForest::getTree to such an R object, even if it is nonsensical from a statistical point of view. Basically, it is easy to access the tree structure from a tree object, as shown below. Please note that it will slightly differ depending on the type of task--regression vs. classification--where in the latter case it will add class-specific probabilities as the last column of the obj$frame (which is a data.frame ). > library(tree)
> tr <- tree(Species ~ ., data=iris)
> tr
node), split, n, deviance, yval, (yprob)
* denotes terminal node
1) root 150 329.600 setosa ( 0.33333 0.33333 0.33333 )
2) Petal.Length < 2.45 50 0.000 setosa ( 1.00000 0.00000 0.00000 ) *
3) Petal.Length > 2.45 100 138.600 versicolor ( 0.00000 0.50000 0.50000 )
6) Petal.Width < 1.75 54 33.320 versicolor ( 0.00000 0.90741 0.09259 )
12) Petal.Length < 4.95 48 9.721 versicolor ( 0.00000 0.97917 0.02083 )
24) Sepal.Length < 5.15 5 5.004 versicolor ( 0.00000 0.80000 0.20000 ) *
25) Sepal.Length > 5.15 43 0.000 versicolor ( 0.00000 1.00000 0.00000 ) *
13) Petal.Length > 4.95 6 7.638 virginica ( 0.00000 0.33333 0.66667 ) *
7) Petal.Width > 1.75 46 9.635 virginica ( 0.00000 0.02174 0.97826 )
14) Petal.Length < 4.95 6 5.407 virginica ( 0.00000 0.16667 0.83333 ) *
15) Petal.Length > 4.95 40 0.000 virginica ( 0.00000 0.00000 1.00000 ) *
> tr$frame
var n dev yval splits.cutleft splits.cutright yprob.setosa yprob.versicolor yprob.virginica
1 Petal.Length 150 329.583687 setosa <2.45 >2.45 0.33333333 0.33333333 0.33333333
2 <leaf> 50 0.000000 setosa 1.00000000 0.00000000 0.00000000
3 Petal.Width 100 138.629436 versicolor <1.75 >1.75 0.00000000 0.50000000 0.50000000
6 Petal.Length 54 33.317509 versicolor <4.95 >4.95 0.00000000 0.90740741 0.09259259
12 Sepal.Length 48 9.721422 versicolor <5.15 >5.15 0.00000000 0.97916667 0.02083333
24 <leaf> 5 5.004024 versicolor 0.00000000 0.80000000 0.20000000
25 <leaf> 43 0.000000 versicolor 0.00000000 1.00000000 0.00000000
13 <leaf> 6 7.638170 virginica 0.00000000 0.33333333 0.66666667
7 Petal.Length 46 9.635384 virginica <4.95 >4.95 0.00000000 0.02173913 0.97826087
14 <leaf> 6 5.406735 virginica 0.00000000 0.16666667 0.83333333
15 <leaf> 40 0.000000 virginica 0.00000000 0.00000000 1.00000000 Then, there are methods for pretty printing and plotting those objects. The key functions are a generic tree:::plot.tree method (I put a triple : which allows you to view the code in R directly) relying on tree:::treepl (graphical display) and tree:::treeco (compute nodes coordinates). These functions expect the obj$frame representation of the tree. Other subtle issues: (1) the argument type = c("proportional", "uniform") in the default plotting method, tree:::plot.tree , help to manage vertical distance between nodes ( proportional means it is proportional to deviance, uniform mean it is fixed); (2) you need to complement plot(tr) by a call to text(tr) to add text labels to nodes and splits, which in this case means that you will also have to take a look at tree:::text.tree . The getTree method from randomForest returns a different structure, which is documented in the online help. A typical output is shown below, with terminal nodes indicated by status code (-1). (Again, output will differ depending on the type of task, but only on the status and prediction columns.) > library(randomForest)
> rf <- randomForest(Species ~ ., data=iris)
> getTree(rf, 1, labelVar=TRUE)
left daughter right daughter split var split point status prediction
1 2 3 Petal.Length 4.75 1 <NA>
2 4 5 Sepal.Length 5.45 1 <NA>
3 6 7 Sepal.Width 3.15 1 <NA>
4 8 9 Petal.Width 0.80 1 <NA>
5 10 11 Sepal.Width 3.60 1 <NA>
6 0 0 <NA> 0.00 -1 virginica
7 12 13 Petal.Width 1.90 1 <NA>
8 0 0 <NA> 0.00 -1 setosa
9 14 15 Petal.Width 1.55 1 <NA>
10 0 0 <NA> 0.00 -1 versicolor
11 0 0 <NA> 0.00 -1 setosa
12 16 17 Petal.Length 5.40 1 <NA>
13 0 0 <NA> 0.00 -1 virginica
14 0 0 <NA> 0.00 -1 versicolor
15 0 0 <NA> 0.00 -1 virginica
16 0 0 <NA> 0.00 -1 versicolor
17 0 0 <NA> 0.00 -1 virginica If you can manage to convert the above table to the one generated by tree , you will probably be able to customize tree:::treepl , tree:::treeco and tree:::text.tree to suit your needs, though I do not have an example of this approach. In particular, you probably want to get rid of the use of deviance, class probabilities, etc. which are not meaningful in RF. All you want is to set up nodes coordinates and split values. You could use fixInNamespace() for that, but, to be honest, I'm not sure this is the right way to go. Third (and certainly clever) solution: Write a true as.tree helper
function which will alleviate all of the above "patches". You could then use R's plotting methods or, probably better, Klimt (directly from R) to display individual trees. | {
"source": [
"https://stats.stackexchange.com/questions/41443",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7291/"
]
} |
41,467 | I've been wondering about this one for a while; I find it a little weird how abruptly it happens. Basically, why do we need just three uniforms for $Z_n$ to smooth out like it does? And why does the smoothing-out happen so relatively quickly? $Z_2$: $Z_3$: (images shamelessly stolen from John D. Cook's blog: http://www.johndcook.com/blog/2009/02/12/sums-of-uniform-random-values/ ) Why doesn't it take, say, four uniforms? Or five? Or...? | We can take various approaches to this, any of which may seem intuitive to some people and less than intuitive to others. To accommodate such variation, this answer surveys several such approaches, covering the major divisions of mathematical thought--analysis (the infinite and the infinitesimal), geometry/topology (spatial relationships), and algebra (formal patterns of symbolic manipulation)--as well as probability itself. It culminates in an observation that unifies all four approaches, demonstrates there is a genuine question to be answered here, and shows exactly what the issue is. Each approach provides, in its own way, deeper insight into the nature of the shapes of the probability distribution functions of sums of independent uniform variables. Background The Uniform $[0,1]$ distribution has several basic descriptions. When $X$ has such a distribution, The chance that $X$ lies in a measurable set $A$ is just the measure (length) of $A \cap [0,1]$, written $|A \cap [0,1]|$. From this it is immediate that the cumulative distribution function (CDF) is $$F_X(x) = \Pr(X \le x) = |(-\infty, x] \cap [0,1]| = |[0,\min(x,1)]| = \begin{array}{ll} \left\{
\begin{array}{ll}
0 & x\lt 0 \\
x & 0\leq x\leq 1 \\
1 & x\gt 1.
\end{array}\right.
\end{array} $$ The probability density function (PDF), which is the derivative of the CDF, is $f_X(x) = 1$ for $0 \le x \le 1$ and $f_X(x)=0$ otherwise. (It is undefined at $0$ and $1$.) Intuition from Characteristic Functions (Analysis) The characteristic function (CF) of any random variable $X$ is the expectation of $\exp(i t X)$ (where $i$ is the imaginary unit, $i^2=-1$). Using the PDF of a uniform distribution we can compute $$\phi_X(t) = \int_{-\infty}^\infty \exp(i t x) f_X(x) dx = \int_0^1 \exp(i t x) dx = \left. \frac{\exp(itx)}{it} \right|_{x=0}^{x=1} = \frac{\exp(it)-1}{it}.$$ The CF is a (version of the) Fourier transform of the PDF, $\phi(t) = \hat{f}(t)$. The most basic theorems about Fourier transforms are: The CF of a sum of independent variables $X+Y$ is the product of their CFs. When the original PDF $f$ is continuous and $X$ is bounded, $f$ can be recovered from the CF $\phi$ by a closely related version of the Fourier transform, $$f(x) = \check{\phi}(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt.$$ When $f$ is differentiable, its derivative can be computed under the integral sign: $$f'(x) = \frac{d}{dx} \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-i x t) \phi(t) dt = \frac{-i}{2\pi} \int_{-\infty}^\infty t \exp(-i x t) \phi(t) dt.$$ For this to be well-defined, the last integral must converge absolutely; that is, $$\int_{-\infty}^\infty |t \exp(-i x t) \phi(t)| dt = \int_{-\infty}^\infty |t| |\phi(t)| dt$$ must converge to a finite value. Conversely, when it does converge, the derivative exists everywhere by virtue of these inversion formulas. It is now clear exactly how differentiable the PDF for a sum of $n$ uniform variables is: from the first bullet, the CF of the sum of iid variables is the CF of one of them raised to the $n^\text{th}$ power, here equal to $(\exp(i t) - 1)^n / (i t)^n$. The numerator is bounded (it consists of sine waves) while the denominator is $O(t^{n})$. We can multiply such an integrand by $t^{s}$ and it will still converge absolutely when $s \lt n-1$ and converge conditionally when $s = n-1$. Thus, repeated application of the third bullet shows that the PDF for the sum of $n$ uniform variates will be continuously $n-2$ times differentiable and, in most places, it will be $n-1$ times differentiable. The blue shaded curve is a log-log plot of the absolute value of the real part of the CF of the sum of $n=10$ iid uniform variates. The dashed red line is an asymptote; its slope is $-10$, showing that the PDF is $10 - 2 = 8$ times differentiable. For reference, the gray curve plots the real part of the CF for a similarly shaped Gaussian function (a normal PDF). Intuition from Probability Let $Y$ and $X$ be independent random variables where $X$ has a Uniform $[0,1]$ distribution. Consider a narrow interval $(t, t+dt]$. We decompose the chance that $X+Y \in (t, t+dt]$ into the chance that $Y$ is sufficiently close to this interval times the chance that $X$ is just the right size to place $X+Y$ in this interval, given that $Y$ is close enough: $$\eqalign{
f_{X+Y}(t) dt = &\Pr(X+Y\in (t,t+dt])\\
& = \Pr(X+Y\in (t,t+dt] | Y \in (t-1, t+dt]) \Pr(Y \in (t-1, t+dt]) \\
& = \Pr(X \in (t-Y, t-Y+dt] | Y \in (t-1, t+dt]) \left(F_Y(t+dt) - F_Y(t-1)\right) \\
& = 1 dt \left(F_Y(t+dt) - F_Y(t-1)\right).
}$$ The final equality comes from the expression for the PDF of $X$. Dividing both sides by $dt$ and taking the limit as $dt\to 0$ gives $$f_{X+Y}(t) = F_Y(t) - F_Y(t-1).$$ In other words, adding a Uniform $[0,1]$ variable $X$ to any variable $Y$ changes the pdf $f_Y$ into a differenced CDF $F_Y(t) - F_Y(t-1)$. Because the PDF is the derivative of the CDF, this implies that each time we add an independent uniform variable to $Y$, the resulting PDF is one time more differentiable than before. Let's apply this insight, starting with a uniform variable $Y$. The original PDF is not differentiable at $0$ or $1$: it is discontinuous there. The PDF of $Y+X$ is not differentiable at $0$, $1$, or $2$, but it must be continuous at those points, because it is the difference of integrals of the PDF of $Y$ . Add another independent uniform variable $X_2$: the PDF of $Y+X+X_2$ is differentiable at $0$,$1$,$2$, and $3$--but it does not necessarily have second derivatives at those points. And so on. Intuition from Geometry The CDF at $t$ of a sum of $n$ iid uniform variates equals the volume of the unit hypercube $[0,1]^n$ lying within the half-space $x_1+x_2+\cdots+x_n \le t$. The situation for $n=3$ variates is shown here, with $t$ set at $1/2$, $3/2$, and then $5/2$. As $t$ progresses from $0$ through $n$, the hyperplane $H_n(t): x_1+x_2+\cdots+x_n=t$ crosses vertices at $t=0$, $t=1, \ldots, t=n$. At each time the shape of the cross section changes: in the figure it first is a triangle (a $2$-simplex), then a hexagon, then a triangle again. Why doesn't the PDF have sharp bends at these values of $t$? To understand this, first consider small values of $t$. Here, the hyperplane $H_n(t)$ cuts off an $n-1$-simplex. All $n-1$ dimensions of the simplex are directly proportional to $t$, whence its "area" is proportional to $t^{n-1}$. Some notation for this will come in handy later. Let $\theta$ be the "unit step function," $$\theta(x) = \begin{array}{ll} \left\{
\begin{array}{ll}
0 & x \lt 0 \\
1 & x\ge 0.
\end{array}\right.
\end{array} $$ If it were not for the presence of the other corners of the hypercube, this scaling would continue indefinitely. A plot of the area of the $n-1$-simplex would look like the solid blue curve below: it is zero at negative values and equals $t^{n-1}/(n-1)!$ at the positive one, conveniently written $\theta(t) t^{n-1}/(n-1)!$. It has a "kink" of order $n-2$ at the origin, in the sense that all derivatives through order $n-3$ exist and are continuous, but that left and right derivatives of order $n-2$ exist but do not agree at the origin. (The other curves shown in this figure are $-3\theta(t-1) (t-1)^{2}/2!$ (red), $3\theta(t-2) (t-2)^{2}/2!$ (gold), and $-\theta(t-3) (t-3)^{2}/2!$ (black). Their roles in the case $n=3$ are discussed further below.) To understand what happens when $t$ crosses $1$, let's examine in detail the case $n=2$, where all the geometry happens in a plane. We may view the unit "cube" (now just a square) as a linear combination of quadrants , as shown here: The first quadrant appears in the lower left panel, in gray. The value of $t$ is $1.5$, determining the diagonal line shown in all five panels. The CDF equals the yellow area shown at right. This yellow area is comprised of: The triangular gray area in the lower left panel, minus the triangular green area in the upper left panel, minus the triangular red area in the low middle panel, plus any blue area in the upper middle panel (but there isn't any such area, nor will there be until $t$ exceeds $2$). Every one of these $2^n=4$ areas is the area of a triangle. The first one scales like $t^n=t^2$, the next two are zero for $t\lt 1$ and otherwise scale like $(t-1)^n = (t-1)^2$, and the last is zero for $t\lt 2$ and otherwise scales like $(t-2)^n$. This geometric analysis has established that the CDF is proportional to $\theta(t)t^2 - \theta(t-1)(t-1)^2 - \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$ = $\theta(t)t^2 - 2 \theta(t-1)(t-1)^2 + \theta(t-2)(t-2)^2$; equivalently, the PDF is proportional to the sum of the three functions $\theta(t)t$, $-2\theta(t-1)(t-1)$, and $\theta(t-2)(t-2)$ (each of them scaling linearly when $n=2$). The left panel of this figure shows their graphs: evidently, they are all versions of the original graph $\theta(t)t$, but (a) shifted by $0$, $1$, and $2$ units to the right and (b) rescaled by $1$, $-2$, and $1$, respectively. The right panel shows the sum of these graphs (the solid black curve, normalized to have unit area: this is precisely the angular-looking PDF shown in the original question. Now we can understand the nature of the "kinks" in the PDF of any sum of iid uniform variables. They are all exactly like the "kink" that occurs at $0$ in the function $\theta(t)t^{n-1}$, possibly rescaled, and shifted to the integers $1,2,\ldots, n$ corresponding to where the hyperplane $H_n(t)$ crosses the vertices of the hypercube. For $n=2$, this is a visible change in direction: the right derivative of $\theta(t)t$ at $0$ is $0$ while its left derivative is $1$. For $n=3$, this is a continuous change in direction, but a sudden (discontinuous) change in second derivative. For general $n$, there will be continuous derivatives through order $n-2$ but a discontinuity in the $n-1^\text{st}$ derivative. 
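To see these orders of smoothness numerically, a minimal R sketch is enough (the sample size is arbitrary): the corner of the triangular density for $n=2$ is plainly visible at $1$, while for $n=3$ the bends at $1$ and $2$ are already too gentle to spot by eye.
# Simulate sums of two and three iid Uniform(0,1) variables and compare their histograms.
set.seed(1)
N  <- 1e6
s2 <- runif(N) + runif(N)
s3 <- runif(N) + runif(N) + runif(N)
par(mfrow = c(1, 2))
hist(s2, breaks = 200, freq = FALSE, main = "n = 2", xlab = "sum")
hist(s3, breaks = 200, freq = FALSE, main = "n = 3", xlab = "sum")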
Intuition from Algebraic Manipulation The integration to compute the CF, the form of the conditional probability in the probabilistic analysis, and the synthesis of a hypercube as a linear combination of quadrants all suggest returning to the original uniform distribution and re-expressing it as a linear combination of simpler things. Indeed, its PDF can be written $$f_X(x) = \theta(x) - \theta(x-1).$$ Let us introduce the shift operator $\Delta$: it acts on any function $f$ by shifting its graph one unit to the right: $$(\Delta f)(x) = f(x-1).$$ Formally, then, for the PDF of a uniform variable $X$ we may write $$f_X = (1 - \Delta)\theta.$$ The PDF of a sum of $n$ iid uniforms is the convolution of $f_X$ with itself $n$ times. This follows from the definition of a sum of random variables: the convolution of two functions $f$ and $g$ is the function $$(f \star g)(x) = \int_{-\infty}^{\infty} f(x-y)g(y) dy.$$ It is easy to verify that convolution commutes with $\Delta$. Just change the variable of integration from $y$ to $y+1$: $$\eqalign{
(f \star (\Delta g)) &= \int_{-\infty}^{\infty} f(x-y)(\Delta g)(y) dy \\
&= \int_{-\infty}^{\infty} f(x-y)g(y-1) dy \\
&= \int_{-\infty}^{\infty} f((x-1)-y)g(y) dy \\
&= (\Delta (f \star g))(x).
}$$ For the PDF of the sum of $n$ iid uniforms, we may now proceed algebraically to write $$f = f_X^{\star n} = ((1 - \Delta)\theta)^{\star n} = (1-\Delta)^n \theta^{\star n}$$ (where the $\star n$ "power" denotes repeated convolution, not pointwise multiplication!). Now $\theta^{\star n}$ is a direct, elementary integration, giving $$\theta^{\star n}(x) = \theta(x) \frac{x^{n-1}}{{n-1}!}.$$ The rest is algebra, because the Binomial Theorem applies (as it does in any commutative algebra over the reals): $$f = (1-\Delta)^n \theta^{\star n} = \sum_{i=0}^{n} (-1)^i \binom{n}{i} \Delta^i \theta^{\star n}.$$ Because $\Delta^i$ merely shifts its argument by $i$, this exhibits the PDF $f$ as a linear combination of shifted versions of $\theta(x) x^{n-1}$, exactly as we deduced geometrically: $$f(x) = \frac{1}{(n-1)!}\sum_{i=0}^{n} (-1)^i \binom{n}{i} (x-i)^{n-1}\theta(x-i).$$ (John Cook quotes this formula later in his blog post, using the notation $(x-i)^{n-1}_+$ for $(x-i)^{n-1}\theta(x-i)$.) Accordingly, because $x^{n-1}$ is a smooth function everywhere, any singular behavior of the PDF will occur only at places where $\theta(x)$ is singular (obviously just $0$) and at those places shifted to the right by $1, 2, \ldots, n$. The nature of that singular behavior--the degree of smoothness--will therefore be the same at all $n+1$ locations. Illustrating this is the picture for $n=8$, showing (in the left panel) the individual terms in the sum and (in the right panel) the partial sums, culminating in the sum itself (solid black curve): Closing Comments It is useful to note that this last approach has finally yielded a compact, practical expression for computing the PDF of a sum of $n$ iid uniform variables. (A formula for the CDF is similarly obtained.) The Central Limit Theorem has little to say here. After all, a sum of iid Binomial variables converges to a Normal distribution, but that sum is always discrete: it never even has a PDF at all! We should not hope for any intuition about "kinks" or other measures of differentiability of a PDF to come from the CLT. | {
"source": [
"https://stats.stackexchange.com/questions/41467",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9216/"
]
} |
41,509 | I am confused. I don't understand the difference between an ARMA and a GARCH process; to me they seem to be the same, no? Here is the (G)ARCH(p, q) process $$\sigma_t^2 =
\underbrace{
\underbrace{
\alpha_0
+ \sum_{i=1}^q \alpha_ir_{t-i}^2}
_{ARCH}
+ \sum_{i=1}^p\beta_i\sigma_{t-i}^2}
_{GARCH}$$ And here is the ARMA($p, q$): $$ X_t = c + \varepsilon_t + \sum_{i=1}^p \varphi_i X_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i}.\,$$ Is the ARMA simply an extension of the GARCH, GARCH being used only for returns and with the assumption $r = \sigma\varepsilon$ where $\varepsilon$ follows a strong white process? | You are conflating the features of a process with its representation. Consider the (return) process $(Y_t)_{t=0}^\infty$. An ARMA(p,q) model specifies the conditional mean of the process as $$
\begin{align}
\mathbb{E}(Y_t \mid \mathcal{I}_t) &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j}+ \sum_{k=1}^q \beta_k\epsilon_{t-k}\\
\end{align}
$$
Here, $\mathcal{I}_t$ is the information set at time $t$, which is the $\sigma$-algebra generated by the lagged values of the outcome process $(Y_t)$. The GARCH(r,s) model specifies the conditional variance of the process
$$
\begin{alignat}{2}
& \mathbb{V}(Y_t \mid \mathcal{I}_t) &{}={}& \mathbb{V}(\epsilon_t \mid \mathcal{I}_t) \\
\equiv \,& \sigma^2_t&{}={}& \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}
\end{alignat}
$$ Note in particular the first equivalence $ \mathbb{V}(Y_t \mid \mathcal{I}_t)= \mathbb{V}(\epsilon_t \mid \mathcal{I}_t)$. Aside : Based on this representation, you can write
$$
\epsilon_t \equiv \sigma_t Z_t
$$
where $Z_t$ is a strong white noise process, but this follows from the way the process is defined. The two models (for the conditional mean and the variance) are perfectly compatible with each other, in that the mean of the process can be modeled as ARMA, and the variances as GARCH. This leads to the complete specification of an ARMA(p,q)-GARCH(r,s) model for the process as in the following representation
$$
\begin{align}
Y_t &= \alpha_0 + \sum_{j=1}^p \alpha_j Y_{t-j} + \sum_{k=1}^q \beta_k\epsilon_{t-k} +\epsilon_t\\
\mathbb{E}(\epsilon_t\mid \mathcal{I}_t) &=0,\, \forall t \\
\mathbb{V}(\epsilon_t \mid \mathcal{I}_t) &= \delta_0 + \sum_{l=1}^r \delta_l \sigma^2_{t-l} + \sum_{m=1}^s \gamma_m \epsilon^2_{t-m}\, \forall t
\end{align}
$$ | {
"source": [
"https://stats.stackexchange.com/questions/41509",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16387/"
]
} |
41,536 | I am trying to build a model where the response is a proportion (it is actually the share of votes a party gets in constituencies). Its distribution is not normal, so I decided to model it with a beta distribution. I also have several predictors. However, I don't know how to write it in BUGS/JAGS/STAN (JAGS would be my best choice, but it doesn't really matter). My problem is that I make a sum of parameters multiplied by predictors, but then what can I do with it? The code would be something like this (in JAGS syntax), but I don't know how to "link" the y_hat and y parameters. for (i in 1:n) {
y[i] ~ dbeta(alpha, beta)
y_hat[i] <- a + b * x[i]
} ( y_hat is just the cross-product of parameters and predictors, hence the deterministic relationship. a and b are the coefficients which I try to estimate, x being a predictor). Thanks for your suggestions! | The beta regression approach is to reparameterize in terms of $\mu$ and $\phi$, where $\mu$ is the equivalent of the y_hat that you predict. In this parameterization you will have $\alpha=\mu\times\phi$ and $\beta=(1-\mu) \times \phi$. Then you can model the logit of $\mu$ as the linear combination of predictors. $\phi$ can either have its own prior (must be greater than 0), or can be modeled on covariates as well (choose a link function to keep it greater than 0, such as exponential). Possibly something like: for(i in 1:n) {
y[i] ~ dbeta(alpha[i], beta[i])
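# mean/precision parameterization: alpha = mu*phi and beta = (1-mu)*phi give E[y[i]] = mu[i], with larger phi meaning less spread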
alpha[i] <- mu[i] * phi
beta[i] <- (1-mu[i]) * phi
logit(mu[i]) <- a + b*x[i]
}
phi ~ dgamma(.1,.1)
a ~ dnorm(0,.001)
b ~ dnorm(0,.001) | {
"source": [
"https://stats.stackexchange.com/questions/41536",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10890/"
]
} |
41,704 | I see that lots of machine learning algorithms work better with mean cancellation and covariance equalization. For example, Neural Networks tend to converge faster, and K-Means generally gives better clustering with pre-processed features. I do not see the intuition behind why these pre-processing steps lead to improved performance. Can someone explain this to me? | It's simply a case of getting all your data on the same scale: if the scales for different features are wildly different, this can have a knock-on effect on your ability to learn (depending on what methods you're using to do it). Ensuring standardised feature values implicitly weights all features equally in their representation. | {
"source": [
"https://stats.stackexchange.com/questions/41704",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14289/"
]
} |
42,956 | I'm not sure how to interpret this probit regression I ran on Stata. The data is on loan approval and white is a dummy variable that =1 if a person was white, and =0 if the person was not. Any help on how to read this would be greatly appreciated. What I'm mostly looking for is how to find the estimated probability of loan approval for both whites and nonwhites. Can someone also help me with the text on here and how to make it normal?? I'm sorry I don't know how to do this. . probit approve white
Iteration 0: log likelihood = -740.34659
Iteration 1: log likelihood = -701.33221
Iteration 2: log likelihood = -700.87747
Iteration 3: log likelihood = -700.87744
Probit regression
Number of obs = 1989
LR chi2(1) = 78.94
Prob > chi2 = 0.0000
Log likelihood = -700.87744
Pseudo R2 = 0.0533 for the variable white: Coef.: .7839465
Std. Err.: .0867118
z: 9.04
P>|z|: 0.000
95% Conf. Interval: .6139946-.9538985 for the constant: Coef.: .5469463
Std. Err.: .075435
z: 7.25
P>|z|: 0.000
95% Conf. Interval: .3990964-.6947962 | In general, you cannot interpret the coefficients from the output of a probit regression (not in any standard way, at least). You need to interpret the marginal effects of the regressors, that is, how much the (conditional) probability of the outcome variable changes when you change the value of a regressor, holding all other regressors constant at some values. This is different from the linear regression case where you are directly interpreting the estimated coefficients. This is so because in the linear regression case, the regression coefficients are the marginal effects . In the probit regression, there is an additional step of computation required to get the marginal effects once you have computed the probit regression fit. Linear and probit regression models Probit regression : Recall that in the probit model, you are modelling the (conditional) probability of a "successful" outcome, that is, $Y_i=1$,
$$
\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right] = \Phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. This basically says that, conditional on the regressors, the probability that the outcome variable, $Y_i$ is 1, is a certain function of a linear combination of the regressors. Linear regression : Compare this to the linear regression model, where $$
\mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right) = \beta_0 + \sum_{k=1}^K \beta_kX_{ki}$$
the (conditional) mean of the outcome is a linear combination of the regressors. Marginal effects Other than in the linear regression model, coefficients rarely have any direct interpretation. We are typically interested in the ceteris paribus effects of changes in the regressors affecting the features of the outcome variable. This is the notion that marginal effects measure. Linear regression : I would now like to know how much the mean of the outcome variable moves when I move one of the regressors $$
\frac{\partial \mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right)}{\partial X_{ki}} = \beta_k
$$ But this is just the regression coefficient, which means that the marginal effect of a change in the $k$-th regressor is just the regression coefficient. Probit regression : However, it is easy to see that this is not the case for the probit regression $$
\frac{\partial \mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]}{\partial X_{ki}} = \beta_k\phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
which is not the same as the regression coefficient. These are the marginal effects for the probit model, and the quantity we are after. In particular, this depends on the values of all the other regressors, and the regression coefficients. Here $\phi(\cdot)$ is the standard normal probability density function. How are you to compute this quantity, and what are the choices of the other regressors that should enter this formula? Thankfully, Stata provides this computation after a probit regression, and provides some defaults of the choices of the other regressors (there is no universal agreement on these defaults). Discrete regressors Note that much of the above applies to the case of continuous regressors, since we have used calculus. In the case of discrete regressors, you need to use discrete changes. SO, for example, the discrete change in a regressor $X_{ki}$ that takes the values $\{0,1\}$ is $$
\small
\begin{align}
\Delta_{X_{ki}}\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]&=\Phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+\beta_k + \sum_{l=k+1}^K\beta_l X_{li}) \\
&\quad- \Phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+ \sum_{l=k+1}^K\beta_l X_{li})
\end{align}
$$ Computing marginal effects in Stata Probit regression : Here is an example of computation of marginal effects after a probit regression in Stata. webuse union
probit union age grade not_smsa south##c.year
margins, dydx(*) Here is the output you will get from the margins command . margins, dydx(*)
Average marginal effects Number of obs = 26200
Model VCE : OIM
Expression : Pr(union), predict()
dy/dx w.r.t. : age grade not_smsa 1.south year
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | .003442 .000844 4.08 0.000 .0017878 .0050963
grade | .0077673 .0010639 7.30 0.000 .0056822 .0098525
not_smsa | -.0375788 .0058753 -6.40 0.000 -.0490941 -.0260634
1.south | -.1054928 .0050851 -20.75 0.000 -.1154594 -.0955261
year | -.0017906 .0009195 -1.95 0.051 -.0035928 .0000115
------------------------------------------------------------------------------
Note: dy/dx for factor levels is the discrete change from the base level. This can be interpreted, for example, as follows: a one-unit change in the age variable increases the probability of union status by 0.003442. Similarly, being from the south decreases the probability of union status by 0.1054928. Linear regression : As a final check, we can confirm that the marginal effects in the linear regression model are the same as the regression coefficients (with one small twist). Running the following regression and computing the marginal effects after sysuse auto, clear
regress mpg weight c.weight#c.weight foreign
margins, dydx(*) just gives you back the regression coefficients. Note the interesting fact that Stata computes the net marginal effect of a regressor including the effect through the quadratic terms if included in the model. . margins, dydx(*)
Average marginal effects Number of obs = 74
Model VCE : OLS
Expression : Linear prediction, predict()
dy/dx w.r.t. : weight foreign
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight | -.0069641 .0006314 -11.03 0.000 -.0082016 -.0057266
foreign | -2.2035 1.059246 -2.08 0.038 -4.279585 -.1274157
------------------------------------------------------------------------------ | {
"source": [
"https://stats.stackexchange.com/questions/42956",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16557/"
]
} |
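To address the asker's concrete question about the estimated approval probabilities for the two groups, here is a minimal R sketch (an illustration only, plugging in the coefficient and constant reported in the output above; pnorm() is the standard normal CDF $\Phi$):
b_white <- 0.7839465 # probit coefficient on the dummy white, from the output
b_const <- 0.5469463 # probit constant, from the output
pnorm(b_const) # estimated approval probability for nonwhites, about 0.71
pnorm(b_const + b_white) # estimated approval probability for whites, about 0.91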
42,957 | For my research I am doing classification on a dataset of three variables. I ran unsupervised clustering (based on a histogram-peak technique of cluster analysis) and evaluated the result visually; it was not very good. I also tried supervised classification (minimum distance to means) and unsupervised k-means (with a random-seed cluster-centroid initialization rule), with even worse results.
Then I run principal component analysis for these 3 variables, and got almost 99% of the variance in the first component (actually PC1 98.9, PC2 0.97, PC3 0.14 ).
After this I run the classification again but now on the 3 principal components instead of the 3 initial variables.
The clustering result was much better, really good. The others did not improve much. And I see that PC2, despite carrying only 1% of the variance, is very important for my classification, and PC3 also helps. And my question is: what is the statistical explanation of that effect? Maybe, by using 3 orthogonal components as equal variables for classification, I am increasing the influence of the small part of the information that sits in the last principal components?
Is it a reasonable way to help classification, or am I doing something crazy from the statistical point of view? And is this effect just a case of good luck that depends on the image, or can I use and recommend this method hereafter? I saw questions here about using PCA for reducing dimensions before classification, but this is not what I need; on the contrary, I am interested in using all (especially the last) PCs.
Also I saw here the question PCA and random forests , but it was mainly about why classifications improved with additional features and touched my question just at #3 "what if ...", and those answers are not very good for me. And, my variables are red, green and blue bands of an image. Here is the comparison of the classification results in the most problem spots for land and water. Here are images of PCs and scatterplots of PCs .
The scatterplots of the variables: I am not a statistician and I appreciate not very complicated explanations. Thanks in advance! | In general, you cannot interpret the coefficients from the output of a probit regression (not in any standard way, at least). You need to interpret the marginal effects of the regressors, that is, how much the (conditional) probability of the outcome variable changes when you change the value of a regressor, holding all other regressors constant at some values. This is different from the linear regression case where you are directly interpreting the estimated coefficients. This is so because in the linear regression case, the regression coefficients are the marginal effects . In the probit regression, there is an additional step of computation required to get the marginal effects once you have computed the probit regression fit. Linear and probit regression models Probit regression : Recall that in the probit model, you are modelling the (conditional) probability of a "successful" outcome, that is, $Y_i=1$,
$$
\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right] = \Phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. This basically says that, conditional on the regressors, the probability that the outcome variable, $Y_i$ is 1, is a certain function of a linear combination of the regressors. Linear regression : Compare this to the linear regression model, where $$
\mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right) = \beta_0 + \sum_{k=1}^K \beta_kX_{ki}$$
the (conditional) mean of the outcome is a linear combination of the regressors. Marginal effects Other than in the linear regression model, coefficients rarely have any direct interpretation. We are typically interested in the ceteris paribus effects of changes in the regressors affecting the features of the outcome variable. This is the notion that marginal effects measure. Linear regression : I would now like to know how much the mean of the outcome variable moves when I move one of the regressors $$
\frac{\partial \mathbb{E}\left(Y_i\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right)}{\partial X_{ki}} = \beta_k
$$ But this is just the regression coeffcient, which means that the marginal effect of a change in the $k$-th regressor is just the regression coefficient. Probit regression : However, it is easy to see that this is not the case for the probit regression $$
\frac{\partial \mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]}{\partial X_{ki}} = \beta_k\phi(\beta_0 + \sum_{k=1}^K \beta_kX_{ki})
$$
which is not the same as the regression coefficient. These are the marginal effects for the probit model, and the quantity we are after. In particular, this depends on the values of all the other regressors, and the regression coefficients. Here $\phi(\cdot)$ is the standard normal probability density function. How are you to compute this quantity, and what are the choices of the other regressors that should enter this formula? Thankfully, Stata provides this computation after a probit regression, and provides some defaults of the choices of the other regressors (there is no universal agreement on these defaults). Discrete regressors Note that much of the above applies to the case of continuous regressors, since we have used calculus. In the case of discrete regressors, you need to use discrete changes. SO, for example, the discrete change in a regressor $X_{ki}$ that takes the values $\{0,1\}$ is $$
\small
\begin{align}
\Delta_{X_{ki}}\mathbb{P}\left[Y_i=1\mid X_{1i}, \ldots, X_{Ki};\beta_0, \ldots, \beta_K\right]&=\beta_k\phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+\beta_k + \sum_{l=k+1}^K\beta_l X_{li}) \\
&\quad- \beta_k\phi(\beta_0 + \sum_{l=1}^{k-1} \beta_lX_{li}+ \sum_{l=k+1}^K\beta_l X_{li})
\end{align}
$$ Computing marginal effects in Stata Probit regression : Here is an example of computation of marginal effects after a probit regression in Stata. webuse union
probit union age grade not_smsa south##c.year
margins, dydx(*) Here is the output you will get from the margins command . margins, dydx(*)
Average marginal effects Number of obs = 26200
Model VCE : OIM
Expression : Pr(union), predict()
dy/dx w.r.t. : age grade not_smsa 1.south year
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
age | .003442 .000844 4.08 0.000 .0017878 .0050963
grade | .0077673 .0010639 7.30 0.000 .0056822 .0098525
not_smsa | -.0375788 .0058753 -6.40 0.000 -.0490941 -.0260634
1.south | -.1054928 .0050851 -20.75 0.000 -.1154594 -.0955261
year | -.0017906 .0009195 -1.95 0.051 -.0035928 .0000115
------------------------------------------------------------------------------
Note: dy/dx for factor levels is the discrete change from the base level. This can be interpreted, for example, that the a one unit change in the age variable, increases the probability of union status by 0.003442. Similarly, being from the south, decreases the probability of union status by 0.1054928 Linear regression : As a final check, we can confirm that the marginal effects in the linear regression model are the same as the regression coefficients (with one small twist). Running the following regression and computing the marginal effects after sysuse auto, clear
regress mpg weight c.weight#c.weight foreign
margins, dydx(*) just gives you back the regression coefficients. Note the interesting fact that Stata computes the net marginal effect of a regressor including the effect through the quadratic terms if included in the model. . margins, dydx(*)
Average marginal effects Number of obs = 74
Model VCE : OLS
Expression : Linear prediction, predict()
dy/dx w.r.t. : weight foreign
------------------------------------------------------------------------------
| Delta-method
| dy/dx Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
weight | -.0069641 .0006314 -11.03 0.000 -.0082016 -.0057266
foreign | -2.2035 1.059246 -2.08 0.038 -4.279585 -.1274157
------------------------------------------------------------------------------ | {
"source": [
"https://stats.stackexchange.com/questions/42957",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9657/"
]
} |
43,159 | Say there are $m+n$ elements split into two groups ( $m$ and $n$ ). The variance of the first group is $\sigma_m^2$ and the variance of the second group is $\sigma^2_n$ . The elements themselves are assumed to be unknown but I know the means $\mu_m$ and $\mu_n$ . Is there a way to calculate the combined variance $\sigma^2_{(m+n)}$ ? The variance doesn't have to be unbiased so denominator is $(m+n)$ and not $(m+n-1)$ . | The idea is to express quantities as sums rather than fractions. Given any $n$ data values $x_i,$ use the definitions of the mean $$\mu_{1:n} = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i} x_i$$ and sample variance $$\sigma_{1:n}^2 = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i}\left(x_i - \mu_{1:n}\right)^2 = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i}x_i^2 - \mu_{1:n}^2$$ to find the (weighted) sum of squares of the data as $$\Omega_{1;n}\mu_{1:n} = \sum_{i=1}^n \omega_{i} x_i$$ and $$\Omega_{1;n} \sigma_{1:n}^2 = \sum_{i=1}^n \omega_{i}\left(x_i - \mu_{1:n}\right)^2 = \sum_{i=1}^n \omega_{i}x_i^2 - \Omega_{1;n}\mu_{1:n}^2.$$ For notational convenience I have written $$\Omega_{j;k}=\sum_{i=j}^k \omega_i$$ for sums of weights. (In applications with equal weights, which are the usual ones, we may take $\omega_i=1$ for all $i,$ whence $\Omega_{1;n}=n.$ ) Let's do the (simple) algebra. Order the indexes $i$ so that $i=1,\ldots,n$ designates elements of the first group and $i=n+1,\ldots,n+m$ designates elements of the second group. Break the overall combination of squares by group and re-express the two pieces in terms of the variances and means of the subsets of the data: $$\eqalign{
\Omega_{1;n+m}(\sigma^2_{1:m+n} + \mu_{1:m+n}^2)&= \sum_{i=1}^{n+m} \omega_{i}x_i^2 \\
&= \sum_{i=1}^n \omega_{i} x_i^2 + \sum_{i=n+1}^{n+m} \omega_{i} x_i^2 \\
&= \Omega_{1;n}(\sigma^2_{1:n} + \mu_{1:n}^2) + \Omega_{n+1;n+m}(\sigma^2_{1+n:m+n} + \mu_{1+n:m+n}^2).
}$$ Algebraically solving this for $\sigma^2_{m+n}$ in terms of the other (known) quantities yields $$\sigma^2_{1:m+n} = \frac{\Omega_{1;n}(\sigma^2_{1:n} + \mu_{1:n}^2) + \Omega_{n+1;n+m}(\sigma^2_{1+n:m+n} + \mu_{1+n:m+n}^2)}{\Omega_{1;n+m}} - \mu^2_{1:m+n}.$$ Of course, using the same approach, $\mu_{1:m+n} = (\Omega_{1;n}\mu_{1:n} + \Omega_{n+1;n+m}\mu_{1+n:m+n})/\Omega_{1;n+m}$ can be expressed in terms of the group means, too. Edit 1 An anonymous contributor points out that when the sample means are equal (so that $\mu_{1:n}=\mu_{1+n:m+n}=\mu_{1:m+n}$ ), the solution for $\sigma^2_{m+n}$ is a weighted mean of the group sample variances. Edit 2 I have generalized the formulas to weighted statistics. The motivation for this is a recent federal court case in the US involving a dispute over how to pool weighted variances: a government agency contends the proper method is to weight the two group variances equally. In working on this case I found it difficult to find authoritative references on combining weighted statistics: most textbooks do not deal with this or they assume the generalization is obvious (which it is, but not necessarily to government employees or lawyers!). BTW, I used entirely different notation in my work on that case. If in the editing process any error has crept into the formulas in this post I apologize in advance and will fix them--but that would not reflect any error in my testimony, which was very carefully checked. | {
"source": [
"https://stats.stackexchange.com/questions/43159",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16662/"
]
} |
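As a quick numeric check of the pooled-variance formula above (a sketch with equal weights and made-up data; v() is the variance with denominator n, as in the question):
x <- rnorm(7); y <- rnorm(5)
n <- length(x); m <- length(y)
v <- function(z) mean((z - mean(z))^2) # variance with denominator length(z)
mu_all <- (n*mean(x) + m*mean(y)) / (n + m)
v_all <- (n*(v(x) + mean(x)^2) + m*(v(y) + mean(y)^2)) / (n + m) - mu_all^2
all.equal(v_all, v(c(x, y))) # TRUE: matches the variance of the combined data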
43,161 | I want to create a model matrix for a quadratic regression in R. I have a matrix $X$, where each column is a covariate. Currently, I'm using the code: Model = model.matrix(~ 1 + X[,1] + X[,2] + X[,3] + I(X[,1]^2/2) +
I(X[,2]^2/2) + I(X[,3]^2/2) + I(X[,1]*X[,2]) + I(X[,1]*X[,3])+I(X[,2]*X[,3])) Then if I'm dealing with a problem with 4 or more covariates, I have to add the new terms by hand. I tried to create the model matrix myself using: Model = cbind(1,X,X^2/2,mixed_terms) I don't remember exactly how I was calculating the mixed terms (I think I used a permutation of the columns of $X$), but in any case my code was much slower in the model.matrix , while I need the model matrix to be created fast because it goes inside a loop in a simulation routine. Thanks! | The idea is to express quantities as sums rather than fractions. Given any $n$ data values $x_i,$ use the definitions of the mean $$\mu_{1:n} = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i} x_i$$ and sample variance $$\sigma_{1:n}^2 = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i}\left(x_i - \mu_{1:n}\right)^2 = \frac{1}{\Omega_{1;n}}\sum_{i=1}^n \omega_{i}x_i^2 - \mu_{1:n}^2$$ to find the (weighted) sum of squares of the data as $$\Omega_{1;n}\mu_{1:n} = \sum_{i=1}^n \omega_{i} x_i$$ and $$\Omega_{1;n} \sigma_{1:n}^2 = \sum_{i=1}^n \omega_{i}\left(x_i - \mu_{1:n}\right)^2 = \sum_{i=1}^n \omega_{i}x_i^2 - \Omega_{1;n}\mu_{1:n}^2.$$ For notational convenience I have written $$\Omega_{j;k}=\sum_{i=j}^k \omega_i$$ for sums of weights. (In applications with equal weights, which are the usual ones, we may take $\omega_i=1$ for all $i,$ whence $\Omega_{1;n}=n.$ ) Let's do the (simple) algebra. Order the indexes $i$ so that $i=1,\ldots,n$ designates elements of the first group and $i=n+1,\ldots,n+m$ designates elements of the second group. Break the overall combination of squares by group and re-express the two pieces in terms of the variances and means of the subsets of the data: $$\eqalign{
\Omega_{1;n+m}(\sigma^2_{1:m+n} + \mu_{1:m+n}^2)&= \sum_{i=1}^{1:n+m} \omega_{i}x_i^2 \\
&= \sum_{i=1}^n \omega_{i} x_i^2 + \sum_{i=n+1}^{n+m} \omega_{i} x_i^2 \\
&= \Omega_{1;n}(\sigma^2_{1:n} + \mu_{1:n}^2) + \Omega_{n+1;n+m}(\sigma^2_{1+n:m+n} + \mu_{1+n:m+n}^2).
}$$ Algebraically solving this for $\sigma^2_{m+n}$ in terms of the other (known) quantities yields $$\sigma^2_{1:m+n} = \frac{\Omega_{1;n}(\sigma^2_{1:n} + \mu_{1:n}^2) + \Omega_{n+1;n+m}(\sigma^2_{1+n:m+n} + \mu_{1+n:m+n}^2)}{\Omega_{1;n+m}} - \mu^2_{1:m+n}.$$ Of course, using the same approach, $\mu_{1:m+n} = (\Omega_{1;n}\mu_{1:n} + \Omega_{n+1;n+m}\mu_{1+n:m+n})/\Omega_{1;n+m}$ can be expressed in terms of the group means, too. Edit 1 An anonymous contributor points out that when the sample means are equal (so that $\mu_{1:n}=\mu_{1+n:m+n}=\mu_{1:m+n}$ ), the solution for $\sigma^2_{m+n}$ is a weighted mean of the group sample variances. Edit 2 I have generalized the formulas to weighted statistics. The motivation for this is a recent federal court case in the US involving a dispute over how to pool weighted variances: a government agency contends the proper method is to weight the two group variances equally. In working on this case I found it difficult to find authoritative references on combining weighted statistics: most textbooks do not deal with this or they assume the generalization is obvious (which it is, but not necessarily to government employees or lawyers!). BTW, I used entirely different notation in my work on that case. If in the editing process any error has crept into the formulas in this post I apologize in advance and will fix them--but that would not reflect any error in my testimony, which was very carefully checked. | {
"source": [
"https://stats.stackexchange.com/questions/43161",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26105/"
]
} |
43,339 | This xkcd comic (Frequentists vs. Bayesians) makes fun of a frequentist statistician who derives an obviously wrong result. However it seems to me that his reasoning is actually correct in the sense that it follows the standard frequentist methodology. So my question is "does he correctly apply the frequentist methodology?" If no: what would be a correct frequentist inference in this scenario? How to integrate "prior knowledge" about the sun stability in the frequentist methodology? If yes: wtf? ;-) | The main issue is that the first experiment (Sun gone nova) is not repeatable, which makes it highly unsuitable for frequentist methodology, which interprets probability as an estimate of how frequent an event is, given that we can repeat the experiment many times. In contrast, Bayesian probability is interpreted as our degree of belief given all available prior knowledge, making it suitable for common-sense reasoning about one-time events. The dice-throw experiment is repeatable, but I find it very unlikely that any frequentist would intentionally ignore the influence of the first experiment and be so confident in the significance of the obtained results. Although it seems that the author mocks frequentist reliance on repeatable experiments and their distrust of priors, given the unsuitability of the experimental setup for frequentist methodology I would say that the real theme of this comic is not frequentist methodology but blind following of an unsuitable methodology in general. Whether it's funny or not is up to you (for me it is), but I think it more misleads than clarifies the differences between the two approaches. | {
"source": [
"https://stats.stackexchange.com/questions/43339",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12968/"
]
} |
43,345 | What is the correct graphical representation for these data of weekly spending (in dollars) on soft drinks for 20 people: 12,13,17,21,24,24,26,27,27,30,32,35,37,38,41,43,44,46,53,58 I want to separate the data into 5 bins: 10-20(f=3), 20-30(f=6), 30-40(f=5), 40-50(f=4), 50-60(f=2) and I title axis: "Weekly spending (dollar amount)". Which one would be more appropriate for y axis? (For which I put f=3,6,5,4,2 .) Frequency per \$1 spent Frequency per \$10 spent | The main issue is that the first experiment (Sun gone nova) is not repeatable, which makes it highly unsuitable for frequentist methodology that interprets probability as estimate of how frequent an event is giving that we can repeat the experiment many times. In contrast, bayesian probability is interpreted as our degree of belief giving all available prior knowledge, making it suitable for common sense reasoning about one-time events. The dice throw experiment is repeatable, but I find it very unlikely that any frequentist would intentionally ignore the influence of the first experiment and be so confident in significance of the obtained results. Although it seems that author mocks frequentist reliance on repeatable experiments and their distrust of priors, giving the unsuitability of the experimental setup to the frequentist methodology I would say that real theme of this comic is not frequentist methodology but blind following of unsuitable methodology in general. Whether it's funny or not is up to you (for me it is) but I think it more misleads than clarifies the differences between the two approaches. | {
"source": [
"https://stats.stackexchange.com/questions/43345",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16721/"
]
} |
43,482 | I'm working on an algorithm that relies on the fact that observations $Y$s are normally distributed, and I would like to test the robustness of the algorithm to this assumption empirically. To do this, I was looking for a sequence of transformations $T_1(), \dots, T_n()$ that would progressively disrupt the normality of $Y$. For example if the $Y$s are normal they have skewness $= 0$ and kurtosis $= 3$, and it would be nice to find a sequence of transformation that progressively increase both. My idea was to simulate some normally approximately distributed data $Y$ and test the algorithm on that. Than test algorithm on each transformed dataset $T_1(Y), \dots, T_n(y)$, to see how much the output is changing. Notice that I don't control the distribution of the simulated $Y$s, so I cannot simulate them using a distribution that generalizes the Normal (such as the Skewed Generalized Error Distribution). | This can be done using the sinh-arcsinh transformation from Jones, M. C. and Pewsey A. (2009). Sinh-arcsinh distributions . Biometrika 96: 761–780. The transformation is defined as $$H(x;\epsilon,\delta)=\sinh[\delta\sinh^{-1}(x)-\epsilon], \tag{$\star$}$$ where $\epsilon \in{\mathbb R}$ and $\delta \in {\mathbb R}_+$. When this transformation is applied to the normal CDF $S(x;\epsilon,\delta)=\Phi[H(x;\epsilon,\delta)]$, it produces a unimodal distribution whose parameters $(\epsilon,\delta)$ control skewness and kurtosis, respectively (Jones and Pewsey, 2009), in the sense of van Zwet (1969) . In addition, if $\epsilon=0$ and $\delta=1$, we obtain the original normal distribution. See the following R code. fs = function(x,epsilon,delta) dnorm(sinh(delta*asinh(x)-epsilon))*delta*cosh(delta*asinh(x)-epsilon)/sqrt(1+x^2)
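# Illustrative addition, not from the original answer: since S(x;epsilon,delta) = Phi(H(x;epsilon,delta)),
# a draw from this distribution is just a standard normal draw pushed through the inverse transform H^{-1};
# the helper name rsas is made up for this sketch.
rsas = function(n,epsilon,delta) sinh((asinh(rnorm(n))+epsilon)/delta)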
vec = seq(-15,15,0.001)
plot(vec,fs(vec,0,1),type="l")
points(vec,fs(vec,1,1),type="l",col="red")
points(vec,fs(vec,2,1),type="l",col="blue")
points(vec,fs(vec,-1,1),type="l",col="red")
points(vec,fs(vec,-2,1),type="l",col="blue")
vec = seq(-5,5,0.001)
plot(vec,fs(vec,0,0.5),type="l",ylim=c(0,1))
points(vec,fs(vec,0,0.75),type="l",col="red")
points(vec,fs(vec,0,1),type="l",col="blue")
points(vec,fs(vec,0,1.25),type="l",col="red")
points(vec,fs(vec,0,1.5),type="l",col="blue") Therefore, by choosing an appropriate sequence of parameters $(\epsilon_n,\delta_n)$, you can generate a sequence of distributions/transformations with different levels of skewness and kurtosis and make them look as similar or as different to the normal distribution as you want. The following plot shows the outcome produced by the R code. For (i) $\epsilon=(-2,-1,0,1,2)$ and $\delta=1$, and (ii) $\epsilon=0$ and $\delta=(0.5,0.75,1,1.25,1.5)$. Simulation of this distribution is straightforward given that you just have to transform a normal sample using the inverse of $(\star)$. $$H^{-1}(x;\epsilon,\delta)=\sinh[\delta^{-1}(\sinh^{-1}(x)+\epsilon)]$$ | {
"source": [
"https://stats.stackexchange.com/questions/43482",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26105/"
]
} |
43,492 | There are two methods which produce different ranking of a group of hypotheses.
I want to compare the two methods showing that one method generally produces
lower ranking than the other method for a specific subset of the considered group of
hypotheses. I was thinking of comparing the quantiles of the two rankings on this subset,
but not sure whether that was the right thing to do. Can anyone familiar with this give
some hint? Thanks. Hanna | This can be done using the sinh-arcsinh transformation from Jones, M. C. and Pewsey A. (2009). Sinh-arcsinh distributions . Biometrika 96: 761–780. The transformation is defined as $$H(x;\epsilon,\delta)=\sinh[\delta\sinh^{-1}(x)-\epsilon], \tag{$\star$}$$ where $\epsilon \in{\mathbb R}$ and $\delta \in {\mathbb R}_+$. When this transformation is applied to the normal CDF $S(x;\epsilon,\delta)=\Phi[H(x;\epsilon,\delta)]$, it produces a unimodal distribution whose parameters $(\epsilon,\delta)$ control skewness and kurtosis, respectively (Jones and Pewsey, 2009), in the sense of van Zwet (1969) . In addition, if $\epsilon=0$ and $\delta=1$, we obtain the original normal distribution. See the following R code. fs = function(x,epsilon,delta) dnorm(sinh(delta*asinh(x)-epsilon))*delta*cosh(delta*asinh(x)-epsilon)/sqrt(1+x^2)
vec = seq(-15,15,0.001)
plot(vec,fs(vec,0,1),type="l")
points(vec,fs(vec,1,1),type="l",col="red")
points(vec,fs(vec,2,1),type="l",col="blue")
points(vec,fs(vec,-1,1),type="l",col="red")
points(vec,fs(vec,-2,1),type="l",col="blue")
vec = seq(-5,5,0.001)
plot(vec,fs(vec,0,0.5),type="l",ylim=c(0,1))
points(vec,fs(vec,0,0.75),type="l",col="red")
points(vec,fs(vec,0,1),type="l",col="blue")
points(vec,fs(vec,0,1.25),type="l",col="red")
points(vec,fs(vec,0,1.5),type="l",col="blue") Therefore, by choosing an appropriate sequence of parameters $(\epsilon_n,\delta_n)$, you can generate a sequence of distributions/transformations with different levels of skewness and kurtosis and make them look as similar or as different to the normal distribution as you want. The following plot shows the outcome produced by the R code. For (i) $\epsilon=(-2,-1,0,1,2)$ and $\delta=1$, and (ii) $\epsilon=0$ and $\delta=(0.5,0.75,1,1.25,1.5)$. Simulation of this distribution is straightforward given that you just have to transform a normal sample using the inverse of $(\star)$. $$H^{-1}(x;\epsilon,\delta)=\sinh[\delta^{-1}(\sinh^{-1}(x)+\epsilon)]$$ | {
"source": [
"https://stats.stackexchange.com/questions/43492",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13154/"
]
} |
43,497 | I am performing K means clustering on a gene expression dataset.
I am aware of the fact that the Pearson correlation metric allows to group trends or patterns irrespective of their overall level of expression. I was wondering if the same concept stands for Covariance metric (I believe that the only difference between the two metrics is the fact that covariance returns unbounded values, while Pearson maps value in interval [-1,1]) | This can be done using the sinh-arcsinh transformation from Jones, M. C. and Pewsey A. (2009). Sinh-arcsinh distributions . Biometrika 96: 761–780. The transformation is defined as $$H(x;\epsilon,\delta)=\sinh[\delta\sinh^{-1}(x)-\epsilon], \tag{$\star$}$$ where $\epsilon \in{\mathbb R}$ and $\delta \in {\mathbb R}_+$. When this transformation is applied to the normal CDF $S(x;\epsilon,\delta)=\Phi[H(x;\epsilon,\delta)]$, it produces a unimodal distribution whose parameters $(\epsilon,\delta)$ control skewness and kurtosis, respectively (Jones and Pewsey, 2009), in the sense of van Zwet (1969) . In addition, if $\epsilon=0$ and $\delta=1$, we obtain the original normal distribution. See the following R code. fs = function(x,epsilon,delta) dnorm(sinh(delta*asinh(x)-epsilon))*delta*cosh(delta*asinh(x)-epsilon)/sqrt(1+x^2)
vec = seq(-15,15,0.001)
plot(vec,fs(vec,0,1),type="l")
points(vec,fs(vec,1,1),type="l",col="red")
points(vec,fs(vec,2,1),type="l",col="blue")
points(vec,fs(vec,-1,1),type="l",col="red")
points(vec,fs(vec,-2,1),type="l",col="blue")
vec = seq(-5,5,0.001)
plot(vec,fs(vec,0,0.5),type="l",ylim=c(0,1))
points(vec,fs(vec,0,0.75),type="l",col="red")
points(vec,fs(vec,0,1),type="l",col="blue")
points(vec,fs(vec,0,1.25),type="l",col="red")
points(vec,fs(vec,0,1.5),type="l",col="blue") Therefore, by choosing an appropriate sequence of parameters $(\epsilon_n,\delta_n)$, you can generate a sequence of distributions/transformations with different levels of skewness and kurtosis and make them look as similar or as different to the normal distribution as you want. The following plot shows the outcome produced by the R code. For (i) $\epsilon=(-2,-1,0,1,2)$ and $\delta=1$, and (ii) $\epsilon=0$ and $\delta=(0.5,0.75,1,1.25,1.5)$. Simulation of this distribution is straightforward given that you just have to transform a normal sample using the inverse of $(\star)$. $$H^{-1}(x;\epsilon,\delta)=\sinh[\delta^{-1}(\sinh^{-1}(x)+\epsilon)]$$ | {
"source": [
"https://stats.stackexchange.com/questions/43497",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16808/"
]
} |
43,501 | I'm using exponential smoothing (Brown's method) for forecasting. The forecast can be calculated for one or more steps (time intervals). Is there any way to calculate confidence intervals for such prognosis (ex-ante)? | This can be done using the sinh-arcsinh transformation from Jones, M. C. and Pewsey A. (2009). Sinh-arcsinh distributions . Biometrika 96: 761–780. The transformation is defined as $$H(x;\epsilon,\delta)=\sinh[\delta\sinh^{-1}(x)-\epsilon], \tag{$\star$}$$ where $\epsilon \in{\mathbb R}$ and $\delta \in {\mathbb R}_+$. When this transformation is applied to the normal CDF $S(x;\epsilon,\delta)=\Phi[H(x;\epsilon,\delta)]$, it produces a unimodal distribution whose parameters $(\epsilon,\delta)$ control skewness and kurtosis, respectively (Jones and Pewsey, 2009), in the sense of van Zwet (1969) . In addition, if $\epsilon=0$ and $\delta=1$, we obtain the original normal distribution. See the following R code. fs = function(x,epsilon,delta) dnorm(sinh(delta*asinh(x)-epsilon))*delta*cosh(delta*asinh(x)-epsilon)/sqrt(1+x^2)
vec = seq(-15,15,0.001)
plot(vec,fs(vec,0,1),type="l")
points(vec,fs(vec,1,1),type="l",col="red")
points(vec,fs(vec,2,1),type="l",col="blue")
points(vec,fs(vec,-1,1),type="l",col="red")
points(vec,fs(vec,-2,1),type="l",col="blue")
vec = seq(-5,5,0.001)
plot(vec,fs(vec,0,0.5),type="l",ylim=c(0,1))
points(vec,fs(vec,0,0.75),type="l",col="red")
points(vec,fs(vec,0,1),type="l",col="blue")
points(vec,fs(vec,0,1.25),type="l",col="red")
points(vec,fs(vec,0,1.5),type="l",col="blue") Therefore, by choosing an appropriate sequence of parameters $(\epsilon_n,\delta_n)$, you can generate a sequence of distributions/transformations with different levels of skewness and kurtosis and make them look as similar or as different to the normal distribution as you want. The following plot shows the outcome produced by the R code. For (i) $\epsilon=(-2,-1,0,1,2)$ and $\delta=1$, and (ii) $\epsilon=0$ and $\delta=(0.5,0.75,1,1.25,1.5)$. Simulation of this distribution is straightforward given that you just have to transform a normal sample using the inverse of $(\star)$. $$H^{-1}(x;\epsilon,\delta)=\sinh[\delta^{-1}(\sinh^{-1}(x)+\epsilon)]$$ | {
"source": [
"https://stats.stackexchange.com/questions/43501",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16814/"
]
} |
43,930 | I'm trying to understand the philosophy behind using a Generalized Linear Model (GLM) vs a Linear Model (LM). I've created an example data set below where: $$\log(y) = x + \varepsilon $$ The example does not have the error $\varepsilon$ as a function of the magnitude of $y$, so I would assume that a linear model of the log-transformed y would be the best. In the example below, this is indeed the case (I think) - since the AIC of the LM on the log-transformed data is lowest. The AIC of the Gamma distribution GLM with a log-link function has a lower sum of squares (SS), but the additional degrees of freedom result in a slightly higher AIC. I was surprised that the Gaussian distribution AIC is so much higher (even though the SS is the lowest of the models). I am hoping to get some advice on when one should approach GLM models - i.e. is there something I should look for in my LM model fit residuals to tell me that another distribution is more appropriate? Also, how should one proceed in selecting an appropriate distribution family. Many thanks in advance for your help. [EDIT]: I have now adjusted the summary statistics so that the SS of the log-transformed linear model is comparable to the GLM models with the log-link function. A graph of the statistics is now shown. Example set.seed(1111)
n <- 1000
y <- rnorm(n, mean=0, sd=1)
y <- exp(y)
hist(y, n=20)
hist(log(y), n=20)
x <- log(y) - rnorm(n, mean=0, sd=1)
hist(x, n=20)
df <- data.frame(y=y, x=x)
df2 <- data.frame(x=seq(from=min(df$x), to=max(df$x),,100))
#models
mod.name <- "LM"
assign(mod.name, lm(y ~ x, df))
summary(get(mod.name))
plot(y ~ x, df)
lines(predict(get(mod.name), newdata=df2) ~ df2$x, col=2)
mod.name <- "LOG.LM"
assign(mod.name, lm(log(y) ~ x, df))
summary(get(mod.name))
plot(y ~ x, df)
lines(exp(predict(get(mod.name), newdata=df2)) ~ df2$x, col=2)
mod.name <- "LOG.GAUSS.GLM"
assign(mod.name, glm(y ~ x, df, family=gaussian(link="log")))
summary(get(mod.name))
plot(y ~ x, df)
lines(predict(get(mod.name), newdata=df2, type="response") ~ df2$x, col=2)
mod.name <- "LOG.GAMMA.GLM"
assign(mod.name, glm(y ~ x, df, family=Gamma(link="log")))
summary(get(mod.name))
plot(y ~ x, df)
lines(predict(get(mod.name), newdata=df2, type="response") ~ df2$x, col=2)
#Results
model.names <- list("LM", "LOG.LM", "LOG.GAUSS.GLM", "LOG.GAMMA.GLM")
plot(y ~ x, df, log="y", pch=".", cex=3, col=8)
lines(predict(LM, newdata=df2) ~ df2$x, col=1, lwd=2)
lines(exp(predict(LOG.LM, newdata=df2)) ~ df2$x, col=2, lwd=2)
lines(predict(LOG.GAUSS.GLM, newdata=df2, type="response") ~ df2$x, col=3, lwd=2)
lines(predict(LOG.GAMMA.GLM, newdata=df2, type="response") ~ df2$x, col=4, lwd=2)
legend("topleft", legend=model.names, col=1:4, lwd=2, bty="n")
res.AIC <- as.matrix(
data.frame(
LM=AIC(LM),
LOG.LM=AIC(LOG.LM),
LOG.GAUSS.GLM=AIC(LOG.GAUSS.GLM),
LOG.GAMMA.GLM=AIC(LOG.GAMMA.GLM)
)
)
res.SS <- as.matrix(
data.frame(
LM=sum((predict(LM)-y)^2),
LOG.LM=sum((exp(predict(LOG.LM))-y)^2),
LOG.GAUSS.GLM=sum((predict(LOG.GAUSS.GLM, type="response")-y)^2),
LOG.GAMMA.GLM=sum((predict(LOG.GAMMA.GLM, type="response")-y)^2)
)
)
res.RMS <- as.matrix(
data.frame(
LM=sqrt(mean((predict(LM)-y)^2)),
LOG.LM=sqrt(mean((exp(predict(LOG.LM))-y)^2)),
LOG.GAUSS.GLM=sqrt(mean((predict(LOG.GAUSS.GLM, type="response")-y)^2)),
LOG.GAMMA.GLM=sqrt(mean((predict(LOG.GAMMA.GLM, type="response")-y)^2))
)
)
png("stats.png", height=7, width=10, units="in", res=300)
#x11(height=7, width=10)
par(mar=c(10,5,2,1), mfcol=c(1,3), cex=1, ps=12)
barplot(res.AIC, main="AIC", las=2)
barplot(res.SS, main="SS", las=2)
barplot(res.RMS, main="RMS", las=2)
dev.off() | Good effort for thinking through this issue. Here's an incomplete answer, but some starters for the next steps. First, the AIC scores - based on likelihoods - are on different scales because of the different distributions and link functions, so aren't comparable. Your sum of squares and mean sum of squares have been calculated on the original scale and hence are on the same scale, so can be compared, although whether this is a good criterion for model selection is another question (it might be, or might not - search the cross validated archives on model selection for some good discussion of this). For your more general question, a good way of focusing on the problem is to consider the difference between LOG.LM (your linear model with the response as log(y)); and LOG.GAUSS.GLM, the glm with the response as y and a log link function. In the first case the model you are fitting is: $\log(y)=X\beta+\epsilon$; and in the glm() case it is: $ \log(y+\epsilon)=X\beta$ and in both cases $\epsilon$ is distributed $ \mathcal{N}(0,\sigma^2)$. | {
"source": [
"https://stats.stackexchange.com/questions/43930",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10675/"
]
} |
44,063 | Can someone please explain sufficient statistics in very basic terms? I come from an engineering background, and I have gone through a lot of stuff but failed to find an intuitive explanation. | A sufficient statistic summarizes all the information contained in a sample so that you would make the same parameter estimate whether we gave you the sample or just the statistic itself. It's reduction of the data without information loss. Here's one example. Suppose $X$ has a symmetric distribution about zero. Instead of giving you a sample, I hand you a sample of absolute values instead (that's the statistic). You don't get to see the sign. But you know that the distribution is symmetric, so for a given value $x$, $-x$ and $x$ are equally likely (the conditional probability is $0.5$). So you can flip a fair coin. If it comes up heads, make that $x$ negative. If tails, make it positive. This gives you a sample from $X'$, which has the same distribution as the original data $X$. You basically were able to reconstruct the data from the statistic. That's what makes it sufficient. | {
"source": [
"https://stats.stackexchange.com/questions/44063",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17042/"
]
} |
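A tiny R sketch of the sign-flipping reconstruction described in the answer above (illustrative only; it assumes a distribution symmetric about zero):
x <- rnorm(1000) # original sample, symmetric about 0
s <- abs(x) # the summary you are handed
x_new <- s * sample(c(-1, 1), length(s), replace = TRUE) # fair-coin signs
# x_new is not the original sample, but it has the same distribution as x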
44,204 | Recently I came across Tableau and tried to visualize the data from database and csv file. The user iterface enables the user to visualize time and spatial data and create plots in an instant. Such tool is really useful as it enables to observe the data graphically without writing the code. As there are many data sources from which I have to retrieve and visualize the data it would be very useful to have a tool which enabled to generate charts by simply dragging columns on axes and additionally modify the visualization with dragging the column names as well. Does anyone know any free or open source software of that kind? | I've never tried it, but there's an open source desktop / browser-based visualisation suite called WEAVE (short for Web-based Analysis and Visualization Environment). Like Tableau, it's intended to let you explore data through an interactive click-based interface. Unlike Tableau, it's open source: you can download the source code and install your own version on your own machine which can be as private or as public as you want it to be. Don't expect anything nearly as slick and user-friendly as Tableau, but it looks like an interesting, powerful project for someone prepared to put the time in to learning to use it. Source code on GitHub WEAVE project home page including live demos Short write-up on Flowing Data Some screenshots from their homepage: Or, you can look into rolling your own . There are some really good open source javacript tools for supporting programming data visualisation in a browser. If you don't mind coding some Javascript and some kind of server-side layer to serve up the data, give these a try: Miso Dataset for getting, processing, managing and cleaning the data on the client side in Javascript (includes a CSV parser) D3 for interactive visualisations in SVG (works in every browser except IE8 and earlier and old (v1,v2) Android phones). gRaphael for interactive cross-browser standard charts Raphael if you need SVG output to work in Internet Explorer 6, 7, and 8. D34Raphael combines D3's visualisation tools with Raphael's IE compatibility and abstraction If you're good with javascript, Raphael is a good way to build something custom-made. Here's a different approach to pumping D3 output through Raphael to be cross-browser Tip: If you decide to work with Raphael and the latest version is still 2.1.0, I'd advise applying this bug fix to the code ). If you're interested in the web programming option, here's a slightly more detailed write-up I wrote on Raphael and D3 for stackoverflow . There are also some free (not open source) online datavis suites worth mentioning (probably not suitable for direct DB connection but worth a look): Raw by Density Design - blog introduction - (hit "Choose a data sample" to try it out) - mostly copy and paste based, not sure if it has an API that can connect to a database but good for trying things out quickly. Tableau Public - a free-to-use online version of Tableau. The catch is, the data you enter into it and any visualisations you create must be publicly available. And something completely different: if you have a quality server lying around and you happen to want to make awesome google-maps style tile-based 'slippy' maps using open source tech (probably not what you're looking for - but it's possible!), check out MapBox TileMill . Have a look through the gallery of examples on their home page - some of them are truly stunning. 
See also related project Modest Maps , an open source Javascript library for interacting with maps developed by Stamen Design (a really highly rated agency specialising in interactive maps). It's considered to be an improvement on the more established OpenLayers. All open source. WEAVE is the best GUI-based open-source tool I know of for personal visual analysis . The other tools listed are top of the range tools for online publishing of visualisations (for example, D3 is used by and developed by the award-winning NY Times graphics team ), and are more often used for visualisation in the context of public-facing communications than exploratory analysis, but they can be used for analysis too. | {
"source": [
"https://stats.stackexchange.com/questions/44204",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/315/"
]
} |
44,261 | Given a dataset with instances $x_i$ and $N$ classes, where every instance $x_i$ belongs to exactly one class $y_i$, I train a multiclass classifier. After the training and testing I basically have a table with the true class $y_i$ and the predicted class $a_i$ for every instance $x_i$ in the test set. So for every instance I have either a match ($y_i= a_i$) or a miss ($y_i\neq a_i$). How can I evaluate the quality of the match? The issue is that some classes can have many members, i.e. many instances belong to them. Obviously, if 50% of all data points belong to one class and my final classifier is 50% correct overall, I have gained nothing. I could have just as well made a trivial classifier which outputs that biggest class no matter what the input is. Is there a standard method to estimate the quality of a classifier based on the known test-set matches and misses for each class? Maybe it's even important to distinguish matching rates for each particular class? The simplest approach I can think of is to exclude the correct matches of the biggest class. What else? | Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your data base and its class.
$$err(g) = \frac{1}{n} \sum_{i \leq n} \mathbb{1}_{g(x_i) \neq y_i}$$
As you said, when the classes are unbalanced, the baseline is not 50% but the proportion of the bigger class. You could add a weight on each class to balance the error. Let $W_y$ be the weight of the class $y$. Set the weights such that $\frac{1}{W_y} \sim \frac{1}{n}\sum_{i \leq n} \mathbb{1}_{y_i = y}$ and define the weighted empirical error $$err_W(g) = \frac{1}{n} \sum_{i \leq n} W_{y_i} \mathbb{1}_{g(x_i) \neq y_i}$$ As Steffen said, the confusion matrix could be a good way to estimate the quality of a classifier. In the binary case, you can derive some measure from this matrix such as sensitivity and specificity, estimating the capability of a classifier to detect a particular class.
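As an illustration, here is a minimal R sketch of these quantities (the label vectors are made up for the example, and the weights follow the definition above with $W_y = n/n_y$):
truth <- factor(c("a","a","a","a","b","b","c","c","c","c")) # true classes
pred <- factor(c("a","a","b","a","b","c","c","c","a","c")) # predicted classes
n <- length(truth)
mean(pred != truth) # plain empirical error rate
W <- n / table(truth) # class weights; 1/W_y is the class frequency
sum(W[as.character(truth)] * (pred != truth)) / n # weighted empirical error
cm <- table(truth, pred) # confusion matrix
diag(cm) / rowSums(cm) # per-class recall (sensitivity for each class)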
The errors of a classifier might be concentrated in a particular way. For example, a classifier could be overconfident when predicting a 1, but never wrong when predicting a 0. Many classifiers can be parametrized to control this trade-off (false positives vs. false negatives), and you are then interested in the quality of the whole family of classifiers, not just one.
From this you can plot the ROC curve, and measuring the area under the ROC curve gives you the quality of those classifiers. ROC curves can be extended to your multiclass problem. I suggest you read the answer in this thread . | {
"source": [
"https://stats.stackexchange.com/questions/44261",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8694/"
]
} |
44,382 | Can anyone help explain some of the mathematics behind classification in CART? I'm looking to understand how two main stages happen. For instance I trained a CART classifier on a dataset and used a testing dataset to mark its predictive performance but: How is the initial root of the tree chosen? Why and how is each branch formed? My dataset being 400 thousand records with 15 columns and 23 classes achieves a 100% accuracy from a confusion matrix, I use 10-fold crossvalidation on the dataset. I would be really greatful if anyone could help explain the stages of CART classification? | CART and decision trees like algorithms work through recursive partitioning of the training set in order to obtain subsets that are as pure as possible to a given target class. Each node of the tree is associated to a particular set of records $T$ that is splitted by a specific test on a feature. For example, a split on a continuous attribute $A$ can be induced by the test $ A \le x$. The set of records $T$ is then partitioned in two subsets that leads to the left branch of the tree and the right one. $T_l = \{ t \in T: t(A) \le x \}$ and $T_r = \{ t \in T: t(A) > x \}$ Similarly, a categorical feature $B$ can be used to induce splits according to its values. For example, if $B = \{b_1, \dots, b_k\}$ each branch $i$ can be induced by the test $B = b_i$. The divide step of the recursive algorithm to induce decision tree takes into account all possible splits for each feature and tries to find the best one according to a chosen quality measure: the splitting criterion. If your dataset is induced on the following scheme $$A_1, \dots, A_m, C$$ where $A_j$ are attributes and $C$ is the target class, all candidates splits are generated and evaluated by the splitting criterion. Splits on continuous attributes and categorical ones are generated as described above. The selection of the best split is usually carried out by impurity measures. The impurity of the parent node has to be decreased by the split . Let $(E_1, E_2, \dots, E_k)$ be a split induced on the set of records $E$, a splitting criterion that makes used of the impurity measure $I(\cdot)$ is: $$\Delta = I(E) - \sum_{i=1}^{k}\frac{|E_i|}{|E|}I(E_i)$$ Standard impurity measures are the Shannon entropy or the Gini index. More specifically, CART uses the Gini index that is defined for the set $E$ as following. Let $p_j$ be the fraction of records in $E$ of class $c_j$
$$p_j = \frac{|\{t \in E:t[C] = c_j\}|}{|E|} $$ then
$$ \mathit{Gini}(E) = 1 - \sum_{j=1}^{Q}p_j^2$$
where $Q$ is the number of classes. It leads to a 0 impurity when all records belong to the same class. As an example, let's say that we have a binary class set of records $T$ where the class distribution is $(1/2, 1/2)$ - the following is a good split for $T$ the probability distribution of records in $T_l$ is $(1,0)$ and the $T_r$'s one is $(0,1)$. Let's say that $T_l$ and $T_r$ are the same size, thus $|T_l|/|T| = |T_r|/|T| = 1/2$. We can see that $\Delta$ is high: $$\Delta = 1 - 1/2^2 - 1/2^2 - 0 - 0 = 1/2$$ The following split is worse than the first one and the splitting criterion $\Delta$ reflects this characteristic. $$\Delta = 1 - 1/2^2 - 1/2^2 - 1/2 \bigg( 1 - (3/4)^2 - (1/4)^2 \bigg) - 1/2 \bigg( 1 - (1/4)^2 - (3/4)^2 \bigg) = 1/2 - 1/2(3/8) - 1/2(3/8) = 1/8$$ The first split will be selected as best split and then the algorithm proceeds in a recursive fashion. It is easy to classify a new instance with a decision tree, in fact it is enough to follow the path from the root node to a leaf. A record is classified with the majority class of the leaf that it reaches. Say that we want to classify the square on this figure that is the graphical representation of a training set induced on the scheme $A,B,C$, where $C$ is the target class and $A$ and $B$ are two continuous features. A possible induced decision tree might be the following: It is clear that the record square will be classified by the decision tree as a circle given that the record falls on a leaf labeled with circles. In this toy example the accuracy on the training set is 100% because no record is mis-classified by the tree. On the graphical representation of the training set above we can see the boundaries (gray dashed lines) that the tree uses to classify new instances. There is plenty of literature on decision trees, I wanted just to write down a sketchy introduction. Another famous implementation is C4.5. | {
"source": [
"https://stats.stackexchange.com/questions/44382",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6875/"
]
} |
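To make the Gini calculation in the answer above concrete, here is a small R sketch (illustrative only; the helper names are made up). It reproduces the toy numbers: the pure split of a balanced binary node gives $\Delta = 1/2$, and the $(3/4, 1/4)$ split gives $\Delta = 1/8$.
gini <- function(p) 1 - sum(p^2) # impurity from a vector of class proportions
gain <- function(p_parent, children, weights) { # the splitting criterion Delta
  gini(p_parent) - sum(weights * sapply(children, gini))
}
gain(c(.5, .5), list(c(1, 0), c(0, 1)), c(.5, .5)) # 0.5
gain(c(.5, .5), list(c(.75, .25), c(.25, .75)), c(.5, .5)) # 0.125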
44,484 | I ran a regression with 4 variables, and all are very statistically significant, with T values $\approx 7,9,26$ and $31$ (I say $\approx$ because it seems irrelevant to include the decimals) which are very high and clearly significant. But then the $R^2$ is only .2284. Am I misinterpreting the t values here to mean something they're not? My first reaction upon seeing the t values was that the $R^2$ would be quite high, but maybe that is a high $R^2$ ? | The $t$ -values and $R^2$ are used to judge very different things. The $t$ -values are used to judge the accuracy of your estimate of the $\beta_i$ 's, but $R^2$ measures the amount of variation in your response variable explained by your covariates. Suppose you are estimating a regression model with $n$ observations, $$
Y_i = \beta_0 + \beta_1X_{1i} + ...+ \beta_kX_{ki}+\epsilon_i
$$ where $\epsilon_i\overset{i.i.d}{\sim}N(0,\sigma^2)$ , $i=1,...,n$ . Large $t$ -values (in absolute value) lead you to reject the null hypothesis that $\beta_i=0$ . This means you can be confident that you have correctly estimated the sign of the coefficient. Also, if $|t|$ >4 and you have $n>5$ , then 0 is not in a 99% confidence interval for the coefficient. The $t$ -value for a coefficient $\beta_i$ is the difference between the estimate $\hat{\beta_i}$ and 0 normalized by the standard error $se\{\hat{\beta_i}\}$ . $$
t=\frac{\hat{\beta_i}}{se\{\hat{\beta_i}\}}
$$ which is simply the estimate divided by a measure of its variability. If you have a large enough dataset, you will always have statistically significant (large) $t$ -values. This does not necessarily mean your covariates explain much of the variation in the response variable. As @Stat mentioned, $R^2$ measures the amount of variation in your response variable explained by your covariates. For more about $R^2$ , go to wikipedia . In your case, it appears you have a large enough data set to accurately estimate the $\beta_i$ 's, but your covariates do a poor job of explaining and/or predicting the response values. | {
"source": [
"https://stats.stackexchange.com/questions/44484",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16557/"
]
} |
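A quick R illustration of this point (simulated, made-up numbers): with a large sample, even a weak relationship yields a huge $t$-value while $R^2$ stays small.
set.seed(1)
n <- 10000
x <- rnorm(n)
y <- 0.1 * x + rnorm(n) # weak signal, lots of noise
summary(lm(y ~ x)) # the t-value on x is around 10, yet R-squared is only about 0.01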
44,494 | In LDA topic model algorithm, I saw this assumption. But I don't know why chose Dirichlet distribution? I don't know if we can use Uniform distribution over Multinomial as a pair? | The Dirichlet distribution is a conjugate prior for the multinomial distribution. This means that if the prior distribution of the multinomial parameters is Dirichlet then the posterior distribution is also a Dirichlet distribution (with parameters different from those of the prior). The benefit of this is that (a) the posterior distribution is easy to compute and (b) it in some sense is possible to quantify how much our beliefs have changed after collecting the data. It can certainly be discussed whether these are good reasons to choose a particular prior, as these criteria are unrelated to actual prior beliefs... Nevertheless, conjugate priors are popular, as they often are reasonably flexible and convenient to use for the reasons stated above. For the special case of the multinomial distribution, let $(p_1,\ldots,p_k)$ be the vector of multinomial parameters (i.e. the probabilities for the different categories). If $$(p_1,\ldots,p_k)\sim \mbox{Dirichlet}(\alpha_1,\ldots,\alpha_k)$$ prior to collecting the data, then, given observations $(x_1,\ldots,x_k)$ in the different categories,
$$(p_1,\ldots,p_k)\Big|(x_1,\ldots,x_k)\sim \mbox{Dirichlet}(\alpha_1+x_1,\ldots,\alpha_k+x_k).$$ The uniform distribution is actually a special case of the Dirichlet distribution, corresponding to the case $\alpha_1=\alpha_2=\cdots=\alpha_k=1$. So is the least-informative Jeffreys prior , for which $\alpha_1=\cdots=\alpha_k=1/2$. The fact that the Dirichlet class includes these natural "non-informative" priors is another reason for using it. | {
"source": [
"https://stats.stackexchange.com/questions/44494",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9950/"
]
} |
44,647 | Short version: I have a time series of climate data that I'm testing for stationarity. Based on previous research, I expect the model underlying (or "generating", so to speak) the data to have an intercept term and a positive linear time trend. To test these data for stationarity, should I use the Dickey-Fuller test that includes an intercept and time trend, i.e. equation #3 ? $\nabla y_t = \alpha_0+\alpha_1t+\delta y_{t-1}+u_t$ Or, should I use the DF test that only includes an intercept because the first difference of the equation I believe underlies the model has only an intercept? Long version: As stated above, I have a time series of climate data that I'm testing for stationarity. Based on previous research, I expect the model underlying the data to have an intercept term, a positive linear time trend, and some normally distributed error term. In other words, I expect the underlying model to look something like this: $y_t = a_0 + a_1t + \beta y_{t-1} + u_t $ where $u_t$ is normally distributed. Since I'm assuming the underlying model has both an intercept and a linear time trend, I tested for a unit root with equation #3 of the simple Dickey-Fuller test, as shown: $\nabla y_t = \alpha_0+\alpha_1t+\delta y_{t-1}+u_t$ This test returns a critical value that would lead me to reject the null hypothesis and conclude that the underlying model is non-stationary. However, I question if I'm applying this correctly, since even though the underlying model is assumed to have an intercept and a time trend, this does not imply that the first difference $\nabla y_t$ will as well. Quite the opposite, in fact, if my math is correct. Calculating the first difference based on the equation of the assumed underlying model gives: $\nabla y_t = y_t - y_{t-1} = [a_0 + a_1t + \beta y_{t-1} + u_t] - [a_0 + a_1(t-1) + \beta y_{t-2} + u_{t-1}]$ $\nabla y_t = [a_0 - a_0] + [a_1t - a_1(t-1)] + \beta[y_{t-1} - y_{t-2}] + [u_t - u_{t-1}]$ $\nabla y_t = a_1 + \beta \cdot \nabla y_{t-1} + u_t - u_{t-1}$ Therefore, the first difference $\nabla y_t$ appears to only have an intercept, not a time trend. I think my question is similar to this one , except I'm not sure how to apply that answer to my question. Sample data: Here is some of the sample temperature data that I'm working with. 64.19749
65.19011
64.03281
64.99111
65.43837
65.51817
65.22061
65.43191
65.0221
65.44038
64.41756
64.65764
64.7486
65.11544
64.12437
64.49148
64.89215
64.72688
64.97553
64.6361
64.29038
65.31076
64.2114
65.37864
65.49637
65.3289
65.38394
65.39384
65.0984
65.32695
65.28
64.31041
65.20193
65.78063
65.17604
66.16412
65.85091
65.46718
65.75551
65.39994
66.36175
65.37125
65.77763
65.48623
64.62135
65.77237
65.84289
65.80289
66.78865
65.56931
65.29913
64.85516
65.56866
64.75768
65.95956
65.64745
64.77283
65.64165
66.64309
65.84163
66.2946
66.10482
65.72736
65.56701
65.11096
66.0006
66.71783
65.35595
66.44798
65.74924
65.4501
65.97633
65.32825
65.7741
65.76783
65.88689
65.88939
65.16927
64.95984
66.02226
66.79225
66.75573
65.74074
66.14969
66.15687
65.81199
66.13094
66.13194
65.82172
66.14661
65.32756
66.3979
65.84383
65.55329
65.68398
66.42857
65.82402
66.01003
66.25157
65.82142
66.08791
65.78863
66.2764
66.00948
66.26236
65.40246
65.40166
65.37064
65.73147
65.32708
65.84894
65.82043
64.91447
65.81062
66.42228
66.0316
65.35361
66.46407
66.41045
65.81548
65.06059
66.25414
65.69747
65.15275
65.50985
66.66216
66.88095
65.81281
66.15546
66.40939
65.94115
65.98144
66.13243
66.89761
66.95423
65.63435
66.05837
66.71114 | You need to consider the drift and (parametric/linear) trend in the levels of the time series in order to specify the deterministic terms in the augmented Dickey-Fuller regression which is in terms of the first differences of the time series. The confusion arises exactly from deriving the first-differences equation in the way that you have done. (Augmented) Dickey-Fuller regression model Suppose that the levels of the series include a drift and trend term
$$
Y_t = \beta_{0,l} + \beta_{1,l} t + \beta_{2, l}Y_{t-1} + \varepsilon_{t}
$$
The null hypothesis of nonstationarity in this case would be $\mathfrak{H}_0{}:{}\beta_{2, l} = 1$. One equation for the first differences implied by this data-generating process [DGP] is the one that you have derived
$$
\Delta Y_t = \beta_{1,l} + \beta_{2, l}\Delta Y_{t-1} + \Delta \varepsilon_{t}
$$
However, this is not the (augmented) Dickey Fuller regression as used in the test. Instead, the correct version can be had by subtracting $Y_{t-1}$ from both sides of the first equation resulting in
$$
\begin{align}
\Delta Y_t &= \beta_{0,l} + \beta_{1,l} t + (\beta_{2, l}-1)Y_{t-1} + \varepsilon_{t} \\
&\equiv \beta_{0,d} + \beta_{1,d}t + \beta_{2,d}Y_{t-1} + \varepsilon_{t}
\end{align}
$$ This is the (augmented) Dickey-Fuller regression, and the equivalent version of the null hypothesis of nonstationarity is the test $\mathfrak{H}_0{}:{}\beta_{2, d}=0$ which is just a t-test using the OLS estimate of $\beta_{2, d}$ in the regression above. Note that the drift and trend come through to this specification unchanged. An additional point to note is that if you are not certain about the presence of the linear trend in the levels of the time series, then you can jointly test for the linear trend and unit root, that is, $\mathfrak{H}_0{}:{}[\beta_{2, d}, \beta_{1,l}]' = [0, 0]'$, which can be tested using an F-test with appropriate critical values. These tests and critical values are produced by the R function ur.df in the urca package. Let us consider some examples in detail. Examples 1. Using the US investment series The first example uses the US investment series which is discussed in Lutkepohl and Kratzig (2005, pg. 9) . The plot of the series and its first difference are given below. From the levels of the series, it appears that it has a non-zero mean, but does not appear to have a linear trend. So, we proceed with an augmented Dickey Fuller regression with an intercept, and also three lags of the dependent variable to account for serial correlation, that is:
$$
\Delta Y_t = \beta_{0,d} + \beta_{2,d}Y_{t-1} + \sum_{j=1}^3 \gamma_j \Delta Y_{t-j} + \varepsilon_{t}
$$
Note the key point that I have looked at the levels to specify the regression equation in differences. The R code to do this is given below: library(urca)
library(foreign)
library(zoo)
tsInv <- as.zoo(ts(as.data.frame(read.table(
"http://www.jmulti.de/download/datasets/US_investment.dat", skip=8, header=TRUE)),
frequency=4, start=1947+2/4))
png("USinvPlot.png", width=6,
height=7, units="in", res=100)
par(mfrow=c(2, 1))
plot(tsInv$USinvestment)
plot(diff(tsInv$USinvestment))
dev.off()
# ADF with intercept
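# (added note) type = 'drift' below includes an intercept but no trend in the test
# regression, matching what the level plot suggests; lags = 3 is one reasonable
# choice for soaking up serial correlation -- something like acf(diff(tsInv$USinvestment))
# or the selectlags = "AIC" argument of ur.df() could be used to double-check it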
adfIntercept <- ur.df(tsInv$USinvestment, lags = 3, type= 'drift')
summary(adfIntercept) The results indicate that the null hypothesis of nonstationarity can be rejected for this series using the t-test based on the estimated coefficient. The joint F-test of the intercept and the slope coefficient ($\mathfrak{H}{}:{}[\beta_{2, d}, \beta_{0,l}]' = [0, 0]'$) also rejects the null hypothesis that there is a unit root in the series. 2. Using German (log) consumption series The second example is using the German quarterly seasonally adjusted time series of (log) consumption. The plot of the series and its differences are given below. From the levels of the series, it is clear that the series has a trend, so we include the trend in the augmented Dickey-Fuller regression together with four lags of the first differences to account for the serial correlation, that is
$$
\Delta Y_t = \beta_{0,d} + \beta_{1,d}t + \beta_{2,d}Y_{t-1} + \sum_{j=1}^4 \gamma_j \Delta Y_{t-j} + \varepsilon_{t}
$$ The R code to do this is # using the (log) consumption series
tsConsump <- zoo(read.dta("http://www.stata-press.com/data/r12/lutkepohl2.dta"), frequency=1)
png("logConsPlot.png", width=6,
height=7, units="in", res=100)
par(mfrow=c(2, 1))
plot(tsConsump$ln_consump)
plot(diff(tsConsump$ln_consump))
dev.off()
# ADF with trend
adfTrend <- ur.df(tsConsump$ln_consump, lags = 4, type = 'trend')
summary(adfTrend) The results indicate that the null of nonstationarity cannot be rejected using the t-test based on the estimated coefficient. The joint F-test of the linear trend coefficient and the slope coefficient ($\mathfrak{H}{}:{}[\beta_{2, d}, \beta_{1,l}]' = [0, 0]'$) also indicates that the null of nonstationarity cannot be rejected. 3. Using given temperature data Now we can assess the properties of your data. The usual plots in levels and first differences are given below. These indicate that your data has an intercept and a trend, so we perform the ADF test (with no lagged first difference terms), using the following R code # using the given data
tsTemp <- read.table(textConnection("temp
64.19749
65.19011
64.03281
64.99111
65.43837
65.51817
65.22061
65.43191
65.0221
65.44038
64.41756
64.65764
64.7486
65.11544
64.12437
64.49148
64.89215
64.72688
64.97553
64.6361
64.29038
65.31076
64.2114
65.37864
65.49637
65.3289
65.38394
65.39384
65.0984
65.32695
65.28
64.31041
65.20193
65.78063
65.17604
66.16412
65.85091
65.46718
65.75551
65.39994
66.36175
65.37125
65.77763
65.48623
64.62135
65.77237
65.84289
65.80289
66.78865
65.56931
65.29913
64.85516
65.56866
64.75768
65.95956
65.64745
64.77283
65.64165
66.64309
65.84163
66.2946
66.10482
65.72736
65.56701
65.11096
66.0006
66.71783
65.35595
66.44798
65.74924
65.4501
65.97633
65.32825
65.7741
65.76783
65.88689
65.88939
65.16927
64.95984
66.02226
66.79225
66.75573
65.74074
66.14969
66.15687
65.81199
66.13094
66.13194
65.82172
66.14661
65.32756
66.3979
65.84383
65.55329
65.68398
66.42857
65.82402
66.01003
66.25157
65.82142
66.08791
65.78863
66.2764
66.00948
66.26236
65.40246
65.40166
65.37064
65.73147
65.32708
65.84894
65.82043
64.91447
65.81062
66.42228
66.0316
65.35361
66.46407
66.41045
65.81548
65.06059
66.25414
65.69747
65.15275
65.50985
66.66216
66.88095
65.81281
66.15546
66.40939
65.94115
65.98144
66.13243
66.89761
66.95423
65.63435
66.05837
66.71114"), header=T)
tsTemp <- as.zoo(ts(tsTemp, frequency=1))
png("tempPlot.png", width=6,
height=7, units="in", res=100)
par(mfrow=c(2, 1))
plot(tsTemp$temp)
plot(diff(tsTemp$temp))
dev.off()
# ADF with trend
adfTrend <- ur.df(tsTemp$temp, type = 'trend')
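# (added note) ur.df() defaults to lags = 1 in current versions of urca, so this call
# keeps one lagged difference; to literally include no lagged first differences,
# as described in the text, one would pass lags = 0 explicitly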
summary(adfTrend) The results for both the t-test and the F-test indicate that the null of nonstationarity can be rejected for the temperature series.
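As a short addition (not part of the original answer): if you want to see exactly which numbers are being compared, the object returned by ur.df stores both the test statistics and the Dickey-Fuller critical values. The slot and statistic names below (teststat, cval, and tau3/phi2/phi3 for type = 'trend') are how current versions of urca label them, so treat this as a sketch rather than a guarantee.
# tau3 is the t-statistic on the lagged level; phi2 and phi3 are the joint F-statistics
adfTrend@teststat
# the matching 1%/5%/10% critical values
adfTrend@cval
# reject the unit-root null when tau3 is more negative than its critical value,
# and reject the joint null (unit root and no trend) when phi3 exceeds its critical value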
I hope that clarifies matters somewhat. | {
"source": [
"https://stats.stackexchange.com/questions/44647",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10926/"
]
} |
44,838 | For my own understanding, I am interested in manually replicating the calculation of the standard errors of estimated coefficients as they, for example, come with the output of the lm() function in R, but I haven't been able to pin it down. What is the formula / implementation used? | The linear model is written as $$
\left|
\begin{array}{l}
\mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \\
\mathbf{\epsilon} \sim N(0, \sigma^2 \mathbf{I}),
\end{array}
\right.$$ where $\mathbf{y}$ denotes the vector of responses, $\mathbf{\beta}$ is the vector of fixed effects parameters, $\mathbf{X}$ is the corresponding design matrix whose columns are the values of the explanatory variables, and $\mathbf{\epsilon}$ is the vector of random errors. It is well known that an estimate of $\mathbf{\beta}$ is given by (refer, e.g., to the wikipedia article ) $$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$$ Hence $$
\textrm{Var}(\hat{\mathbf{\beta}}) =
(\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime}
\;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1}
= \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1} (\mathbf{X}^{\prime}
\mathbf{X}) (\mathbf{X}^{\prime} \mathbf{X})^{-1}
= \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1},
$$ [reminder: $\textrm{Var}(AX)=A\times \textrm{Var}(X) \times A′$ , for some random vector $X$ and some non-random matrix $A$ ] so that $$
\widehat{\textrm{Var}}(\hat{\mathbf{\beta}}) = \hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1},
$$ where $\hat{\sigma}^2$ can be obtained by the Mean Square Error (MSE) in the ANOVA table. Example with a simple linear regression in R #------generate one data set with epsilon ~ N(0, 0.25)------
seed <- 1152 #seed
n <- 100 #nb of observations
a <- 5 #intercept
b <- 2.7 #slope
set.seed(seed)
epsilon <- rnorm(n, mean=0, sd=sqrt(0.25))
x <- sample(x=c(0, 1), size=n, replace=TRUE)
y <- a + b * x + epsilon
#-----------------------------------------------------------
#------using lm------
mod <- lm(y ~ x)
#--------------------
#------using the explicit formulas------
X <- cbind(1, x)
betaHat <- solve(t(X) %*% X) %*% t(X) %*% y
var_betaHat <- anova(mod)[[3]][2] * solve(t(X) %*% X)
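# (added note) anova(mod)[[3]][2] picks out the residual mean square from the ANOVA
# table, i.e. the estimate sigmaHat^2 of the error variance used in the formula above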
#---------------------------------------
#------comparison------
#estimate
> mod$coef
(Intercept) x
5.020261 2.755577
> c(betaHat[1], betaHat[2])
[1] 5.020261 2.755577
#standard error
> summary(mod)$coefficients[, 2]
(Intercept) x
0.06596021 0.09725302
> sqrt(diag(var_betaHat))
x
0.06596021 0.09725302
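As an extra cross-check (not in the original comparison): for an lm fit, vcov() returns the estimated covariance matrix $\hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}$ directly, so the square roots of its diagonal should reproduce the same two standard errors.
#------optional check via vcov------
sqrt(diag(vcov(mod))) # should match the standard errors printed above
#-----------------------------------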
#---------------------- When there is a single explanatory variable, the model reduces to $$y_i = a + bx_i + \epsilon_i, \qquad i = 1, \dotsc, n$$ and $$\mathbf{X} = \left(
\begin{array}{cc}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_n
\end{array}
\right), \qquad \mathbf{\beta} = \left(
\begin{array}{c}
a\\b
\end{array}
\right)$$ so that $$(\mathbf{X}^{\prime} \mathbf{X})^{-1} = \frac{1}{n\sum x_i^2 - (\sum x_i)^2}
\left(
\begin{array}{cc}
\sum x_i^2 & -\sum x_i \\
-\sum x_i & n
\end{array}
\right)$$ and formulas become more transparent. For example, the standard error of the estimated slope is $$\sqrt{\widehat{\textrm{Var}}(\hat{b})} = \sqrt{[\hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}]_{22}} = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}.$$ > num <- n * anova(mod)[[3]][2]
> denom <- n * sum(x^2) - sum(x)^2
> sqrt(num / denom)
[1] 0.09725302 | {
"source": [
"https://stats.stackexchange.com/questions/44838",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4087/"
]
} |
44,840 | I've dived into the field of neural networks and I became enthralled with them. I have finally developed an application framework for testing trade systems in stock exchanges and now I'm going to implement my first neural network in it.
Very simple and primitive one, not intended for real trading, just for starters. I only want to know if my approach is a good approach. And if you see I'm missing something (or I'm wrong about something) or you have an idea of what could help a beginner in the field of neural networks in market trading, that would just make me super-happy :) I have 40 inputs, market values from the stock exchange (S&P e-mini but that's not important). For these 40 inputs, I know 2 numbers. How much money would I earn or lose with a buy order How much money would I earn or lose with a sell order Because of how stock exchanges work, both numbers can actually be negative/positive indicating that I can lose/earn money for either buy and sell (this is because a trade can have attached "loss limiting" or "targeting" orders like STOP, LIMIT etc. which behave differently). But if that happens, it is an indication that I should not place an order at all, even if both buy&sell orders give positive numbers. I imagine that the best activation function to use is the ...sigmoid thing but with a range from -1 to 1 (I've found it's called many names on the internet...bipolar sigmoid, tanh, tangent something...I'm no profound mathematician). With back-propagation learning I teach the network that for the 40 inputs, there is 1 output and this output is one of these numbers. -1 which means sell order is going to earn money, buy is going to lose money +1 which means buy order is going to earn money, sell is going to lose money 0 which means buy and sell are both going to sell/lose money, best avoid trading I'm imagining that after learning, the network output will always be some number close to -1, 1 or 0 and it's just up to me where I set the threshold for buying or selling. Is this a right way to use a neural network? Everywhere on the internet, the output people are giving the back-propagation learning machine for training is the future values of the market chart and not the expected money yield of different trade entries (buy or sell). I consider that a bad approach because I'm not interested in the future chart values but in the money I want to earn. Edit: I intend to build a neural network for automated trading, not for decision helping. | There are severe flaws with this approach. First, there are many gambles which usually win, but which are bad gambles. Suppose you have the chance to win \$1 $90\%$ of the time and lose \$100 $10\%$ of the time. This has a negative expected value, but the way you are training the neural network would teach it to recommend such reverse lottery tickets. Second, you are missing a big point of the stock exchange, which is to manage risk. What determines the price of an investment is not just its return, it is the return versus the risk which can't be hedged away. Investments with high returns and high risks are not necessarily better than investments with low returns and low risk. If you can invest risk-free at $6\%$ and borrow money at $5\%$, this is more valuable than finding a very risky investment with a return of $60\%$. An investment with a negative rate of return may still be valuable if it is strongly negatively correlated with a risky investment with a high rate of return. So, the rate of return is insufficient for evaluating investments. Third, you should realize that you are competing with other people who also have access to neural networks. There are a lot of commercial programs aimed at day traders based on neural networks.
(These are made by people who find it more profitable to sell software to confused day traders than to use their own systems.) There are many proprietary systems, some of which may involve neural networks. To find value they overlook, you need to have some advantage, and you haven't mentioned any. I'm a big fan of neural networks, but I think typical users of neural networks in the stock market do not understand the basics and burn money. | {
"source": [
"https://stats.stackexchange.com/questions/44840",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17373/"
]
} |
44,992 | In the arima function in R, what does order(1, 0, 12) mean? What are the values that can be assigned to p , d , q , and what is the process to find those values? | What does ARIMA(1, 0, 12) mean? Specifically for your model, ARIMA(1, 0, 12) means that you are describing some response variable (Y) by combining a 1st order Auto-Regressive model and a 12th order Moving Average model. A good way to think about it is (AR, I, MA). This makes your model look like the following, in simple terms: Y = (Auto-Regressive Parameters) + (Moving Average Parameters) The 0 between the 1 and the 12 represents the 'I' part of the model (the Integrated part) and it signifies a model where you're taking differences of the response variable data - this can be done with non-stationary data and it doesn't seem like you're dealing with that, so you can just ignore it. The link that DanTheMan posted shows a nice mix of models that could help you understand yours by comparing it to those. What values can be assigned to p, d, q? Lots of different whole numbers. There are diagnostic tests you can do to try to find the best values of p,d,q (see part 3). What is the process to find the values of p, d, q? There are a number of ways, and I don't intend this to be exhaustive: look at an autocorrelation graph of the data (will help if a Moving Average (MA) model is appropriate) look at a partial autocorrelation graph of the data (will help if an AutoRegressive (AR) model is appropriate) look at an extended autocorrelation chart of the data (will help if a combination of AR and MA are needed) try Akaike's Information Criterion (AIC) on a set of models and investigate the models with the lowest AIC values try the Schwarz Bayesian Information Criterion (BIC) and investigate the models with the lowest BIC values Without knowing how much more you need to know, I can't go too much farther, but if you have more questions, feel free to ask and maybe I, or someone else, can help. * Edit : All of the ways to find p, d, q that I listed here can be found in the R package TSA if you are familiar with R. | {
"source": [
"https://stats.stackexchange.com/questions/44992",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16832/"
]
} |
45,026 | Can you give some real-life examples of time series for which a moving average process of order $q$, i.e.
$$
y_t = \sum_{i=1}^q \theta_i \varepsilon_{t-i} + \varepsilon_t, \text{ where } \varepsilon_t \sim \mathcal{N}(0, \sigma^2)
$$
has some a priori reason for being a good model? At least for me, autoregressive processes seem to be quite easy to understand intuitively, while MA processes do not seem as natural at first glance. Note that I am not interested in theoretical results here (such as Wold's Theorem or invertibility). As an example of what I am looking for, suppose that you have daily stock returns $r_t \sim \text{IID}(0, \sigma^2)$. Then, average weekly stock returns will have an MA(4) structure as a purely statistical artifact. | One very common cause is mis-specification. For example, let $y$ be grocery sales and $\varepsilon$ be an unobserved (to the analyst) coupon campaign that varies in intensity over time. At any point in time, there may be several "vintages" of coupons circulating as people use them, throw them away, and receive new ones. Shocks can also have persistent (but gradually weakening) effects. Take natural disasters or simply bad weather. Battery sales go up before the storm, then fall during, and then jump again as people realize that disaster kits may be a good idea for the future. Similarly, data manipulation (like smoothing or interpolation) can induce this effect. I also have "inherently smooth behavior of time series data (inertia) can cause $MA(1)$" in my notes, but that one no longer makes sense to me. | {
"source": [
"https://stats.stackexchange.com/questions/45026",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4783/"
]
} |
45,050 | For linear regression, we can check the diagnostic plots (residuals plots, Normal QQ plots, etc) to check if the assumptions of linear regression are violated. For logistic regression, I am having trouble finding resources that explain how to diagnose the logistic regression model fit. Digging up some course notes for GLM, it simply states that checking the residuals is not helpful for performing diagnosis for a logistic regression fit. Looking around the internet, there also seems to be various "diagnosis" procedures, such as checking the model deviance and performing chi-squared tests, but other sources state that this is inappropriate, and that you should perform a Hosmer-Lemeshow goodness of fit test. Then I find other sources that state that this test may be highly dependent on the actual groupings and cut-off values (may not be reliable). So how should one diagnose the logistic regression fit? | A few newer techniques I have come across for assessing the fit of logistic regression models come from political science journals: Greenhill, Brian, Michael D. Ward & Audrey Sacks. 2011. The separation plot: A new visual method for evaluating the fit of binary models. American Journal of Political Science 55(4):991-1002 . Esarey, Justin & Andrew Pierce. 2012. Assessing fit quality and testing for misspecification in binary-dependent variable models. Political Analysis 20(4): 480-500 . Preprint PDF Here Both of these techniques purport to replace Goodness-of-Fit tests (like Hosmer & Lemeshow) and identify potential mis-specification (in particular non-linearity in included variables in the equation). These are particularly useful as typical R-square measures of fit are frequently criticized . Both of the above papers above utilize predicted probabilities vs. observed outcomes in plots - somewhat avoiding the unclear issue of what is a residual in such models. Examples of residuals could be contribution to the log-likelihood or Pearson residuals (I believe there are many more though). Another measure that is often of interest (although not a residual) are DFBeta's (the amount a coefficient estimate changes when an observation is excluded from the model). See examples in Stata for this UCLA page on Logistic Regression Diagnostics along with other potential diagnostic procedures. I don't have it handy, but I believe J. Scott Long's Regression Models for Categorical and Limited Dependent Variables goes in to sufficient detail on all of these different diagnostic measures in a simple manner. | {
"source": [
"https://stats.stackexchange.com/questions/45050",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2252/"
]
} |
45,071 | I was just wondering, just because I tend to wonder about these sort of things in my spare time, if there are any data pieces that are 1Gig+ that have 0 Statistical Redundancy? (i.e., uncompressible with lossless compression.) Is it even possible for a file larger than a few bytes to have such a trait? Is that property possible at all, or merely theoretical? Anyway, please let me know if such a thing would be possible, and, if feasible, please link me to a place where I could view such a piece of data. Google gave me nothing. Might as well include the Wikipedia page about statistical redundancy: en.wikipedia.org/wiki/Redundancy (information theory) | A few newer techniques I have come across for assessing the fit of logistic regression models come from political science journals: Greenhill, Brian, Michael D. Ward & Audrey Sacks. 2011. The separation plot: A new visual method for evaluating the fit of binary models. American Journal of Political Science 55(4):991-1002 . Esarey, Justin & Andrew Pierce. 2012. Assessing fit quality and testing for misspecification in binary-dependent variable models. Political Analysis 20(4): 480-500 . Preprint PDF Here Both of these techniques purport to replace Goodness-of-Fit tests (like Hosmer & Lemeshow) and identify potential mis-specification (in particular non-linearity in included variables in the equation). These are particularly useful as typical R-square measures of fit are frequently criticized . Both of the above papers above utilize predicted probabilities vs. observed outcomes in plots - somewhat avoiding the unclear issue of what is a residual in such models. Examples of residuals could be contribution to the log-likelihood or Pearson residuals (I believe there are many more though). Another measure that is often of interest (although not a residual) are DFBeta's (the amount a coefficient estimate changes when an observation is excluded from the model). See examples in Stata for this UCLA page on Logistic Regression Diagnostics along with other potential diagnostic procedures. I don't have it handy, but I believe J. Scott Long's Regression Models for Categorical and Limited Dependent Variables goes in to sufficient detail on all of these different diagnostic measures in a simple manner. | {
"source": [
"https://stats.stackexchange.com/questions/45071",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17452/"
]
} |
45,087 | Why doesn't backpropagation work when you initialize all the weights to the same value (say 0.5), but works fine when given random numbers? Shouldn't the algorithm calculate the error and work from there, despite the fact that the weights are initially the same? | Symmetry breaking. If all weights start with equal values and if the solution requires that unequal weights be developed, the system can never learn. This is because error is propagated back through the weights in proportion to the values of the weights. This means that all hidden units connected directly to the output units will get identical error signals, and, since the weight changes depend on the error signals, the weights from those units to the output units must always be the same. The system is starting out at a kind of unstable equilibrium point that keeps the weights equal, but it is higher than some neighboring points on the error surface, and once it moves away to one of these points, it will never return. We counteract this problem by starting the system with small random weights. Under these conditions symmetry problems of this kind do not arise.
"source": [
"https://stats.stackexchange.com/questions/45087",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17349/"
]
} |
45,124 | If I calculate the median of a sufficiently large number of observations drawn from the same distribution, does the central limit theorem state that the distribution of medians will approximate a normal distribution? My understanding is that this is true with the means of a large number of samples, but is it also true with medians? If not, what is the underlying distribution of sample medians? | If you work in terms of indicator variables (i.e. $Z_i = 1$ if $X_i \leq x$ and $0$ otherwise), you can directly apply the Central limit theorem to a mean of $Z$ 's, and by using the Delta method , turn that into an asymptotic normal distribution for $F_X^{-1}(\bar{Z})$ , which in turn means that you get asymptotic normality for fixed quantiles of $X$ . So not just the median, but quartiles, 90th percentiles, ... etc. Loosely, if we're talking about the $q$ th sample quantile in sufficiently large samples, we get that it will approximately have a normal distribution with mean the $q$ th population quantile $x_q$ and variance $q(1-q)/(nf_X(x_q)^2)$ . Hence for the median ( $q = 1/2$ ), the variance in sufficiently large samples will be approximately $1/(4nf_X(\tilde{\mu})^2)$ . You need all the conditions along the way to hold, of course, so it doesn't work in all situations, but for continuous distributions where the density at the population quantile is positive and differentiable, etc, ... Further, it doesn't hold for extreme quantiles, because the CLT doesn't kick in there (the average of Z's won't be asymptotically normal). You need different theory for extreme values. Edit: whuber's critique is correct; this would work if $x$ were a population median rather than a sample median. The argument needs to be modified to actually work properly. | {
"source": [
"https://stats.stackexchange.com/questions/45124",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14839/"
]
} |
45,153 | I have a sample dataset with 31 values. I ran a two-tailed t-test using R to test if the true mean is equal to 10: t.test(x=data, mu=10, conf.level=0.95) Output: t = 11.244, df = 30, p-value = 2.786e-12
alternative hypothesis: true mean is not equal to 10
95 percent confidence interval:
19.18980 23.26907
sample estimates:
mean of x
21.22944 Now I'm trying to do the same thing manually: t.value = (mean(data) - 10) / (sd(data) / sqrt(length(data)))
p.value = dt(t.value, df=length(lengths-1)) The t-value calculated using this method is the same as output by the t-test R function. The p-value, however, comes out to be 3.025803e-12. Any ideas what I'm doing wrong? Thanks! EDIT Here is the full R code, including my dataset: # Raw dataset -- 32 observations
data = c(21.75, 18.0875, 18.75, 23.5, 14.125, 16.75, 11.125, 11.125, 14.875, 15.5, 20.875,
17.125, 19.075, 25.125, 27.75, 29.825, 17.825, 28.375, 22.625, 28.75, 27, 12.825,
26, 32.825, 25.375, 24.825, 25.825, 15.625, 26.825, 24.625, 26.625, 19.625)
# Student t-Test
t.test(x=data, mu=10, conf.level=0.95)
# Manually calculate p-value
t.value = (mean(data) - 10) / (sd(data) / sqrt(length(data)))
p.value = dt(t.value, df=length(data) - 1) | Use pt and make it two-tailed. > 2*pt(11.244, 30, lower=FALSE)
[1] 2.785806e-12 | {
"source": [
"https://stats.stackexchange.com/questions/45153",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17488/"
]
} |
45,588 | Suppose that a random variable has a lower and an upper bound [0,1]. How to compute the variance of such a variable? | You can prove Popoviciu's inequality as follows. Use the notation $m=\inf X$ and $M=\sup X$ . Define a function $g$ by $$
g(t)=\mathbb{E}\!\left[\left(X-t\right)^2\right] \, .
$$ Computing the derivative $g'$ , and solving $$
g'(t) = -2\mathbb{E}[X] +2t=0 \, ,
$$ we find that $g$ achieves its minimum at $t=\mathbb{E}[X]$ (note that $g''>0$ ). Now, consider the value of the function $g$ at the special point $t=\frac{M+m}{2}$ . It must be the case that $$
\mathbb{Var}[X]=g(\mathbb{E}[X])\leq g\left(\frac{M+m}{2}\right) \, .
$$ But $$
g\left(\frac{M+m}{2}\right) = \mathbb{E}\!\left[\left(X - \frac{M+m}{2}\right)^2 \right] = \frac{1}{4}\mathbb{E}\!\left[\left((X-m) + (X-M)\right)^2 \right] \, .
$$ Since $X-m\geq 0$ and $X-M\leq 0$ , we have $$
\left((X-m)+(X-M)\right)^2\leq\left((X-m)-(X-M)\right)^2=\left(M-m\right)^2 \, ,
$$ implying that $$
\frac{1}{4}\mathbb{E}\!\left[\left((X-m) + (X-M)\right)^2 \right] \leq \frac{1}{4}\mathbb{E}\!\left[\left((X-m) - (X-M)\right)^2 \right] = \frac{(M-m)^2}{4} \, .
$$ Therefore, we proved Popoviciu's inequality $$
\mathbb{Var}[X]\leq \frac{(M-m)^2}{4} \, .
$$ | {
"source": [
"https://stats.stackexchange.com/questions/45588",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17675/"
]
} |
45,643 | I am reading books about linear regression. There are some sentences about the L1 and L2 norm. I know the formulas, but I don't understand why the L1 norm enforces sparsity in models. Can someone give a simple explanation? | Consider the vector $\vec{x}=(1,\varepsilon)\in\mathbb{R}^2$ where $\varepsilon>0$ is small. The $l_1$ and $l_2$ norms of $\vec{x}$, respectively, are given by $$||\vec{x}||_1 = 1+\varepsilon,\ \ ||\vec{x}||_2^2 = 1+\varepsilon^2$$ Now say that, as part of some regularization procedure, we are going to reduce the magnitude of one of the elements of $\vec{x}$ by $\delta\leq\varepsilon$. If we change $x_1$ to $1-\delta$, the resulting norms are $$||\vec{x}-(\delta,0)||_1 = 1-\delta+\varepsilon,\ \ ||\vec{x}-(\delta,0)||_2^2 = 1-2\delta+\delta^2+\varepsilon^2$$ On the other hand, reducing $x_2$ by $\delta$ gives norms $$||\vec{x}-(0,\delta)||_1 = 1-\delta+\varepsilon,\ \ ||\vec{x}-(0,\delta)||_2^2 = 1-2\varepsilon\delta+\delta^2+\varepsilon^2$$ The thing to notice here is that, for an $l_2$ penalty, regularizing the larger term $x_1$ results in a much greater reduction in norm than doing so to the smaller term $x_2\approx 0$. For the $l_1$ penalty, however, the reduction is the same. Thus, when penalizing a model using the $l_2$ norm, it is highly unlikely that anything will ever be set to zero, since the reduction in $l_2$ norm going from $\varepsilon$ to $0$ is almost nonexistent when $\varepsilon$ is small. On the other hand, the reduction in $l_1$ norm is always equal to $\delta$, regardless of the quantity being penalized. Another way to think of it: it's not so much that $l_1$ penalties encourage sparsity, but that $l_2$ penalties in some sense discourage sparsity by yielding diminishing returns as elements are moved closer to zero. | {
"source": [
"https://stats.stackexchange.com/questions/45643",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1803/"
]
} |
45,652 | What is the difference between the algorithms EM (Expectation Maximization) and Gradient Ascent (or descent)? Is there any condition under which they are equivalent? | From: Xu L and Jordan MI (1996). On Convergence Properties of the EM Algorithm for
Gaussian Mixtures . Neural Computation 2: 129-151. Abstract: We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. Page 2 In particular we show that the EM step can be obtained by pre-multiplying the gradient by a positive definite matrix. We provide an explicit expression for the matrix ... Page 3 That is, the EM algorithm can be viewed as a variable metric gradient ascent algorithm ... That is, the paper provides explicit transformations of the EM algorithm into gradient-ascent, Newton, quasi-Newton. From wikipedia There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss–Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
"source": [
"https://stats.stackexchange.com/questions/45652",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10669/"
]
} |
45,660 | Conditional on the fact that during the first t trials coin landed once on heads. | From: Xu L and Jordan MI (1996). On Convergence Properties of the EM Algorithm for
Gaussian Mixtures . Neural Computation 2: 129-151. Abstract: We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. Page 2 In particular we show that the EM step can be obtained by pre-multiplying the gradient by a positive denite matrix. We provide an explicit expression for the matrix ... Page 3 That is, the EM algorithm can be viewed as a variable metric gradient ascent algorithm ... This is, the paper provides explicit transformations of the EM algorithm into gradient-ascent, Newton, quasi-Newton. From wikipedia There are other methods for finding maximum likelihood estimates, such as gradient descent, conjugate gradient or variations of the Gauss–Newton method. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function. | {
"source": [
"https://stats.stackexchange.com/questions/45660",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17706/"
]
} |
45,820 | Suppose I have a dataset with $n$ observations and $p$ variables (dimensions), where generally $n$ is small ($n=12-16$) and $p$ may range from small ($p = 4-10$) to perhaps much larger ($p= 30-50$). I remember learning that $n$ should be much larger than $p$ in order to run principal component analysis (PCA) or factor analysis (FA), but it seems like this may not be so in my data. Note that for my purposes I am rarely interested in any principal components past PC2. Questions: What are the rules of thumb for minimum sample size when PCA is OK to use, and when it is not? Is it ever OK to use the first few PCs even if $n=p$ or $n<p$? Are there any references on this? Does it matter if your main goal is to use PC1 and possibly PC2 either: simply graphically, or as a synthetic variable then used in regression? | You can actually measure whether your sample size is "large enough". One symptom of the sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb your data set by deleting/exchanging a small fraction of your sample and then build "surrogate models" for each of the disturbed data sets. If the surrogate models are similar enough (= stable), you are fine.
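For concreteness, here is a minimal sketch (not from the original answer) of such a stability check in R, using prcomp on a data matrix X that stands in for your own data; the absolute correlation is used because of the sign-flipping issue mentioned next.
# bootstrap stability check for PC1; X is an n x p numeric matrix (placeholder name)
pc1 <- prcomp(X, scale. = TRUE)$rotation[, 1]
stability <- replicate(200, {
 Xb <- X[sample(nrow(X), replace = TRUE), ] # resample observations
 pc1b <- prcomp(Xb, scale. = TRUE)$rotation[, 1]
 abs(cor(pc1, pc1b)) # absolute value guards against sign flips
})
summary(stability) # values near 1 suggest PC1 is stable at this sample size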
You'll probably need to take into account that the solution of the PCA is not unique: PCs can flip (multiply both a score and the respective principal component by $-1$). You may also want to use Procrustes rotation, to obtain PC models that are as similar as possible. | {
"source": [
"https://stats.stackexchange.com/questions/45820",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6072/"
]
} |
45,842 | Consider any arbitrary estimator called $\hat{M}$ (e.g., regression coefficient estimator or specific type of correlation estimator, etc) that satisfies the following asymptotic property: $$\boxed{\sqrt{N}(\hat{M}-M) \overset{d}{\to}\mathcal{N}(0,\sigma^2)}\,\,\,\,\,\,\,\,\,\,\,\,(1)$$ which implies that our $\hat{M}$ is consistent. We also have a consistent estimator $\hat{\sigma}$, which gives rise to the asymptotic property: $$\displaystyle \ \ \boxed{\frac{\sqrt{N}(\hat{M}-M)}{\hat{\sigma}} \overset{d}{\to}\mathcal{N}(0,1)}\,\,\,\,\,\,\,\,\,\,\,\,(2)$$ I'm wondering if I can use the $z$- or $t$-test just like normal for any such $\hat{M}$ that satisfies the above? Let $Q$ be defined as the test statistic: $$\displaystyle \ \ \boxed{Q_\hat{M} = \frac{\hat{M}-M_{H_0}}{\sqrt{\frac{1}{N}\hat{\sigma}^2}}}\,\,\,\,\,\,\,\,\,\,\,\,(3)$$ My goal is to do the following hypothesis test: $H_0: M = 0$ $H_a: M \not= 0$ yet the only information I have access to is $(1)$ and $(2)$, whence my question. $$\underline{\text{Update}}$$ The current answers suggest that I can't always robustly $z$- or $t$-test for any such $\hat{M}$. I am reading the relevant sections of All of Statistics (Wasserman), as well as Statistical Inference (Casella & Berger). Both state that, if: $$\displaystyle \ \ \frac{\sqrt{N}(\hat{M}-M)}{\hat{\sigma}} \overset{d}{\to} \mathcal{N}(0,1)$$ then " an approximate test can be based on the wald statistic $Q$ and would reject $H_0$ if.f. $Q < -z_{\alpha/2}$ or $Q > z_{\alpha/2}$ " (in Casella & Berger, page 492, "10.3.2 Other Large-Sample Tests") or, in (Wasserman, page 158, Theorem 10.13) " Let $Q = (\hat{M}-M_{H_0})/\hat{se}$ denote the observed value of the Wald statistic $Q$ $\big($where $\hat{se}$ is obviously equal to my $\sqrt{\frac{1}{N}\hat{\sigma}^2}$$\big)$. The p-value is given by: $$p = 2\Phi(-|Q|)$$ This contradicts the existing advice since they do not state any other necessary assumptions to be able to do this legitimately (to the best of my ability to comprehend). Either; I have failed to understand existing answers. I have failed to express my original question clearly. I have failed to read these chapters properly. They are excluding thoroughness for pedagogical purposes. I would appreciate some assistance on which option is correct. Thanks. $\big($Please go easy I am new to stats :)$\big)$. Another dimension is that my intended application is $n = 3000$, so perhaps the finite sample problems are less relevant? | You can actually measure whether your sample size is "large enough". One symptom of small sample size being too small is instability. Bootstrap or cross validate your PCA: these techniques disturb your data set by deleting/exchanging a small fraction of your sample and then build "surrogate models" for each of the disturbed data sets. If the surrogate models are similar enough (= stable), you are fine.
You'll probably need to take into account that the solution of the PCA is not unique: PCs can flip (multiply both a score and the respective principal component by $-1$). You may also want to use Procrustes rotation, to obtain PC models that are as similar as possible. | {
"source": [
"https://stats.stackexchange.com/questions/45842",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16811/"
]
} |
45,875 | MAD = Mean Absolute Deviation
MSE = Mean Squared Error I've seen suggestions from various places that MSE is used despite some undesirable qualities (e.g. http://www.stat.nus.edu.sg/~staxyc/T12.pdf , which states on p8 "It is commonly believed that MAD is a better criterion than MSE. However, mathematically MSE is more convenient than MAD.") Is there more to it than that? Is there a paper that thoroughly analyzes the situations in which various methods of measuring forecast error are more/less appropriate? My google searches haven't revealed anything. A similar question to this was asked at https://stackoverflow.com/questions/13391376/how-to-decide-the-forecasting-method-from-the-me-mad-mse-sde , and the user was asked to post on stats.stackexchange.com, but I don't think they ever did. | To decide which point forecast error measure to use, we need to take a step back. Note that we don't know the future outcome perfectly, nor will we ever. So the future outcome follows a probability distribution . Some forecasting methods explicitly output such a full distribution, and some don't - but it is always there, if only implicitly. Now, we want to have a good error measure for a point forecast . Such a point forecast $F_t$ is our attempt to summarize what we know about the future distribution (i.e., the predictive distribution) at time $t$ using a single number, a so-called functional of the future density. The error measure then is a way to assess the quality of this single number summary. So you should choose an error measure that rewards "good" one number summaries of (unknown, possibly forecasted, but possibly only implicit) future densities. The challenge is that different error measures are minimized by different functionals. The expected MSE is minimized by the expected value of the future distribution. The expected MAD is minimized by the median of the future distribution. Thus, if you calibrate your forecasts to minimize the MAE, your point forecast will be the future median, not the future expected value, and your forecasts will be biased if your future distribution is not symmetric. This is most relevant for count data, which are typically skewed. In extreme cases (say, Poisson distributed sales with a mean below $\log 2\approx 0.69$ ), your MAE will be lowest for a flat zero forecast. See here or here or here for details. I give some more information and an illustration in What are the shortcomings of the Mean Absolute Percentage Error (MAPE)? That thread considers the mape , but also other error measures, and it contains links to other related threads. In the end, which error measure to use really depends on your Cost of Forecast Error, i.e., which kind of error is most painful. Without looking at the actual implications of forecast errors, any discussion about "better criteria" is basically meaningless. Measures of forecast accuracy were a big topic in the forecasting community some years back, and they still pop up now and then. One very good article to look at is Hyndman & Koehler "Another look at measures of forecast accuracy" (2006). Finally, one alternative is to calculate full predictive densities and assess these using proper scoring-rules . | {
"source": [
"https://stats.stackexchange.com/questions/45875",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9162/"
]
} |
46,019 | Why are we using the squared residuals instead of the absolute residuals in OLS estimation? My idea was that we use the square of the error values, so that residuals below the fitted line (which are then negative), would still have to be able to be added up to the positive errors. Otherwise, we could have an error of 0 simply because a huge positive error could cancel with a huge negative error. So why do we square it, instead of just taking the absolute value? Is that because of the extra penalty for higher errors (instead of 2 being 2 times the error of 1, it is 4 times the error of 1 when we square it). | Both are done. Least squares is easier, and the fact that for independent random variables "variances add" means that it's considerably more convenient; for examples, the ability to partition variances is particularly handy for comparing nested models. It's somewhat more efficient at the normal (least squares is maximum likelihood), which might seem to be a good justification -- however, some robust estimators with high breakdown can have surprisingly high efficiency at the normal. But L1 norms are certainly used for regression problems and these days relatively often. If you use R, you might find the discussion in section 5 here useful: https://socialsciences.mcmaster.ca/jfox/Books/Companion/appendices/Appendix-Robust-Regression.pdf (though the stuff before it on M estimation is also relevant, since it's also a special case of that) | {
"source": [
"https://stats.stackexchange.com/questions/46019",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16175/"
]
} |
46,151 | In the simple linear regression case $y=\beta_0+\beta_1x$, you can derive the least squares estimator $\hat\beta_1=\frac{\sum(x_i-\bar x)(y_i-\bar y)}{\sum(x_i-\bar x)^2}$ such that you don't have to know $\hat\beta_0$ to estimate $\hat\beta_1$. Suppose I have $y=\beta_1x_1+\beta_2x_2$, how do I derive $\hat\beta_1$ without estimating $\hat\beta_2$? Or is this not possible? | The derivation in matrix notation Starting from $y= Xb +\epsilon $, which really is just the same as $\begin{bmatrix}
y_{1} \\
y_{2} \\
\vdots \\
y_{N}
\end{bmatrix}
=
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1K} \\
x_{21} & x_{22} & \cdots & x_{2K} \\
\vdots & \ddots & \ddots & \vdots \\
x_{N1} & x_{N2} & \cdots & x_{NK}
\end{bmatrix}
*
\begin{bmatrix}
b_{1} \\
b_{2} \\
\vdots \\
b_{K}
\end{bmatrix}
+
\begin{bmatrix}
\epsilon_{1} \\
\epsilon_{2} \\
\vdots \\
\epsilon_{N}
\end{bmatrix} $ it all comes down to minimizing $e'e$: $\epsilon'\epsilon = \begin{bmatrix}
e_{1} & e_{2} & \cdots & e_{N} \\
\end{bmatrix}
\begin{bmatrix}
e_{1} \\
e_{2} \\
\vdots \\
e_{N}
\end{bmatrix} = \sum_{i=1}^{N}e_{i}^{2}
$ So minimizing $e'e$ gives us: $min_{b}$ $e'e = (y-Xb)'(y-Xb)$ $min_{b}$ $e'e = y'y - 2b'X'y + b'X'Xb$ $\frac{\partial(e'e)}{\partial b} = -2X'y + 2X'Xb \stackrel{!}{=} 0$ $X'Xb=X'y$ $b=(X'X)^{-1}X'y$ One last mathematical thing, the second order condition for a minimum requires that the matrix $X'X$ is positive definite. This requirement is fulfilled in case $X$ has full rank. The more accurate derivation which goes through all the steps in greater depth can be found under http://economictheoryblog.com/2015/02/19/ols_estimator/ | {
"source": [
"https://stats.stackexchange.com/questions/46151",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14120/"
]
} |
46,226 | I've got a dataset on agricultural trials. My response variable is a response ratio: log(treatment/control). I'm interested in what mediates the difference, so I'm running RE meta-regressions (unweighted, because is seems pretty clear that effect size is uncorrelated with variance of estimates). Each study reports grain yield, biomass yield, or both. I can't impute grain yield from studies that report biomass yield alone, because not all of the plants studied were useful for grain (sugar cane is included, for instance). But each plant that produced grain also had biomass. For missing covariates, I've been using iterative regression imputation (following Andrew Gelman's textbook chapter). It seems to give reasonable results, and the whole process is generally intuitive. Basically I predict missing values, and use those predicted values to predict missing values, and loop through each variable until each variable approximately converges (in distribution). Is there any reason why I can't use the same process to impute missing outcome data? I can probably form a relatively informative imputation model for biomass response ratio given grain response ratio, crop type, and other covariates that I have. I'd then average the coefficients and VCV's, and add the MI correction as per standard practice. But what do these coefficients measure when the outcomes themselves are imputed? Is the interpretation of the coefficients any different than standard MI for covariates? Thinking about it, I can't convince myself that this doesn't work, but I'm not really sure. Thoughts and suggestions for reading material are welcome. | As you suspected, it is valid to use multiple imputation for the outcome measure. There are cases where this is useful, but it can also be risky. I consider the situation where all covariates are complete, and the outcome is incomplete. If the imputation model is correct, we will obtain valid inferences on the parameter estimates from the imputed data. The inferences obtained from just the complete cases may actually be wrong if the missingness is related to the outcome after conditioning on the predictor, i.e. under MNAR. So imputation is useful if we know (or suspect) that the data are MNAR. Under MAR, there are generally no benefits to impute the outcome, and for a low number of imputations the results may even be somewhat more variable because of simulation error. There is an important exception to this. If we have access to an auxiliary complete variable that is not part of the model and that is highly correlated with the outcome, imputation can be considerably more efficient than complete case analysis, resulting in more precise estimates and shorter confidence intervals. A common scenario where this occurs is if we have a cheap outcome measure for everyone, and an expensive measure for a subset. In many data sets, missing data also occur in the independent variables. In these cases, we need to impute the outcome variable since its imputed version is needed to impute the independent variables. | {
"source": [
"https://stats.stackexchange.com/questions/46226",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17359/"
]
} |
46,229 | I am dealing with linear data with outliers, some of which are more than 5 standard deviations away from the estimated regression line. I'm looking for a linear regression technique that reduces the influence of these points. So far what I did was to estimate the regression line with all the data, then discard the data points with very large squared residuals (say the top 10%) and repeat the regression without those points. In the literature there are lots of possible approaches: least trimmed squares, quantile regression , m-estimators, etc. I really don't know which approach I should try, so I'm looking for suggestions. The important thing for me is that the chosen method should be fast because the robust regression will be computed at
each step of an optimization routine. Thanks a lot! | If your data contains a single outlier, then it can be found reliably using the approach you suggest (without the iterations though). A formal approach to this
is Cook, R. Dennis (1979). Influential Observations in Linear Regression . Journal of the American Statistical Association (American Statistical Association) 74 (365): 169–174. For finding more than one outlier, for many years, the leading method was the so-called $M$ -estimation family of approaches. This is a rather broad family of estimators that includes Huber's $M$ estimator of regression, Koenker's L1 regression as well as the approach proposed by Procastinator in his comment to your question.
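To make the M-estimation route concrete, here is a minimal sketch (not from the original answer) using rlm from the MASS package, a Huber-type M-estimator; y and x are placeholders for your own response and predictor, and the settings shown are just illustrative defaults.
library(MASS)
fit_ols <- lm(y ~ x) # ordinary least squares, can be pulled around by outliers
fit_m <- rlm(y ~ x, psi = psi.huber, maxit = 50) # iteratively reweighted Huber M-estimate
summary(fit_m)$coefficients # compare with coef(summary(fit_ols))
Because rlm is fitted by iteratively reweighted least squares it is usually nearly as fast as lm, which matters here since the regression sits inside an optimization loop.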
The $M$ estimators with convex $\rho$ functions have the advantage that they have about the same numerical complexity as a regular regression estimation. The big disadvantage is that they can only reliably find the outliers if: the contamination rate of your sample is smaller than $\frac{1}{1+p}$ where $p$ is the number of design variables, or if the outliers are not outlying in the design space (Ellis and Morgenthaler (1992)). You can find good implementations of $M$ ( $l_1$ ) estimates of regression in the robustbase ( quantreg ) R package. If your data contains more than $\lfloor\frac{n}{p+1}\rfloor$ outliers potentially also outlying on the design space, then finding them amounts to solving a combinatorial problem (equivalently the solution to an $M$ estimator with re-descending/non-convex $\rho$ function). In the last 20 years (and especially the last 10) a large body of fast and reliable outlier detection algorithms has been designed to approximately solve this combinatorial problem. These are now widely implemented in the most popular statistical packages (R, Matlab, SAS, STATA,...). Nonetheless, the numerical complexity of finding outliers with these approaches is typically of order $O(2^p)$ . Most algorithms can be used in practice for values of $p$ in the mid teens. Typically these algorithms are linear in $n$ (the number of observations) so the number of observations isn't an issue. A big advantage is that most of these algorithms are embarrassingly parallel. More recently, many approaches specifically designed for higher dimensional data have been proposed. Given that you did not specify $p$ in your question, I will list some references
for the case $p<20$ . Here are some papers that explain this in greater details in these series of review articles: Rousseeuw, P. J. and van Zomeren B.C. (1990). Unmasking Multivariate Outliers and Leverage Points . Journal of the American Statistical Association , Vol. 85, No. 411, pp. 633-639. Rousseeuw, P.J. and Van Driessen, K. (2006). Computing LTS Regression for Large Data Sets . Data Mining and Knowledge Discovery archive Volume 12 Issue 1, Pages 29 - 45. Hubert, M., Rousseeuw, P.J. and Van Aelst, S. (2008). High-Breakdown Robust Multivariate Methods . Statistical Science , Vol. 23, No. 1, 92–119 Ellis S. P. and Morgenthaler S. (1992). Leverage and Breakdown in L1 Regression. Journal of the American Statistical Association , Vol. 87,
No. 417, pp. 143-148 A recent reference book on the problem of outlier identification is: Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and
Methods. Wiley, New York. These (and many other variations of these) methods are implemented (among others) in the robustbase R package. | {
"source": [
"https://stats.stackexchange.com/questions/46229",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26105/"
]
} |
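To make the families mentioned in the answer to 46,229 concrete, here is a small R sketch (dat, with columns y and x, is an assumed data frame, not the asker's data) of a Huber M-estimator, L1/median regression, and a fast high-breakdown MM-estimator; all three refit quickly enough to sit inside an optimization loop for moderate n and p.
library(MASS); library(quantreg); library(robustbase)
fit_huber <- rlm(y ~ x, data = dat, psi = psi.huber)  # convex-rho M-estimator
fit_l1    <- rq(y ~ x, tau = 0.5, data = dat)         # Koenker's L1 (median) regression
fit_mm    <- lmrob(y ~ x, data = dat)                 # redescending-rho MM-estimator, high breakdown
rbind(huber = coef(fit_huber), l1 = coef(fit_l1), mm = coef(fit_mm))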
46,334 | I'm trying to graph the number of actions by users (in this case, "likes") over time. So I have "Number of actions" as my y-axis, my x-axis is time (weeks), and each line represents one user. My problem is that I want to look at this data for a set of about 100 users. A line graph quickly becomes a jumbled mess with 100 lines. Is there a better type of graph I can use to display this information? Or should I look at being able to toggle individual lines on/off? I'd like to see all the data at once, but being able to discern the number of actions with high precision isn't terribly important. Why I'm doing this For a subset of my users (top users), I want to find out which ones may not have liked a new version of the application that was rolled out on a certain date. I'm looking for significant drops in the number of actions by individual users. | I would like to suggest a (standard) preliminary analysis to remove the principal effects of (a) variation among users, (b) the typical response among all users to the change, and (c) typical variation from one time period to the next. A simple (but by no means the best) way to do this is to perform a few iterations of "median polish" on the data to sweep out user medians and time period medians, then smooth the residuals over time. Identify the smooths that change a lot: they are the users you want to emphasize in the graphic. Because these are count data, it's a good idea to re-express them using a square root. As an example of what can result, here is a simulated 60-week dataset of 240 users who typically undertake 10 to 20 actions per week. A change in all users occurred after week 40. Three of these were "told" to respond negatively to the change. The left plot shows the raw data: counts of action by user (with users distinguished by color) over time. As asserted in the question, it's a mess. The right plot shows the results of this EDA--in the same colors as before--with the unusually responsive users automatically identified and highlighted. The identification--although it is somewhat ad hoc --is complete and correct (in this example). Here is the R code that produced these data and carried out the analysis. It could be improved in several ways, including Using a full median polish to find the residuals, rather than just one iteration. Smoothing the residuals separately before and after the change point. Perhaps using a more sophisticated outlier detection algorithm. The current one merely flags all users whose range of residuals is more than twice the median range. Albeit simple, it is robust and appears to work well. (A user-settable value, threshold , can be adjusted to make this identification more or less stringent.) Testing nevertheless suggests this solution works well for a wide range of user counts, 12 - 240 or more. n.users <- 240 # Number of users (here limited to 657, the number of colors)
n.periods <- 60 # Number of time periods
i.break <- 40 # Period after which change occurs
n.outliers <- 3 # Number of greatly changed users
window <- 1/5 # Temporal smoothing window, fraction of total period
response.all <- 1.1 # Overall response to the change
threshold <- 2 # Outlier detection threshold
# Create a simulated dataset
set.seed(17)
base <- exp(rnorm(n.users, log(10), 1/2))
response <- c(rbeta(n.users - n.outliers, 9, 1),
rbeta(n.outliers, 5, 45)) * response.all
actual <- cbind(base %o% rep(1, i.break),
base * response %o% rep(response.all, n.periods-i.break))
observed <- matrix(rpois(n.users * n.periods, actual), nrow=n.users)
# ---------------------------- The analysis begins here ----------------------------#
# Plot the raw data as lines
set.seed(17)
colors = sample(colors(), n.users) # (Use a different method when n.users > 657)
par(mfrow=c(1,2))
plot(c(1,n.periods), c(min(observed), max(observed)), type="n",
xlab="Time period", ylab="Number of actions", main="Raw data")
i <- 0
apply(observed, 1, function(a) {i <<- i+1; lines(a, col=colors[i])})
abline(v = i.break, col="Gray") # Mark the last period before a change
# Analyze the data by time period and user by sweeping out medians and smoothing
x <- sqrt(observed + 1/6) # Re-express the counts
mean.per.period <- apply(x, 2, median)
residuals <- sweep(x, 2, mean.per.period)
mean.per.user <- apply(residuals, 1, median)
residuals <- sweep(residuals, 1, mean.per.user)
smooth <- apply(residuals, 1, lowess, f=window) # Smooth the residuals
smooth.y <- sapply(smooth, function(s) s$y) # Extract the smoothed values
ends <- ceiling(window * n.periods / 4) # Prepare to drop near-end values
range <- apply(smooth.y[-(1:ends), ], 2, function(x) max(x) - min(x))
# Mark the apparent outlying users
thick <- rep(1, n.users)
thick[outliers <- which(range >= threshold * median(range))] <- 3
type <- ifelse(thick==1, 3, 1)
cat(outliers) # Print the outlier identifiers (ideally, the last `n.outliers`)
# Plot the residuals
plot(c(1,n.periods), c(min(smooth.y), max(smooth.y)), type="n",
xlab="Time period", ylab="Smoothed residual root", main="Residuals")
i <- 0
tmp <- lapply(smooth,
function(a) {i <<- i+1; lines(a, lwd=thick[i], lty=type[i], col=colors[i])})
abline(v = i.break, col="Gray") | {
"source": [
"https://stats.stackexchange.com/questions/46334",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18009/"
]
} |
46,429 | I am looking for a method to transform my dataset from its current mean and standard deviation to a target mean and a target standard deviation. Basically, I want to shrink/expand the dispersion and scale all numbers to a mean. It doesn't work to do two separate linear transformations, one for standard deviation, and then one for mean. What method should I use? | Suppose you start $\{x_i\}$ with mean $m_1$ and non-zero standard deviation $s_1$ and you want to arrive at a similar set with mean $m_2$ and standard deviation $s_2$. Then multiplying all your values by $\frac{s_2}{s_1}$ will give a set with mean $m_1 \times \frac{s_2}{s_1}$ and standard deviation $s_2$. Now adding $m_2 - m_1 \times \frac{s_2}{s_1}$ will give a set with mean $m_2$ and standard deviation $s_2$. So a new set $\{y_i\}$ with $$y_i= m_2+ (x_i- m_1) \times \frac{s_2}{s_1} $$ has mean $m_2$ and standard deviation $s_2$. You would get the same result with the three steps: translate the mean to $0$, scale to the desired standard deviation; translate to the desired mean. | {
"source": [
"https://stats.stackexchange.com/questions/46429",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18044/"
]
} |
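A quick numerical check of the transformation in the answer to 46,429, in R; the target values m2 and s2 are arbitrary numbers chosen for the example.
x <- rnorm(1000, mean = 5, sd = 3)      # original data
m2 <- 10; s2 <- 2                       # target mean and standard deviation
y <- m2 + (x - mean(x)) * s2 / sd(x)    # rescale the spread and shift the center in one step
c(mean(y), sd(y))                       # returns 10 and 2, up to floating-point error
Because the sample mean and sample standard deviation are used here, the transformed sample hits the targets exactly; plugging in the population parameters of the generating distribution instead would only match them in expectation.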
46,523 | I know I'm missing something in my understanding of logistic regression, and would really appreciate any help. As far as I understand it, the logistic regression assumes that the probability of a '1' outcome given the inputs, is a linear combination of the inputs, passed through an inverse-logistic function. This is exemplified in the following R code: #create data:
x1 = rnorm(1000) # some continuous variables
x2 = rnorm(1000)
z = 1 + 2*x1 + 3*x2 # linear combination with a bias
pr = 1/(1+exp(-z)) # pass through an inv-logit function
y = pr > 0.5 # take as '1' if probability > 0.5
#now feed it to glm:
df = data.frame(y=y,x1=x1,x2=x2)
glm =glm( y~x1+x2,data=df,family="binomial") and I get the following error message: Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred I've worked with R for some time now; enough to know that probably I'm the one to blame...
What is happening here? | The problem is in how the response is simulated. Setting y = pr > 0.5 makes y a deterministic function of x1 and x2, so the two classes are perfectly separable and the maximum likelihood estimates diverge, which is exactly what those two warnings report. The response variable $y_i$ should instead be a Bernoulli random variable taking value $1$ with probability $pr(i)$.
> x1 = rnorm(1000) # some continuous variables
> x2 = rnorm(1000)
> z = 1 + 2*x1 + 3*x2 # linear combination with a bias
> pr = 1/(1+exp(-z)) # pass through an inv-logit function
> y = rbinom(1000,1,pr) # bernoulli response variable
>
> #now feed it to glm:
> df = data.frame(y=y,x1=x1,x2=x2)
> glm( y~x1+x2,data=df,family="binomial")
Call: glm(formula = y ~ x1 + x2, family = "binomial", data = df)
Coefficients:
(Intercept) x1 x2
0.9915 2.2731 3.1853
Degrees of Freedom: 999 Total (i.e. Null); 997 Residual
Null Deviance: 1355
Residual Deviance: 582.9 AIC: 588.9 | {
"source": [
"https://stats.stackexchange.com/questions/46523",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17998/"
]
} |
46,588 | I am a bit confused. Why are Gaussian processes called non parametric models? They do assume that the functional values, or a subset of them, have a Gaussian prior with mean 0 and covariance function given as the kernel function. These kernel functions themselves have some parameters (i.e., hyperparameters). So why are they called non parametric models? | I'll preface this by saying that it isn't always clear what one means by "nonparametric" or "semiparametric" etc. In the comments, it seems likely that whuber has some formal definition in mind (maybe something like choosing a model $M_\theta$ from some family $\{M_\theta: \theta \in \Theta\}$ where $\Theta$ is infinite dimensional), but I'm going to be pretty informal. Some might argue that a nonparametric method is one where the effective number of parameters you use increases with the data. I think there is a video on videolectures.net where (I think) Peter Orbanz gives four or five different takes on how we can define "nonparametric." Since I think I know what sorts of things you have in mind, for simplicity I'll assume that you are talking about using Gaussian processes for regression, in a typical way: we have training data $(Y_i, X_i), i = 1, ..., n$ and we are interested in modeling the conditional mean $E(Y|X = x) := f(x)$. We write
$$
Y_i = f(X_i) + \epsilon_i
$$
and perhaps we are so bold as to assume that the $\epsilon_i$ are iid and normally distributed, $\epsilon_i \sim N(0, \sigma^2)$. $X_i$ will be one dimensional, but everything carries over to higher dimensions. If our $X_i$ can take values in a continuum then $f(\cdot)$ can be thought of as a parameter of (uncountably) infinite dimension. So, in the sense that we are estimating a parameter of infinite dimension, our problem is a nonparametric one. It is true that the Bayesian approach has some parameters floating about here and there. But really, it is called nonparametric because we are estimating something of infinite dimension. The GP priors we use assign mass to every neighborhood of every continuous function, so they can estimate any continuous function arbitrarily well. The things in the covariance function are playing a role similar to the smoothing parameters in the usual frequentist estimators - in order for the problem to not be absolutely hopeless we have to assume that there is some structure that we expect to see $f$ exhibit. Bayesians accomplish this by using a prior on the space of continuous functions in the form of a Gaussian process. From a Bayesian perspective, we are encoding beliefs about $f$ by assuming $f$ is drawn from a GP with such-and-such covariance function. The prior effectively penalizes estimates of $f$ for being too complicated. Edit for computational issues Most (all?) of this stuff is in the Gaussian Process book by Rasmussen and Williams. Computational issues are tricky for GPs. If we proceed naively we will need $O(N^2)$ memory just to hold the covariance matrix and (it turns out) $O(N^3)$ operations to invert it. There are a few things we can do to make things more feasible. One option is to note that the quantity we really need is $v$, the solution to $(K + \sigma^2 I)v = Y$ where $K$ is the covariance matrix. The method of conjugate gradients solves this exactly in $O(N^3)$ computations, but if we satisfy ourselves with an approximate solution we could terminate the conjugate gradient algorithm after $k$ steps and do it in $O(kN^2)$ computations. We also don't necessarily need to store the whole matrix $K$ at once. So we've moved from $O(N^3)$ to $O(kN^2)$, but this still scales quadratically in $N$, so we might not be happy. The next best thing is to work instead with a subset of the data, say of size $m$, where inverting and storing an $m \times m$ matrix isn't so bad. Of course, we don't want to just throw away the remaining data. The subset of regressors approach notes that we can derive the posterior mean of our GP as a regression of our data $Y$ on $N$ data-dependent basis functions determined by our covariance function; so we throw all but $m$ of these away and we are down to $O(m^2 N)$ computations. A couple of other potential options exist. We could construct a low-rank approximation to $K$, and set $K = QQ^T$ where $Q$ is $n \times q$ and of rank $q$; it turns out that inverting $K + \sigma^2 I$ in this case can be done by instead inverting $Q^TQ + \sigma^2 I$. Another option is to choose the covariance function to be sparse and use conjugate gradient methods - if the covariance matrix is very sparse then this can speed up computations substantially. | {
"source": [
"https://stats.stackexchange.com/questions/46588",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12329/"
]
} |
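As a companion to the answer to 46,588, here is a bare-bones R sketch of GP regression with a squared-exponential kernel: the posterior mean at test points comes from the O(N^3) solve of (K + sigma^2 I)v = y. The length-scale and noise variance are simply fixed for illustration rather than estimated, and none of the sparse or low-rank approximations mentioned at the end of the answer are used.
set.seed(1)
n <- 50
x <- sort(runif(n, 0, 10)); y <- sin(x) + rnorm(n, sd = 0.2)
ell <- 1; sigma2 <- 0.2^2                                   # length-scale and noise variance, assumed known here
kfun <- function(a, b) exp(-outer(a, b, "-")^2 / (2 * ell^2))  # squared-exponential kernel
K <- kfun(x, x)
v <- solve(K + sigma2 * diag(n), y)                         # the O(n^3) linear solve discussed above
xs <- seq(0, 10, length.out = 200)
f_post <- kfun(xs, x) %*% v                                 # posterior mean of f at the test points
plot(x, y); lines(xs, f_post)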
46,856 | I recently came across the paper "The Insignificance of Null Hypothesis Significance Testing", Jeff Gill (1999) . The author raised a few common misconceptions regarding hypothesis testing and p-values, about which I have two specific questions: The p-value is technically $P({\rm observation}|H_{0})$, which, as pointed out by the paper, generally does not tell us anything about $P(H_{0}|{\rm observation})$, unless we happen to know the marginal distributions, which is rarely the case in "everyday" hypothesis testing. When we obtain a small p-value and "reject the null hypothesis," what exactly is the probabilistic statement that we are making, since we cannot say anything about $P(H_{0}|{\rm observation})$? The second question relates to a particular statement from page 6(652) of the paper: Since the p-value, or range of p-values indicated by stars, is not set a priori, it is not the long-run probability of making a Type I error but is typically treated as such. Can anyone help to explain what is meant by this statement? | (Technically, the P-value is the probability of observing data at least as extreme as that actually observed, given the null hypothesis.) Q1. A decision to reject the null hypothesis on the basis of a small P-value typically depends on 'Fisher's disjunction': Either a rare event has happened or the null hypothesis is false. In effect, it is the rarity of the event that the P-value tells you about, rather than the probability that the null is false. The probability that the null is false can be obtained from the experimental data only by way of Bayes' theorem, which requires specification of the 'prior' probability of the null hypothesis (presumably what Gill is referring to as "marginal distributions"). Q2. This part of your question is much harder than it might seem. There is a great deal of confusion regarding P-values and error rates which is, presumably, what Gill is referring to with "but is typically treated as such." The combination of Fisherian P-values with Neyman-Pearsonian error rates has been called an incoherent mishmash, and it is unfortunately very widespread. No short answer is going to be completely adequate here, but I can point you to a couple of good papers (yes, one is mine). Both will help you make sense of the Gill paper. Hurlbert, S., & Lombardi, C. (2009). Final collapse of the Neyman-Pearson decision theoretic framework and rise of the neoFisherian. Annales Zoologici Fennici, 46(5), 311–349. (Link to paper) Lew, M. J. (2012). Bad statistical practice in pharmacology (and other basic biomedical disciplines): you probably don't know P. British Journal of Pharmacology, 166(5), 1559–1567. doi:10.1111/j.1476-5381.2012.01931.x (Link to paper) | {
"source": [
"https://stats.stackexchange.com/questions/46856",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
47,590 | I have just heard that it's a good idea to choose initial weights of a neural network from the range $(\frac{-1}{\sqrt d} , \frac{1}{\sqrt d})$, where $d$ is the number of inputs to a given neuron. It is assumed that the sets are normalized - mean 0, variance 1 (don't know if this matters). Why is this a good idea? | I assume you are using logistic neurons, and that you are training by gradient descent/back-propagation. The logistic function is close to flat for large positive or negative inputs. The derivative at an input of $2$ is about $1/10$, but at $10$ the derivative is about $1/22000$ . This means that if the input of a logistic neuron is $10$ then, for a given training signal, the neuron will learn about $2200$ times slower than if the input was $2$. If you want the neuron to learn quickly, you either need to produce a huge training signal (such as with a cross-entropy loss function) or you want the derivative to be large. To make the derivative large, you set the initial weights so that you often get inputs in the range $[-4,4]$. The initial weights you give might or might not work. It depends on how the inputs are normalized. If the inputs are normalized to have mean $0$ and standard deviation $1$, then a random sum of $d$ terms with weights uniform on $(\frac{-1}{\sqrt{d}},\frac{1}{\sqrt{d}})$ will have mean $0$ and variance $\frac{1}{3}$, independent of $d$. The probability that you get a sum outside of $[-4,4]$ is small. That means as you increase $d$, you are not causing the neurons to start out saturated so that they don't learn. With inputs which are not normalized, those weights may not be effective at avoiding saturation. | {
"source": [
"https://stats.stackexchange.com/questions/47590",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16763/"
]
} |
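A small simulation in R checking the variance claim in the answer to 47,590: with standardized inputs and weights uniform on (-1/sqrt(d), 1/sqrt(d)), the input to the logistic unit has variance about 1/3 whatever d is, so it essentially never falls outside [-4, 4].
d <- 100                                     # number of inputs to the neuron
sums <- replicate(10000, {
  w <- runif(d, -1 / sqrt(d), 1 / sqrt(d))   # initial weights
  x <- rnorm(d)                              # inputs standardized to mean 0, variance 1
  sum(w * x)                                 # total input to the logistic unit
})
var(sums)                                    # close to 1/3, independent of d
mean(abs(sums) > 4)                          # essentially zero: the neuron does not start out saturated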
47,771 | Disclaimer: I'm not a statistician but a software engineer. Most of my knowledge in statistics comes from self-education, thus I still have many gaps in understanding concepts that may seem trivial for other people here. So I would be very thankful if answers included less specific terms and more explanation. Imagine that you are talking to your grandma :) I'm trying to grasp the nature of beta distribution – what it should be used for and how to interpret it in each case. If we were talking about, say, normal distribution, one could describe it as arrival time of a train: most frequently it arrives just in time, a bit less frequently it is 1 minute earlier or 1 minute late and very rarely it arrives with difference of 20 minutes from the mean. Uniform distribution describes, in particular, chance of each ticket in lottery. Binomial distribution may be described with coin flips and so on. But is there such intuitive explanation of beta distribution ? Let's say, $\alpha=.99$ and $\beta=.5$ . Beta distribution $B(\alpha, \beta)$ in this case looks like this (generated in R): But what does it actually mean? Y-axis is obviously a probability density, but what is on the X-axis? I would highly appreciate any explanation, either with this example or any other. | The short version is that the Beta distribution can be understood as representing a distribution of probabilities , that is, it represents all the possible values of a probability when we don't know what that probability is. Here is my favorite intuitive explanation of this: Anyone who follows baseball is familiar with batting averages —simply the number of times a player gets a base hit divided by the number of times he goes up at bat (so it's just a percentage between 0 and 1 ). .266 is in general considered an average batting average, while .300 is considered an excellent one. Imagine we have a baseball player, and we want to predict what his season-long batting average will be. You might say we can just use his batting average so far- but this will be a very poor measure at the start of a season! If a player goes up to bat once and gets a single, his batting average is briefly 1.000 , while if he strikes out, his batting average is 0.000 . It doesn't get much better if you go up to bat five or six times- you could get a lucky streak and get an average of 1.000 , or an unlucky streak and get an average of 0 , neither of which are a remotely good predictor of how you will bat that season. Why is your batting average in the first few hits not a good predictor of your eventual batting average? When a player's first at-bat is a strikeout, why does no one predict that he'll never get a hit all season? Because we're going in with prior expectations. We know that in history, most batting averages over a season have hovered between something like .215 and .360 , with some extremely rare exceptions on either side. We know that if a player gets a few strikeouts in a row at the start, that might indicate he'll end up a bit worse than average, but we know he probably won't deviate from that range. Given our batting average problem, which can be represented with a binomial distribution (a series of successes and failures), the best way to represent these prior expectations (what we in statistics just call a prior ) is with the Beta distribution- it's saying, before we've seen the player take his first swing, what we roughly expect his batting average to be. 
The domain of the Beta distribution is (0, 1) , just like a probability, so we already know we're on the right track, but the appropriateness of the Beta for this task goes far beyond that. We expect that the player's season-long batting average will be most likely around .27 , but that it could reasonably range from .21 to .35 . This can be represented with a Beta distribution with parameters $\alpha=81$ and $\beta=219$ : curve(dbeta(x, 81, 219)) I came up with these parameters for two reasons: The mean is $\frac{\alpha}{\alpha+\beta}=\frac{81}{81+219}=.270$ As you can see in the plot, this distribution lies almost entirely within (.2, .35) - the reasonable range for a batting average. You asked what the x axis represents in a beta distribution density plot—here it represents his batting average. Thus notice that in this case, not only is the y-axis a probability (or more precisely a probability density), but the x-axis is as well (batting average is just a probability of a hit, after all)! The Beta distribution is representing a probability distribution of probabilities . But here's why the Beta distribution is so appropriate. Imagine the player gets a single hit. His record for the season is now 1 hit; 1 at bat . We have to then update our probabilities- we want to shift this entire curve over just a bit to reflect our new information. While the math for proving this is a bit involved ( it's shown here ), the result is very simple . The new Beta distribution will be: $\mbox{Beta}(\alpha_0+\mbox{hits}, \beta_0+\mbox{misses})$ Where $\alpha_0$ and $\beta_0$ are the parameters we started with- that is, 81 and 219. Thus, in this case, $\alpha$ has increased by 1 (his one hit), while $\beta$ has not increased at all (no misses yet). That means our new distribution is $\mbox{Beta}(81+1, 219)$ , or: curve(dbeta(x, 82, 219)) Notice that it has barely changed at all- the change is indeed invisible to the naked eye! (That's because one hit doesn't really mean anything). However, the more the player hits over the course of the season, the more the curve will shift to accommodate the new evidence, and furthermore the more it will narrow based on the fact that we have more proof. Let's say halfway through the season he has been up to bat 300 times, hitting 100 out of those times. The new distribution would be $\mbox{Beta}(81+100, 219+200)$ , or: curve(dbeta(x, 81+100, 219+200)) Notice the curve is now both thinner and shifted to the right (higher batting average) than it used to be- we have a better sense of what the player's batting average is. One of the most interesting outputs of this formula is the expected value of the resulting Beta distribution, which is basically your new estimate. Recall that the expected value of the Beta distribution is $\frac{\alpha}{\alpha+\beta}$ . Thus, after 100 hits of 300 real at-bats, the expected value of the new Beta distribution is $\frac{81+100}{81+100+219+200}=.303$ - notice that it is lower than the naive estimate of $\frac{100}{100+200}=.333$ , but higher than the estimate you started the season with ( $\frac{81}{81+219}=.270$ ). You might notice that this formula is equivalent to adding a "head start" to the number of hits and non-hits of a player- you're saying "start him off in the season with 81 hits and 219 non hits on his record"). Thus, the Beta distribution is best for representing a probabilistic distribution of probabilities : the case where we don't know what a probability is in advance, but we have some reasonable guesses. | {
"source": [
"https://stats.stackexchange.com/questions/47771",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3305/"
]
} |
47,802 | This isn't as easy to Google as some other things as, to be clear, I'm not talking about logistic regression in the sense of using regression to predict categorical variables. I'm talking about fitting a logistic growth curve to given data points. To be specific, $x$ is a given year from 1958 to 2012 and $y$ is the estimated global CO2 ppm (parts per million of carbon dioxide) in November of year $x$. Right now it's accelerating but it's got to level off at some point. So I want a logistic curve. I haven't found a relatively straightforward way to do this yet. | See the nls() function. It has a self starting logistic curve model function via SSlogis() . E.g. from the ?nls help page DNase1 <- subset(DNase, Run == 1)
## using a selfStart model
fm1DNase1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal),
DNase1) I suggest you read the help pages for these functions and probably the linked references if possible to find out more. | {
"source": [
"https://stats.stackexchange.com/questions/47802",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14413/"
]
} |
47,840 | In this paper titled "CHOOSING AMONG GENERALIZED LINEAR MODELS APPLIED TO MEDICAL DATA" the authors write: In a generalized linear model, the mean is transformed, by the link
function, instead of transforming the response itself. The two methods
of transformation can lead to quite different results; for example, the mean of log-transformed responses is not the same as the logarithm
of the mean response . In general, the former cannot easily be
transformed to a mean response. Thus, transforming the mean often
allows the results to be more easily interpreted, especially in that
mean parameters remain on the same scale as the measured responses. It appears they advise the fitting of a generalized linear model (GLM) with log link instead of a linear model (LM) with log-transformed response. I do not grasp the advantages of this approach, and it seems quite unusual to me. My response variable looks log-normally distributed. I get similar results in terms of the coefficients and their standard errors with either approach. Still I wonder: If a variable has a log-normal distribution, isn't the mean of the log-transformed variable preferable over the log of the mean untransformed variable, as the mean is the natural summary of a normal distribution, and the log-transformed variable is normally distributed, whereas the variable itself is not? | Although it may appear that the mean of the log-transformed variables is preferable (since this is how the log-normal is typically parameterised), from a practical point of view, the log of the mean is typically much more useful. This is particularly true when your model is not exactly correct, and to quote George Box: "All models are wrong, some are useful." Suppose some quantity is log-normally distributed, blood pressure say (I'm not a medic!), and we have two populations, men and women. One might hypothesise that the average blood pressure is higher in women than in men. This exactly corresponds to asking whether the log of average blood pressure is higher in women than in men. It is not the same as asking whether the average of log blood pressure is higher in women than in men. Don't get confused by the textbook parameterisation of a distribution - it doesn't have any "real" meaning. The log-normal distribution is parameterised by the mean of the log ($\mu_{\ln}$) because of mathematical convenience, but equally we could choose to parameterise it by its actual mean and variance $\mu = e^{\mu_{\ln} + \sigma_{\ln}^2/2}$ $\sigma^2 = (e^{\sigma^2_{\ln}} -1)e^{2 \mu_{\ln} + \sigma_{\ln}^2}$ Obviously, doing so makes the algebra horribly complicated, but it still works and means the same thing. Looking at the above formula, we can see an important difference between transforming the variables and transforming the mean. The log of the mean, $\ln(\mu)$, increases as $\sigma^2_{\ln}$ increases, while the mean of the log, $\mu_{\ln}$, doesn't. This means that women could, on average, have higher blood pressure than men, even though the mean parameter of the log-normal distribution ($\mu_{\ln}$) is the same, simply because the variance parameter is larger. This fact would get missed by a test that used log(Blood Pressure). So far, we have assumed that blood pressure genuinely is log-normal. If the true distributions are not quite log-normal, then transforming the data will (typically) make things even worse than above - since we won't quite know what our "mean" parameter actually means. That is, we won't know whether those two equations for mean and variance I gave above are correct. Using those to transform back and forth will then introduce additional errors. | {
"source": [
"https://stats.stackexchange.com/questions/47840",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10064/"
]
} |
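A short R sketch of the distinction drawn in the answer to 47,840, using simulated log-normal data: lm on log(y) targets the mean of the logs (so its back-transformed fit is the conditional median), while a Gaussian GLM with log link models the log of the conditional mean directly. The simulation parameters are arbitrary, and gaussian(link = "log") can need care with starting values on other data.
set.seed(42)
n <- 5000
x <- runif(n)
y <- exp(rnorm(n, mean = 0.5 + x, sd = 0.8))            # log-normal response
fit_lm  <- lm(log(y) ~ x)                               # models E(log Y | x)
fit_glm <- glm(y ~ x, family = gaussian(link = "log"))  # models log E(Y | x)
newd <- data.frame(x = 0.5)
exp(predict(fit_lm, newd))                   # about exp(1) = 2.72, the conditional median
predict(fit_glm, newd, type = "response")    # about exp(1 + 0.8^2/2) = 3.74, the conditional mean
mean(y[abs(x - 0.5) < 0.02])                 # empirical mean near x = 0.5 agrees with the GLM, not the back-transformed lm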
47,913 | Are Pandas, Statsmodels and Scikit-learn different implementations of machine learning/statistical operations, or are these complementary to one another? Which of these has the most comprehensive functionality? Which one is actively developed and/or supported? I have to implement logistic regression. Any suggestions as to which of these I should use? | I would like to qualify and clarify a bit the accepted answer. The three packages are complementary to each other since they cover different areas, have different main objectives, or emphasize different areas in machine learning/statistics. pandas is mainly a package to handle and operate directly on data. scikit-learn is doing machine learning with emphasis on predictive modeling with often large and sparse data statsmodels is doing "traditional" statistics and econometrics, with much stronger emphasis on parameter estimation and (statistical) testing. statsmodels has pandas as a dependency, pandas optionally uses statsmodels for some statistics. statsmodels is using patsy to provide a similar formula interface to the models as R. There is some overlap in models between scikit-learn and statsmodels, but with different objectives.
see for example The Two Cultures: statistics vs. machine learning? Some more about statsmodels: statsmodels has the lowest development activity and longest release cycle of the three. statsmodels has many contributors but unfortunately still only two "maintainers" (I'm one of them.) The core of statsmodels is "production ready": linear models, robust linear models, generalised linear models and discrete models have been around for several years and are verified against Stata and R. statsmodels also has a time series analysis part covering AR, ARMA and VAR (vector autoregressive) regression, which are not available in any other python package. Some examples to show some specific differences between the machine learning approach in scikit-learn and the statistics and econometrics approach in statsmodels: Simple linear regression, OLS , has a large number of post-estimation analyses http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLSResults.html including tests on parameters, outlier measures and specification tests http://statsmodels.sourceforge.net/devel/stats.html#residual-diagnostics-and-specification-tests Logistic regression can be done in statsmodels either as a Logit model in discrete or as a family in the generalized linear model ( GLM ). http://statsmodels.sourceforge.net/devel/glm.html#module-reference GLM includes the usual families; discrete models contain, besides Logit , also Probit , multinomial and count regression. Logit Using Logit is as simple as this http://statsmodels.sourceforge.net/devel/examples/generated/example_discrete.html >>> import statsmodels.api as sm
>>> x = sm.add_constant(data.exog, prepend=False)
>>> y = data.endog
>>> res1 = sm.Logit(y, x).fit()
Optimization terminated successfully.
Current function value: 0.402801
Iterations 7
>>> print res1.summary()
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 32
Model: Logit Df Residuals: 28
Method: MLE Df Model: 3
Date: Sat, 26 Jan 2013 Pseudo R-squ.: 0.3740
Time: 07:34:59 Log-Likelihood: -12.890
converged: True LL-Null: -20.592
LLR p-value: 0.001502
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1 2.8261 1.263 2.238 0.025 0.351 5.301
x2 0.0952 0.142 0.672 0.501 -0.182 0.373
x3 2.3787 1.065 2.234 0.025 0.292 4.465
const -13.0213 4.931 -2.641 0.008 -22.687 -3.356
==============================================================================
>>> dir(res1)
...
>>> res1.predict(x.mean(0))
0.25282026208742708 | {
"source": [
"https://stats.stackexchange.com/questions/47913",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14653/"
]
} |
47,916 | I wanted to ask a question inspired by an excellent answer to the query about the intuition for the beta distribution. I wanted to get a better understanding of the derivation for the prior distribution for the batting average. It looks like David is backing out the parameters from the mean and the range. Under the assumption that the mean is $0.27$ and the standard deviation is $0.18$, can you back out $\alpha$ and $\beta$ by solving these two equations:
\begin{equation}
\frac{\alpha}{\alpha+\beta}=0.27 \\
\frac{\alpha\cdot\beta}{(\alpha+\beta)^2\cdot(\alpha+\beta+1)}=0.18^2
\end{equation} | Notice that: \begin{equation}
\frac{\alpha\cdot\beta}{(\alpha+\beta)^2}=(\frac{\alpha}{\alpha+\beta})\cdot(1-\frac{\alpha}{\alpha+\beta})
\end{equation} This means the variance can therefore be expressed in terms of the mean as \begin{equation}
\sigma^2=\frac{\mu\cdot(1-\mu)}{\alpha+\beta+1} \\
\end{equation} If you want a mean of $.27$ and a standard deviation of $.18$ (variance $.0324$ ), just calculate: \begin{equation}
\alpha+\beta=\frac{\mu(1-\mu)}{\sigma^2}-1=\frac{.27\cdot(1-.27)}{.0324}-1=5.083333 \\
\end{equation} Now that you know the total, $\alpha$ and $\beta$ are easy: \begin{equation}
\alpha=\mu(\alpha+\beta)=.27 \cdot 5.083333=1.372499 \\
\beta=(1-\mu)(\alpha+\beta)=(1-.27) \cdot 5.083333=3.710831
\end{equation} You can check this answer in R: > mean(rbeta(10000000, 1.372499, 3.710831))
[1] 0.2700334
> var(rbeta(10000000, 1.372499, 3.710831))
[1] 0.03241907 | {
"source": [
"https://stats.stackexchange.com/questions/47916",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7071/"
]
} |
47,919 | I fit a lognormal model on some data points using both frequentist and Bayesian (using a non-informative prior) approaches. However, I got different results. Here are my codes and outputs: Frequentist: > data1 = c(0.32618457, 0.29166954, 0.27427996, 0.23844847, 0.25148180)
> n=length(data1)
>
> lln1 = function(par){ if(par[2]>0) return( -
> sum(log(dlnorm(exp(data1),par[1],par[2]))) ) else return(Inf) }
> optim(c(0,0.1),lln1)
>
> mu sigma
[1] 0.27641155 0.03091169 Bayesian with 20,000 MCMC and 4000 burn: model
{
for( i in 1 : N )
{
x[i] ~ dlnorm(mu, tau)
}
mu ~ dunif(0, 1)
tau ~ dunif(0, 1)
sigma <- 1/tau
}
list(N = 5, x = c(0.32618457, 0.29166954, 0.27427996, 0.23844847, 0.25148180))
Node mean sd MC error 2.5% median 97.5% start sample
mu 0.2417 0.2182 0.001976 0.006612 0.1759 0.8226 4000 16001
sigma 2.625 2.22 0.01899 1.049 2.015 7.755 4000 16001 Since I'm using a non-informative prior, I was wondering why the estimates of mu and sigma are different. | Notice that: \begin{equation}
\frac{\alpha\cdot\beta}{(\alpha+\beta)^2}=(\frac{\alpha}{\alpha+\beta})\cdot(1-\frac{\alpha}{\alpha+\beta})
\end{equation} This means the variance can therefore be expressed in terms of the mean as \begin{equation}
\sigma^2=\frac{\mu\cdot(1-\mu)}{\alpha+\beta+1} \\
\end{equation} If you want a mean of $.27$ and a standard deviation of $.18$ (variance $.0324$ ), just calculate: \begin{equation}
\alpha+\beta=\frac{\mu(1-\mu)}{\sigma^2}-1=\frac{.27\cdot(1-.27)}{.0324}-1=5.083333 \\
\end{equation} Now that you know the total, $\alpha$ and $\beta$ are easy: \begin{equation}
\alpha=\mu(\alpha+\beta)=.27 \cdot 5.083333=1.372499 \\
\beta=(1-\mu)(\alpha+\beta)=(1-.27) \cdot 5.083333=3.710831
\end{equation} You can check this answer in R: > mean(rbeta(10000000, 1.372499, 3.710831))
[1] 0.2700334
> var(rbeta(10000000, 1.372499, 3.710831))
[1] 0.03241907 | {
"source": [
"https://stats.stackexchange.com/questions/47919",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9136/"
]
} |
48,267 | Why use Root Mean Squared Error (RMSE) instead of Mean Absolute Error (MAE)?? Hi I've been investigating the error generated in a calculation - I initially calculated the error as a Root Mean Normalised Squared Error. Looking a little closer, I see the effects of squaring the error gives more weight to larger errors than smaller ones, skewing the error estimate towards the odd outlier. This is quite obvious in retrospect. So my question - in what instance would the Root Mean Squared Error be a more appropriate measure of error than the Mean Absolute Error? The latter seems more appropriate to me or am I missing something? To illustrate this I have attached an example below: The scatter plot shows two variables with a good correlation, the two histograms to the right chart the error between Y(observed )
and Y(predicted) using normalised RMSE (top) and MAE (bottom). There are no significant outliers in this data and MAE gives a lower error than RMSE. Is there any rationale, other than MAE being preferable, for using one measure of error over the other? | This depends on your loss function. In many circumstances it makes sense to give more weight to points further away from the mean--that is, being off by 10 is more than twice as bad as being off by 5. In such cases RMSE is a more appropriate measure of error. If being off by ten is just twice as bad as being off by 5, then MAE is more appropriate. In any case, it doesn't make sense to compare RMSE and MAE to each other as you do in your second-to-last sentence ("MAE gives a lower error than RMSE"). MAE will never be higher than RMSE because of the way they are calculated. They only make sense in comparison to the same measure of error: you can compare RMSE for Method 1 to RMSE for Method 2, or MAE for Method 1 to MAE for Method 2, but you can't say MAE is better than RMSE for Method 1 because it's smaller. | {
"source": [
"https://stats.stackexchange.com/questions/48267",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19931/"
]
} |
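A tiny R illustration of the two points in the answer to 48,267: MAE never exceeds RMSE on the same residuals, and a single large error moves RMSE far more than MAE. The numbers are made up for the example.
rmse <- function(e) sqrt(mean(e^2))
mae  <- function(e) mean(abs(e))
e <- c(-2, -1, 0, 1, 2)          # residuals without an outlier
c(mae(e), rmse(e))               # 1.20 and 1.41
e_out <- c(e, 20)                # add one large residual
c(mae(e_out), rmse(e_out))       # 4.33 and 8.27: the squared loss reacts much more strongly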
48,360 | My question is: do we need to standardize the data set to make sure all variables have the same scale, between [0,1], before fitting logistic regression? The formula is: $$\frac{x_i-\min(x_i)}{\max(x_i)-\min(x_i)}$$ My data set has 2 variables; they describe the same thing for two channels, but the volume is different. Say it's the number of customer visits in two stores, and y here is whether a customer purchases. A customer can visit both stores, or visit the first store twice and the second store once, before he makes a purchase. But the total number of customer visits for the 1st store is 10 times larger than for the second store. When I fit this logistic regression, without standardization, coef(store1)=37, coef(store2)=13 ; if I standardize the data, then coef(store1)=133, coef(store2)=11 . Something like this. Which approach makes more sense? What if I am fitting a decision tree model? I know tree-structured models don't need standardization since the model itself will adjust for it somehow. But I'm checking with all of you. | Standardization isn't required for logistic regression. The main goal of standardizing features is to help convergence of the technique used for optimization. For example, if you use Newton-Raphson to maximize the likelihood, standardizing the features makes the convergence faster. Otherwise, you can run your logistic regression without any standardization treatment on the features. | {
"source": [
"https://stats.stackexchange.com/questions/48360",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18303/"
]
} |
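A quick R check of the answer to 48,360: rescaling the predictors changes the coefficients (they become per-standard-deviation effects) but leaves the fitted probabilities and the deviance unchanged. The simulated data below is a made-up stand-in for the two-store example.
set.seed(1)
store1 <- rpois(500, 50); store2 <- rpois(500, 5)   # very different volumes, as in the question
buy <- rbinom(500, 1, plogis(-2 + 0.03 * store1 + 0.3 * store2))
fit_raw <- glm(buy ~ store1 + store2, family = binomial)
fit_std <- glm(buy ~ scale(store1) + scale(store2), family = binomial)
max(abs(fitted(fit_raw) - fitted(fit_std)))         # essentially zero: same model, different parameterization
c(deviance(fit_raw), deviance(fit_std))             # identical fit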
48,374 | I am wondering if there are any methods for calculating sample size in mixed models? I'm using lmer in R to fit the models (I have random slopes and intercepts). | The longpower package implements the sample size calculations in Liu and Liang (1997) and Diggle et al (2002). The documentation has example code. Here's one, using the lmmpower() function: > require(longpower)
> require(lme4)
> fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
> lmmpower(fm1, pct.change = 0.30, t = seq(0,9,1), power = 0.80)
Power for longitudinal linear model with random slope (Edland, 2009)
n = 68.46972
delta = 3.140186
sig2.s = 35.07153
sig2.e = 654.941
sig.level = 0.05
t = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
power = 0.8
alternative = two.sided
delta.CI = 2.231288, 4.049084
Days = 10.46729
Days CI = 7.437625, 13.496947
n.CI = 41.18089, 135.61202 Also check the liu.liang.linear.power() which " performs the sample size calculation for a linear mixed model" Liu, G., & Liang, K. Y. (1997). Sample size calculations for studies with correlated observations. Biometrics, 53(3), 937-47. Diggle PJ, Heagerty PJ, Liang K, Zeger SL. Analysis of longitudinal data. Second Edition. Oxford. Statistical Science Serires. 2002 Edit: Another way is to "correct" for the effect of clustering. In an ordinary linear model each observation is independent, but in the presence of clustering observations are not independent which can be thought of as having fewer independent observations - the effective sample size is smaller. This loss of effectiveness is known as the design effect : $$ DE = 1 +(m-1)\rho$$
where $m$ is the average cluster size and $\rho$ is the intraclass correlation coefficient (variance partition coefficient). So the sample size obtained through a calculation that ignores clustering is inflated by $DE$ to obtain a sample size that allows for clustering. | {
"source": [
"https://stats.stackexchange.com/questions/48374",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19998/"
]
} |
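A small R sketch of the design-effect adjustment at the end of the answer to 48,374; n_indep, m and rho below are assumed illustration values, with n_indep standing for whatever sample size an ordinary single-level power calculation returns.
n_indep <- 128                        # sample size from a standard power calculation (assumed)
m <- 10                               # average cluster size
rho <- 0.05                           # intraclass correlation coefficient
DE <- 1 + (m - 1) * rho               # design effect, here 1.45
n_clustered <- ceiling(n_indep * DE)  # inflate for the loss of independent information
n_clustered                           # 186 observations, i.e. about 19 clusters of size 10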
48,378 | Given two independent random variables $X\sim \mathrm{Gamma}(\alpha_X,\beta_X)$ and $Y\sim \mathrm{Gamma}(\alpha_Y,\beta_Y)$, what is the distribution of the difference, i.e. $D=X-Y$? If the result is not well-known, how would I go about deriving the result? | I will outline how the problem can be approached and state
what I think the end result will be for the special case
when the shape parameters are integers, but not fill in the
details. First, note that $X-Y$ takes on values in $(-\infty,\infty)$
and so $f_{X-Y}(z)$ has support $(-\infty,\infty)$. Second, from the standard results that the
density of the sum of two independent continuous random variables is the
convolution of their densities, that is,
$$f_{X+Y}(z) = \int_{-\infty}^\infty f_X(x)f_Y(z-x)\,\mathrm dx$$
and that the density of the random variable $-Y$ is
$f_{-Y}(\alpha) = f_Y(-\alpha)$, deduce that
$$f_{X-Y}(z) = f_{X+(-Y)}(z) = \int_{-\infty}^\infty f_X(x)f_{-Y}(z-x)\,\mathrm dx
= \int_{-\infty}^\infty f_X(x)f_Y(x-z)\,\mathrm dx.$$ Third, for non-negative random variables $X$ and $Y$, note that the
above expression simplifies to
$$f_{X-Y}(z) = \begin{cases}
\int_0^\infty f_X(x)f_Y(x-z)\,\mathrm dx, & z < 0,\\
\int_{0}^\infty f_X(y+z)f_Y(y)\,\mathrm dy, & z > 0.
\end{cases}$$ Finally, using parametrization $\Gamma(s,\lambda)$ to mean a
random variable with density
$\lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x)\mathbf 1_{x>0}(x)$,
and with
$X \sim \Gamma(s,\lambda)$ and $Y \sim \Gamma(t,\mu)$ random variables,
we have for $z > 0$ that
$$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty
\lambda\frac{(\lambda (y+z))^{s-1}}{\Gamma(s)}\exp(-\lambda (y+z))
\mu\frac{(\mu y)^{t-1}}{\Gamma(t)}\exp(-\mu y)\,\mathrm dy\\
&= \exp(-\lambda z) \int_0^\infty p(y,z)\exp(-(\lambda+\mu)y)\,\mathrm dy.\tag{1}
\end{align*}$$
Similarly, for $z < 0$,
$$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty
\lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x)
\mu\frac{(\mu (x-z))^{t-1}}{\Gamma(t)}\exp(-\mu (x-z))\,\mathrm dx\\
&= \exp(\mu z) \int_0^\infty q(x,z)\exp(-(\lambda+\mu)x)\,\mathrm dx.\tag{2}
\end{align*}$$ These integrals are not easy to evaluate but for the special case
$s = t$, Gradshteyn and Ryzhik, Tables of Integrals, Series, and Products, Section 3.383, lists the value of
$$\int_0^\infty x^{s-1}(x+\beta)^{s-1}\exp(-\nu x)\,\mathrm dx$$
in terms of polynomial, exponential and Bessel functions of $\beta$
and this can be used to write down explicit expressions for $f_{X-Y}(z)$. From here on, we assume that $s$ and $t$ are integers so
that $p(y,z)$ is a polynomial in $y$ and $z$ of degree $(s+t-2, s-1)$
and $q(x,z)$ is a polynomial in $x$ and $z$ of degree $(s+t-2,t-1)$. For $z > 0$, the integral $(1)$
is the sum of $s$ Gamma integrals with respect to $y$ with coefficients
$1, z, z^2, \ldots z^{s-1}$. It follows that the density of
$X-Y$ is proportional to a mixture density of
$\Gamma(1,\lambda), \Gamma(2,\lambda), \cdots, \Gamma(s,\lambda)$
random variables for $z > 0$. Note that this result
will hold even if $t$ is not an integer. Similarly, for $z < 0$,
the density of
$X-Y$ is proportional to a mixture density of
$\Gamma(1,\mu), \Gamma(2,\mu), \cdots, \Gamma(t,\mu)$
random variables flipped over , that is,
it will have terms such as $(\mu|z|)^{k-1}\exp(\mu z)$
instead of the usual $(\mu z)^{k-1}\exp(-\mu z)$.
Also, this result will hold even if $s$ is not an integer. | {
"source": [
"https://stats.stackexchange.com/questions/48378",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/19999/"
]
} |
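A numerical check of formulas (1) and (2) in the answer to 48,378, written in R with the same rate parameterization: the density of X - Y at each point is obtained by numerically integrating the convolution integral and compared with a Monte Carlo histogram. The shape and rate values are arbitrary.
s <- 3; lambda <- 1.5    # shape and rate of X
tpar <- 2; mu <- 0.8     # shape (called t in the answer) and rate of Y
f_diff <- function(z) {
  # unified integrand: equation (1) when z > 0, equation (2) when z < 0
  integrand <- function(u) dgamma(u + max(z, 0), s, lambda) * dgamma(u + max(-z, 0), tpar, mu)
  integrate(integrand, 0, Inf)$value
}
x <- rgamma(1e6, s, lambda); y <- rgamma(1e6, tpar, mu)
hist(x - y, breaks = 200, freq = FALSE, main = "X - Y")
zs <- seq(-8, 8, length.out = 200)
lines(zs, sapply(zs, f_diff), lwd = 2)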
48,396 | I've just played a game with my kids that basically boils down to: whoever rolls every number at least once on a 6-sided die wins. I won, eventually, and the others finished 1-2 turns later. Now I'm wondering: what is the expectation of the length of the game? I know that the expectation of the number of rolls till you hit a specific number is $\sum_{n=1}^\infty n\frac{1}{6}(\frac{5}{6})^{n-1}=6$ . However, I have two questions: How many times do you have to roll a six-sided die until you get every number at least once? Among four independent trials (i.e. with four players), what is the expectation of the maximum number of rolls needed? [note: it's maximum, not minimum, because at their age, it's more about finishing than about getting there first for my kids] I can simulate the result, but I wonder how I would go about calculating it analytically. Here's a Monte Carlo simulation in Matlab mx=zeros(1000000,1);
for i=1:1000000,
%# assume it's never going to take us >100 rolls
r=randi(6,100,1);
%# since R2013a, unique returns the first occurrence
%# for earlier versions, take the minimum of x
%# and subtract it from the total array length
[~,x]=unique(r);
mx(i,1)=max(x);
end
%# make sure we haven't violated an assumption
assert(numel(x)==6)
%# find the expected value for the coupon collector problem
expectationForOneRun = mean(mx)
%# find the expected number of rolls as a maximum of four independent players
maxExpectationForFourRuns = mean( max( reshape( mx, 4, []), [], 1) )
expectationForOneRun =
14.7014 (SEM 0.006)
maxExpectationForFourRuns =
21.4815 (SEM 0.01) | Because a "completely analytical approach" has been requested, here is an exact solution. It also provides an alternative approach to solving the question at Probability to draw a black ball in a set of black and white balls with mixed replacement conditions . The number of moves in the game, $X$ , can be modeled as the sum of six independent realizations of Geometric $(p)$ variables with probabilities $p=1, 5/6, 4/6, 3/6, 2/6, 1/6$ , each of them shifted by $1$ (because a geometric variable counts only the rolls preceding a success and we must also count the rolls on which successes were observed). By computing with the geometric distribution, we will therefore obtain answers that are $6$ less than the desired ones and therefore must be sure to add $6$ back at the end. The probability generating function (pgf) of such a geometric variable with parameter $p$ is $$f(z, p) = \frac{p}{1-(1-p)z}.$$ Therefore the pgf for the sum of these six variables is $$g(z) = \prod_{i=1}^6 f(z, i/6) = 6^{-z-4} \left(-5\ 2^{z+5}+10\ 3^{z+4}-5\ 4^{z+4}+5^{z+4}+5\right).$$ (The product can be computed in this closed form by separating it into five terms via partial fractions.) The cumulative distribution function (CDF) is obtained from the partial sums of $g$ (as a power series in $z$ ), which amounts to summing geometric series, and is given by $$F(z) = 6^{-z-4} \left(-(1)\ 1^{z+4} + (5)\ 2^{z+4}-(10)\
3^{z+4}+(10)\ 4^{z+4}-(5)\ 5^{z+4}+(1)\ 6^{z+4}\right).$$ (I have written this expression in a form that suggests an alternate derivation via the Principle of Inclusion-Exclusion.) From this we obtain the expected number of moves in the game (answering the first question) as $$\mathbb{E}(6+X) = 6+\sum_{i=1}^\infty \left(1-F(i)\right) = \frac{147}{10}.$$ The CDF of the maximum of $m$ independent versions of $X$ is $F(z)^m$ (and from this we can, in principle, answer any probability questions about the maximum we like, such as what is its variance, what is its 99th percentile, and so on). With $m=4$ we obtain an expectation of $$ 6+\sum_{i=1}^\infty \left(1-F(i)^4\right) \approx 21.4820363\ldots.$$ (The value is a rational fraction which, in reduced form, has a 71-digit denominator.) The standard deviation is $6.77108\ldots.$ Here is a plot of the probability mass function of the maximum for four players (it has been shifted by $6$ already): As one would expect, it is positively skewed. The mode is at $18$ rolls. It is rare that the last person to finish will take more than $50$ rolls (it is about $0.3\%$ ). | {
"source": [
"https://stats.stackexchange.com/questions/48396",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/198/"
]
} |
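The closed-form CDF in the answer to 48,396 can be checked numerically in R. Rewriting F(z) in terms of (k/6)^{z+4} is algebraically identical but avoids overflow, and the two sums reproduce 14.7 and roughly 21.48, matching the Monte Carlo simulation in the question.
Fcdf <- function(z) -(1/6)^(z + 4) + 5 * (2/6)^(z + 4) - 10 * (3/6)^(z + 4) +
  10 * (4/6)^(z + 4) - 5 * (5/6)^(z + 4) + 1
i <- 1:1000                 # terms beyond this are numerically zero
6 + sum(1 - Fcdf(i))        # 14.7, expected number of rolls for one player
6 + sum(1 - Fcdf(i)^4)      # 21.48..., expected rolls until the last of four players finishes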
48,509 | How can I prove that pointwise product of two kernel functions is a kernel function? | By point-wise product, I assume you mean that if $k_1(x,y), k_2(x,y)$ are both valid kernel functions, then their product \begin{align}
k_{p}( x, y) = k_1( x, y) k_2(x,y)
\end{align} is also a valid kernel function. Proving this property is rather straightforward when we invoke Mercer's theorem. Since $k_1, k_2$ are valid kernels, we know (via Mercer) that they must admit an inner product representation. Let $a$ denote the feature vector of $k_1$ and $b$ denote the same for $k_2$. \begin{align}
k_1(x,y) = a(x)^T a(y), \qquad a( z ) = [a_1(z), a_2(z), \ldots a_M(z)] \\
k_2(x,y) = b(x)^T b(y), \qquad b( z ) = [b_1(z), b_2(z), \ldots b_N(z)]
\end{align} So $a$ is a function that produces an $M$-dim vector, and $b$ produces an $N$-dim vector. Next, we just write the product in terms of $a$ and $b$, and perform some regrouping. \begin{align}
k_{p}(x,y) &= k_1(x,y) k_2(x,y)
\\&= \Big( \sum_{m=1}^M a_m(x) a_m(y) \Big) \Big( \sum_{n=1}^N b_n(x) b_n(y) \Big)
\\&= \sum_{m=1}^M \sum_{n=1}^N [ a_m(x) b_n(x) ] [a_m(y) b_n(y)]
\\&= \sum_{m=1}^M \sum_{n=1}^N c_{mn}( x ) c_{mn}( y )
\\&= c(x)^T c(y)
\end{align} where $c(z)$ is an $M \cdot N$ -dimensional vector, s.t. $c_{mn}(z) = a_m(z) b_n(z)$. Now, because we can write $k_p(x,y)$ as an inner product using the feature map $c$, we know $k_p$ is a valid kernel (via Mercer's theorem). That's all there is to it. | {
"source": [
"https://stats.stackexchange.com/questions/48509",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/57207/"
]
} |
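An empirical companion in R to the proof in 48,509: the element-wise (Hadamard) product of two valid kernel matrices, here an RBF kernel and a degree-2 polynomial kernel evaluated on random points, has no negative eigenvalues beyond numerical rounding.
set.seed(1)
x <- matrix(rnorm(60 * 2), ncol = 2)     # 60 points in R^2
d2 <- as.matrix(dist(x))^2               # squared Euclidean distances
K1 <- exp(-d2 / 2)                       # RBF kernel
K2 <- (1 + x %*% t(x))^2                 # polynomial kernel, element-wise square of 1 + <x_i, x_j>
Kp <- K1 * K2                            # point-wise (Hadamard) product
min(eigen(Kp, symmetric = TRUE, only.values = TRUE)$values)  # >= 0 up to rounding error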
48,520 | I was using the kmeans instruction of R for performing the k-means algorithm on Anderson's iris dataset. I have a question about some parameters that I got. The results are: Cluster means:
Sepal.Length Sepal.Width Petal.Length Petal.Width
1 5.006000 3.428000 1.462000 0.246000 In this case, what does "Cluster means" stand for? Is it the mean of the distances of all the objects within the cluster? Also in the last part I have: Within cluster sum of squares by cluster:
[1] 15.15100 39.82097 23.87947
(between_SS / total_SS = 88.4 %) That value of 88.4%, what could its interpretation be? | If you compute the sum of squared distances of each data point to the global sample mean, you get total_SS . If, instead of computing a global sample mean (or 'centroid'), you compute one per group (here, there are three groups) and then compute the sum of squared distances of these three means to the global mean, you get between_SS . (When computing this, you multiply the squared distance of each mean to the global mean by the number of data points it represents.) If there were no discernible pattern of clustering, the three means of the three groups would be close to the global mean, and between_SS would be a very small fraction of total_SS . The opposite is true here, which shows that data points cluster quite neatly in four-dimensional space according to species. | {
"source": [
"https://stats.stackexchange.com/questions/48520",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20078/"
]
} |
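The decomposition described in the answer to 48,520 can be reproduced by hand in R from a kmeans fit on the iris measurements; the ratio betweenss/totss is the 88.4 % shown in the printed output.
X <- as.matrix(iris[, 1:4])
km <- kmeans(X, centers = 3, nstart = 25)
total_SS <- sum(scale(X, scale = FALSE)^2)   # squared distances of all points to the global mean
between_SS <- sum(km$size * rowSums(sweep(km$centers, 2, colMeans(X))^2))
c(total_SS, km$totss)                        # the same number
c(between_SS, km$betweenss)                  # the same number
between_SS / total_SS                        # about 0.884 for this data set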
48,594 | What is the purpose of the link function as a component of the generalized linear model? Why do we need it? Wikipedia states: It can be convenient to match the domain of the link function to the range of the distribution function's mean. What's the advantage of doing this? | A.J. Dobson pointed out the following things in her book: Linear regression assumes that the conditional distribution of the response variable is normally distributed. Generalized linear models can have response variables with conditional distributions other than the Normal distribution – they may even be categorical rather than continuous. Thus they may not range from $-\infty$ to $+\infty$. The relationship between the response and explanatory variables need not be of the simple linear form. This is why we need the link function as a component of the generalized linear model. It links the mean of the dependent variable $Y_i$, which is $E(Y_i)=\mu_i$, to the linear term $x_i^T\beta$ in such a way that the range of the non-linearly transformed mean $g(\mu_i)$ is $-\infty$ to $+\infty$. Thus you can actually form a linear equation $g(\mu_i) = x_i^T\beta$ and use an iteratively reweighted least squares method for maximum likelihood estimation of the model parameters. | {
"source": [
"https://stats.stackexchange.com/questions/48594",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10749/"
]
} |
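A hedged R sketch, added for illustration, of the point above: for a Poisson GLM with a log link, the linear predictor is unbounded while the fitted mean stays positive. The simulated data are an assumption made purely for this example.
set.seed(1)
x <- runif(200, -2, 2)
y <- rpois(200, lambda = exp(0.3 + 0.8 * x))   # mean tied to x'beta through the log link
fit <- glm(y ~ x, family = poisson(link = "log"))
eta <- predict(fit, type = "link")             # g(mu) = x'beta, ranges over (-Inf, Inf)
mu  <- predict(fit, type = "response")         # mu = exp(eta), always positive
range(eta)
range(mu)
all.equal(log(mu), eta)                        # the link maps the mean onto the linear scale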
48,671 | I have read in the abstract of this paper that: "The maximum likelihood (ML) procedure of Hartley and Rao is modified by adapting a transformation from Patterson and Thompson which partitions the likelihood under normality into two parts, one being free of the fixed effects. Maximizing this part yields what are called restricted maximum likelihood (REML) estimators." I also read in the abstract of this paper that REML: "takes into account the loss in degrees of freedom resulting from estimating fixed effects." Sadly I don't have access to the full text of those papers (and probably would not understand them if I did). Also, what are the advantages of REML vs. ML? Under what circumstances may REML be preferred over ML (or vice versa) when fitting a mixed effects model?
Please give an explanation suitable for someone with a high-school (or just beyond) mathematics background! | As per ocram's answer, ML is biased for the estimation of variance components. But observe that the bias gets smaller for larger sample sizes. Hence in answer to your questions "...what are the advantages of REML vs. ML? Under what circumstances may REML be preferred over ML (or vice versa) when fitting a mixed effects model?", for small sample sizes REML is preferred. However, likelihood ratio tests for REML require exactly the same fixed effects specification in both models. So, to compare models with different fixed effects (a common scenario) with an LR test, ML must be used. REML takes account of the number of (fixed effects) parameters estimated, losing 1 degree of freedom for each. This is achieved by applying ML to the least squares residuals, which are independent of the fixed effects.
"source": [
"https://stats.stackexchange.com/questions/48671",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11405/"
]
} |
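A small R sketch, added for illustration and assuming the lme4 package and its sleepstudy data (neither is mentioned in the answer above): REML and ML fits differ mainly in the variance components, and the likelihood-ratio test of a fixed effect is done on ML fits.
library(lme4)
m_reml <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = TRUE)
m_ml   <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
VarCorr(m_reml)    # variance components under REML (corrected for estimating fixed effects)
VarCorr(m_ml)      # ML estimates tend to be somewhat smaller
# An LR test of the fixed effect 'Days' must compare ML fits:
m_ml0 <- lmer(Reaction ~ 1 + (Days | Subject), sleepstudy, REML = FALSE)
anova(m_ml0, m_ml)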
48,838 | Can someone explain to me in an intuitive way what the periodicity of a Markov chain is? It is defined as follows: For all states $i$ in $S$ $d_i$=gcd$\{n \in \mathbb{N} | p_{ii}^{(n)} > 0\} =1$ Thank you for your effort! | First of all, your definition is not entirely correct. Here is the correct definition from Wikipedia, as suggested by Cyan. Periodicity (source: Wikipedia) A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as k = $gcd\{ n: \Pr(X_n = i | X_0 = i) > 0\}$ (where "gcd" is the greatest common divisor). Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in {6, 8, 10, 12, ...} time steps; k would be 2, even though 2 does not appear in this list. If k = 1, then the state is said to be aperiodic: returns to state i can occur at irregular times. In other words, a state i is aperiodic if there exists n such that for all n' ≥ n, $Pr(X_{n'} = i | X_0 = i) > 0.$ Otherwise (k > 1), the state is said to be periodic with period k. A Markov chain is aperiodic if every state is aperiodic. My Explanation The term periodicity describes whether something (an event, or here: the visit of a particular state) is happening at a regular time interval. Here time is measured in the number of states you visit. First Example: Now imagine that the clock represents a Markov chain and every hour mark a state, so we get 12 states. Every state is visited by the hour hand every 12 hours (states) with probability=1, so the greatest common divisor is also 12. So every (hour-)state is periodic with period 12. Second example: Imagine a graph describing a sequence of coin tosses, starting at state $start$ and states $heads$ and $tails$ representing the outcome of the last coin toss. The transition probability is 0.5 for every pair of states (i,j), except $heads$ -> $start$ and $tails$ -> $start$ where it is 0. Now imagine you are in state $heads$. The number of states you have to visit before you visit $heads$ again could be 1, 2, 3, etc. It will happen, so the probability is greater than 0, but it is not exactly predictable when. So the greatest common divisor of all possible numbers of visits which could occur before you visit $heads$ again is 1. This means that $heads$ is aperiodic. The same applies to $tails$. Since it does not apply to $start$, the whole graph is not aperiodic. If we remove $start$, it would be. | {
"source": [
"https://stats.stackexchange.com/questions/48838",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10749/"
]
} |
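A base-R sketch, added for illustration, that computes the period of a state directly from the definition, using the 12-state 'clock' chain of the first example: returns to any state happen only at multiples of 12 steps, so the gcd is 12.
gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)   # small helper; base R has no gcd
P <- matrix(0, 12, 12)
for (i in 1:12) P[i, i %% 12 + 1] <- 1      # deterministic step to the next hour mark
Pn <- diag(12)
return_times <- c()
for (n in 1:36) {                           # inspect P^n for n = 1..36
  Pn <- Pn %*% P
  if (Pn[1, 1] > 0) return_times <- c(return_times, n)
}
return_times                                # 12 24 36: returns only at multiples of 12
Reduce(gcd, return_times)                   # period of state 1 is 12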
49,226 | I would like to know how to interpret a difference in F-measure values. I know that the F-measure is a balanced mean between precision and recall, but I am asking about the practical meaning of a difference in F-measures. For example, if a classifier C1 has an accuracy of 0.4 and another classifier C2 an accuracy of 0.8, then we can say that C2 has correctly classified double the number of test examples compared to C1. However, if a classifier C1 has an F-measure of 0.4 for a certain class and another classifier C2 an F-measure of 0.8, what can we state about the difference in performance of the two classifiers? Can we say that C2 has classified X more instances correctly than C1? | I cannot think of an intuitive meaning of the F-measure, because it's just a combined metric. What's more intuitive than the F-measure, of course, is precision and recall. But using two values, we often cannot determine if one algorithm is superior to another. For example, if one algorithm has higher precision but lower recall than the other, how can you tell which algorithm is better? If you have a specific goal in your mind like 'Precision is the king. I don't care much about recall', then there's no problem. Higher precision is better. But if you don't have such a strong goal, you will want a combined metric. That's the F-measure. By using it, you will compare some of precision and some of recall. The ROC curve is often drawn stating the F-measure. You may find this article interesting as it contains explanations of several measures including ROC curves: http://binf.gmu.edu/mmasso/ROC101.pdf | {
"source": [
"https://stats.stackexchange.com/questions/49226",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13600/"
]
} |
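A tiny R sketch, added for illustration with invented confusion-matrix counts, making the 'combined metric' point concrete: the F-measure is the harmonic mean of precision and recall, so a difference in F alone does not translate into a count of extra correct classifications.
tp <- 40; fp <- 10; fn <- 30; tn <- 120     # made-up counts for one class
precision <- tp / (tp + fp)                 # 0.8
recall    <- tp / (tp + fn)                 # about 0.57
f1 <- 2 * precision * recall / (precision + recall)   # harmonic mean of the two
c(precision = precision, recall = recall, F1 = f1)
# Two classifiers can reach the same F1 with very different precision/recall trade-offs.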
49,243 | R's randomForest package cannot handle factors with more than 32 levels. When it is given more than 32 levels, it emits an error message: Can not handle categorical predictors with more than 32 categories. But the data I have has several factors. Some of them have 1000+ levels and some of them have 100+. It even has 'state' of the United States, which has 52 levels. So, here's my question. Why is there such a limitation? randomForest refuses to run even for the simple case. > d <- data.frame(x=factor(1:50), y=1:50)
> randomForest(y ~ x, data=d)
Error in randomForest.default(m, y, ...) :
Can not handle categorical predictors with more than 32 categories. If it is simply due to a memory limitation, how can scikit-learn's RandomForestRegressor run with more than 32 levels? What is the best way to handle this problem? Suppose that I have X1, X2, ..., X50 independent variables and Y is the dependent variable. And suppose that X1, X2 and X3 have more than 32 levels. What should I do? What I'm thinking of is running a clustering algorithm for each of X1, X2 and X3, where distance is defined as the difference in Y. I'll run three clusterings as there are three problematic variables. And in each clustering, I hope I can find similar levels. And I'll merge them. How does this sound? | It is actually a pretty reasonable constraint because a split on a factor with $N$ levels is actually a selection of one of the $2^N-2$ possible combinations. So even with $N$ like 25 the space of combinations is so huge that such inference makes little sense. Most other implementations simply treat the factor as an ordinal one (i.e. integers from 1 to $N$), and this is one option for solving this problem. Actually RF is often wise enough to slice this into arbitrary groups with several splits. The other option is to change the representation -- maybe your outcome does not directly depend on the state entity but, for instance, on area, population, number of pine trees per capita or other attribute(s) you can plug into your information system instead. It may also be that each state is such an isolated and uncorrelated entity that it requires a separate model for itself. Clustering based on a decision is probably a bad idea because this way you are smuggling information from the decision into attributes, which often ends in overfitting. | {
"source": [
"https://stats.stackexchange.com/questions/49243",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8218/"
]
} |
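A short R sketch, added for illustration, of the 'treat the factor as ordinal' workaround mentioned above: recoding the factor as its integer level index lets randomForest run, at the price of imposing an arbitrary ordering on the levels.
library(randomForest)
d <- data.frame(x = factor(1:50), y = 1:50)
# randomForest(y ~ x, data = d)             # errors: more than 32 categories
d$x_int <- as.integer(d$x)                  # ordinal recoding: level index 1..50
randomForest(y ~ x_int, data = d)           # runs, splitting x_int as a numeric variable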
49,272 | My statistics professor claims that the word "correlation" applies strictly to linear relationships between variates, whereas the word "association" applies broadly to any type of relationship. In other words, he claims the term "non-linear correlation" is an oxymoron. From what I can make of this section in the Wikipedia article on "Correlation and dependence", the Pearson correlation coefficient describes the degree of "linearity" in the relationship between two variates. This suggests that the term "correlation" does in fact apply exclusively to linear relationships. On the other hand, a quick Google search for "non-linear correlation" turns up a number of published papers that use the term. Is my professor correct, or is "correlation" simply a synonym of "association"? | No; correlation is not equivalent to association. However, the meaning of correlation is dependent upon context. The classical statistics definition is, to quote from Kotz and Johnson's Encyclopedia of Statistical Sciences, "a measure of the strength of the linear relationship between two random variables". In mathematical statistics "correlation" seems to generally have this interpretation. In applied areas where data is commonly ordinal rather than numeric (e.g., psychometrics and market research) this definition is not so helpful, as the concept of linearity assumes data that has interval-scale properties. Consequently, in these fields correlation is instead interpreted as indicating a monotonically increasing or decreasing bivariate pattern, or a correlation of the ranks. A number of non-parametric correlation statistics have been developed specifically for this (e.g., Spearman's correlation and Kendall's tau-b). These are sometimes referred to as "non-linear correlations" because they are correlation statistics that do not assume linearity. Amongst non-statisticians correlation often means association (sometimes with and sometimes without a causal connotation). Irrespective of the etymology of correlation, the reality is that amongst non-statisticians it has this broader meaning, and no amount of chastising them for inappropriate usage is likely to change this. I have done a "google" and it seems that some of the uses of non-linear correlation are of this kind (in particular, it seems that some people use the term to denote a smoothish non-linear relationship between numeric variables). The context-dependent nature of the term "non-linear correlation" perhaps means it is ambiguous and should not be used. As regards "correlation", you need to work out the context of the person using the term in order to know what they mean. | {
"source": [
"https://stats.stackexchange.com/questions/49272",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20463/"
]
} |
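A small R sketch, added for illustration, of the distinction drawn above: a perfectly monotone but non-linear relationship has Spearman correlation 1 with Pearson correlation below 1, while a U-shaped association has near-zero correlation of either kind despite a strong dependence.
x <- seq(-3, 3, length.out = 200)
y_mono <- exp(x)                            # monotone but non-linear in x
cor(x, y_mono, method = "pearson")          # below 1
cor(x, y_mono, method = "spearman")         # exactly 1: rank (monotone) correlation
y_u <- x^2                                  # strong association, not monotone
cor(x, y_u, method = "pearson")             # approximately 0
cor(x, y_u, method = "spearman")            # also approximately 0: association without correlation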
49,443 | My dependent variable shown below doesn't fit any stock distribution that I know of. Linear regression produces somewhat non-normal, right-skewed residuals that relate to predicted Y in an odd way (2nd plot). Any suggestions for transformations or other ways to obtain the most valid results and best predictive accuracy? If possible I'd like to avoid clumsy categorizing into, say, 5 values (e.g., 0, lo%, med%, hi%, 1). | Methods of censored regression can handle data like this. They assume the residuals behave as in ordinary linear regression, but that the responses have been modified so that (Left censoring): all values smaller than a low threshold, which is independent of the data (but can vary from one case to the other), have not been quantified; and/or (Right censoring): all values larger than a high threshold, which is independent of the data (but can vary from one case to the other), have not been quantified. "Not quantified" means we know whether or not a value falls below (or above) its threshold, but that's all. The fitting methods typically use maximum likelihood. When the model for the response $Y$ corresponding to a vector $X$ is in the form $$Y \sim X \beta + \varepsilon$$ with iid $\varepsilon$ having a common distribution $F_\sigma$ with PDF $f_\sigma$ (where $\sigma$ are unknown "nuisance parameters"), then--in the absence of censoring--the log likelihood of observations $(x_i, y_i)$ is $$\Lambda = \sum_{i=1}^n \log f_\sigma(y_i - x_i\beta).$$ With censoring present we may divide the cases into three (possibly empty) classes: for indexes $i=1$ to $n_1$, the $y_i$ contain the lower threshold values and represent left censored data; for indexes $i=n_1+1$ to $n_2$, the $y_i$ are quantified; and for the remaining indexes, the $y_i$ contain the upper threshold values and represent right censored data. The log likelihood is obtained in the same way as before: it is the log of the product of the probabilities. $$\Lambda = \sum_{i=1}^{n_1} \log F_\sigma(y_i - x_i\beta) + \sum_{i=n_1+1}^{n_2} \log f_\sigma(y_i - x_i\beta) + \sum_{i=n_2+1}^n \log (1 - F_\sigma(y_i - x_i\beta)).$$ This is maximized numerically as a function of $(\beta, \sigma)$. In my experience, such methods can work well when less than half the data are censored; otherwise, the results can be unstable. Here is a simple R example using the censReg package to illustrate how OLS and censored results can differ (a lot) even with plenty of data. It qualitatively reproduces the data in the question. library("censReg")
set.seed(17)
n.data <- 2960
coeff <- c(-0.001, 0.005)
sigma <- 0.005
x <- rnorm(n.data, 0.5)
y <- as.vector(coeff %*% rbind(rep(1, n.data), x) + rnorm(n.data, 0, sigma))
y.cen <- y
y.cen[y < 0] <- 0
y.cen[y > 0.01] <- 0.01
data = data.frame(list(x, y.cen)) The key things to notice are the parameters: the true slope is $0.005$, the true intercept is $-0.001$, and the true error SD is $0.005$. Let's use both lm and censReg to fit a line: fit <- censReg(y.cen ~ x, data=data, left=0.0, right=0.01)
summary(fit) The results of this censored regression, given by print(fit), are (Intercept) x sigma
-0.001028 0.004935 0.004856 Those are remarkably close to the correct values of $-0.001$, $0.005$, and $0.005$, respectively. fit.OLS <- lm(y.cen ~ x, data=data)
summary(fit.OLS) The OLS fit, given by print(fit.OLS), is (Intercept) x
0.001996 0.002345 Not even remotely close! The estimated standard error reported by summary is $0.002864$, less than half the true value. These kinds of biases are typical of regressions with lots of censored data. For comparison, let's limit the regression to the quantified data: fit.part <- lm(y[0 <= y & y <= 0.01] ~ x[0 <= y & y <= 0.01])
summary(fit.part)
(Intercept) x[0 <= y & y <= 0.01]
0.003240 0.001461 Even worse! A few pictures summarize the situation. lineplot <- function() {
abline(coef(fit)[1:2], col="Red", lwd=2)
abline(coef(fit.OLS), col="Blue", lty=2, lwd=2)
abline(coef(fit.part), col=rgb(.2, .6, .2), lty=3, lwd=2)
}
par(mfrow=c(1,4))
plot(x,y, pch=19, cex=0.5, col="Gray", main="Hypothetical Data")
lineplot()
plot(x,y.cen, pch=19, cex=0.5, col="Gray", main="Censored Data")
lineplot()
hist(y.cen, breaks=50, main="Censored Data")
hist(y[0 <= y & y <= 0.01], breaks=50, main="Quantified Data") The difference between the "hypothetical data" and "censored data" plots is that all y-values below $0$ or above $0.01$ in the former have been moved to their respective thresholds to produce the latter plot. As a result, you can see the censored data all lined up along the bottom and top. Solid red lines are the censored fits, dashed blue lines the OLS fits, both of them based on the censored data only . The dashed green lines are the fits to the quantified data only. It is clear which is best: the blue and green lines are noticeably poor and only the red (for the censored regression fit) looks about right. The histograms at the right confirm that the $Y$ values of this synthetic dataset are indeed qualitatively like those of the question (mean = $0.0032$, SD = $0.0037$). The rightmost histogram shows the center (quantified) part of the histogram in detail. | {
"source": [
"https://stats.stackexchange.com/questions/49443",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2669/"
]
} |
49,528 | Suppose we have some training set $(x_{(i)}, y_{(i)})$ for $i = 1, \dots, m$. Also suppose we run some type of supervised learning algorithm on the training set. Hypotheses are represented as $h_{\theta}(x_{(i)}) = \theta_0+\theta_{1}x_{(i)1} + \cdots +\theta_{n}x_{(i)n}$. We need to find the parameters $\mathbf{\theta}$ that minimize the "distance" between $y_{(i)}$ and $h_{\theta}(x_{(i)})$. Let $$J(\theta) = \frac{1}{2} \sum_{i=1}^{m} (y_{(i)}-h_{\theta}(x_{(i)}))^{2}$$ Then we want to find $\theta$ that minimizes $J(\theta)$. In gradient descent we initialize each parameter and perform the following update: $$\theta_j := \theta_j-\alpha \frac{\partial J(\theta)}{\partial \theta_{j}} $$ What is the key difference between batch gradient descent and stochastic gradient descent? Both use the above update rule. But is one better than the other? | The applicability of batch or stochastic gradient descent really depends on the error manifold expected. Batch gradient descent computes the gradient using the whole dataset. This is great for convex, or relatively smooth error manifolds. In this case, we move somewhat directly towards an optimum solution, either local or global. Additionally, batch gradient descent, given an annealed learning rate, will eventually find the minimum located in its basin of attraction. Stochastic gradient descent (SGD) computes the gradient using a single sample. Most applications of SGD actually use a minibatch of several samples, for reasons that will be explained a bit later. SGD works well (not well, I suppose, but better than batch gradient descent) for error manifolds that have lots of local maxima/minima. In this case, the somewhat noisier gradient calculated using the reduced number of samples tends to jerk the model out of local minima into a region that hopefully is more optimal. Single samples are really noisy, while minibatches tend to average a little of the noise out. Thus, the amount of jerk is reduced when using minibatches. A good balance is struck when the minibatch size is small enough to avoid some of the poor local minima, but large enough that it doesn't avoid the global minima or better-performing local minima. (Incidentally, this assumes that the best minima have a larger and deeper basin of attraction, and are therefore easier to fall into.) One benefit of SGD is that it's computationally a whole lot faster. Large datasets often can't be held in RAM, which makes vectorization much less efficient. Rather, each sample or batch of samples must be loaded, worked with, the results stored, and so on. Minibatch SGD, on the other hand, is usually intentionally made small enough to be computationally tractable. Usually, this computational advantage is leveraged by performing many more iterations of SGD, making many more steps than conventional batch gradient descent. This usually results in a model that is very close to that which would be found via batch gradient descent, or better. The way I like to think of how SGD works is to imagine that I have one point that represents my input distribution. My model is attempting to learn that input distribution. Surrounding the input distribution is a shaded area that represents the input distributions of all of the possible minibatches I could sample. It's usually a fair assumption that the minibatch input distributions are close in proximity to the true input distribution. Batch gradient descent, at all steps, takes the steepest route to reach the true input distribution.
SGD, on the other hand, chooses a random point within the shaded area, and takes the steepest route towards this point. At each iteration, though, it chooses a new point. The average of all of these steps will approximate the true input distribution, usually quite well. | {
"source": [
"https://stats.stackexchange.com/questions/49528",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20616/"
]
} |
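A compact R sketch, added for illustration, contrasting the two update rules on a linear least-squares problem: batch gradient descent uses the full gradient of J(theta) each step, while minibatch SGD uses a noisy gradient from a random subset. The step size, batch size and iteration counts are arbitrary choices for this example.
set.seed(1)
m <- 1000
X <- cbind(1, rnorm(m))                     # design matrix with an intercept column
y <- X %*% c(1, 2) + rnorm(m, sd = 0.5)
alpha <- 0.05
theta_batch <- c(0, 0)
for (it in 1:200) {                         # batch: gradient over all m examples
  grad <- t(X) %*% (X %*% theta_batch - y) / m
  theta_batch <- theta_batch - alpha * drop(grad)
}
theta_sgd <- c(0, 0)
for (it in 1:2000) {                        # SGD: minibatch of 10 examples per step
  idx <- sample(m, 10)
  grad <- t(X[idx, , drop = FALSE]) %*% (X[idx, , drop = FALSE] %*% theta_sgd - y[idx]) / 10
  theta_sgd <- theta_sgd - alpha * drop(grad)
}
rbind(batch = theta_batch, sgd = theta_sgd, ols = coef(lm(y ~ X[, 2])))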
49,540 | I read in Wikipedia : In stratified k-fold cross-validation , the folds are selected so that the mean response value is approximately equal in all the folds. In
the case of a dichotomous classification, this means that each fold
contains roughly the same proportions of the two types of class
labels. Say we are using CV for estimating the performance of a predictor or estimator. What would mean response value (MRV) mean in this context? Just the average value of the predictor / estimator? In what scenarios would "achieving approximately the same MRV" in all folds be actually important? In other words, what are the consequences of not doing so? | Stratification seeks to ensure that each fold is representative of all strata of the data. Generally this is done in a supervised way for classification and aims to ensure each class is (approximately) equally represented across each test fold (which are of course combined in a complementary way to form training folds). The intuition behind this relates to the bias of most classification algorithms. They tend to weight each instance equally, which means overrepresented classes get too much weight (e.g. optimizing F-measure, Accuracy or a complementary form of error). Stratification is not so important for an algorithm that weights each class equally (e.g. optimizing Kappa, Informedness or ROC AUC) or according to a cost matrix (e.g. one that gives a value to each class correctly weighted and/or a cost to each way of misclassifying). See, e.g.
D. M. W. Powers (2014), What the F-measure doesn't measure: Features, Flaws, Fallacies and Fixes. http://arxiv.org/pdf/1503.06410 One specific issue that is important across even unbiased or balanced algorithms, is that they tend not to be able to learn or test a class that isn't represented at all in a fold, and furthermore even the case where only one of a class is represented in a fold doesn't allow generalization to performed resp. evaluated. However even this consideration isn't universal and for example doesn't apply so much to one-class learning, which tries to determine what is normal for an individual class, and effectively identifies outliers as being a different class, given that cross-validation is about determining statistics not generating a specific classifier. On the other hand, supervised stratification compromises the technical purity of the evaluation as the labels of the test data shouldn't affect training, but in stratification are used in the selection of the training instances. Unsupervised stratification is also possible based on spreading similar data around looking only at the attributes of the data, not the true class. See, e.g. https://doi.org/10.1016/S0004-3702(99)00094-6 N. A. Diamantidis, D. Karlis, E. A. Giakoumakis (1997),
Unsupervised stratification of cross-validation for accuracy estimation. Stratification can also be applied to regression rather than classification, in which case, like the unsupervised stratification, similarity rather than identity is used, but the supervised version uses the known true function value. Further complications are rare classes and multilabel classification, where classifications are being done on multiple (independent) dimensions. Here tuples of the true labels across all dimensions can be treated as classes for the purpose of cross-validation. However, not all combinations necessarily occur, and some combinations may be rare. Rare classes and rare combinations are a problem in that a class/combination that occurs at least once but less than K times (in K-CV) cannot be represented in all test folds. In such cases, one could instead consider a form of stratified bootstrapping (sampling with replacement to generate a full-size training fold with repetitions expected and 36.8% expected unselected for testing, with one instance of each class selected initially without replacement for the test fold). Another approach to multilabel stratification is to try to stratify or bootstrap each class dimension separately without seeking to ensure representative selection of combinations. With $L$ labels, $N$ instances and $K_{kl}$ instances of class $k$ for label $l$, we can randomly choose (without replacement) from the corresponding set of labeled instances $D_{kl}$ approximately $N/(L K_{kl})$ instances. This does not ensure optimal balance but rather seeks balance heuristically. This can be improved by barring selection of labels at or over quota unless there is no choice (as some combinations do not occur or are rare). Problems tend to mean either that there is too little data or that the dimensions are not independent. | {
"source": [
"https://stats.stackexchange.com/questions/49540",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
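A base-R sketch, added for illustration, of supervised stratification for a dichotomous outcome: fold labels are assigned separately within each class, so every test fold gets roughly the overall class proportions. The toy labels are an assumption for the example.
set.seed(1)
y <- factor(rep(c("pos", "neg"), times = c(30, 120)))   # imbalanced toy class labels
K <- 5
fold <- integer(length(y))
for (cl in levels(y)) {                     # stratify: assign folds within each class
  idx <- which(y == cl)
  fold[idx] <- sample(rep(1:K, length.out = length(idx)))
}
table(fold, y)                              # about 6 "pos" and 24 "neg" per fold
prop.table(table(fold, y), margin = 1)      # class proportions similar across folds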
49,942 | Let $\theta \in R^{n}$. The Fisher Information Matrix is defined as: $$I(\theta)_{i,j} = -E\left[\frac{\partial^{2} \log(f(X|\theta))}{\partial \theta_{i} \partial \theta_{j}}\bigg|\theta\right]$$ How can I prove the Fisher Information Matrix is positive semidefinite? | Check this out: http://en.wikipedia.org/wiki/Fisher_information#Matrix_form From the definition, we have $$
I_{ij} = \mathrm{E}_\theta \left[ \left(\partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\partial_j \log f_{X\mid\Theta}(X\mid\theta)\right)\right] \, ,
$$ for $i,j=1,\dots,k$, in which $\partial_i=\partial /\partial \theta_i$. Your expression for $I_{ij}$ follows from this one under regularity conditions. For a nonnull vector $u = (u_1,\dots,u_k)^\top\in\mathbb{R}^k$, it follows from the linearity of the expectation that $$
\sum_{i,j=1}^k u_i I_{ij} u_j = \sum_{i,j=1}^k \left( u_i \mathrm{E}_\theta \left[ \left(\partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\partial_j \log f_{X\mid\Theta}(X\mid\theta)\right)\right] u_j \right) \\
= \mathrm{E}_\theta \left[ \left(\sum_{i=1}^k u_i \partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\sum_{j=1}^k u_j \partial_j \log f_{X\mid\Theta} (X\mid\theta)\right)\right] \\
= \mathrm{E}_\theta \left[ \left(\sum_{i=1}^k u_i \partial_i \log f_{X\mid\Theta}(X\mid\theta)\right)^2 \right] \geq 0 \, .
$$ If this component-wise notation is too ugly, note that the Fisher Information matrix $H=(I_{ij})$ can be written as $H = \mathrm{E}_\theta\left[S S^\top\right]$, in which the scores vector $S$ is defined as $$
S = \left( \partial_1 \log f_{X\mid\Theta}(X\mid\theta), \dots, \partial_k \log f_{X\mid\Theta}(X\mid\theta) \right)^\top \, .
$$ Hence, we have the one-liner $$
u^\top H u = u^\top \mathrm{E}_\theta[S S^\top] u = \mathrm{E}_\theta[u^\top S S^\top u] = \mathrm{E}_\theta\left[|| S^\top u ||^2\right] \geq 0.
$$ | {
"source": [
"https://stats.stackexchange.com/questions/49942",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12174/"
]
} |
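A numerical R sketch, added for illustration, of the identity $H = \mathrm{E}_\theta[S S^\top]$ used above: for a normal model with unknown mean and standard deviation, the Monte Carlo average of score outer products is symmetric with non-negative eigenvalues, as the proof requires. The choice of model and parameter values is an assumption for the example.
set.seed(1)
mu <- 2; sigma <- 1.5; n <- 1e5
x <- rnorm(n, mu, sigma)
S <- cbind((x - mu) / sigma^2,                        # d/d mu    of log f(x | mu, sigma)
           -1 / sigma + (x - mu)^2 / sigma^3)         # d/d sigma of log f(x | mu, sigma)
H_hat <- crossprod(S) / n                             # Monte Carlo estimate of E[S S^T]
H_hat                                                 # close to diag(1/sigma^2, 2/sigma^2)
eigen(H_hat, symmetric = TRUE, only.values = TRUE)$values   # all non-negative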