idx | question | answer
---|---|---
53,301 | averaging after n trials of monte carlo simulation or not? which is better statistically? | Both procedures actually lead to the same answer from a probabilistic perspective, i.e., to the same distribution for the Monte Carlo estimator, since
$$\frac{1}{10N}\sum_{i=1}^{10N} f(x_i)\sim\frac{1}{10}\sum_{j=1}^{10} \frac{1}{N}\sum_{k=1}^{N} f(x_{jk})$$
(meaning the two random variables have the same distribution) when
$$x_i,x_{jk}\stackrel{\text{i.i.d.}}{\sim} g$$
In particular,
$$\text{var}\left(\frac{1}{10N}\sum_{i=1}^{10N} f(x_i)\right)=\text{var}\left(\frac{1}{10}\sum_{j=1}^{10} \frac{1}{N}\sum_{k=1}^{N} f(x_{jk})\right)$$
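Not part of the original answer: a quick simulation illustrating the equivalence, assuming a toy choice of $f$ (the identity) and $g$ (a standard normal) with $N = 100$; the two estimators show essentially the same spread across replications.

set.seed(1)
N <- 100; R <- 5000                     # N draws per batch, R replications
# Procedure A: a single average over 10*N draws
est_all <- replicate(R, mean(rnorm(10 * N)))
# Procedure B: the average of 10 batch means of N draws each
est_batched <- replicate(R, mean(colMeans(matrix(rnorm(10 * N), nrow = N))))
var(est_all)       # both close to 1/(10*N) = 0.001
var(est_batched)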
53,302 | averaging after n trials of monte carlo simulation or not? which is better statistically? | I prefer to answer here rather than in the comments because this turned into a long write-up. I am not a statistician by training :) but here are some proposed aspects for setting up your MC analysis...
Number of Experiments
Of course, the more experiments you run, the better, so by far the preferable procedure would be the first. Ideally you would distribute as many experiments as possible, because every experiment drives a different operating point of the whole system. Note that the points are only distributed after the number of experiments has been settled...
The main factors in this decision are:
The variability of the expected value you want to obtain,
The variability of the process,
The quantity of excited subsystems.
Of course, you can scale this strategy up to sensors, machines, processes, facilities, etc... The idea is that the size of the MC run is, in theory, driven by the variabilities described above.
Variability
Here, variability means: non-linearity, dynamics, and time variation.
Evidently, if everything is linear - this does not mean the systems are linear, but that their usual operating domain is reduced to a small number of points - there is no difference between Procedure 1 and Procedure 2.
A non-linear system or machine (e.g. a pressure control system) will require more MC points than a linear one (e.g. a temperature control system),
A dynamical system (e.g. almost any process) will require more MC points than a static system (e.g. a system output, constant KPIs, monitors),
A time-varying system (e.g. event occurrences, periodic or non-periodic trends) will require specific MC points to cover those situations.
Each of these factors carries a particular, well-defined measure of variability, such as the expected rate of change (flow, energy consumed, activity) per rate of resources consumed (time, fuel, man-hours).
With all this clear, you are free to proceed to the stages of the overall system analysis, which run more or less as follows:
Step 1. Exploration. You don't know anything about your system, so you place your 100k MC operating points randomly and you are done,
Step 2. Detection. You need to identify how to distribute those 100k, in order to concentrate your simulation power to cover the most demanded areas,
Step 3. Diagnosis. You obtain reliable estimators and calculate the first variance for your estimators,
Step 4. Isolation. You optimize your simulations and improve the variance for your estimators.
Every system will of course have its own agenda, but as you can see, the more complex the system, the merrier the analysis turns out to be...
If you have any comments, just write below and we can extend the discussion.
Cheers...
53,303 | Are there alternatives to the Bayesian update rule? | There are some alternatives, in fact, but they rely on using non-probabilistic methods. (The uniqueness of Bayes' law is implied by the uniqueness of a single probability measure, and the definition of joint probability - see this other answer for details.)
Dempster-Shafer theory is an alternative, as are more complex formalisms such as DSMT. Similarly, imprecise probabilities can do similar things - but they still use Bayes' law. Fuzzy logic and other formalisms may also be of interest.
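As a concrete illustration of a non-Bayesian update, here is a minimal sketch of Dempster's rule of combination for two mass functions over a tiny frame of discernment {a, b}. The frame, the mass assignments and the helper functions are invented for the example and are not from the original answer.

# Focal sets are encoded as strings of element names; masses sum to 1 per source.
m1 <- c("a" = 0.6, "ab" = 0.4)   # evidence source 1
m2 <- c("b" = 0.3, "ab" = 0.7)   # evidence source 2

set_intersect <- function(s1, s2) {
  paste(intersect(strsplit(s1, "")[[1]], strsplit(s2, "")[[1]]), collapse = "")
}

combine_dempster <- function(m1, m2) {
  out <- numeric(0)
  conflict <- 0
  for (s1 in names(m1)) {
    for (s2 in names(m2)) {
      inter <- set_intersect(s1, s2)
      w <- m1[[s1]] * m2[[s2]]
      if (inter == "") {
        conflict <- conflict + w          # mass falling on the empty set (conflict K)
      } else if (inter %in% names(out)) {
        out[inter] <- out[inter] + w
      } else {
        out[inter] <- w
      }
    }
  }
  out / (1 - conflict)                    # Dempster normalisation by 1 - K
}

combine_dempster(m1, m2)   # combined masses on "a", "b" and "ab"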
53,304 | Are there alternatives to the Bayesian update rule? | I'll add another perspective. In E. T. Jaynes's incredible book Probability Theory: The Logic Of Science, he gives a rigorous treatment of an extension of Aristotelian logic to degrees of belief (Jaynes was a Bayesian to the core). That is, he explores a probability theory such that $P = 1$ and $P = 0$ correspond to True and False in classical first-order logic.
He outlines a few desirable properties that such an extension must have:
I. Degrees of Plausibility are represented by real numbers.
II. Qualitative Correspondence with common sense.
IIIa. If a conclusion can be reasoned out in more than one way, then
every possible way must lead to the same result.
IIIb. Always take into account all of the evidence
relevant to a question. Do not arbitrarily ignore some of
the information, basing its conclusions only on what remains.
In other words, the theory is completely non-ideological.
IIIc. Always represent equivalent states of knowledge by
equivalent plausibility assignments. That is, if in two problems
the state of knowledge is the same (except perhaps for
the labeling of the propositions), then it must assign the same
plausibilities in both.
All of these statements are given rigorous interpretations in the book.
Then Jaynes gives a deeply fascinating and rigorous mathematical demonstration that Bayesian reasoning is the unique extension of logic to degrees of belief that satisfies these requirements. So in the sense of Jaynes, the answer is actually no: if you want your theory to be compatible with Aristotle, Bayes is the only way.
53,305 | Are there alternatives to the Bayesian update rule? | To my knowledge, if you assign a probability to your belief, the Bayesian updating rule is the only way to act upon new data in a manner consistent with probability theory.
You might have two reasons to leave the Bayesian framework:
You don't want to assign probabilities to a belief.
You don't have (or don't want to specify) an alternative belief.
One alternative (amongst others) to Bayesian inference is the Null Hypothesis Statistical Testing (NHST) framework, in which you may eventually reject your hypothesis. One could argue that rejecting a belief is a hard form of updating. You do not mention an alternative hypothesis in your question.
If your belief has some degrees of freedom, this is a special case. There is no straight answer on how to judge which model is best, as different criteria coexist. There is a Bayesian way to act (the Bayesian Information Criterion), but also others (the field is known as model selection). I don't know if and how a belief can be updated in this case.
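As a small illustration of the model-selection route mentioned above, here is a sketch comparing two candidate models by BIC and AIC in R; the data and the candidate models are invented for the example, and lower values indicate the preferred model.

set.seed(42)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)                 # data actually generated by the linear model

fit_linear    <- lm(y ~ x)                # candidate belief 1
fit_quadratic <- lm(y ~ x + I(x^2))       # candidate belief 2, one extra degree of freedom

BIC(fit_linear); BIC(fit_quadratic)       # the penalty usually favours the simpler true model
AIC(fit_linear); AIC(fit_quadratic)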
53,306 | Comparing models in linear mixed effects regression in R | While you can compare model 1 and model 2, and choose among them by ordinary likelihood ratio tests or F tests (e.g. anova in R), you cannot compare model 1 with 3 or model 2 with 3 by likelihood ratio tests or F tests. Nor can you compare 1 vs 3 and 2 vs 3 by information criteria, as the response variables are on different scales.
Hence, p-values from anova(fit2,fit3) and anova(fit1,fit3) are misleading. The reason for this is that the model for $\log y$ and the model for, say, $\log (y)^3$ are not nested, and the likelihood ratio and F tests no longer have the usual asymptotic distributions. There are some special tests for such difficult cases (see MacKinnon 1983, Model Specification Tests
Against Non-Nested Alternatives, Econometric Reviews 2, 5-110 link). Hope this helps.
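For the comparison that is valid (model 1 vs model 2), a minimal sketch of the likelihood-ratio route with lme4 might look as follows. The simulated data set, the variable names and the random-effects structure are placeholders rather than the models from the original question; both models share the same response scale and are fit with ML (REML = FALSE) so that anova() gives a valid likelihood-ratio test.

library(lme4)

set.seed(1)
dat <- data.frame(g = factor(rep(1:20, each = 10)),   # 20 groups, 10 observations each
                  x = rnorm(200), z = rnorm(200))
dat$y <- exp(1 + 0.5 * dat$x + rep(rnorm(20, sd = 0.3), each = 10) + rnorm(200, sd = 0.5))

fit1 <- lmer(log(y) ~ x     + (1 | g), data = dat, REML = FALSE)   # nested within fit2
fit2 <- lmer(log(y) ~ x + z + (1 | g), data = dat, REML = FALSE)

anova(fit1, fit2)   # likelihood-ratio test, plus AIC/BIC on a common scale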
53,307 | Should the average prediction = the average value in regression? | For a linear model with an intercept term, yes. This is because the solution satisfies:
$$ X^{t} X\beta = X^{t} y $$
This is a system of equations. The very first row in $X^{t}$ is all ones, so the first equation is:
$$ \sum_i \sum_j x_{ij} \beta_j = \sum_i y_i $$
This reads: the sum of the predictions equals the sum of the responses.
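A quick numerical check of this identity (added for illustration; the data are simulated):

set.seed(7)
x <- rnorm(50)
y <- 2 + 3 * x + rnorm(50)

mean(fitted(lm(y ~ x))) - mean(y)       # with an intercept: essentially zero
mean(fitted(lm(y ~ x - 1))) - mean(y)   # without an intercept: generally not zero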
53,308 | What is Bartlett's theory? | I suspect what Fox et al 2005 refers to is Bartlett 1946 which is a more "general" form of the AR1-based variance estimator (Bartlett 1935). Bartlett 1946's estimator was later adapted for bivariate time series by Quenouille 1947 as a DoF estimator.
Suppose $X$ and $Y$ are two time series of length $N$ where $\rho_{XX,k}$ and $\rho_{YY,k}$ are the autocorrelation coefficients of $X$ and $Y$, respectively, on lag $k$. Then Quenouille 1947 found the effective DoF to be
$$
\hat{N} = N \left(\sum_{k=-\infty}^{\infty} {\rho}_{XX,k} {\rho}_{YY,k}\right)^{-1},
$$
while Bayley & Hammersley 1946 found,
$$
\hat{N} = N\Big(1+2\sum_{k=1}^{N-1}\frac{(N-k)}{N}\rho_{XX,k}{\rho}_{YY,k}\Big)^{-1}.
$$
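As an illustration (not code from the papers), these two effective-DoF estimates could be computed from sample autocorrelations in R as sketched below; the series are simulated AR(1) processes and the infinite sum in Quenouille's formula is truncated at the available lags.

set.seed(1)
N <- 200
x <- arima.sim(list(ar = 0.5), n = N)    # two independent autocorrelated series
y <- arima.sim(list(ar = 0.5), n = N)

rho_x <- acf(x, lag.max = N - 1, plot = FALSE)$acf[-1]   # sample ACF, lags 1..N-1
rho_y <- acf(y, lag.max = N - 1, plot = FALSE)$acf[-1]

N_quenouille <- N / (1 + 2 * sum(rho_x * rho_y))         # Quenouille 1947

k <- 1:(N - 1)
N_bh <- N / (1 + 2 * sum((N - k) / N * rho_x * rho_y))   # Bayley & Hammersley 1946

c(N = N, Quenouille = N_quenouille, BayleyHammersley = N_bh)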
There are many approximations of Bartlett's original estimator. One nice review of these variants can be found in Pyper and Peterman 1998.
It is however very important to note that all the above estimators assume $X$ and $Y$ are uncorrelated ($\rho = 0$, which in neuroimaging is far from reality). The problem is that once this assumption is violated, these estimators remarkably overestimate the variance due to a confounding of autocorrelation and crosscorrelation, a phenomenon also known as statistical aliasing; see Appendix D of Afyouni et al 2018.
So: no correction over-estimates the DoF (underestimates the variance), while the above corrections under-estimate the DoF (overestimate the variance). What can be done? See the estimator recently proposed in Afyouni et al 2018,
\begin{equation}
\begin{split}
\mathbb{V}({\hat\rho})&=N^{-2}\left[\vphantom{\sum_k^M}(N-1)(1-\rho^2)^2 \right. \\
&\quad +\rho^2 \sum_k^M w_k (\rho_{XX,k}^2 + \rho_{YY,k}^2 + \rho_{XY,k}^2 + \rho_{XY,-k}^2)\\
&\quad -2 \rho \sum_k^M w_k (\rho_{XX,k} + \rho_{YY,k}) (\rho_{XY,k} + \rho_{XY,-k}) \\
&\quad +2 \left.\sum_k^M w_k (\rho_{XX,k}\rho_{YY,k}+\rho_{XY,k}\rho_{XY,-k})
\right],
\end{split}
\label{Eq:fastMEIntro}
\end{equation}
where $w_k=N-2-k$. While this is an involved expression, we show that -- with sensible regularisation of the autocorrelation and crosscorrelation function -- this gives accurate DoF / variance estimates over a range of settings. (See also Roy 1989 for an asymptotic derivation of the same).
Bartlett, M. S. (1946). On the Theoretical Specification and Sampling Properties of Autocorrelated Time-Series. Supplement to the Journal of the Royal Statistical Society, 8(1), 27. http://doi.org/10.2307/2983611
Bartlett, M. S. (1935). Some Aspects of the Time-Correlation Problem in Regard to Tests of Significance. Journal of the Royal Statistical Society, 98(3), 536. http://doi.org/10.2307/2342284
Quenouille, M. H. (1947). Notes on the Calculation of Autocorrelations of Linear Autoregressive Schemes. Biometrika, 34(3/4), 365. http://doi.org/10.2307/2332450
Bayley, G. V., & Hammersley, J. M. (1946). The “Effective” Number of Independent Observations in an Autocorrelated Time Series. Supplement to the Journal of the Royal Statistical Society, 8(2), 184. http://doi.org/10.2307/2983560
Pyper, B. J., & Peterman, R. M. (1998). Comparison of methods to account for autocorrelation in correlation analyses of fish data, 2140, 2127–2140.
Afyouni, Soroosh, Stephen M. Smith, and Thomas E. Nichols. "Effective Degrees of Freedom of the Pearson's Correlation Coefficient under Serial Correlation." bioRxiv (2018): 453795. https://www.biorxiv.org/content/early/2018/10/25/453795
Roy, R. (1989). Asymptotic covariance structure of serial correlations in multivariate time series. Biometrika, 76(4), 824–827. http://doi.org/10.1093/biomet/76.4.824
53,309 | What is Bartlett's theory? | In the supporting information for another paper from the same group, I found this elaboration:
Because individual time points in the BOLD signal are not statistically independent, the degrees of freedom must be computed according to Bartlett’s theory, i.e., computing the integral across all time of the square of the autocorrelation function (11).
Reference 11 is the same Jenkins and Watts textbook.
This raises the question, Which autocorrelation function is used: the autocorrelation for the seed region or the autocorrelation for the voxel with which it has been correlated? And what is meant by "across all time" -- across all time lags?
Michael Fox, in a personal communication, responds that the autocorrelation function for the seed region is used, and that "across all time" indeed means across all time lags.
(But why use the autocorrelation for the seed region and ignore the other timeseries?)
I can't find any independent support for this approach.
A handout I found from a course in Atmospheric Sciences at U Dub suggests a vaguely similar approach and attributes it to Bartlett:
Indeed, Bretherton et al, (1999) show that, assuming that one is looking at quadratic
statistics, such as variance and covariance analysis between two variables $x_1$ and $x_2$, and
using Gaussian red noise as a model then a good approximation to use is:
$$\frac{N^*}{N} = \frac{1 - r_1(\Delta t)\,r_2(\Delta t)}{1 + r_1(\Delta t)\,r_2(\Delta t)}$$
where, of course, if we are covarying a variable with itself, $r_1(\Delta t)r_2(\Delta t) = r(\Delta t)^2$. This
goes back as far as Bartlett (1935). Of course, if the time or space series is not Gaussian red noise, then the formula is not accurate. But it is still good practice to use it.
In this, $N^*$ is the degrees of freedom, $N$ is the number of observations, and $r_x$ is the autocorrelation function for signal $x$.
The operative words here are "vaguely similar."
I have now read a good portion of Jenkins and Watts, and skimmed through even more, but have found no description in the text of the correction described by Fox et al. or of the Fisher z-transform.
All that said, I have decided to compute the correction factor the same way Michael says I should . . . but wait: How do I do this?
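For what it's worth, a minimal sketch of the lag-1 approximation quoted above (Bretherton et al.) looks like this in R; it uses simulated series and is not the procedure Fox et al. actually used, which integrates the squared autocorrelation function of the seed region over all lags.

set.seed(2)
N <- 240                                   # e.g. number of fMRI volumes (made-up value)
x <- arima.sim(list(ar = 0.4), n = N)      # stand-in seed time series
y <- arima.sim(list(ar = 0.4), n = N)      # stand-in voxel time series

r1x <- acf(x, lag.max = 1, plot = FALSE)$acf[2]   # lag-1 autocorrelation of x
r1y <- acf(y, lag.max = 1, plot = FALSE)$acf[2]   # lag-1 autocorrelation of y

N_star <- N * (1 - r1x * r1y) / (1 + r1x * r1y)   # effective degrees of freedom
N_star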
53,310 | What is Bartlett's theory? | Bartlett's theory here refers to results from this paper
On the Theoretical Specification and Sampling Properties of Autocorrelated Time-Series.
The main idea here is that if you have $n$ discrete time observations, the effective number of degrees of freedom needs to be less than $n$, since the observations are not independent; it is given by $n \cdot\text{correction factor}$.
For a linear autoregressive model with $s$ lags
$$ x_t = \sum_{l=1}^{s}\rho_{l}\,x_{t-l} + e_t $$
Suppose $s=1$. The asymptotic correction factor for the AR(1) model is given by $\frac{1-\rho^2}{1+\rho^2}$. Thus instead of an asymptotic variance of $\frac{1}{n}$, we have a larger asymptotic variance of $\frac{1}{n}\frac{1+\rho^2}{1-\rho^2}$.
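A quick simulation (added for illustration, not from the original answer) of the corresponding statement for the correlation between two independent AR(1) series: the empirical variance of the sample correlation is close to $\frac{1}{n}\frac{1+\rho^2}{1-\rho^2}$ rather than to $\frac{1}{n}$.

set.seed(3)
n <- 300; rho <- 0.7; R <- 4000

r <- replicate(R, cor(arima.sim(list(ar = rho), n = n),
                      arima.sim(list(ar = rho), n = n)))

var(r)                                   # empirical variance of the sample correlation
(1 / n) * (1 + rho^2) / (1 - rho^2)      # corrected asymptotic variance
1 / n                                    # naive value, too small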
You can similarly find a derivation of Bartlett's formula for the asymptotic covariance of the vector of autocorrelation coefficients $(\rho_1,\ldots, \rho_s)$ in Theorem 7.2.1 of [Brockwell and Davis][2], and for a number of other time-series models as well.
For the case of asymptotic variance between two different time-series, $x(t)$ and $y(t)$, a similar formula applies and is nicely discussed in this paper by [Haugh][3]. Even if you do whiten each individual time-series first, the test statistic for the sample cross-correlation still needs to adjust the asymptotic variance estimate.
[2]: Time Series: Theory and Methods by Brockwell and Davis. Available for free at http://link.springer.com/book/10.1007%2F978-1-4419-0320-4
[3]: Haugh, Larry D. "Checking the independence of two covariance-stationary time series: a univariate residual cross-correlation approach." Journal of the American Statistical Association 71.354 (1976): 378-385.
53,311 | Knn classifier for Online learning | KNN, like any other classifier, can be trained offline and then applied in online settings.
But the data-generating distribution may change over time, so you'll have to handle so-called "concept drift" (see http://en.wikipedia.org/wiki/Concept_drift). The simplest way to deal with it is to retrain the model at some fixed interval, e.g. each week. There are good surveys on concept drift adaptation, e.g. by Gama et al, 2014.
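A minimal sketch of the periodic-retraining idea for kNN in R (added for illustration; for kNN, "retraining" just means refreshing the reference set, here with a sliding window of the most recent labelled points):

library(class)                          # provides knn()

set.seed(4)
window_size <- 500                      # keep only the most recent labelled points
train_x <- matrix(rnorm(2 * window_size), ncol = 2)
train_y <- factor(ifelse(train_x[, 1] + train_x[, 2] > 0, "A", "B"))

predict_online <- function(new_x) {
  knn(train_x, new_x, cl = train_y, k = 5)
}

update_window <- function(new_x, new_y) {
  # Append the newly labelled batch and drop the oldest points
  train_x <<- tail(rbind(train_x, new_x), window_size)
  train_y <<- factor(tail(c(as.character(train_y), as.character(new_y)), window_size))
}

x_new <- matrix(rnorm(2), ncol = 2)
predict_online(x_new)                   # classify a new point
update_window(x_new, "A")               # fold it back in once its true label arrives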
53,312 | Knn classifier for Online learning | KNN is essentially a special (extreme) case of the EM algorithm. Online variants for the EM have been developed (see for instance http://arxiv.org/pdf/0712.4273v3.pdf) and so the short answer to the question is yes
53,313 | How to simulate type I error and type II error | First, a conventional way to write a test of hypothesis is:
$H_0: \mu=0$ and $H_1: \mu \ne 0$ or $H_1: \mu >0$ or $H_1: \mu <0$ based on the interest of the study.
Let's define Type I error:
Probability of rejecting null hypothesis when it is TRUE.
Type II error:
Probability of not rejecting null hypothesis when it is False.
Let's test type I error:
To observe the type I error of a test we need to generate/simulate data from the distribution specified by the null hypothesis. Consider the following R code:
n <- 10000   # testing 10,000 times
t1err <- 0
for (i in 1:n) {
  x <- rnorm(100, 0, 1)
  if (t.test(x, mu = 0)$p.value <= 0.05) t1err <- t1err + 1
}
cat("Type I error rate in percentage is", (t1err / n) * 100, "%")
It should give you about 5% error as Type I error.
Let's observe Type II error:
To test Type II error we have to generate/simulate data from a distribution other than the one specified by the null hypothesis. Consider the following R code:
n <- 10000   # testing 10,000 times
t2err <- 0
for (i in 1:n) {
  x <- rnorm(100, 2, 1)
  if (t.test(x, mu = 0)$p.value > 0.05) t2err <- t2err + 1
}
cat("Type II error rate in percentage is", (t2err / n) * 100, "%")
You will see 0.0%, because with a true mean of 2, a standard deviation of 1 and $n = 100$ the test essentially always rejects. If you increase the standard deviation to 5, you will see about 2% error as Type II error.
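You can check this against the analytic power calculation (an addition, using base R's power.t.test): with a true mean of 2, a standard deviation of 5 and n = 100, the one-sample t-test has power of roughly 0.98, i.e. a Type II error rate of roughly 2%.

p <- power.t.test(n = 100, delta = 2, sd = 5, sig.level = 0.05,
                  type = "one.sample", alternative = "two.sided")
p$power        # about 0.98
1 - p$power    # Type II error rate, about 0.02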
53,314 | How to simulate type I error and type II error | While replicating this post, which dwells on a different iteration of the same idea - in this case, how quickly an unscrupulous researcher could generate throw-away pseudo-science with significant p values - I landed on this page and learned from the accepted answer (+1).
It turns out the mean is $20$ as predicted; the median is $14$; and the mode just $1.$ This is in keeping with the right skewed distribution on the histogram below.
Here is the code in R, and the results for mean, median and mode, which sounds like what you are asking in the follow-up comment:
set.seed(3141592)
firsthackingop <- 0  # Vector collecting the number of studies run before hitting the jackpot.
for (i in 1:1e5) {   # The whole search for a sig p value will be done 100,000 times.
  hackingwait <- 1   # Counter of t-tests within each p-searching safari.
  repeat {
    x <- rnorm(100, 0, 1)  # 100 draws from a norm dist as in @overwhelmed's answer.
    if (t.test(x, mu = 0)$p.value > 0.05) {hackingwait <- hackingwait + 1} else {break}
  }
  firsthackingop[i] <- hackingwait
}
mean(firsthackingop)
# [1] 20.17556
median(firsthackingop)
# [1] 14
Mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
Mode(firsthackingop)
[1] 1
hist(firsthackingop, freq = T, main = "No. t-tests before Type I Error",
xlim=c(0,100), col = rgb(.2,.2,.8,.5), border = F,
cex.axis=.75, cex.main=.9, xlab="", ylab="")
Here is the histogram:
It is interesting to note that this is simply the geometric distribution with $p=0.05$, defined as the probability distribution of the number $X$ of Bernoulli trials needed to get one success, whose mean is $\frac{1}{p}=\frac{1}{0.05}=20$ and whose mode is $1.$ The data generation in R is v = rgeom(1e5, 0.05) + 1 and here is the plot:
> Mode(v)
[1] 1
> mean(v)
[1] 20.12817
> median(v)
[1] 14
53,315 | Overdispersion in GLM with Gaussian distribution | how can I check for overdispersion with the Gaussian distribution and how can I correct for it?
The Poisson and the binomial have a variance that's a fixed function of the mean. e.g. for a Poisson, $\text{Var}(X)=\mu$, so it's possible to have some count data which has $\text{Var}(X)>\mu$, i.e. more dispersed than would be expected for the Poisson. There's no corresponding situation for the Gaussian. [If variance were some fixed value, like $1$, then a sample with larger variance would be overdispersed, but in the Gaussian family it's just another Gaussian.]
Since the Gaussian has a variance parameter, more dispersion will just be a larger variance parameter... so you don't have overdispersion with the Gaussian.
So there's nothing to correct. (On the other hand, changing dispersion would be an issue to deal with)
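A small R illustration of the point (added, not part of the original answer): for a Gaussian GLM the dispersion parameter is simply estimated from the data, whereas for a Poisson GLM it is fixed at 1, and a quasi-Poisson fit is how you would let the data estimate it.

set.seed(5)
x <- rnorm(100)
counts <- rpois(100, lambda = exp(0.5 + 0.3 * x))

summary(glm(counts ~ x, family = gaussian()))$dispersion       # always estimated (residual variance)
summary(glm(counts ~ x, family = poisson()))$dispersion        # fixed at 1 by assumption
summary(glm(counts ~ x, family = quasipoisson()))$dispersion   # estimated; values well above 1 flag overdispersion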
The Poisson and the binomial have a variance that's a fixed function of the mean. e.g. for a Poisson, $ | Overdispersion in GLM with Gaussian distribution
how can I check for overdispersion with the Gaussian distribution and how can I correct for it?
The Poisson and the binomial have a variance that's a fixed function of the mean. e.g. for a Poisson, $\text{Var}(X)=\mu$, so it's possible to have some count data which has $\text{Var}(X)>\mu$, i.e. more dispersed than would be expected for the Poisson. There's no corresponding situation for the Gaussian. [If variance were some fixed value, like $1$, then a sample with larger variance would be overdispersed, but in the Gaussian family it's just another Gaussian.]
Since the Gaussian has a variance parameter, more dispersion will just be a larger variance parameter... so you don't have overdispersion with the Gaussian.
So there's nothing to correct. (On the other hand, changing dispersion would be an issue to deal with) | Overdispersion in GLM with Gaussian distribution
how can I check for overdispersion with the Gaussian distribution and how can I correct for it?
The Poisson and the binomial have a variance that's a fixed function of the mean. e.g. for a Poisson, $ |
53,316 | Doing multiple regression without intercept in R (without changing data dimensions) | The formula
lm(formula = y ~ x1 + x2)
will include an intercept by default.
The formula
lm(formula = y ~ x1 + x2 -1)
or
lm(formula = y ~ x1 + x2 +0)
is how R estimates an OLS model without an intercept. The formula
lm(formula = y-1 ~ x1 + x2)
estimates a model against a dependent variable y with 1 subtracted from it.
Centering all terms at their mean will also enforce a zero intercept.
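A quick check of these variants with simulated data (added for illustration):

set.seed(6)
x1 <- rnorm(50); x2 <- rnorm(50)
y  <- 1 + 2 * x1 - x2 + rnorm(50)

coef(lm(y ~ x1 + x2))        # includes "(Intercept)"
coef(lm(y ~ x1 + x2 - 1))    # no intercept term
coef(lm(y ~ x1 + x2 + 0))    # same as above

yc <- y - mean(y); x1c <- x1 - mean(x1); x2c <- x2 - mean(x2)
coef(lm(yc ~ x1c + x2c))     # centred data: fitted intercept is numerically zero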
lm(formula = y ~ x1 + x2)
will include an intercept by default.
The formula
lm(formula = y ~ x1 + x2 -1)
or
lm(formula = y ~ x1 + x2 +0)
is how R estimates an OLS model without an inte | Doing multiple regression without intercept in R (without changing data dimensions)
The formula
lm(formula = y ~ x1 + x2)
will include an intercept by default.
The formula
lm(formula = y ~ x1 + x2 -1)
or
lm(formula = y ~ x1 + x2 +0)
is how R estimates an OLS model without an intercept. The formula
lm(formula = y-1 ~ x1 + x2)
estimates a model against a dependent variable y with 1 subtracted from it.
Centering all terms at their mean will also enforce a zero intercept. | Doing multiple regression without intercept in R (without changing data dimensions)
The formula
lm(formula = y ~ x1 + x2)
will include an intercept by default.
The formula
lm(formula = y ~ x1 + x2 -1)
or
lm(formula = y ~ x1 + x2 +0)
is how R estimates an OLS model without an inte |
53,317 | The fallacy of splitting data collection time into shorter intervals to reduce error | A constant rate of events per unit time is called a Poisson process when the outcomes in one time interval are independent of the outcomes in any other time interval. This independence assumption is usually valid and supported by physical considerations. When not, it can be tested.
A Poisson process is characterized by a single parameter. The rate per unit time is usually chosen. Let's call it $\lambda$. This means the expected number of events observed throughout a duration $dt$ is $\lambda dt$. The variance of the number of events is also equal to $\lambda dt$.
Suppose you were to split the duration $dt$ into nonoverlapping subintervals $dt_1, dt_2, \ldots, dt_k$, with $dt_1 + dt_2 + \cdots + dt_k = dt$. Then:
The expected number of observations would equal the sum of the expectations,
$$\sum_{i=1}^k \lambda dt_i = \lambda \sum_{i=1}^k dt_i = \lambda dt,$$
just as it should.
The variance of the number of observations would equal the sum of the variances because the numbers of observations are independent. The calculation is exactly the same, only this time the quantities represent the variances:
$$\sum_{i=1}^k \lambda dt_i = \lambda \sum_{i=1}^k dt_i = \lambda dt.$$
Therefore, splitting the data into groups of observations does not improve the precision with which you can estimate the rate.
It can pay to go a little further. Suppose the rate does vary over time, but slowly enough that assuming a constant rate within each subinterval $dt_i$ is a good approximation. Let $X_i$ have Poisson$(\lambda_i)$ distributions and be independent, just as before. (To simplify the notation, I have incorporated the dependence on the durations $dt_i$ within the parameters $\lambda_i$.) Upon observing each of the $X_i$ you might want to estimate the underlying mean rate (per smaller interval) as the sample mean
$$\hat\lambda = \frac{1}{k} \sum_{i=1}^k x_i$$
and the variance in that rate as the sample variance
$$\hat\sigma^2 = \frac{1}{k-1} \sum_{i=1}^k (x_i - \hat\lambda)^2.$$
Using the Poisson distribution facts that $\mathbb{E}(X_i) = \lambda_i$ and $\mathbb{E}(X_i^2) = \text{Var}{X_i} + \mathbb{E}(X_i)^2 = \lambda_i + \lambda_i^2$, a little algebra shows that
$$\mathbb{E}(\hat\lambda) = \frac{\lambda}{k}$$
($\lambda = \sum_i \lambda_i$), which we may interpret as the mean rate per (smaller) interval, and
$$\mathbb{E}(\hat\sigma^2) = \frac{1}{k-1}\left(\sum_i (\lambda_i - \lambda/k)^2\right) + \frac{\lambda}{k}.$$
The right hand term is the variance of a Poisson process of constant rate $\lambda/k$. The other term is the variance of the separate rates. When there is no variation, the first term drops out and this reduces to the previous result: the variance equals the mean and from that we derive the standard error as usual. But when the underlying rate does vary, its variance contributes to the expected sample variance, creating an overdispersed dataset.
Incidentally, the effect of the subdivision is never to decrease the estimated variance (and therefore the standard error): it can only increase it.
We see from this that the potential advantage of dividing the whole time interval into little parts is that it allows us to detect variations in the underlying rate and to use them to increase our estimate of the standard error. If you are confident that the underlying rate does not appreciably change, then there is no need to collect data from subintervals--but it couldn't hurt.
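A short simulation along these lines (added for illustration, with a made-up rate of 3 events per minute over 120 minutes): with a constant rate, the standard error of the estimated rate is the same whether you record one 120-minute count or twelve 10-minute counts; with a slowly drifting rate, the subdivided data reveal the extra variance.

set.seed(8)
lambda <- 3          # events per minute
R <- 10000           # replications

rate_single <- replicate(R, rpois(1, lambda * 120) / 120)            # one long count
rate_split  <- replicate(R, mean(rpois(12, lambda * 10) / 10))       # twelve short counts
c(sd(rate_single), sd(rate_split))                                   # essentially identical

# A slowly drifting rate inflates the variance seen across subintervals
drift <- lambda * seq(0.8, 1.2, length.out = 12)
disp <- replicate(R, {counts <- rpois(12, drift * 10); var(counts) / mean(counts)})
mean(disp)           # noticeably above 1: overdispersed relative to a constant-rate Poisson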
53,318 | The fallacy of splitting data collection time into shorter intervals to reduce error | One easy way to see this is that in both cases, when asked to predict the mean, you're going to predict the same thing. If your rate is given by $f(x)$, your first four readings are equal to
$$x_i =1/30 \int_{30i}^{30i + 30}f(x)dx$$
for $i \in \{0,1,2,3 \}$ and your other readings are
$$y_j =1/10 \int_{10j}^{10j + 10}f(x)dx$$
for $j \in \{0,1,\dots, 11\}$. In both cases your prediction for the mean will be the same, $\sum_i x_i/4 = \sum_j y_j/12$, so neither can be closer to the true mean. Now what is wrong with the naive logic that you presented?
While we have more samples, they are from a different distribution, one with greater variance, and these effects cancel out completely.
One can construct a similar example. Let $x_i \sim \mathcal{N}(0,1)$ for $i=1, \dots, 2n$. Now define the pairwise averages $y_j = (x_{2j-1} + x_{2j})/2$ for $j = 1, \dots, n$. Basic properties of normal distributions will confirm that $y_j \sim \mathcal{N}(0, 1/2)$. And by the same logic
$$
\sum_i x_i/(2n) = \sum_j y_j/n \sim \mathcal{N}(0, 1/(2n)).
$$
So the sample mean of both distributions is an equally accurate estimate of the mean. With $x_i$ we have more samples, but greater variance. | The fallacy of splitting data collection time into shorter intervals to reduce error | One easy way to see that this is that in both cases, when asked to predict the mean you're going to predict the same thing. If your rate is given by $f(x)$,your first four readings are equal to
$$x_ | The fallacy of splitting data collection time into shorter intervals to reduce error
One easy way to see this is that in both cases, when asked to predict the mean, you're going to predict the same thing. If your rate is given by $f(x)$, your first four readings are equal to
$$x_i =1/30 \int_{30i}^{30i + 30}f(x)dx$$
for $i \in \{0,1,2,3 \}$ and your other readings are
$$y_j =1/10 \int_{10j}^{10j + 10}f(x)dx$$
for $j \in \{0,1,\dots, 11\}$. In both cases your prediction for the mean will be the same, $\sum_i x_i/4 = \sum_j y_j/12$, so neither can be closer to the true mean. Now what is wrong with the naive logic that you presented?
While we have more samples, they are from a different distribution, one with greater variance, and these effects cancel out completely.
One can construct a similar example. Let $x_i \sim \mathcal{N}(0,1)$ for $i=1, \dots, 2n$. Now define the pairwise averages $y_j = (x_{2j-1} + x_{2j})/2$ for $j = 1, \dots, n$. Basic properties of normal distributions will confirm that $y_j \sim \mathcal{N}(0, 1/2)$. And by the same logic
$$
\sum_i x_i/(2n) = \sum_j y_j/n \sim \mathcal{N}(0, 1/(2n)).
$$
So the sample mean of both distributions is an equally accurate estimate of the mean. With $x_i$ we have more samples, but greater variance. | The fallacy of splitting data collection time into shorter intervals to reduce error
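A quick numerical check of this construction (a sketch added here, with an arbitrary choice of $n$): the mean of the $2n$ draws and the mean of the $n$ pairwise averages are literally the same number, and its variance matches $1/(2n)$.
set.seed(2)
n <- 50; nsim <- 20000
x <- matrix(rnorm(nsim * 2 * n), nrow = nsim)                        # each row: 2n draws from N(0, 1)
y <- (x[, seq(1, 2 * n, by = 2)] + x[, seq(2, 2 * n, by = 2)]) / 2   # the n pairwise averages
all.equal(rowMeans(x), rowMeans(y))                                  # TRUE: identical estimates
c(var(rowMeans(x)), 1 / (2 * n))                                     # empirical vs theoretical variance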
One easy way to see that this is that in both cases, when asked to predict the mean you're going to predict the same thing. If your rate is given by $f(x)$,your first four readings are equal to
$$x_ |
53,319 | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution | If you are looking to fit a log-logistic distribution to your data, it is fairly straightforward to do so. In the example below, I am using the function dllog to get at the density of the log-logistic for a given set of values of the shape and scale parameter, but it is no trouble to write the PDF code yourself as well.
(Log-)Likelihood & MLE
A log-logistic distributed random variable has the probability density function [PDF]:
$$
f(X_i; \alpha, \beta) = \dfrac{\left(\tfrac{\beta}{\alpha}\right)\left(\tfrac{X_i}{\alpha}\right)^{\beta - 1}}{\left(1+ \left(\tfrac{X_i}{\alpha}\right)^{\beta}\right)^2}
$$
where $\alpha$ and $\beta$ are the scale and shape parameters respectively.
For a given sample of data $X_1, \ldots, X_N$, this implies that the log-likelihood of the sample is:
$$
\ell_N(\alpha, \beta \mid X_1, \ldots, X_N) = \sum_{i=1}^N \log f(X_i; \alpha, \beta)
$$
The MLE of the parameters given the sample of the data, is given by the maximizer of the log-likelihood:
$$
\hat{\alpha}_{MLE}, \hat{\beta}_{MLE} = \arg\max_{\alpha, \beta}\ell_N(\alpha, \beta \mid X_1, \ldots, X_N)
$$
Computing & optimizing the log-likelihood
In the code below:
0. I have used the function rllog to generate a random sample from a log-logistic distribution with parameters c(5, 6).
1. The function fnLLLL computes the (negative) log-likelihood of the data.
2. The function fnLLLL uses the function dllog from the FAdist package to compute the PDF of the log-logistic distribution, $f$.
3. optim computes the values of $\alpha$ and $\beta$ that minimize the negative log-likelihood, and the values c(2, 3) are the initial values for the optimizer.
Those optimized values are $5.132758$ & $5.654340$, and the optimized value of the negative log-likelihood function is $9239.179$.
# simulate some log-logistic data
library(FAdist)
vY = rllog(n = 1000, shape = 5, scale = 6)
# log-likelihood function
fnLLLL = function(vParams, vData) {
# uses the density function of the log-logistic function from FAdist
return(-sum(log(dllog(vData, shape = vParams[1], scale = vParams[2]))))
}
# optimize it
optim(c(2, 3), fnLLLL, vData = vY)
This gives:
> optim(c(2, 3), fnLLLL, vData = vY)
$par
[1] 5.132758 5.654340
$value
[1] 9239.179
$counts
function gradient
57 NA
$convergence
[1] 0
$message
NULL | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution | If you are looking to fit a log-logistic distribution to your data, it is fairly straightforward to do so. In the example below, I am using the function dllog to get at the density of the log-logistic | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution
If you are looking to fit a log-logistic distribution to your data, it is fairly straightforward to do so. In the example below, I am using the function dllog to get at the density of the log-logistic for a given set of values of the shape and scale parameter, but it is no trouble to write the PDF code yourself as well.
(Log-)Likelihood & MLE
A log-logistic distributed random variable has the probability density function [PDF]:
$$
f(X_i; \alpha, \beta) = \dfrac{\left(\tfrac{\beta}{\alpha}\right)\left(\tfrac{X_i}{\alpha}\right)^{\beta - 1}}{\left(1+ \left(\tfrac{X_i}{\alpha}\right)^{\beta}\right)^2}
$$
where $\alpha$ and $\beta$ are the scale and shape parameters respectively.
For a given sample of data $X_1, \ldots, X_N$, this implies that the log-likelihood of the sample is:
$$
\ell_N(\alpha, \beta \mid X_1, \ldots, X_N) = \sum_{i=1}^N \log f(X_i; \alpha, \beta)
$$
The MLE of the parameters given the sample of the data, is given by the maximizer of the log-likelihood:
$$
\hat{\alpha}_{MLE}, \hat{\beta}_{MLE} = \arg\max_{\alpha, \beta}\ell_N(\alpha, \beta \mid X_1, \ldots, X_N)
$$
Computing & optimizing the log-likelihood
In the code below:
0. I have used the function rllog to generate a random sample from a log-logistic distribution with parameters c(5, 6).
1. The function fnLLLL computes the (negative) log-likelihood of the data.
2. The function fnLLLL uses the function dllog from the FAdist package to compute the PDF of the log-logistic distribution, $f$.
3. optim computes the values of $\alpha$ and $\beta$ that minimize the negative log-likelihood, and the values c(2, 3) are the initial values for the optimizer.
Those optimized values are $5.132758$ & $5.654340$, and the optimized value of the negative log-likelihood function is $9239.179$.
# simulate some log-logistic data
library(FAdist)
vY = rllog(n = 1000, shape = 5, scale = 6)
# log-likelihood function
fnLLLL = function(vParams, vData) {
# uses the density function of the log-logistic function from FAdist
return(-sum(log(dllog(vData, shape = vParams[1], scale = vParams[2]))))
}
# optimize it
optim(c(2, 3), fnLLLL, vData = vY)
This gives:
> optim(c(2, 3), fnLLLL, vData = vY)
$par
[1] 5.132758 5.654340
$value
[1] 9239.179
$counts
function gradient
57 NA
$convergence
[1] 0
$message
NULL | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution
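As an optional follow-up (an addition to this answer, not part of the original): asking optim() for the Hessian of the negative log-likelihood gives approximate standard errors for the two MLEs.
fit <- optim(c(2, 3), fnLLLL, vData = vY, hessian = TRUE)
fit$par                          # same estimates as above
sqrt(diag(solve(fit$hessian)))   # approximate standard errors from the observed information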
If you are looking to fit a log-logistic distribution to your data, it is fairly straightforward to do so. In the example below, I am using the function dllog to get at the density of the log-logistic |
53,320 | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution | I asked above for the parametrization, as in Actuarial Science (at least in the US) the loglogistic (Fisk) is usually parameterized:
$$
\begin{align}
f(x) &= \frac{\gamma\left(\frac{x}{\theta}\right)^\gamma}{x\left[1 + \left(\frac{x}{\theta}\right)^\gamma\right]^2}\\
F(x) &= \frac{\left(\frac{x}{\theta}\right)^\gamma}{1 + \left(\frac{x}{\theta}\right)^\gamma}
\end{align}
$$
Therefore:
$$
LL(x) = \ln\gamma + \gamma\left(\ln x - \ln\theta\right) - \ln x - 2\ln\left(1 + \left(\frac{x}{\theta}\right)^\gamma\right)
$$
As above, create an error function for this (well, negative log likelihood) similar to:
NLL_LGLG <- function(pars, X) {
gamma <- pars[[1]]
theta <- pars[[2]]
LL <- log(gamma) + gamma * (log(X) - log(theta)) - log(X) -
2 * log(1 + (X / theta) ^ gamma)
return(-sum(LL))
}
You can use deriv3 if you want to use gradient based methods like L_BFGS (or calculate it by hand if speed is of the essence), but plugging the above and your data into optim or nloptr should be what you want. If you don't want to roll your own log-likelihoods, and this parameterization is the one you want, you can find dllogis in the actuar package on CRAN. | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution | I asked above for the parametrization, as in Actuarial Science (at least in the US) the loglogistic (Fisk) is usually parameterized:
$$
\begin{align}
f(x) &= \frac{\gamma\left(\frac{x}{\theta}\right)^ | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution
I asked above for the parametrization, as in Actuarial Science (at least in the US) the loglogistic (Fisk) is usually parameterized:
$$
\begin{align}
f(x) &= \frac{\gamma\left(\frac{x}{\theta}\right)^\gamma}{x\left[1 + \left(\frac{x}{\theta}\right)^\gamma\right]^2}\\
F(x) &= \frac{\left(\frac{x}{\theta}\right)^\gamma}{1 + \left(\frac{x}{\theta}\right)^\gamma}
\end{align}
$$
Therefore:
$$
LL(x) = \ln\gamma + \gamma\left(\ln x - \ln\theta\right) - \ln x - 2\ln\left(1 + \left(\frac{x}{\theta}\right)^\gamma\right)
$$
As above, create an error function for this (well, negative log likelihood) similar to:
NLL_LGLG <- function(pars, X) {
gamma <- pars[[1]]
theta <- pars[[2]]
LL <- log(gamma) + gamma * (log(X) - log(theta)) - log(X) -
2 * log(1 + (X / theta) ^ gamma)
return(-sum(LL))
}
You can use deriv3 if you want to use gradient based methods like L_BFGS (or calculate it by hand if speed is of the essence), but plugging the above and your data into optim or nloptr should be what you want. If you don't want to roll your own log-likelihoods, and this parameterization is the one you want, you can find dllogis in the actuar package on CRAN. | Determine Maximum Likelihood Estimate (MLE) of loglogistic distribution
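For completeness, a hedged usage sketch (not from the original answer): minimizing NLL_LGLG with optim on data simulated from actuar's rllogis, which I believe uses this same shape/scale parameterization; the sample size, true parameters, and starting values are arbitrary.
library(actuar)
set.seed(42)
X   <- rllogis(500, shape = 2, scale = 3)   # simulated claims, assuming actuar's parameterization
fit <- optim(c(1, 1), NLL_LGLG, X = X)
fit$par                                     # estimates of (gamma, theta), close to the simulation values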
I asked above for the parametrization, as in Actuarial Science (at least in the US) the loglogistic (Fisk) is usually parameterized:
$$
\begin{align}
f(x) &= \frac{\gamma\left(\frac{x}{\theta}\right)^ |
53,321 | Time Series for each customer | As @forecaster notes: sure you can. Just separate your data by customer.
However, you will need to consider a couple of things. For instance, you only have sales acts, no zeros, so you will need to think about how to fill in zeros. From when till when? Are there periods where zero filling makes no sense, because the product was not even available? And: are you interested in a daily, weekly or other time granularity?
No matter how you decide these questions, your time series will likely be very intermittent, i.e., contain many zeros. ARIMA (as your question is tagged) is not appropriate for that. One well-established forecasting technique for intermittent demands is Croston's method (croston() in the forecast package). Here is a brand new article discussing an alternative that also models obsolescence (which makes sense for your data: this would be customers that won't return) and gives further pointers to the literature.
Most time series forecasting methods, like Croston's, will give you either average demands (e.g., 0.1 if someone buys one unit every 10 time buckets), the usefulness of which is a bit dubious. Or it gives you the demand size conditional on demand being nonzero. You will need to think about what exactly you need.
And actually, depending on what you actually want to use the forecast for, it may turn out that you don't really want separate forecasts per customer, but a total forecast aggregated over all customers. For instance, you will likely capture seasonality far better on aggregate data (Croston's and other intermittent demand methods don't model seasonality at all). | Time Series for each customer | As @forecaster notes: sure you can. Just separate your data by customer.
However, you will need to consider a couple of things. For instance, you only have sales acts, no zeros, so you will need to th | Time Series for each customer
As @forecaster notes: sure you can. Just separate your data by customer.
However, you will need to consider a couple of things. For instance, you only have sales acts, no zeros, so you will need to think about how to fill in zeros. From when till when? Are there periods where zero filling makes no sense, because the product was not even available? And: are you interested in a daily, weekly or other time granularity?
No matter how you decide these questions, your time series will likely be very intermittent, i.e., contain many zeros. ARIMA (as your question is tagged) is not appropriate for that. One well-established forecasting technique for intermittent demands is Croston's method (croston() in the forecast package). Here is a brand new article discussing an alternative that also models obsolescence (which makes sense for your data: this would be customers that won't return) and gives further pointers to the literature.
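As a minimal illustration of Croston's method (a sketch added here, with a made-up demand vector; it assumes you have already zero-filled one customer's purchases into regular time buckets):
library(forecast)
demand <- c(0, 0, 3, 0, 0, 0, 1, 0, 2, 0, 0, 4)   # one customer, one bucket per period
fc <- croston(demand, h = 6)                      # forecast of average demand per bucket
fc$mean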
Most time series forecasting methods, like Croston's, will give you either average demands (e.g., 0.1 if someone buys one unit every 10 time buckets), the usefulness of which is a bit dubious. Or it gives you the demand size conditional on demand being nonzero. You will need to think about what exactly you need.
And actually, depending on what you actually want to use the forecast for, it may turn out that you don't really want separate forecasts per customer, but a total forecast aggregated over all customers. For instance, you will likely capture seasonality far better on aggregate data (Croston's and other intermittent demand methods don't model seasonality at all). | Time Series for each customer
As @forecaster notes: sure you can. Just separate your data by customer.
However, you will need to consider a couple of things. For instance, you only have sales acts, no zeros, so you will need to th |
53,322 | Time Series for each customer | You could use separate time-series models on each customer, but you probably want to account for changes in sales of all customers when modeling each individual customer.
The most obvious way is to simply run VAR on the n-dimensional variable. There's an issue of missing observations: not every customer may have sales in every month. This is easy to address with a state-space (SSM) representation of VAR. The real problem is, of course, dimensionality: you'll have to estimate at least an $n\times n$ matrix. To deal with this issue you could apply PCA, and reduce the dimensionality of the problem to $m\times m$, where $m \ll n$.
Another way of dealing with this issue is to build separate time series models for each customer such as ARIMA. Then calculate the $n\times$n correlation matrix of residuals from these models. Use this correlation matrix to create correlated random innovations for forecasting.
Another way is to run a Kalman filter on the n-dimensional vector assuming an m-dimensional latent factor x. The key here is to assume a diagonal matrix F and a zero matrix B. This will render an $n\times m$ matrix H. So, the dimensionality of your problem went from $n\times n$ to a much lower $n\times m$. | Time Series for each customer | You could use separate time-series models on each customer, but you probably want to account for changes in sales of all customers when modeling each individual customer.
The most obvious way is to si | Time Series for each customer
You could use separate time-series models on each customer, but you probably want to account for changes in sales of all customers when modeling each individual customer.
The most obvious way is to simply run VAR on the n-dimensional variable. There's an issue of missing observations: not every customer may have sales in every month. This is easy to address with a state-space (SSM) representation of VAR. The real problem is, of course, dimensionality: you'll have to estimate at least an $n\times n$ matrix. To deal with this issue you could apply PCA, and reduce the dimensionality of the problem to $m\times m$, where $m \ll n$.
Another way of dealing with this issue is to build separate time series models for each customer such as ARIMA. Then calculate the $n\times$n correlation matrix of residuals from these models. Use this correlation matrix to create correlated random innovations for forecasting.
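A rough sketch of that second approach (not the answerer's code), assuming a hypothetical matrix sales with one zero-filled monthly series per customer in each column:
library(forecast)
fits <- apply(sales, 2, auto.arima)   # one ARIMA model per customer
res  <- sapply(fits, residuals)       # residual matrix (time x customers)
R    <- cor(res)                      # n x n residual correlation matrix
# R can then be used to generate correlated innovations when simulating joint forecasts.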
Another way is to run a Kalman filter on the n-dimensional vector assuming an m-dimensional latent factor x. The key here is to assume a diagonal matrix F and a zero matrix B. This will render an $n\times m$ matrix H. So, the dimensionality of your problem went from $n\times n$ to a much lower $n\times m$. | Time Series for each customer
You could use separate time-series models on each customer, but you probably want to account for changes in sales of all customers when modeling each individual customer.
The most obvious way is to si |
53,323 | Time Series for each customer | Check out some of the Fader and Hardie work, e.g. http://www.statwizards.com/Help/ForecastWizard/WebHelp/Overview/Overview_of_Fader-Hardie_Probability_Models.htm This link seems to be to a commercial site, but Fader and Hardie published their models in standard academic journals such as Marketing Science.
Usually there are working prototypes on Hardie's site, although usually in Excel, not R.
For much of this work, there is a compound model. The number of purchases / amount of purchases is predicted with a NBD model (gamma-poisson). There is a separate model predicting how long the customer will remain a customer. | Time Series for each customer | Check out some of the Fader and Hardie work, e.g. http://www.statwizards.com/Help/ForecastWizard/WebHelp/Overview/Overview_of_Fader-Hardie_Probability_Models.htm This link seems to be to a commercial | Time Series for each customer
Check out some of the Fader and Hardie work, e.g. http://www.statwizards.com/Help/ForecastWizard/WebHelp/Overview/Overview_of_Fader-Hardie_Probability_Models.htm This link seems to be to a commercial site, but Fader and Hardie published their models in standard academic journals such as Marketing Science.
Usually there are working prototypes on Hardie's site, although usually in Excel, not R.
For much of this work, there is a compound model. The number of purchases / amount of purchases is predicted with a NBD model (gamma-poisson). There is a separate model predicting how long the customer will remain a customer. | Time Series for each customer
Check out some of the Fader and Hardie work, e.g. http://www.statwizards.com/Help/ForecastWizard/WebHelp/Overview/Overview_of_Fader-Hardie_Probability_Models.htm This link seems to be to a commercial |
53,324 | Time Series for each customer | This kind of data is referred to as Intermittent Demand as the interval between demand/spend is non-uniform i.e there are days without any spend. Good forecasting software can handle these kinds of problems. | Time Series for each customer | This kind of data is referred to as Intermittent Demand as the interval between demand/spend is non-uniform i.e there are days without any spend. Good forecasting software can handle these kinds of pr | Time Series for each customer
This kind of data is referred to as Intermittent Demand, as the interval between demand/spend is non-uniform, i.e. there are days without any spend. Good forecasting software can handle these kinds of problems. | Time Series for each customer
This kind of data is referred to as Intermittent Demand as the interval between demand/spend is non-uniform i.e there are days without any spend. Good forecasting software can handle these kinds of pr |
53,325 | Complement naive bayes | Let's make this simple, and do a very contrived two class case:
Let's say we have three documents with the following words:
Doc 1: "Food" occurs twice, "Meat" occurs once, "Brain" occurs once
Class of Doc 1: "Health"
Doc 2: "Food" occurs once, "Meat" occurs once, "Kitchen" occurs 9 times, "Job" occurs 5 times.
Class of Doc 2: "Butcher"
Doc 3: "Food" occurs 2 times, "Meat" occurs 1 times, "Job" occurs once.
Class of Doc 3: "Health"
Total word count in class 'Health' - $(2 + 1 + 1) + (2 + 1 + 1) = 8$
Total word count in class 'Butcher' - $(1 + 1 + 9 + 5) = 16$
So we have two possible y classes: $(y = Health)$ and $(y = Butcher)$, with prior probabilities thus:
$p(y = Health) = {2 \over 3}$ (2 out of 3 docs are about health)
$p(y = Butcher) = {1 \over 3}$
Now, for Complement Naive Bayes, instead of calculating the likelihood of a word occurring in a class, we calculate the likelihood that it occurs in other classes. So, we would proceed to calculate the word-class dependencies thus:
Complement Probability of word 'Food' with class 'Health':
$$p(w = Food | \hat y = Health) = {1 \over 16}$$
See? 'Food' occurs 1 time in total for all classes NOT health, and the number of words in class NOT health is 16
Complement Probability of word 'Food' with class 'Butcher':
$$p(w = Food | \hat y = Butcher) = {2+2 \over 8} = {1 \over 2} $$
Here 'Food' occurs $2+2 = 4$ times in total for all classes NOT Butcher (the two Health docs), and the number of words in classes NOT Butcher is 8.
For others,
$$p(w = Kitchen | \hat y = Health) = {9 \over 16}$$
$$p(w = Kitchen | \hat y = Butcher) = {0 \over 8} = 0$$
$$p(w = Meat | \hat y = Health) = {1 \over 16}$$
$$p(w = Meat | \hat y = Butcher) = {2 \over 8}$$
...and so forth
Then, say we had a new document containing the following:
New doc: "Food" - 1, "Job" - 1, "Meat" - 1
For normal Naive Bayes, (neglecting the constant evidence denominator), we would do our calculation and find the one with the maximum argument, viz:
$$argmax \ p(y) \bullet \prod p(w | y)^{f_i}$$
where $f_i$ is the frequency count of word $i$ in document $d$
But for Complement Naive Bayes, we would do:
$$argmax \ p(y) \bullet \prod {1 \over p(w | \hat y)^{f_i}}$$
...and now seek the one with the maximum argument: since the complement probabilities $p(w | \hat y)$ sit in the denominator, a class whose complement explains the document poorly gets a large value.
We would predict the class of this new doc by doing the following:
$$ p(y=Health|w_1 = Food, w_2 =Job, w_3 =Meat) = $$
which gives us
$$ p(y=Health) \bullet {1 \over p(w = Food | \hat y=Health)^{f_{Food}} \bullet p(w = Job | \hat y=Health)^{f_{Job}} \bullet p(w=Meat|\hat y=Health)^{f_{Meat}}} $$
Let's work it out - this will give us:
$$ {2 \over 3} \bullet {1 \over { {1 \over 16}^{1} \bullet {5 \over 16}^{1} \bullet {1 \over 16}^{1} } } = {8192 \over 15} \approx 546.13 $$
and for the second:
$$ p(y=Butcher) \bullet {1 \over p(w = Food | \hat y=Butcher)^{f_{Food}} \bullet p(w = Job | \hat y=Butcher)^{f_{Job}} \bullet p(w=Meat|\hat y=Butcher)^{f_{Meat}}} $$
giving us:
$$ {1 \over 3} \bullet {1 \over { {1 \over 2}^{1} \bullet {1 \over 8}^{1} \bullet {2 \over 8}^{1} } } = {64 \over 3} \approx 21.33 $$
...and likewise for the other classes. So, the one with the larger value (maximum) is said to be the class it belongs to - in this case, our new doc will be classified as belonging to Health. A higher value means the complement probabilities - the chances of seeing these words in all the OTHER classes - are small, i.e. it is highly unlikely that a document with these words belongs to any class other than this one.
Obviously, this example is, again, highly contrived, and we should even talk about Laplacian smoothing. But hope this helps you have a working idea on which you can build! | Complement naive bayes | Let's make this simple, and do a very contrived two class case:
Let's say we have three documents with the following words:
Doc 1: "Food" occurs twice, "Meat" occurs once, "Brain" occurs once
Class of | Complement naive bayes
Let's make this simple, and do a very contrived two class case:
Let's say we have three documents with the following words:
Doc 1: "Food" occurs twice, "Meat" occurs once, "Brain" occurs once
Class of Doc 1: "Health"
Doc 2: "Food" occurs once, "Meat" occurs once, "Kitchen" occurs 9 times, "Job" occurs 5 times.
Class of Doc 2: "Butcher"
Doc 3: "Food" occurs 2 times, "Meat" occurs 1 times, "Job" occurs once.
Class of Doc 3: "Health"
Total word count in class 'Health' - $(2 + 1 + 1) + (2 + 1 + 1) = 8$
Total word count in class 'Butcher' - $(1 + 1 + 9 + 5) = 16$
So we have two possible y classes: $(y = Health)$ and $(y = Butcher)$, with prior probabilities thus:
$p(y = Health) = {2 \over 3}$ (2 out of 3 docs are about health)
$p(y = Butcher) = {1 \over 3}$
Now, for Complement Naive Bayes, instead of calculating the likelihood of a word occurring in a class, we calculate the likelihood that it occurs in other classes. So, we would proceed to calculate the word-class dependencies thus:
Complement Probability of word 'Food' with class 'Health':
$$p(w = Food | \hat y = Health) = {1 \over 16}$$
See? 'Food' occurs 1 time in total for all classes NOT health, and the number of words in class NOT health is 16
Complement Probability of word 'Food' with class 'Butcher':
$$p(w = Food | \hat y = Butcher) = {2+2 \over 8} = {1 \over 2} $$
Here 'Food' occurs $2+2 = 4$ times in total for all classes NOT Butcher (the two Health docs), and the number of words in classes NOT Butcher is 8.
For others,
$$p(w = Kitchen | \hat y = Health) = {9 \over 16}$$
$$p(w = Kitchen | \hat y = Butcher) = {0 \over 8} = 0$$
$$p(w = Meat | \hat y = Health) = {1 \over 16}$$
$$p(w = Meat | \hat y = Butcher) = {2 \over 8}$$
...and so forth
Then, say we had a new document containing the following:
New doc: "Food" - 1, "Job" - 1, "Meat" - 1
For normal Naive Bayes, (neglecting the constant evidence denominator), we would do our calculation and find the one with the maximum argument, viz:
$$argmax \ p(y) \bullet \prod p(w | y)^{f_i}$$
where $f_i$ is the frequency count of word $i$ in document $d$
But for Complement Naive Bayes, we would do:
$$argmax \ p(y) \bullet \prod {1 \over p(w | \hat y)^{f_i}}$$
...and now seek the one with the maximum argument: since the complement probabilities $p(w | \hat y)$ sit in the denominator, a class whose complement explains the document poorly gets a large value.
We would predict the class of this new doc by doing the following:
$$ p(y=Health|w_1 = Food, w_2 =Job, w_3 =Meat) = $$
which gives us
$$ p(y=Health) \bullet {1 \over p(w = Food | \hat y=Health)^{f_{Food}} \bullet p(w = Job | \hat y=Health)^{f_{Job}} \bullet p(w=Meat|\hat y=Health)^{f_{Meat}}} $$
Let's work it out - this will give us:
$$ {2 \over 3} \bullet {1 \over { {1 \over 16}^{1} \bullet {5 \over 16}^{1} \bullet {1 \over 16}^{1} } } = {8192 \over 15} \approx 546.13 $$
and for the second:
$$ p(y=Butcher) \bullet {1 \over p(w = Food | \hat y=Butcher)^{f_{Food}} \bullet p(w = Job | \hat y=Butcher)^{f_{Job}} \bullet p(w=Meat|\hat y=Butcher)^{f_{Meat}}} $$
giving us:
$$ {1 \over 3} \bullet {1 \over { {1 \over 2}^{1} \bullet {1 \over 8}^{1} \bullet {2 \over 8}^{1} } } = {64 \over 3} \approx 21.33 $$
...and likewise for the other classes. So, the one with the larger value (maximum) is said to be the class it belongs to - in this case, our new doc will be classified as belonging to Health. A higher value means the complement probabilities - the chances of seeing these words in all the OTHER classes - are small, i.e. it is highly unlikely that a document with these words belongs to any class other than this one.
Obviously, this example is, again, highly contrived, and we should even talk about Laplacian smoothing. But hope this helps you have a working idea on which you can build! | Complement naive bayes
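To check the toy example end to end, here is a small base-R sketch (an addition, not part of the original answer; no smoothing, priors included) that reproduces the complement probabilities and the two scores above:
counts <- rbind(Health  = c(Food = 2 + 2, Meat = 1 + 1, Brain = 1, Kitchen = 0, Job = 1),
                Butcher = c(Food = 1,     Meat = 1,     Brain = 0, Kitchen = 9, Job = 5))
prior  <- c(Health = 2/3, Butcher = 1/3)
newdoc <- c(Food = 1, Meat = 1, Brain = 0, Kitchen = 0, Job = 1)
# complement word probabilities: word counts in all OTHER classes / total words in those classes
comp <- counts[c("Butcher", "Health"), ] / rowSums(counts)[c("Butcher", "Health")]
rownames(comp) <- c("Health", "Butcher")
score <- prior / apply(t(comp)^newdoc, 2, prod)   # p(y) * product over words of 1 / p(w|y-hat)^f
score                                             # Health ~ 546.1, Butcher ~ 21.3
names(which.max(score))                           # "Health"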
Let's make this simple, and do a very contrived two class case:
Let's say we have three documents with the following words:
Doc 1: "Food" occurs twice, "Meat" occurs once, "Brain" occurs once
Class of |
53,326 | How do we interpret the coefficients of the random effects model? | For the sake of explanation, suppose you have a simple mixed model with a fixed treatment effect and a random subject effect. Suppose further that there are 3 treatment levels A, B, C, and 10 subjects. The mixed model is $$
\mathbf{y = X\boldsymbol \beta + Z \boldsymbol \gamma + \boldsymbol \epsilon}
$$
where $\mathbf{X}\boldsymbol \beta$ is the linear fixed-effects component, $\mathbf{Z}$ is the additional design matrix corresponding to the random-effects parameters, $\boldsymbol \gamma$.
Interpretation of the fixed effect: Suppose we use treatment C as the reference level. Then the fixed effect $\beta_A$ tells us what the change in $y$ would be given a subject $i$ compared to subject $i$'s own average $\bar{y_i}$, due to treatment A.
You are almost correct in interpreting the fixed-effect, except that the "as compared to i's average x" part is not quite right. It is comparing to 1-unit change in $x$ if continuous, or different levels change if $x$ is categorical. It is not compared to average of $x$.
Interpretation of the random effect: The random effect $\gamma_i$ tells us what the additional change in $y$ would be due to subject $i$ itself, regardless of any change due to either treatment effect.
Hope this is clear to you. | How do we interpret the coefficients of the random effects model? | For the sake of explanation, suppose you have a simple mixed model with a fixed treatment effect and a random subject effect. Suppose further that there are 3 treatment levels A, B, C, and 10 subjects | How do we interpret the coefficients of the random effects model?
For the sake of explanation, suppose you have a simple mixed model with a fixed treatment effect and a random subject effect. Suppose further that there are 3 treatment levels A, B, C, and 10 subjects. The mixed model is $$
\mathbf{y = X\boldsymbol \beta + Z \boldsymbol \gamma + \boldsymbol \epsilon}
$$
where $\mathbf{X}\boldsymbol \beta$ is the linear fixed-effects component, $\mathbf{Z}$ is the additional design matrix corresponding to the random-effects parameters, $\boldsymbol \gamma$.
Interpretation of the fixed effect: Suppose we use treatment C as the reference level. Then the fixed effect $\beta_A$ tells us what the change in $y$ would be given a subject $i$ compared to subject $i$'s own average $\bar{y_i}$, due to treatment A.
You are almost correct in interpreting the fixed-effect, except that the "as compared to i's average x" part is not quite right. It is comparing to 1-unit change in $x$ if continuous, or different levels change if $x$ is categorical. It is not compared to average of $x$.
Interpretation of the random effect: The random effect $\gamma_i$ tells us what the additional change in $y$ would be due to subject $i$ itself, regardless of any change due to either treatment effect.
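To make the two interpretations concrete, here is a hedged lme4 sketch (an addition to the answer; the data are simulated and all parameter values are arbitrary):
library(lme4)
set.seed(3)
d <- expand.grid(subject = factor(1:10), treatment = factor(c("A", "B", "C")))
d$treatment <- relevel(d$treatment, ref = "C")                    # use C as the reference level
d$y <- 2 + c(A = 1, B = 0.5, C = 0)[as.character(d$treatment)] +  # true fixed treatment effects
       rnorm(10, sd = 1)[d$subject] +                             # true random subject intercepts
       rnorm(nrow(d), sd = 0.5)                                   # residual error
fit <- lmer(y ~ treatment + (1 | subject), data = d)
fixef(fit)   # beta: e.g. treatmentA is the expected shift relative to treatment C, within a subject
ranef(fit)   # gamma_i: each subject's own shift, common to all treatments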
Hope this is clear to you. | How do we interpret the coefficients of the random effects model?
For the sake of explanation, suppose you have a simple mixed model with a fixed treatment effect and a random subject effect. Suppose further that there are 3 treatment levels A, B, C, and 10 subjects |
53,327 | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is nonnegative? | I thank user @guy for pointing out the mistake of my previous attempt. Instead of just deleting it, I will insert in its place a naive way to showcase what the 2nd Borel-Cantelli lemma tells us, using the case that @guy considers, a sequence of independent Bernoullis$(1/k)$.
Consider the event that all of $X_m, X_{m+1}, \ldots, X_k$ are equal to zero. Since the r.v.'s are independent the probability of this event is the product of the probabilities of the individual events
$$P\left(X_m = X_{m+1} = \cdots = X_k = 0\right) = \prod_{i=m}^kP(X_i=0)$$
$$=\left(1-\frac 1m\right)\cdot\left(1-\frac 1{m+1}\right)\cdot...\cdot \left(1-\frac 1{k-1}\right)\cdot\left(1-\frac 1{k}\right)$$
$$=\left(\frac {m-1}m\right)\cdot\left(\frac {m}{m+1}\right)\cdot...\cdot\left(\frac {k-2}{k-1}\right)\cdot\left(\frac {k-1}{k}\right) $$
$$=\frac {m-1}{k}$$
As $k\rightarrow \infty$ this probability goes to $0$. But this is the probability that the sequence takes only the value zero from index $m$ through $k$. So we just concluded that, no matter where we start (i.e. no matter what or how large the value of $m$ is), the probability that the sequence stays at zero from $m$ onward is zero. Since a zero-one sequence converges to zero exactly when it is identically zero from some index on, the sequence does not converge to zero almost surely, even though it converges in mean to zero (and hence also in probability).
What about some intuition here? I would try this: while the single probability $P(X_k=1)$ tends to zero (and so the expected value of $X_k$ goes to zero too), it does not go "fast enough". So when looking at a whole sequence of $X_k$'s we cannot say that the sequence will converge to zero with probability one. | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is no | I thank user @guy for pointing out the mistake of my previous attempt. Instead of just deleting it, I will insert in its place a naive way to showcase what the 2nd Borel-Cantelli lemma tells us, using | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is nonnegative?
I thank user @guy for pointing out the mistake of my previous attempt. Instead of just deleting it, I will insert in its place a naive way to showcase what the 2nd Borel-Cantelli lemma tells us, using the case that @guy considers, a sequence of independent Bernoullis$(1/k)$.
Consider the event that all of $X_m, X_{m+1}, \ldots, X_k$ are equal to zero. Since the r.v.'s are independent the probability of this event is the product of the probabilities of the individual events
$$P\left(X_m = X_{m+1} = \cdots = X_k = 0\right) = \prod_{i=m}^kP(X_i=0)$$
$$=\left(1-\frac 1m\right)\cdot\left(1-\frac 1{m+1}\right)\cdot...\cdot \left(1-\frac 1{k-1}\right)\cdot\left(1-\frac 1{k}\right)$$
$$=\left(\frac {m-1}m\right)\cdot\left(\frac {m}{m+1}\right)\cdot...\cdot\left(\frac {k-2}{k-1}\right)\cdot\left(\frac {k-1}{k}\right) $$
$$=\frac {m-1}{k}$$
As $k\rightarrow \infty$ this probability goes to $0$. But this is the probability that the sequence takes only the value zero from index $m$ through $k$. So we just concluded that, no matter where we start (i.e. no matter what or how large the value of $m$ is), the probability that the sequence stays at zero from $m$ onward is zero. Since a zero-one sequence converges to zero exactly when it is identically zero from some index on, the sequence does not converge to zero almost surely, even though it converges in mean to zero (and hence also in probability).
What about some intuition here? I would try this: while the single probability $P(X_k=1)$ tends to zero (and so the expected value of $X_k$ goes to zero too), it does not go "fast enough". So when looking at a whole sequence of $X_k$'s we cannot say that the sequence will converge to zero with probability one. | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is no
I thank user @guy for pointing out the mistake of my previous attempt. Instead of just deleting it, I will insert in its place a naive way to showcase what the 2nd Borel-Cantelli lemma tells us, using |
53,328 | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is nonnegative? | Unless I'm making an elementary mistake (entirely possible!), this does not hold, even for discrete random variables with finite support (contrary to another answer). Recall the second Borel-Cantelli Lemma
Second Borel Cantelli Lemma: Let $A_1, A_2, \ldots$ be independent events. If $\sum_{i = 1} ^ \infty P(A_i) = \infty$ then $P(A_i \mbox{ occurs infinitely often}) = 1$.
Let $X_k$ be independent, such that $P(X_k = 1) = 1/k$ and $P(X_k = 0) = 1 - 1/k$. $E|X_k| \to 0$ so $X_k$ converges to $0$ in mean, but $\sum_k P(X_k = 1) = \infty$, and by independence the events $[X_k = 1]$ are independent. Hence with probability $1$, $X_k = 1$ occurs infinitely often; obviously for any $\omega$ such that $X_k(\omega) = 1$ infinitely often we cannot have $X_k(\omega) \to 0$, so the sequence almost surely does not converge to $0$. | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is no | Unless I'm making an elementary mistake (entirely possible!), this does not hold, even for discrete random variables with finite support (contrary to another answer). Recall the second Borel-Cantelli | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is nonnegative?
Unless I'm making an elementary mistake (entirely possible!), this does not hold, even for discrete random variables with finite support (contrary to another answer). Recall the second Borel-Cantelli Lemma
Second Borel Cantelli Lemma: Let $A_1, A_2, \ldots$ be independent events. If $\sum_{i = 1} ^ \infty P(A_i) = \infty$ then $P(A_i \mbox{ occurs infinitely often}) = 1$.
Let $X_k$ be independent, such that $P(X_k = 1) = 1/k$ and $P(X_k = 0) = 1 - 1/k$. $E|X_k| \to 0$ so $X_k$ converges to $0$ in mean, but $\sum_k P(X_k = 1) = \infty$, and by independence the events $[X_k = 1]$ are independent. Hence with probability $1$, $X_k = 1$ occurs infinitely often; obviously for any $\omega$ such that $X_k(\omega) = 1$ infinitely often we cannot have $X_k(\omega) \to 0$, so the sequence almost surely does not converge to $0$. | Does convergence in mean imply convergence almost surely if the limit is zero and the sequence is no
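A quick simulation sketch added for illustration: even though $P(X_k = 1) = 1/k \to 0$, ones keep appearing arbitrarily late in the sequence.
set.seed(4)
k <- 1:1e6
x <- rbinom(length(k), size = 1, prob = 1/k)
max(which(x == 1))   # typically close to 1e6: a 1 still occurs very late in the sequence
sum(x[k > 1e5])      # on average about sum(1/k) over k > 1e5, i.e. roughly 2.3 ones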
Unless I'm making an elementary mistake (entirely possible!), this does not hold, even for discrete random variables with finite support (contrary to another answer). Recall the second Borel-Cantelli |
53,329 | Why does this logistic GAM fit so poorly? | You are ignoring the model intercept when evaluating the model fit. The plot method shows the fitted spline, but the model includes a parametric constant term, just like the intercept in a standard logistic regression model.
Instead, predict from the fitted model using the predict() method for locations on a grid of locations over the interval. For example:
library(mgcv)     # provides gam() and the te() tensor-product smoother
library(plotrix)  # draw.circle() below is assumed to come from plotrix
m.gam <- gam(inside ~ te(x, y), data=df, family=binomial, method = "REML")
locs <- with(df,
data.frame(x = seq(min(x), max(x), length = 100),
y = seq(min(y), max(y), length = 100)))
pred <- expand.grid(locs)
pred <- transform(pred,
fitted = predict(m.gam, newdata = pred, type = "response"))
contour(locs$x, locs$y, matrix(pred$fitted, ncol = 100))
draw.circle(0, 0, 1, border="red")
which gives
Using a te() smoother seems to do a bit better than s() and I used method = "REML" as this can help with situations where the objective function in GCV/UBRE-based selection can become flat (and hence these methods can undersmooth), in case that was the problem here. | Why does this logistic GAM fit so poorly? | You are ignoring the model intercept when evaluating the model fit. The plot method shows the fitted spline, but the model includes a parametric constant term, just like the intercept in a standard lo | Why does this logistic GAM fit so poorly?
You are ignoring the model intercept when evaluating the model fit. The plot method shows the fitted spline, but the model includes a parametric constant term, just like the intercept in a standard logistic regression model.
Instead, predict from the fitted model using the predict() method for locations on a grid of locations over the interval. For example:
library(mgcv)     # provides gam() and the te() tensor-product smoother
library(plotrix)  # draw.circle() below is assumed to come from plotrix
m.gam <- gam(inside ~ te(x, y), data=df, family=binomial, method = "REML")
locs <- with(df,
data.frame(x = seq(min(x), max(x), length = 100),
y = seq(min(y), max(y), length = 100)))
pred <- expand.grid(locs)
pred <- transform(pred,
fitted = predict(m.gam, newdata = pred, type = "response"))
contour(locs$x, locs$y, matrix(pred$fitted, ncol = 100))
draw.circle(0, 0, 1, border="red")
which gives
Using a te() smoother seems to do a bit better than s() and I used method = "REML" as this can help with situations where the objective function in GCV/UBRE-based selection can become flat (and hence these methods can undersmooth), in case that was the problem here. | Why does this logistic GAM fit so poorly?
You are ignoring the model intercept when evaluating the model fit. The plot method shows the fitted spline, but the model includes a parametric constant term, just like the intercept in a standard lo |
53,330 | Is it ok to use a random intercept model without testing for random slopes? | Models with mixed effects have 2 stochastic ingredients:
Residual errors
Random effects
If you use a random slope and notice that the model performs better, it means that there is variability that was not properly captured by the residual errors, the random intercept, and fixed effects. Another direction that you can take is to be flexible on the residual errors (e.g. by using a Student-t instead of a normal distribution for the residual errors). Now the question is: which model performs better? The one with normal errors or with Student-t errors? The one with random slope and normal errors? The one with random slope and Student-t errors? ... And so on. If you can implement all of them and compare them, you can gain understanding on the features of the data. For example, if your model selection favours the model with Student-t errors and no random slope, it tells you that a single slope fits the data better (no variability among subjects in terms of the slope) but there might be some outliers that require a distribution with heavier tails than normal. | Is it ok to use a random intercept model without testing for random slopes? | Models with mixed effects have 2 stochastic ingredients:
Residual errors
Random effects
If you use a random slope and notice that the model performs better, it means that there is variability that w | Is it ok to use a random intercept model without testing for random slopes?
Models with mixed effects have 2 stochastic ingredients:
Residual errors
Random effects
If you use a random slope and notice that the model performs better, it means that there is variability that was not properly captured by the residual errors, the random intercept, and fixed effects. Another direction that you can take is to be flexible on the residual errors (e.g. by using a Student-t instead of a normal distribution for the residual errors). Now the question is: which model performs better? The one with normal errors or with Student-t errors? The one with random slope and normal errors? The one with random slope and Student-t errors? ... And so on. If you can implement all of them and compare them, you can gain understanding on the features of the data. For example, if your model selection favours the model with Student-t errors and no random slope, it tells you that a single slope fits the data better (no variability among subjects in terms of the slope) but there might be some outliers that require a distribution with heavier tails than normal. | Is it ok to use a random intercept model without testing for random slopes?
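In practice, that kind of comparison is often set up along these lines (a sketch added here, assuming a hypothetical data frame d with response y, predictor x, and grouping factor subject):
library(lme4)
m0 <- lmer(y ~ x + (1 | subject),     data = d, REML = FALSE)   # random intercept only
m1 <- lmer(y ~ x + (1 + x | subject), data = d, REML = FALSE)   # random intercept and slope
anova(m0, m1)   # likelihood-ratio comparison; the test of a variance component is conservative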
Models with mixed effects have 2 stochastic ingredients:
Residual errors
Random effects
If you use a random slope and notice that the model performs better, it means that there is variability that w |
53,331 | Is it ok to use a random intercept model without testing for random slopes? | I would echo @East's advocacy for model exploration, but it was already stated very well. Per Barr et al. (2013; full citation below), a failure to fit random slopes when present could result in an inflated Type-I error rate (incorrect rejections of the null hypothesis when the null hypothesis actually is true). So, (as I understand it) if the random slope model would better fit the data than the random intercept model, there will be a tendency for the fixed effect error to be underestimated. In practice (again, as I understand it), the fixed effects coefficients will be estimated reasonably well, but the errors will be estimated incorrectly (with bias towards lower magnitude).
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278. | Is it ok to use a random intercept model without testing for random slopes? | I would echo @East's advocacy for model exploration, but it was already stated very well. Per Barr et all (2013; full citation below), a failure to fit random slopes when present could result in an i | Is it ok to use a random intercept model without testing for random slopes?
I would echo @East's advocacy for model exploration, but it was already stated very well. Per Barr et al. (2013; full citation below), a failure to fit random slopes when present could result in an inflated Type-I error rate (incorrect rejections of the null hypothesis when the null hypothesis actually is true). So, (as I understand it) if the random slope model would better fit the data than the random intercept model, there will be a tendency for the fixed effect error to be underestimated. In practice (again, as I understand it), the fixed effects coefficients will be estimated reasonably well, but the errors will be estimated incorrectly (with bias towards lower magnitude).
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278. | Is it ok to use a random intercept model without testing for random slopes?
I would echo @East's advocacy for model exploration, but it was already stated very well. Per Barr et all (2013; full citation below), a failure to fit random slopes when present could result in an i |
53,332 | ROC for more than 2 outcome categories | Several ideas and references are discussed in:
A simple generalization of the area under the ROC curve to multiple class classification problems.
Multi-class ROC (a tutorial) (using "volumes" under ROC)
Other approaches include computing
macro-average ROC curves (average per class in a 1-vs-all fashion)
micro-averaged ROC curves (consider all positives and negatives together as single class)
You can see examples in some libraries like scikit-learn.
See also this other thread in CrossValidated:
How to compute precision/recall for multiclass-multilabel classification? | ROC for more than 2 outcome categories | Several ideas and references are discussed in:
A simple generalization of the area under the ROC curve to multiple class classification problems.
Multi-class ROC (a tutorial) (using "volumes" under | ROC for more than 2 outcome categories
Several ideas and references are discussed in:
A simple generalization of the area under the ROC curve to multiple class classification problems.
Multi-class ROC (a tutorial) (using "volumes" under ROC)
Other approaches include computing
macro-average ROC curves (average per class in a 1-vs-all fashion)
micro-averaged ROC curves (consider all positives and negatives together as single class)
You can see examples in some libraries like scikit-learn.
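In R, one readily available option is the Hand & Till multi-class AUC (the first reference above), which the pROC package provides as multiclass.roc(); the data below are simulated purely for illustration.
library(pROC)
set.seed(5)
y     <- factor(sample(c("A", "B", "C"), 300, replace = TRUE))
score <- rnorm(300) + as.numeric(y)       # a single continuous score, higher for later classes
multiclass.roc(y, score)                  # averages pairwise AUCs across the three classes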
See also this other thread in CrossValidated:
How to compute precision/recall for multiclass-multilabel classification? | ROC for more than 2 outcome categories
Several ideas and references are discussed in:
A simple generalization of the area under the ROC curve to multiple class classification problems.
Multi-class ROC (a tutorial) (using "volumes" under |
53,333 | ROC for more than 2 outcome categories | One of the ideas is to use a one-vs-all classifier. This answer gives more information about it, including some R code.
Here's a plot from that answer | ROC for more than 2 outcome categories | One of the ideas is to use one-vs-all classifier. This answer gives move information about it, including some R code.
Here's a plot from that answer | ROC for more than 2 outcome categories
One of the ideas is to use a one-vs-all classifier. This answer gives more information about it, including some R code.
Here's a plot from that answer | ROC for more than 2 outcome categories
One of the ideas is to use one-vs-all classifier. This answer gives move information about it, including some R code.
Here's a plot from that answer |
53,334 | What is the p-value for paired t-test if the two set of data are identical? | If the two sets of data are identical because the variables are defined on a discrete set of values, then the assumptions of the t-test are false, since the random variables aren't even continuous.
As such, any normal-theory calculation would not yield the correct p-values.
I think the "correct" t-test p-value would be NaN.
Alternatively, consider the degenerate case where continuous variables have finite variance but the differences have variance 0. Then the t-statistic would be 0/0; again I'd say that's "correctly" NaN. (edit: more details of that argument are in a comment)
However, if you (for example) assumed some discrete distribution for the differences and derived say a likelihood ratio test, or did a permutation test, you'd get a (legitimate) p-value of 1, since anything but zero-differences would be "more extreme" than all-0-differences.
So I think justifiable p-values for a t-test will actually be what R gave you, while justifiable p-values if you have discrete variables and choose a more appropriate test would generally be 1. | What is the p-value for paired t-test if the two set of data are identical? | If the two sets of data are identical because the variables are defined on a discrete set of values, then the assumptions of the t-test are false, since the random variables aren't even continuous.
As | What is the p-value for paired t-test if the two set of data are identical?
If the two sets of data are identical because the variables are defined on a discrete set of values, then the assumptions of the t-test are false, since the random variables aren't even continuous.
As such, any normal-theory calculation would not yield the correct p-values.
I think the "correct" t-test p-value would be NaN.
Alternatively, consider the degenerate case where continuous variables have finite variance but the differences have variance 0. Then the t-statistic would be 0/0; again I'd say that's "correctly" NaN. (edit: more details of that argument are in a comment)
However, if you (for example) assumed some discrete distribution for the differences and derived say a likelihood ratio test, or did a permutation test, you'd get a (legitimate) p-value of 1, since anything but zero-differences would be "more extreme" than all-0-differences.
So I think justifiable p-values for a t-test will actually be what R gave you, while justifiable p-values if you have discrete variables and choose a more appropriate test would generally be 1. | What is the p-value for paired t-test if the two set of data are identical?
If the two sets of data are identical because the variables are defined on a discrete set of values, then the assumptions of the t-test are false, since the random variables aren't even continuous.
As |
53,335 | What is the p-value for paired t-test if the two set of data are identical? | The null hypothesis for a paired t-test is that the mean difference of your two paired samples is equal to zero.
Considering you have identical values for each pair, the difference should always be equal to zero, thus failing to reject the null hypothesis.
This would mean p-value = 1 or NaN (0/0), but this doesn't really imply you should conclude anything.
A p-value of less than 0.05 would suggest a significant difference between the means of the two samples. | What is the p-value for paired t-test if the two set of data are identical? | The null hypothesis for a paired t-test is that the mean difference of your two paired samples is equal to zero.
Considering you have identical values for each pair, the difference should always be eq | What is the p-value for paired t-test if the two set of data are identical?
The null hypothesis for a paired t-test is that the mean difference of your two paired samples is equal to zero.
Considering you have identical values for each pair, the difference should always be equal to zero, thus failing to reject the null hypothesis.
This would mean p-value = 1 or NaN (0/0), but this doesn't really imply you should conclude anything.
A p-value of less than 0.05 would suggest a significant difference between the means of the two samples. | What is the p-value for paired t-test if the two set of data are identical?
The null hypothesis for a paired t-test is that the mean difference of your two paired samples is equal to zero.
Considering you have identical values for each pair, the difference should always be eq |
53,336 | Conditional variance - $Var(X + U | X) = Var(U)$? | Unless there's something missing, that looks right to me, but if you want to argue it fully you may want to insert a step or two. For example
$\text{Var}(X + U | X) = \text{Var}(X|X) + \text{Var}(U|X)$
I think you should explain here that you're using $\text{Cov}(X,U|X)=0$ by expanding the variance into its three components and then arguing one is 0. | Conditional variance - $Var(X + U | X) = Var(U)$? | Unless there's something missing, that looks right to me, but if you want to argue it fully you may want to insert a step or two. For example
$\text{Var}(X + U | X) = \text{Var}(X|X) + \text{Var}(U|X | Conditional variance - $Var(X + U | X) = Var(U)$?
Unless there's something missing, that looks right to me, but if you want to argue it fully you may want to insert a step or two. For example
$\text{Var}(X + U | X) = \text{Var}(X|X) + \text{Var}(U|X)$
I think you should explain here that you're using $\text{Cov}(X,U|X)=0$ by expanding the variance into its three components and then arguing one is 0. | Conditional variance - $Var(X + U | X) = Var(U)$?
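For completeness, the expansion being referred to is (spelling out the step, an addition to the answer):
$$\operatorname{Var}(X+U \mid X) = \operatorname{Var}(X \mid X) + \operatorname{Var}(U \mid X) + 2\operatorname{Cov}(X, U \mid X),$$
where $\operatorname{Var}(X \mid X) = 0$ because $X$ is known given $X$, and $\operatorname{Cov}(X, U \mid X) = 0$ by the independence of $X$ and $U$.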
Unless there's something missing, that looks right to me, but if you want to argue it fully you may want to insert a step or two. For example
$\text{Var}(X + U | X) = \text{Var}(X|X) + \text{Var}(U|X |
53,337 | Conditional variance - $Var(X + U | X) = Var(U)$? | More remarkably, independence between $X$ and $U$ provides a regular system of conditional distributions of $X+U$ given $X$ by setting $${\cal L}(X+U \mid X=x) = \text{"law of $x+U$"}.$$
Then $Var(X+U \mid X)=Var(U)$ because $Var(x+U)=Var(U)$ for any $x$. | Conditional variance - $Var(X + U | X) = Var(U)$? | More remarkably, independence between $X$ and $U$ provides a regular system of conditional distributions of $X+U$ given $X$ by setting $${\cal L}(X+U \mid X=x) = \text{"law of $x+U$"}.$$
Then $Var(X | Conditional variance - $Var(X + U | X) = Var(U)$?
More remarkably, independence between $X$ and $U$ provides a regular system of conditional distributions of $X+U$ given $X$ by setting $${\cal L}(X+U \mid X=x) = \text{"law of $x+U$"}.$$
Then $Var(X+U \mid X)=Var(U)$ because $Var(x+U)=Var(U)$ for any $x$. | Conditional variance - $Var(X + U | X) = Var(U)$?
More remarkably, independence between $X$ and $U$ provides a regular system of conditional distributions of $X+U$ given $X$ by setting $${\cal L}(X+U \mid X=x) = \text{"law of $x+U$"}.$$
Then $Var(X |
53,338 | Conditional variance - $Var(X + U | X) = Var(U)$? | $Var(X+U|X)=Var(U|X)$ sounds absolutely logical: if the value of $X$ is known, then $X$ has conditional variance 0 (it is a certain variable), so the conditional variance of $(X+U)$ will be the conditional variance of $U$. Then, if $X$ and $U$ are independent the conditional variance of $U$ is simply the variance of $U$.
Another way to look at it: recall that for any random variable $Z$ (that has a variance) and $(a,b) \in \mathbb{R}^2$, $Var(aZ+b)= a^2Var(Z)$, the additive constant $b$ vanishes from the variance (which is easily understood: $b$ only affects the magnitude of $(aZ+b)$, not its dispersion around the mean). Now in a conditional variance $Var(X+U|X)$ the $X$ is just like an additive constant $b$, and vanishes just the same. | Conditional variance - $Var(X + U | X) = Var(U)$? | $Var(X+U|X)=Var(U|X)$ sounds absolutely logical: if the value of $X$ is known, then $X$ has conditional variance 0 (it is a certain variable), so the conditional variance of $(X+U)$ will be the condit | Conditional variance - $Var(X + U | X) = Var(U)$?
$Var(X+U|X)=Var(U|X)$ sounds absolutely logical: if the value of $X$ is known, then $X$ has conditional variance 0 (it is a certain variable), so the conditional variance of $(X+U)$ will be the conditional variance of $U$. Then, if $X$ and $U$ are independent the conditional variance of $U$ is simply the variance of $U$.
Another way to look at it: recall that for any random variable $Z$ (that has a variance) and $(a,b) \in \mathbb{R}^2$, $Var(aZ+b)= a^2Var(Z)$, the additive constant $b$ vanishes from the variance (which is easily understood: $b$ only affects the magnitude of $(aZ+b)$, not its dispersion around the mean). Now in a conditional variance $Var(X+U|X)$ the $X$ is just like an additive constant $b$, and vanishes just the same. | Conditional variance - $Var(X + U | X) = Var(U)$?
$Var(X+U|X)=Var(U|X)$ sounds absolutely logical: if the value of $X$ is known, then $X$ has conditional variance 0 (it is a certain variable), so the conditional variance of $(X+U)$ will be the condit |
53,339 | Conditional variance - $Var(X + U | X) = Var(U)$? | I hope I'm adding/complementing sth, if not sorry for the excess of answers.
$\text{Var}(X+U|X)= E((X+U-E(X+U|X))^2|X)=E((X+U-X-E(U))^2|X)$
$=E((U-E(U))^2|X)=E((U-E(U))^2)$, where the last equality holds because $X$ and $U$ are independent. | Conditional variance - $Var(X + U | X) = Var(U)$? | I hope I'm adding/complementing sth, if not sorry for the excess of answers.
$\text{Var}(X+U|X)= E((X+U-E(X+U|X))^2|X)=E((X+U-X-E(U))^2|X)$
$=E((U-E(U))^2|X)=E((U-E(U))^2)$, where the last equality ho | Conditional variance - $Var(X + U | X) = Var(U)$?
I hope I'm adding/complementing sth, if not sorry for the excess of answers.
$\text{Var}(X+U|X)= E((X+U-E(X+U|X))^2|X)=E((X+U-X-E(U))^2|X)$
$=E((U-E(U))^2|X)=E((U-E(U))^2)$, where the last equality holds because $X$ and $U$ are independent. | Conditional variance - $Var(X + U | X) = Var(U)$?
I hope I'm adding/complementing sth, if not sorry for the excess of answers.
$\text{Var}(X+U|X)= E((X+U-E(X+U|X))^2|X)=E((X+U-X-E(U))^2|X)$
$=E((U-E(U))^2|X)=E((U-E(U))^2)$, where the last equality ho |
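A quick numerical sanity check of the identity above (base R, simulated values; not part of the original answers): holding $X$ fixed at a few values $x$, the sample variance of $x+U$ matches that of $U$.
set.seed(1)
u <- rnorm(1e5, mean = 2, sd = 3)                  # U independent of X, Var(U) = 9
for (x in c(-1, 0, 5)) {                           # a few fixed values of X
  cat("x =", x, " var(x + U) =", round(var(x + u), 3),
      " var(U) =", round(var(u), 3), "\n")
}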
53,340 | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data? | In real, nonsimulated data, does the true P(Type I error) ever actually equal to $\alpha$?
Assumptions don't hold (when do all the assumptions hold?), people choose hypotheses after they've seen the data, choose procedures after checking assumptions, omit outliers, etc etc.
Even 'exact' nonparametric procedures rely on assumptions that rarely hold exactly (such as independence or exchangeability).
Treat p-values as -- under perfect conditions -- sometimes reasonably accurate (and the rest of the time as possibly informative fictions) and you may not go too far wrong. | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data? | In real, nonsimulated data, does the true P(Type I error) ever actually equal to $\alpha$?
Assumptions don't hold (when do all the assumptions hold?), people choose hypotheses after they've seen the | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data?
In real, nonsimulated data, does the true P(Type I error) ever actually equal to $\alpha$?
Assumptions don't hold (when do all the assumptions hold?), people choose hypotheses after they've seen the data, choose procedures after checking assumptions, omit outliers, etc etc.
Even 'exact' nonparametric procedures rely on assumptions that rarely hold exactly (such as independence or exchangeability).
Treat p-values as -- under perfect conditions -- sometimes reasonably accurate (and the rest of the time as possibly informative fictions) and you may not go too far wrong. | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data?
In real, nonsimulated data, does the true P(Type I error) ever actually equal to $\alpha$?
Assumptions don't hold (when do all the assumptions hold?), people choose hypotheses after they've seen the |
53,341 | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data? | The only example I can readily think of is data dredging. For example, the probability of a type I error is much higher than alpha if many tests are run (but this is not corrected for) and only those that are 'significant' are reported. The obvious examples would be multiple comparisons via many $t$-tests without something like the Bonferroni correction, or stepwise variable selection. | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data? | The only example I can readily think of is data dredging. For example, the probability of a type I error is much higher than alpha if many tests are run (but this is not corrected for) and only those | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data?
The only example I can readily think of is data dredging. For example, the probability of a type I error is much higher than alpha if many tests are run (but this is not corrected for) and only those that are 'significant' are reported. The obvious examples would be multiple comparisons via many $t$-tests without something like the Bonferroni correction, or stepwise variable selection. | Does $\Pr(\text{Type I error})$ ever not equal $\alpha$ with continuous data?
The only example I can readily think of is data dredging. For example, the probability of a type I error is much higher than alpha if many tests are run (but this is not corrected for) and only those |
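As a small illustration of the point above (simulated null data; not from the original answer), running many tests inflates the chance of at least one 'significant' result, and a Bonferroni-type correction pulls the rate back down:
set.seed(1)
p <- replicate(20, t.test(rnorm(30), rnorm(30))$p.value)   # 20 t-tests, every null true
sum(p < 0.05)                                              # spurious 'hits' by chance alone
sum(p.adjust(p, method = "bonferroni") < 0.05)             # after Bonferroni correction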
53,342 | Log-uniform distributions | Your definition of $X$ suggests that $X$ is a continuous random variable, but your question $\Pr[X = x]$ suggests you wish to treat it as a discrete variable. If you were asking for the probability density function of $X$, rather than the probability mass function, then we could proceed naturally using a transformation, since $\log$ is a monotone function: if $Y = g^{-1}(X) = \log X$, then $X = g(Y) = e^Y$ and $$f_X(x) = f_Y(g^{-1}(x)) \left| \frac{dg^{-1}}{dx} \right| = \ldots$$ This of course means that $X$ is not uniformly distributed. | Log-uniform distributions | Your definition of $X$ suggests that $X$ is a continuous random variable, but your question $\Pr[X = x]$ suggests you wish to treat it as a discrete variable. If you were asking for the probability d | Log-uniform distributions
Your definition of $X$ suggests that $X$ is a continuous random variable, but your question $\Pr[X = x]$ suggests you wish to treat it as a discrete variable. If you were asking for the probability density function of $X$, rather than the probability mass function, then we could proceed naturally using a transformation, since $\log$ is a monotone function: if $Y = g^{-1}(X) = \log X$, then $X = g(Y) = e^Y$ and $$f_X(x) = f_Y(g^{-1}(x)) \left| \frac{dg^{-1}}{dx} \right| = \ldots$$ This of course means that $X$ is not uniformly distributed. | Log-uniform distributions
Your definition of $X$ suggests that $X$ is a continuous random variable, but your question $\Pr[X = x]$ suggests you wish to treat it as a discrete variable. If you were asking for the probability d |
53,343 | Log-uniform distributions | I like @heropup's answer, but am slightly bothered by the fact that he didn't finish the derivation for the OP. To enrich his answer, I'd like to add the following picture, and some comments on the above answer:
If you follow @heropup's derivation, you'll find that
$$f_{X}(x) = \frac{I_{[e, e^e]}(x)}{x(e-1)}$$
More generally, if $Y \sim Unif(a,b)$, such that $Y = log(X)$ for some random variable $X$, then
$$ f_{X}(x) = \frac{I_{[e^a, e^b]}(x)}{x(b - a)} $$
For the sake of validating your intuition, I've made the figure of $Y \sim Unif(0,1)$, so that we see the originating random variable is actually defined on $[1, e]$ and looks something like $1/x$. As you pointed out, there is, indeed, more mass at the "beginning" of $X$'s domain and from the picture of the CDF, alone, we can see that $X$ cannot also be uniformly distributed (since the CDF is not a straight line).
I hope this fleshes out a bit of the above answer...
edit: you can find the code to make this picture here. | Log-uniform distributions | I like @heropup's answer, but am slightly bothered by the fact that he didn't finish the derivation for the OP. To enrich his answer, I'd like to add the following picture, and some comments on the ab | Log-uniform distributions
I like @heropup's answer, but am slightly bothered by the fact that he didn't finish the derivation for the OP. To enrich his answer, I'd like to add the following picture, and some comments on the above answer:
If you follow @heropup's derivation, you'll find that
$$f_{X}(x) = \frac{I_{[e, e^e]}(x)}{x(e-1)}$$
More generally, if $Y \sim Unif(a,b)$, such that $Y = log(X)$ for some random variable $X$, then
$$ f_{X}(x) = \frac{I_{[e^a, e^b]}(x)}{x(b - a)} $$
For the sake of validating your intuition, I've made the figure of $Y \sim Unif(0,1)$, so that we see the originating random variable is actually defined on $[1, e]$ and looks something like $1/x$. As you pointed out, there is, indeed, more mass at the "beginning" of $X$'s domain and from the picture of the CDF, alone, we can see that $X$ cannot also be uniformly distributed (since the CDF is not a straight line).
I hope this fleshes out a bit of the above answer...
edit: you can find the code to make this picture here. | Log-uniform distributions
I like @heropup's answer, but am slightly bothered by the fact that he didn't finish the derivation for the OP. To enrich his answer, I'd like to add the following picture, and some comments on the ab |
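A short simulation of the density given above, assuming $Y \sim Unif(a,b)$ and $X = e^Y$ (illustrative R, not the original poster's plotting code):
set.seed(42)
a <- 0; b <- 1
xs <- exp(runif(1e5, a, b))                       # X = exp(Y), Y ~ Unif(a, b)
hist(xs, breaks = 100, freq = FALSE, main = "X = exp(Y), Y ~ Unif(0, 1)")
curve(1 / (x * (b - a)), from = exp(a), to = exp(b), add = TRUE, col = "red", lwd = 2)
# the curve 1 / (x (b - a)) tracks the histogram: more mass near exp(a), none outside [exp(a), exp(b)]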
53,344 | Doing low-dimensional KNN on a large dataset | A naive nearest neighbor implementation will have to compute the distances between your test example and every instance in the training set. This $O(n)$ process can be problematic if you have a lot of data.
One solution is to find a more efficient representation of the training data. "Space-partitioning" data structures organize points in a way that makes it possible to efficiently search through them. Using a $k$-d tree, one can find a point's nearest neighbor in $O(\log n)$ time instead, which is a substantial speed-up.
There are also approximate nearest neighbor algorithms such as locality sensitive hashing or the best-bin first algorithm. These results are approximations--they don't always find the nearest neighbor, but they often find something very close to it (which is probably just as good for classification).
Finally, if you've got a relatively fixed training set, you could compute something like a Voronoi diagram that indicates the neighborhoods around each point. This could then be used as a look-up table for future queries. | Doing low-dimensional KNN on a large dataset | A naive nearest neighbor implementation will have to compute the distances between your test example and every instance in the training set. This $O(n)$ process can be problematic if you have a lot of | Doing low-dimensional KNN on a large dataset
A naive nearest neighbor implementation will have to compute the distances between your test example and every instance in the training set. This $O(n)$ process can be problematic if you have a lot of data.
One solution is to find a more efficient representation of the training data. "Space-partitioning" data structures organize points in a way that makes it possible to efficiently search through them. Using a $k$-d tree, one can find a point's nearest neighbor in $O(\log n)$ time instead, which is a substantial speed-up.
There are also approximate nearest neighbor algorithms such as locality sensitive hashing or the best-bin first algorithm. These results are approximations--they don't always find the nearest neighbor, but they often find something very close to it (which is probably just as good for classification).
Finally, if you've got a relatively fixed training set, you could compute something like a Voronoi diagram that indicates the neighborhoods around each point. This could then be used as a look-up table for future queries. | Doing low-dimensional KNN on a large dataset
A naive nearest neighbor implementation will have to compute the distances between your test example and every instance in the training set. This $O(n)$ process can be problematic if you have a lot of |
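For concreteness, here is one way to use a space-partitioning search in R, assuming the FNN package (which wraps a kd-tree) is available; the data and the choice of k are made up:
library(FNN)
set.seed(1)
train  <- matrix(runif(2e5), ncol = 2)             # 100,000 points in 2-D
labels <- sample(c("a", "b"), nrow(train), replace = TRUE)
query  <- matrix(runif(20), ncol = 2)              # 10 new points to classify
nn <- get.knnx(train, query, k = 25, algorithm = "kd_tree")
# majority vote over each query point's 25 nearest training labels
pred <- apply(nn$nn.index, 1, function(idx) names(which.max(table(labels[idx]))))
pred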
53,345 | Doing low-dimensional KNN on a large dataset | Depending on your task and your data, you might not need that many nearest neighbours. Since your data is two dimensional, I would first plot it to explore how it looks. Is it linearly separable?
Otherwise, you could try the following: sample at random a small portion of the data, say 10%. Classify your data, and keep only those samples which could not be correctly classified.
You could either take a new sample from those and repeat the process, until you reach a satisfactory accuracy. You may end up needing only a small proportion of the data for the kNN. | Doing low-dimensional KNN on a large dataset | Depending on your task and your data, you might not need that many nearest neighbours. Since your data is two dimensional, I would first plot it to explore how it looks. Is it linearly separable?
Ot | Doing low-dimensional KNN on a large dataset
Depending on your task and your data, you might not need that many nearest neighbours. Since your data is two dimensional, I would first plot it to explore how it looks. Is it linearly separable?
Otherwise, you could try the following: sample at random a small portion of the data, say 10%. Classify your data, and keep only those samples which could not be correctly classified.
You could either take a new sample from those and repeat the process, until you reach a satisfactory accuracy. You may end up needing only a small proportion of the data for the kNN. | Doing low-dimensional KNN on a large dataset
Depending on your task and your data, you might not need that many nearest neighbours. Since your data is two dimensional, I would first plot it to explore how it looks. Is it linearly separable?.
Ot |
53,346 | Gaps in time series and time series validity | I am not sure what you mean by "a valid data set". Are you sure what you mean by it? There are reasons why, in a single or in multiple time series consecutive missingness would be irrelevant to the validity of an analysis, and reasons why it would be lethal to valid inference.
However, Honaker and King are at the head of practical multiple imputation within a time-series context:
Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581. (See also, the related R package Amelia II on CRAN)
It is not clear how familiar you are with multiple imputation, but it has two aims (1) to support inference that is unbiased by MAR and MCAR (i.e. to impute a set of reasonable values), and (2) in doing so to incorporate the additional uncertainty in one's analysis that is due to the presence of missing data (i.e. to incorporate the extra variation resulting from imputed values not all agreeing with one another). | Gaps in time series and time series validity | I am not sure what you mean by "a valid data set". Are you sure what you mean by it? There are reasons why, in a single or in multiple time series consecutive missingness would be irrelevant to the va | Gaps in time series and time series validity
I am not sure what you mean by "a valid data set". Are you sure what you mean by it? There are reasons why, in a single or in multiple time series consecutive missingness would be irrelevant to the validity of an analysis, and reasons why it would be lethal to valid inference.
However, Honaker and King are at the head of practical multiple imputation within a time-series context:
Honaker, J. and King, G. (2010). What to do about missing values in time-series cross-section data. American Journal of Political Science, 54(2):561–581. (See also, the related R package Amelia II on CRAN)
It is not clear how familiar you are with multiple imputation, but it has two aims (1) to support inference that is unbiased by MAR and MCAR (i.e. to impute a set of reasonable values), and (2) in doing so to incorporate the additional uncertainty in one's analysis that is due to the presence of missing data (i.e. to incorporate the extra variation resulting from imputed values not all agreeing with one another). | Gaps in time series and time series validity
I am not sure what you mean by "a valid data set". Are you sure what you mean by it? There are reasons why, in a single or in multiple time series consecutive missingness would be irrelevant to the va |
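For reference, a minimal sketch of what a run of Amelia II can look like, assuming the package and its bundled freetrade panel data are available (the ts/cs arguments would be adapted to your own series):
library(Amelia)
data(freetrade)
a.out <- amelia(freetrade, m = 5, ts = "year", cs = "country")
# a.out$imputations holds 5 completed data sets; analyse each and combine the
# estimates (e.g. via Rubin's rules) rather than picking any single one.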
53,347 | Gaps in time series and time series validity | The Kalman filter is one alternative to fill in missing observations in time series. See this post as an example. The Kalman filter is a common algorithm that will be available in most languages and statistical software. Contrary to the Holt-Winters filter you have to specify a model for the data.
"How many consecutive gaps may make data set invalid for forecasting?
How many total gaps in data set makes it as invalid."
I don't know a rule to measure this. I would say it depends on how much
we know about the data and their context. Forecasting and, in general, the analysis of data involve a combination of our knowledge or theories and statistical methods to test our theories or find some further facts that we may have overlooked.
The amount of data or the presence of gaps may or may not be critical. For example, I have not looked at historical data about temperatures recorded in my town but I would be quite confident to give you a relatively narrow interval
about the temperatures that will be observed in the next days. On the other hand, I have a data base with thousands of flight prices and at this moment
I wouldn't dare to tell you whether you should buy a ticket today or wait
until tomorrow.
So there is a combination of knowledge and data. On one side, we may know a lot about the data but we lack a minimal amount of data. On the other side, we may have a huge amount of data but they don't have much meaning to us. In the former case, we may decide to throw the data away and trust our expert knowledge to foresee the future. In the latter case, we may throw the data into a brute force algorithm (some kind of machine learning algorithm) and let it find patterns and forecasts for us.
Usually we are at some point in-between these extreme cases. You are the one who knows how much the available data can contribute to your knowledge and how much uncertainty will be in the forecasts. | Gaps in time series and time series validity | The Kalman filter is one alternative to fill in missing observations in time series. See this post as an example. The Kalman filter is a common algorithm that will be available in most languages and s | Gaps in time series and time series validity
The Kalman filter is one alternative to fill in missing observations in time series. See this post as an example. The Kalman filter is a common algorithm that will be available in most languages and statistical software. Contrary to the Holt-Winters filter you have to specify a model for the data.
"How many consecutive gaps may make data set invalid for forecasting?
How many total gaps in data set makes it as invalid."
I don't know a rule to measure this. I would say it depends on how much
we know about the data and their context. Forecasting and, in general, the analysis of data involve a combination of our knowledge or theories and statistical methods to test our theories or find some further facts that we may have overlooked.
The amount of data or the presence of gaps may or may not be critical. For example, I have not looked at historical data about temperatures recorded in my town but I would be quite confident to give you a relatively narrow interval
about the temperatures that will be observed in the next days. On the other hand, I have a data base with thousands of flight prices and at this moment
I wouldn't dare to tell you whether you should buy a ticket today or wait
until tomorrow.
So there is a combination of knowledge and data. On one side, we may know a lot about the data but we lack a minimal amount of data. On the other side, we may have a huge amount of data but they don't have much meaning to us. In the former case, we may decide to throw the data away and trust our expert knowledge to foresee the future. In the latter case, we may throw the data into a brute force algorithm (some kind of machine learning algorithm) and let it find patterns and forecasts for us.
Usually we are at some point in-between these extreme cases. You are the one who knows how much the available data can contribute to your knowledge and how much uncertainty will be in the forecasts. | Gaps in time series and time series validity
The Kalman filter is one alternative to fill in missing observations in time series. See this post as an example. The Kalman filter is a common algorithm that will be available in most languages and s |
53,348 | Gaps in time series and time series validity | If you have enough data to do a meaningful test, you could look at a chunk of the data with no missing values. Then remove some values and fill in the missing values with interpolations. Fit the Holt Winters model on the interpolated data, and look at the error of the model on a holdout section of your data to see how it compares to forecasting from the original data set. Then you can experiment with removing different numbers of values to see what kind of effect it has on the error. | Gaps in time series and time series validity | If you have enough data to do a meaningful test, you could look at a chunk of the data with no missing values. Then remove some values and fill in the missing values with interpolations. Fit the Holt | Gaps in time series and time series validity
If you have enough data to do a meaningful test, you could look at a chunk of the data with no missing values. Then remove some values and fill in the missing values with interpolations. Fit the Holt Winters model on the interpolated data, and look at the error of the model on a holdout section of your data to see how it compares to forecasting from the original data set. Then you can experiment with removing different numbers of values to see what kind of effect it has on the error. | Gaps in time series and time series validity
If you have enough data to do a meaningful test, you could look at a chunk of the data with no missing values. Then remove some values and fill in the missing values with interpolations. Fit the Holt |
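A rough sketch of that experiment in base R, with made-up monthly data; the seasonal pattern, the 15 gaps and the linear interpolation are all placeholder assumptions:
set.seed(1)
y <- ts(10 + 3 * sin(2 * pi * (1:144) / 12) + rnorm(144), frequency = 12)
train <- window(y, end = c(10, 12))                 # first 10 "years"
test  <- window(y, start = c(11, 1))                # last 2 "years" held out
gappy <- train
gappy[sample(2:(length(gappy) - 1), 15)] <- NA      # punch 15 interior holes
filled <- ts(approx(seq_along(gappy), as.numeric(gappy),
                    xout = seq_along(gappy))$y, frequency = 12)
rmse <- function(fit) sqrt(mean((predict(fit, n.ahead = 24) - test)^2))
c(complete = rmse(HoltWinters(train)), interpolated = rmse(HoltWinters(filled)))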
53,349 | Comparing AIC among models with different amounts of data | The magnitude of the AIC value is irrelevant; it will always be larger with more data points. AIC is used to compare models based on the exact same data, where the important statistic is the difference between the AIC values. So, in your case, if you remove c from the model and then test against the exact same data, you can compare the two. If you add more data points to your $y = ab$ model, you can no longer compare it to the $y = ab + c$ model. | Comparing AIC among models with different amounts of data | The magnitude of the AIC value is irrelevant; it will always be larger with more data points. AIC is used to compare models based on the exact same data, where the important statistic is the differen | Comparing AIC among models with different amounts of data
The magnitude of the AIC value is irrelevant; it will always be larger with more data points. AIC is used to compare models based on the exact same data, where the important statistic is the difference between the AIC values. So, in your case, if you remove c from the model and then test against the exact same data, you can compare the two. If you add more data points to your $y = ab$ model, you can no longer compare it to the $y = ab + c$ model. | Comparing AIC among models with different amounts of data
The magnitude of the AIC value is irrelevant; it will always be larger with more data points. AIC is used to compare models based on the exact same data, where the important statistic is the differen
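A toy illustration of that rule in R (made-up data; the variables only loosely mirror the question's a, b and c):
set.seed(1)
d <- data.frame(a = rnorm(100), b = rnorm(100), c = rnorm(100))
d$y <- 1 + 2 * d$a + 3 * d$b + 0.5 * d$c + rnorm(100)
AIC(lm(y ~ a + b + c, data = d), lm(y ~ a + b, data = d))   # same 100 rows: comparable
AIC(lm(y ~ a + b, data = d[1:50, ]))                        # different rows: not comparable with the above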
53,350 | Comparing AIC among models with different amounts of data | Adding to @Avraham, take a look at the formula for the AIC, which is an intuitive way to see why more or less data points will change the AIC without meaning the model fits better or worse:
$2k-2\ln(L)$
$k$ is the number of parameters and $\ln(L)$ is the log of the maximized likelihood. The log-likelihood magnitude is based on a summation across all data points. So as you have more data points, your sum will grow. | Comparing AIC among models with different amounts of data | Adding to @Avraham, take a look at the formula for the AIC, which is an intuitive way to see why more or less data points will change the AIC without meaning the model fits better or worse:
$2k-2ln(L) | Comparing AIC among models with different amounts of data
Adding to @Avraham, take a look at the formula for the AIC, which is an intuitive way to see why more or less data points will change the AIC without meaning the model fits better or worse:
$2k-2\ln(L)$
$k$ is the number of parameters and $\ln(L)$ is the log of the maximized likelihood. The log-likelihood magnitude is based on a summation across all data points. So as you have more data points, your sum will grow. | Comparing AIC among models with different amounts of data
Adding to @Avraham, take a look at the formula for the AIC, which is an intuitive way to see why more or less data points will change the AIC without meaning the model fits better or worse:
$2k-2ln(L) |
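To see the formula in action, it can be checked against R's AIC() on a toy fit (illustrative; for lm the parameter count includes the residual variance):
fit <- lm(dist ~ speed, data = cars)
k <- length(coef(fit)) + 1                       # + 1 for the residual variance
c(by_hand = 2 * k - 2 * as.numeric(logLik(fit)), AIC = AIC(fit))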
53,351 | Missing Values NAs in the Test Data When using predict.lm in R | First, let me preface this by stating that missing data is its own specialty in statistics, so there's lots and lots of different answers to this question.
As you've discovered, by default, R uses case-wise deletion of missing values. This means that whenever a missing value is encountered in your data (on either side of your regression formula), it simply ignores that row. This isn't great, since if you have 100 observations but half of your rows have at least one variable value missing, you effectively have 50 observations. In some disciplines, the prevalence of missing data can rapidly diminish the size of your data. When I was an undergraduate, I analyzed a 3,000-person survey which shrank to just 316 people when using case-wise deletion!
But this gets even worse than shrinking your sample size: there may be hidden problems, such as an association between the pattern of missingness and the value of the missing element. For example, people with higher income are more likely to not disclose their salary. This will make it difficult to conduct meaningful, statistically sound judgments related to income.
One common method for dealing with missing values is imputation. There are many packages for imputation in R available. In my specialty area, political science, a widely-used one is AMELIA II, by Gary King. This treats your variables as multivariate normal and iteratively improves its "guesses" of what the missing values must be, based on a convergence criterion: convergence is declared when the "guesses" seem to fit well with the rest of the data. (I'm sorry that this is nonspecific. I haven't used AMELIA II in several years. The documentation is thorough and lucidly written, so I would start there.)
But this is just one option. I'm sure that more knowledgeable people will speak up with their contributions. | Missing Values NAs in the Test Data When using predict.lm in R | First, let me preface this by stating that missing data is its own specialty in statistics, so there's lots and lots of different answers to this question.
As you've discovered, by default, R uses cas | Missing Values NAs in the Test Data When using predict.lm in R
First, let me preface this by stating that missing data is its own specialty in statistics, so there's lots and lots of different answers to this question.
As you've discovered, by default, R uses case-wise deletion of missing values. This means that whenever a missing value is encountered in your data (on either side of your regression formula), it simply ignores that row. This isn't great, since if you have 100 observations but half of your rows have at least one variable value missing, you effectively have 50 observations. In some disciplines, the prevalence of missing data can rapidly diminish the size of your data. When I was an undergraduate, I analyzed a 3,000-person survey which shrank to just 316 people when using case-wise deletion!
But this gets even worse than shrinking your sample size: there may be hidden problems, such as an association between the pattern of missingness and the value of the missing element. For example, people with higher income are more likely to not disclose their salary. This will make it difficult to conduct meaningful, statistically sound judgments related to income.
One common method for dealing with missing values is imputation. There are many packages for imputation in R available. In my specialty area, political science, a widely-used one is AMELIA II, by Gary King. This treats your variables as multivariate normal and iteratively improves its "guesses" of what the missing values must be, based on a convergence criterion: convergence is declared when the "guesses" seem to fit well with the rest of the data. (I'm sorry that this is nonspecific. I haven't used AMELIA II in several years. The documentation is thorough and lucidly written, so I would start there.)
But this is just one option. I'm sure that more knowledgeable people will speak up with their contributions. | Missing Values NAs in the Test Data When using predict.lm in R
First, let me preface this by stating that missing data is its own specialty in statistics, so there's lots and lots of different answers to this question.
As you've discovered, by default, R uses cas |
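A toy illustration of the case-wise deletion behaviour, plus a deliberately naive single mean imputation, on simulated data (multiple imputation via Amelia or mice remains the more principled route):
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + d$x1 - d$x2 + rnorm(100)
d$x2[sample(100, 30)] <- NA                      # 30% missing in one predictor
fit <- lm(y ~ x1 + x2, data = d)                 # rows with NA are silently dropped
nobs(fit)                                        # 70, not 100
d_imp <- transform(d, x2 = ifelse(is.na(x2), mean(x2, na.rm = TRUE), x2))
fit_imp <- lm(y ~ x1 + x2, data = d_imp)         # keeps all 100 rows, but the reported
summary(fit_imp)                                 # uncertainty is understated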
53,352 | SVM data normalization... what about classifying new (training) data? | Store the mean and standard deviation of the training dataset features. When the test data is received, normalize each feature by subtracting its corresponding training mean and dividing by the corresponding training standard deviation.
Normalization by min/max is usually a very bad idea since it involves scaling your entire data according to two particular observations. This leads your scaling to be dominated by noise. Mean/std is a standard procedure, and you can even experiment with more robust measures (e.g. median/MAD).
Why scale/normalize? Because of the way the SVM optimization problem is defined, features with higher variance have greater effect on the margin. Usually this doesn't make sense - we'd like our classifier to be 'unit invariant' (e.g. a classifier that combines patients' weight and height shouldn't be affected by the choice of units - kgs or grams, centimeters or meters).
However, I guess that there might be cases in which all of the features are given in the same units and the differences in their variance indeed reflect differences in importance. In such case I'd try to skip scaling/normalization and see what it does to the performance. | SVM data normalization... what about classifying new (training) data? | Store the mean and standard deviation of the training dataset features. When the test data is received, normalize each feature by subtracting its corresponding training mean and dividing by the corres | SVM data normalization... what about classifying new (training) data?
Store the mean and standard deviation of the training dataset features. When the test data is received, normalize each feature by subtracting its corresponding training mean and dividing by the corresponding training standard deviation.
Normalization by min/max is usually a very bad idea since it involves scaling your entire data according to two particular observations. This leads your scaling to be dominated by noise. Mean/std is a standard procedure, and you can even experiment with more robust measures (e.g. median/MAD).
Why scale/normalize? Because of the way the SVM optimization problem is defined, features with higher variance have greater effect on the margin. Usually this doesn't make sense - we'd like our classifier to be 'unit invariant' (e.g. a classifier that combines patients' weight and height shouldn't be affected by the choice of units - kgs or grams, centimeters or meters).
However, I guess that there might be cases in which all of the features are given in the same units and the differences in their variance indeed reflect differences in importance. In such case I'd try to skip scaling/normalization and see what it does to the performance. | SVM data normalization... what about classifying new (training) data?
Store the mean and standard deviation of the training dataset features. When the test data is received, normalize each feature by subtracting its corresponding training mean and dividing by the corres |
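A minimal sketch of "estimate the scaling on the training set, reuse it on the test set" in R (illustrative numbers):
set.seed(1)
train <- matrix(rnorm(200, mean = 5, sd = 2), ncol = 2)
test  <- matrix(rnorm(20,  mean = 5, sd = 2), ncol = 2)
mu   <- colMeans(train)
sdev <- apply(train, 2, sd)
train_z <- sweep(sweep(train, 2, mu, "-"), 2, sdev, "/")
test_z  <- sweep(sweep(test,  2, mu, "-"), 2, sdev, "/")   # same mu and sdev as the training set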
53,353 | Probablistic counterpart for kNN | The way I'd model it (I haven't seen this in the literature, but it wouldn't surprise me if it were already out there) is:
I'd think of it as expressing the posterior distribution over class labels,
given that you have a (test) sample $x$, as a marginalization over a sub-set of the training sample nodes, $n_i$:
$$
p(c \vert x) = \sum_i p(c \vert n_i ) p(n_i \vert x)
$$
In the back of my mind, I have the idea that the test sample is assigned-to (or is essentially the same as) one of the training samples; but since we don't know which one, we marginalize over all of the feasible ones.
The first factor defines, for each training-sample node, the probability
distribution over class labels, given that the test-sample was "assigned"
to node $n_i$. If each training sample has a particular label, then this might be just an indicator function for that label; the various extensions to actual distributions are going to be problem domain specific.
The second factor is the probability that node $n_i$ is the correct assignment choice. The direct mapping of $kNN$ is to have $p(n_i \vert x) = 1/k$ when $n_i$ is in the set of $kNN$'s, and zero otherwise; i.e. a cookie-cutter type of distribution. Again, this could be generalized into something with a softer roll-off, e.g. a Gaussian type of shape in feature space, if desired.
In my opinion, this captures the essential general structure of $kNN$:
You pick a population who gets to vote, this is achieved by $p(n_i \vert x)$, and then
you combine the results of each vote, in this case each vote is a probability distribution over classes, $p(c \vert n_i)$
whether you want to have a soft or hard inclusion function, the exact methods for defining the vote probability distributions and so on are problem-specific details. | Probablistic counterpart for kNN | The way I'd model it (I haven't seen this in the literature, but it wouldn't surprise me if it were already out there) is:
I'd think of it as expressing the posterior distribution over class labels,
| Probablistic counterpart for kNN
The way I'd model it (I haven't seen this in the literature, but it wouldn't surprise me if it were already out there) is:
I'd think of it as expressing the posterior distribution over class labels,
given that you have a (test) sample $x$, as a marginalization over a sub-set of the training sample nodes, $n_i$:
$$
p(c \vert x) = \sum_i p(c \vert n_i ) p(n_i \vert x)
$$
In the back of my mind, I have the idea that the test sample is assigned-to (or is essentially the same as) one of the training samples; but since we don't know which one, we marginalize over all of the feasible ones.
The first factor defines, for each training-sample node, the probability
distribution over class labels, given that the test-sample was "assigned"
to node $n_i$. If each training sample has a particular label, then this might be just an indicator function for that label; the various extensions to actual distributions are going to be problem domain specific.
The second factor is the probability that node $n_i$ is the correct assignment choice. The direct mapping of $kNN$ is to have $p(n_i \vert x) = 1/k$ when $n_i$ is in the set of $kNN$'s, and zero otherwise; i.e. a cookie-cutter type of distribution. Again, this could be generalized into something with a softer roll-off, e.g. a Gaussian type of shape in feature space, if desired.
In my opinion, this captures the essential general structure of $kNN$:
You pick a population who gets to vote, this is achieved by $p(n_i \vert x)$, and then
you combine the results of each vote, in this case each vote is a probability distribution over classes, $p(c \vert n_i)$
whether you want to have a soft or hard inclusion function, the exact methods for defining the vote probability distributions and so on are problem-specific details. | Probablistic counterpart for kNN
The way I'd model it (I haven't seen this in the literature, but it wouldn't surprise me if it were already out there) is:
I'd think of it as expressing the posterior distribution over class labels,
|
53,354 | Probablistic counterpart for kNN | Something very similar is Logistic regression with a Radial Basis Function (sometimes called Gaussian) kernel (http://en.wikipedia.org/wiki/Radial_basis_function_kernel), with all weights constrained to be $1$.
A fitted RBF kernel regression will compute weighted distances to various points, compute a Gaussian pseudo-probability of being a neighbor based on the distance, and average the values at the points in the training set weighted by that Gaussian pseudo-probability.
This is very similar to kNN, where $k$=size of the training set.
You can reduce the number of points under consideration via a Laplacian prior ($L_1$ regularization), but this won't have the same effect as varying $k$ in kNN, as the $k$ points chosen will be the only ones considered.
If weights are fixed at $1$, only the distance matters. When weights are allowed to vary, it affects how much each point is allowed to vary. | Probablistic counterpart for kNN | Something very similar is Logistic regression with a Radial Basis Function (sometimes called Gaussian) kernel (http://en.wikipedia.org/wiki/Radial_basis_function_kernel), with all weights constrained | Probablistic counterpart for kNN
Something very similar is Logistic regression with a Radial Basis Function (sometimes called Gaussian) kernel (http://en.wikipedia.org/wiki/Radial_basis_function_kernel), with all weights constrained to be $1$.
A fitted RBF kernel regression will compute weighted distances to various points, compute a Gaussian pseudo-probability of being a neighbor based on the distance, and average the values at the points in the training set weighted by that Gaussian pseudo-probability.
This is very similar to kNN, where $k$=size of the training set.
You can reduce the number of points under consideration via a Laplacian prior ($L_1$ regularization), but this won't have the same effect as varying $k$ in kNN, as the $k$ points chosen will be the only ones considered.
If weights are fixed at $1$, only the distance matters. When weights are allowed to vary, it affects how much each point is allowed to vary. | Probablistic counterpart for kNN
Something very similar is Logistic regression with a Radial Basis Function (sometimes called Gaussian) kernel (http://en.wikipedia.org/wiki/Radial_basis_function_kernel), with all weights constrained |
53,355 | Probablistic counterpart for kNN | This paper formulates explicitly a probabilistic version of KNN:
C.C. Holmes, N.M. Adams: A probabilistic nearest neighbour method for statistical pattern recognition. J. Roy. Statist. Soc. Ser. B, 64 (2) (2002), pp. 295–306.
Dave's idea also comes up in this paper:
A. Kaban. A probabilistic neighborhood translation approach for non-standard text categorization. Proc. Discovery Science (DS08), 2008. | Probablistic counterpart for kNN | This paper formulates explicitly a probabilistic version of KNN:
C.C. Holmes, N.M. Adams: A probabilistic nearest neighbour method for statistical pattern recognition. J. Roy. Statist. Soc. Ser. B, 64 | Probablistic counterpart for kNN
This paper formulates explicitly a probabilistic version of KNN:
C.C. Holmes, N.M. Adams: A probabilistic nearest neighbour method for statistical pattern recognition. J. Roy. Statist. Soc. Ser. B, 64 (2) (2002), pp. 295–306.
Dave's idea also comes up in this paper:
A. Kaban. A probabilistic neighborhood translation approach for non-standard text categorization. Proc. Discovery Science (DS08), 2008. | Probablistic counterpart for kNN
This paper formulates explicitly a probabilistic version of KNN:
C.C. Holmes, N.M. Adams: A probabilistic nearest neighbour method for statistical pattern recognition. J. Roy. Statist. Soc. Ser. B, 64 |
53,356 | Probablistic counterpart for kNN | Gaussian mixture model is rather a generalization of $k$-means where variances are not identity matrices. One way to construct a probabilistic model which will be a generalization of kNN is to assume there is a gaussian centered at each data point with unknown variances whose distribution is fixed. | Probablistic counterpart for kNN | Gaussian mixture model is rather a generalization of $k$-means where variances are not identity matrices. One way to construct a probabilistic model which will be a generalization of kNN is to assume | Probablistic counterpart for kNN
Gaussian mixture model is rather a generalization of $k$-means where variances are not identity matrices. One way to construct a probabilistic model which will be a generalization of kNN is to assume there is a gaussian centered at each data point with unknown variances whose distribution is fixed. | Probablistic counterpart for kNN
Gaussian mixture model is rather a generalization of $k$-means where variances are not identity matrices. One way to construct a probabilistic model which will be a generalization of kNN is to assume |
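Tying the marginalisation idea from the first answer to code, a minimal R sketch of $p(c \vert x) = \sum_i p(c \vert n_i ) p(n_i \vert x)$ with the cookie-cutter choice $p(n_i \vert x) = 1/k$ over the $k$ nearest training points (simulated data, Euclidean distance assumed):
set.seed(1)
train <- matrix(rnorm(200), ncol = 2)
cl    <- factor(sample(c("a", "b"), 100, replace = TRUE))
x     <- c(0.2, -0.1)                                  # one test point
k     <- 7
d     <- sqrt(colSums((t(train) - x)^2))               # distances to all training nodes
nbrs  <- order(d)[1:k]                                 # the k nearest nodes n_i
p_ni  <- rep(1 / k, k)                                 # p(n_i | x): uniform over the kNN set
p_c <- sapply(levels(cl), function(lev) sum((cl[nbrs] == lev) * p_ni))  # the marginalisation
p_c                                                    # same result as prop.table(table(cl[nbrs]))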
53,357 | Why does one report statistical power only when results are non significant? | If the result is not statistically significant, there are two possibilities. One is that the null hypothesis is true. The other is that the null hypothesis is false (so there really is a difference between the populations) but some combination of small sample size, large scatter and bad luck led your experiment to a conclusion that the result is not statistically significant.
Running a power analysis can help understand the results. A power calculation answers this question:
If the true difference between populations is a stated hypothetical value (a value you would find large enough to be worth detecting), what is the chance that a study of the size you just did (given the scatter you observed) would conclude that the difference is statistically significant?
Interpreting the results:
If the power to detect the difference you would have cared about is high, then your results are pretty good evidence that the actual difference is likely to be smaller than your hypothetical value. You have solid negative data.
If the power to detect that difference is low, then you really can't conclude much from your data. Your findings are equivocal.
Hopefully, the explanation above shows how a power analysis can be helpful in interpreting a not-statistically-significant result. In contrast, power analyses don't help much when the result is statistically significant.
Important note: The power analysis should be set to compute the power to detect the smallest difference that you would find scientifically (or clinically) worth detecting. It is not even a tiny bit helpful to run a power analysis set to compute the power to detect the difference that your study actually detected. Such post-hoc or observed power calculations are invalid if they are based on the effect actually observed.
2019 UPDATE. While I think everything above is true, I am not sure it is so helpful. Much better to compute and interpret the 95% confidence interval for the difference (or ratio) and not even think about power. Power really is a way to quantify the effectiveness of a proposed experiment and not a good way to quantify or understand the results of a completed experiment... | Why does one report statistical power only when results are non significant? | If the result is not statistically significant, there are two possibilities. One is that the null hypothesis is true. The other is that the null hypothesis is false (so there really is a difference be | Why does one report statistical power only when results are non significant?
If the result is not statistically significant, there are two possibilities. One is that the null hypothesis is true. The other is that the null hypothesis is false (so there really is a difference between the populations) but some combination of small sample size, large scatter and bad luck led your experiment to a conclusion that the result is not statistically significant.
Running a power analysis can help understand the results. A power calculation answers this question:
If the true difference between populations is a stated hypothetical value (a value you would find large enough to be worth detecting), what is the chance that a study of the size you just did (given the scatter you observed) would conclude that the difference is statistically significant?
Interpreting the results:
If the power to detect the difference you would have cared about is high, then your results are pretty good evidence that the actual difference is likely to be smaller than your hypothetical value. You have solid negative data.
If the power to detect that difference is low, then you really can't conclude much from your data. Your findings are equivocal.
Hopefully, the explanation above shows how a power analysis can be helpful in interpreting a not-statistically-significant result. In contrast, power analyses don't help much when the result is statistically significant.
Important note: The power analysis should be set to compute the power to detect the smallest difference that you would find scientifically (or clinically) worth detecting. It is not even a tiny bit helpful to run a power analysis set to compute the power to detect the difference that your study actually detected. Such post-hoc or observed power calculations are invalid if they are based on the effect actually observed.
2019 UPDATE. While I think everything above is true, I am not sure it is so helpful. Much better to compute and interpret the 95% confidence interval for the difference (or ratio) and not even think about power. Power really is a way to quantify the effectiveness of a proposed experiment and not a good way to quantify or understand the results of a completed experiment... | Why does one report statistical power only when results are non significant?
If the result is not statistically significant, there are two possibilities. One is that the null hypothesis is true. The other is that the null hypothesis is false (so there really is a difference be |
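In R, the kind of calculation described above can be done with power.t.test; the numbers below are illustrative placeholders for "the smallest difference worth detecting":
power.t.test(n = 30, delta = 5, sd = 10, sig.level = 0.05)$power   # power of a study this size
power.t.test(delta = 5, sd = 10, sig.level = 0.05, power = 0.8)$n  # n needed for 80% power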
53,358 | Does K-means incorporate the K-nearest neighbour algorithm? | No, definitely not; kmeans and kNN are two completely different things, and kmeans doesn't use kNN at all. In step 2 of the kmeans algorithm as you have defined it (and BTW, except for this minor confusion, your summary is otherwise basically accurate), the kmeans function loops through each of the $n$ data points, and tests the distance (usually Euclidean, but it doesn't have to be, it could be Manhattan distance, or Chebyshev, or Minkowski or whatever you like, really) of each point from each of the $m$ centroids. The closest centroid "wins", and each point is grouped with its winning centroid, as computed on the previous iteration, in order to compute the new values of all the centroids for the next iteration.
The kNN algorithm answers a somewhat different question. Suppose I give you a so-called "training data set" which consists of a table with multiple columns. In the first column is a so-called "class value", basically a label which you might think of as identifying different types of real-world objects that you would like a computer to be able to automatically recognize. All of the other columns in the table specify the values of various "features" associated with each object; for example, it's length, mass, aspect ratio, brightness, color, or whatever other quantity you may choose to measure. Now, suppose that I give you a second data set, a so-called "test" data set, where all of the values in the data columns have been measured in just the same way as in the training data set, BUT the labels have all gone missing! I would like you to infer the labels which are missing from the test data set by comparing the features of the test data set to those of known examples in the training set, and looking for patterns of similarities between them.
This second type of problem which I have just described is known as "classification", or sometimes, "supervised machine learning", and there are many different strategies for dealing with it. The kNN algorithm is one of the simplest of those strategies. Basically, the kNN algorithm considers each unknown object in the test data set, and it finds the $k$ "nearest" (by whatever distance metric the user specifies) examples in the training set. Then, whichever label was most common among those top $k$ examples within the training set, that is the label which is assigned to the unknown object in the test set.
Bottom line, kNN is a completely different thing, and has nothing to do with subdividing a previously undifferentiated data set into empirically defined clusters. | Does K-means incorporate the K-nearest neighbour algorithm? | No, definitely not; kmeans and kNN are two completely different things, and kmeans doesn't use kNN at all. In step 2 of the kmeans algorithm as you have defined it (and BTW, except for this minor con | Does K-means incorporate the K-nearest neighbour algorithm?
No, definitely not; kmeans and kNN are two completely different things, and kmeans doesn't use kNN at all. In step 2 of the kmeans algorithm as you have defined it (and BTW, except for this minor confusion, your summary is otherwise basically accurate), the kmeans function loops through each of the $n$ data points, and tests the distance (usually Euclidean, but it doesn't have to be, it could be Manhattan distance, or Chebyshev, or Minkowski or whatever you like, really) of each point from each of the $m$ centroids. The closest centroid "wins", and each point is grouped with its winning centroid, as computed on the previous iteration, in order to compute the new values of all the centroids for the next iteration.
The kNN algorithm answers a somewhat different question. Suppose I give you a so-called "training data set" which consists of a table with multiple columns. In the first column is a so-called "class value", basically a label which you might think of as identifying different types of real-world objects that you would like a computer to be able to automatically recognize. All of the other columns in the table specify the values of various "features" associated with each object; for example, it's length, mass, aspect ratio, brightness, color, or whatever other quantity you may choose to measure. Now, suppose that I give you a second data set, a so-called "test" data set, where all of the values in the data columns have been measured in just the same way as in the training data set, BUT the labels have all gone missing! I would like you to infer the labels which are missing from the test data set by comparing the features of the test data set to those of known examples in the training set, and looking for patterns of similarities between them.
This second type of problem which I have just described is known as "classification", or sometimes, "supervised machine learning", and there are many different strategies for dealing with it. The kNN algorithm is one of the simplest of those strategies. Basically, the kNN algorithm considers each unknown object in the test data set, and it finds the $k$ "nearest" (by whatever distance metric the user specifies) examples in the training set. Then, whichever label was most common among those top $k$ examples within the training set, that is the label which is assigned to the unknown object in the test set.
Bottom line, kNN is a completely different thing, and has nothing to do with subdividing a previously undifferentiated data set into empirically defined clusters. | Does K-means incorporate the K-nearest neighbour algorithm?
No, definitely not; kmeans and kNN are two completely different things, and kmeans doesn't use kNN at all. In step 2 of the kmeans algorithm as you have defined it (and BTW, except for this minor con |
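The two procedures side by side on the built-in iris data (illustrative; the class package ships with R):
library(class)
set.seed(1)
km <- kmeans(iris[, 1:4], centers = 3)        # clustering: the labels are never used
table(km$cluster, iris$Species)               # compare clusters with the held-back labels
idx  <- sample(150, 100)
pred <- knn(train = iris[idx, 1:4], test = iris[-idx, 1:4],
            cl = iris$Species[idx], k = 5)    # classification: labels are required
mean(pred == iris$Species[-idx])              # holdout accuracy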
53,359 | Does K-means incorporate the K-nearest neighbour algorithm? | You're sort of right. Both kNN and k-means commonly use Euclidean distance for their respective distance metrics, which is why they seem similar. Keep in mind, however, that k-means is not a classification model, as that is a supervised learning algorithm (since the grouping variable is known). K-means and other clustering methods are used when the grouping variable is unknown. | Does K-means incorporate the K-nearest neighbour algorithm? | You're sort of right. Both kNN and k-means commonly use Euclidean distance for their respective distance metrics, which is why they seem similar. Keep in mind, however, that k-means is not a classific | Does K-means incorporate the K-nearest neighbour algorithm?
You're sort of right. Both kNN and k-means commonly use Euclidean distance for their respective distance metrics, which is why they seem similar. Keep in mind, however, that k-means is not a classification model, as that is a supervised learning algorithm (since the grouping variable is known). K-means and other clustering methods are used when the grouping variable is unknown. | Does K-means incorporate the K-nearest neighbour algorithm?
You're sort of right. Both kNN and k-means commonly use Euclidean distance for their respective distance metrics, which is why they seem similar. Keep in mind, however, that k-means is not a classific |
53,360 | shuffle my data to investigate differences between 3 groups | If there is really no difference between the 3 regions, then I can assume that any test (eg ANOVA or a non-parametric equivalent) will find approximate the same results even if I randomly mix all data once and again.
This is the central insight that underlies resampling methods, such as permutation tests / randomization tests.
e.g. see Wikipedia, for example here
The basic idea of a permutation test (let's take a one way ANOVA-like situation) is that if the null is true, the group labels are arbitrary - you don't change the distributions by shuffling them.
So if you look at all possible arrangements of the group labels and compute some test statistic of interest, you obtain what's called the permutation distribution of the test statistic. You can then see if your particular sample (which will be one of the possible permutations - or more accurately, possible combinations) is unusually far 'in the tails' of that null distribution (giving a p-value).
Many of the common nonparametric rank-based tests are actually permutation-tests carried out on the ranks (which is a practical way of doing permutation tests without computers, which are otherwise very tedious unless you have very small sample sizes).
When the sample sizes are large, an option is to sample (with replacement) from the permutation distribution, typically because there are too many combinations to evaluate them all. Generally this is achieved by randomly permuting the labels rather than systematically re-arranging them to cover every possibility. The test statistic is then computed for each such arrangement. The sample value of the statistic is then compared with the distribution (it is normally included as part of the distribution for computing the p-value, and counts in the values 'at least as extreme' as itself). Some authors call this sampled permutation test a randomization test (though other authors reserve that term for a somewhat different notion also connected to permutation tests).
What you described was pretty close to this randomly sampled permutation test (randomization test).
I advise trying such a randomization test, not least for its ability to expand your horizons in terms of the standard tools you have available for tackling problems. The procedure is distribution-free (conditional on the sample) - it requires fewer assumptions while still allowing you to use either familiar statistics or ones custom-designed to your circumstances (e.g. you could slot in a more robust measure of location).
In practice I'd advise more than 1000 resamples for a randomization test. Consider a test with a p-value near 5%. The standard error of an estimated p-value for a sample size of 1000 will be nearly 0.007; when the true p-value is just on one side of 5%, nearly 15% of the time you'll see a value more than 1% on the wrong side (more than 6% or less than 4% when it should be the other side). I usually regard 10000 as toward the low end of what I do unless I just want a rough idea of the ballpark of the p-value. If I was doing a formal test, I'd want to pin it down a bit better. I often do 100,000 and sometimes a million or more - at least for the simpler tests.
If you search here on permutation tests or randomization tests you should find a number of relevant questions and answers and even some examples.
53,361 | Getting a second-order polynomial trend line from a set of data | As a programmer, you will find an algorithm to be even better than a formula, especially if the algorithm is based on simple steps. There is one, "sequential matching," that requires no matrix algebra and no specialized mathematical procedures (like Cholesky decomposition, QR decomposition, or matrix pseudo-inversion). It is easy to code, easy to test, extensible, and reasonably efficient.
A good algorithm for this situation is derived from the idea of "matching" developed by Tukey and Mosteller, as described in my answer at "How to Normalize Regression Coefficients". For quadratic regression it involves these simple ideas:
There are three independent variables: a constant (usually set to $1$ for simplicity and easy interpretation), $x$ itself, and $x^2$. For this purpose, you compute $x^2$ once and for all and then treat it henceforth as if it had no mathematical relationship to $x$ whatsoever.
After you have fit (or "matched") a single dependent variable $y$ to a single independent variable $x$, thereby producing a formula in the form $y = \beta_{y\cdot x} x + \text{ error },$ you take the fit away from the dependent variable by subtracting the fit, writing $y - \beta_{y\cdot x} x$ for what is left over. This (the "error" term above) is the residual. As a matter of notation, let the residual after matching $y$ to $x$ be called $y_{\cdot x}.$
To fit a dependent variable $y$ to $n\ge 2$ independent variables $x_1, x_2, \ldots, x_n$, you separately match $y$ and the last $n-1$ variables to the first one, producing dependent residual $y_{\cdot x_1}$ and independent residuals $x_{2\cdot x_1}, x_{3\cdot x_1}, \ldots, x_{n\cdot x_1}.$ Now proceed recursively to fit the dependent residual to the $n-1$ independent residuals.
Provided, then, that you have a routine to match a dependent variable to a single independent variable, you can figure out the residual for any multivariate linear fitting problem with practically no more coding. If you know the residuals, you know the prediction, so at this point it's just a matter of finding the coefficients. The most direct way to proceed is to do the algebra to work out the proper combination of all the appropriate $\beta$'s. This is worked out for the case $n=2$ in the answer previously referenced. The R code below shows it for quadratic regression. Rather than coding it in a loop over the $x_i$, I have unrolled that loop to exhibit every one of the steps that is needed, showing the basic simplicity of the algorithm. I have also avoided using any R-specific idioms, apart from its readiness to combine two vectors (such as x and y) component-by-component when they are added, subtracted, multiplied, or divided.
The output of this sample program consists of the coefficients of $1$, $x$, and $x^2$, named beta.0, beta.1, and beta.2, respectively, followed by the same coefficients as computed with R's built-in regression function:
> c(beta.0, beta.1, beta.2)
[1] 18.5575094 2.1555036 -0.1092891
> coef(lm(y ~ x + I(x^2)))
(Intercept) x I(x^2)
18.5575094 2.1555036 -0.1092891
The perfect agreement attests to the correctness and precision of this sequential matching process.
#
# Data.
#
n <- 32
set.seed(17)
x <- runif(n, -5, 20)
y <- floor(30 - ((x-10)/3)^2 + rnorm(n, sd=3))
#
# Linear regression ("matching")
#
fit.ls <- function(y, x) sum(x*y) / sum(x*x) # Returns the coefficient
#
# The additional variables are a constant `one` and the squared x's.
#
one <- rep(1, length(x))
x2 <- x*x
#
# Step 1: Match everything to `one`.
#
beta.x.1 <- fit.ls(x, one)
beta.x2.1 <- fit.ls(x2, one)
beta.y.1 <- fit.ls(y, one)
#
# Compute the residuals.
x.1 <- x - one * beta.x.1
x2.1 <- x2 - one * beta.x2.1
y.1 <- y - one * beta.y.1
#
# Step 2: Match the residuals to `x`.
#
beta.x2.1.x <- fit.ls(x2.1, x.1)
beta.y.1.x <- fit.ls(y.1, x.1)
#
# Compute the residuals.
#
x2.1.x <- x2.1 - x.1 * beta.x2.1.x
y.1.x <- y.1 - x.1 * beta.y.1.x
#
# Step 3: Match the residuals to x^2.
#
beta.y.1.x.x2 <- fit.ls(y.1.x, x2.1.x)
y.1.x.x2 <- y.1.x - x2.1.x * beta.y.1.x.x2
#
# Combine the coefficients into the full formula for `y` in terms of
# `one`, `x`, and x^2.
#
beta.2 <- beta.y.1.x.x2
beta.1 <- beta.y.1.x - beta.2*beta.x2.1.x
beta.0 <- beta.y.1 - beta.y.1.x*beta.x.1 - beta.2*(beta.x2.1 - beta.x2.1.x*beta.x.1)
c(beta.0, beta.1, beta.2)
#
# Compare this to the output of `R`'s built-in multiple regression.
#
coef(lm(y ~ x + I(x^2)))
#
# Plot the results.
#
par(mfrow=c(1,1))
plot(x, y)
curve(beta.0 + beta.1*x + beta.2*x^2, lwd=2, col="Red", add=TRUE)
53,362 | Getting a second-order polynomial trend line from a set of data | Your model will be:
$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2$$
Where $\beta_0$, $\beta_1$ and $\beta_2$ are parameters to be estimated from the data. Standard practice is to find values of these parameters such that the sum of squares:
$$ \sum_{i=1}^{n}\left[ y_i - (\beta_0 + \beta_1 x_i + \beta_2x_i^2) \right]^2$$
is minimized. In words, we are looking for coefficients of the polynomial such that the fitted values of the polynomial are as close to the observations as possible. In matrix/vector notation what we want is the vector $\vec{\beta}$ which satisfies:
$$\vec{y} = \mathbf{X}\vec{\beta} $$
where $\vec{\beta} = [\beta_0,\beta_1,\beta_2]^T$, $\vec{y}=[y_1,\cdots,y_n]^T$ and
$$ \mathbf{X} = \begin{bmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 \end{bmatrix} $$
As we cannot invert the matrix $\mathbf{X}$ (it's not square for one thing), we solve the equation as follows:
$$ \mathbf{X}^{T}\vec{y} = \mathbf{X}^{T}\mathbf{X}\vec{\beta} \\
(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\vec{y} = \vec{\beta} $$
Actually, computationally speaking, inverting the matrix $\mathbf{X}^{T}\mathbf{X}$ is not the most efficient way of solving for $\vec{\beta}$ (see the Wikipedia entry on linear least squares) but it will give the right results.
Once you have found the vector of polynomial coefficients then you can evaluate the polynomial for any new value of $x$ you wish to predict.
An example of how to find values of $\vec{\beta}$ in R, manually:
X <- cbind(1,x,x^2)
beta <- solve(t(X) %*% X) %*% t(X) %*% y
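As an aside, a more numerically stable sketch that avoids forming the inverse explicitly is to reuse the QR decomposition on the same X and y defined above:
beta <- qr.coef(qr(X), y)   # least-squares solution via QR, no explicit matrix inverse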
Or just using the lm() function to fit a linear model which will exploit those efficient computational methods (and provide many more details on the model fit):
fit <- lm(y~x+I(x^2))
beta <- coef(fit)
A note for the uninitiated: when we specify the model in the call to lm() using R model syntax an intercept term (i.e. the column of $1$s in $\mathbf{X}$) is automatically implicitly included. Hence we can just write y~x+I(x^2) rather than y~1+x+I(x^2).
53,363 | Is least squares the standard method to fit a 3 parameters Gaussian function to some x and y data? | Use logistic regression with a Gaussian link.
The count of simultaneous responses for a given value of $x$, written $y(x)$, is the outcome of $n=100$ independent Bernoulli trials whose chance of success is given by the Gaussian function. Letting $\theta$ stand for the three parameters (unknown, to be estimated), let's write the value of that Gaussian at $x$ as $\mu(x, \theta)$. Then the probability of observing $y(x)$ successes, with $0 \le y(x) \le n$, is
$${\Pr}_\theta(y(x)) = \binom{n}{y(x)} \mu(x,\theta)^{y(x)} \left(1-\mu(x,\theta)\right)^{n-y(x)}.$$
The likelihood of a set of independent observations arising from the same underlying Gaussian is the product of these expressions. The part of the product that varies with $\theta$ is obtained from the last two factors. They will tend to be extremely small (because we are dropping the binomial coefficients), so about the only reasonable way to handle them on a computer is through their logarithms. Thus the part of the log likelihood that varies with $\theta$ is
$$\Lambda(\theta) = \sum_{x}\left(y(x)\log(\mu(x,\theta)) + (n-y(x))\log(1-\mu(x,\theta))\right).$$
Logistic regression with a Gaussian link maximizes this log likelihood.
To make sure the Gaussian peak does not exceed $1$, we might choose to parameterize its amplitude using a function that ranges from $0$ to $1$. Here is one convenient parameterization:
$$\mu(x, \theta) = \mu(x, (m,s,a)) = \frac{\exp(-\frac{1}{2} \left(\frac{x-m}{s}\right)^2)}{1 + \exp(-a)}.$$
The parameter $m$ is the mode of the Gaussian, $s$ (a positive number) is its spread, and $a$ (some real number) determines the amplitude, increasing with increasing $a$.
Use a multivariate nonlinear optimization procedure suitable for smooth functions. (Avoid specifying any constraints at all by using $s^2$ instead of $s$ as a parameter, if necessary.) Given some vaguely reasonable estimates of the parameters, it should have no trouble finding the global optimum.
As an example, here is a detailed implementation of the fitting procedure in R using data from the question. It is modified from code for a four-parameter least-squares fit of a Gaussian shown in an answer at Linear regression best polynomial (or better approach to use)?.
The fit is good: the standardized residuals do not become extreme and given the small amount of data, they are reasonably close to zero.
The estimated values are $\hat{m} = -0.033$, $\hat{s} = 0.127$, and $\hat{a} = 16.8$ (whence the estimated amplitude is $1/(1+\exp(-\hat{a})) = 1 - 0.00000005$).
y <- c(10,45,90,100,60,10,5) # Counts of successes at each `x`.
N <- length(y)
n <- rep(100, N) # Numbers of trials at each `x`.
x <- seq(-3,3,1)/10
#
# Define a Gaussian function (of three parameters {m,s,a}).
#
gaussian <- function(x, theta) {
m <- theta[1]; s <- theta[2]; a <- theta[3];
exp(-0.5*((x-m)/s)^2) / (1 + exp(-a))
}
#
# Compute an unnormalized log likelihood (negated).
# `y` are the observed counts,
# `n` are the trials,
# `x` are the independent values,
# `theta` is the parameter.
#
likelihood.log <- function(theta, y, n, x) {
p <- gaussian(x, theta)
-sum(y*log(p) + (n-y)*log(1-p))
}
#
# Estimate some starting values.
#
m.0 <- x[which.max(y)]; s.0 <- (max(x)-min(x))/4; a.0 <- 0
theta <- c(m.0, s.0, a.0)
#
# Do the fit.
#
fit <- nlm(likelihood.log, theta, y, n, x)
#
# Plot the results.
#
par(mfrow=c(1,1))
plot(c(min(x),max(x)), c(0,1), main="Data", type="n", xlab="x", ylab="y")
curve(gaussian(x, fit$estimate), add=TRUE, col="Red", lwd=2) #$
points(x, y/n, pch=19)
#
# Compute residuals.
#
mu.hat <- gaussian(x, fit$estimate)
residuals <- (y - n*mu.hat) / sqrt(n * mu.hat * (1-mu.hat))
boxplot(residuals, horizontal=TRUE, xlab="Standardized residuals")
#
# (Compute the variance-covariance matrix in terms of the Hessian
# of the log likelihood at the estimates, etc., etc.)
(Although R has a way of performing these calculations automatically (by means of a custom link function for its glm Generalized Linear Model function), the interface is so cumbersome and so uninstructive I have elected not to use it for this illustration.)
53,364 | Is least squares the standard method to fit a 3 parameters Gaussian function to some x and y data? | One answer to your question is that least squares is a standard way to fit Gaussians to "some x and y data".
However, your data are special in that they are fractions (probabilities) that must lie in [0,1]. What you are doing is fitting a curve which is for probability densities, so it is unaware of the upper limit. The fitted maximum exceeds 1 and is for a value of x just less than 0. So, from one point of view you fitted an impossible curve. Densities greater than 1 are certainly possible but probabilities greater than 1 certainly are not.
You could scale your probabilities to densities, assuming that each is for a bin width 0.1. However, that would then ignore the fact that your largest value is the largest possible value, so you can't win everything either way.
What you should be doing is a largely open question, as you would need to tell us more about your problem and your data to get better advice. But various comments spring to mind:
Even if you have a really good reason to be focusing on Gaussians, I would think of other models too, asymmetric as well as symmetric.
The data are presented as if already on a scale with time interval 0.1 s. If your data are finer, you should use exact times. If this is the actual time resolution, then be advised that the data are rather coarse for discriminating between possible models. Naturally, there may be excellent scientific or technical reasons for your time interval.
One model popular in ecology for proportional abundances has been called the Gaussian logit, which in your terms is a logit for y fitted as a quadratic, with predictors $x$ and $x^2$. Taking each value to represent 100 repetitions, I fitted this model in Stata:
clear
set obs 7
gen x = (_n - 4)/10
mat y = (10,45,90,100,60,10,5)
gen y = y[1,_n]
gen xsq = x^2
glm y x xsq , link(logit) family(binomial 100)
gen py = y/100
twoway function invlogit(_b[_cons]+ _b[x]*x + _b[xsq]*x^2), ra(x) || scatter py x , scheme(s1color) legend(order(1 "predicted" 2 "observed")) xla(-.3(.1).3) ytitle(proportion)
A reference for this model is Jongman, R.H.G., ter Braak, C.J.F. and van Tongerren, O.F.R. 1995. Data analysis in community and landscape ecology.
Cambridge: Cambridge University Press.
The model fit is not especially encouraging. Note that in turn a limitation of, or more neutrally a fact about, this model is that it can never predict a maximum value of exactly 1 (or a minimum value of exactly 0). This model does concur with your least-square result in fitting a maximum for a negative x.
However, you (apparently) don't have enough data for much more flexible models to be tried comfortably, unless there are parsimonious physically- or psychometrically-based models tailored to the purpose that can be tried.
53,365 | Lasso cross validation | Basically you select whichever $\alpha$ gives you the lowest error rate (on a validation set). So to be complete cross-validation entails the following steps:
Split your data in three parts: training, validation and test.
Train a model with a given $\alpha$ on the train-set and test it on the validation-set and repeat this for the full range of possible $\alpha$ values in your grid.
Pick the best $\alpha$ value (i.e. the one that gives the lowest error)
Once you have completed this, retrain a new model using this optimal value of $\alpha$ on (trainset+validationset).
You can now evaluate your model on the test-set.
I haven't gone over your code, but your graph suggests also looking for even lower values of $\alpha$. Note, however, that learning curves usually look something like this when evaluated on the validation set:
That is, your error should be high in the beginning, drop to a low and then go somewhat up again.
The comment of @frank-harrell relates to the fact that you should probably repeat this experiment a few times to get robust estimates of your $\alpha$ values. For this you can also use k-fold cross-validation like you did so you should be fine.
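For a concrete illustration in R, the glmnet package wraps this whole search in cv.glmnet; note that what is called $\alpha$ above (the regularisation strength) is called lambda in glmnet, whose own alpha argument is the elastic-net mixing weight (1 = lasso). The data below are hypothetical stand-ins for your own.
library(glmnet)
set.seed(42)
x <- matrix(rnorm(100 * 20), nrow = 100)   # hypothetical predictor matrix
y <- rnorm(100)                            # hypothetical response
cv.fit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)   # k-fold CV over a grid of penalties
plot(cv.fit)                     # CV error curve over the penalty grid
cv.fit$lambda.min                # penalty with the lowest CV error
coef(cv.fit, s = "lambda.min")   # coefficients refitted at that penalty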
53,366 | Lasso cross validation | Before going that far, run 5 bootstrap replications of the lasso procedure to make sure the features selected are stable. Otherwise your final interpretation of the lasso result will be suspect.
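A minimal sketch of such a stability check with glmnet (the predictor matrix x and response y are hypothetical; in practice use your own data): refit the lasso on bootstrap resamples and count how often each feature is selected.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), nrow = 100)   # hypothetical predictors
y <- rnorm(100)                            # hypothetical response
selected <- replicate(5, {                 # 5 bootstrap replications
  idx <- sample(nrow(x), replace = TRUE)
  cv  <- cv.glmnet(x[idx, ], y[idx], alpha = 1)
  as.numeric(as.vector(coef(cv, s = "lambda.min"))[-1] != 0)   # 1 if a feature is kept
})
rowSums(selected)   # how many of the 5 replications selected each feature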
53,367 | Simple question about variance | In the details of ?var we find:
The denominator n - 1 is used which gives an unbiased estimator of the
(co)variance for i.i.d. observations.
so you should have $2/2=1$ instead. See here for some more details.
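A small illustration with made-up numbers whose squared deviations from the mean sum to 2, so that the n - 1 denominator gives 2/2 = 1:
x <- c(0, 1, 2)                          # hypothetical data
sum((x - mean(x))^2)                     # 2
sum((x - mean(x))^2) / (length(x) - 1)   # 2 / 2 = 1
var(x)                                   # 1, matching the n - 1 definition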
53,368 | Choice of distance metric when data is combination text/numeric/categorical | You are referring to a very hard problem of finding the best possible metric. It is a hard problem even for unimodal data; the multimodal case you are referring to is a great challenge. There are basically three possibilities:
use some primitive metric, like Euclidean distance, treating everything as numbers (you can convert categorical values to some values as well). This will yield rather poor results, but is the simplest possibility and gives you time for analysis and optimization of the rest of the system.
perform deep analysis of your data and/or find an expert able to design a good metric. This is the hardest to do, but it would yield the best results (assuming that you have access to a real expert).
add an additional abstraction layer to your problem and treat finding this metric as an optimization problem on its own. There are numerous studies showing how one can find good multi-modal metrics for any kind of data by formalizing it as an optimization problem and applying one of many known mathematical solvers. Some examples of such studies would be:
Multi Modal Distance Metric Learning
Learning Multi-modal Similarity
53,369 | Choice of distance metric when data is combination text/numeric/categorical | First of all, you must realize that there isn't a single "correct" distance for your data.
Given two coordinates, Euclidean distance is appropriate when looking at a short distance without restrictions on travel. Manhattan distance is usually more appropriate when you are in a city with a grid layout. However, for more accurate travel times, you will need to look at the underlying road network and the network distance therein. Oh, and if you are looking at intercontinental coordinates, the various different formulas for approximating great-circle distance may be a good choice.
So even for 2d coordinates on earth, there is no "correct" distance without side information and extra data.
Now for non-vectorspace mixed-type data, there do exist a number of metrics that you may want to understand and try out, such as Gower's similarity measure.
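As a sketch, Gower's measure is available in R through the daisy function of the cluster package; the tiny mixed-type data frame below is invented for illustration.
library(cluster)
df <- data.frame(age    = c(23, 45, 31),                 # numeric
                 city   = factor(c("NY", "LA", "NY")),   # categorical
                 income = c(40e3, 85e3, 52e3))           # numeric
d <- daisy(df, metric = "gower")   # pairwise dissimilarities scaled to [0, 1]
as.matrix(d)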
53,370 | Logistic regression diagnostics when predictors all have skewed distributions | The distribution of the predictors is almost irrelevant in regression, as you are conditioning on their values. Changing to factors is not needed unless there are very few unique values and some of them are not well populated.
But with very skewed predictors the model may fit better upon a transformation. I tend to use $\sqrt{}$ and $x^{\frac{1}{3}}$ because these allow zeros, unlike $\log(x)$. Then when the sample size allows I expand the transformed variables in a regression spline to make them fit adequately, e.g.
require(rms)
cuber <- function(x) x^(1/3)
f <- lrm(y ~ rcs(cuber(x1), 4) + rcs(cuber(x2), 4) + rcs(x3, 5) + sex)
rcs means "restricted cubic spline" (natural spline) and the number after the variable or transformed variable is the number of knots (two more than the number of nonlinear terms in the spline). When you make the distribution more symmetric (here with cube root), it frequently requires fewer knots to get a good fit.
AIC can help in choosing the number of knots $k$ if you force all variables to have the same number of knots. Below, $k=0$ is the same as linearity after initial transformation.
for(k in c(0, 3:7)) {
f <- lrm(y ~ rcs(cuber(x1),k) + rcs(cuber(x2),k) + rcs(x3, k) + sex)
print(AIC(f))
}
53,371 | Cronbach's alpha negative result | Two main causes:
Small sample size. Even if the assumptions are met and the reliability is decent, an estimate computed from a particular sample can be negative, just as a sample mean is not equal to the population mean. Somewhat surprisingly, whereas experimental psychologists tend to be obsessed with statistical testing and are conditioned to ask for a p-value for everything, psychometrics textbooks generally don't care about tests/confidence intervals for reliability estimates. One reason is that their authors assume that any scale development effort will have at the very least hundreds of observations, but there is some sampling variability in reliability estimates anyway.
Negatively worded items/items with strong negative correlation with the underlying factor.
Remedies:
Recode negatively worded items
Get more data (you probably don't have enough)
Remove some item(s)
Forget alpha (it has some well-known limitations and chances are that it doesn't tell you what you think it's telling you + what were you planning to do with it anyway?)
See also the references in Reverse scoring when question is stated in a negative fashion
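To illustrate the first remedy above, here is a minimal sketch using the psych package, which can flag and reverse negatively keyed items when computing alpha; the 5-item data frame is purely hypothetical.
library(psych)
set.seed(7)
items <- data.frame(q1 = sample(1:5, 50, replace = TRUE),   # hypothetical 5-item scale
                    q2 = sample(1:5, 50, replace = TRUE),
                    q3 = sample(1:5, 50, replace = TRUE),
                    q4 = sample(1:5, 50, replace = TRUE),
                    q5 = sample(1:5, 50, replace = TRUE))
out <- alpha(items, check.keys = TRUE)   # warns about and reverses negatively keyed items
out$total$raw_alpha                      # the reliability estimate after any reverse-coding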
53,372 | Is the restricted Boltzmann machine a type of graphical model (Bayesian network)? | Boltzmann machines are graphical models, but they are not Bayesian networks. They're a kind of Markov random field, which has undirected connections between the variables, while Bayesian networks have directed connections.
The difference between the two kinds of connections can be subtle, but the main advantage of undirected connections for an RBM is that inferring the hidden states associated with a set of visible states is much easier when the connections are undirected and there are many hidden variables.
53,373 | Is the restricted Boltzmann machine a type of graphical model (Bayesian network)? | An RBM is an undirected graphical model; see e.g. Wikipedia or this paper.
53,374 | How to compare percentages for two categories from one sample? | Are you interested in being able to say that one of the percentages is greater than the other?
In the cases you want to do it, do they always add to 100%?
In that case, it's easy - you compare one of the percentages to 50%; if it's bigger than 50% the complementary one is smaller than 50%.
If you want to compare two proportions where there are other outcomes as well (say compare A and B but where there's a C, D and E), then you can take the A proportion of just the A's and B's and again, compare to 50%.
So these cases reduce to a one-sample-proportions test, for which there are many useful pages to be found by a search on that term (not only tutorial pages with worked examples but also videos), but let me give you a rough outline here of what calculations are involved.
It might help to think of it like a coin-toss. If I was tossing a fair coin 586 times, what proportions of 'heads' are reasonably likely to come up? If we painted an 'A' on the head-side and a 'B' on the other side we would record the proportion of 'A's we see when they're equally likely.
This is what the distribution of the proportion of heads looks like if we do that "toss a fair coin 586 times and count the proportion of heads (A's)" ... and then repeat the experiment ten thousand times:
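That simulation takes only a couple of lines in R; the histogram it draws is the picture being described (the seed and number of bins are arbitrary).
set.seed(123)
prop.A <- rbinom(10000, size = 586, prob = 0.5) / 586   # proportion of A's in each of 10000 experiments
hist(prop.A, breaks = 50, main = "Proportion of A's when A and B are equally likely")
range(prop.A)   # in these runs, never outside 40% to 60%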
This shows that in your example of a total 586, if A's and B's are equally likely, the proportion of A's will nearly always be between 45% and 55%, and essentially never outside 40% to 60% (didn't happen once in ten thousand experiments).
If your sample size (586 in your example) is not too small - more than about 10-15 should be sufficient - you can use a normal approximation to the sample proportion.
If $\hat{p}$ is the sample proportion (512/586) and $p_0$ is the hypothesized "no difference" proportion (50%), you can calculate:
$z=\frac{\hat{p} - p_0}{\sqrt{p_0 (1-p_0)/n}}$
and use normal tables to calculate whether the difference is more than can be explained by chance.
In your example,
$z = \frac{(512/586 - 0.5)}{\sqrt{0.5 \times 0.5/586}} = 18.09$
You could very safely conclude that the proportion is different from 50% - and thereby higher than 50%, by inspection - and therefore it must logically also be higher than B. (It's possible to construct an explicit test of the difference in A and B proportions, but it gives the same result as the usual one sample test, so you might as well do the 'standard' thing.)
see http://en.wikipedia.org/wiki/Statistical_hypothesis_testing
At a typical significance level of 5%, you'd conclude that the A-proportion was different from 50% if $|z|$ is greater than 1.96 (at that sample size an A-count of 317 or more or 269 or less would lead you to conclude that the A proportion is different from 50%).
It's hard to tell how much of this you already know, so I'll let you ask questions to guide how much detail you need.
You don't need to obtain or learn sophisticated software for this test; you can easily do it by hand or in Excel (or a free equivalent like LibreOffice's Calc), for example, if you already have something like that. (If you want to try it, however, the package R is free and quite powerful, and very widely used by statisticians - so help on it is easy to find online.)
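Since R has just been mentioned, here is a sketch of the calculation there, using the 512-out-of-586 counts from the question:
p.hat <- 512 / 586
z <- (p.hat - 0.5) / sqrt(0.5 * 0.5 / 586)   # about 18.09, as above
2 * pnorm(-abs(z))                           # two-sided p-value from the normal approximation
prop.test(512, 586, p = 0.5)                 # built-in one-sample proportion test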
If you have Excel, you can calculate a p-value using NORMSDIST.
In small samples, you can't use the normal approximation and should use the exact binomial distribution, but again, that's reasonably straightforward to do in something like Excel.
[If your sample size is moderate - say between about 15 and 50 - I'd suggest using a continuity correction for the normal approximation, or using the exact binomial. Once you're past there (for a proportion of 50%), it usually doesn't make enough difference to bother with.]
In the cases you want to do it, do they always add to 100%?
In that case, it's easy - you compare one of | How to compare percentages for two categories from one sample?
Are you interested in being able to say that one of the percentages is greater than the other?
In the cases you want to do it, do they always add to 100%?
In that case, it's easy - you compare one of the percentages to 50%; if it's bigger than 50% the complementary one is smaller than 50%.
If you want to compare two proportions where there are other outcomes as well (say compare A and B but where there's a C, D and E), then you can take the A proportion of just the A's and B's and again, compare to 50%.
So these cases reduce to a one-sample-proportions test, for which there are many useful pages to be found by a search on that term (not only tutorial pages with worked examples but also videos), but let me give you a rough outline here of what calculations are involved.
It might help to think of it like a coin-toss. If I was tossing a fair coin 586 times, what proportions of 'heads' are reasonably likely to come up? If we painted an 'A' on the head-side and a 'B' on the other side we would record the proportion of 'A's we see when they're equally likely.
This is what the distribution of the proportion of heads looks like if we do that "toss a fair coin 586 times and count the proportion of heads (A's)" ... and then repeat the experiment ten thousand times:
This shows that in your example of a total 586, if A's and B's are equally likely, the proportion of A's will nearly always be between 45% and 55%, and essentially never outside 40% to 60% (didn't happen once in ten thousand experiments).
If your sample size (586 in your example) is not too small - more than about 10-15 should be sufficient - you can use a normal approximation to the sample proportion.
If $\hat{p}$ is the sample proportion (512/586) and $p_0$ is the hypothesized "no difference" proportion (50%), you can calculate:
$z=\frac{\hat{p} - p_0}{\sqrt{p_0 (1-p_0)/n}}$
and use normal tables to calculate whether the difference is more than can be explained by chance.
In your example,
$z = \frac{(512/586 - 0.5)}{\sqrt{0.5 \times 0.5/586}} = 18.09$
You could very safely conclude that the proportion is different from 50% - and thereby higher than 50%, by inspection - and therefore it must logically also be higher than B. (It's possible to construct an explicit test of the difference in A and B proportions, but it gives the same result as the usual one sample test, so you might as well do the 'standard' thing.)
see http://en.wikipedia.org/wiki/Statistical_hypothesis_testing
At a typical significance level of 5%, you'd conclude that the A-proportion was different from 50% if $|z|$ is greater than 1.96 (at that sample size an A-count of 317 or more or 269 or less would lead you to conclude that the A proportion is different from 50%).
It's hard to tell how much of this you already know, so I'll let you ask questions to guide how much detail you need.
You don't need to obtain or learn sophisticated software for this test; you can easily do it by hand or in Excel (or a free equivalent like LibreOffice's Calc), for example, if you already have something like that. (If you want to try it, however, the package R is free and quite powerful, and very widely used by statisticians - so help on it is easy to find online.)
If you have Excel, you can calculate a p-value using NORMSDIST.
In small samples, you can't use the normal approximation and should use the exact binomial distribution, but again, that's reasonably straightforward to do in something like Excel.
[If your sample size is moderate - say between about 15 and 50 - I'd suggest using a continuity correction for the normal approximation, or using the exact binomial. Once you're past there (for a proportion of 50%), it usually doesn't make enough difference to bother with.] | How to compare percentages for two categories from one sample?
Are you interested in being able to say that one of the percentages is greater than the other?
In the cases you want to do it, do they always add to 100%?
In that case, it's easy - you compare one of |
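As a quick cross-check, here is the same calculation in R (an added sketch, not part of the original answer), reproducing the normal-approximation z statistic above and running the exact binomial test for comparison with the 512-out-of-586 counts from the question:
x <- 512; n <- 586; p0 <- 0.5
p_hat <- x / n
z <- (p_hat - p0) / sqrt(p0 * (1 - p0) / n)   # normal approximation
z                                             # about 18.09
2 * pnorm(-abs(z))                            # two-sided p-value, effectively 0
binom.test(x, n, p = p0, alternative = "two.sided")   # exact test for comparison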
53,375 | How to compare percentages for two categories from one sample? | The "statistical test" your teacher is referring to would be a binomial test. It is an exact test, meaning that it yields the exact probability (p value) of your observed proportion (or a more extreme one) occurring under the null hypothesis. In this case, your null hypothesis appears to be that the proportion of people who pick category A (or alternatively, B) is .50.
For example, in R you would run the test as follows:
binom.test(x=512, n=586, p=.5, alternative="two.sided")
which yields p < .001, meaning that if it is the case that the true proportion in the population is .50, then the odds of finding a sample this size with this number of people picking category A (or an even more extreme proportion, i.e. a proportion closer to 1 or between 0 and 1-.874) is less than .001--a very small probability. You would conclude that people "tend indeed to be more concerned about Category A than B."
See here for more software options to run the test (e.g. SPSS, SAS).
53,376 | curse of dimensionality & nonparametric techniques | Imagine that effects aren't additive. Imagine we only have to worry about two values of each predictor, $x_i =$ Low and $x_i =$ High. Then there'd be $2^p$ values our function would need to provide estimates for. In practice, there are more than two values in each dimension to worry about.
http://en.wikipedia.org/wiki/Curse_of_dimensionality
53,377 | curse of dimensionality & nonparametric techniques | Further to Glen's answer, I think it is nice to think of the problem in terms of the volume/concentration of high dimensional space. Directly from the wikipedia article:
There is an exponential increase in volume associated with adding extra dimensions to a mathematical space. For example, $10^2$=100 evenly-spaced sample points suffice to sample a unit interval (a "1-dimensional cube") with no more than $10^{-2}$=0.01 distance between points; an equivalent sampling of a 10-dimensional unit hypercube with a lattice that has a spacing of $10^{-2}$=0.01 between adjacent points would require $10^{20}$ sample points.
So basically as the dimension increases, the number of points required to provide the same coverage of space increases exponentially with the dimension.
This means that for non-parametric methods, which rely on there being points locally to base an estimator on, far more points are required as the volume explodes.
The curse is also often considered in terms of computational feasibility of trying to estimate functions in high dimensions, maybe you could look into this if you're still looking for insight.
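To make that growth concrete, here is a tiny R illustration (an added sketch, not part of the original answer) of how many evenly-spaced points are needed to keep a spacing of 0.01 in a d-dimensional unit cube:
d <- 1:10
points_needed <- 100^d        # (1 / 0.01)^d lattice points in dimension d
data.frame(dimension = d, points = points_needed)   # reaches 1e20 at d = 10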
53,378 | Summary of residuals in R | The 5 number summary of the residuals that you see are the values that would be used to construct a boxplot. The residuals are not necessarily errors of the estimate, although you could think of them that way; it depends on what you are trying to estimate / predict.
The value people typically use as a 'prediction' is $\hat y$. This is actually the predicted mean of the conditional distribution of $y$, that is $\mathcal N(\mu_Y|x_i, \sigma^2_\varepsilon)$. In this case, the residuals help you understand the rest of that conditional distribution (for example, its variance).
Alternatively, you can use $\hat y$ as a point prediction for the value of a new observation when $X=x_i$. This is reasonable because, a-priori, the mean of a normal distribution is the single most likely point value to occur. However, you will nonetheless pretty much always be wrong. The distribution of your residuals can tell you how far off the value of a new observation will be on average from $\hat y$ (i.e., their SD).
Residuals are also useful in helping you estimate properties of the sampling distributions of your sample statistics (specifically, your betas), and in diagnosing possible problems with your model.
No matter how you think about / use your residuals, those values are simply a non-parametric summary of their distribution. (Note that the above discussion is generic, and is disregarding the fact that the model displayed in the question is a Poisson regression and the residuals displayed are deviance residuals.)
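As a small illustration (an added toy example using an ordinary linear model rather than the Poisson model in the question), you can see exactly where that five-number summary comes from:
set.seed(1)
x <- runif(100)
y <- 2 + 3 * x + rnorm(100)
fit <- lm(y ~ x)
fivenum(residuals(fit))    # min, lower hinge, median, upper hinge, max
summary(residuals(fit))    # the same summary, plus the mean
boxplot(residuals(fit))    # the five numbers drawn as a boxplot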
53,379 | Summary of residuals in R | Those numbers are deviance residuals.
$$r_{d_i} = \operatorname{sign}(y_i -\hat{\mu_i}) \sqrt{d_i}$$
where $d_i$ is the individual observation's contribution to the deviance.
They are NOT like residuals in ordinary regression, which would be $y_i -\hat{\mu_i}$
Conceptually, Pearson residuals are more like a notion of a regression residual - a scaled $y_i -\hat{\mu_i}$.
However, Pearson residuals may tend to be quite skewed in GLMs and have other issues, while deviance residuals tend to be more normal.
The glm function in R returns a function that defines $d_i$ for each model.
e.g. 1
utils::data(anorexia, package="MASS")
anorex.1 <- glm(Postwt ~ Prewt + Treat + offset(Prewt),
family = gaussian, data = anorexia)
anorex.1$family$dev.resids
function (y, mu, wt)
wt * ((y - mu)^2) #<---- d(i) for a gaussian model
<bytecode: 0x0bef2398>
<environment: 0x0d214114>
e.g. 2
clotting <- data.frame(
u = c(5,10,15,20,30,40,60,80,100),
lot1 = c(118,58,42,35,27,25,21,19,18),
lot2 = c(69,35,26,21,18,16,13,12,12))
glm(lot1 ~ log(u), data=clotting, family=Gamma)$family$dev.resids
function (y, mu, wt)
-2 * wt * (log(ifelse(y == 0, 1, y/mu)) - (y - mu)/mu) #<- d(i) for Gamma model
<bytecode: 0x0cd3d11c>
<environment: 0x0cd3fd94>
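A short added check (not part of the original answer) that residuals(fit, type = "deviance") really is sign(y - mu) * sqrt(d_i), using the same Gamma example as above:
clotting <- data.frame(
    u    = c(5, 10, 15, 20, 30, 40, 60, 80, 100),
    lot1 = c(118, 58, 42, 35, 27, 25, 21, 19, 18))
fit <- glm(lot1 ~ log(u), data = clotting, family = Gamma)
mu <- fitted(fit)
y  <- clotting$lot1
d  <- fit$family$dev.resids(y, mu, rep(1, length(y)))   # the d(i) function shown above
r_manual <- sign(y - mu) * sqrt(d)
all.equal(as.numeric(r_manual), as.numeric(residuals(fit, type = "deviance")))   # TRUE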
53,380 | How to get the expected counts when computing a chi-squared test? | Using a spreadsheet for a quick-and-dirty check of goodness of fit to a distribution is not a bad idea, especially if somebody has handed you a batch of data in a spreadsheet or you are doing other spreadsheet analyses with these data (or if you want to check up on other software to confirm its accuracy or your understanding of its calculations). Here is what a chi-squared test might look like in a spreadsheet:
The chi-square statistic in this example is $10.37$ with a p-value of $0.409$. The data are singled out with a separate (blue) color; important values are highlighted; and auxiliary values (whose columns I usually hide) are shown in light gray text. (A few calculations are done as cross-checks, such as the values in the "Total/max" line (row D14:H14).)
To accomplish the test,
Before inspecting the data, set up a list of cutpoints (column D) that will define the "bins" into which the data will fall. For instance, if you expect the data to lie between $0$ and $10$ and you have $1500$ values, you might choose to divide this range into bins of widths somewhere between $0.2$ and $1$. (The smaller width of $0.2$ would give an average of $30$ data per bin, but likely some extreme bins would have much smaller counts; the larger width of $1$ gives only $10$ bins, which might only coarsely distinguish the distribution of data.) You do not have to put equal gaps between the bins. Often, for instance, the most extreme bins are made wider to accommodate expected sparse data in the tails of the distribution.
Compute the data counts in each bin (column F). In Excel, use COUNTIF to count data below or equal to each cutpoint (column E) and subtract each pair of successive counts to obtain counts in each bin.
Estimate the parameters of the distribution you are fitting, using Maximum Likelihood (cells B3:C4). (There's some fudging going on here; for more about this, see https://stats.stackexchange.com/a/17148.) For a Normal distribution, the Maximum Likelihood estimates are the mean (AVERAGE) and the uncorrected ("population") standard deviation (STDEVP).
Use the cumulative distribution function to obtain expected values in each bin. In Excel the CDF is NORMDIST. In any platform, the arguments to the CDF are the distribution parameters and the upper bin endpoint. This gives expected total counts for all values up to and including the endpoint (column G). Therefore, subtract the values of the CDF at successive endpoints to obtain expected values in bins (column H). Notice how the calculation of the CDF parallels the use of COUNTIF to find bin counts: the bin counts are the empirical distribution while the CDF gives the reference distribution to which the empirical distribution will be compared.
Apply the chi-squared formula: subtract expected values from bin counts (the residuals); square them; divide by the expectations; add everything up (cell C6). (It is convenient, actually, to modify this a bit in practice: divide the residuals by the square roots of the bin counts (column I): these are interesting in their own right, because very positive or very negative ones indicate precisely how the data deviate from the reference distribution. The chi-squared statistic is the sum of squares of the residuals.)
Compute the chi-squared p-value (using CHIDIST) (cell C7).
Inspect the counts and the residuals. If a sizable proportion of counts are nonzero but less than $5$, the p-value is untrustworthy. If the residuals show a clear pattern, such as progressing steadily from negative to positive back to negative, then you might have evidence of lack of fit, regardless of the chi-squared result. (Plotting the residuals is a good idea, but is not shown here.)
Here are the formulas used in the example:
In this case all residuals are small in size; there is no pattern to them; and the chi-squared p-value is large. We conclude these data look Normally distributed. (In fact, they were generated using draws from a Normal distribution with mean $5$ and standard deviation $2$ via the formula NORMSINV(RAND())*2 + 5.)
Notice that the formulas make extensive use of named ranges. The names correspond systematically to the labels shown in the spreadsheet; for instance, Data is the range A:A headed by "Data" (column A). I warmly recommend always using this technique, because it creates readable formulas and readable formulas are formulas that tend to be correct.
Be careful when reproducing these formulas: although most of the formulas in columns E:I have just been copied down, often the very first and very last formulas in each column are a little different.
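If you want to cross-check the spreadsheet logic in R, here is a sketch of the same steps on simulated data (added code, not the data behind the screenshots): fixed cutpoints, observed bin counts, ML estimates, expected counts from the CDF, then the statistic and p-value.
set.seed(17)
x <- rnorm(1500, mean = 5, sd = 2)       # simulated data, as in the example
brks     <- c(-Inf, 1:9, Inf)            # cutpoints; the extreme bins are wider
observed <- as.vector(table(cut(x, brks)))
m <- mean(x)                             # ML estimate of the mean
s <- sqrt(mean((x - m)^2))               # uncorrected ("population") SD
expected <- length(x) * diff(pnorm(brks, m, s))   # n times the bin probabilities
chisq <- sum((observed - expected)^2 / expected)
pval  <- pchisq(chisq, df = length(observed) - 3, lower.tail = FALSE)  # 2 estimated parameters
c(statistic = chisq, p.value = pval)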
53,381 | How to get the expected counts when computing a chi-squared test? | 1) No test will prove your data is normally distributed. In fact I bet that it isn't.
(Why would any distribution be exactly normal? Can you name anything that actually is?)
2) When considering the distributional form, usually, hypothesis tests answer the wrong question. What's a good reason to use a hypothesis test for checking normality?
I thought that the chi squared test or the Kolmogorov–Smirnov test provide a good indication about the distribution.
Read this
However, if not, what are some usecases they are good for?
That's debatable.
I can think of a few cases where it makes some sense to formally test a distribution. One common use is in testing some random number generating algorithm for generating a uniform or a normal.
3) If you want to test normality, a chi-squared test is a really bad way to do it. Why not, say, a Shapiro-Francia test or say an Anderson-Darling adjusted for estimation? You'll have far more power.
4) What do you mean by 'expected range', specifically?
by expected range I mean in the formula of the chi squared test the relative frequencies: $e_i = n \cdot p_i$
To have expected counts you need to split it up into subranges, and multiply the total count by the probability in the subrange.
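A short sketch of that last point (added code, not the answerer's): split the range into subranges, take the probability of each subrange under the fitted normal, and multiply by n. The dedicated normality tests mentioned above are available in base R (shapiro.test) and, if I remember the package correctly, in nortest (sf.test, ad.test).
x    <- rnorm(200, mean = 10, sd = 3)                 # toy data
brks <- quantile(x, probs = seq(0, 1, by = 0.1))      # ten subranges
p_i  <- diff(pnorm(brks, mean = mean(x), sd = sd(x))) # probability of each subrange
e_i  <- length(x) * p_i                               # expected counts, e_i = n * p_i
shapiro.test(x)                                       # a far more powerful normality test
# install.packages("nortest"); nortest::sf.test(x); nortest::ad.test(x)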
53,382 | Changing sign estimate manually | You can also make every coefficient your birthdate if you like, but you can't really call the result something based in statistics, and it's certainly not OLS. You also can't test it.
Making the coefficient the negative of the LS estimate is not remotely justified by the issues you bring up.
1) First let's deal with knowing this parameter is positive:
If there are a priori restrictions on the value of something, you build them into the estimation - but then (for a simple regression, as you appear to be discussing) you'll get a parameter estimate of zero or almost zero, not $-\hat{\beta}$.
2) Now, the "there's a proxy":
If "X1 can either have a negative or a positive effect on X", then it's not really doing the job of a proxy; it sounds like either a mediator or a moderator; at the very least a covariate -- not a proxy, and you would need both in your model. (Further, it seems like that sentence has the effect backward; for the coefficient of X1 to have the "wrong sign" from what it would have if X1 weren't there, wouldn't X need to be affecting X1?)
Secondly, you can't just say "If the situation were thus, it could change the sign" and then change the sign as if the supposition were true. There's actually two separate fallacies here!
a) you have not given any justification for retaining the magnitude of the estimate. Even if something were making the sign change, apart from some very restricted circumstances (circumstances in which you'd do something else!), it won't leave you with the same magnitude.
b) "If condition $A$ were true, outcome $B$ would happen" does not, of itself imply that condition $A$ is true. This looks like the fallacy of argument from ignorance, combined with the fallacy argument from consequences -- that is it looks like you're saying - "we don't know A didn't happen, and we don't like the outcome that we got, so we assume A happened in order to get the desired outcome")
For the kind of argument you'd like to apply to be valid you'd have to show that all the parts of the thing you think may have been going on actually happened. There could be numerous alternate explanations - the theory could be wrong in this situation. The data could be bad. There might be yet other variables that are important but which aren't present. Still other things might have happened. You need to show those didn't happen.
Even if you could show that all of the things you think happened did, and even if you could rule out all the alternative plausible explanations that critics could come up with, people are still going to be saying "why on earth would you use a proxy that you now can be pretty sure will give the wrong sign?"
You'd also still have to show such an effect had the right magnitude that would justify simply multiplying the result by -1 instead of -0.1 or -7.8 or some other number. This would be a difficult task.
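To see what building the restriction into the estimation (point 1 above) actually does, here is a minimal sketch of box-constrained least squares in base R (an added illustration, with made-up data whose true slope is negative):
set.seed(42)
x <- rnorm(100)
y <- 1 - 0.3 * x + rnorm(100)                    # true slope is negative
rss <- function(b) sum((y - b[1] - b[2] * x)^2)  # residual sum of squares
fit <- optim(c(0, 0), rss, method = "L-BFGS-B",
             lower = c(-Inf, 0))                 # slope constrained to be >= 0
fit$par   # the constrained slope is pushed to (about) zero, not to minus the LS estimate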
53,383 | Changing sign estimate manually | Adding to @glen_b 's excellent answer, the only remotely justifiable reason I could see for doing what you are doing is if you knew, somehow, that X1 was perfectly negatively correlated to X. One way this might happen is if X is a difference between two values and X1 is the reverse difference.
Even if this were the case, however, it would be better to just multiply X1 by -1 and get X, and you wouldn't need a proxy in the first place.
So, in short, if you are going to manipulate the coefficients after running the regression, you might as well not run the regression in the first place.
53,384 | Where can I learn about transforming uniform, random distribution into other distribution | A good way to get a random direction on a 2-sphere might be to choose $z$ uniform in $[-1,1]$, and $\theta$ uniform in $[0,2\pi]$. Then take the point
$$ (\sqrt{1-z^2} \cos \theta, \sqrt{1-z^2} \sin \theta, z).$$
I won't do the math to show why these give points uniformly on a sphere. It's not hard.
For large dimensions, the best way is probably to choose $n$ samples from a Gaussian distribution, $x_1, \ldots, x_n$, and normalize the resulting vector $(x_1, x_2, \ldots, x_n)$. You can see why this works by looking at the probability density function. The function for a Gaussian is $$\frac{1}{\sqrt{2 \pi}} e^{-\frac{1}{2}x^2},$$ so when you multiply $n$ of them you get $$\frac{1}{(2 \pi)^{n/2}} e^{-\frac{1}{2}\sum_i x_i^2},$$ which is clearly spherically symmetric.
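Both constructions are easy to try in R; this is an added sketch of the two recipes described above:
# (z, theta) method for a uniform direction on the 2-sphere
z     <- runif(1, -1, 1)
theta <- runif(1, 0, 2 * pi)
v3    <- c(sqrt(1 - z^2) * cos(theta), sqrt(1 - z^2) * sin(theta), z)
# normalised-Gaussian method, valid in any dimension n
n <- 10
g <- rnorm(n)
v <- g / sqrt(sum(g^2))
sum(v^2)   # equals 1 up to rounding: the point lies on the unit sphere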
53,385 | Where can I learn about transforming uniform, random distribution into other distribution | If you choose 3 independent random numbers between -1 and 1, you are basically choosing a point inside a cube. From that image, you can easily see that, after normalization, the likelihood of your vector pointing along the cube main diagonal is $\sqrt{3}$ larger than that of it pointing along the $x$ axis, simply because there are $\sqrt{3}$ more points inside the cube along the main diagonal than along the direction of any coordinate axis.
As Jerry Schirmer points out, you can take two angles, and build your vector from there, basically using spherical coordinates. The idea is also developed here. Alternatively, you could still generate your three uniform random numbers, and before normalizing them, get rid of them if $x^2 + y^2 + z^2 > 1$, thus effectively limiting your 3-D point to within a sphere.
As for general reading on the subject, you want to look for sampling an arbitrary distribution, although most of what you will find will be for one-dimensional distributions, probably the most general method being inverse transform sampling.
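A minimal sketch of that rejection idea (added code, not the answerer's): resample the cube until the point falls inside the unit ball, then normalise.
random_direction <- function() {
    repeat {
        p  <- runif(3, -1, 1)
        r2 <- sum(p^2)
        if (r2 <= 1 && r2 > 0) return(p / sqrt(r2))   # accept only points inside the ball
    }
}
random_direction()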
53,386 | Best predictive Cox regression model using c-index and cross-validation | There are a number of serious problems with your approach.
You used univariable screening which is known to have a number of problems
You took the variables that passed the screen and fitted a model, then used cross-validation (CV) to validate the model without informing CV of the univariable screening step. This creates a major bias. CV needs to repeat all modeling steps that made use of $Y$.
You are using a non-optimum optimality criterion (the $c$-index; generalized ROC area) whereas the most sensitive criterion will be likelihood-based
Your total sample size is too low by perhaps a factor of 100 for data splitting to be a reliable model validation approach. You will find that if you were to split the training and test samples again you will get completely different genetic signatures and validation performance.
Your original discovery set with 22 events is only large enough to assess a single pre-specified gene.
If the raw data were continuous gene expression values, it is completely invalid to dichotomize at "over expressed".
Validating a model by binning predictions (in your case tertiles of predicted risk) is arbitrary and has very low resolution. The R rms package has a few methods for estimating continuous calibration curves. See for example the function calibrate.
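For reference, the rms workflow referred to in the last point looks roughly like the sketch below. This is an added illustration with made-up names (mydata, time, status, x1, x2), and the exact arguments should be checked against ?cph, ?validate and ?calibrate.
library(rms)
dd <- datadist(mydata); options(datadist = "dd")
f <- cph(Surv(time, status) ~ x1 + x2, data = mydata,
         x = TRUE, y = TRUE, surv = TRUE, time.inc = 365)
validate(f, B = 200)                    # optimism-corrected indexes (Dxy etc.)
cal <- calibrate(f, u = 365, B = 200)   # continuous calibration curve at one year
plot(cal)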
53,387 | How to convert to gaussian distribution with given mean and standard deviation | Dimitriy’s answer is ok if the grades are Gaussian already. In the general case, just perform quantile renormalization : modify your grades to map their quantiles on the Gaussian quantiles.
The following R code generates normal grades. Pay attention to the use of rank to deal with ties.
# generate uniform grades
grades <- sample(0:100, 50, replace = TRUE)
# map on quantiles
L <- length(grades)
normal.grades <- qnorm( rank(grades)/(L+1), mean = 81, sd = 12)
I have to warn you that many teachers say this (with varying values for mean and standard deviation), but this is just a joke. Don't do this.
I once made a normal qq-plot for around 150 grades. It was almost perfect. This is not very nice, because it's what you expect if answers are random and independent... I never tried that again, since then.
53,388 | How to convert to gaussian distribution with given mean and standard deviation | Standardize the original score by subtracting the sample mean and dividing by the standard deviation. Call that the $z$-score. It will have a mean of zero and standard deviation of one. Then create a rescaled score by multiplying the $z$-score by 12 and adding 81. | How to convert to gaussian distribution with given mean and standard deviation | Standardize the original score by subtracting the sample mean and dividing by the standard deviation. Call that the $z$-score. It will have a mean of zero and standard deviation of one. Then create a | How to convert to gaussian distribution with given mean and standard deviation
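In R this is a two-line transformation (an added sketch, with made-up scores):
grades   <- c(67, 72, 80, 85, 90, 95, 55, 78)    # example raw scores
z        <- (grades - mean(grades)) / sd(grades) # z-scores
rescaled <- 81 + 12 * z
c(mean = mean(rescaled), sd = sd(rescaled))      # 81 and 12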
53,389 | How to convert to gaussian distribution with given mean and standard deviation | In fact, you need to use a copula-like transformation. You can use the empirical cdf of the data to transform them into uniformly distributed data and use the inverse CDF of Gaussian to transform them into Gaussian distributed data. | How to convert to gaussian distribution with given mean and standard deviation | In fact, you need to use a copula-like transformation. You can use the empirical cdf of the data to transform them into uniformly distributed data and use the inverse CDF of Gaussian to transform them | How to convert to gaussian distribution with given mean and standard deviation
53,390 | How to convert to gaussian distribution with given mean and standard deviation | As you see there are many ways to get this! :)
Here's my two cents in this:
Box-Cox transform [1,2] your data first so you set the higher order moments (skewness and kurtosis) to the desired values (0 and 3 for the case of a Gaussian). That can be easily done by trying different parameters for the power transformation and testing to see the respective skewness and kurtosis of the transformed sample. Afterwards you follow the idea by Dimitriy; you subtract the sample mean and divide by the standard deviation to make your sample $N(0,1)$ (that won't affect the higher order moments) and then set your desired scale by multiplying the sample by 12 and adding 81.
The power transformation in the first step actually takes care of Douglas' comment on Dimitriy's solution for the "non-Gaussianity" of the original data.
And there you have it, grades $ \sim N(81,144)$ (almost). Truth be told, with such StdDev for a large class of students you'll be expecting to have some people scoring 100+ grades... (0.0557 = 1-pnorm( 100.1, mean=81, sd=12))
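A related sketch (added, not the answerer's code): MASS::boxcox chooses the power by profile likelihood rather than by targeting skewness and kurtosis directly, but it is a convenient way to pick the transformation before standardising and rescaling as above.
library(MASS)
grades <- sample(40:100, 200, replace = TRUE)        # toy positive scores
bc     <- boxcox(lm(grades ~ 1), plotit = FALSE)     # profile likelihood over lambda
lambda <- bc$x[which.max(bc$y)]
g_bc   <- if (abs(lambda) < 1e-8) log(grades) else (grades^lambda - 1) / lambda
z      <- (g_bc - mean(g_bc)) / sd(g_bc)             # standardise
final  <- 81 + 12 * z                                # desired mean and SD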
53,391 | Compute gaussian fit corresponding to a discrete variable | A "Gaussian fit" to a discrete probability distribution makes little or no sense in statistical applications, but mathematically almost a perfect fit can be made to these ordered pairs. Just find values $m$ and $s$ for which the cumulative Gaussian $$\Phi(x;m,s)=\frac{1}{\sqrt{2 \pi s^2}}\int_{-\infty}^x \exp(-(t-m)^2/(2s^2))\, dt$$ closely agrees with the data.
To illustrate the computation, here is an example of least-squares fitting (in R). Because the probabilities do not sum exactly to $1$, it standardizes them to sum to unity.
# The data
x <- c(90, 125, 180, 250, 355, 500, 710)
p <- c(0.0033, 0.0204, 0.0847, 0.2516, 0.4653, 0.1750, 0.0015)
# Standardize the cumulative probabilities
prob <- cumsum(p); prob <- prob / prob[7]
# Compute sum of squared residuals to a fit
f <- function(q) {
res <- pnorm(x, q[1], q[2]) - prob
sum(res * res)
}
# Find the least squares fit
coeff <-(fit <- nlm(f, c(334, 100)))$estimate
# Plot the fit
plot(x, prob)
curve(pnorm(x, coeff[1], coeff[2]), add=TRUE)
The optimal values are $m \approx 279.5$, $s \approx 80.5$.
53,392 | Adaptive regression splines in earth package R | Yes, there could be several ways. First notice that the model selected is the one with minimal GCV of those fitted. Hence the algorithm has pruned out additional terms as they contribute little to the fit yet add complexity to the model. The issue here is one of parsimony and trying to avoid overfitting.
As the model is selected by GCV, we can choose the penalty per term in the GCV computation; indeed, setting this to -1 results in a model where the GCV criterion is in effect RSS/n, where n is the number of observations.
> model <- earth(y~x, penalty = -1)
> summary(model)
Call: earth(formula=y~x, penalty=-1)
coefficients
(Intercept) 4.8340954
h(x-1400) 0.0014362
h(1400-x) -0.0033889
h(x-2047) -0.0027031
h(x-2500) 0.0012920
Selected 5 of 5 terms, and 1 of 1 predictors
Importance: x
Number of terms at each degree of interaction: 1 4 (additive model)
GCV 0.5231077 RSS 20.4012 GRSq 0.8307609 RSq 0.8307609
We can see that this model has retained all 5 terms that were added to the spline pool during the initialisation of the forward pass. Note also that the addition of three more terms has only reduced the RSS by ~2 units, or about 10% of the RSS for the simpler model.
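For comparison, here is a minimal sketch (an illustrative addition, not from the original answer; the output will depend on the data at hand) of fitting the same model with and without the per-term penalty:
library(earth)
# Fit with the default per-term penalty and with penalty = -1 (no penalty),
# mirroring the call shown above, then compare the selected models.
model_default <- earth(y ~ x)
model_nopen   <- earth(y ~ x, penalty = -1)
summary(model_default)  # typically pruned back to fewer terms
summary(model_nopen)    # keeps all terms from the forward pass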
The other arguments of interest here are nk and nprune but neither seemed to have an effect on the resulting model for this sample of data. | Adaptive regression splines in earth package R | Yes, there could be several ways. First notice that the model selected is the one with minimal GCV of those fitted. Hence the algorithm has pruned out additional terms as they contribute little to the | Adaptive regression splines in earth package R
Yes, there could be several ways. First notice that the model selected is the one with minimal GCV of those fitted. Hence the algorithm has pruned out additional terms as they contribute little to the fit yet add complexity to the model. The issue here is one of parsimony and trying to avoid overfitting.
As the model is selected by GCV, we can choose the penalty per term in the GCV computation; indeed, setting this to -1 results in a model where the GCV criterion is in effect RSS/n, where n is the number of observations.
> model <- earth(y~x, penalty = -1)
> summary(model)
Call: earth(formula=y~x, penalty=-1)
coefficients
(Intercept) 4.8340954
h(x-1400) 0.0014362
h(1400-x) -0.0033889
h(x-2047) -0.0027031
h(x-2500) 0.0012920
Selected 5 of 5 terms, and 1 of 1 predictors
Importance: x
Number of terms at each degree of interaction: 1 4 (additive model)
GCV 0.5231077 RSS 20.4012 GRSq 0.8307609 RSq 0.8307609
We can see that this model has retained all 5 terms that were added to the spline pool during the initialisation of the forward pass. Note also that the addition of three more terms has only reduced the RSS by ~2 units, or about 10% of the RSS for the simpler model.
The other arguments of interest here are nk and nprune but neither seemed to have an effect on the resulting model for this sample of data. | Adaptive regression splines in earth package R
Yes, there could be several ways. First notice that the model selected is the one with minimal GCV of those fitted. Hence the algorithm has pruned out additional terms as they contribute little to the |
53,393 | Confusion related to MCMC technique | In Standard Monte Carlo integration, as you correctly state, you draw samples from a distribution and approximate some expectation using the sample average rather than calculating a difficult or intractable integral. So you are exploiting the Strong Law of Large Numbers to do this:
\begin{equation} \mathbb{E}_{\pi} [t(\boldsymbol{\theta})] = \int t(\boldsymbol{\theta}) \pi(\boldsymbol{\theta})d\boldsymbol{\theta} \approx \frac{1}{n}\sum_{i=1}^n t(\boldsymbol{\theta}_i) \end{equation}
where each $\boldsymbol{\theta}_i \sim \pi$, provided $n$ is suitably large.
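For instance, a tiny illustration in R (my own sketch, not part of the original answer): estimating $\mathbb{E}[\theta^2]=1$ under $\pi = N(0,1)$ by a sample average.
set.seed(1)
theta <- rnorm(1e5)   # i.i.d. draws from pi = N(0, 1)
mean(theta^2)         # Monte Carlo estimate of E[theta^2]; should be close to 1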
The problem is that in some cases you can't actually draw samples from $\pi$ directly - if you only know $c\pi$ for example, where $c$ is some unknown constant. This often happens in Bayesian Inference problems, where we have $posterior \propto likelihood \times prior$. Here one approach is to use some sort of re-sampling method, whereby we draw samples from some distribution $q$, and then adjust them in some way to make inferences about $\pi$. In rejection sampling, for example, some samples from $q$ are rejected, so that those remaining resemble a data set that would be likely under $\pi$. In importance sampling, the samples from $q$ are weighted based on their importance in inferring information about $\pi$.
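As an illustrative sketch of the importance-sampling idea (an addition with an arbitrary toy target, not part of the original answer), here is self-normalized importance sampling for a target known only up to a constant, using a Student-t proposal:
set.seed(2)
target_unnorm <- function(th) exp(-0.5 * (th - 3)^2)   # proportional to a N(3, 1) density; constant omitted
q_draws <- rt(1e5, df = 3)                             # draws from the proposal q
w <- target_unnorm(q_draws) / dt(q_draws, df = 3)      # importance weights
sum(w * q_draws) / sum(w)                              # self-normalized estimate of E[theta], about 3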
Markov Chain Monte Carlo (MCMC) is another re-sampling technique, but this time samples are dependent on each other in some way. This can help or hinder inference, but crucially in high-dimensional problems MCMC methods can often be much more efficient than rejection sampling or importance sampling, so have become popular here.
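And a minimal random-walk Metropolis-Hastings sketch in R (again my own illustrative addition, using the same toy target known only up to a constant):
set.seed(3)
log_target <- function(th) -0.5 * (th - 3)^2   # log of the unnormalized target
n <- 5e4
chain <- numeric(n)
chain[1] <- 0                                  # arbitrary starting value
for (i in 2:n) {
  prop <- chain[i - 1] + rnorm(1)              # symmetric random-walk proposal
  if (log(runif(1)) < log_target(prop) - log_target(chain[i - 1])) {
    chain[i] <- prop                           # accept
  } else {
    chain[i] <- chain[i - 1]                   # reject: stay at the current value
  }
}
mean(chain[-(1:1000)])                         # estimate of E[theta] after burn-in, about 3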
This website may be of interest:
http://www.lancs.ac.uk/~jamest/Group/stats3.html
(An informal guide to Monte Carlo methods written by some PhD students at Lancaster University)
A couple of good text books are:
Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference - Gamerman & Lopes (2006)
Markov Chain Monte Carlo in Practice - Gilks, Richardson & Spiegelhalter (1995). | Confusion related to MCMC technique | In Standard Monte Carlo integration, as you correctly state, you draw samples from a distribution and approximate some expectation using the sample average rather than calculating a difficult or intra | Confusion related to MCMC technique
In Standard Monte Carlo integration, as you correctly state, you draw samples from a distribution and approximate some expectation using the sample average rather than calculating a difficult or intractable integral. So you are exploiting the Strong Law of Large Numbers to do this:
\begin{equation} \mathbb{E}_{\pi} [t(\boldsymbol{\theta})] = \int t(\boldsymbol{\theta}) \pi(\boldsymbol{\theta})d\boldsymbol{\theta} \approx \frac{1}{n}\sum_{i=1}^n t(\boldsymbol{\theta}_i) \end{equation}
where each $\boldsymbol{\theta}_i \sim \pi$, provided $n$ is suitably large.
The problem is that in some cases you can't actually draw samples from $\pi$ directly - if you only know $c\pi$ for example, where $c$ is some unknown constant. This often happens in Bayesian Inference problems, where we have $posterior \propto likelihood \times prior$. Here one approach is to use some sort of re-sampling method, whereby we draw samples from some distribution $q$, and then adjust them in some way to make inferences about $\pi$. In rejection sampling, for example, some samples from $q$ are rejected, so that those remaining resemble a data set that would be likely under $\pi$. In importance sampling, the samples from $q$ are weighted based on their importance in inferring information about $\pi$.
Markov Chain Monte Carlo (MCMC) is another re-sampling technique, but this time samples are dependent on each other in some way. This can help or hinder inference, but crucially in high-dimensional problems MCMC methods can often be much more efficient than rejection sampling or importance sampling, so have become popular here.
This website may be of interest:
http://www.lancs.ac.uk/~jamest/Group/stats3.html
(An informal guide to Monte Carlo methods written by some PhD students at Lancaster University)
A couple of good text books are:
Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference - Gamerman & Lopes (2006)
Markov Chain Monte Carlo in Practice - Gilks, Richardson & Spiegelhalter (1995). | Confusion related to MCMC technique
In Standard Monte Carlo integration, as you correctly state, you draw samples from a distribution and approximate some expectation using the sample average rather than calculating a difficult or intra |
53,394 | Confusion related to MCMC technique | First of all, Monte Carlo methods are not intended just for the estimation of integrals. They are intended to sample random variables from a distribution, and the "by-product" is the approximation of integrals.
The reason behind the Markov chains is that MCMC methods/algorithms, e.g. Metropolis-Hastings, produce a Markov chain whose stationary distribution is exactly the one from which you desire to obtain random draws. And the beauty of MCMC is that you need to know your distribution only up to a normalizing constant. Moreover, the desire to obtain random samples from your distribution is not confined to obtaining estimates of the expectation or variance. | Confusion related to MCMC technique | First of all, Monte Carlo methods are not intended just for the estimation of integrals. They are intended to sample random variables from a distribution, and the "by-product" is the approximation of integrals.
The | Confusion related to MCMC technique
First of all, Monte Carlo methods are not intended just for the estimation of integrals. They are intended to sample random variables from a distribution, and the "by-product" is the approximation of integrals.
The reason behind the Markov chains is that MCMC methods/algorithms, e.g. Metropolis-Hastings, produce a Markov chain whose stationary distribution is exactly the one from which you desire to obtain random draws. And the beauty of MCMC is that you need to know your distribution only up to a normalizing constant. Moreover, the desire to obtain random samples from your distribution is not confined to obtaining estimates of the expectation or variance. | Confusion related to MCMC technique
First of all, Monte Carlo methods are not intended just for the estimation of integrals. They are intended to sample random variables from a distribution, and the "by-product" is the approximation of integrals.
The |
53,395 | Confusion between sample and population | It depends on to whom you wish to generalize your final results. If your sole interest was just to see how these people react and you don't care about inference, they are your population. If you wish to use the results to somehow infer how other similar people may behave under influence, then they are samples. Most studies tend to do the latter.
Also, for the inference to be valid, the sample should be drawn from the population with a known probability. The more the sampling deviates from being probability-based, the shakier the inference becomes.
It may also be worth mentioning that assigning an exposure such as alcohol or marijuana to human subjects will probably not pass an ethical review process. Don't jump into answering the design question before making sure that it's not a trick question. | Confusion between sample and population | It depends on to whom you wish to generalize your final results. If your sole interest was just to see how these people react and you don't care about inference, they are your population. If you wish | Confusion between sample and population
It depends on to whom you wish to generalize your final results. If your sole interest was just to see how these people react and you don't care about inference, they are your population. If you wish to use the results to somehow infer how other similar people may behave under influence, then they are samples. Most studies tend to do the latter.
Also, for the inference to be valid, the sample should be drawn from the population with a known probability. The more the sampling deviates from being probability-based, the shakier the inference becomes.
It may also be worth mentioning that assigning an exposure such as alcohol or marijuana to human subjects will probably not pass an ethical review process. Don't jump into answering the design question before making sure that it's not a trick question. | Confusion between sample and population
It depends on to whom you wish to generalize your final results. If your sole interest was just to see how these people react and you don't care about inference, they are your population. If you wish |
53,396 | Confusion between sample and population | As @Michael said, this would be considered a sample. One additional question is "from what population"? That is harder to say, but this is a problem that bedevils many studies. Often researchers will assume that a sample like this is "almost random" or something like that. What is the population, though? All college students? All students in this particular university? All students taking this type of class? All humans?
To consider these two groups as populations would be to say that you aren't interested in generalizing the study at all.
This is tricky stuff that often gets little attention. | Confusion between sample and population | As @Michael said, this would be considered a sample. One additional question is "from what population"? That is harder to say, but this is a problem that bedevils many studies. Often researchers will | Confusion between sample and population
As @Michael said, this would be considered a sample. One additional question is "from what population"? That is harder to say, but this is a problem that bedevils many studies. Often researchers will assume that a sample like this is "almost random" or something like that. What is the population, though? All college students? All students in this particular university? All students taking this type of class? All humans?
To consider these two groups as populations would be to say that you aren't interested in generalizing the study at all.
This is tricky stuff that often gets little attention. | Confusion between sample and population
As @Michael said, this would be considered a sample. One additional question is "from what population"? That is harder to say, but this is a problem that bedevils many studies. Often researchers will |
53,397 | Confusion between sample and population | For the purposes of statistical tests, this would be considered a sample. For example, the average time for group 1 to complete the puzzle is $\bar X_1$ and not $\mu_1$, the population mean.
Using a small group (or even a large group) to determine the effect of alcohol on the 'completion time' of the puzzle cannot give you $\mu_1$. The population mean is usually something 'imaginary' (unless we know the exact distribution), often approximated by $\bar X$. | Confusion between sample and population | For the purposes of statistical tests, this would be considered a sample. For example, the average time for group 1 to complete the puzzle is $\bar X_1$ and not $\mu_1$, the population mean.
Using a small | Confusion between sample and population
For the purposes of statistical tests, this would be considered a sample. For example, the average time for group 1 to complete the puzzle is $\bar X_1$ and not $\mu_1$, the population mean.
Using a small group (or even a large group) to determine the effect of alcohol on the 'completion time' of the puzzle cannot give you $\mu_1$. The population mean is usually something 'imaginary' (unless we know the exact distribution), often approximated by $\bar X$. | Confusion between sample and population
For the purposes of statistical tests, this would be considered a sample. For example, the average time for group 1 to complete the puzzle is $\bar X_1$ and not $\mu_1$, the population mean.
Using a small |
53,398 | Eighth order moment | By the language of the theorem, he's clearly referring to a random $D$-dimensional vector. This means that each $y_d$ is a random variable; for the sake of notation let's denote it by $Y_d$ (I really hate when authors don't make the distinction). With that said, the $n$-th order moment about $x_0$ of $Y_d$ is defined as:
$$\mathbb{E}[(Y_d-x_0)^n]=\int(y_d-x_0)^nf_{Y_d}(y_d)dy_d,$$
where $f_{Y_d}(y_d)$ is the probability density function of the random variable $Y_d$. Having a finite eighth-order moment means that
$$\mathbb{E}[(Y_d-x_0)^8]=\int(y_d-x_0)^8f_{Y_d}(y_d)dy_d<\infty,$$
for some $x_0$. It is usual, however, to define the $n$-th order moment as the moment about $x_0=\mathbb{E}[Y_d]$, and assume zero-mean random variables, i.e., assume that $\mathbb{E}[Y_d]=0$ which would imply that what the authors meant was that
$$\mathbb{E}[Y_d^8]=\int y_d^8f_{Y_d}(y_d)dy_d<\infty.$$ | Eighth order moment | By the language on the theorem, he's clearly referring to a random $D$-dimensional vector. This means that each $y_d$ is a random variable; for the sake of notation lets denote it by $Y_d$ (I really h | Eighth order moment
By the language of the theorem, he's clearly referring to a random $D$-dimensional vector. This means that each $y_d$ is a random variable; for the sake of notation let's denote it by $Y_d$ (I really hate when authors don't make the distinction). With that said, the $n$-th order moment about $x_0$ of $Y_d$ is defined as:
$$\mathbb{E}[(Y_d-x_0)^n]=\int(y_d-x_0)^nf_{Y_d}(y_d)dy_d,$$
where $f_{Y_d}(y_d)$ is the probability density function of the random variable $Y_d$. Having a finite eighth-order moment means that
$$\mathbb{E}[(Y_d-x_0)^8]=\int(y_d-x_0)^8f_{Y_d}(y_d)dy_d<\infty,$$
for some $x_0$. It is usual, however, to define the $n$-th order moment as the moment about $x_0=\mathbb{E}[Y_d]$, and assume zero-mean random variables, i.e., assume that $\mathbb{E}[Y_d]=0$ which would imply that what the authors meant was that
$$\mathbb{E}[Y_d^8]=\int y_d^8f_{Y_d}(y_d)dy_d<\infty.$$ | Eighth order moment
By the language of the theorem, he's clearly referring to a random $D$-dimensional vector. This means that each $y_d$ is a random variable; for the sake of notation let's denote it by $Y_d$ (I really h
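As a small illustrative addition to the eighth-moment answer above (not part of the original): for a standard normal variable $\mathbb{E}[Y^8] = 105$, which is easy to check by Monte Carlo in R, whereas a heavy-tailed distribution such as Student-t with 5 degrees of freedom has no finite eighth moment and the sample analogue never stabilizes.
set.seed(4)
mean(rnorm(1e6)^8)        # sample eighth moment of N(0,1); close to 105
mean(rt(1e6, df = 5)^8)   # t_5 has an infinite eighth moment: wildly different values across runs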
53,399 | When sampling without replacement from a given distribution, what's the total expected weight of the last k sampled items? | This method for creating a random permutation is used by poker players for estimating their expected value in a tournament which pays prizes for places lower than first. It is called the Independent Chip Model or ICM. The probabilities don't simplify that much although you can do better than the naive summation. I'll give a few different ways to describe this random permutation.
Choose a winner proportionally. Eliminate that player, and rescale the probabilities, then choose the second place finisher proportionally. Repeat until all places are assigned.
Each player has some integer number of chips in the proportion $p_1 : p_2 : ... :p_k$. Randomly eliminate one chip at a time so that each chip has an equal chance to be removed. When a player's last chip is eliminated, the player is knocked out. Players are ranked in reverse order of being knocked out.
I don't think it is obvious that these generate the same distribution on permutations. The second requires rational ratios between the probabilities, and it is not obvious that doubling the number of chips leaves the distribution unchanged. However, a third description connects the two.
Each player has some integer number of chips in the proportion $p_1 : p_2 : ... :p_k$. Shuffle the chips. Sort the players by their highest chips. If you reveal the chips one by one from the bottom, you get the second method. If you reveal the chips from the top, you get the first.
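A brief simulation sketch in R (my own addition; the example weights are arbitrary) of the first description — sequentially choosing finishers in proportion to the remaining $p_i$, which matches the documented behaviour of R's sample() with prob weights and replace = FALSE — can be used to estimate finishing probabilities or, for the original question, the expected total weight of the last $k$ selected items:
set.seed(5)
p <- c(0.5, 0.3, 0.2)   # example selection weights (arbitrary)
k <- 2                  # size of the "last k" group of interest
n_sim <- 1e5
last_k_weight <- replicate(n_sim, {
  draw_order <- sample(seq_along(p), length(p), replace = FALSE, prob = p)
  sum(p[tail(draw_order, k)])   # total weight of the last k items drawn
})
mean(last_k_weight)     # Monte Carlo estimate of the expected weight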
Anyway, there are a few known results about the equities according to the ICM. For example, I proved that if prizes are nonincreasing, then the equity is concave, so players should be risk-averse in heads-up pots. Also, there are some ICM calculators, such as my program ICM Explorer, which you can download and use to calculate the finishing probabilities for up to $10$ players.
I haven't thought much about your particular problem, but I think the second description, eliminating chips to knock players out, may be helpful. I did look at the probabilities of finishing last, or the case of $k=1$, for some particular cases. | When sampling without replacement from a given distribution, what's the total expected weight of the | This method for creating a random permutation is used by poker players for estimating their expected value in a tournament which pays prizes for places lower than first. It is called the Independent C | When sampling without replacement from a given distribution, what's the total expected weight of the last k sampled items?
This method for creating a random permutation is used by poker players for estimating their expected value in a tournament which pays prizes for places lower than first. It is called the Independent Chip Model or ICM. The probabilities don't simplify that much although you can do better than the naive summation. I'll give a few different ways to describe this random permutation.
Choose a winner proportionally. Eliminate that player, and rescale the probabilities, then choose the second place finisher proportionally. Repeat until all places are assigned.
Each player has some integer number of chips in the proportion $p_1 : p_2 : ... :p_k$. Randomly eliminate one chip at a time so that each chip has an equal chance to be removed. When a player's last chip is eliminated, the player is knocked out. Players are ranked in reverse order of being knocked out.
I don't think it is obvious that these generate the same distribution on permutations. The second requires rational ratios between the probabilities, and it is not obvious that doubling the number of chips leaves the distribution unchanged. However, a third description connects the two.
Each player has some integer number of chips in the proportion $p_1 : p_2 : ... :p_k$. Shuffle the chips. Sort the players by their highest chips. If you reveal the chips one by one from the bottom, you get the second method. If you reveal the chips from the top, you get the first.
Anyway, there are a few known results about the equities according to the ICM. For example, I proved that if prizes are nonincreasing, then the equity is concave, so players should be risk-averse in heads-up pots. Also, there are some ICM calculators, such as my program ICM Explorer, which you can download and use to calculate the finishing probabilities for up to $10$ players.
I haven't thought much about your particular problem, but I think the second description, eliminating chips to knock players out, may be helpful. I did look at the probabilities of finishing last, or the case of $k=1$, for some particular cases. | When sampling without replacement from a given distribution, what's the total expected weight of the
This method for creating a random permutation is used by poker players for estimating their expected value in a tournament which pays prizes for places lower than first. It is called the Independent C |
53,400 | When sampling without replacement from a given distribution, what's the total expected weight of the last k sampled items? | This may not be directly relevant if what you put as the question is the only thing that you are concerned with. However, if you in fact need to create an unequal probability sample, and you asked only for a small technical part of the process that you are struggling with, here's some broader context.
Sampling without replacement with unequal probabilities is an extremely messy procedure. Brewer and Hanif (1982) outlined about 50 algorithms to perform sampling of several units in a row in such a way that the ultimate probabilities of selection match the target $p_1 \ge p_2 \ge \ldots$ -- simply rescaling probabilities does not do the trick, and the "probabilities" do not sum up to 1, as you've discovered the hard way. As a major innovation in this field of work, there have been papers in the second half of the 1990s (Tille 1996, Deville and Tille 1998) proposing elimination and sample-splitting procedures that are arguably more straightforward than the procedures described by Brewer. | When sampling without replacement from a given distribution, what's the total expected weight of the | This may not be directly relevant if what you put as the question is the only thing that you are concerned with. However, if you in fact need to create an unequal probability sample, and you asked onl | When sampling without replacement from a given distribution, what's the total expected weight of the last k sampled items?
This may not be directly relevant if what you put as the question is the only thing that you are concerned with. However, if you in fact need to create an unequal probability sample, and you asked only for a small technical part of the process that you are struggling with, here's some broader context.
Sampling without replacement with unequal probabilities is an extremely messy procedure. Brewer and Hanif (1982) outlined about 50 algorithms to perform sampling of several units in a row in such a way that the ultimate probabilities of selection match the target $p_1 \ge p_2 \ge \ldots$ -- simply rescaling probabilities does not do the trick, and the "probabilities" do not sum up to 1, as you've discovered the hard way. As a major innovation in this field of work, there have been papers in the second half of the 1990s (Tille 1996, Deville and Tille 1998) proposing elimination and sample-splitting procedures that are arguably more straightforward than the procedures described by Brewer. | When sampling without replacement from a given distribution, what's the total expected weight of the
This may not be directly relevant if what you put as the question is the only thing that you are concerned with. However, if you in fact need to create an unequal probability sample, and you asked onl |
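To illustrate the point made in the last answer that simply rescaling (or sequentially drawing with) the target probabilities does not reproduce them as inclusion probabilities, here is a small hedged sketch in R (my own addition; the weights are an arbitrary toy example):
set.seed(6)
p <- c(0.45, 0.35, 0.20)   # target weights for a sample of size 2 (arbitrary example)
n_sim <- 1e5
sel <- replicate(n_sim, seq_along(p) %in% sample(seq_along(p), 2, prob = p))
rowMeans(sel)   # empirical inclusion probabilities under sequential proportional draws
2 * p           # what inclusion probabilities proportional to p would have to equal; they differ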