49,401
Random forest cross validation for feature selection, imbalanced datasets
Your class1 and class2 counts of 588 and 4709 do not add up to 5267 but to 5297. Assuming you have a 5297x26 set of regressors, I can estimate the random forest with the call you posted:
data <- data.frame(matrix(rnorm(5297*26), ncol=26), label=c(rep('class1', 588), rep('class2', 4709)))
randomForest(label~., data=data, importance=TRUE, proximity=TRUE, replace=TRUE, sampsize=c(588,588))
Perhaps you could use stratified sampling with a constant fraction of each class; if you wanted 2/3 of each class you could use sampsize=c(392, 3139).
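A minimal sketch of the stratified-fraction idea, reusing the simulated data above and computing sampsize from the class counts instead of hard-coding them (the 2/3 fraction is purely illustrative):
library(randomForest)
data$label <- factor(data$label)   # make sure a classification forest is grown
frac <- 2/3                        # illustrative fraction of each class per tree
n_per_class <- round(frac * table(data$label))
rf_strat <- randomForest(label ~ ., data = data, replace = TRUE, sampsize = c(n_per_class))
rf_strat                           # OOB confusion matrix reflects the stratified sampling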
49,402
Random forest cross validation for feature selection, imbalanced datasets
Any method that requires that you discard data in order to use it is defective. You may have been tempted to do this because you intend to use a discontinuous improper accuracy scoring rule such as proportion "classified" "correctly". That particular accuracy score is arbitrarily manipulated by the prevalence of positive cases, unlike the concordance probability ($c$-index; ROC area) and other measures of predictive discrimination.
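As a hedged base-R illustration of the contrast drawn here (all numbers and names are simulated, not taken from the question), the snippet below computes both the proportion "classified correctly" at a 0.5 cutoff and the concordance probability for the same predictions; with a rare positive class the former is high largely because negatives dominate:
set.seed(1)
n <- 1000
y <- rbinom(n, 1, 0.1)                  # low prevalence of positive cases
p_hat <- plogis(-2 + 2 * y + rnorm(n))  # toy predicted probabilities
mean((p_hat > 0.5) == (y == 1))         # proportion "classified correctly"
# concordance probability (c-index / ROC area): P(a random positive outscores a random negative)
pos <- p_hat[y == 1]; neg <- p_hat[y == 0]
cmp <- outer(pos, neg, "-")
(sum(cmp > 0) + 0.5 * sum(cmp == 0)) / (length(pos) * length(neg))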
49,403
Time Series Forecasting vs Linear Regression Extrapolation
The main reason why using ordinary least squares regression is frowned upon in modeling time series data is that the error terms are correlated with each other (this is called autocorrelation). If this is the case, then your standard errors from OLS will be incorrect, which affects hypothesis testing. There are several specialized techniques you can use to account for autocorrelation, such as autoregressive models with lags, generalized least squares, and HAC (heteroskedasticity and autocorrelation consistent) standard errors.
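A small base-R sketch of the problem and one remedy, using simulated data (everything here is illustrative): OLS residuals from a regression with AR(1) errors show clear autocorrelation, and refitting the same regression with an AR(1) error structure via arima() gives standard errors that account for it.
set.seed(1)
n <- 200
x <- rnorm(n)
e <- arima.sim(model = list(ar = 0.8), n = n)       # autocorrelated errors
y <- 1 + 2 * x + e
fit_ols <- lm(y ~ x)
acf(resid(fit_ols))                                 # residual autocorrelation violates the OLS assumption
fit_ar1 <- arima(y, order = c(1, 0, 0), xreg = x)   # same regression, AR(1) errors
fit_ar1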
49,404
VIF calculation in regression
It is important to address multicollinearity among all the explanatory variables, as there can be linear correlation within a group of variables (three or more) even when none of their pairwise correlations is large. The threshold for discarding explanatory variables based on the Variance Inflation Factor is subjective. Here is a recommendation from The Pennsylvania State University (2014): VIF is a measure of how much the variance of the estimated regression coefficient $b_k$ is "inflated" by the existence of correlation among the predictor variables in the model. A VIF of 1 means that there is no correlation among the $k^{th}$ predictor and the remaining predictor variables, and hence the variance of $b_k$ is not inflated at all. The general rule of thumb is that VIFs exceeding 4 warrant further investigation, while VIFs exceeding 10 are signs of serious multicollinearity requiring correction. Remember to always stick to the hypothesis you formulated beforehand to investigate the relationship between the variables, and keep the predictors that make the most sense in explaining the response variable. Multicollinearity is just as important in logistic regression as in other types of regression. See: Logistic Regression - Multicollinearity Concerns/Pitfalls.
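A minimal base-R sketch of the VIF calculation itself, on simulated data (illustrative only): each predictor is regressed on all the others, and $VIF_k = 1/(1 - R^2_k)$.
set.seed(1)
n <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- x1 + x2 + rnorm(n, sd = 0.3)   # nearly a linear combination of x1 and x2
X <- data.frame(x1, x2, x3)
vif <- sapply(names(X), function(k) {
  r2 <- summary(lm(reformulate(setdiff(names(X), k), response = k), data = X))$r.squared
  1 / (1 - r2)
})
vif   # values above roughly 4 (and certainly 10) flag predictors worth a closer look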
49,405
Inter-rater agreement for Likert scale
Krippendorff's alpha, originally developed in the field of content analysis, is well suited to ordinal ratings such as Likert-scale ratings. It has several advantages over some other measures such as Cohen's Kappa, Fleiss's Kappa, and Cronbach's alpha: it can handle more than 2 raters; it is robust to missing data; and it accommodates different types of scales (nominal, ordinal, etc.). It also accounts for chance agreement better than some other measures like Cohen's Kappa. Calculation of Krippendorff's alpha is supported by several statistical software packages, including R (via the irr package), SPSS, etc. Below are some relevant papers that discuss Krippendorff's alpha, including its properties and implementation, and compare it with other measures:
Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77-89.
Krippendorff, K. (2004). Reliability in Content Analysis: Some Common Misconceptions and Recommendations. Human Communication Research, 30(3), 411-433. doi: 10.1111/j.1468-2958.2004.tb00738.x
Chapter 3 in Krippendorff, K. (2013). Content Analysis: An Introduction to Its Methodology (3rd ed.). Sage.
There are some additional technical papers on Krippendorff's website.
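A brief sketch of the R route mentioned above, assuming the irr package's kripp.alpha() function (the raters-in-rows matrix layout and all of the values are illustrative; NA marks a missing rating):
library(irr)
ratings <- rbind(r1 = c(5, 4, 3, 4, 1, 2, 3, 4, 5, NA),
                 r2 = c(5, 4, 4, 4, 2, 2, 3, 5, 5, 3),
                 r3 = c(4, 4, 3, 5, 1, 3, NA, 4, 5, 3))
kripp.alpha(ratings, method = "ordinal")   # the ordinal metric suits Likert-type ratings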
49,406
Logistic regression with time series predictor data
Your question sounds very much like you are interested in discrete time event history analysis (aka discrete time survival analysis, aka a logit hazard model), which answers the question of whether and when an event will occur. For example, equation 1 gives the logit hazard, where discrete time periods (up to period $T$) are indicated by $d_{1}, \dots, d_{T}$, and you may condition your model on $p$ predictors $X_{1}, \dots, X_{p}$. This gives you a hazard estimate as in equation 2. These equations specify a conditional hazard function with a fully discrete parameterization of time, although you could instead specify a conditional hazard function that is constant over time, or is a linear or polynomial function of time period, or even a hybrid of polynomial functions of period plus some discrete time indicators. Your predictors can be constant over time or time-varying, so I see no reason why you could not also include lagged or differenced functions of the predictors to model autocorrelation. $$\mathrm{logit}\left(h\left(t,X_{1t},\dots,X_{pt}\right)\right) = \alpha_{1}d_{1} + \dots + \alpha_{T}d_{T} + \beta_{1}X_{1t} + \dots + \beta_{p}X_{pt}\tag{1}$$ $$\hat{h}\left(t,X_{1t},\dots,X_{pt}\right) = \frac{e^{\hat{\alpha}_{1}d_{1} + \dots + \hat{\alpha}_{T}d_{T} + \hat{\beta}_{1}X_{1t} + \dots + \hat{\beta}_{p}X_{pt}}}{1 + e^{\hat{\alpha}_{1}d_{1} + \dots + \hat{\alpha}_{T}d_{T} + \hat{\beta}_{1}X_{1t} + \dots + \hat{\beta}_{p}X_{pt}}}\tag{2}$$ One need not use a logit hazard model (indeed one could use probit, complementary log-log, robit, or other binomial link functions). If you are using Stata, see also the dthaz package by typing net describe dthaz, from(https://alexisdinno.com/stata).
References
Singer, J. and Willett, J. (1993). It's about time: Using discrete-time survival analysis to study duration and the timing of events. Journal of Educational and Behavioral Statistics, 18(2):155–195.
Singer, J. D. and Willett, J. B. (2003). Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press, New York, NY.
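A minimal base-R sketch of fitting equation 1 (everything simulated and illustrative): the data are expanded to person-period form, with one row per subject per period at risk and a 0/1 event indicator, and the logit hazard is fit with glm() using one dummy per period plus the predictor.
set.seed(1)
n <- 300; T_max <- 6
pp <- do.call(rbind, lapply(1:n, function(i) {
  x <- rnorm(1)                                  # a time-constant predictor
  h <- plogis(-3 + 0.3 * (1:T_max) + 0.8 * x)    # true hazard in each period
  ev <- rbinom(T_max, 1, h)
  last <- if (any(ev == 1)) which.max(ev) else T_max   # stop at the event or censor at T_max
  data.frame(id = i, period = 1:last, x = x, event = c(rep(0, last - 1), ev[last]))
}))
fit <- glm(event ~ 0 + factor(period) + x, family = binomial, data = pp)  # equation 1
summary(fit)
head(predict(fit, type = "response"))   # equation 2: the fitted hazards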
49,407
A question on a non-parametric estimating equation
There is an issue here, nicely captured by the latest comment to the question: Doesn't [Equation] 10.5.6 ask us how much $Y$ has to shift so that the median of $Y$ coincides with the median of $n=n_1+n_2$ values? Why is it the median of $X$ instead? When the meaning of the equation is parsed--translated from math-ese into meaningful English--the answer becomes clear. Translating Math to English Let's begin by clarifying the context. Two datasets $X=(x_i,\,i=1,\ldots,n_1)$ and $Y=(y_i,\,i=1,\ldots,n_2)$ are each assumed to arise from sampling two continuous distributions. (The continuity assumption is merely a convenience to allow us to assume all the values are distinct.) The analysis is concerned with shifting the values: all the $y_i$ will be reduced by a constant $\Delta$ (to be determined later). These data--that is, the original $X$ and the shifted $Y_\Delta = (y_1-\Delta, y_2-\Delta, \ldots, y_{n_2}-\Delta)$--are combined into a single dataset $X\cup Y_\Delta = Z_\Delta=(z_i,\,i=1,\ldots,n=n_1+n_2)$ and sorted so that $z_i \le z_{i+1}$ for all $1\le i\lt n$. The position in the sort order is the rank of the value, written here as a function $R$: $$R(z_j)=j.$$ (As $\Delta$ varies, there will occasionally be two-way ties between some elements of $X$ and $Y_\Delta$. The rank function $R$ can be--and usually is--extended by assigning to any tied group the average of the ranks each element of the group would receive if the tie were arbitrarily resolved.) In this fashion each of the elements of $X$ and $Y_\Delta$ receives a rank between $1$ and $n$. The middle rank of $(n+1)/2$ splits the ranks into two halves: the upper half $H^{+}$ consists of all ranks strictly greater than $(n+1)/2$ and the lower half $H^{-}$ consists of all ranks strictly less than $(n+1)/2$. The appearance of the signum function, written $\text{sgn}$, in the equation simply assigns the value $1$ to elements in the upper half and $-1$ to elements in the lower half. Another way to write the sum in Equation 10.5.6 is to collect all the terms that are assigned $+1$ into one group--these are the elements of $Y$ in the upper half--and all the terms that are assigned $-1$ into another group. Evidently the sum reduces to a difference of counts: $$0 = \sum_{i=1}^{n_2} \text{sgn}\left(R(y_i-\Delta) - \frac{n+1}{2}\right) = |Y_\Delta \cap H^{+}| - |Y_\Delta \cap H^{-}|$$ (where $|\cdot|$ denotes the number of elements in a set). In other words, the equation asks that $Y_\Delta$ be balanced (as a subset of $Z_\Delta$) in the sense that the number of elements contained in the upper and lower halves of $Z_\Delta$ be equal: $$|Y_\Delta \cap H^{+}| = |Y_\Delta \cap H^{-}|.$$ (More generally, let's say that any finite set of numbers $W$ is balanced about some value $m$ when there are equally many values in $W$ that exceed $m$ as there are less than $m$. Any such $m$ is called a median of $W$.) The task before us, then, is By how much ($\Delta$) should the data $Y$ be shifted in order to balance $Y_\Delta$ within $Z_\Delta = X \cup Y_\Delta$? Solving the Equation The claim is that $Y$ must be shifted until a median of $Y_\Delta$ (equal to a median of $Y$, $m_Y$, minus $\Delta$) is a median for $X$. This is really two claims, which I will address in order of difficulty. When $\Delta = m_Y - m_X,$ $Y_\Delta$ is balanced. The number of elements of $Z_{m_Y-m_X}$ exceeding $m_X$ is the number of elements of $X$ exceeding $m_X$ plus the number of elements of $Y$ exceeding $m_Y$. 
These each equal the numbers of elements less than $m_X$ and $m_Y,$ respectively, whence $m_X$ is a median of $Z_\Delta$ and $Y_\Delta$ must be balanced in $Z_\Delta$. When $Y_\Delta$ is balanced, $\Delta$ can be expressed as the difference between a median of $X$ and a median of $Y$. The assumption is that the number of elements of $Y_\Delta$ in the upper half of $Z_\Delta$ equals the number in the lower half of $Z_\Delta$. (Equivalently, the median of $Z_\Delta$ is a median of $Y_\Delta$.) Consequently, the number of elements in the upper half of $Z_\Delta$ that are not in $Y_\Delta$ equals the number in the lower half of $Z_\Delta$ that are not in $Y_\Delta$. Since all such elements are in $X$, $X$ is also balanced as a subset of $Z_\Delta$. That means nothing other than that the median of $Z$ is a median of $X$. We conclude that a median of $Z$ coincides both with a median of $X$, $m_X$, and a median of $Y_\Delta$, $m_Y - \Delta$. The desired conclusion follows immediately.
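A small numerical check of the claim in base R (simulated data; the sample sizes are arbitrary): for $\Delta = m_Y - m_X$ the signum sum of Equation 10.5.6 is zero, while for other shifts it is not.
set.seed(1)
x <- rnorm(25)               # n1 = 25
y <- rnorm(30, mean = 2)     # n2 = 30, from a shifted distribution
balance <- function(delta) {
  z <- c(x, y - delta)                                 # combined data Z_Delta
  r <- rank(z)                                         # midranks handle any ties
  sum(sign(r[-(1:length(x))] - (length(z) + 1) / 2))   # the sum in Equation 10.5.6
}
balance(median(y) - median(x))   # zero: Y_Delta is balanced within Z_Delta
balance(0)                       # far from zero without the shift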
49,408
Precision and recall are equal when the size is same
Let's denote the number of users who are correctly classified as experts by $tp$ (true positives), the number of users who are incorrectly classified as non-experts (although they are experts) by $fn$ (false negatives), and the number who are incorrectly classified as experts (although they are not) by $fp$ (false positives). Precision is defined as $p = \frac{tp}{tp + fp}$, while recall is defined as $r = \frac{tp}{tp + fn}$. If precision and recall are equal, $p=r$, and since they have the same numerator $tp$, their denominators must be equal, so $fp = fn$. This means that our algorithm has produced as many false positives as false negatives. This may be a good thing if the data set had an equal split of experts and non-experts, but it may also be a hint that too many non-experts were classified as experts (if there were not many experts in the set), or that too many experts were not classified as experts (if there were many experts in the set).
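A tiny base-R illustration of the point (the counts are made up): with $fp = fn$ the two ratios coincide because they share the numerator $tp$.
tp <- 40; fp <- 15; fn <- 15
precision <- tp / (tp + fp)
recall    <- tp / (tp + fn)
c(precision = precision, recall = recall)   # identical because fp == fn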
49,409
Formula to calculate beta matrix in multivariate analysis [duplicate]
I think I understand what you're asking, but correct me if I'm wrong. The analytical formula for $\beta$ is the same for the multivariate case as for the univariate case: $$ \hat \beta = (X'X)^{-1}X'Y $$ You find this the same way as in the univariate case, by taking the first derivative of the residual sum of squares. It is relatively straightforward to derive using matrix calculus (which is covered in the matrix cookbook linked to by queenbee). You can test whether this solution works in R:
y <- cbind(rnorm(10), rnorm(10), rnorm(10))
x <- cbind(1, rnorm(10), rnorm(10), rnorm(10), rnorm(10), rnorm(10), rnorm(10))
colnames(x) <- c("intercept", paste("x", 1:6, sep = ""))
colnames(y) <- paste("y", 1:3, sep = "")
fit <- lm(y ~ x - 1)
summary(fit)
anaSol <- solve(t(x) %*% x) %*% t(x) %*% y
anaSol
coef(fit) - anaSol
Here's another reference, specifically related to multivariate analysis: http://socserv.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Multivariate-Linear-Models.pdf
49,410
Formula to calculate beta matrix in multivariate analysis [duplicate]
If you have $q$ equations and $p$ independent variables (including a constant) that appear in every equation, the parameter estimates are given by the $p \times q$ matrix: $$M=(X'IX)^{-1}X'IY$$ where
$Y$ is the $n \times q$ matrix of dependent variables,
$X$ is the $n \times p$ matrix of covariates, and
$I$ is the identity matrix.
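Because $I$ is the identity, the expression reduces to the familiar $(X'X)^{-1}X'Y$; a quick base-R check on simulated matrices (purely illustrative) confirms the two forms agree.
set.seed(1)
n <- 10; p <- 3; q <- 2
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))
Y <- matrix(rnorm(n * q), n)
Id <- diag(n)                                    # the identity matrix I
M_with_I    <- solve(t(X) %*% Id %*% X) %*% t(X) %*% Id %*% Y
M_without_I <- solve(t(X) %*% X) %*% t(X) %*% Y
all.equal(M_with_I, M_without_I)                 # TRUE: same p-by-q coefficient matrix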
49,411
Formula to calculate beta matrix in multivariate analysis [duplicate]
Other answers nicely cover how to derive the $\beta$ coefficients. I'm not sure what you mean by $n, \beta$ "put together." But if it means that you'd like to use the coefficients to derive the model's predicted values, that is simply the product $XB$, where $X$ is an $m \times n$ matrix of $m$ observations, each with $n$ independent variables, and $B$ is an $n \times p$ matrix of regression coefficients ($p$ here is the number of dependent variables). To your question about the standard error: for a single independent variable and single coefficient, the formula is $s.e.(\beta_j) = \sqrt{s^2 (X'X)^{-1}_{jj} }$, where $s^2$ is the residual variance, given by the sum of squared residuals $\sum_i (y_i -\hat y_i)^2$ divided by $m - n$. (More here.) To broaden the formula to return a vector of standard errors corresponding to each coefficient: $s.e.(\beta) = \sqrt{s^2 \operatorname{diag}\left((X'X)^{-1}\right) }$, where $\operatorname{diag}$ returns the diagonal of the matrix. To broaden it further to return a matrix of standard errors corresponding to each coefficient in $B$, replace $s^2$ with $S^2$, a row vector containing the $s^2$ for each dependent variable: $s.e.(B) = \sqrt{\operatorname{diag}\left((X'X)^{-1}\right) S^2 }$. Note that the vector returned by $\operatorname{diag}$ should be $n \times 1$ and $S^2$ is $1 \times p$, making their product $n \times p$: standard errors corresponding to the coefficients in $B$. (The square root is again applied element-wise.)
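A base-R sketch of that standard-error matrix on simulated data (dimensions and names are illustrative), cross-checked against the per-response summaries from lm():
set.seed(1)
m <- 50; k <- 3; p <- 2                          # m observations, k regressors, p responses
X <- cbind(1, matrix(rnorm(m * (k - 1)), m))
Y <- X %*% matrix(c(1, 2, -1, 0.5, 0, 1), k, p) + matrix(rnorm(m * p), m)
B_hat <- solve(crossprod(X)) %*% crossprod(X, Y)       # (X'X)^{-1} X'Y
res   <- Y - X %*% B_hat
s2    <- colSums(res^2) / (m - k)                      # one residual variance per response
seB   <- sqrt(outer(diag(solve(crossprod(X))), s2))    # k-by-p matrix of standard errors
seB
fit <- lm(Y ~ X - 1)
lapply(summary(fit), function(s) s$coefficients[, "Std. Error"])   # matches seB column by column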
49,412
Variance of $X_i / \sum\limits_{j=1}^n X_j$
I don't think this question can be fully answered without more information, but we can find a few quantities. Let $n=N$ and define $Y_i={{X_i} \over {\sum_{j=1}^N X_j}}.$ Let the variance of $Y_i$ be $\sigma^2$ and the covariance be $\sigma_{Y_iY_j}=\sigma_{12}.$ We can do this since the $Y_i$ are identically distributed. Calling their sum $Z,$ we know $Z=1,$ so $E[Y_i]= {1 \over N}.$ The variance of $Z$ is zero, so we also know $$\sigma^2_Z=\sum_{i=1}^N \sigma^2 + 2 \sum_{i < j} \sigma_{Y_iY_j}=0.$$ This yields $$N \sigma^2 + N(N-1)\sigma_{12}=0,$$ so we have $$\sigma_{12}={{-\sigma^2} \over {N-1}}.$$ The correlation $\rho$ between $Y_i$ and $Y_j$ is then $$\rho ={{\sigma_{12}} \over {\sigma^2}}={{-1} \over {N-1}}.$$ This is as far as I've been able to get. I've not been able to find any expression for $\sigma^2$ -- perhaps another answer can provide further insight.
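A quick Monte Carlo check of the $-1/(N-1)$ correlation in base R, taking the $X_j$ to be iid Exponential purely for illustration (the point that $\sigma^2$ depends on the distribution of the $X_j$ shows up in the last line):
set.seed(1)
N <- 5; reps <- 20000
X <- matrix(rexp(N * reps), reps, N)
Y <- X / rowSums(X)        # each row holds Y_1, ..., Y_N
cor(Y[, 1], Y[, 2])        # close to -1/(N-1) = -0.25
var(Y[, 1])                # sigma^2 itself is distribution-specific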
49,413
Derivation of uncertainty propagation?
The idea behind the differential calculus is to study potentially complicated functions $f:\mathbb{R}^n \to \mathbb{R}^m$ by means of linear approximations. Everything flows from this single idea. For $x\in \mathbb{R}^n$ "the" linear approximation to $f$ near $x$ (if a unique one exists) is called the "derivative" or "gradient" $Df$. (It typically changes from one point to another and so is a function of $x$.) By definition, then, $$Df: T_x\mathbb{R}^n \to T_{f(x)}\mathbb{R}^m$$ is a linear map from the space of all vectors in $\mathbb{R}^n$ originating at $x$ to the space of all vectors in $\mathbb{R}^m$ originating at $y=f(x)$. It is a theorem (Spivak, Theorem 2-7) that when we use the directions determined by the coordinates $(x_1, x_2, \ldots, x_n)$ for $\mathbb{R}^n$ and $(y_1, y_2, \ldots, y_m)$ for $\mathbb{R}^m$ as bases for $T_x\mathbb{R}^n$ and $T_{f(x)}\mathbb{R}^m$, respectively, then the entries in the $m\times n$ matrix for $Df$ are the partial derivatives $$(Df)_{ij} = \frac{\partial y_i}{\partial x_j}.\tag{1}$$ Suppose the "errors" $u_{x_i}$ in the $x_i$ are constant multiples of the standard deviations of random variables $U_i$ describing uncertainties in the $x_i$. One way to express this is in terms of the squared errors: assume there is some positive number $\lambda$ (often $1$, sometimes $2$, occasionally something else) for which $$u_{x_i}^2 = \lambda^2 \operatorname{Var}(U_i)$$ for each $i$. Applying the linear approximation $Df(x)$, and taking $m=1$ for simplicity (although the general case is scarcely any more difficult), we know from $(1)$ that (at least approximately) the random variable governing uncertainties in $y$ is equal to $$V = (Df(x)) (U_1, U_2, \ldots, U_n)^\prime = \frac{\partial y}{\partial x_1}U_1 + \frac{\partial y}{\partial x_2}U_2 + \cdots + \frac{\partial y}{\partial x_n}U_n.$$ The formula quoted in the question arises when it is assumed there is no correlation among the $U_i$, whence all the covariances in the calculation of $\operatorname{Var}(V)$ vanish, yielding $$\eqalign{u_y^2 = \lambda^2\operatorname{Var}(V) &= \lambda^2\left(\operatorname{Var}\left(\frac{\partial y}{\partial x_1}U_1\right) + \cdots + \operatorname{Var}\left(\frac{\partial y}{\partial x_n}U_n\right)\right)\\ &= \left(\frac{\partial y}{\partial x_1}\right)^2 \lambda^2\operatorname{Var}(U_1)+ \cdots + \left(\frac{\partial y}{\partial x_n}\right)^2 \lambda^2\operatorname{Var}(U_n)\\ &= \left(\frac{\partial y}{\partial x_1}\right)^2 u_{x_1}^2 + \cdots + \left(\frac{\partial y}{\partial x_n}\right)^2 u_{x_n}^2 }$$ QED. The assumptions needed to derive this conclusion provide insight into its meaning, interpretation, and scope:
$f$ must be differentiable at $x$: that is, $Df$ must exist at $x$. This means that within a neighborhood of $x$, $f$ can be approximated to second order in $|x|$ by a linear transformation.
The "errors" $u_{x_i}$ must be multiples of a standard deviation.
The random deviations (that model the uncertainty in $x$) must be uncorrelated.
The typical size of the deviations $U_i = x^{*}_i - x_i$, as measured by the errors $u_{x_i}$, must be small enough that $(Df(x))(x^{*}-x)$ remains a good approximation to $f(x^{*})$; and large deviations (where the approximation no longer holds) must be improbable.
Note that Taylor's Theorem is not needed: the result combines the most basic properties of differentiation with a fundamental property of variances.
Reference: Michael Spivak, Calculus on Manifolds, W. A. Benjamin (1965), Chapters 1 & 2.
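A small numerical sanity check of the resulting formula in base R, for the illustrative choice $y = f(x_1, x_2) = x_1 x_2$ with small, uncorrelated errors and $\lambda = 1$ (all numbers are made up): the propagated uncertainty matches a Monte Carlo standard deviation.
set.seed(1)
x1 <- 10; x2 <- 4
u1 <- 0.1; u2 <- 0.05                               # standard uncertainties
u_y_formula <- sqrt((x2 * u1)^2 + (x1 * u2)^2)      # dy/dx1 = x2, dy/dx2 = x1
reps <- 1e5
y_sim <- (x1 + rnorm(reps, sd = u1)) * (x2 + rnorm(reps, sd = u2))
c(formula = u_y_formula, monte_carlo = sd(y_sim))   # both approximately 0.64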
49,414
Using log-linear models for presence/absence data in wildlife
You can use the binomial GLM, as it provides the freedom to model different sample sizes, $m_i$. So you can use the glm() function as follows: glm(cbind(presence, absence) ~ 1 + treatment + year, family=binomial), where "presence" and "absence" are the numbers of present and absent cases.
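A minimal runnable sketch of that call with simulated counts (the design, variable names, and numbers are all illustrative): presence is the number of visits, out of $m_i$, on which the species was recorded.
set.seed(1)
d <- expand.grid(treatment = factor(c("control", "treated")), year = factor(2010:2013))
d$m <- sample(20:40, nrow(d), replace = TRUE)                  # m_i visits per cell
d$presence <- rbinom(nrow(d), d$m, plogis(-0.5 + (d$treatment == "treated")))
d$absence  <- d$m - d$presence
fit <- glm(cbind(presence, absence) ~ 1 + treatment + year, family = binomial, data = d)
summary(fit)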
49,415
Using log-linear models for presence/absence data in wildlife
I highly recommend The R Book, chapters 15 through 17. If you have only categorical explanatory variables and no continuous ones, Crawley's R Book suggests making a contingency table or converting your binary data into proportion data and then analyzing that. I had the same problem (binary count data and only categorical explanatory variables) and fitted a binomial GLM and later a GLM on proportion data of the counts. Both worked fine, and the outcome was the same.
49,416
Can a deep belief network (stacked RBMS) be used solely as a dataset generator?
Note that initialising neural nets with DBNs is more of a historical anecdote nowadays. Direct supervised training with dropout regularization and piecewise linear activation functions tends to work much better in the presence of many labeled training examples. More direct answers to your question:
1) Using DBN features for linear regression or logistic regression is nothing but fine tuning the net. The difference is that you also adapt the parameters of the full hierarchy if you initialize a neural net.
2) KNN has been used on top of a neural net; see Salakhutdinov, Ruslan, and Geoffrey E. Hinton. "Learning a nonlinear embedding by preserving class neighbourhood structure." International Conference on Artificial Intelligence and Statistics. 2007. The whole method is more complicated though, as the DBN is further fine tuned for KNN usage.
3) DBNs have also been used as inputs to Gaussian processes (Salakhutdinov, Ruslan, and Geoffrey E. Hinton. "Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes." NIPS. 2007).
4) And deep networks have also been used to learn the kernel for SVMs (Yichuan Tang, "Deep Learning using Linear Support Vector Machines").
Other methods, such as GBMs or RFs, seem not to be considered since they are not differentiable and thus fine tuning the feature extractor is not possible.
49,417
Who said, "let the data speak for themselves"?
The earliest instance I can find in Google Books is from The Lookout, Seamen's Church Institute of New York and New Jersey, 1915, so it is unlikely to be John Tukey, who was born that year. A very similar quotation appears to come from records of the Protestant Episcopal Church in the United States of America in 1917, so it is possible both are reprinting something earlier.
49,418
Who said, "let the data speak for themselves"?
For what it's worth, Jaynes (Logic of Science, 2003) attributes it to Fisher:
R. A. Fisher’s maxim: ‘Let the data speak for themselves!’ which has so dominated statistics in this century. The data cannot speak for themselves; and they never have, in any real problem of inference.
49,419
Who said, "let the data speak for themselves"?
Google finds this paper on PubMed, with this exact quote in its title, and it attributes the phrase to Aristotle and Newton: This contrasts sharply with the second approach suggested by Aristotle and revived by Newton in the 18th century that places data at its center: "Let the data speak for themselves."
49,420
Hypothesis testing with Neyman–Pearson (finding cutoff quantile of Poisson dist)
Apart from a few oddnesses with notation, you seem to have almost got it sorted out. This bit is wrong, though: Now to find the value of k where we would reject H0 when the total number of observed errors is no greater than k, can we not just use R with the following code: qpois(0.05 = the alpha level, 5 = value of null) You need to stick with the observed number of errors across the whole sample. With 31 people, you won't observe only a few errors. If you try to work with 'number of errors per person' (i.e. you divide the total number of errors by the number of people), it's no longer a Poisson distribution but a scaled Poisson (it takes values that are multiples of $\frac{1}{31}$), and ignoring the distinction will give you grief. Better to just work with the integer variable, the total count of errors. Under the null hypothesis, $\sum_i y_i$ has a distribution you can easily work out. You can then use the inverse cdf, or quantile function (qpois in R), to compute the critical value you need for your rejection rule, but take care about what it gives you. If it's not clear to you why you would subtract 1 as whuber suggested, you should use the cdf (ppois in R) to compute the actual significance level for the rejection rule you propose; that should make it obvious why. (Alternatively, draw a diagram of what's going on, and it should soon become clear.) Here I've drawn the probability function for the total count of a similar but different problem (one with different numbers). The aim is to find the value $C$ which will yield a type-I error rate that is no more than 0.05 (the sum of all the probabilities up to and including it, as suggested by the brace underneath the diagram). If you keep that straight, you should have little trouble working out what's going on.
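To make the qpois/ppois point concrete, here is a hedged base-R sketch with made-up numbers: say there are 31 people and a hypothetical null mean of 5 errors per person, so the total error count is Poisson(31 * 5) under H0 and we reject for small totals.
lambda0 <- 31 * 5                # hypothetical null mean of the total count
k <- qpois(0.05, lambda0)        # smallest k with P(total <= k) >= 0.05
ppois(k, lambda0)                # at or above 0.05, so k itself is too generous
ppois(k - 1, lambda0)            # below 0.05: take C = k - 1 for a level-0.05 rule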
49,421
Difference between point-biserial and rank-biserial correlations
The Wikipedia formula of "rank-biserial correlation" that you show was introduced by Glass (1966) and it is not equivalent to usual Pearson $r$ when the latter is computed on ranks data (that is, $r$ which actually will be Spearman's $rho$). Let define $Y$ to be the quantitative variable already turned into ranks; and $X$ be the dichotomous variable with groups coded 1 and 0 (total sample size $n=n_1+n_0$). Knowing the formula of Pearson $r$ and observing the following equivalencies of our situation on ranks vs 1-0 dichotomy, $\sum XY= \sum Y_{x=1}=R_1$ (Sum of ranks in group coded 1), $\sum X = \sum X^2 = n_1$, $\sum Y = n(n+1)/2$, $\sum Y^2 = n(n+1)(2n+1)/6$, substitute, and get Pearson $r$ (= Spearman $rho$) formula looking as: $r= \frac{2R_1-n_1(n+1)}{\sqrt{n_1n_0(n^2-1)/3}}$. Now do substitutions into Glass' "rank-biserial correlation", to obtain: $r_{rb}= \frac{2R_1-n_1(n+1)}{n_1n_0}$. You can see that their denominators are different. So, Glass's $r_{rb}$ correlation isn't true Pearson/Spearman correlation. (Point-biserial correlation is true Pearson correlation.) I haven't read Glass' original paper or its reviews and hesitate to say what can be the reason behind the correlation and is there any advantage of it over the Pearson/Spearman correlation.
49,422
Logit with dummies when certain number of dummies must be used
If the additive model's a good fit there isn't any problem. You could use it to make predictions for success of the process with six, eight or two dozen workers; but you needn't, just as if the ambient temperature were a predictor you shouldn't be tempted to use the model to predict success of the process at -50°C or 300°C. If Smith's the reference level, then when you substitute him with Jones, Jones' coefficient gives the change in the logit; when you substitute Jones with Brown, the difference between Brown's & Jones' coefficients gives the change in the logit. You could pick a typical worker as the reference level, or use effect (sum-to-zero) coding, if you think it aids interpretation. The coding scheme makes no substantive difference to the model.
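A small sketch of that last point with made-up data and worker names (everything here is hypothetical), showing that changing the reference level changes the coefficients but not the fitted model:

set.seed(2)
worker <- factor(rep(c("Smith", "Jones", "Brown"), each = 30))
true_logit <- c(Smith = 0, Jones = 0.5, Brown = -0.5)[as.character(worker)]
success <- rbinom(length(worker), 1, plogis(true_logit))
dat <- data.frame(worker, success)

fit1 <- glm(success ~ worker, family = binomial, data = dat)  # reference: Brown (alphabetical)
fit2 <- glm(success ~ relevel(worker, ref = "Smith"), family = binomial, data = dat)

coef(fit1); coef(fit2)                 # different coefficients...
all.equal(fitted(fit1), fitted(fit2))  # ...identical fitted probabilities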
49,423
Monte Carlo simulation vs. machine learning algorithms: what is the difference in application? [closed]
MC is not an inference technique for finding the "best" model, it is a numerical tool to obtain samples from a given model. Sure enough you can also build inference procedures relying on MC (e.g. optimizing a criterion over parameters as a function of the simulated empirical distribution) but that doesn't change the respective scopes and goals. The most common application of MC is probably the calculation of high-dimensional integrals.
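As a toy illustration of that last point, a plain Monte Carlo estimate of an integral over the unit hypercube in R (dimension and integrand chosen arbitrarily):

set.seed(3)
n <- 1e5; d <- 5
x <- matrix(runif(n * d), ncol = d)   # uniform draws from [0,1]^d
g <- exp(-rowSums(x^2))               # integrand evaluated at each draw

mean(g)            # MC estimate of the integral of exp(-|x|^2) over [0,1]^d
sd(g) / sqrt(n)    # Monte Carlo standard error of the estimate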
49,424
How would you visualize the difference between Cox/Weibull regression?
You can try R's visreg package, as described in this paper "Visualization of Regression Models Using visreg". The package interface is consistent for visualizing linear models, generalized linear models, proportional hazards models, generalized additive models, robust regression models and more. Page 12 has an example of visualizing Cox's proportional hazards model.
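A minimal sketch of what that looks like, using the survival package's lung data purely as an example (see the visreg documentation for the options used in the paper):

library(survival)
library(visreg)

fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# Plot the estimated effect of age on the relative-hazard scale,
# holding the other predictor at a representative value
visreg(fit, "age")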
49,425
Coefficient sign changes in fixed effect and first-difference estimation
Another thing that could go awry is an unbalanced panel where you have "gaps" in the middle of the time series. The FD estimator will lose two observations if there is a single period missing. The dummy approach will lose only one. Are the sample sizes wildly different between the two regressions by any chance? What happens if you use only observations where you have all the periods? If that's not what's causing it, we have to think harder. You may have contemporaneous correlation between $x_{it}$ and $u_{it}$. In that case, both the FD and FE estimators will be inconsistent and have different probability limits (Adult Wooldridge, pp. 321-322). It's hard to know which one should be preferred ex ante or what to do about it. If you have non-contemporaneous correlation, it will have similar effects, but there may be a solution. When $x_{it}$ and $u_{is}$ for $t < s$ are correlated, you can include lags of $x$. I think this is likely the culprit given your comment above. If there's feedback from $u_{is}$ to $x_{it}$ for $t > s$, a more complicated solution is described in chapter 11 of Wooldridge. If you maintain that you have contemporaneous exogeneity, then the inconsistency of the FE estimator from the failure of strict exogeneity goes to zero at the rate $\frac{1}{T}$, while the FD's is independent of $T$. But that's only true if $x_{it}$ and $y_{it}$ are cointegrated (in the time series sense). In fact, FE may be worse than FD in that case for fixed $N$ as $T$ grows. So FD may deal better with spurious regression.
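To check the sample-size explanation directly, a sketch with the plm package (the data set, variable and index names are placeholders for yours):

library(plm)

pdat <- pdata.frame(mydata, index = c("id", "year"))   # mydata stands in for your panel

fe <- plm(y ~ x, data = pdat, model = "within")  # fixed effects / dummy approach
fd <- plm(y ~ x, data = pdat, model = "fd")      # first differences

nobs(fe); nobs(fd)   # wildly different counts point to gaps in the panel

# And for the non-contemporaneous correlation story, try adding a lag of x:
fe_lag <- plm(y ~ x + lag(x, 1), data = pdat, model = "within")
summary(fe_lag)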
49,426
Higher $r^2$ value on test data than training data?
I think the formula to calculate r-squared is R-squared = 1 - (RSS/TSS) where TSS = sum((y-mean(y))^2) and RSS = sum((y-y.predict)^2)
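As a small R sketch of that formula applied to held-out predictions (the object names are placeholders):

r_squared <- function(y, y_pred) {
  rss <- sum((y - y_pred)^2)
  tss <- sum((y - mean(y))^2)
  1 - rss / tss
}

# fit <- lm(y ~ ., data = train)
# r_squared(test$y, predict(fit, newdata = test))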
49,427
Higher $r^2$ value on test data than training data?
The $R^2$ value is not, on its own, a metric for model selection or model fit. The reason is that the inherent variability of the data affects $R^2$. Consider the following data sets: the (Y_ vs X) plot has more spread than the (Y vs X) plot. As a result, the $R^2$ value for the former (which has more variance) will be lower than for the latter (which has less variance). This shows that you must not use the $R^2$ value alone to check whether the model fits the data well. Instead you should check the model assumptions of: linear trend (from the scatter plot), constant error variance (from the residuals vs fitted plot), and normal distribution of the errors (from the QQ plot).
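In R, the residual-vs-fitted and QQ checks mentioned above come straight out of the default plot method for lm; a generic sketch on a built-in data set:

fit <- lm(dist ~ speed, data = cars)   # any fitted linear model will do

plot(cars$speed, cars$dist)  # linear trend: scatter plot of y vs x
plot(fit, which = 1)         # constant error variance: residuals vs fitted
plot(fit, which = 2)         # normality of errors: normal QQ plot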
49,428
Higher $r^2$ value on test data than training data?
One explanation might relate to how you subset your test data (the way you split training and testing data). If your test data consists of only a few similar observations, then it is very likely that your R-squared measure will differ from that of the training data. A good practice is to split X% of the data, selected randomly, into the training set, and the remaining (100 - X)% into your test data. Also, generally speaking, you should not be using R-squared for your test data but something like RMSE or MSE instead.
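A sketch of such a random split and an RMSE computation in R (the 80/20 ratio and the object names are just placeholders):

set.seed(4)
n <- nrow(mydata)                          # mydata stands in for your data
idx <- sample(n, size = round(0.8 * n))    # 80% of rows chosen at random
train <- mydata[idx, ]
test  <- mydata[-idx, ]

fit  <- lm(y ~ ., data = train)            # y stands in for your response
pred <- predict(fit, newdata = test)
sqrt(mean((test$y - pred)^2))              # test RMSE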
49,429
Find 1 dimensional sufficient statistic for $Beta(\alpha, 2\alpha)$
Already answered in comments... PDF of $X\sim\mathcal{Be}(\alpha,2\alpha)$ is $$f(x;\alpha)=\frac{x^{\alpha-1}(1-x)^{2\alpha-1}}{B(\alpha,2\alpha)}\mathbf 1_{0<x<1},\quad\alpha>0$$ Suppose $(X_1,X_2,\cdots,X_n)$ is a random sample drawn from the above distribution. Joint PDF of $(X_1,X_2,\cdots,X_n)$ is \begin{align}f_{\alpha}(x_1,x_2,\cdots,x_n)&=\frac{1}{(B(\alpha,2\alpha))^n}\left(\prod_{i=1}^nx_i\right)^{\alpha-1}\left(\prod_{i=1}^n(1-x_i)\right)^{2\alpha-1}\mathbf1_{0<x_1,\cdots,x_n<1} \\\implies\ln f_{\alpha}(x_1,x_2,\cdots,x_n)&=-n\ln B(\alpha,2\alpha)+(\alpha-1)\sum_{i=1}^n\ln x_i+(2\alpha-1)\sum_{i=1}^n\ln(1-x_i) \\\implies f_{\alpha}(x_1,x_2,\cdots,x_n)&=\exp\left[(\alpha-1)\sum_{i=1}^n\ln x_i+(2\alpha-1)\sum_{i=1}^n \ln(1-x_i)+c(\alpha)\right] \\&=\exp\left[\alpha\sum_{i=1}^n\left(\ln x_i+2\ln (1-x_i)\right)+c(\alpha)+d(x_1,x_2,\cdots,x_n)\right] \end{align} for some $c$ and $d$. Clearly, $\mathcal{Be}(\alpha,2\alpha)$ belongs to the one-parameter exponential family. Hence our sufficient statistic for $\alpha$ is \begin{align} T(X_1,X_2,\cdots,X_n)&=\sum_{i=1}^n\left(\ln X_i+2\ln (1-X_i)\right) \\&=\ln \left[\left(\prod_{i=1}^n X_i\right)\left(\prod_{i=1}^n(1-X_i)\right)^2\right] \end{align} The claim that $\displaystyle T^*(X_1,\cdots,X_n)=\left(\prod_{i=1}^n X_i,\prod_{i=1}^n (1-X_i)\right)$ is sufficient for $\alpha$ is not exactly correct. If one was working with the $\mathcal Be(\alpha,\beta)$ density, then $T^*$ would have been the sufficient statistic for $(\alpha,\beta)$ where $\alpha\ne \beta$.
49,430
Closed form recurrence formula for getting N consecutive heads on a coin
Let $T^{(k)}$ be the time it takes to see the first run of $k$ successes. Let $X\sim\mathrm{Ber}(p)$ be independent of $T^{(k)}$ for every $k$. Then, $$ T^{(k)} = (T^{(k-1)}+1)\, X + (T^{(k-1)}+1+T^{(k)}) \, (1 - X) \, , $$ because, in words, if I see a success in the current trial, then the time to get $k$ consecutive successes is the time to get $k-1$ consecutive successes plus one (the current trial); but if I see a failure, the time to get $k$ consecutive successes is the time to get $k-1$ consecutive successes plus one (the current trial), plus itself, because the process restarted in distribution. Defining $a_k=\mathrm{E}[T^{(k)}]$, we find the recurrence $$ a_k = (a_{k-1}+1)\,p + (a_{k-1}+1+a_k)\,(1-p) \, , $$ or $$ a_k = \frac{a_{k-1}}{p}+\frac{1}{p} \, . $$
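A quick simulation check of the recurrence (p and the range of k are chosen arbitrarily):

set.seed(5)
p <- 0.5; k_max <- 4; n_sim <- 10000

sim_T <- function(k, p) {        # waiting time for the first run of k successes
  run <- 0; t <- 0
  while (run < k) {
    t <- t + 1
    if (runif(1) < p) run <- run + 1 else run <- 0
  }
  t
}

a <- numeric(k_max)              # a_k from the recurrence, with a_0 = 0
a[1] <- 1 / p
for (k in seq_len(k_max)[-1]) a[k] <- a[k - 1] / p + 1 / p

sim <- sapply(1:k_max, function(k) mean(replicate(n_sim, sim_T(k, p))))
rbind(recurrence = a, simulation = sim)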
49,431
Forecasting high frequency variable with low frequency predictor
There are two quick and dirty solutions. First, disaggregate series B to weekly values (the R package tempdisagg is great for that) and then fit a usual model. Second, aggregate series A to monthly frequency, do a forecast and then use disaggregation on the forecast. The more theoretical approach would be casting the problem as a state space model. There is a lot of literature on the state space model approach when the dependent variable is observed at a lower frequency. It usually assumes that the low frequency variable is really a high frequency variable observed at low frequency periods. You can make the same assumption and then reverse the methodology. Unfortunately I have not seen something similar being done, but I did not look hard enough. Concerning midasr, I can say that it was designed to work when the dependent variable is observed at the lowest frequency. The reverse situation was not seriously considered.
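A rough sketch of the second quick-and-dirty route in R; all object names are placeholders, and the final even split is deliberately crude (tempdisagg with an indicator series would do that step properly):

# weekly_A: weekly series of interest; month_of_week: month label for each week
monthly_A <- tapply(weekly_A, month_of_week, sum)   # aggregate A to monthly

fit <- lm(monthly_A ~ monthly_B)                    # monthly model on predictor B
monthly_A_fc <- predict(fit, newdata = data.frame(monthly_B = future_monthly_B))

# Crude disaggregation: spread each monthly forecast evenly over ~4 weeks
weekly_A_fc <- rep(monthly_A_fc / 4, each = 4)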
49,432
How to calculate F-Measure from Precision Recall Curve
Precision-recall curves and ROC curves are both used to give you a sense of the quality of a binary classifier across the different values of some parameter that affects its performance. Now, F-scores such as F1 are particular scores which combine both precision and recall into a single number, so you just need to select the configuration of your classifier which has the highest F score. In your place, for each pair of precision and recall I would calculate the F score and then pick the configuration which has the highest F score. Now, the tricky part is which F score. F1 is the score which values precision and recall equally, but sometimes recall is more important than precision (for example, you don't mind a lot of people being falsely flagged by a test for some cancer if you know that all of the ones who have that cancer are detected). In that case you could use the F2 measure. I think it doesn't make sense to sum up all F measures for all combinations of precision and recall. After all, the idea is to pick a single model out of the broader range of models, so I would prefer to pick the model with the highest value of the F score instead of the one with the biggest sum of all F scores.
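In R, given vectors of precision and recall (one entry per classifier configuration), the selection step looks like this; beta = 1 gives F1 and beta = 2 gives F2:

f_beta <- function(precision, recall, beta = 1) {
  (1 + beta^2) * precision * recall / (beta^2 * precision + recall)
}

f1   <- f_beta(precision, recall)          # precision, recall are your vectors
best <- which.max(f1)
c(configuration = best, precision = precision[best],
  recall = recall[best], F1 = f1[best])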
49,433
How to calculate F-Measure from Precision Recall Curve
It is hard to read off F1 (or any other weighted F-measure) directly from a Precision-Recall graph, because of needing to work with reciprocals (harmonic mean). But if instead you plot the reciprocal Precision & Recall, then values of the F-measure form isobars (straight lines with equal values) with gradient depending on the tradeoff parameter. In the case of F1 they will be isobars parallel to the diagonal, corresponding to equal weighting of success in terms of precision of the positive predictions and in terms of recall of the positive cases (the number predicted may not correspond to the real number of positives). If you use isobars directly on the Precision-Recall graph, you optimize the arithmetic mean instead - the advantage of using harmonic over arithmetic is debatable and is discussed in the reference, and there is some evidence that geometric is better than either. If you plot Precision & Recall logarithmically, then the isobars can be used to optimize this geometric mean. Note that the logarithmic and reciprocal PR graphs are not defined at the points corresponding to no positives being present or none being predicted. At all other points these curves are equivalent to the ROC curve in that if a solution is better ranked in one it will be better ranked in the other. ROC curves are however a bit different as they compare TPR (Recall) against FPR (Fallout), but again isobars are useful and those parallel to the diagonal correspond to equal weighting of positives and negatives. In particular, Precision and Recall are independent of the number of True Negatives (correctly predicted negatives), but this is a complementary component of Fallout - in fact a mirror image graph is formed by plotting TPR vs TNR. The difference TPR-FPR is also known as Youden J or Informedness, and is linearly related to the area under the curve formed by a specific operating point. This is discussed in detail in a report of mine I have uploaded, with the graphs I've discussed shown in Figure 3 for some real data relating to facial expression recognition: https://www.researchgate.net/publication/273761103_What_the_F-measure_doesn%27t_measure Discussion of ROC Area Under the Curve is here: https://www.researchgate.net/publication/261155937_The_problem_of_Area_Under_the_Curve?ev=pub_cit
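The isobar claim for F1 is easy to verify: since 1/F1 = (1/P + 1/R)/2, points with equal F1 lie on a straight line in the reciprocal plane. A tiny R sketch:

F1 <- 0.5                              # fix an F1 value
P  <- seq(0.35, 0.95, by = 0.05)       # precisions compatible with that F1
R  <- 1 / (2 / F1 - 1 / P)             # recall solving 2/F1 = 1/P + 1/R

plot(1 / R, 1 / P, type = "b",
     xlab = "1 / Recall", ylab = "1 / Precision",
     main = "Constant-F1 isobar in reciprocal coordinates")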
49,434
Determining if a function is additive
Although the second order mixed differences $$D_{12}f(x_1, x_2, y_1, y_2) = f(x_1,y_1)+f(x_2,y_2)-f(x_1,y_2)-f(x_2,y_1)$$ provide useful information, by themselves they do not have enough power to discriminate between slightly noisy additive functions and non-additive functions. The trick is to compare $D_{12},$ suitably transformed to have good statistical properties, to some measure of the spread of $f,$ without assuming anything at all about $f$. This answer gives the details and provides evidence that this procedure can work well even for small grids and relatively large amounts of noise. A function $f$ of two variables $x$ and $y$ is "additively separable" ("additive" for short) when there exist functions $g$ and $h$ such that $f(x,y) = g(x)+h(y).$ A "noisy" additive function includes independent additive random errors $\varepsilon(x,y)$, so that the full model is $$f(x,y) = g(x) + h(y) + \varepsilon(x,y).$$ Suppose we have observed $f(x_i, y_j)$ for all $mn$ combinations of $m$ distinct values $x_1, x_2, \ldots, x_m$ and $n$ distinct values $y_1, y_2, \ldots, y_n.$ (In full generality--assuming absolutely nothing about $f$, not even whether it is continuous--it matters not what the values $x_i$ or $y_j$ are, because they can be subsumed in the definitions of $g$ and $h$, respectively. Furthermore, the ordering of the $x_i$ and $y_j$ are immaterial: the definition of additivity makes no use of that.) The first idea is that additive functions $f$ can be detected by comparing $D_{12}$ to zero, because $$\eqalign{ D_{12}f(x_1, x_2, y_1, y_2) &= g(x_1)+h(y_1)+\varepsilon(x_1,y_1) + g(x_2)+h(y_2)+\varepsilon(x_2,y_2) \\ &- g(x_1) - h(y_2) - \varepsilon(x_1,y_2) - g(x_2) - h(y_1) - \varepsilon(x_2, y_1) \\ &=\varepsilon(x_1,y_1) +\varepsilon(x_2,y_2)- \varepsilon(x_1,y_2)- \varepsilon(x_2, y_1)\\ &= D_{12}\varepsilon(x_1, x_2, y_1, y_2), }$$ which is the sum of four independent errors. Provided these "noise terms" tend to be small, all mixed differences ought to be small when applied to additive functions. When applied to non-additive functions, the mixed differences ought to be greater. Difficulties arise for several reasons: Because the orders of the $x_i$ and $y_j$ do not matter, one can compute mixed differences for all $m(m-1)/2$ distinct pairs of $x$'s and all $n(n-1)/2$ distinct pairs of $y$'s, even though only $(m-1)(n-1)$ of them are (linearly) independent. Which differences should be used? (I propose using them all.) Because adding and subtracting values of a function throughout its domain can cause relatively large values to be combined, the variation of a function's values can swamp any signal in the mixed differences. It is tempting instead to fit a model of the form $$\mathbb{E}(f(x_i, y_j)) = \alpha_i + \beta_j$$ and then study its residuals--but is there any way that can help and, if so, how should the model be fit? Can we do this in a robust fashion that is resistant not only to outlying values of the $\varepsilon(x_i,y_j)$ but also to possibly extreme variation in $g$ and $h$ themselves? A solution I have hit upon after much experimentation addresses these issues by constructing a set of paired values $(a_k, b_k)$ derived from the observations of $f$. The indexes $k$ designate $2\times 2$ blocks in the data and therefore correspond to all tuples $(i_1, i_2, j_1, j_2)$ for which $1 \le i_1 \lt i_2 \le m$ and $1 \le j_1 \lt j_2 \le n$. 
For such a tuple let $a_k$ be the range of the four values of $f$ found in its block and let $b_k$ be the square root of the absolute value of $D_{12}f$ for this block. My proposal is to normalize all the values of $a_k$ and $b_k$ (computed for all possible blocks) relative to the maximum value of $a_k$ and look at their scatterplot. For an additive function with no error, the $b_k$ are always zero and the scatterplot will be perfectly horizontal with no correlation. When there is error, suppose the typical variance of the $\varepsilon(x,y)$ is $\sigma^2$. Then the variance of the mixed differences will be $4\sigma^2$ while the $a_k$ will extend from near $0$ to the full range of $f$. The square root of the absolute values of those first differences therefore will typically be around $2\sigma$ or so; provided this is small compared to the range of $f$, the scatterplot ought to be perfectly level--at least for the larger values of $a_k$. (For the smaller values, $b_k$ will be constrained to be small no matter what.) This is the second key idea and it seems to work. This reasoning yields an additivity diagnostic plot: the normalized scatterplot of $(a_k, b_k)$ for noisy additive functions will Tend to be near the horizontal axis and Rise from the origin to a horizontal value well below $1$. By contrast, the scatterplot for non-additive functions will tend to produce larger values of $b_k$ when the $a_k$ are large, because large ranges of functional values ought to exhibit relatively large deviations from additivity. Consequently for many non-additive functions we expect the diagnostic plot to be nearly diagonal and linear, rising from the origin $(0,0)$ nearly to the opposite corner at $(1,1).$ Pictures are worth more than these words, so let's look at some examples. In the figure I have constructed three functions whose values range approximately from $-1$ to $1$, sampled them on a $7$ by $7$ grid covering the square $[-1,1]\times [-1,1]$, generated a fixed normally-distributed noise vector of $\varepsilon(x_i, y_j)$ (with unit variance), and added various multiples $\sigma = 0, 0.1, 0.3$ of this noise vector to the three samples. Each line of the figure corresponds to one function. It displays the additivity diagnostic plots for the three versions--one with no noise and two with increasing noise--then shows the functional values on a grid, and finally shows the noisiest values on a grid. The noise in that latter version has a standard deviation equal to $1/6$ the original range of the function, which is pretty big. Notice, too, that a $7$ by $7$ grid has only $(7-1)(7-1) = 36$ degrees of freedom (independent mixed differences), which is a pretty small amount of data concerning these functions: all in all, this is a fairly difficult test of the procedure. The red lines are lowess smooths of the scatterplots. It should be evident which of these functions is additive: only the first one, $f(x,y) = (x^2+y^2)/2.$ The other two, $xy^3$ and $\cos(\pi x) \sin(2\pi y),$ are decidedly non-additive. They are distinguished from the first in all three additivity diagnostic plots, which closely follow the preceding descriptions. In short, because the plots on the first row (a) are low in height, (b) exhibit low correlation, and (c) have smooths that level off for larger values of $a_k$ (although the one for $\sigma=0.3$ is marginal), they signal that the first function may be additive. 
Because the smoothed plots on the second and third rows all closely approximate the major diagonal of the square, they signal non-additivity of the data. The visual similarities among the noisy versions of these functions (in the rightmost column) attest to the utility of this diagnostic plot. Here is the R code used to construct the figure. Do not try to apply it to large grids of data! Because all $2 \times 2$ blocks of the grid are evaluated, the computational effort and storage requirements are both $O(m^2 n^2),$ which is pretty bad. For instance, a medium-sized grid of (say) $m=50$ and $n=100$ has $6063750$ blocks to evaluate! (For anyone lucky enough to have such a rich dataset, consider subsampling the blocks rather than exhaustively evaluating all of them.) # # Compute all rectangular differences. # More generally, `fun` will be applied to all 2 x 2 sub-blocks of a. # diff.rect <- function(z, a=matrix(c(1,-1,-1,1), 2), fun) { if (missing(fun)) fun <- function(x) sum(x*a) m <- dim(z)[1]; n <- dim(z)[2] ii <- combn(1:m, 2); jj <- combn(1:n, 2) sapply(1:dim(jj)[2], function(j) { sapply(1:dim(ii)[2], function(i) fun(z[ii[,i], jj[,j]])) }) } # # Compare ranges to differences. # additive.test <- function(z, ...) { z.diff <- diff.rect(z) z.range <- diff.rect(z, fun=function(x) diff(range(x))) scale <- max(z.range) z.range <- z.range / scale z.diff <- sqrt(abs(z.diff) / scale) plot(z.range, z.diff, cex=0.7, col="#40404080", ylim=c(0,1), ...) abline(h = median(z.diff), col="#4040ff80", lty=2) lines(lowess(z.range, z.diff), lwd=2, col="Red") } m <- 7; n <- 7 x <- seq(-1, 1, length.out=m) y <- seq(-1, 1, length.out=n) # # Plot differences against ranges. # require(raster) image <- function(x, ...) plot(raster(x), asp=dim(x)[1]/dim(x)[2], axes=FALSE, frame.plot=FALSE, ...) par(mfrow=c(3, 5)) set.seed(17); error <- rnorm(m*n) for (f in c(function(x,y) (x^2 + y^2)/2, function(x,y) x * y^3, function(x,y) cos(pi*x) * sin(2*pi*y))) { f.body <- deparse(f)[2] f.xy <- outer(x, y, f) for (sigma in c(0, 1/10, 3/10)) { g.xy <- f.xy + error * sigma additive.test(g.xy, main=paste("sd =", round(sigma, 2)), xlab="", ylab="", cex.main=9/10) } image(f.xy, main=f.body, cex.main=9/10) image(f.xy + error * 3/10, main="SD = 0.3", cex.main=9/10) }
49,435
Choosing number of PCA components when multiple samples for each data point are available
I'm not sure that I completely understand the data. Do you have M replicates at N timepoints of a response in $\Re^D$? That would make $N \times M \times D$ actual numbers? Are the trajectories vector valued functions? If I have understood this correctly, I don't think that your replicates are going to tell you anything about the number of PCs. The replicates enable you to obtain a more accurate estimate of the covariance matrix than you would otherwise have, but in themselves, they don't tell you anything about the relationship between successive time points (which is what the eigen-decomposition is about). To determine significance, you need some sort of probability model to the effect that the data belong to something in a reduced space (the PC space) with full rank error. Factor analysis posits models of this kind and the factor analysis literature is full of pointers on how to choose the right number of factors (it's as much an art as a science, I believe). Jeremy Anglim's blog is a useful resource, especially this post.
49,436
Choosing number of PCA components when multiple samples for each data point are available
In order to advance the discussion here I will describe two approaches that I am currently using. For convenience I will repeat the notation here. There are $N$ points $\bar{\mathbf{x}}_i \in \mathbb{R}^D$, and each point is an average over $M$ repeated measurements $\bar{\mathbf{x}}_i=\frac{1}{M}\sum_{j=1}^M \mathbf{x}_i^{(j)}$. Each measurement is a noisy observation of a true position of the corresponding point, i.e. $\mathbf{x}_i^{(j)}=\mathbf{a}_i + \mathbf{\xi}_i^{(j)}$ (note that $\xi$ is a $D$-dimensional vector as well, but I cannot make Greek letters bold here). I am interested in principal components of $\{\mathbf{a}_i\}$, but can only perform PCA on $\{\bar{\mathbf{x}}_i\}$, and the small components can be completely distorted by the noise. The larger the noise, the fewer principal components of $\{\mathbf{a}_i\}$ I will be able to recover. The challenge is to select only those leading components that definitely "come from" $\{\mathbf{a}_i\}$. First method If the measurement noise in one particular dimension has variance $\sigma^2$ (which can be different for different dimensions and data points), then the variance of the mean (the squared standard error) is given by $\frac{\sigma^2}{M}$. Informally, this is the amount of noise that will remain after averaging and can potentially distort the principal components. I want to estimate how much variance can possibly come from this remaining noise. Note that if I subtract two measurements $\Delta \mathbf{x}_i=\mathbf{x}_i^{(j_1)}-\mathbf{x}_i^{(j_2)}$, I get rid of the signal $\mathbf{a}_i$ and get twice the noise variance. So to estimate the maximum amount of overall "noise variance" I take $\frac{1}{\sqrt{2M}}\Delta \mathbf{x}_i$ as data points, run PCA on them and take the variance of the first PC as my noise floor. All PCs from $\bar{\mathbf{x}}_i$ that have variance above this noise floor I declare significant. See the figure below. In fact I can take different pairs of measurements to construct my noise estimates, and this will give slightly different noise floors. I run this procedure multiple times, randomly selecting pairs each time, and then take the average noise floor over repetitions. This procedure seems to follow @whuber's suggestion in the comment above. Second method This is a cross-validation procedure. I split my $M$ measurements into two parts, and average them separately, obtaining training data $\bar{\mathbf{x}}_i$ and test data $\bar{\mathbf{y}}_i$. Then I compute PCA on $\bar{\mathbf{x}}_i$ and get the projection of my data onto the principal axes: $\mathbf{z}_i$. The question is now: how many dimensions of $\mathbf{z}$ should I take to minimize the reconstruction error of the test dataset? More precisely, for each number $k$ of components I project the training data onto the $k$-dimensional subspace spanned by the first $k$ principal axes, $\mathbf{z}_i=WW^\top \bar{\mathbf{x}}_i$ where $W$ is the $D\times k$ matrix of the first $k$ principal axes, and compute the reconstruction error $e(k)=\sum_i||\bar{\mathbf{y}}_i-\mathbf{z}_i||^2$. This error should have a minimum at a certain $k$, and this number of components I declare significant. Here again I repeat this procedure many times for different random splits of the data, average the obtained reconstruction errors $e(k)$ and then find the minimum. This procedure is probably related to @Placidia's suggestion ("do exploratory FA on half the data and a confirmatory on the other half"), but I know too little about confirmatory factor analysis to say if it's really the same thing. 
Results The following figure shows the outcomes of both methods on four different datasets. The four columns are the datasets; the top row is the noise-floor method, the second row the cross-validation method. In the top row, the red line shows the empirical eigenvalue spectrum, blue lines are noise spectra from different repetitions, and the dashed horizontal line shows the mean noise floor. Black dots mark components above the noise floor. In the bottom row, blue lines are reconstruction errors on various cross-validation folds, the red line is their mean, and black dots mark the components up to the minimum of the red curve. The two methods give similar, but not identical, results. In the fourth dataset I get 4 components with each method. In the second and third datasets cross-validation yields a few more significant components than the noise floor. However, in the first dataset there is one component well above the noise floor, but cross-validation shows that the minimum reconstruction error is obtained with zero components. Both methods seem conservative to me; however, I don't like that sometimes one and sometimes the other turns out to be more sensitive. At the moment I compute both estimates and take the maximum; this is reasonable if both estimates are indeed conservative, but not very elegant.
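For concreteness, here is a minimal R sketch of both procedures on simulated data. Everything in it (dimensions, noise level, number of repetitions, variable names) is made up for illustration; it is a sketch, not a polished implementation.
set.seed(1)
N <- 100; D <- 20; M <- 10; k.true <- 3
A <- matrix(rnorm(N * k.true), N, k.true) %*% matrix(rnorm(k.true * D), k.true, D)   # true positions a_i
X <- array(rnorm(N * D * M, sd = 2), dim = c(N, D, M)) + array(rep(A, M), dim = c(N, D, M))  # M noisy measurements
Xbar <- apply(X, c(1, 2), mean)                       # averaged data
ev <- prcomp(Xbar, center = TRUE)$sdev^2              # empirical PC variances
# First method: noise floor from scaled paired differences
noise.floor <- replicate(20, {
  jj <- sample(M, 2)
  Dx <- (X[, , jj[1]] - X[, , jj[2]]) / sqrt(2 * M)   # (1/sqrt(2M)) * Delta x_i
  prcomp(Dx, center = TRUE)$sdev[1]^2                 # variance of the first PC of the noise
})
sum(ev > mean(noise.floor))                           # number of components above the mean noise floor
# Second method: split-half cross-validation of the reconstruction error
cv.err <- replicate(20, {
  jj <- sample(M, M / 2)
  Xtrain <- apply(X[, , jj, drop = FALSE], c(1, 2), mean)
  Xtest  <- apply(X[, , -jj, drop = FALSE], c(1, 2), mean)
  pca <- prcomp(Xtrain, center = TRUE)
  sapply(0:D, function(k) {
    W <- pca$rotation[, seq_len(k), drop = FALSE]
    Z <- sweep(scale(Xtrain, center = pca$center, scale = FALSE) %*% W %*% t(W),
               2, pca$center, "+")                    # reconstruction from k components
    sum((Xtest - Z)^2)                                # e(k)
  })
})
which.min(rowMeans(cv.err)) - 1                       # number of components at the minimum of the mean error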
Choosing number of PCA components when multiple samples for each data point are available
In order to advance the discussion here I will describe two approaches that I am currently using. For convenience I will repeat the notation here. There are $N$ points $\bar{\mathbf{x}}_i \in \mathbb{
Choosing number of PCA components when multiple samples for each data point are available In order to advance the discussion here I will describe two approaches that I am currently using. For convenience I will repeat the notation here. There are $N$ points $\bar{\mathbf{x}}_i \in \mathbb{R}^D$, and each point is an average over $M$ repeated measurements $\bar{\mathbf{x}}_i=\frac{1}{M}\sum_{j=1}^M \mathbf{x}_i^{(j)}$. Each measurement is a noisy observation of a true position of the corresponding point, i.e. $\mathbf{x}_i^{(j)}=\mathbf{a}_i + \mathbf{\xi}_i^{(j)}$ (note that $\xi$ is a $D$-dimensional vector as well, but I cannot make Greek letters bold here). I am interested in principal components of $\{\mathbf{a}_i\}$, but can only perform PCA on $\{\bar{\mathbf{x}}_i\}$, and the small components can be completely distorted by the noise. The larger the noise, the less principal components of $\{\mathbf{a}_i\}$ I will be able to recover. The challenge is to select only those leading components that definitely "come from" $\{\mathbf{a}_i\}$. First method If the measurement noise in one particular dimension has variance $\sigma^2$ (which can be different for different dimensions and data points), then the standard error of the mean is given by $\frac{\sigma^2}{M}$. Informally, this is the amount of noise that will remain after averaging and can potentially screw the principal components. I want to estimate how much variance can possibly come from this remaining noise. Note that if I subtract two measurements $\Delta \mathbf{x}_i=\mathbf{x}_i^{(j_1)}-\mathbf{x}_i^{(j_2)}$, I get rid of the signal $\mathbf{a}_i$ and get twice the noise variance. So to estimate the maximum amount of overall "noise variance" I take $\frac{1}{\sqrt{2M}}\Delta \mathbf{x}_i$ as data points, run PCA on them and take the variance of the first PC as my noise floor. All PCs from $\bar{\mathbf{x}}_i$ that have variance above this noise floor I declare significant. See a figure below. In fact I can take different pairs of measurements to construct my noise estimates, and this will give slightly different noise floors. I run this procedure multiple times, randomly selecting pairs each time, and then take the average noise floor over repetitions. This procedure seems to follow @whuber's suggestion in the comment above. Second method This is a cross-validation procedure. I split my $M$ measurements in two parts, and average them separately, obtaining training data $\bar{\mathbf{x}}_i$ and test data $\bar{\mathbf{y}}_i$. Then I compute PCA on $\bar{\mathbf{x}}_i$ and get the projection of my data onto the principal axes: $\mathbf{z}_i$. The question is now: how many dimensions of $\mathbf{z}$ should I take to minimize the reconstruction error of the test dataset? More precisely, for each number $k$ of components I project the training data onto the $k$-dimensional subspace spanned by the first $k$ principal axes $\mathbf{z}_i=WW^\top \bar{\mathbf{x}}_i$ where $W$ is the $D\times k$ matrix of the first $k$ principal axes, and compute the reconstruction error $e(k)=\sum_i||\bar{\mathbf{y}}_i-\mathbf{z}_i||^2$. This error should have a minimum at certain $k$ and this number of components I declare significant. Here again I repeat this procedure many times for different random splits of the data, average obtained reconstruction errors $e(k)$ and then find the minimum. 
This procedure is probably related to @Placidia's suggestion ("do exploratory FA on half the data and a confirmatory on the other half"), but I know too little about confirmatory factor analysis to say if it's really the same thing. Results The following figure shows the outcomes of both methods on four different datasets. Four columns are datasets, top row is noise floor method, second row is cross-validation method. On the top, red line shows the empirical eigenvalue spectrum, blue lines are noise spectra from different repetitions, and the dashed horizontal line shows the mean noise floor. Black dots mark components above the noise floor. On the bottom, blue lines are reconstruction errors on various cross-validation folds, red line is the mean, black dots mark the components until the minimum of the red curve. The two methods give similar, but not identical results. In the fourth dataset I get 4 components with each method. In the second and third datasets cross-validation results in a bit more significant components than the noise floor. However, in the first dataset there is one component well above the noise floor, but cross-validation shows that minimum reconstruction error is obtained with zero components. Both methods seem conservative to me, however I don't like that sometimes one and sometimes another turns out to be more sensitive. At the moment I compute both estimates and take the maximum; this is reasonable if both estimates are indeed conservative, but not very elegant.
Choosing number of PCA components when multiple samples for each data point are available In order to advance the discussion here I will describe two approaches that I am currently using. For convenience I will repeat the notation here. There are $N$ points $\bar{\mathbf{x}}_i \in \mathbb{
49,437
Notation for Random Bernoulli-Like Vector With Fixed Sum
It is immaterial that the $p_i$ sum to unity. The problem describes independent Bernoulli$(p_i)$ variables and conditions on their sum equalling $c$, $0 \le c \le k$. (I believe that is as far as we will get in terms of finding names for this procedure.) It is of course a discrete distribution. The nonzero probabilities occur at all vectors having $c$ nonzero components, each potentially with a different probability. Vector-like notation is convenient and in common use. To this end, let us stipulate that $[k]$ represents the set of indexes $\{1,2,\ldots, k\}$. Any vector $v\in \{0,1\}^k$ can be identified with the subset $V\subset [k]$ of indexes $i$ where $v_i \ne 0$. The notation $p^V$ means $\prod_{i \in V}p_i = \prod_i p_i^{v_i}$, the product of components of the vector $p$ for which $v_i \ne 0$. Let $q_i = 1-p_i$ determine a vector of complementary probabilities $q$. Denote by $V^\prime$ the complement of $V$ in $[k]$. Write $|V|$ for the cardinality of $V$. In this notation, the unconditional probability of observing $V$ is $p^Vq^{V^\prime}$. The probability of observing some $V$ for which $|V|=c$ is the sum over all such $V$, $${\Pr}_p(c) = \sum_{V\subset [k], |V|=c} p^Vq^{V^\prime}.$$ This is a sum over all $\binom{k}{c}$ subsets of size $c$. There is no simpler formula for it for arbitrary $p$. Note that $p$ alone determines $k$ and $q$, so nothing else besides $p$ and $c$ needs to be indicated in the notation. The conditional probability distribution therefore assigns probability $${\Pr}_{p;c}(V) = \frac{p^Vq^{V^\prime}}{{\Pr}_p(c)}$$ to all $V$ for which $|V|=c$. In a context in which $p$ is fixed the $p$s can be dropped from the notation; where $c$ is also fixed it may be dropped, too.
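These formulas are easy to evaluate by brute force when $k$ is small. Here is a small R sketch; the vector p and the value of c below are arbitrary illustrations rather than anything from the question.
p <- c(0.1, 0.2, 0.3, 0.4); q <- 1 - p
k <- length(p); c0 <- 2
V <- combn(k, c0)                                            # all subsets of size c0
unnorm <- apply(V, 2, function(v) prod(p[v]) * prod(q[-v]))  # p^V q^{V'}
Pr.c <- sum(unnorm)                                          # Pr_p(c)
cond <- unnorm / Pr.c                                        # Pr_{p;c}(V) for each subset
rbind(V, probability = round(cond, 4))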
Notation for Random Bernoulli-Like Vector With Fixed Sum
It is immaterial that the $p_i$ sum to unity. The problem describes a sum of Bernoulli$(p_i)$ distributions and conditions on the sum equalling $c$, $0 \le c \le k$. (I believe that is as far as we
Notation for Random Bernoulli-Like Vector With Fixed Sum It is immaterial that the $p_i$ sum to unity. The problem describes a sum of Bernoulli$(p_i)$ distributions and conditions on the sum equalling $c$, $0 \le c \le k$. (I believe that is as far as we will get in terms of finding names for this procedure.) It is of course a discrete distribution. The nonzero probabilities occur at all vectors having $c$ nonzero components, each potentially with a different probability. Vector-like notation is convenient and in common use. To this end, let us stipulate that $[k]$ represents the set of indexes $\{1,2,\ldots, k\}$. Any vector $v\in \{0,1\}^k$ can be identified with the subset $V\subset [k]$ of indexes $i$ where $v_i \ne 0$. The notation $p^V$ means $\prod_{i \in V}p_i = \prod_i p_i^{v_i}$, the product of components of the vector $p$ for which $v_i \ne 0$. Let $q_i = 1-p_i$ determine a vector of complementary probabilities $q$. Denote by $V^\prime$ the complement of $V$ in $[k]$. Write $|V|$ for the cardinality of $V$. In this notation, the unconditional probability of observing $V$ is $p^Vq^{V^\prime}$. The probability of observing $V$ for which $|V|=c$ is the sum over all such $V$, $${\Pr}_p(c) = \sum_{V\subset [k], |V|=c} p^Vq^{V^\prime}.$$ This is a sum over all $\binom{k}{c}$ subsets of size $c$. There is no simpler formula for it for arbitrary $p$. Note that $p$ alone determines $k$ and $q$, so nothing else besides $p$ and $c$ needs to be indicated in the notation. The conditional probability distribution therefore assigns probability $${\Pr}_{p;c}(V) = \frac{p^Vq^{V^\prime}}{{\Pr}_p(c)}$$ to all $V$ for which $|V|=c$. In a context in which $p$ is fixed the $p$s can be dropped from the notation; where $c$ is also fixed it may be dropped, too.
Notation for Random Bernoulli-Like Vector With Fixed Sum It is immaterial that the $p_i$ sum to unity. The problem describes a sum of Bernoulli$(p_i)$ distributions and conditions on the sum equalling $c$, $0 \le c \le k$. (I believe that is as far as we
49,438
Find the limiting distribution of $\sqrt{n} \left(\sqrt{\bar{X}} -1 \right) $ if $\sqrt{n} \left( \bar{X}-1 \right) \to N(0,1)$
The result is correct (up to a factor of $\sigma$, which is an unimportant typographical omission). This answer provides two separate ways to double-check it. We can in fact obtain the PDF of the transformed variables directly: when the $X_n$ are exactly Normal (and not just asymptotically so), the PDF of $\sqrt{n}\left(g(X_n)-1\right)$ can be found via integration as $$\frac{1}{(\sigma/2)\sqrt{2\pi n}} \left(\sqrt{n}+x\right) \exp\left({-\frac{x^2 \left(2 \sqrt{n}+x\right)^2}{2 n \sigma ^2}}\right)$$ for $x\gt -\sqrt{n}$ (and equal to $0$ otherwise). For fixed $x$, the limiting value as $n\to\infty$ is $$\frac{1}{(\sigma/2)\sqrt{2\pi}} \exp\left({-\frac {4x^2}{2 \sigma ^2}}\right) = \frac{1}{(\sigma/2)\sqrt{2\pi}} \exp\left({-\frac {x^2}{2 (\sigma/2) ^2}}\right),$$ the PDF of a Normal$(0, \sigma^2/4)$ distribution. One can also check the result with a simulation, such as carried out by this R code: set.seed(17) n <- 10^6 x <- sqrt(n)*(sqrt(rnorm(n, 1, 1/sqrt(n)))-1) m <- mean(x); v <- var(x); k <- mean((x-m)^4) se.v <- sqrt(((n-1)^2 * k - (n-1)*(n-3)*v^2) / n^3) print(v) # Variance print((v - 1/4)/se.v) # Standardized deviation from 1/4 This reports a simulated variance (when $n=10^6$) of $0.25015$ (with $\sigma=1$), which is just $0.44$ standard errors away from $1/4$: evidence that the computed variance is consistent with the limiting value for such a large $n$.
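For a quick visual check of the analytic claim, one can overlay the displayed PDF on its limiting Normal$(0,\sigma^2/4)$ density; the values of $\sigma$ and $n$ below are arbitrary choices for illustration.
sigma <- 1; n <- 25
f.exact <- function(x) ifelse(x > -sqrt(n),
  (sqrt(n) + x) / ((sigma / 2) * sqrt(2 * pi * n)) *
    exp(-x^2 * (2 * sqrt(n) + x)^2 / (2 * n * sigma^2)), 0)
curve(f.exact, from = -2, to = 2, ylab = "density")        # exact finite-n PDF
curve(dnorm(x, 0, sigma / 2), add = TRUE, lty = 2)         # N(0, sigma^2/4) limit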
Find the limiting distribution of $\sqrt{n} \left(\sqrt{\bar{X}} -1 \right) $ if $\sqrt{n} \left( \b
The result is correct (up to a factor of $\sigma$, which is an unimportant typographical omission). This answer provides two separate ways to double-check it. We can in fact obtain the PDF of the tra
Find the limiting distribution of $\sqrt{n} \left(\sqrt{\bar{X}} -1 \right) $ if $\sqrt{n} \left( \bar{X}-1 \right) \to N(0,1)$ The result is correct (up to a factor of $\sigma$, which is an unimportant typographical omission). This answer provides two separate ways to double-check it. We can in fact obtain the PDF of the transformed variables directly: when the $X_n$ are exactly Normal (and not just asymptotically so), the PDF of $\sqrt{n}\left(g(X_n)-1\right)$ can be found via integration as $$\frac{1}{(\sigma/2)\sqrt{2\pi n}} \left(\sqrt{n}+x\right) \exp\left({-\frac{x^2 \left(2 \sqrt{n}+x\right)^2}{2 n \sigma ^2}}\right)$$ for $x\gt -\sqrt{n}$ (and equal to $0$ otherwise). For fixed $x$, the limiting value as $n\to\infty$ is $$\frac{1}{(\sigma/2)\sqrt{2\pi}} \exp\left({-\frac {8x^2}{2 \sigma ^2}}\right) = \frac{1}{(\sigma/2)\sqrt{2\pi}} \exp\left({-\frac {x^2}{2 (\sigma/2) ^2}}\right),$$ the PDF of a Normal$(0, \sigma^2/4)$ distribution. One can check also check the result with a simulation, such as carried out by this R code: set.seed(17) n <- 10^6 x <- sqrt(n)*(sqrt(rnorm(n, 1, 1/sqrt(n)))-1) m <- mean(x); v <- var(x); k <- mean((x-m)^4) se.v <- sqrt(((n-1)^2 * k - (n-1)*(n-3)*v^2) / n^3) print(v) # Variance print((v - 1/4)/se.v) # Its standardized standard error This reports a simulated variance (when $n=10^6$) of $0.25015$ times the (unit) standard deviation, which is just $0.44$ standard errors away from $1/4$: evidence that the computed variance is not incorrect for such a large $n$.
Find the limiting distribution of $\sqrt{n} \left(\sqrt{\bar{X}} -1 \right) $ if $\sqrt{n} \left( \b The result is correct (up to a factor of $\sigma$, which is an unimportant typographical omission). This answer provides two separate ways to double-check it. We can in fact obtain the PDF of the tra
49,439
How to calculate the chance of getting completely unbalanced groups? (with R)
Taking the reworded question as a starting point: A) What is the chance of getting all males in one group (treatment/control) while there are all females in the other, for different sample sizes? With n=1, the chance of picking a male from the population is 0.5. With n=2, the chance that the first pick is a male is 0.5. The chance that the second pick is a male as well is also 0.5. Consequently the chance of picking all males in a sample of 2 is 0.5 * 0.5, which is the same as 0.5^2. To put it more generally: the chance of picking all males in a sample is 0.5^n (in which n = sample size). The same is true for the chance of picking all females: 0.5^n. Now the chance of having all males in one group and all females in the other group is the product of these chances: (0.5^n) * (0.5^n). When you want to take a more general approach, the case where it doesn't matter which sex is in the control group as long as the other group is composed of the other sex, then the formula is (0.5^(n-1)) * (0.5^n). B) What is the chance that students in one group are all from the same grade, while the students in the other group are all from one of the other grades? Building on (A), the chances for the first group seem quite simple to calculate. As the population is equally divided among the four grades, the chance of picking a student from a particular grade is 0.25. So you would say that the chance is 0.25^n. However, this is not true. It does not matter from which grade the first pick of a sample comes; only the following picks matter. The first pick determines from which grade the following picks have to come. The chances that these following picks are from the same grade are 0.25 for each pick. The chance of picking students all from the same grade is therefore 0.25^(n-1). Now you want to know what the chances are that you pick students all from one of the other grades. The chance for your first pick is 0.75. As the following picks have to be from the same grade, the chance for each of these picks is 0.25. The chance of picking students for the second group from another, but again a single, grade is 0.75 * (0.25^(n-1)). The chance of picking students from the same grade for the first group, while picking students from another, but again a single, grade for the second group is therefore: (0.25^(n-1)) * (0.75 * (0.25^(n-1))). C) What is the chance of having all vegetable eaters in one group and all non-vegetable eaters in the second group? For this question you have to calculate the chance of picking a (non-)vegetable eater first. The chance of picking a vegetable eater is 0.5 for males and 0.75 for females. Because the population is equally divided between males and females, the chance of picking a vegetable eater is: (0.5 * 0.5) + (0.75 * 0.5) = 0.625. The chance of picking a non-vegetable eater is: 1 - 0.625 = 0.375. The chance of picking all vegetable eaters in the first group while at the same time picking all non-vegetable eaters for the second group is: (0.625^n) * (0.375^n). You can calculate the answers for A, B & C in R with the following code: nr <- data.frame(sample.size = c(3, 4, 5, 10, 100)) nr$A <- (0.5 ^ (nr$sample.size - 1)) * (0.5 ^ nr$sample.size) nr$B <- (0.25 ^ (nr$sample.size - 1)) * (0.75 * (0.25 ^ (nr$sample.size - 1))) nr$C <- (0.625 ^ nr$sample.size) * (0.375 ^ nr$sample.size) This will produce a data frame with sample size in the first column and the probabilities for A, B & C in their respective columns. 
Why you should be careful when generalising the n-1 shortcut beyond simply multiplying by 2: Consider the example of two groups with n = 3 each. When you start selecting the first group, it does not matter which sex you select, so your chance of picking the "right" sex on the first pick is 1. However, the outcome of your first pick determines which outcomes you need from the following two selections to get a same-sex group. The chance of picking the same sex on each of those selections is 0.5. Consequently the chance of picking participants of the same sex for the first group is 1*0.5*0.5 = 1*(0.5^2) = 0.5^2, or 0.5^(n-1) in general. For your second group, you have only a chance of 0.5 that the first pick is of the other sex, and likewise 0.5 for the second and third picks. The calculation is therefore 0.5*0.5*0.5 = 0.5^3, or 0.5^n in general. As a result, the chance of selecting the first group of one sex and the other group of the other sex is (0.5^(n-1)) * (0.5^n). In your specific setup the formula (0.5^n) * (0.5^n) * 2 gives the same outcome, because with a 50/50 population the "free" first pick is just another way of summing over the two possible sexes. The caveat is in the individual chances: with unequal proportions, say 40% males and 60% females, the free first pick has to keep track of which sex it happened to be, so the probability becomes 0.4*(0.4^(n-1))*(0.6^n) + 0.6*(0.6^(n-1))*(0.4^n) = 2 * (0.4^n) * (0.6^n), which is exactly the "multiply by 2" form. When you add the following lines of code to the code above, you can see this in action: nr$Aa <- (0.5 ^ nr$sample.size) * (0.5 ^ nr$sample.size) * 2 nr$Ab <- 2 * (0.4 ^ nr$sample.size) * (0.6 ^ nr$sample.size) nr$Ac <- (0.4 ^ (nr$sample.size - 1)) * (0.6 ^ nr$sample.size) # not the same as Ab: the n-1 shortcut does not carry over unchanged
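A small simulation can be used to sanity-check these formulas under the same sampling model (each pick is independently male with probability p); the sample size and number of simulations below are arbitrary.
set.seed(42)
sim.prob <- function(p, n, n.sim = 1e5) {
  g1 <- matrix(rbinom(n.sim * n, 1, p), n.sim)   # 1 = male; one row per simulated study
  g2 <- matrix(rbinom(n.sim * n, 1, p), n.sim)
  mean((rowSums(g1) == n & rowSums(g2) == 0) | (rowSums(g1) == 0 & rowSums(g2) == n))
}
sim.prob(0.5, 4); (0.5^(4 - 1)) * (0.5^4)        # balanced case: both about 0.0078
sim.prob(0.4, 4); 2 * (0.4^4) * (0.6^4)          # 40/60 case: both about 0.0066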
How to calculate the chance of getting completely unbalanced groups? (with R)
Taking the reworded question as a starting point: A) What is the chance of getting all males in one group (treatment/control) while there are all females in the other for different sample sizes? With
How to calculate the chance of getting completely unbalanced groups? (with R) Taking the reworded question as a starting point: A) What is the chance of getting all males in one group (treatment/control) while there are all females in the other for different sample sizes? With n=1, the chance of picking a male from the population is 0.5. With n=2, the chance that the first pick is a male is 0.5. The chance that the second pick is a male as wel is also 0.5. Consequently the chance of picking all males in a sample of 2 is 0.5 * 0.5, which is the same as 0.5^2. To put it more generally: the chance of picking all males in a sample is 0.5^n (in which n = sample size). The same is true for the chance of picking all females: 0.5^n. Now the chance of having all males in one group and all females in the other group is the product of these chances: (0.5^n) * (0.5^n). When you want to take a more general approach, the case where it doesn't matter which sex is in the control group as long as the the other group is composed of the other sex, than the formula is (0.5^(n-1)) * (0.5^n). B) What is the chance that students in one group are from the same grade, while the student from the other group are all from one of the other grades? Building on (A), the chances for the first group (=grade) seem quite simple to calculate. As the population is equally divided among the four groups, the chance of picking a student from a particular group is 0.25. So you would say that the chance is 0.25^n. However, this is not true. As it doe not matter from which group the first pick of a sample comes from, only the following picks do matter. The first pick determines from which group the following pick have to come. The chances that these following picks are from the same group are 0.25 for each pick. The chances of picking students from the same group is therefore 0.25^(n-1). Now you want to know what the chances are that you pick students all from one of the other group. The chance for your first pick is 0.75. As the following picks have to be from the same group, the chance for each of these picks is 0.25. The chances of picking students for the second from another, but the same, group is 0.75 * (0.25^(n-1)) The chance of picking students from the same grade for the first group, while picking students from another, but the same, grade for the second group is therefore: (0.25^(n-1)) * (0.75 * (0.25^(n-1))) C) What is the chance of having all vegetable eaters in one group and all non-vegetable eaters in the second group? For this question you have to calculate the chance of picking a (non)-vegetable eater first. The chance of picking a vegetable eater is 0.5 for males and 0.75 for females. Because the population is equally divided betwwen male and females, the chance of picking a vegetable eater is: (0.5 * 0.5) + (0.75 * 0.5) = 0.625. The chance of picking a non-vegetable eater is: 1 - 0.625 = 0.375 The chance of picking all vegetable eaters in the first group while in the same time picking all non-vegetable eaters for the second group is: (0.625^n) * (0.375^n) You can calculate the answers for A, B & C in R with the following code: nr <- data.frame(sample.size = c(3, 4, 5, 10, 100)) nr$A <- (0.5 ^ (nr$sample.size - 1)) * (0.5 ^ nr$sample.size) nr$B <- (0.5 ^ (nr$sample.size - 1)) * (0.75 * (0.25 ^ (nr$sample.size - 1))) nr$C <- (0.625 ^ nr$sample.size) * (0.375 ^ nr$sample.size) This will produce a dataframe with sample size in the first column and the probabilities for A, B & C in their respective columns. 
Why you should use n-1 instead of simply multiplying by 2: Consider the example of two group with n = 3 each. When you start selecting the first group, it does not matter which sex you select. So your chance of picking the right one is 1. However, the outcome of your first pick determines which outcomes of the following two selections you need for a same sex group. The chances of picking the same sex for both selections are 0.5. Consequently the chance of picking participants of the same sex for the first group is 1*0.5*0.5 = 1*(0.5^2) = 0.5^2. The general formula for this outcome is 0.5^(n-1). For your second group, you have only a chance of 0.5 for the first pick of selecting a participant of the other sex. Also for the second and third pick you have a chance of 0.5 for each. The calculation is therefore: 0.5*0.5*0.5 = 0.5^3. The general formula for this part is 0.5 ^ n. As a result, the chance of selecting the first group of the same sex and the other group of the other sex is (0.5^(n-1)) * (0.5^n). I did some testing with different variations of the formula. In your specific setup the formula (0.5^n) * (0.5^n) * 2 gives the same outcome. The caveat is in the individual chances. When you have for example a population with 40% males and 60% females, just multiplying with 2 will give you the wrong probability. Therefore working with n-1 is the right solution. When you add the following lines of code to the code above, you can see it action: nr$Aa <- (0.5 ^ nr$sample.size) * (0.5 ^ nr$sample.size) * 2 nr$Ab <- (0.4 ^ (nr$sample.size - 1)) * (0.6 ^ nr$sample.size) nr$Ac <- (0.4 ^ nr$sample.size) * (0.6 ^ nr$sample.size) * 2
How to calculate the chance of getting completely unbalanced groups? (with R) Taking the reworded question as a starting point: A) What is the chance of getting all males in one group (treatment/control) while there are all females in the other for different sample sizes? With
49,440
Law of Large numbers and central limit theorem
There are two basic sets of conditions under which the strong law of large numbers applies. One of them requires only that the variables be IID and that their expectation be finite. Since $X_1, ..., X_n$ are IID, so are $X_1^2, ..., X_n^2$, and $E(X_i^2) = 5$ is finite. So $$ \frac{\sum_{i=1}^nX_i^2}{n}$$ converges almost surely to the expected value $E(X_i^2)$, that is, to 5. This illustrates the power of the strong law of large numbers: the conditions for applying it are very general and very often met in practice. For the last question, rewrite the ratio as $$ \frac{\sum_{i=1}^nX_i}{\sqrt{5n}}\frac{\sqrt{5}}{\sqrt{\frac{1}{n}\sum_{i=1}^nX_i^2}}.$$ The first term converges in distribution to $N(0,1)$ and the second term converges almost surely to 1. According to Slutsky's lemma (third item), the product converges in distribution to $N(0,1)$.
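A quick simulation sketch illustrates the conclusion. The question's exact distribution is not shown here, so the code uses $X_i \sim N(0,5)$, which has $E(X_i)=0$ and $E(X_i^2)=5$ as required.
set.seed(1)
n <- 1000; n.sim <- 5000
stat <- replicate(n.sim, {
  x <- rnorm(n, mean = 0, sd = sqrt(5))
  sum(x) / sqrt(sum(x^2))
})
c(mean(stat), sd(stat))        # should be close to 0 and 1
qqnorm(stat); qqline(stat)     # compare with the standard normal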
Law of Large numbers and central limit theorem
There are two basic criteria used to apply the strong law of large numbers. One of them requires that the variables are IID and that their expectation be finite. Since $X_1, ..., X_n$ are IID, so are
Law of Large numbers and central limit theorem There are two basic criteria used to apply the strong law of large numbers. One of them requires that the variables are IID and that their expectation be finite. Since $X_1, ..., X_n$ are IID, so are $X_1^2, ..., X_n^2$, and $E(X_i^2) = 5$ is finite. So $$ \frac{\sum_{i=1}^nX_i^2}{n}$$ converges almost surely to the expected value of $E(X_i^2)$, that is to say 5. This illustrates the power of the strong law of large numbers. The criteria to apply it are very general and very often met in practice. For the last question, rewrite the ratio as $$ \frac{\sum_{i=1}^nX_i}{\sqrt{5n}}\frac{\sqrt{5}}{\sqrt{\frac{1}{n}\sum_{i=1}^nX_i^2}}.$$ The first term converges in distribution to $N(0,1)$ and the second term converges almost surely to 1. According to Slutsky's lemma (third item), the whole converges in distribution to $N(0,1)$.
Law of Large numbers and central limit theorem There are two basic criteria used to apply the strong law of large numbers. One of them requires that the variables are IID and that their expectation be finite. Since $X_1, ..., X_n$ are IID, so are
49,441
How can I estimate theta for the inverse hyperbolic sine transformation?
After a couple more days of thinking about the problem, I have two tentative answers. Select theta so that the transformed data is close to normal as measured by goodness of fit. For example, choose theta to maximize the p-value of the Shapiro-Wilk test. set.seed(1) x <- rnorm(1000) xt <- Inv.IHS(x, theta=2) Shapiro.test.pvalue <- function(theta, x){ x <- IHS(x, theta) shapiro.test(x)$p.value } optimise(Shapiro.test.pvalue, lower=0.001, upper=50, x=xt, maximum=TRUE) # 2.069838 Maximum likelihood estimation of theta. Looking at the paper by Burbidge et al., I think the (concentrated) log-likelihood for a single variable can be expressed, up to constants, as follows. IHS.loglik <- function(theta,x){ IHS <- function(x, theta){ # function to IHS transform asinh(theta * x)/theta } n <- length(x) xt <- IHS(x, theta) log.lik <- -n*log(sum((xt - mean(xt))^2))- sum(log(1+theta^2*x^2)) return(log.lik) } # try this on our data optimise(IHS.loglik, lower=0.001, upper=50, x=xt, maximum=TRUE) # 2.0407 In both cases we get close to the expected theta, which is encouraging. But when I try the maximum likelihood approach on my real data it doesn't seem to give reasonable answers.
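For completeness: the snippets above use IHS and Inv.IHS, the latter of which is presumably defined in the question. Since the inverse of asinh(theta*x)/theta is sinh(theta*x)/theta, a self-contained pair of helpers might look like this.
IHS <- function(x, theta) asinh(theta * x) / theta      # inverse hyperbolic sine transform
Inv.IHS <- function(x, theta) sinh(theta * x) / theta   # its inverse, used here only to simulate data
set.seed(1)
xt <- Inv.IHS(rnorm(1000), theta = 2)   # data whose IHS transform with theta = 2 is standard normal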
How can I estimate theta for the inverse hyperbolic sine transformation?
After a couple more days of thinking about the problem, I have two tentative answers. Select theta so that the transformed data is close to normal as measured by goodness of fit. For example choose
How can I estimate theta for the inverse hyperbolic sine transformation? After a couple more days of thinking about the problem, I have two tentative answers. Select theta so that the transformed data is close to normal as measured by goodness of fit. For example choose theta to maximize the p-value of the Shapiro-Wik test. set.seed(1) x <- rnorm(1000) xt <- Inv.IHS(x, theta=2) Shapiro.test.pvalue <- function(theta, x){ x <- IHS(x, theta) shapiro.test(x)$p.value } optimise(Shapiro.test.pvalue, lower=0.001, upper=50, x=xt, maximum=TRUE) # 2.069838 Maximum likelihood estimation of theta. Looking at the paper by Burbidge et al. I think the likelihood function for a single variable can be expressed as follows. IHS.loglik <- function(theta,x){ IHS <- function(x, theta){ # function to IHS transform asinh(theta * x)/theta } n <- length(x) xt <- IHS(x, theta) log.lik <- -n*log(sum((xt - mean(xt))^2))- sum(log(1+theta^2*x^2)) return(log.lik) } # try this on our data optimise(IHS.loglik, lower=0.001, upper=50, x=xt, maximum=TRUE) # 2.0407 In both cases we get close to the expected theta, which is encouraging. But when I try the maximum likelihood approach on my real data it doesn't seem to give reasonable answers.
How can I estimate theta for the inverse hyperbolic sine transformation? After a couple more days of thinking about the problem, I have two tentative answers. Select theta so that the transformed data is close to normal as measured by goodness of fit. For example choose
49,442
Pairwise comparisons for a regression with sandwich estimates (in R)
One solution is actually given as an example in the book on the multcomp package, section 4.6: Bretz, F., Hothorn, T., & Westfall, P. H. (2011). Multiple comparisons using R. Boca Raton, FL: CRC Press. One only needs to slightly adapt your code (everything needs to be in one data.frame instead of floating around): require(multcomp) require(sandwich) set.seed(81) pred3 = rnorm(24) df <- data.frame(pred1 = rep(c('Car', 'Bike', 'Train', 'Airplane'), 6), pred2 = rep(c('High', 'Low', 'Middle'), 8) , resp = c(rnorm(12, sd = 1), rnorm(12, sd = 5))) m <- aov(resp ~ pred1 + pred2, df) tukey <- glht(m, linfct = mcp(pred1 = "Tukey") , vcov = sandwich) summary(tukey, test = adjusted()) ## Simultaneous Tests for General Linear Hypotheses ## ## Multiple Comparisons of Means: Tukey Contrasts ## ## ## Fit: aov(formula = resp ~ pred1 + pred2, data = df) ## ## Linear Hypotheses: ## Estimate Std. Error t value Pr(>|t|) ## Bike - Airplane == 0 1.559 1.166 1.34 0.547 ## Car - Airplane == 0 1.239 1.241 1.00 0.748 ## Train - Airplane == 0 2.509 0.915 2.74 0.058 . ## Car - Bike == 0 -0.320 1.422 -0.23 0.996 ## Train - Bike == 0 0.950 1.149 0.83 0.838 ## Train - Car == 0 1.270 1.225 1.04 0.726 ## --- ## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ## (Adjusted p values reported -- single-step method) Note that glht as default uses the single-step method as adjustment for alpha-error accumulation. If you want something else, you need to feed it to adjusted() For example, using the Bonferroni-Holm correction (which I tend to use as I don't understand what single-step actually does): summary(tukey, test = adjusted("holm")) If you want no alpha error correction, which I do not recommend, this is also possible: summary(tukey , test = adjusted("none"))
Pairwise comparisons for a regression with sandwich estimates (in R)
One solution is actually given as an example in the book on the multcomp package, section 4.6: Bretz, F., Hothorn, T., & Westfall, P. H. (2011). Multiple comparisons using R. Boca Raton, FL: CRC Press
Pairwise comparisons for a regression with sandwich estimates (in R) One solution is actually given as an example in the book on the multcomp package, section 4.6: Bretz, F., Hothorn, T., & Westfall, P. H. (2011). Multiple comparisons using R. Boca Raton, FL: CRC Press. One only needs to slightly adapt your code (everything needs to be in one data.frame instead of floating around): require(multcomp) require(sandwich) set.seed(81) pred3 = rnorm(24) df <- data.frame(pred1 = rep(c('Car', 'Bike', 'Train', 'Airplane'), 6), pred2 = rep(c('High', 'Low', 'Middle'), 8) , resp = c(rnorm(12, sd = 1), rnorm(12, sd = 5))) m <- aov(resp ~ pred1 + pred2, df) tukey <- glht(m, linfct = mcp(pred1 = "Tukey") , vcov = sandwich) summary(tukey, test = adjusted()) ## Simultaneous Tests for General Linear Hypotheses ## ## Multiple Comparisons of Means: Tukey Contrasts ## ## ## Fit: aov(formula = resp ~ pred1 + pred2, data = df) ## ## Linear Hypotheses: ## Estimate Std. Error t value Pr(>|t|) ## Bike - Airplane == 0 1.559 1.166 1.34 0.547 ## Car - Airplane == 0 1.239 1.241 1.00 0.748 ## Train - Airplane == 0 2.509 0.915 2.74 0.058 . ## Car - Bike == 0 -0.320 1.422 -0.23 0.996 ## Train - Bike == 0 0.950 1.149 0.83 0.838 ## Train - Car == 0 1.270 1.225 1.04 0.726 ## --- ## Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ## (Adjusted p values reported -- single-step method) Note that glht as default uses the single-step method as adjustment for alpha-error accumulation. If you want something else, you need to feed it to adjusted() For example, using the Bonferroni-Holm correction (which I tend to use as I don't understand what single-step actually does): summary(tukey, test = adjusted("holm")) If you want no alpha error correction, which I do not recommend, this is also possible: summary(tukey , test = adjusted("none"))
Pairwise comparisons for a regression with sandwich estimates (in R) One solution is actually given as an example in the book on the multcomp package, section 4.6: Bretz, F., Hothorn, T., & Westfall, P. H. (2011). Multiple comparisons using R. Boca Raton, FL: CRC Press
49,443
Goodness of fit test for exponential distribution
Firstly, rearrange the table so it makes better sense, and calculate the mean (a calculator with a frequency mode helps): take the midpoint of each time interval and weight it by the corresponding frequency. You should get a mean of 40. Since for an exponential distribution lambda = 1/mean, this gives lambda = 0.025. Use this with the exponential CDF, $P(X \le t) = 1 - e^{-\lambda t}$, to calculate the probability of each interval, and multiply each probability by the total (100) to get the expected frequencies. Because we are dealing with a continuous distribution, the first and last intervals should be treated as "less than 20" and "more than 120". For example, the first probability is $1 - e^{-0.025 \times 20} = 0.3935$ (keep at least 4 d.p.), the second is $(1 - e^{-0.025 \times 40}) - (1 - e^{-0.025 \times 20}) = 0.2387$, and so on for all intervals. Keep the expected frequencies to 2 d.p.
Time          0-20      20-40     40-60     60-90     90-120        120-180   Total
Frequency     41        19        16        13        9 (11)        2         100
P(interval)   0.3935    0.2387    0.1447    0.1177    0.0556        0.0497    1 (approx.)
Expected      39.35     23.87     14.47     11.77     5.56 (10.53)  4.97      (combine last two)
(O-E)^2/E     0.06919   0.99359   0.16178   0.12854   0.02098                 1.37 (3 s.f.)
H0: the data follow an exponential distribution. H1: the data do not follow an exponential distribution. Use a 5% significance level. Degrees of freedom = 5 - 1 - 1 = 3 (one lost for the totals constraint and one for estimating lambda from the data). The critical value is $\chi^2_{0.05}(3) = 7.815$. Since 1.37 < 7.815, do not reject H0; conclude that the time in seconds between successive white cars in flowing traffic on an open road can be modelled by an exponential distribution. (I did A2 Statistics, a reliable source in itself.)
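For anyone who wants to check the arithmetic, here is the same calculation as a short R sketch; the breakpoints and observed frequencies are taken from the table above, with the last cell treated as "more than 120".
obs <- c(41, 19, 16, 13, 9, 2)
breaks <- c(0, 20, 40, 60, 90, 120, Inf)
lambda <- 1 / 40                                   # 1 / sample mean
p <- diff(pexp(breaks, rate = lambda))             # interval probabilities
expected <- 100 * p
obs2 <- c(obs[1:4], sum(obs[5:6]))                 # combine the last two cells
exp2 <- c(expected[1:4], sum(expected[5:6]))
X2 <- sum((obs2 - exp2)^2 / exp2)                  # about 1.37
c(X2, qchisq(0.95, df = length(obs2) - 1 - 1))     # compare with the 5% critical value, 7.815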
Goodness of fit test for exponential distribution
Firstly, Rearrange the table so it makes better sense: and calculate mean via calculator.calculator. In order to do this take the midpoint of each interval for time and enter this on the display along
Goodness of fit test for exponential distribution Firstly, Rearrange the table so it makes better sense: and calculate mean via calculator.calculator. In order to do this take the midpoint of each interval for time and enter this on the display alongside the frequencies. you should get 40. this is the MEAN however, so to get lambda, use the following formula lambda = 1/mean this should give you a lambda value of 0.025. use this and the cumulative pdf (1-e-^lambda x time interval) function to calculate p(X=x) for each and x by total to get expected for each. Because we are dealing with a continuous distribution, the first and last intervals should be calculated as less than 20 and more than 120.i.e first calculation will be 1-e-^0.025x20= 0.3935 (keep to at least 4dp).. second will be (1-e-^0.025x40) - (1-e-^0.025x20) = 0.2387.. carry on until ll intervals have been done as shown above. times each p(X=x) by 100. keep expected to 2dp. Time 0-20 20-40 40-60 60-90 90-120 120-180 total Frequency 41 19 16 13 9 (11) 2 100 p(X=x) 0.3935 0.2387 0.1447 0.1177 0.0556 0.0497 1 (about!) Expected 39.35 23.87 14.47 11.77 5.56 (10.53) 4.97 (combine last two) (O-E)^2/E 0.06919 0.99359 0.16178 0.12854 0.02098 1.37 (3sf) H0: Data follows an Exponential Distribution H1: Data does not follow an Exponential Distribution (i.e Use 5% SL) DF = 5-1-1 = 3 (1 for totals and 1 for estimating lambda from population) X (c.v)= 7.815 1.37 < 7.815 so accept H0. Conclude time in intervals in seconds between successive white cars in flowing traffic in an open road can be modeled by an exponential distribution. I did A2 Statistics, a reliable source in itself.
Goodness of fit test for exponential distribution Firstly, Rearrange the table so it makes better sense: and calculate mean via calculator.calculator. In order to do this take the midpoint of each interval for time and enter this on the display along
49,444
GLMM model specification help gender effects + an effect that is nested only within female
You have three groups: males (M), females with young (FY), females without young (FN). Although I understand why it would be conceptually tempting to think of this as two factors (gender and parental status), I think from a statistical point of view it is better to think of these groups as comprising a single factor, "Group", with 3 levels. Under this point of view, it turns out that the two questions you want to ask map perfectly onto a standard set of orthogonal contrast codes you can use to represent this factor. The codes would look like this: (group <- gl(n=3, k=5, labels=c("M","FY","FN"))) # [1] M M M M M FY FY FY FY FY FN FN FN FN FN # Levels: M FY FN # the default contrasts (i.e., dummy codes) suck: contrasts(group) # FY FN # M 0 0 # FY 1 0 # FN 0 1 # replace with meaningful contrasts: contrasts(group) <- cbind(MvsF=c(-2/3, 1/3, 1/3), YvsN=c(0, -1/2, 1/2)) contrasts(group) # MvsF YvsN # M -0.6666667 0.0 # FY 0.3333333 -0.5 # FN 0.3333333 0.5 In this new set of codes, the "MvsF" code represents the mean difference between males and females (more specifically, the difference between the Male mean and the mean of the two Female means), and the "YvsN" code represents the mean difference between females with and without young.
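A quick demonstration, using made-up data and a plain lm for brevity (the same contrasts carry over unchanged to the fixed-effects part of a mixed model), shows that the fitted coefficients line up with those two comparisons:
set.seed(1)
y <- rnorm(15, mean = rep(c(10, 12, 14), each = 5))   # M, FY, FN cell means of 10, 12, 14
coef(lm(y ~ group))
# groupMvsF should estimate mean(FY, FN) - mean(M) = 3 (plus noise),
# groupYvsN should estimate mean(FN) - mean(FY) = 2 (plus noise)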
GLMM model specification help gender effects + an effect that is nested only within female
You have three groups: males (M), females with young (FY), females without young (FN). Although I understand why it would be conceptually tempting to think of this as two factors (gender and parental
GLMM model specification help gender effects + an effect that is nested only within female You have three groups: males (M), females with young (FY), females without young (FN). Although I understand why it would be conceptually tempting to think of this as two factors (gender and parental status), I think from a statistical point of view it is better to think of these groups as comprising a single factor, "Group", with 3 levels. Under this point of view, it turns out that the two questions you want to ask map perfectly onto a standard set of orthogonal contrast codes you can use to represent this factor. The codes would look like this: (group <- gl(n=3, k=5, labels=c("M","FY","FN"))) # [1] M M M M M FY FY FY FY FY FN FN FN FN FN # Levels: M FY FN # the default contrasts (i.e., dummy codes) suck: contrasts(group) # FY FN # M 0 0 # FY 1 0 # FN 0 1 # replace with meaningful contrasts: contrasts(group) <- cbind(MvsF=c(-2/3, 1/3, 1/3), YvsN=c(0, -1/2, 1/2)) contrasts(group) # MvsF YvsN # M -0.6666667 0.0 # FY 0.3333333 -0.5 # FN 0.3333333 0.5 In this new set of codes, the "MvsF" code represents the mean difference between males and females (more specifically, the difference between the Male mean and the mean of the two Female means), and the "YvsN" code represents the mean difference between females with and without young.
GLMM model specification help gender effects + an effect that is nested only within female You have three groups: males (M), females with young (FY), females without young (FN). Although I understand why it would be conceptually tempting to think of this as two factors (gender and parental
49,445
How to evaluate/validate clusters using multiple clustering methods
As daniellopez46 noted, I think you are thinking of consensus clustering, where you basically form an ensemble of different clustering runs. What is a bit strange here is that you would want the ensemble to contain results from different clustering methods, which can be very misleading. I say this because, unlike supervised learning, unsupervised learning always has, to a greater or lesser degree, a subjective component: you need to have an idea of what kind of grouping you would be interested in, given your data. Elaborating a bit, clustering is labeling observations based on what relationship they have with other observations in your feature space. Different clustering algorithms will understand this in totally different ways because they are looking for different things. Depending on what kind of topology you are looking for, you will (as a human) be satisfied with what one clustering algorithm produced on some data set and be totally dissatisfied with what it did on another data set. Look at this question I recently answered, where you can see a diagram of how different clustering techniques treat the same data sets. Another thing that should be noted is that consensus clustering is still very new and is basically just being explored, so don't take it as a panacea.
How to evaluate/validate clusters using multiple clustering methods
As daniellopez46 noted, I think you are thinking of consensus clustering where you basically form an ensemble of different clustering runs. What is a bit strange here is that you would want the ensemb
How to evaluate/validate clusters using multiple clustering methods As daniellopez46 noted, I think you are thinking of consensus clustering where you basically form an ensemble of different clustering runs. What is a bit strange here is that you would want the ensemble to contain results from different clustering methods which can be very misleading. I say this because unlike supervised learning, unsupervised learning always has in a larger or smaller degree a subjective component as you need to have an idea of what you consider a grouping you would be interested in based on your data. Elaborating a bit, clustering is labeling observations based on what relationship they have with other observations in your feature space. Different clustering algorithms will understand this in a totally different fashion as they are looking for different things. Depending on what kind of topology you are looking for you will (as a human) be satisfied with what one clustering algorithm produced on some data set and be totally dissatisfied with what it did on another data set. Look at this question I recently answered, where you can see a diagram of how different clustering techniques treat the same data sets. Another thing that should be noted is that consensus clustering is still very new and is basically just being explored so don't take it as panacea.
How to evaluate/validate clusters using multiple clustering methods As daniellopez46 noted, I think you are thinking of consensus clustering where you basically form an ensemble of different clustering runs. What is a bit strange here is that you would want the ensemb
49,446
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior
As mentioned in a comment, the MAP estimate is the maximum likelihood estimate when you omit $g(\theta)$ or when it is a constant. If $g(\theta)$ is not a constant, then there are of course various methods for finding the MAP estimate. Omitting the survey sampling aspect (or assuming we have a completely representative sample from a population of infinite size, or assuming you have included the sampling mechanism in your likelihood): Analytically (often by taking logs and finding the maximum). In some cases conjugate priors are available that have known modes, so that you do not need to do the analytic calculation yourself. E.g. in the example you give we could use a Beta prior. You did not specify how certain you were about your prior, but let's say that in a previous survey you had 20 out of 50 for "A" and 30 out of 50 for "B" (and that there are no other options to vote for). If you are happy to use a Beta(20,30) prior, then your posterior is a Beta(20+60, 30+40) distribution. Since the mode of a Beta(a,b) distribution is (a-1)/(a+b-2), the MAP estimate is then known to be (80-1)/(150-2) = 0.53. This would not be correct for a non-representative sample or one from a non-infinite population, and this option only exists for a few distributions. Additionally, just because a conjugate prior is available and convenient does not mean it is what you want to use (e.g. you may have wanted to express some doubt about the applicability of the previous survey to your new survey by using a mixture of a Beta(0.5,0.5) prior and a Beta(20,30) prior with weights of 0.2 and 0.8; then you can still do conjugate updating, but getting the updated posterior weights is a tiny bit harder). Using some numeric minimization routine. In a simplistic situation where surveys really sample exactly how people will vote (nothing else happens before the election to change people's minds, there are no issues with voter turnout differing between parties, etc.), you could then, for a known total number of voters, predict the outcome of the vote using the beta-binomial distribution (the predictive distribution of the binomial distribution with a beta prior). In reality predicting an election is of course much more difficult.
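To make the two routes concrete, here is a small R sketch using the illustrative numbers above (a new survey with 60 out of 100 for "A" and the Beta(20,30) prior); it is only a sketch, not a recommendation for how to analyse a real survey.
a <- 20 + 60; b <- 30 + 40
(a - 1) / (a + b - 2)                        # closed-form posterior mode, about 0.53
# the same MAP estimate via a generic numeric optimiser on the log posterior
log.post <- function(theta) dbinom(60, 100, theta, log = TRUE) + dbeta(theta, 20, 30, log = TRUE)
optimize(log.post, interval = c(0.001, 0.999), maximum = TRUE)$maximum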
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior
As mentioned in a comment, the MAP estimate is the maximum likelihood estimate when you omit $g(\theta)$ or if it is a constant. If $g(\theta)$ is not a constant, then there are of course various meth
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior As mentioned in a comment, the MAP estimate is the maximum likelihood estimate when you omit $g(\theta)$ or if it is a constant. If $g(\theta)$ is not a constant, then there are of course various methods for finding the MAP estimate. Omitting the survey sampling aspect (or assuming we have a completely representative sample from a population of infinite size or assuming you have included the sampling mechanism into your likelihood): Analytically (often by taking logs and finding the maximum). In some cases conjugate priors are available have known modes so that you do not need to do the analytic calculation yourself. E.g. in the example you give we could use a Beta prior. You did not specify how certain you were about your prior, but let's say that in a previous survey you had 20 out of 50 for "A" and 30 out of 50 for "B" (and that there are no other options to vote for). If you are happy to use a Beta(20,30) prior, then your posterior is a Beta(20+60, 30+40) distribution. The mode is then known to be (80-1)/(150-2)=0.53 This would not be correct for a non-representative sample or one from a non-infinite population and this option only exists for a few distributions. Additionally, just because a conjugate prior is available and convenient does not mean it is what you want to use (e.g. you may have wanted to express some doubt about the applicability of the previous survey to your new survey by using a mixture of a Beta(0.5,0.5) prior and a Beta(20,30) prior with weights of 0.2 and 0.8 to express this uncertainty. Then you can still do conjugate updating, but getting the updated posterior weights is a tiny bit harder. Using some numeric minimization routine. In a simplistic situation where surveys really sample exactly how people will really vote (nothing else happens before the election to change the mind of people, there is no issues with voter turnout differing for parties etc.), you could then for a known total size of the number of voters predict the outcome of voting using the beta-binomial distribution (the predictive distribution of the binomial distribution with a beta prior). In reality predicting an election is of course much more difficult.
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior As mentioned in a comment, the MAP estimate is the maximum likelihood estimate when you omit $g(\theta)$ or if it is a constant. If $g(\theta)$ is not a constant, then there are of course various meth
49,447
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior
Calculate the chances of A and B from the samples you have using Maximum Likelihood. Multiply the chance you get for A by 0.4 and the chance for B by 0.6. Compare the results: the MAP estimator is the higher one. How do you interpret "no knowledge"? If you say it means each case has the same chance, use the procedure above and multiply both by 0.5. If you say there is no prior at all, use Maximum Likelihood. The nice thing about it is that both cases coincide. Enjoy,
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior
Calculate the the chances of A and B from the samples you have using Maximum Likelihood. Multiply the chances you get for A by 0.4 and B by 0.6. Compare the results. The MAP Estimator is the higher on
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior Calculate the the chances of A and B from the samples you have using Maximum Likelihood. Multiply the chances you get for A by 0.4 and B by 0.6. Compare the results. The MAP Estimator is the higher one. How do you interpret no knowledge? If you say it means any case has the same chances, use the above and multiply by 0.5. If you say there is no prior at all, use Maximum Likelihood. The nice thing about it, both cases collide. Enjoy,
How to compute the maximum a posteriori probability (MAP) estimate with / without a prior Calculate the the chances of A and B from the samples you have using Maximum Likelihood. Multiply the chances you get for A by 0.4 and B by 0.6. Compare the results. The MAP Estimator is the higher on
49,448
Construct confidence interval of the mean for auto-correlated data
Here are a couple thoughts that may be helpful: Auto-correlation doesn't matter when you only look at a single t at a time. So, at a fixed time t, you could just run a t-test to check for a difference in means. If you run the t-test for each time separately, then you get a bunch of p-values. Because of auto-correlation these p-values are not independent, but each p-value considered alone is just fine. So now you want to find the times for which there is a difference in means. I would try using false discovery rate (FDR) methods (see the "Benjamini-Hochberg procedure" at http://en.wikipedia.org/wiki/False_discovery_rate). Luckily, this procedure controls the FDR even when there is positive dependence among your p-values. (see "The Control of the False Discovery Rate in Multiple Testing under Dependency", free version here http://thom.jouve.free.fr/work/thesis/sitecopy_save/Biblio/ToCheck/fdr/Benjamini2001.pdf) This should give you a reasonable first answer to your original question. Finally, I think the two plots you drew are very clear. They are probably more informative than any kind of statistical analysis you can run... Good luck! Edit by Roland: Here is an R implementation of the FDR method for the example in the question. The result looks reasonable. dat <- setNames(cbind(stack(as.data.frame(t(a))), stack(as.data.frame(t(b)))), c("a", "i", "b", "i")) dat <- dat[,-4] library(plyr) p.raw <- ddply(dat, .(i), function(df) t.test(df$a, df$b)$p.value) p.fdr <- cbind(p.adjust(p.raw[,2], method="fdr"), t[as.numeric(gsub("V","",p.raw[,1]))]) p.fdr[order(p.fdr[,2]),] # [,1] [,2] # [1,] 0.63001435 3 # [2,] 0.19439226 4 # [3,] 0.06200315 5 # [4,] 0.07335654 6 # [5,] 0.05336699 7 # [6,] 0.06115999 8 # [7,] 0.06115999 9 # [8,] 0.06103370 10 # [9,] 0.04324050 11 # [10,] 0.04324050 12 # [11,] 0.04324050 13 # [12,] 0.04324050 14 # [13,] 0.06103370 15 # [14,] 0.05533972 16 # [15,] 0.15489402 17 # [16,] 0.58234624 18 # [17,] 0.05533972 19 # [18,] 0.04324050 20
Construct confidence interval of the mean for auto-correlated data
Here are a couple thoughts that may be helpful: Auto-correlation doesn't matter when you only look at a single t at a time. So, at a fixed time t, you could just run a t-test to check for a differenc
Construct confidence interval of the mean for auto-correlated data Here are a couple thoughts that may be helpful: Auto-correlation doesn't matter when you only look at a single t at a time. So, at a fixed time t, you could just run a t-test to check for a difference in means. If you run the t-test for each time separately, then you get a bunch of p-values. Because of auto-correlation these p-values are not independent, but each p-value considered alone is just fine. So now you want to find the times for which there is a difference in means. I would try using false discovery rate (FDR) methods (see the "Benjamini-Hochberg procedure" at http://en.wikipedia.org/wiki/False_discovery_rate). Luckily, this procedure controls the FDR even when there is positive dependence among your p-values. (see "The Control of the False Discovery Rate in Multiple Testing under Dependency", free version here http://thom.jouve.free.fr/work/thesis/sitecopy_save/Biblio/ToCheck/fdr/Benjamini2001.pdf) This should give you a reasonable first answer to your original question. Finally, I think the two plots you drew are very clear. They are probably more informative than any kind of statistical analysis you can run... Good luck! Edit by Roland: Here is an R implementation of the FDR method for the example in the question. The result looks reasonable. dat <- setNames(cbind(stack(as.data.frame(t(a))), stack(as.data.frame(t(b)))), c("a", "i", "b", "i")) dat <- dat[,-4] library(plyr) p.raw <- ddply(dat, .(i), function(df) t.test(df$a, df$b)$p.value) p.fdr <- cbind(p.adjust(p.raw[,2], method="fdr"), t[as.numeric(gsub("V","",p.raw[,1]))]) p.fdr[order(p.fdr[,2]),] # [,1] [,2] # [1,] 0.63001435 3 # [2,] 0.19439226 4 # [3,] 0.06200315 5 # [4,] 0.07335654 6 # [5,] 0.05336699 7 # [6,] 0.06115999 8 # [7,] 0.06115999 9 # [8,] 0.06103370 10 # [9,] 0.04324050 11 # [10,] 0.04324050 12 # [11,] 0.04324050 13 # [12,] 0.04324050 14 # [13,] 0.06103370 15 # [14,] 0.05533972 16 # [15,] 0.15489402 17 # [16,] 0.58234624 18 # [17,] 0.05533972 19 # [18,] 0.04324050 20
Construct confidence interval of the mean for auto-correlated data Here are a couple thoughts that may be helpful: Auto-correlation doesn't matter when you only look at a single t at a time. So, at a fixed time t, you could just run a t-test to check for a differenc
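Because the R snippet above relies on the objects a, b and t defined in the original question, here is a self-contained sketch of the same idea (one t-test per time point followed by Benjamini-Hochberg adjustment) on simulated data; the sample size, number of time points and injected effect are made up for illustration.

set.seed(1)
n_rep <- 10; times <- 1:20                              # hypothetical replicates and time points
a <- matrix(rnorm(n_rep * length(times)), n_rep)        # group A
b <- matrix(rnorm(n_rep * length(times)), n_rep)        # group B
b[, 10:15] <- b[, 10:15] + 1.5                          # inject a difference at times 10-15

## one t-test per time point, then FDR (Benjamini-Hochberg) adjustment
p.raw <- sapply(seq_along(times), function(j) t.test(a[, j], b[, j])$p.value)
p.fdr <- p.adjust(p.raw, method = "fdr")
data.frame(time = times, p.raw = round(p.raw, 3), p.fdr = round(p.fdr, 3))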
49,449
Dealing with missing data in the prediction set only
(I'll let someone else address the estimation of the missing data. You may want to directly model the probability that the observation is each level of the unknown factor using knowledge of other covariate values, and possibly outside information, e.g., priors etc. There are strategies such as propensity scores that you might be able to use for this type of thing. However, at first glance your approach looks reasonable to me.) One note is that I can't tell from your description if you are weighting by raw frequencies. If so, you want to divide these by $N$ to get the marginal probabilities instead. You are right that you are not handling level 3 correctly. The coding scheme that you use in your question set up is known as reference level coding. To use this approach correctly, you need to have an intercept (i.e., $\beta_0$), which estimates the mean of level 3. I suspect you do have such, even though you didn't list it. In this case, you would just add the intercept to your final equation. That is: $$ \beta_0\!*\!f_3 + \beta_1\!*\!f_1 + \beta_2\!*\!f_2 $$ Note that you are multiplying the intercept (which encodes the reference level) by the marginal probability that the observation is actually the reference level.
Dealing with missing data in the prediction set only
(I'll let someone else address the estimation of the missing data. You may want to directly model the probability that the observation is each level of the unknown factor using knowledge of other cov
Dealing with missing data in the prediction set only (I'll let someone else address the estimation of the missing data. You may want to directly model the probability that the observation is each level of the unknown factor using knowledge of other covariate values, and possibly outside information, e.g., priors etc. There are strategies such as propensity scores that you might be able to use for this type of thing. However, at first glance your approach looks reasonable to me.) One note is that I can't tell from your description if you are weighting by raw frequencies. If so, you want to divide these by $N$ to get the marginal probabilities instead. You are right that you are not handling level 3 correctly. The coding scheme that you use in your question set up is known as reference level coding. To use this approach correctly, you need to have an intercept (i.e., $\beta_0$), which estimates the mean of level 3. I suspect you do have such, even though you didn't list it. In this case, you would just add the intercept to your final equation. That is: $$ \beta_0\!*\!f_3 + \beta_1\!*\!f_1 + \beta_2\!*\!f_2 $$ Note that you are multiplying the intercept (which encodes the reference level) by the marginal probability that the observation is actually the reference level.
Dealing with missing data in the prediction set only (I'll let someone else address the estimation of the missing data. You may want to directly model the probability that the observation is each level of the unknown factor using knowledge of other cov
49,450
Fitting a Generalized Linear Model (GLM) in R
There are three components to the GLM: an outcome variable, a linear predictor and a link function. The link function in the GLM relates the expected value of the outcome variable to the linear predictor. In other words, not the expected value itself, but a function of it, is modeled by the linear predictor. An example with the logarithm as the link function and the linear predictor $\beta_0 + \beta_1*x$ is: $$\log(E(y)) = \beta_0 + \beta_1*x$$ In your case, the linear predictor is $\log(\beta_0) + \beta_1*\log({\rm exp}_1) + \beta_2*\log({\rm exp}_2)$, so the equation for your model becomes: $$\log(E(y)) = \log(\beta_0) + \beta_1*\log({\rm exp}_1) + \beta_2*\log({\rm exp}_2)$$ I think this is a bit unusual and I would argue that it is possibly not the model you are supposed to fit. In any case, to fit this model in R, the code should look like this: model <- glm(formula = Y ~ log(exp1) + log(exp2), family = poisson(link="log"), data = CSV_table) The only thing you have to take care of after running the model is to exponentiate the fitted intercept if you want to recover $\beta_0$ as written above, since the fitted intercept corresponds to $\log(\beta_0)$. A good book if you want to learn about the GLM and categorical data analysis in general is the one by Agresti (1996). References: Agresti, A. (1996). An introduction to categorical data analysis (Vol. 135). New York: Wiley.
Fitting a Generalized Linear Model (GLM) in R
There are three components to the GLM: an outcome variable, a linear predictor and a link function. The link function in the GLM relates the expected value of the outcome variable to the linear predic
Fitting a Generalized Linear Model (GLM) in R There are three components to the GLM: an outcome variable, a linear predictor and a link function. The link function in the GLM relates the expected value of the outcome variable to the linear predictor. In other words, not the expected value itself, but a function of it is modeled by the linear predictor. An example with the logarithm as the link function and the linear predictor $\beta_0 + \beta_1*x$ is: $$\log(E(y)) = \beta_0 + \beta_1*x$$ In your case, the linear predictor is $\log(\beta_0) + \beta_1*\log({\rm exp}_1) + \beta_2*\log({\rm exp}_2)$. So the equation for your model becomes: $$\log(E(y)) = \log(\beta_0) + \beta_1*\log({\rm exp}_1) + \beta_2*\log({\rm exp}_2)$$ I think this is a bit weird and I would argue that possibly that's not the model you are supposed to fit. Anyway, to fit this model with R, the code should look like this: model <- glm(formula = Y ~ log(exp1) + log(exp2), family = poisson(link="log"), data = CSV_table) The only thing you have to take care of after running the model is to take the exponential function of the intercept, if you want to write the intercept as a log. A good book if you want to learn about the GLM and categorical data analysis in general is the one by Agresti (2007). References: Agresti, A. (1996). An introduction to categorical data analysis (Vol. 135). New York: Wiley.
Fitting a Generalized Linear Model (GLM) in R There are three components to the GLM: an outcome variable, a linear predictor and a link function. The link function in the GLM relates the expected value of the outcome variable to the linear predic
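As a quick sanity check of the model above, here is a hedged sketch on simulated data; the names exp1 and exp2 follow the answer, while the sample size and the "true" coefficient values are invented.

set.seed(42)
n <- 200
exp1 <- runif(n, 1, 10); exp2 <- runif(n, 1, 10)
b0 <- 2; b1 <- 0.7; b2 <- -0.3                      # made-up "true" values
Y  <- rpois(n, lambda = b0 * exp1^b1 * exp2^b2)     # E(Y) = b0 * exp1^b1 * exp2^b2

fit <- glm(Y ~ log(exp1) + log(exp2), family = poisson(link = "log"))
coef(fit)            # intercept estimates log(b0); slopes estimate b1 and b2
exp(coef(fit)[1])    # exponentiate the intercept to recover b0 (about 2 here)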
49,451
Testing for significance between means, having one normal distributed sample and one non normal distributed
If you are 100% sure that the two samples are drawn from populations with different distributions (one Gaussian, one not), are you sure you need any statistical test? You are already sure that the two populations are different. Isn't that enough? Does it really help to test for differences in means or medians? (The answer, of course, depends on your scientific goals, which were not part of the original question.)
Testing for significance between means, having one normal distributed sample and one non normal dist
If you are 100% sure that the two samples are drawn from populations with different distributions (one Gaussian, one not), are you sure you need any statistical test? You are already sure that the two
Testing for significance between means, having one normal distributed sample and one non normal distributed If you are 100% sure that the two samples are drawn from populations with different distributions (one Gaussian, one not), are you sure you need any statistical test? You are already sure that the two populations are different. Isn't that enough? Does it really help to test for differences in means or medians? (The answer, of course, depends on your scientific goals, which were not part of the original question.)
Testing for significance between means, having one normal distributed sample and one non normal dist If you are 100% sure that the two samples are drawn from populations with different distributions (one Gaussian, one not), are you sure you need any statistical test? You are already sure that the two
49,452
Testing for significance between means, having one normal distributed sample and one non normal distributed
You probably want the Wilcoxon Rank Sum test, also called the Mann-Whitney U, here. The Signed Rank test is for paired samples, so it is not appropriate in your case. Your choice is basically between a t-test and Wilcoxon, and it comes down to a balance between two considerations: on the one hand, the t-test is more powerful, particularly in small samples (which you have), but it does assume normality; on the other hand, Wilcoxon is more powerful when the data are far from normality, and is asymptotically close to the t-test in power (but you are far from the asymptote!). So the first question you need to ask is: how non-normal are we talking? If we are talking seriously bad*, e.g. clearly bimodal (or multimodal!), or massively skewed, then the t-test can probably be ruled out. On the other hand, as a general rule of thumb, if it looks bell shaped, then the t-test is probably OK. Arguably, the proper way to proceed is to do a power study for your particular set-up before you handle the real data too much. Presuming you have some vague idea of the shape of the non-normal distribution*, you should generate pretend samples from your weird distribution and your normal distribution and see how well the t-test and Wilcoxon work on your Monte Carlo data. Don't forget you need to check both false positives and false negatives: run the simulation once where you make the null hypothesis true (i.e. the samples have the same mean) and again where the means differ by the approximate effect size you are looking for (or the effect size that would be "materially interesting" to you). *I am assuming you must either have some a priori reason to believe that one sample is non-normal, or it is absolutely hideous - otherwise it would be almost impossible to spot much deviation from normal with only 20 data points.
Testing for significance between means, having one normal distributed sample and one non normal dist
You probably want the The Wilcoxon Rank Sum, also called Mann-Whitney U here. The Signed Rank Test is for paired samples, so not appropriate in your case. Your choice is basically between a t-test or
Testing for significance between means, having one normal distributed sample and one non normal distributed You probably want the The Wilcoxon Rank Sum, also called Mann-Whitney U here. The Signed Rank Test is for paired samples, so not appropriate in your case. Your choice is basically between a t-test or Wilcoxon, and come down to a balance between On the one hand, t-test is more powerful particularly in small samples (which you have), but does assume normality On the other hand, Wilcoxon is more powerful when far from normality, and is asymptotically close to the t-test in power (but you are far from the asymptote!) So the first question you need to ask is, how non-normal are we talking? If we are talking seriously bad*, e.g. clearly bimodal (or multimodal!), or massive skewed, then the t-test can probably be ruled out. On the other hand, as a general rule of thumb, if it looks bell shaped, then t-test is probably ok. Arguably, the proper way to proceed, is to do a study of power for your particular set up before you handle the real data too much. Presuming you have some vague idea of the shape of the non-normal distribution* you should generate pretend samples from your weird distribution and your normal distribution and see how well the t-test and Wilcoxon work on your Monte Carlo data. Don't forget you need to check both False Positives and False Negatives. So run once where you make the null hypothesis true (i.e. the samples have the same mean) and another where they differ by the approximate effect size you are looking for (or the effect size that would be "materially interesting" to you). *I am assuming you must either have some a priori reason to believe that one sample is non-normal, or it is absolutely hideous - otherwise it would be almost impossible to spot much deviation from normal with only 20 data points.
Testing for significance between means, having one normal distributed sample and one non normal dist You probably want the The Wilcoxon Rank Sum, also called Mann-Whitney U here. The Signed Rank Test is for paired samples, so not appropriate in your case. Your choice is basically between a t-test or
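The Monte Carlo study suggested in the answer above can be sketched in a few lines of R. Everything concrete here is an assumption made for illustration: the non-normal group is taken to be a centred log-normal, n = 20 per group, a mean shift of 1 under the alternative, and a 5% cut-off.

set.seed(123)
n <- 20; n_sim <- 2000; shift <- 1

one_run <- function(delta) {
  x <- rnorm(n)                                        # the "normal" group
  y <- exp(rnorm(n, sd = 0.8)) - exp(0.8^2 / 2) + delta  # skewed group with mean = delta
  c(t = t.test(x, y)$p.value, w = wilcox.test(x, y)$p.value)
}

null_p <- replicate(n_sim, one_run(0))       # means equal
alt_p  <- replicate(n_sim, one_run(shift))   # means differ by 'shift'

rowMeans(null_p < 0.05)   # rejection rate when the means are equal (t vs Wilcoxon)
rowMeans(alt_p  < 0.05)   # rejection rate under the shift, i.e. power (t vs Wilcoxon)

Note that under "means equal" the two groups still have different shapes, so the Wilcoxon rejection rate reflects sensitivity to any distributional difference, not only a difference in means; that caveat is exactly the point raised in the next answer.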
49,453
Testing for significance between means, having one normal distributed sample and one non normal distributed
If data in the treatment group is not normal while the control group is it sounds like the treatment may only be affecting a subset of the sample or having variable levels of effect. Comparing means under such circumstances would be losing out on this information. You should attempt to offer explanations for why this change of distribution occurred rather than only comparing means. The rank tests assume that both groups come from the same shape distribution. If you believe the distributions are different the tests are not useful for your purposes. Let us take an example of what can happen with the U-test. We will make our control group come from a normal distribution with mean=0. Meanwhile the treatment will have negative effects on half the subjects and positive effects on the other half. So the treatment group will come from two normal distributions. The first with mean=-5, the second with mean=5. All distributions have sd=1 and both groups have sample size=100. Red shows the treatment group while blue shows the control group: Results of doing a U-test (which is also called the Wilcoxon test): Wilcoxon rank sum test with continuity correction data: a and b W = 4999, p-value = 0.999 alternative hypothesis: true location shift is not equal to 0 We can see it returns "not significant". Would you really want to conclude the treatment had no effect? R code for generating the above: ##Generate Data control<-rnorm(100,0,1) # create control data treatment<-c(rnorm(50,-5,1),rnorm(50,5,1)) # create treatment data ##Plot data # Get min/max values (for plotting) min.val<-min(control,treatment) max.val<-max(control,treatment) # make plots hist(treatment, breaks=seq(min.val-.1,max.val+.1,.5), col="Red", xlab="Value", ylim=c(0,20), main="Results" ) hist(control, add=T, breaks=seq(min.val-.1,max.val+.1,.5),col="Blue") ##perform U-test wilcox.test(treatment,control)
Testing for significance between means, having one normal distributed sample and one non normal dist
If data in the treatment group is not normal while the control group is it sounds like the treatment may only be affecting a subset of the sample or having variable levels of effect. Comparing means u
Testing for significance between means, having one normal distributed sample and one non normal distributed If data in the treatment group is not normal while the control group is it sounds like the treatment may only be affecting a subset of the sample or having variable levels of effect. Comparing means under such circumstances would be losing out on this information. You should attempt to offer explanations for why this change of distribution occurred rather than only comparing means. The rank tests assume that both groups come from the same shape distribution. If you believe the distributions are different the tests are not useful for your purposes. Let us take an example of what can happen with the U-test. We will make our control group come from a normal distribution with mean=0. Meanwhile the treatment will have negative effects on half the subjects and positive effects on the other half. So the treatment group will come from two normal distributions. The first with mean=-5, the second with mean=5. All distributions have sd=1 and both groups have sample size=100. Red shows the treatment group while blue shows the control group: Results of doing a U-test (which is also called the Wilcoxon test): Wilcoxon rank sum test with continuity correction data: a and b W = 4999, p-value = 0.999 alternative hypothesis: true location shift is not equal to 0 We can see it returns "not significant". Would you really want to conclude the treatment had no effect? R code for generating the above: ##Generate Data control<-rnorm(100,0,1) # create control data treatment<-c(rnorm(50,-5,1),rnorm(50,5,1)) # create treatment data ##Plot data # Get min/max values (for plotting) min.val<-min(control,treatment) max.val<-max(control,treatment) # make plots hist(treatment, breaks=seq(min.val-.1,max.val+.1,.5), col="Red", xlab="Value", ylim=c(0,20), main="Results" ) hist(control, add=T, breaks=seq(min.val-.1,max.val+.1,.5),col="Blue") ##perform U-test wilcox.test(treatment,control)
Testing for significance between means, having one normal distributed sample and one non normal dist If data in the treatment group is not normal while the control group is it sounds like the treatment may only be affecting a subset of the sample or having variable levels of effect. Comparing means u
49,454
Testing for significance between means, having one normal distributed sample and one non normal distributed
I think that in such a case, where one sample is normally distributed and the other is not, it is more accurate to transform the data so that both samples are approximately normal and then compare the means with a t-test, since the t-test is more powerful for small samples. Otherwise we can use a non-parametric test, chosen according to whether the samples are independent or dependent.
Testing for significance between means, having one normal distributed sample and one non normal dist
I think in such case when we have two samples one normally distributed and another was not it is more accurate to make transformation for data to get normal distribution for two samples and compare th
Testing for significance between means, having one normal distributed sample and one non normal distributed I think in such case when we have two samples one normally distributed and another was not it is more accurate to make transformation for data to get normal distribution for two samples and compare the means by t-test. As t-test is more powerful for small samples. Otherwise we can used non parametric test depend on the type of samples independent or dependent.
Testing for significance between means, having one normal distributed sample and one non normal dist I think in such case when we have two samples one normally distributed and another was not it is more accurate to make transformation for data to get normal distribution for two samples and compare th
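If one follows the advice above, a log transform is a common choice for right-skewed positive data. The sketch below is purely illustrative (simulated samples, invented parameters); note that after transforming, the t-test compares means on the log scale (roughly, geometric means), which may or may not be what you want.

set.seed(7)
x <- rnorm(20, mean = 2, sd = 0.4)              # roughly normal, positive sample
y <- rlnorm(20, meanlog = 0.8, sdlog = 0.5)     # right-skewed sample

shapiro.test(y)$p.value        # skewed on the original scale
shapiro.test(log(y))$p.value   # closer to normal after a log transform

t.test(log(x), log(y))         # compare the two groups on the log scale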
49,455
Correct definition of number of parameters $K$ in Akaike Information Criterion
This is how the original 1974 paper by Hirotugu Akaike defines the AIC: AIC = (-2)log(maximum likelihood) + 2(number of independently adjusted parameters within the model) The error term is not a parameter which you're independently trying to adjust, but the intercept is (e.g. your slope might be zero and the data best fit by a horizontal line). The correct answer for your simple univariate regression is $K=2$ (intercept and slope).
Correct definition of number of parameters $K$ in Akaike Information Criterion
This is how the original 1974 paper by Hirotugu Akaike defines the AIC: AIC = (-2)log(maximum likelihood) + 2(number of independently adjusted parameters within the model) The error term is not a pa
Correct definition of number of parameters $K$ in Akaike Information Criterion This is how the original 1974 paper by Hirotugu Akaike defines the AIC: AIC = (-2)log(maximum likelihood) + 2(number of independently adjusted parameters within the model) The error term is not a parameter which you're independently trying to adjust, but the intercept is (e.g. your slope might be zero and the data best fit by a horizontal line). The correct answer for your simple univariate regression is $K=2$ (intercept and slope).
Correct definition of number of parameters $K$ in Akaike Information Criterion This is how the original 1974 paper by Hirotugu Akaike defines the AIC: AIC = (-2)log(maximum likelihood) + 2(number of independently adjusted parameters within the model) The error term is not a pa
49,456
Interpreting Two-way repeated measures ANOVA results: Post-hoc tests allowed without significant interaction?
Well, this seems like an old problem, but I'm still tempted to share my two cents. I am tempted to included post-hoc comparisons to compare treatment levels at each time interval to show that the difference between groups disappears on day 11 I don't think you should do this. After all, ANOVA said that the interactions are not significant. The only thing I see in plot (d) is that on day 0, there does not seem to be a difference between Treatment and Control. But you probably already knew this. Just report the main effects, and file the rest under "Speculations".
Interpreting Two-way repeated measures ANOVA results: Post-hoc tests allowed without significant int
Well, this seems like an old problem, but I'm still tempted to share my two cents. I am tempted to included post-hoc comparisons to compare treatment levels at each time interval to show that the dif
Interpreting Two-way repeated measures ANOVA results: Post-hoc tests allowed without significant interaction? Well, this seems like an old problem, but I'm still tempted to share my two cents. I am tempted to included post-hoc comparisons to compare treatment levels at each time interval to show that the difference between groups disappears on day 11 I don't think you should do this. After all, ANOVA said that the interactions are not significant. The only thing I see in plot (d) is that on day 0, there does not seem to be a difference between Treatment and Control. But you probably already knew this. Just report the main effects, and file the rest under "Speculations".
Interpreting Two-way repeated measures ANOVA results: Post-hoc tests allowed without significant int Well, this seems like an old problem, but I'm still tempted to share my two cents. I am tempted to included post-hoc comparisons to compare treatment levels at each time interval to show that the dif
49,457
Testing for Poisson process
If you like Python / numpy / matplotlib, here is a small example demonstrating Remark 6.3: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> import scipy.stats # interval between two events is distributed as an exponential >>> delta_t = scipy.stats.expon.rvs(size=10000) >>> t = np.cumsum(delta_t) >>> plt.hist(t/t.max(), 200) >>> plt.show() # see how much uniform it is # perform the ks test (second value returned is the p-value) >>> scipy.stats.kstest(t/t.max(), 'uniform')
Testing for Poisson process
If you like Python / numpy / matplotlib, here is a small example demonstrating Remark 6.3: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> import scipy.stats # interval between two ev
Testing for Poisson process If you like Python / numpy / matplotlib, here is a small example demonstrating Remark 6.3: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> import scipy.stats # interval between two events is distributed as an exponential >>> delta_t = scipy.stats.expon.rvs(size=10000) >>> t = np.cumsum(delta_t) >>> plt.hist(t/t.max(), 200) >>> plt.show() # see how much uniform it is # perform the ks test (second value returned is the p-value) >>> scipy.stats.kstest(t/t.max(), 'uniform')
Testing for Poisson process If you like Python / numpy / matplotlib, here is a small example demonstrating Remark 6.3: >>> import numpy as np >>> import matplotlib.pyplot as plt >>> import scipy.stats # interval between two ev
49,458
Experiment design question
The real question is: do you have a hypothesis? In Milgram's study, the real important information is really descriptive (e.g., how many people made it to 450 volts) rather than hypothesis driven. So, in short, the answer is no, you don't always need a statistical hypothesis beforehand. Note that many people may say that an experiment is inherently designed to test a hypothesis; consequently, this design wouldn't be an "experiment" at all. On the other hand, if you have any type of hypothesis (e.g., majority of participants will make it to 450 volts, participants will go higher in voltage than psychiatrists would expect, men are more likely to do X), then a statistical hypothesis is warranted.
Experiment design question
The real question is: do you have a hypothesis? In Milgram's study, the real important information is really descriptive (e.g., how many people made it to 450 volts) rather than hypothesis driven. So,
Experiment design question The real question is: do you have a hypothesis? In Milgram's study, the real important information is really descriptive (e.g., how many people made it to 450 volts) rather than hypothesis driven. So, in short, the answer is no, you don't always need a statistical hypothesis beforehand. Note that many people may say that an experiment is inherently designed to test a hypothesis; consequently, this design wouldn't be an "experiment" at all. On the other hand, if you have any type of hypothesis (e.g., majority of participants will make it to 450 volts, participants will go higher in voltage than psychiatrists would expect, men are more likely to do X), then a statistical hypothesis is warranted.
Experiment design question The real question is: do you have a hypothesis? In Milgram's study, the real important information is really descriptive (e.g., how many people made it to 450 volts) rather than hypothesis driven. So,
49,459
Resources for matrix calculus for optimization
I think a better book is Matrix Calculus by the same Jan Magnus (with H. Neudecker). It goes a little deeper into theory than Matrix Algebra, which is essentially a set of exercises (very good ones, but still... little room for the proofs and for discussion of where the math applies). I heard that the first edition was printed on very low-quality paper; I don't really know about the second edition, as what I have is a Russian edition that I myself helped translate.
Resources for matrix calculus for optimization
I think a better book is Matrix Calculus by the same Jan Magnus (with H. Neudecker). It goes a little deeper into theory than the Matrix Algebra which is essentially a set of exercises (very good ones
Resources for matrix calculus for optimization I think a better book is Matrix Calculus by the same Jan Magnus (with H. Neudecker). It goes a little deeper into theory than the Matrix Algebra which is essentially a set of exercises (very good ones, but still... little room for the proofs and discussion of where the math stuff applies.) I heard that the first edition was printed on very low quality paper; don't know about the second edition really as what I have is a Russian edition that I myself helped translating.
Resources for matrix calculus for optimization I think a better book is Matrix Calculus by the same Jan Magnus (with H. Neudecker). It goes a little deeper into theory than the Matrix Algebra which is essentially a set of exercises (very good ones
49,460
Resources for matrix calculus for optimization
A good paper to read is Dwyer (1967) "Some Applications of Matrix Derivatives in Multivariate Analysis". It covers matrix calculus from the perspective of statistical applications. In particular it contains two tables of identities which makes it a useful reference to have on hand. The paper is available behind a pay wall here or there is currently a freely accessible version on bitbucket.org here although I am unsure of the stability of that link.
Resources for matrix calculus for optimization
A good paper to read is Dwyer (1967) "Some Applications of Matrix Derivatives in Multivariate Analysis". It covers matrix calculus from the perspective of statistical applications. In particular it co
Resources for matrix calculus for optimization A good paper to read is Dwyer (1967) "Some Applications of Matrix Derivatives in Multivariate Analysis". It covers matrix calculus from the perspective of statistical applications. In particular it contains two tables of identities which makes it a useful reference to have on hand. The paper is available behind a pay wall here or there is currently a freely accessible version on bitbucket.org here although I am unsure of the stability of that link.
Resources for matrix calculus for optimization A good paper to read is Dwyer (1967) "Some Applications of Matrix Derivatives in Multivariate Analysis". It covers matrix calculus from the perspective of statistical applications. In particular it co
49,461
Regression with correlated explanatory variables
In the example you give it would make no sense to talk about only two of the explanatory variables ($x_1$, $x_2$) influencing a dependent variable ($y$), as the third ($x_3$) is derived from them. For e.g. the linear model with interactions, the fitted value of $y$ is given by $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 $$ where $\beta$ are the coefficients you want to estimate. Substituting e.g. $x_3 = x_1 x_2$ gives $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \beta_{12} x_1 x_2 + \beta_{13} x_1^2 x_2 + \beta_{23} x_1 x_2^2 $$ so $\beta_3$ & $\beta_{12}$ are coefficients for the same term, & you can't separately estimate them. You could fit $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{3}^* x_3 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 $$ if this model is of interest; & the multicollinearity is merely structural. (Note though that it forces the relation of the dependent variable to both $x_1$ & $x_2$ to be linear when $x_3=0$.) In general if an explanatory variable is a function of the other explanatory variables it's simpler to omit it—you can always rewrite the fitted model to put it back in. Of course there's no guarantee that the dependent variable is well fitted by a simple additive model. But when you say you measured three correlated variables it makes me doubt that the correlation is perfect, as in your example. If it's not then there are plenty of questions on this site about how to assess the effects of multicollinearity & how to deal with it, & the 'multicollinearity' tag will help you find them.
Regression with correlated explanatory variables
In the example you give it would make no sense to talk about only two of the explanatory variables ($x_1$, $x_2$) influencing a dependent variable ($y$), as the third ($x_3$) is derived from them. For
Regression with correlated explanatory variables In the example you give it would make no sense to talk about only two of the explanatory variables ($x_1$, $x_2$) influencing a dependent variable ($y$), as the third ($x_3$) is derived from them. For e.g. the linear model with interactions, the fitted value of $y$ is given by $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 $$ where $\beta$ are the coefficients you want to estimate. Substituting e.g. $x_3 = x_1 x_2$ gives $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \beta_{12} x_1 x_2 + \beta_{13} x_1^2 x_2 + \beta_{23} x_1 x_2^2 $$ so $\beta_3$ & $\beta_{12}$ are coefficients for the same term, & you can't separately estimate them. You could fit $$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{3}^* x_3 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 $$ if this model is of interest; & the multicollinearity is merely structural. (Note though that it forces the relation of the dependent variable to both $x_1$ & $x_2$ to be linear when $x_3=0$.) In general if an explanatory variable is a function of the other explanatory variables it's simpler to omit it—you can always rewrite the fitted model to put it back in. Of course there's no guarantee that the dependent variable is well fitted by a simple additive model. But when you say you measured three correlated variables it makes me doubt that the correlation is perfect, as in your example. If it's not then there are plenty of questions on this site about how to assess the effects of multicollinearity & how to deal with it, & the 'multicollinearity' tag will help you find them.
Regression with correlated explanatory variables In the example you give it would make no sense to talk about only two of the explanatory variables ($x_1$, $x_2$) influencing a dependent variable ($y$), as the third ($x_3$) is derived from them. For
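A quick R illustration of the structural multicollinearity described above: when $x_3 = x_1 x_2$, lm() cannot separate the $x_3$ term from the $x_1:x_2$ interaction and returns NA for the aliased coefficient. The data are simulated just for illustration.

set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
x3 <- x1 * x2                                   # x3 is exactly x1 * x2
y  <- 1 + 0.5*x1 - 0.3*x2 + 0.8*x3 + rnorm(n)

fit <- lm(y ~ x1 * x2 + x3)   # main effects, interaction, and x3
coef(fit)                     # the x1:x2 coefficient comes back NA (aliased with x3)
alias(fit)                    # shows that x1:x2 and x3 are completely confounded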
49,462
Regression with correlated explanatory variables
If you have some knowledge of a hypothetical relation (e.g., from the literature), then you might be interested in looking at non-linear as well as linear regression models. Standing on the shoulders of those who have worked on this stuff before you can give you some great vision. If you are content to limit yourself to additive linear regression models, then I suggest you start with two independent variables and their interaction: lm(dependent.variable ~ exp1 + exp2 + exp1:exp2) If there are signs of non-linearity in this relation, you may wish to explore the nature of that non-linearity with a generalized additive model. library(mgcv) gam(dependent.variable ~ s(exp1) + s(exp2) + s(exp3)) Note that this last line of code will give you an error in R with your example data set because there are so few observations. If you get the same error with your full data set, use the argument k= in the s() function to limit the degrees of freedom used in each smooth.
Regression with correlated explanatory variables
If you have some knowledge of a hypothetical relation (e.g., from the literature), then you might be interested in looking at non-linear as well as linear regression models. Standing on the shoulders
Regression with correlated explanatory variables If you have some knowledge of a hypothetical relation (e.g., from the literature), then you might be interested in looking at non-linear as well as linear regression models. Standing on the shoulders of those who have worked on this stuff before you can give you some great vision. If you are content to limit yourself to additive linear regression models, then I suggest you start with two independent variables and their interaction: lm(dependent.variable ~ exp1 + exp2 + exp1:exp2) If there are signs of non-linearity in this relation, you may wish to explore the nature of that non-linearity with a generalized additive model. library(mgcv) gam(dependent.variable ~ s(exp1) + s(exp2) + s(exp3)) Note that this last line of code will give you an error in R with your example data set because there are so few observations. If you get the same error with your full data set, use the argument k= in the s() function to limit the degrees of freedom used in each smooth.
Regression with correlated explanatory variables If you have some knowledge of a hypothetical relation (e.g., from the literature), then you might be interested in looking at non-linear as well as linear regression models. Standing on the shoulders
49,463
Pooled logistic regression with irregular intervals
You can model the time trend in the log odds of the probability that $Y=1$ using a spline function (and using other methods). I do not believe that the time points need to be equally spaced, but rather that they be discrete and have roughly the same schedule for every subject.
Pooled logistic regression with irregular intervals
You can model the time trend in the log odds of the probability that $Y=1$ using a spline function (and using other methods). I do not believe that the time points need to be equally spaced, but rath
Pooled logistic regression with irregular intervals You can model the time trend in the log odds of the probability that $Y=1$ using a spline function (and using other methods). I do not believe that the time points need to be equally spaced, but rather that they be discrete and have roughly the same schedule for every subject.
Pooled logistic regression with irregular intervals You can model the time trend in the log odds of the probability that $Y=1$ using a spline function (and using other methods). I do not believe that the time points need to be equally spaced, but rath
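As a rough sketch of the spline idea above, one could fit a pooled logistic regression with a natural cubic spline for time using splines::ns. The data below are simulated, the visit schedule, covariate x and coefficients are invented, and the sketch ignores details such as removing person-intervals after the event occurs.

library(splines)

set.seed(1)
sched <- c(0, 1, 3, 7, 12)        # unequally spaced but common visit schedule
d <- data.frame(id   = rep(1:200, each = length(sched)),
                time = rep(sched, times = 200),
                x    = rnorm(200 * length(sched)))
d$event <- rbinom(nrow(d), 1, plogis(-2 + 0.08 * d$time + 0.5 * d$x))

## Pooled logistic regression with a flexible (spline) time trend in the log odds
fit <- glm(event ~ ns(time, df = 3) + x, family = binomial, data = d)
summary(fit)$coefficients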
49,464
Comparing multiple incidence rates
If you only have the four data points, I think the best way to do this is with a G^2 test. You want to start by assuming the frequency is a binomial distribution (every person in the population has the condition with probability p). And your null hypothesis is that p_1=p_2=p_3=p_4. So the overall mean is (1800+539+490+301)/(2.9m+1.327m+.88m+.268m)=0.000582. Your expected cases in each group are 1688.7, 772.7, 512.4, and 156.1. You can calculate the G^2 statistic, but the answer I get is 192.8, which is chi-squared(3) under the null hypothesis. This is a very low p-value, so you'd reject the null and say that yes, you can be quite confident that the incidence is different between these locations. In particular that last location is considerably higher than the other three, so that is contributing heavily to the low p-value. You can repeat this analysis for the other three and you may get something a bit different, but that is an exercise to the reader :-) HTH ETA: the DF is 3, not 1, as Yves pointed out in the comments.
Comparing multiple incidence rates
If you only have the four data points, I think the best way to do this is with a G^2 test. You want to start by assuming the frequency is a binomial distribution (every person in the population has th
Comparing multiple incidence rates If you only have the four data points, I think the best way to do this is with a G^2 test. You want to start by assuming the frequency is a binomial distribution (every person in the population has the condition with probability p). And your null hypothesis is that p_1=p_2=p_3=p_4. So the overall mean is (1800+539+490+301)/(2.9m+1.327m+.88m+.268m)=0.000582. Your expected cases in each group are 1688.7, 772.7, 512.4, and 156.1. You can calculate the G^2 statistic, but the answer I get is 192.8, which is chi-squared(3) under the null hypothesis. This is a very low p-value, so you'd reject the null and say that yes, you can be quite confident that the incidence is different between these locations. In particular that last location is considerably higher than the other three, so that is contributing heavily to the low p-value. You can repeat this analysis for the other three and you may get something a bit different, but that is an exercise to the reader :-) HTH ETA: the DF is 3, not 1, as Yves pointed out in the comments.
Comparing multiple incidence rates If you only have the four data points, I think the best way to do this is with a G^2 test. You want to start by assuming the frequency is a binomial distribution (every person in the population has th
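For reference, the G^2 calculation described above can be reproduced in R along these lines, using the case counts and populations quoted in the answer; the statistic should come out close to the 192.8 quoted.

cases <- c(1800, 539, 490, 301)
pop   <- c(2.9e6, 1.327e6, 0.88e6, 0.268e6)

p.hat    <- sum(cases) / sum(pop)   # common rate under the null, about 0.000582
expected <- p.hat * pop             # about 1688.7, 772.7, 512.4, 156.1

## G^2 over the full 4x2 table (cases / non-cases)
obs <- cbind(cases, pop - cases)
exp <- cbind(expected, pop - expected)
G2  <- 2 * sum(obs * log(obs / exp))
G2                                        # about 193
pchisq(G2, df = 3, lower.tail = FALSE)    # p-value on 3 df, essentially zero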
49,465
Comparing multiple incidence rates
Given the limited data you have to work with, you may only be able to address this question by incorporating additional assumptions (or data?) regarding the process behind these incidence rates, then doing some manual modeling. Any statistical technique you use will be implicitly making such assumptions for you under the hood, so better to call those out and structure your analysis around them. You have observations of discrete incidence counts. The forms of some common discrete distributions encode the following assumptions: Poisson: variance equal to the mean; binomial: variance smaller than the mean; negative binomial: variance greater than the mean. You've already started down this road by ruling out a Poisson model for the underlying process, saying variance = mean is not reasonable. If the process is a contagion model then it might be very reasonable to assume variance is greater than the mean, i.e. a negative binomial distribution. The next question is fitting the parameters of the selected model and then making your comparisons. You could approach this in a couple of ways: (1) Empirically with your four data points: calculate the mean and variance, then fit the distribution with old-fashioned algebra using the distribution's mean and variance formulas. (You may need to standardize your data, for the same reasons you'd use an offset in the glm.) Then calculate the probabilities of all 4 data points (and perhaps different combinations of 3) using the fitted model(s); lower probability suggests the processes generating the incidence rates are not equivalent. (2) Use data from existing literature/research to fit the model, then test the probability of your incidence data occurring under that model. Poor fit for one of the data points could suggest that its incidence rate deviates from the standard process in some way (or a poor model, of course, if most or all do not fit well). Such results are hardly conclusive (nothing can be with only this data, imho), but just as importantly they can inform dialogue and further research into the process you're modeling.
Comparing multiple incidence rates
Given the limited data you have to work with you may only be able to address this question by incorporating additional assumptions (or data?) regarding the process behind these incidence rates, then d
Comparing multiple incidence rates Given the limited data you have to work with you may only be able to address this question by incorporating additional assumptions (or data?) regarding the process behind these incidence rates, then doing some manual modeling. Any statistical technique you use will be implicitly making such assumptions for you under the hood, so better to call those out and structure your analysis around them. You have observations of discrete incidence counts. The forms of some common discrete distributions encode the following assumptions: Poisson: variance is equal to the mean Binomial: variance is smaller than the mean Negative Binomial: variance is greater than the mean You've already started down this road by ruling out a poisson model for the underlying process, saying variance = mean is not reasonable. If the process is a contagion model then it might be very reasonable to assume variance is greater than mean, so a negative binomial distribution. The next question is fitting the parameters of the selected model and then making your comparisons. You could approach this in a couple of ways: Empirically with your four data points - calculate mean and variance, then fit to distribution with old-fashioned algebra using distribution's mean and variance formulas. (You may need to standardize your data, for same reasons you'd use an offset in the glm.) Then calculate the probabilities of all 4 data points (and perhaps different combinations of 3) using the fitted model(s); lower probability suggests the process generating the incidence rates are not equivalent. Use data from existing literature/research to fit the model; then test the probability of your incidence data occurring under that model. Poor fit for one of the data points could suggest that it's incidence rate deviates from the standard process in some way (or a poor model of course if most or all do not fit well). Such results are hardly conclusive (nothing can be with only this data imho), but just as importantly it can inform dialogue and further research into the process you're modeling.
Comparing multiple incidence rates Given the limited data you have to work with you may only be able to address this question by incorporating additional assumptions (or data?) regarding the process behind these incidence rates, then d
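As a quick, rough screen for the "variance greater than the mean" question discussed above, one can compute a Pearson dispersion statistic against a single common-rate Poisson model. This is only a sketch: with four data points it mixes genuine overdispersion with plain rate differences between locations.

cases <- c(1800, 539, 490, 301)
pop   <- c(2.9e6, 1.327e6, 0.88e6, 0.268e6)

expected <- sum(cases) / sum(pop) * pop   # counts expected under one common rate

## Pearson dispersion statistic: around 1 for a common-rate Poisson,
## much larger than 1 here, i.e. the counts vary far more than that model predicts
X2 <- sum((cases - expected)^2 / expected)
X2 / (length(cases) - 1)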
49,466
Hausman test - wrong conclusion
Statistical significance doesn't mean the model is good. In fact, in this case, it's probably a sign that it's bad. If your model is misspecified your estimate of the model variance could be wrong. Statistical significance depends on that estimate. If that estimate is wrong, you will get erroneous t-statistics and therefore p-values and therefore, possibly, erroneously significant coefficients at your desired confidence level. I say go with what the Hausman test says.
Hausman test - wrong conclusion
Statistical significance doesn't mean the model is good. In fact, in this case, it's probably a sign that it's bad. If your model is misspecified your estimate of the model variance could be wrong. St
Hausman test - wrong conclusion Statistical significance doesn't mean the model is good. In fact, in this case, it's probably a sign that it's bad. If your model is misspecified your estimate of the model variance could be wrong. Statistical significance depends on that estimate. If that estimate is wrong, you will get erroneous t-statistics and therefore p-values and therefore, possibly, erroneously significant coefficients at your desired confidence level. I say go with what the Hausman test says.
Hausman test - wrong conclusion Statistical significance doesn't mean the model is good. In fact, in this case, it's probably a sign that it's bad. If your model is misspecified your estimate of the model variance could be wrong. St
49,467
Hausman test - wrong conclusion
You should definitely utilize the result from the Hausman test. Remember what the test does: it compares a consistent but less efficient estimator (fixed effects) to a more efficient estimator that is only consistent under the null (random effects), $$H = (\beta_{FE}-\beta_{RE})'[Var(\beta_{FE})-Var(\beta_{RE})]^{-1}(\beta_{FE}-\beta_{RE})$$ where the null of the Hausman test is that the coefficients from fixed and random effects are not systematically different. If random effects was consistent then you might expect that its coefficient estimates will not be significantly different from the fixed effects ones. Rejecting this null hypothesis means that it is unlikely for random effects to be consistent, hence fixed effects would be the better choice. Statistical significance is not a good criterion to choose because random effects will always be more efficient than fixed effects, i.e. its coefficients' standard errors are going to be smaller. This is because random effects is a matrix weighted average of the between and the within variation in your data (fixed effects only uses the within variation and does not consider the information contained in the between variation, hence it is less efficient). For this reason you will be better off with choosing fixed over random effects if you have indication that random effects is not going to be consistent. A very precise but wrong estimate is probably less valuable to you than a less precise but more correct estimate.
Hausman test - wrong conclusion
You should definitely utilize the result from the Hausman test. Remember what the test does: it compares a consistent but less efficient estimator (fixed effects) to a more efficient estimator that is
Hausman test - wrong conclusion You should definitely utilize the result from the Hausman test. Remember what the test does: it compares a consistent but less efficient estimator (fixed effects) to a more efficient estimator that is only consistent under the null (random effects), $$H = (\beta_{FE}-\beta_{RE})'[Var(\beta_{FE})-Var(\beta_{RE})]^{-1}(\beta_{FE}-\beta_{RE})$$ where the null of the Hausman test is that the coefficients from fixed and random effects are not systematically different. If random effects was consistent then you might expect that its coefficient estimates will not be significantly different from the fixed effects ones. Rejecting this null hypothesis means that it is unlikely for random effects to be consistent, hence fixed effects would be the better choice. Statistical significance is not a good criterion to choose because random effects will always be more efficient than fixed effects, i.e. its coefficients' standard errors are going to be smaller. This is because random effects is a matrix weighted average of the between and the within variation in your data (fixed effects only uses the within variation and does not consider the information contained in the between variation, hence it is less efficient). For this reason you will be better off with choosing fixed over random effects if you have indication that random effects is not going to be consistent. A very precise but wrong estimate is probably less valuable to you than a less precise but more correct estimate.
Hausman test - wrong conclusion You should definitely utilize the result from the Hausman test. Remember what the test does: it compares a consistent but less efficient estimator (fixed effects) to a more efficient estimator that is
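If this is being done in R, the plm package implements both estimators and the Hausman test; the sketch below uses the Grunfeld panel data shipped with plm purely as a stand-in for the reader's own data.

library(plm)
data("Grunfeld", package = "plm")   # example panel data: firms observed over years

fe <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "within")   # fixed effects
re <- plm(inv ~ value + capital, data = Grunfeld,
          index = c("firm", "year"), model = "random")   # random effects

phtest(fe, re)   # Hausman test: a small p-value favours fixed effects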
49,468
Validating correctness of ranking algorithm
If I understand correctly, what you would like to have is a measure to compare the underlying true ranking $\pi$ and the predicted ranking (i.e., the tournament ranking, or the simulated ranking) $\sigma$, where $\sigma$ is a function of some input parameters. In the statistics literature, there are a number of distance functions for rankings. I will list some of them below. Let $\pi(i)$ and $\sigma(i)$ be the ranks of item $i$ in $\pi$ and $\sigma$, respectively (e.g., $\pi(``\text{John}")=2$ and $\sigma(``\text{John}")=1$ in your example). The Kendall distance is defined as $$ K(\pi,\sigma) = \# \lbrace \; (i,j) \, \vert \, \pi(i)>\pi(j) \text{ and } \sigma(i)<\sigma(j) \; \rbrace \, . $$ The Spearman distance is defined as $$ S(\pi,\sigma) = \sum_i \left( \pi(i) - \sigma(i)\right)^2 \, . $$ The Spearman footrule distance is defined as $$ F(\pi,\sigma) = \sum_i \lvert \pi(i) - \sigma(i)\rvert \, . $$ These are three widely used distance functions for rankings. You could think of the distance as the loss the predicted ranking suffers w.r.t. the true ranking, so the lower the distance, the better. If you need an accuracy (the larger the better) instead of a loss, these distances can be easily normalized to different types of correlation coefficients for rankings. Regarding the bonus question: I think there are a lot of different ways to do it. For example, after you divide the items in the true rankings into several groups, you can use the C-index to measure the performance of the predicted ranking. In the study of multipartite ranking, C-index is commonly used. It is a kind of extension of AUC (area under the ROC curve). You can check Section 4 of this paper, where a short introduction of C-index is given.
Validating correctness of ranking algorithm
If I understand correctly, what you would like to have is a measure to compare the underlying true ranking $\pi$ and the predicted ranking (i.e., the tournament ranking, or the simulated ranking) $\si
Validating correctness of ranking algorithm If I understand correctly, what you would like to have is a measure to compare the underlying true ranking $\pi$ and the predicted ranking (i.e., the tournament ranking, or the simulated ranking) $\sigma$, where $\sigma$ is a function of some input parameters. In the statistics literature, there are a number of distance functions for rankings. I will list some of them below. Let $\pi(i)$ and $\sigma(i)$ be the ranks of item $i$ in $\pi$ and $\sigma$, respectively (e.g., $\pi(``\text{John}")=2$ and $\sigma(``\text{John}")=1$ in your example). The Kendall distance is defined as $$ K(\pi,\sigma) = \# \lbrace \; (i,j) \, \vert \, \pi(i)>\pi(j) \text{ and } \sigma(i)<\sigma(j) \; \rbrace \, . $$ The Spearman distance is defined as $$ S(\pi,\sigma) = \sum_i \left( \pi(i) - \sigma(i)\right)^2 \, . $$ The Spearman footrule distance is defined as $$ F(\pi,\sigma) = \sum_i \lvert \pi(i) - \sigma(i)\rvert \, . $$ These are three widely used distance functions for rankings. You could think of the distance as the loss the predicted ranking suffers w.r.t. the true ranking, so the lower the distance, the better. If you need an accuracy (the larger the better) instead of a loss, these distances can be easily normalized to different types of correlation coefficients for rankings. Regarding the bonus question: I think there are a lot of different ways to do it. For example, after you divide the items in the true rankings into several groups, you can use the C-index to measure the performance of the predicted ranking. In the study of multipartite ranking, C-index is commonly used. It is a kind of extension of AUC (area under the ROC curve). You can check Section 4 of this paper, where a short introduction of C-index is given.
Validating correctness of ranking algorithm If I understand correctly, what you would like to have is a measure to compare the underlying true ranking $\pi$ and the predicted ranking (i.e., the tournament ranking, or the simulated ranking) $\si
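The three distances defined above are straightforward to compute directly. In the sketch below only John's ranks (pi = 2, sigma = 1) come from the example mentioned in the answer; the other items and their ranks are invented for illustration.

## rank_true plays the role of pi, rank_pred the role of sigma
rank_true <- c(John = 2, A = 1, B = 3, C = 4, D = 5)
rank_pred <- c(John = 1, A = 2, B = 3, C = 5, D = 4)

spearman <- sum((rank_true - rank_pred)^2)      # Spearman distance
footrule <- sum(abs(rank_true - rank_pred))     # Spearman footrule
prs <- combn(names(rank_true), 2)               # all pairs of items
kendall <- sum(apply(prs, 2, function(p)        # count discordant pairs
  (rank_true[p[1]] - rank_true[p[2]]) * (rank_pred[p[1]] - rank_pred[p[2]]) < 0))

c(kendall = kendall, spearman = spearman, footrule = footrule)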
49,469
Validate cluster analysis in R
This is definitely not a question for this site; I flagged it to be migrated to Cross Validated. What do you mean by "do I have to use all these methods"? Because in your code you added all the clustering techniques: k-means, hierarchical, self-organizing maps. But at the beginning of the question you said you wanted to perform hierarchical clustering. Anyway, it's all there in the paper you mention. If I were you I would first do the internal validation. The techniques described usually account for the internal variance of the clusters, meaning that you are aiming to find clusters which are as "homogeneous" as possible. Be aware that these can be affected by the nature of the variables you are using to cluster. If you have only continuous variables then an appropriate choice for the distance on which the hierarchical clustering is based is the Euclidean distance, which fits the idea of internal variation. Equipped with another distance, hierarchical clustering can even cluster mixed-type variables, but the internal variation then becomes more complex. K-means should only be applied to continuous variables, and these internal-validation techniques are often used to find an optimal number for 'k'. In hierarchical clustering the number of clusters is determined by the cut-off height (thinking of a dendrogram), so the number of clusters you can pick is not arbitrary. The stability measures are used to determine how solid your clustering is. This is especially useful to distinguish real patterns from spurious ones, particularly for clustering algorithms that have a random component, for example k-means, which because of its iterative nature depends on the initialization points (the initial k centers from which the algorithm takes off). I have seen stability studies for hierarchical clustering but I have never used them; a quick Google search should shed some light on this. The biological validation they speak of in the paper you cited is a mystery to me.
Validate cluster analysis in R
This is definitely not a question for this site. I flagged it to be migrated to c-v. What do you mean by "do I have to use all these methods"? because in your code you added all the clustering techniq
Validate cluster analysis in R This is definitely not a question for this site. I flagged it to be migrated to c-v. What do you mean by "do I have to use all these methods"? because in your code you added all the clustering techniques: k-means, hierarchical, self organizing maps. But at the beginning of the question you said you wanted to perform hierarchical clustering. Anyways, It's all there in the paper you mention. If I were you I would first do the internal validation. The techniques described usually account for the internal variance of the clusters, meaning that you are aiming to find clusters which are as "homogeneous" as possible. Be wary that these can be affected by the nature of the variables you are using to cluster. If you have only continuous variables then an appropriate choice for the distance on which the hierarchical clustering will be based on would be the euclidean distance which is fitting for the idea of internal variation. Equipped with another distance hierarchical clustering can even cluster mixed type variables but the internal variation here can become more complex. K-means should only be applied on continuous variables so these techniques are often used to find an optimal number for 'k'. In hierarchical clustering the number of clusters is determined by the cut-off height, thinking about a dendrogram, so the number of clusters you can pick is not arbitrary. The stability measures are used to determine how solid your clustering is. This is especially useful to distinguish real patterns from spurious ones especially when it comes to clustering algorithms that have a random nature. For example k-means which because of its iterative nature is dependent of the initialization points (the initial k centers from where the algorithm takes off). I have seen stability studies for hierarchical clustering but I have never used them. A quick google search should probably shed some light on this. The biological validation of which they speak of in the paper you cited is a mystery to me.
Validate cluster analysis in R This is definitely not a question for this site. I flagged it to be migrated to c-v. What do you mean by "do I have to use all these methods"? because in your code you added all the clustering techniq
49,470
Validate cluster analysis in R
Did you try to inspect the warnings with warnings()? Most of the time, multiple warnings during correlation matrix generation are caused by NA cells rather than by 0 cells. Check whether either of the two columns being correlated has absolutely no variation, in which case the correlation coefficient cannot be computed. Another suggestion would be to try just the hierarchical clustering method first and then dive into each method one at a time; this may help you spot the error. Visualizing the data beforehand with a simple corrplot will help you too. Best of luck.
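A small illustrative check along these lines; the data frame dat is made up for the example, with a zero-variance column and a missing value standing in for the problems described:
dat <- data.frame(a = rnorm(20), b = rnorm(20), c = rep(1, 20))  # 'c' has no variation
dat$a[3] <- NA                                                   # one missing cell
sapply(dat, function(col) sd(col, na.rm = TRUE))  # a zero sd flags constant columns
colSums(is.na(dat))                               # missing values per column
cor(dat, use = "pairwise.complete.obs")           # NA entries reveal the culprit columns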
Validate cluster analysis in R
Did you try to get any of the warnings? warnings() Most of the times multiple warnings in the correlation matrix generation are because of the NA cells than because of 0 cells. Check if any of the two
Validate cluster analysis in R Did you try to get any of the warnings? warnings() Most of the times multiple warnings in the correlation matrix generation are because of the NA cells than because of 0 cells. Check if any of the two columns to be correlated have absolutely no variation so that the correlation coefficient can not be generated. Another suggestion would be try just HC method first and then dive into each method one at a time. This may be able to spot your error. Visualizing data beforehand using simple corrplot will help you too. Best luck
Validate cluster analysis in R Did you try to get any of the warnings? warnings() Most of the times multiple warnings in the correlation matrix generation are because of the NA cells than because of 0 cells. Check if any of the two
49,471
Should I re-center variables when looking at moderator effect in men and women separately?
Centring: Centring does not change the significance of the r-square change of your interaction effect. It also will not change the values you get for a simple slopes analysis. Thus, for most purposes it does not matter whether you centre or not. This applies both to the general analysis, and to the subgroup analysis. The main benefit of centring is that it can make the interpretation of the regression coefficients a little easier. If you want to compare these absolute size of these coefficients across males and females, then you should only centre once. Prefer integrated models: A better suggestion is to include gender in your overall multiple regression. For example, if you have DV, IV1, IV2 and gender and you are interested in the IV1 * IV2 interaction for each gender. I'd examine various models such as: DV ~ IV1 + IV2 + gender DV ~ IV1 * IV2 + gender DV ~ IV1 * IV2 + gender * IV1 + gender*IV2 DV ~ IV1 * IV2 * gender If you get a significant gender by something interaction, then you may wish to further explore this using separate analyses, but I'd start with the overall integrated model. Illustrating points about centered predictors: The following code returns the p-value of the r-square change and the final r-square for both an uncentered and three centred versions (global, female centred, male centred) of an interaction effect model. library(MASS) survey <- na.omit(survey) head(survey) x <- survey[, c('Sex', 'Wr.Hnd', 'NW.Hnd', 'Pulse')] names(x) <- c('gender', 'iv1', 'iv2', 'dv') x$scaled_iv1 <- scale(x$iv1, scale=FALSE) x$scaled_iv2 <- scale(x$iv2, scale=FALSE) x$female_scaled_iv1 <- scale(x$iv1, center=mean(x[x$gender == "Female", 'iv1']), scale=FALSE) x$female_scaled_iv2 <- scale(x$iv2, center=mean(x[x$gender == "Female", 'iv2']), scale=FALSE) x$male_scaled_iv1 <- scale(x$iv1, center=mean(x[x$gender == "Male", 'iv1']), scale=FALSE) x$male_scaled_iv2 <- scale(x$iv2, center=mean(x[x$gender == "Male", 'iv2']), scale=FALSE) compare_fits <- function(x) { fit1 <- lm(dv ~ iv1+iv2, x) fit2 <- lm(dv ~ iv1*iv2, x) fit3 <- lm(dv ~ scaled_iv1*scaled_iv2, x) fit4 <- lm(dv ~ male_scaled_iv1*male_scaled_iv2, x) fit5 <- lm(dv ~ female_scaled_iv1*female_scaled_iv2, x) results <- list() results$p_normal <- anova(fit1, fit2)[2,6] results$p_centered <- anova(fit1, fit3)[2,6] results$p_centered_male <- anova(fit1, fit4)[2,6] results$p_centered_female <- anova(fit1, fit5)[2,6] results$rsq_normal <- summary(fit2)$r.squared results$rsq_centered <- summary(fit3)$r.squared results$rsq_centered_male <- summary(fit4)$r.squared results$rsq_centered_female <- summary(fit5)$r.squared unlist(results) } # The following results report p-values and rsq for final model # using normal (i.e., uncentered) and centered predictors compare_fits(x) compare_fits(x[x$gender=='Male', ]) compare_fits(x[x$gender=='Female', ]) The results show how the values do not vary across uncentered and centered analyses. 
> compare_fits(x) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.241816265 0.241816265 0.241816265 0.241816265 0.009982317 rsq_centered rsq_centered_male rsq_centered_female 0.009982317 0.009982317 0.009982317 > compare_fits(x[x$gender=='Male', ]) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.14034102 0.14034102 0.14034102 0.14034102 0.03055692 rsq_centered rsq_centered_male rsq_centered_female 0.03055692 0.03055692 0.03055692 > compare_fits(x[x$gender=='Female', ]) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.5196788 0.5196788 0.5196788 0.5196788 0.0128802 rsq_centered rsq_centered_male rsq_centered_female 0.0128802 0.0128802 0.0128802
Should I re-center variables when looking at moderator effect in men and women separately?
Centring: Centring does not change the significance of the r-square change of your interaction effect. It also will not change the values you get for a simple slopes analysis. Thus, for most purposes
Should I re-center variables when looking at moderator effect in men and women separately? Centring: Centring does not change the significance of the r-square change of your interaction effect. It also will not change the values you get for a simple slopes analysis. Thus, for most purposes it does not matter whether you centre or not. This applies both to the general analysis, and to the subgroup analysis. The main benefit of centring is that it can make the interpretation of the regression coefficients a little easier. If you want to compare these absolute size of these coefficients across males and females, then you should only centre once. Prefer integrated models: A better suggestion is to include gender in your overall multiple regression. For example, if you have DV, IV1, IV2 and gender and you are interested in the IV1 * IV2 interaction for each gender. I'd examine various models such as: DV ~ IV1 + IV2 + gender DV ~ IV1 * IV2 + gender DV ~ IV1 * IV2 + gender * IV1 + gender*IV2 DV ~ IV1 * IV2 * gender If you get a significant gender by something interaction, then you may wish to further explore this using separate analyses, but I'd start with the overall integrated model. Illustrating points about centered predictors: The following code returns the p-value of the r-square change and the final r-square for both an uncentered and three centred versions (global, female centred, male centred) of an interaction effect model. library(MASS) survey <- na.omit(survey) head(survey) x <- survey[, c('Sex', 'Wr.Hnd', 'NW.Hnd', 'Pulse')] names(x) <- c('gender', 'iv1', 'iv2', 'dv') x$scaled_iv1 <- scale(x$iv1, scale=FALSE) x$scaled_iv2 <- scale(x$iv2, scale=FALSE) x$female_scaled_iv1 <- scale(x$iv1, center=mean(x[x$gender == "Female", 'iv1']), scale=FALSE) x$female_scaled_iv2 <- scale(x$iv2, center=mean(x[x$gender == "Female", 'iv2']), scale=FALSE) x$male_scaled_iv1 <- scale(x$iv1, center=mean(x[x$gender == "Male", 'iv1']), scale=FALSE) x$male_scaled_iv2 <- scale(x$iv2, center=mean(x[x$gender == "Male", 'iv2']), scale=FALSE) compare_fits <- function(x) { fit1 <- lm(dv ~ iv1+iv2, x) fit2 <- lm(dv ~ iv1*iv2, x) fit3 <- lm(dv ~ scaled_iv1*scaled_iv2, x) fit4 <- lm(dv ~ male_scaled_iv1*male_scaled_iv2, x) fit5 <- lm(dv ~ female_scaled_iv1*female_scaled_iv2, x) results <- list() results$p_normal <- anova(fit1, fit2)[2,6] results$p_centered <- anova(fit1, fit3)[2,6] results$p_centered_male <- anova(fit1, fit4)[2,6] results$p_centered_female <- anova(fit1, fit5)[2,6] results$rsq_normal <- summary(fit2)$r.squared results$rsq_centered <- summary(fit3)$r.squared results$rsq_centered_male <- summary(fit4)$r.squared results$rsq_centered_female <- summary(fit5)$r.squared unlist(results) } # The following results report p-values and rsq for final model # using normal (i.e., uncentered) and centered predictors compare_fits(x) compare_fits(x[x$gender=='Male', ]) compare_fits(x[x$gender=='Female', ]) The results show how the values do not vary across uncentered and centered analyses. 
> compare_fits(x) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.241816265 0.241816265 0.241816265 0.241816265 0.009982317 rsq_centered rsq_centered_male rsq_centered_female 0.009982317 0.009982317 0.009982317 > compare_fits(x[x$gender=='Male', ]) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.14034102 0.14034102 0.14034102 0.14034102 0.03055692 rsq_centered rsq_centered_male rsq_centered_female 0.03055692 0.03055692 0.03055692 > compare_fits(x[x$gender=='Female', ]) p_normal p_centered p_centered_male p_centered_female rsq_normal 0.5196788 0.5196788 0.5196788 0.5196788 0.0128802 rsq_centered rsq_centered_male rsq_centered_female 0.0128802 0.0128802 0.0128802
Should I re-center variables when looking at moderator effect in men and women separately? Centring: Centring does not change the significance of the r-square change of your interaction effect. It also will not change the values you get for a simple slopes analysis. Thus, for most purposes
49,472
How to combine time-series based features with different frequencies
As suggested by @ChuckKillerDoll, you could aggregate or derive features from your current measures, but chances are you will lose information by doing so. Another way to go about it is to create three separate models, training one model on each of the frequency information matrices individually. These produce output scores $S_1, S_2, S_3$, which you can then combine in a new ensemble model. The easiest way to combine them is in a linear model: $$S = a_1 S_1 + a_2 S_2 + a_3 S_3$$ Here $S$ is the final output score, and you still need to learn the weights $a_i$ on a validation set. You could, of course, plug the scores into more complicated models. The main disadvantage of this technique is that you lose some of the covariance information between the frequency groups.
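A hedged sketch of learning the weights $a_i$ on a validation set with a linear meta-model; the scores s1, s2, s3 and the outcome y are simulated stand-ins, not taken from the question:
set.seed(1)
y  <- rnorm(200)                      # validation-set outcome
s1 <- y + rnorm(200, sd = 0.8)        # scores from the three base models
s2 <- y + rnorm(200, sd = 1.0)
s3 <- y + rnorm(200, sd = 1.2)
ensemble <- lm(y ~ s1 + s2 + s3)      # weights a_1, a_2, a_3 (plus an intercept)
coef(ensemble)
S <- predict(ensemble)                # combined final score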
How to combine time-series based features with different frequencies
As suggested by @ChuckKillerDoll, you could find aggregate / derive features from your current measures, but chances are you will lose information by doing so. Another way to go about it, is to create
How to combine time-series based features with different frequencies As suggested by @ChuckKillerDoll, you could find aggregate / derive features from your current measures, but chances are you will lose information by doing so. Another way to go about it, is to create three separate models and train a model for each of the frequency information matrices individually. These produce output scores $S_1,S_2,S_3$, you can then combine these in a new ensemble model. The easiest way to combine them is in a linear model: $$S = a_1 S_1 + a_2 S_2 + a_3 S_3$$ Here $S$ is the final output score and you still need to learn the weights $a_i$ on a validation set. You could of course, plug the scores into more complicated models ... The main disadvantage of this technique is that you lose some of the covariance information.
How to combine time-series based features with different frequencies As suggested by @ChuckKillerDoll, you could find aggregate / derive features from your current measures, but chances are you will lose information by doing so. Another way to go about it, is to create
49,473
How to combine time-series based features with different frequencies
I would try a state-space model. The simplest possible form would be: \begin{eqnarray} \begin{pmatrix} \theta_{1t} \\ \theta_{2t} \\ \theta_{3t} \end{pmatrix} &=& \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix} \begin{pmatrix}\theta_{1,t-1} \\ \theta_{2,t-1} \\ \theta_{3,t-1} \end{pmatrix} + \begin{pmatrix} \eta_{1t} \\ \eta_{2t} \\ \eta_{3t} \end{pmatrix} \\ \begin{pmatrix}Y_{1t} \\ Y_{2t} \\ Y_{3t} \end{pmatrix} &=& \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}\begin{pmatrix} \theta_{1t} \\ \theta_{2t} \\ \theta_{3t} \end{pmatrix} + \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{pmatrix} \end{eqnarray} where all the $Y_{it}$, $\theta_{it}$ etc. are to be understood as vectors of the same dimensions as your three time series. You can fit such a model, a multivariate random walk plus noise, even if not all the $Y$'s are observed at all times, and estimate the state vector at all possible $t$'s. This removes the problem of the different frequencies of the three time series. If the situation warrants, you might also fit a different model. For instance, if there is some redundancy among the components of $Y_{it}$ you might want to use $\theta_{it}$ with $dim(\theta_{it}) < dim(Y_{it})$, and fit what would be essentially a dynamic factor analysis model.
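As a toy illustration of the filtering idea (simplified to a single univariate random walk plus noise rather than the multivariate model above, and run on simulated data), a hand-rolled Kalman filter that simply skips the update step whenever an observation is missing:
set.seed(2)
n <- 60
theta <- cumsum(rnorm(n, sd = 0.3))       # latent state
y <- theta + rnorm(n, sd = 0.5)           # observations
y[seq(2, n, by = 3)] <- NA                # pretend some periods are unobserved
sig2_eta <- 0.3^2; sig2_eps <- 0.5^2      # state and observation noise variances
a <- 0; P <- 10                           # vague initial state
filtered <- numeric(n)
for (t in 1:n) {
  P <- P + sig2_eta                       # prediction step
  if (!is.na(y[t])) {                     # update only when y is observed
    K <- P / (P + sig2_eps)
    a <- a + K * (y[t] - a)
    P <- (1 - K) * P
  }
  filtered[t] <- a
}
plot(y); lines(filtered, col = "red")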
How to combine time-series based features with different frequencies
I would try a state-space model. The simplest possible form would be: \begin{eqnarray} \begin{pmatrix} \theta_{1t} \\ \theta_{2t} \\ \theta_{3t} \end{pmatrix} &=& \begin{pmatrix} I & 0 & 0 \\ 0 & I &
How to combine time-series based features with different frequencies I would try a state-space model. The simplest possible form would be: \begin{eqnarray} \begin{pmatrix} \theta_{1t} \\ \theta_{2t} \\ \theta_{3t} \end{pmatrix} &=& \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix} \begin{pmatrix}\theta_{1,t-1} \\ \theta_{2,t-1} \\ \theta_{3,t-1} \end{pmatrix} + \begin{pmatrix} \eta_{1t} \\ \eta_{2t} \\ \eta_{3t} \end{pmatrix} \\ \begin{pmatrix}Y_{1t} \\ Y_{2t} \\ Y_{3t} \end{pmatrix} &=& \begin{pmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}\begin{pmatrix} \theta_{1t} \\ \theta_{2t} \\ \theta_{3t} \end{pmatrix} + \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \end{pmatrix} \end{eqnarray} where all the $Y_{it}$, $\theta_{it}$ etc. are to be understood as vectors of the same dimensions as your three time series. You can fit such a model, a multivariate random walk plus noise, even if not all the $Y$'s are observed at all times, and estimate the state vector at all possible $t$'s. This removes the problem of the different frequencies of the three time series. If the situation warrants, you might also fit a different model. For instance, if there is some redundancy among the components of $Y_{it}$ you might want to use $\theta_{it}$ with $dim(\theta_{it}) < dim(Y_{it})$, and fit what would be essentially a dynamic factor analysis model.
How to combine time-series based features with different frequencies I would try a state-space model. The simplest possible form would be: \begin{eqnarray} \begin{pmatrix} \theta_{1t} \\ \theta_{2t}
49,474
How to combine time-series based features with different frequencies
IMHO, your problem is related to "feature engineering". Dealing with financial market time series, I usually create one new column (feature) for each parameter set of each indicator. For example, a simple moving average (SMA) is defined by the set {LookbackPeriod, Frequency, ShortPeriod, LongPeriod}, and each combination of these elements forms a new feature/attribute. The instances of your Y values (classes) can then be trained separately (in your classifier) for each frequency, e.g. Return(t+period) where period is days, weeks, and so on. If you want something more general, what you are trying to do is predict using "first-order rule learning".
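An illustrative sketch of this kind of feature construction in R, one column per moving-average window; the price series and the window lengths are made up for the example:
set.seed(3)
price <- 100 + cumsum(rnorm(300))
sma <- function(x, k) as.numeric(stats::filter(x, rep(1 / k, k), sides = 1))  # trailing SMA
features <- data.frame(
  price  = price,
  sma_5  = sma(price, 5),    # short lookback
  sma_20 = sma(price, 20),   # medium lookback
  sma_60 = sma(price, 60)    # long lookback
)
tail(features)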
How to combine time-series based features with different frequencies
IMHO, your problem is related to "feature engineering". Dealing with financial market time series, I use to create one new column (feature) for each set of parameters of each indicator. For example a
How to combine time-series based features with different frequencies IMHO, your problem is related to "feature engineering". Dealing with financial market time series, I use to create one new column (feature) for each set of parameters of each indicator. For example a simple moving average (SMA) is defined by the set = {LookbackPeriod,Frequency,ShortPeriod,LongPeriod} and each combination of these elements forms a new feature/attribute. The instances of your Y values (classes) can be trained separately (in your classifier) for each frequency, like Return(t+period) where period == days, weeks, ... If you want something more general you are trying to predict using "first-order rule learning"
How to combine time-series based features with different frequencies IMHO, your problem is related to "feature engineering". Dealing with financial market time series, I use to create one new column (feature) for each set of parameters of each indicator. For example a
49,475
Sample size for multiple linear regression
Power analysis for multiple regression is quite complex as there are many moving parts and potentially several different tests of interest. The function pwr.f2.test is based on Cohen's book Statistical Power Analysis for the Behavioral Sciences and you can find detailed explanations and many examples there. The most important insight is that the sample size is already captured by the coefficient v (degrees of freedom for the denominator). Exactly how depends on the details of the model. Consequently, the analysis already takes it into account. Alternatively, another way to conduct a power analysis is to use simulation. It is particularly attractive for this sort of settings as you can play with each individual aspect of the design. For more on this, see Calculating statistical power (see also G. Jay Kern's post). Once you get the hang of it, it's also quite easy to extend the simulation approach to numerous other tests. For logistic regression, I am not sure if there is any specific function in the pwr package but there is one in G*Power. I don't remember ever using it so I can't comment further on this part of the question.
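For example, a hedged use of pwr.f2.test to size a regression with 5 predictors and a medium effect size (f2 = 0.15); u is the numerator df and v the denominator df, so the implied sample size is roughly v plus the number of predictors plus 1:
library(pwr)
res <- pwr.f2.test(u = 5, f2 = 0.15, sig.level = 0.05, power = 0.80)
res
ceiling(res$v) + 5 + 1   # approximate required n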
Sample size for multiple linear regression
Power analysis for multiple regression is quite complex as there are many moving parts and potentially several different tests of interest. The function pwr.f2.test is based on Cohen's book Statistica
Sample size for multiple linear regression Power analysis for multiple regression is quite complex as there are many moving parts and potentially several different tests of interest. The function pwr.f2.test is based on Cohen's book Statistical Power Analysis for the Behavioral Sciences and you can find detailed explanations and many examples there. The most important insight is that the sample size is already captured by the coefficient v (degrees of freedom for the denominator). Exactly how depends on the details of the model. Consequently, the analysis already takes it into account. Alternatively, another way to conduct a power analysis is to use simulation. It is particularly attractive for this sort of settings as you can play with each individual aspect of the design. For more on this, see Calculating statistical power (see also G. Jay Kern's post). Once you get the hang of it, it's also quite easy to extend the simulation approach to numerous other tests. For logistic regression, I am not sure if there is any specific function in the pwr package but there is one in G*Power. I don't remember ever using it so I can't comment further on this part of the question.
Sample size for multiple linear regression Power analysis for multiple regression is quite complex as there are many moving parts and potentially several different tests of interest. The function pwr.f2.test is based on Cohen's book Statistica
49,476
Sample size for multiple linear regression
The best treatment of power for logistic regression I have seen was in Breslow and Day's volume 2 of "Statistical Methods In Cancer Research". The starting point is to realize that the simplest case is a binomial test and that the variance depends on N and the case proportion by way of var(t) = N*p*(1-p). Rather delightful is the fact that the publisher now makes the entirety of both volumes available online and the "Design Considerations" chapter is at the link I embedded. See section 5.6
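A quick base-R illustration of how N and the case proportion drive things, using made-up numbers; power.prop.test() covers the simple two-proportion starting point:
N <- 200; p <- 0.10
N * p * (1 - p)   # variance of the case count, var(t) = N*p*(1-p)
# power calculation for detecting a difference between two case proportions
power.prop.test(p1 = 0.10, p2 = 0.15, sig.level = 0.05, power = 0.80)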
Sample size for multiple linear regression
The best treatment of power for logistic regression I have seen was in Breslow and Day's volume 2 of "Statistical Methods In Cancer Research". The starting point is to realize that the simplest case i
Sample size for multiple linear regression The best treatment of power for logistic regression I have seen was in Breslow and Day's volume 2 of "Statistical Methods In Cancer Research". The starting point is to realize that the simplest case is a binomial test and that the variance depends on N and the case proportion by way of var(t) = N*p*(1-p). Rather delightful is the fact that the publisher now makes the entirety of both volumes available online and the "Design Considerations" chapter is at the link I embedded. See section 5.6
Sample size for multiple linear regression The best treatment of power for logistic regression I have seen was in Breslow and Day's volume 2 of "Statistical Methods In Cancer Research". The starting point is to realize that the simplest case i
49,477
What would be a better alternative to two balanced latin squares (human subject experiment)?
A classical Latin Square confounds two factor interactions with main effects. If the second Latin Square differs from the first in how it combines the levels, you may be able to break some of the confounding. If the two are identical but run in different orders, you'll have an estimate of pure error. Which is better depends on what assumptions you are willing to make and what knowledge is more important to you. Run order should be randomized if at all possible. Even if you use the same square twice, you should randomize each one separately. If you get the same run order both times something almost certainly went wrong with your randomization.
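A small sketch of generating and separately randomizing two Latin squares in base R (assuming, for illustration, four treatments; rows, columns and treatment labels are each permuted independently):
k <- 4
cyclic <- outer(1:k, 1:k, function(i, j) ((i + j - 2) %% k) + 1)  # standard cyclic square
randomize_square <- function(sq) {
  sq <- sq[sample(nrow(sq)), sample(ncol(sq))]    # shuffle rows and columns
  matrix(sample(nrow(sq))[sq], nrow = nrow(sq))   # relabel treatments at random
}
set.seed(4)
square1 <- randomize_square(cyclic)
square2 <- randomize_square(cyclic)   # randomized separately, as recommended above
square1; square2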
What would be a better alternative to two balanced latin squares (human subject experiment)?
A classical Latin Square confounds two factor interactions with main effects. If the second Latin Square differs from the first in how it combines the levels, you may be able to break some of the con
What would be a better alternative to two balanced latin squares (human subject experiment)? A classical Latin Square confounds two factor interactions with main effects. If the second Latin Square differs from the first in how it combines the levels, you may be able to break some of the confounding. If the two are identical but run in different orders, you'll have an estimate of pure error. Which is better depends on what assumptions you are willing to make and what knowledge is more important to you. Run order should be randomized if at all possible. Even if you use the same square twice, you should randomize each one separately. If you get the same run order both times something almost certainly went wrong with your randomization.
What would be a better alternative to two balanced latin squares (human subject experiment)? A classical Latin Square confounds two factor interactions with main effects. If the second Latin Square differs from the first in how it combines the levels, you may be able to break some of the con
49,478
Is it invalid to use a single sample to estimate more than one proportion?
Is it fair for me to estimate the proportion of each colour and use the 'standard error of a proportion' for each colour independently? Yes, absolutely. It's not just fair, but also correct. For example, can I say that there are 30% white marbles with a margin error of $\sqrt{p(1-p)/n}$, and then do the same for black (40%) and red (30%)? Think of it as binomial "red" vs "not red" say. The standard error of the proportion of red will be right. However, if you start doing calculations that involve multiple colours, you must take their dependence into account. The counts for any two colours are negatively correlated. It's not "double dipping" at all. You're just dealing with a multinomial, and if you focus on just one category vs the rest, that's binomial. All perfectly legitimate. However, if you're talking about something akin to marbles, take care with the issue of sampling with replacement vs sampling without replacement. Note that if you're not sampling with replacement your standard errors don't apply because you'd have a hypergeometric not a binomial (and the multi-colour situation is multivariate hypergeometric rather than multinomial). The variance is smaller in this case, sometimes substantially smaller. Edit: just an additional note - the margin of error isn't usually taken to be one standard deviation of the sample proportion.
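A worked sketch with the proportions from the question (30% white, 40% black, 30% red) and an assumed sample of n = 200 draws with replacement; the last line shows the negative dependence between colours:
n <- 200
p <- c(white = 0.30, black = 0.40, red = 0.30)
sqrt(p * (1 - p) / n)            # standard error for each colour separately
-p["white"] * p["black"] / n     # covariance between two colour proportions: -p_i * p_j / n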
Is it invalid to use a single sample to estimate more than one proportion?
Is it fair for me to estimate the proportion of each colour and use the 'standard error of a proportion' for each colour independently? Yes, absolutely. It's not just fair, but also correct. For e
Is it invalid to use a single sample to estimate more than one proportion? Is it fair for me to estimate the proportion of each colour and use the 'standard error of a proportion' for each colour independently? Yes, absolutely. It's not just fair, but also correct. For example, can I say that there are 30% white marbles with a margin error of $\sqrt{p(1-p)/n}$, and then do the same for black (40%) and red (30%)? Think of it as binomial "red" vs "not red" say. The standard error of the proportion of red will be right. However, if you start doing calculations that involve multiple colours, you must take their dependence into account. The counts for any two colours are negatively correlated. It's not "double dipping" at all. You're just dealing with a multinomial, and if you focus on just one category vs the rest, that's binomial. All perfectly legitimate. However, if you're talking about something akin to marbles, take care with the issue of sampling with replacement vs sampling without replacement. Note that if you're not sampling with replacement your standard errors don't apply because you'd have a hypergeometric not a binomial (and the multi-colour situation is multivariate hypergeometric rather than multinomial). The variance is smaller in this case, sometimes substantially smaller. Edit: just an additional note - the margin of error isn't usually taken to be one standard deviation of the sample proportion.
Is it invalid to use a single sample to estimate more than one proportion? Is it fair for me to estimate the proportion of each colour and use the 'standard error of a proportion' for each colour independently? Yes, absolutely. It's not just fair, but also correct. For e
49,479
Discrete Time Survival Analysis - Correct Way to Write Survival Function
The answer is that both are used, unfortunately. In the continuous case, you are right the distinction is unimportant. In the discrete case, the interpretation would be slightly different and therefore clarity is important. In my experience, the most common definition of the survival function is $S(t) = Pr(T>t)$ and so would match your yellow column. This is the one used in the derivation of the Kaplan-Meier estimator: $\hat{S}(t) = \frac{\text{individuals with } T>t}{\text{total individuals}} = \prod_{j=1}^k{(1-\frac{d_j}{r_j})} $ where $d_j$ is the number of events in interval $j$, $r_j$ is the number of individuals at risk in interval $j$, and $\frac{d_j}{r_j} = h(t)$ An important note is that the survival function should start with $S(0) = 1$ if 0 is the first time point, in the absence of left censoring (i.e. assuming no one starts follow-up already having had the event). In the case of your example, I presume that someone who opens and closes a bank account in January 2012 opens the account before they close it; so if time intervals were shortened (for example using weeks or days as the time scale) then S(0) would equal 1 in both cases. How much the distinction between the two definitions matters may depend on the specific application. The degree of divergence between the two calculations will likely depend on the length of follow-up, the frequency of the event, and the number of ties and how these are considered. In addition, in many applications we are interested in comparing hazards or survival between two groups rather than the absolute survival or hazard in a specific group. In this case, I think the distinction should be even less important, but I would have to check into that to be sure. For more detail on survival analysis where $S(t)$ is clearly defined as $Pr(T>t)$, see: Allison: Survival Analysis Using the SAS System For more detail, with $S(t) = Pr(T>=t)$, see: Collett: Modelling Survival Data in Medical Research, 2nd ed. (note, most of the analytic details will be the same as in Allison, but the interpretations may differ)
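A toy life-table sketch (the counts are made up) showing the discrete hazard and the two survival conventions side by side; the Pr(T >= t) column starts at 1 while the Pr(T > t) column already drops within the first interval:
at_risk <- c(100, 80, 55, 30)   # r_j at the start of each interval
events  <- c(20, 25, 25, 10)    # d_j within each interval
h <- events / at_risk           # discrete hazard h(t)
S_gt  <- cumprod(1 - h)                    # S(t) = Pr(T > t)
S_geq <- c(1, head(cumprod(1 - h), -1))    # S(t) = Pr(T >= t)
data.frame(interval = 1:4, hazard = h, S_gt = S_gt, S_geq = S_geq)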
Discrete Time Survival Analysis - Correct Way to Write Survival Function
The answer is that both are used, unfortunately. In the continuous case, you are right the distinction is unimportant. In the discrete case, the interpretation would be slightly different and therefor
Discrete Time Survival Analysis - Correct Way to Write Survival Function The answer is that both are used, unfortunately. In the continuous case, you are right the distinction is unimportant. In the discrete case, the interpretation would be slightly different and therefore clarity is important. In my experience, the most common definition of the survival function is $S(t) = Pr(T>t)$ and so would match your yellow column. This is the one used in the derivation of the Kaplan-Meier estimator: $\hat{S}(t) = \frac{\text{individuals with } T>t}{\text{total individuals}} = \prod_{j=1}^k{(1-\frac{d_j}{r_j})} $ where $d_j$ is the number of events in interval $j$, $r_j$ is the number of individuals at risk in interval $j$, and $\frac{d_j}{r_j} = h(t)$ An important note is that the survival function should start with $S(0) = 1$ if 0 is the first time point, in the absence of left censoring (i.e. assuming no one starts follow-up already having had the event). In the case of your example, I presume that someone who opens and closes a bank account in January 2012 opens the account before they close it; so if time intervals were shortened (for example using weeks or days as the time scale) then S(0) would equal 1 in both cases. How much the distinction between the two definitions matters may depend on the specific application. The degree of divergence between the two calculations will likely depend on the length of follow-up, the frequency of the event, and the number of ties and how these are considered. In addition, in many applications we are interested in comparing hazards or survival between two groups rather than the absolute survival or hazard in a specific group. In this case, I think the distinction should be even less important, but I would have to check into that to be sure. For more detail on survival analysis where $S(t)$ is clearly defined as $Pr(T>t)$, see: Allison: Survival Analysis Using the SAS System For more detail, with $S(t) = Pr(T>=t)$, see: Collett: Modelling Survival Data in Medical Research, 2nd ed. (note, most of the analytic details will be the same as in Allison, but the interpretations may differ)
Discrete Time Survival Analysis - Correct Way to Write Survival Function The answer is that both are used, unfortunately. In the continuous case, you are right the distinction is unimportant. In the discrete case, the interpretation would be slightly different and therefor
49,480
Use Random Forest model to make predictions from sensor data
Linear regressions are great because you can implement prediction very simply in any program that can multiply and add. Random forests, on the other hand, are much more complicated. They are made up individually of decision trees, which can basically be represented by a set of rules. However, a random forest may have hundreds or thousands of individual trees, which would be very tedious to implement by hand in another system. Your best bet is probably going to be to find a random forest implementation for the system you wish to export the model to, and then use PMML to export the model. The RPMML package will let you convert your random forest to an XML file, which you should be able to import to any system that supports PMML.
Use Random Forest model to make predictions from sensor data
Linear regressions are great because you can implement prediction very simply in any program that can multiply and add. Random forests, on the other hand, are much more complicated. They are made up
Use Random Forest model to make predictions from sensor data Linear regressions are great because you can implement prediction very simply in any program that can multiply and add. Random forests, on the other hand, are much more complicated. They are made up individually of decision trees, which can basically be represented by a set of rules. However, a random forest may have hundreds or thousands of individual trees, which would be very tedious to implement by hand in another system. Your best bet is probably going to be to find a random forest implementation for the system you wish to export the model to, and then use PMML to export the model. The RPMML package will let you convert your random forest to an XML file, which you should be able to import to any system that supports PMML.
Use Random Forest model to make predictions from sensor data Linear regressions are great because you can implement prediction very simply in any program that can multiply and add. Random forests, on the other hand, are much more complicated. They are made up
49,481
Use Random Forest model to make predictions from sensor data
You certainly have to invest some work in that, but that's not really that bad to export the randomForest model; the getTree function dumps individual trees in a very compact and nice format like this (documented in ?getTree): > getTree(iris_rf,3) left daughter right daughter split var split point status prediction 1 2 3 4 0.80 1 0 2 0 0 0 0.00 -1 1 3 4 5 4 1.75 1 0 4 6 7 1 5.00 1 0 5 8 9 4 1.85 1 0 6 10 11 2 2.45 1 0 7 12 13 2 2.25 1 0 8 14 15 1 5.95 1 0 9 0 0 0 0.00 -1 3 10 0 0 0 0.00 -1 2 11 0 0 0 0.00 -1 3 12 16 17 1 6.10 1 0 13 0 0 0 0.00 -1 2 14 0 0 0 0.00 -1 2 15 0 0 0 0.00 -1 3 16 18 19 3 4.50 1 0 17 0 0 0 0.00 -1 2 18 0 0 0 0.00 -1 2 19 0 0 0 0.00 -1 3 and combining predictions of the whole ensemble is just a matter of summing up the votes (for classification) or calculating mean (for regression). I have once written a converter that was eating those outputs and generating C code (with lots of gotos) and it was working quite well.
Use Random Forest model to make predictions from sensor data
You certainly have to invest some work in that, but that's not really that bad to export the randomForest model; the getTree function dumps individual trees in a very compact and nice format like this
Use Random Forest model to make predictions from sensor data You certainly have to invest some work in that, but that's not really that bad to export the randomForest model; the getTree function dumps individual trees in a very compact and nice format like this (documented in ?getTree): > getTree(iris_rf,3) left daughter right daughter split var split point status prediction 1 2 3 4 0.80 1 0 2 0 0 0 0.00 -1 1 3 4 5 4 1.75 1 0 4 6 7 1 5.00 1 0 5 8 9 4 1.85 1 0 6 10 11 2 2.45 1 0 7 12 13 2 2.25 1 0 8 14 15 1 5.95 1 0 9 0 0 0 0.00 -1 3 10 0 0 0 0.00 -1 2 11 0 0 0 0.00 -1 3 12 16 17 1 6.10 1 0 13 0 0 0 0.00 -1 2 14 0 0 0 0.00 -1 2 15 0 0 0 0.00 -1 3 16 18 19 3 4.50 1 0 17 0 0 0 0.00 -1 2 18 0 0 0 0.00 -1 2 19 0 0 0 0.00 -1 3 and combining predictions of the whole ensemble is just a matter of summing up the votes (for classification) or calculating mean (for regression). I have once written a converter that was eating those outputs and generating C code (with lots of gotos) and it was working quite well.
Use Random Forest model to make predictions from sensor data You certainly have to invest some work in that, but that's not really that bad to export the randomForest model; the getTree function dumps individual trees in a very compact and nice format like this
49,482
Use Random Forest model to make predictions from sensor data
I recently developed a Python package that exports C code from Random Forests classifier trained with Scikit learn: https://github.com/jonnor/emtrees It could be used as an example of how to transform a R model to C code, by combined with the getTree shown by mbq answer.
Use Random Forest model to make predictions from sensor data
I recently developed a Python package that exports C code from Random Forests classifier trained with Scikit learn: https://github.com/jonnor/emtrees It could be used as an example of how to transfor
Use Random Forest model to make predictions from sensor data I recently developed a Python package that exports C code from Random Forests classifier trained with Scikit learn: https://github.com/jonnor/emtrees It could be used as an example of how to transform a R model to C code, by combined with the getTree shown by mbq answer.
Use Random Forest model to make predictions from sensor data I recently developed a Python package that exports C code from Random Forests classifier trained with Scikit learn: https://github.com/jonnor/emtrees It could be used as an example of how to transfor
49,483
Which is the best accuracy measuring criteria among rmse, mae & mape?
I have to agree with Glen. It is axiomatic in control systems engineering that there is no such thing as "best" without a measure of goodness. Some (weak) examples of candidate "bests" include: Best = robust indicator of central tendency; Best = robust indicator of variation around central tendency; Best = fastest to compute. Personally, when trying to select models, I like to use AICc because it is "good enough": it accounts for over-fitting, has a fair basis in statistics, and is computed from figures of merit that many systems already produce as outputs. Here is some info on it: http://www4.ncsu.edu/~shu3/Presentation/AIC.pdf One of its family members is BIC (Bayes Information Criterion): link1, link2. You might want to explore "information criteria" for model selection, and you might consider using "Akaike weights" to combine your models for better predictive power.
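A quick sketch of comparing candidate models by these criteria in R; the data are simulated, and the AICc helper uses the usual small-sample correction AIC + 2k(k+1)/(n-k-1):
set.seed(5)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- 1 + 2 * x1 + rnorm(100)
m1 <- lm(y ~ x1)
m2 <- lm(y ~ x1 + x2)          # extra, useless predictor
AIC(m1, m2); BIC(m1, m2)
aicc <- function(m) { k <- length(coef(m)) + 1; n <- nobs(m); AIC(m) + 2 * k * (k + 1) / (n - k - 1) }
sapply(list(m1 = m1, m2 = m2), aicc)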
Which is the best accuracy measuring criteria among rmse, mae & mape?
I have to agree with Glen. It is axiomatic in control system's engineering that there is no such thing as "best" without a measure of goodness. Some (weak) examples of candidate bests include: Best =
Which is the best accuracy measuring criteria among rmse, mae & mape? I have to agree with Glen. It is axiomatic in control system's engineering that there is no such thing as "best" without a measure of goodness. Some (weak) examples of candidate bests include: Best = robust indicator of central tendency Best = robust indicator of variation around central tendency Best = fastest to compute Personally, when trying to select models, I like to use AICc because it is "good enough". It accounts for over-fitting, has a fair basis in statistics, and is comprised using figures of merit that many systems have as outputs. Here is some info on it: http://www4.ncsu.edu/~shu3/Presentation/AIC.pdf One of its family members is BIC (Bayes Information Criterion): link1,link2. You might want to explore "Information Criterion" for model selection. You might consider using "Akaike weights" to combine your models for better predictive power.
Which is the best accuracy measuring criteria among rmse, mae & mape? I have to agree with Glen. It is axiomatic in control system's engineering that there is no such thing as "best" without a measure of goodness. Some (weak) examples of candidate bests include: Best =
49,484
Kernel density estimator that doesn't collapse in the tails
It is a good idea to use the T-distribution to build a KDE When you build a KDE, once you go outside the data range, the rate of decay in the tails is determined by the rate of decay in the tails of the kernel distribution. The normal distribution has very thin tails (which decay at an exponentially-quadratic rate) so it is unsurprising that the tails of your KDE decay rapidly outside the data range. If you want to ameliorate this, and allow fatter tails, I would recommend that you use a T-distribution as the kernel for your KDE. This allows you to adjust the degrees-of-freedom parameter to adjust the desired "fatness" of the tails, and it even allows you to have heavy tails that give infinite variance in your KDE. You can implement a KDE using the T-distribution using the KDE function in the utilities package. This function allows you to specify the degrees-of-freedom parameter to control the fatness of the tails of the KDE. (This function produces an object containing probability functions for the KDE; you can also load those functions directly to the global environment so that you can call them just like the probability functions of another distribution.) Here is an example of fitting a KDE using a T-distribution with two degrees-of-freedom, which means that the KDE has tails that are sufficiently heavy to give infinite variance. If you were to examine the log-density of this KDE (using the dkde function generated here) you will see that the raite of decay in the tails is much slower than for a KDE that uses the normal kernel. #Load the package library(utilities) #Generate some mock data set.seed(1) DATA <- rnorm(40) #Create a KDE using the T-distribution with two degrees-of-freedom (infinite variance) MY_KDE <- KDE(DATA, df = 2, to.environment = TRUE) plot(MY_KDE) #Show the KDE output MY_KDE Kernel Density Estimator (KDE) Computed from 40 data points in the input 'DATA' Estimated bandwidth = 0.367412 Input degrees-of-freedom = 2.000000 Probability functions for the KDE are the following: Density function: dkde * Distribution function: pkde * Quantile function: qkde * Random generation function: rkde * * This function is presently loaded in the global environment
Kernel density estimator that doesn't collapse in the tails
It is a good idea to use the T-distribution to build a KDE When you build a KDE, once you go outside the data range, the rate of decay in the tails is determined by the rate of decay in the tails of t
Kernel density estimator that doesn't collapse in the tails It is a good idea to use the T-distribution to build a KDE When you build a KDE, once you go outside the data range, the rate of decay in the tails is determined by the rate of decay in the tails of the kernel distribution. The normal distribution has very thin tails (which decay at an exponentially-quadratic rate) so it is unsurprising that the tails of your KDE decay rapidly outside the data range. If you want to ameliorate this, and allow fatter tails, I would recommend that you use a T-distribution as the kernel for your KDE. This allows you to adjust the degrees-of-freedom parameter to adjust the desired "fatness" of the tails, and it even allows you to have heavy tails that give infinite variance in your KDE. You can implement a KDE using the T-distribution using the KDE function in the utilities package. This function allows you to specify the degrees-of-freedom parameter to control the fatness of the tails of the KDE. (This function produces an object containing probability functions for the KDE; you can also load those functions directly to the global environment so that you can call them just like the probability functions of another distribution.) Here is an example of fitting a KDE using a T-distribution with two degrees-of-freedom, which means that the KDE has tails that are sufficiently heavy to give infinite variance. If you were to examine the log-density of this KDE (using the dkde function generated here) you will see that the raite of decay in the tails is much slower than for a KDE that uses the normal kernel. #Load the package library(utilities) #Generate some mock data set.seed(1) DATA <- rnorm(40) #Create a KDE using the T-distribution with two degrees-of-freedom (infinite variance) MY_KDE <- KDE(DATA, df = 2, to.environment = TRUE) plot(MY_KDE) #Show the KDE output MY_KDE Kernel Density Estimator (KDE) Computed from 40 data points in the input 'DATA' Estimated bandwidth = 0.367412 Input degrees-of-freedom = 2.000000 Probability functions for the KDE are the following: Density function: dkde * Distribution function: pkde * Quantile function: qkde * Random generation function: rkde * * This function is presently loaded in the global environment
Kernel density estimator that doesn't collapse in the tails It is a good idea to use the T-distribution to build a KDE When you build a KDE, once you go outside the data range, the rate of decay in the tails is determined by the rate of decay in the tails of t
49,485
Kernel density estimator that doesn't collapse in the tails
It may be a consequence of the way you have presented your example, but it looks like your density function has finite support (e.g, a truncated Gaussian)? If this is the case, why not use a spline density estimator with linear tails: http://cran.r-project.org/web/packages/pendensity/vignettes/pendensity.pdf You could also have a look into wavelet density estimation: http://cran.r-project.org/web/packages/wavethresh/wavethresh.pdf I'm not sure about the suggestion of using a logspline? Surely this will have by construction exponential (fast decaying) tails?
Kernel density estimator that doesn't collapse in the tails
It may be a consequence of the way you have presented your example, but it looks like your density function has finite support (e.g, a truncated Gaussian)? If this is the case, why not use a spline de
Kernel density estimator that doesn't collapse in the tails It may be a consequence of the way you have presented your example, but it looks like your density function has finite support (e.g, a truncated Gaussian)? If this is the case, why not use a spline density estimator with linear tails: http://cran.r-project.org/web/packages/pendensity/vignettes/pendensity.pdf You could also have a look into wavelet density estimation: http://cran.r-project.org/web/packages/wavethresh/wavethresh.pdf I'm not sure about the suggestion of using a logspline? Surely this will have by construction exponential (fast decaying) tails?
Kernel density estimator that doesn't collapse in the tails It may be a consequence of the way you have presented your example, but it looks like your density function has finite support (e.g, a truncated Gaussian)? If this is the case, why not use a spline de
49,486
Kernel density estimator that doesn't collapse in the tails
I have a number of questions. Why do you care about the tail of the distribution if you don't know where it is? You have said that you would need to evaluate the density at x = 10 in your example. How many samples do you have? For KDE I am fairly certain you don't have enough for this. Looking at that part of the tail, for a standard normal we have $$P(|X| > 10) = 1 - \operatorname{erf}\left(\frac{10}{\sqrt{2}}\right)$$ and thus only about one sample in $\sim 6.5\times10^{22}$ will have $|x| \geq 10$ (Wolfram Alpha). Which brings me back to the question of why you need to evaluate that far out on the tail. Your tail density will also be significantly affected by the bandwidth that you select for your KDE. If you make your kernel have bandwidth = 1, I bet your graph will look a lot nicer at the tails, but this is only due to the coincidence of the kernel having the same tail properties as the density of interest.
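A one-line check of that tail probability in R:
p <- 2 * pnorm(-10)   # P(|X| > 10) for a standard normal
p                     # about 1.5e-23
1 / p                 # about 6.6e22 draws needed, on average, to see one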
Kernel density estimator that doesn't collapse in the tails
I have a number of questions. Why do you care about the tail of the distribution if you don't know where it is? You have said that you would need to evaluate the density at x = 10 in your example. Ho
Kernel density estimator that doesn't collapse in the tails I have a number of questions. Why do you care about the tail of the distribution if you don't know where it is? You have said that you would need to evaluate the density at x = 10 in your example. How many samples do you have? For KDE I am fairly certain you don't have enough for this. Looking at that part of the tail, for a standard normal we have $$P(|X| > 10) = 1 - \operatorname{erf}\left(\frac{10}{\sqrt{2}}\right)$$ and thus only about one sample in $\sim 6.5\times10^{22}$ will have $|x| \geq 10$ (Wolfram Alpha). Which brings me back to the question of why you need to evaluate that far out on the tail. Your tail density will also be significantly affected by the bandwidth that you select for your KDE. If you make your kernel have bandwidth = 1, I bet your graph will look a lot nicer at the tails, but this is only due to the coincidence of the kernel having the same tail properties as the density of interest.
Kernel density estimator that doesn't collapse in the tails I have a number of questions. Why do you care about the tail of the distribution if you don't know where it is? You have said that you would need to evaluate the density at x = 10 in your example. Ho
49,487
Why is the logistic regression cost function scaled by the number of examples?
I think @soufanom had a good answer. I will try to add on. In general, there are two reasons to have a constant in the loss function. The first reason is to get simpler notation later. For example, if you use the loss function $f(y,\hat y)=\frac 1 2(y-\hat y)^2$ and take the derivative with respect to $\hat y$, you will not have the annoying factor of $2$. The second reason is to "normalize" the loss value by the number of data points. For example (thinking about regression now), suppose you have $10$ data points and for each data point you make a $1.0$ error, e.g., the ground truth is $3.5$ and your prediction is $2.5$, etc. Then your loss for the entire data set is $10.0$. On the other hand, if you have $100$ data points and still make a $1.0$ error on each estimate, then your loss for the entire data set is $100$. This does not make too much sense, because the two models are equally good, just applied to different amounts of data. If you normalize the loss by the number of data points (divide the total loss by the number of data points), then the loss values in the above examples will be identical.
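A tiny numeric illustration of that second point in R (the per-point error of 1.0 is taken from the example above):
err <- 1.0
sum(rep(err, 10)^2);  sum(rep(err, 100)^2)     # total loss grows with m: 10 vs 100
mean(rep(err, 10)^2); mean(rep(err, 100)^2)    # mean loss is comparable: 1 vs 1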
Why is the logistic regression cost function scaled by the number of examples?
I think @soufanom had a good answer. I will try to add on. In general, there are two reasons to have a constant in loss function. The first reason is to have a simpler notation later. For example, yo
Why is the logistic regression cost function scaled by the number of examples? I think @soufanom had a good answer. I will try to add on. In general, there are two reasons to have a constant in loss function. The first reason is to have a simpler notation later. For example, you can have a loss function $f(y,\hat y)=\frac 1 2(y-\hat y)^2$, take the derivative respect to $y$, you will not have the annoying term $2$. The second reason is trying to "normalize" the loss value on number of data points. For example, (let us think about the regression now), suppose you have $10$ data points, and for each data point, you made $1.0$ error, e.g., the ground truth is $3.5$, and your prediction is $2.5$, etc.. Then, your loss for the entire data set is $10.0$. On the other hand, if you have $100$ data points. you still have $1.0$ error for each estimation. Then your loss for entire data is $100$. This does not make too much sense because two models are equal, but applied on different data. If you normalize the loss by number of data (divide the total loss by number of data), then the loss values in above examples will be identical.
Why is the logistic regression cost function scaled by the number of examples? I think @soufanom had a good answer. I will try to add on. In general, there are two reasons to have a constant in loss function. The first reason is to have a simpler notation later. For example, yo
49,488
Why is the logistic regression cost function scaled by the number of examples?
Building further on the answer by @hxd1011: Suppose we had a set model -- parameters and all fixed. We would look at the Mean Squared Error as our measure of error. $MSE = \frac{1}{m}\sum_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})^2$ When we regularize using some function of our parameters we add a parameter of the form $\lambda \sum_{j=1}^n f(\theta_j)$ in your case $f(x) = x^2$, but this is a more general concept. If you don't scale that regularization by your sample size -- then for a given $\lambda$, as $m \rightarrow \infty$, the regularization becomes meaningless. By adding that scaling, we gain two things. The scale of $\lambda$ becomes stable for different sample sizes. (i.e. you can get a sense of 'how much' you are regularizing) In some contexts (live model updating) this is crucial. This is closely related to the mean loss on our predictions. While that may seem initially less useful than a concept like MSE, if we are regularizing, then it is a desirable feature.
Why is the logistic regression cost function scaled by the number of examples?
Building further on the answer by @hxd1011: Suppose we had a set model -- parameters and all fixed. We would look at the Mean Squared Error as our measure of error. $MSE = \frac{1}{m}\sum_{i=1}^m (h_
Why is the logistic regression cost function scaled by the number of examples? Building further on the answer by @hxd1011: Suppose we had a set model -- parameters and all fixed. We would look at the Mean Squared Error as our measure of error. $MSE = \frac{1}{m}\sum_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})^2$ When we regularize using some function of our parameters we add a parameter of the form $\lambda \sum_{j=1}^n f(\theta_j)$ in your case $f(x) = x^2$, but this is a more general concept. If you don't scale that regularization by your sample size -- then for a given $\lambda$, as $m \rightarrow \infty$, the regularization becomes meaningless. By adding that scaling, we gain two things. The scale of $\lambda$ becomes stable for different sample sizes. (i.e. you can get a sense of 'how much' you are regularizing) In some contexts (live model updating) this is crucial. This is closely related to the mean loss on our predictions. While that may seem initially less useful than a concept like MSE, if we are regularizing, then it is a desirable feature.
Why is the logistic regression cost function scaled by the number of examples? Building further on the answer by @hxd1011: Suppose we had a set model -- parameters and all fixed. We would look at the Mean Squared Error as our measure of error. $MSE = \frac{1}{m}\sum_{i=1}^m (h_
49,489
Duncan’s statistical test for blocks designed experiment with full factorial scheme
I think I have the answers for these questions: Answers for questions (1), (2) and (3): Since you want to compare averages between treatments, I would recommend that you first try Tukey's test, which is the most rigorous of the existing tests. Tukey's test is good if you want to avoid type I errors (rejecting the null hypothesis when the null hypothesis is true). For example, writing $\mu$ for the average of treatment 1 and $\mu_0$ for the average of treatment 2, the hypotheses are $H_0: \mu = \mu_0$ versus $H_1: \mu < \mu_0$. However, if in your results you are expecting a significant difference and you don't observe this difference using Tukey's test, then you should try Duncan's test, since it separates the treatments more easily than Tukey's. On the other hand, Duncan's test is more laborious than Tukey's, because it requires the calculation of several minimum significant differences (m.s.d.) and you have to order the averages from the highest to the lowest. In the ordered set of averages, the comparison between the highest and lowest average corresponds to a range that covers all $N$ averages. If the difference between the highest and lowest average is significant, another m.s.d. is estimated to compare means covering a range of $N - 1$, and so on. Answer for question (4): In R, since you have repetitions inside the blocks, I would first recommend calculating the averages in Excel and making a table with these block averages, so you can use the following call in R: fat2.rbd(factor1, factor2, block, response). Make a table with the factors, blocks and responses in this order in Excel, and save it as CSV to read the data in. The call fat2.rbd(factor1, factor2, block, response) will automatically give you the factorial ANOVA and Tukey's test. To run Duncan's test you should use fat2.rbd(factor1, factor2, block, response, mcomp="duncan"), which will give you the factorial ANOVA and Duncan's test. In the ANOVA you will see whether there are interactions between the Genetic Materials and the Fertilizers. If there is no interaction between the factors, you can explain the behavior of your data with a simple chart of the main-effect means (the original answer included such a chart). However, if there is an interaction between Genetic Material and Fertilizer, a possible behavior is non-parallel lines in the corresponding interaction plot (also illustrated in the original answer).
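For question (4), a hedged sketch of the calls, assuming fat2.rbd comes from the ExpDes package and that your data frame (here called dat, with invented column names) holds the block means:

library(ExpDes)   # assuming this is the package providing fat2.rbd
# Tukey's test is the default multiple-comparison procedure:
fat2.rbd(dat$genetic, dat$fert, dat$block, dat$yield, quali = c(TRUE, TRUE))
# Duncan's test:
fat2.rbd(dat$genetic, dat$fert, dat$block, dat$yield,
         quali = c(TRUE, TRUE), mcomp = "duncan")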
Duncan’s statistical test for blocks designed experiment with full factorial scheme
I think I have the answers for these questions: Answers for questions (1), (2) and (3) Since you want to compare averages between treatments, I would recommend to you to try first Tukey's test that is
Duncan’s statistical test for blocks designed experiment with full factorial scheme I think I have the answers for these questions: Answers for questions (1), (2) and (3): Since you want to compare averages between treatments, I would recommend that you first try Tukey's test, which is the most rigorous of the existing tests. Tukey's test is good if you want to avoid type I errors (rejecting the null hypothesis when the null hypothesis is true). For example, writing $\mu$ for the average of treatment 1 and $\mu_0$ for the average of treatment 2, the hypotheses are $H_0: \mu = \mu_0$ versus $H_1: \mu < \mu_0$. However, if in your results you are expecting a significant difference and you don't observe this difference using Tukey's test, then you should try Duncan's test, since it separates the treatments more easily than Tukey's. On the other hand, Duncan's test is more laborious than Tukey's, because it requires the calculation of several minimum significant differences (m.s.d.) and you have to order the averages from the highest to the lowest. In the ordered set of averages, the comparison between the highest and lowest average corresponds to a range that covers all $N$ averages. If the difference between the highest and lowest average is significant, another m.s.d. is estimated to compare means covering a range of $N - 1$, and so on. Answer for question (4): In R, since you have repetitions inside the blocks, I would first recommend calculating the averages in Excel and making a table with these block averages, so you can use the following call in R: fat2.rbd(factor1, factor2, block, response). Make a table with the factors, blocks and responses in this order in Excel, and save it as CSV to read the data in. The call fat2.rbd(factor1, factor2, block, response) will automatically give you the factorial ANOVA and Tukey's test. To run Duncan's test you should use fat2.rbd(factor1, factor2, block, response, mcomp="duncan"), which will give you the factorial ANOVA and Duncan's test. In the ANOVA you will see whether there are interactions between the Genetic Materials and the Fertilizers. If there is no interaction between the factors, you can explain the behavior of your data with a simple chart of the main-effect means (the original answer included such a chart). However, if there is an interaction between Genetic Material and Fertilizer, a possible behavior is non-parallel lines in the corresponding interaction plot (also illustrated in the original answer).
Duncan’s statistical test for blocks designed experiment with full factorial scheme I think I have the answers for these questions: Answers for questions (1), (2) and (3) Since you want to compare averages between treatments, I would recommend to you to try first Tukey's test that is
49,490
What's the best way to choose data for Crossvalidation on linear regression settings (PCA, PLS)
You want your evaluation to tell you something useful about your system's performance. Using a specific, held-out test set is nice because it tells you how the system will perform on totally new data. On the other hand, it's hard-to-impossible to perform meaningful inference (i.e., "In general, is my system better than this other one?") with only a single data point, and one rarely has nearly enough data to create multiple test partitions. Cross validation (and similar techniques) try to estimate the generalization error by dividing up the data into multiple training and test sets and evaluating on each of them. This is nice because you get multiple estimates of the system's generalization ability, which allows you to do some proper statistical comparisons. However, these are only valid if the cross-validation folds are set up appropriately. For a "classic" machine learning problem with one set of observations per subject (e.g., the Fisher Iris or Pima Indian data sets), it's hard to mess up cross-validation. Divide your data into $k$ folds, use $k-1$ of them for training and test on the last one. Lather, rinse, and repeat until each fold has been used as the test set. This is probably less advisable for a data set like yours, where there are multiple (64x10x?) observations from each subject. If the observations are correlated across time, space/sensor, and within subjects (as yours almost certainly are), then the model might "learn" some of these associations, which would boost its performance when data from the same/nearby time points, sensors, and subjects appear in the test set, and thus provide an over-optimistic estimate of your system's ability to generalize. The most conservative approach would be to stratify your cross-validation by subject: all the data from a given subject goes into the same fold (e.g., subjects 1-5 are in fold 1, 6-10 in fold 2, etc.) and then cross-validation proceeds as normal. If you're willing to assert that temporally-distant parts of the signal are independent, I guess you could try stratifying that way too, but I'd find that considerably less satisfying. What I would not do is dump all the data into a big list, regardless of time point, sensor, or subject, and then randomly divide that list into folds. I can almost guarantee that this will understate your generalization error (i.e., overstate your system's ability to generalize)!
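A minimal R sketch of subject-stratified folds (column and object names are invented; the model-fitting step is whatever PCA/PLS pipeline you are using):

k <- 5
subjects <- unique(dat$subject)
fold_of  <- sample(rep(1:k, length.out = length(subjects)))  # assign whole subjects to folds
names(fold_of) <- subjects
dat$fold <- fold_of[as.character(dat$subject)]
for (i in 1:k) {
  train <- dat[dat$fold != i, ]
  test  <- dat[dat$fold == i, ]
  # fit the PCA/PLS model on 'train' only, then evaluate on 'test'
}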
What's the best way to choose data for Crossvalidation on linear regression settings (PCA, PLS)
You want your evaluation to tell you something useful about your system's performance. Using a specific, held-out test set is nice because it tells you how the system will perform on totally new data
What's the best way to choose data for Crossvalidation on linear regression settings (PCA, PLS) You want your evaluation to tell you something useful about your system's performance. Using a specific, held-out test set is nice because it tells you how the system will perform on totally new data. On the other hand, it's hard-to-impossible to perform meaningful inference (i.e., "In general, is my system better than this other one?") with only a single data point, and one rarely has nearly enough data to create multiple test partitions. Cross validation (and similar techniques) try to estimate the generalization error by dividing up the data into multiple training and test sets and evaluating on each of them. This is nice because you get multiple estimates of the system's generalization ability, which allows you to do some proper statistical comparisons. However, these are only valid if the cross-validation folds are set up appropriately. For a "classic" machine learning problem with one set of observations per subject (e.g., the Fisher Iris or Pima Indian data sets), it's hard to mess up cross-validation. Divide your data into $k$ folds, use $k-1$ of them for training and test on the last one. Lather, rinse, and repeat until each fold has been used as the test set. This is probably less advisable for a data set like yours, where there are multiple (64x10x?) observations from each subject. If the observations are correlated across time, space/sensor, and within subjects (as yours almost certainly are), then the model might "learn" some of these associations, which would boost its performance when data from the same/nearby time points, sensors, and subjects appear in the test set, and thus provide an over-optimistic estimate of your system's ability to generalize. The most conservative approach would be to stratify your cross-validation by subject: all the data from a given subject goes into the same fold (e.g., subjects 1-5 are in fold 1, 6-10 in fold 2, etc.) and then cross-validation proceeds as normal. If you're willing to assert that temporally-distant parts of the signal are independent, I guess you could try stratifying that way too, but I'd find that considerably less satisfying. What I would not do is dump all the data into a big list, regardless of time point, sensor, or subject, and then randomly divide that list into folds. I can almost guarantee that this will understate your generalization error (i.e., overstate your system's ability to generalize)!
What's the best way to choose data for Crossvalidation on linear regression settings (PCA, PLS) You want your evaluation to tell you something useful about your system's performance. Using a specific, held-out test set is nice because it tells you how the system will perform on totally new data
49,491
Update rule for beta distribution with fixed K/confidence/sample size
I've taken a look at the book. It seems to me that the rationale for this "prior sample size" terminology is the following. We have the usual model with $X_1,\dots,X_n$ conditionally independent and identically distributed, given $\Theta=\theta$, with distribution $X_1\mid\Theta=\theta\sim\mathrm{Ber}(\theta)$. Suppose that a priori $\Theta\sim\mathrm{Beta}(a,b)$. The prior mean is just $\mathbb{E}[\Theta]=a/(a+b)=:\mu$. If $a$ and $b$ are integers bigger than $1$, one way to interpret this prior is to suppose that we started with a $\mathrm{U}[0,1]$ prior and observed $a-1$ successes and $b-1$ failures in a Bernoulli experiment. By Bayes's theorem, the "posterior" of $\Theta$ for this gedanken experiment is exactly $\mathrm{Beta}(a,b)$. Hence, we may suggestively define the prior sample size $\nu:=a+b-2$. We know from the properties of the beta distribution that a bigger $\nu$ will give us a more concentrated distribution. Now, to answer your question: what you want to do seems impossible, because the more data you observe, the smaller your posterior uncertainty about $\Theta$ will be. The posterior of $\Theta$ is $\mathrm{Beta}(c,d)$, with $c=a+\sum_{i=1}^n x_i$ and $d=b+n-\sum_{i=1}^n x_i$. Hence, $c+d$ grows linearly with $n$.
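A small R sketch of the conjugate update, showing that the posterior "sample size" c + d grows with n and the distribution keeps concentrating (the numbers are made up):

a <- 3; b <- 5                          # prior Beta(a, b); prior sample size a + b - 2 = 6
x <- rbinom(50, size = 1, prob = 0.7)   # 50 Bernoulli observations
c_post <- a + sum(x)
d_post <- b + length(x) - sum(x)
c_post + d_post                         # grows linearly with n
qbeta(c(0.025, 0.975), c_post, d_post)  # the posterior interval keeps narrowing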
Update rule for beta distribution with fixed K/confidence/sample size
I've taken a look at the book. It seems to me that the rationale for this "prior sample size" teminology is the following. We have the usual model with $X_1,\dots,X_n$ conditionaly independent and ide
Update rule for beta distribution with fixed K/confidence/sample size I've taken a look at the book. It seems to me that the rationale for this "prior sample size" terminology is the following. We have the usual model with $X_1,\dots,X_n$ conditionally independent and identically distributed, given $\Theta=\theta$, with distribution $X_1\mid\Theta=\theta\sim\mathrm{Ber}(\theta)$. Suppose that a priori $\Theta\sim\mathrm{Beta}(a,b)$. The prior mean is just $\mathbb{E}[\Theta]=a/(a+b)=:\mu$. If $a$ and $b$ are integers bigger than $1$, one way to interpret this prior is to suppose that we started with a $\mathrm{U}[0,1]$ prior and observed $a-1$ successes and $b-1$ failures in a Bernoulli experiment. By Bayes's theorem, the "posterior" of $\Theta$ for this gedanken experiment is exactly $\mathrm{Beta}(a,b)$. Hence, we may suggestively define the prior sample size $\nu:=a+b-2$. We know from the properties of the beta distribution that a bigger $\nu$ will give us a more concentrated distribution. Now, to answer your question: what you want to do seems impossible, because the more data you observe, the smaller your posterior uncertainty about $\Theta$ will be. The posterior of $\Theta$ is $\mathrm{Beta}(c,d)$, with $c=a+\sum_{i=1}^n x_i$ and $d=b+n-\sum_{i=1}^n x_i$. Hence, $c+d$ grows linearly with $n$.
Update rule for beta distribution with fixed K/confidence/sample size I've taken a look at the book. It seems to me that the rationale for this "prior sample size" teminology is the following. We have the usual model with $X_1,\dots,X_n$ conditionaly independent and ide
49,492
Update rule for beta distribution with fixed K/confidence/sample size
I like Zen's answer and want to add to it in order to clarify some misconceptions in the question. "Confidence" or "sample size" cannot be meaningfully applied to a posterior distribution; only a likelihood can have a notion of "sample size". That's the motivation behind Zen's point that the confidence should be $a + b - 2$. In general, the only way to map the likelihood of an exponential family distribution into a confidence is to linearly transform its natural parameters (for the Beta distribution these are $a-1, b-1$) to the positive real line. (For example, imagine one light that turns on when two heads are flipped and another light that turns on when two tails are flipped; then a single observation might induce a likelihood under which $a$ or $b$ goes up by more than 1.) To answer your question: do the whole Bayesian inference to arrive at the natural parameters of the induced likelihood ($\Delta a, \Delta b$ in your case), and then scale them down to your desired "confidence" if they exceed it. After scaling, you can combine your likelihood with your prior. To me, this scaling is meaningful: you're saying that the evidence you have saturates after a certain point. (I'm interested in this subject if anyone has any references.)
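A rough R sketch of the capped update described above (the cap K and the increments are illustrative; this is the heuristic discussed in the question, not standard Bayesian updating):

cap_update <- function(a, b, delta_a, delta_b, K) {
  total <- delta_a + delta_b
  if (total > K) {                      # rescale the likelihood's increments to the cap
    delta_a <- delta_a * K / total
    delta_b <- delta_b * K / total
  }
  c(a = a + delta_a, b = b + delta_b)   # then combine with the prior as usual
}
cap_update(a = 2, b = 2, delta_a = 30, delta_b = 10, K = 20)  # made-up numbers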
Update rule for beta distribution with fixed K/confidence/sample size
I like Zen's answer and want to add to it in order to clarify some misconceptions in the question. "Confidence" or "sample size" cannot be meaningfully applied to a posterior distribution. Only a li
Update rule for beta distribution with fixed K/confidence/sample size I like Zen's answer and want to add to it in order to clarify some misconceptions in the question. "Confidence" or "sample size" cannot be meaningfully applied to a posterior distribution; only a likelihood can have a notion of "sample size". That's the motivation behind Zen's point that the confidence should be $a + b - 2$. In general, the only way to map the likelihood of an exponential family distribution into a confidence is to linearly transform its natural parameters (for the Beta distribution these are $a-1, b-1$) to the positive real line. (For example, imagine one light that turns on when two heads are flipped and another light that turns on when two tails are flipped; then a single observation might induce a likelihood under which $a$ or $b$ goes up by more than 1.) To answer your question: do the whole Bayesian inference to arrive at the natural parameters of the induced likelihood ($\Delta a, \Delta b$ in your case), and then scale them down to your desired "confidence" if they exceed it. After scaling, you can combine your likelihood with your prior. To me, this scaling is meaningful: you're saying that the evidence you have saturates after a certain point. (I'm interested in this subject if anyone has any references.)
Update rule for beta distribution with fixed K/confidence/sample size I like Zen's answer and want to add to it in order to clarify some misconceptions in the question. "Confidence" or "sample size" cannot be meaningfully applied to a posterior distribution. Only a li
49,493
Weibull regression with known intercept in R
Without commenting on the validity of the method, from an R formula point of view, you can explicitly remove the intercept term by using -1 or +0 eg something like survreg(Surv(y)~x -1 + offset(rep(log(rweibull_scale),length(x))), scale=1/rweibull_shape) or survreg(Surv(y)~x+0 + offset(rep(log(rweibull_scale),length(x))), scale=1/rweibull_shape) Using the example from ?survreg and fixing the shape at 0.2 (as an example, without any reasoning) survreg(Surv(futime, fustat) ~ ecog.ps + rx+0 + offset(rep(0.2,nrow(ovarian))), ovarian, dist='weibull', scale=1/0.2) Coefficients: ecog.ps rx 1.161230 5.525834 Scale fixed at 5 Loglik(model)= -111 Loglik(intercept only)= -110.1 Chisq= -1.72 on 1 degrees of freedom, p= 1 n= 26 Compared with the model where the intercept and scale are estimated survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='weibull') Call: survreg(formula = Surv(futime, fustat) ~ ecog.ps + rx, data = ovarian, dist = "weibull") Coefficients: (Intercept) ecog.ps rx 6.8966931 -0.3850425 0.5286455 Scale= 0.8838731 Loglik(model)= -97.1 Loglik(intercept only)= -98 Chisq= 1.74 on 2 degrees of freedom, p= 0.42 n= 26
Weibull regression with known intercept in R
Without commenting on the validity of the method, from an R formula point of view, you can explicitly remove the intercept term by using -1 or +0 eg something like survreg(Surv(y)~x -1 + offset(rep(lo
Weibull regression with known intercept in R Without commenting on the validity of the method, from an R formula point of view, you can explicitly remove the intercept term by using -1 or +0 eg something like survreg(Surv(y)~x -1 + offset(rep(log(rweibull_scale),length(x))), scale=1/rweibull_shape) or survreg(Surv(y)~x+0 + offset(rep(log(rweibull_scale),length(x))), scale=1/rweibull_shape) Using the example from ?survreg and fixing the shape at 0.2 (as an example, without any reasoning) survreg(Surv(futime, fustat) ~ ecog.ps + rx+0 + offset(rep(0.2,nrow(ovarian))), ovarian, dist='weibull', scale=1/0.2) Coefficients: ecog.ps rx 1.161230 5.525834 Scale fixed at 5 Loglik(model)= -111 Loglik(intercept only)= -110.1 Chisq= -1.72 on 1 degrees of freedom, p= 1 n= 26 Compared with the model where the intercept and scale are estimated survreg(Surv(futime, fustat) ~ ecog.ps + rx, ovarian, dist='weibull') Call: survreg(formula = Surv(futime, fustat) ~ ecog.ps + rx, data = ovarian, dist = "weibull") Coefficients: (Intercept) ecog.ps rx 6.8966931 -0.3850425 0.5286455 Scale= 0.8838731 Loglik(model)= -97.1 Loglik(intercept only)= -98 Chisq= 1.74 on 2 degrees of freedom, p= 0.42 n= 26
Weibull regression with known intercept in R Without commenting on the validity of the method, from an R formula point of view, you can explicitly remove the intercept term by using -1 or +0 eg something like survreg(Surv(y)~x -1 + offset(rep(lo
49,494
Two sequences, one HMM
If you'd like to know the theory of doing this, it's covered in Rabiner's great paper "A tutorial on hidden Markov models and selected applications in speech recognition" (Proceedings of the IEEE, 1989, 77(2), pp. 257-286; the full text is available on multiple websites online - just google the name). As for whether there is an implementation in Matlab (or any other environment), I unfortunately don't know.
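If R is an option, one package that (as far as I know) can fit a single HMM to several observation sequences is depmixS4, via its ntimes argument; treat the sketch below as a pointer to check against the package documentation rather than a recipe (seq1 and seq2 are invented numeric sequences):

library(depmixS4)
d   <- data.frame(obs = c(seq1, seq2))          # stack the two sequences
mod <- depmix(obs ~ 1, data = d, nstates = 2,
              ntimes = c(length(seq1), length(seq2)))  # one entry per sequence
fit(mod)                                        # EM over both sequences jointly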
Two sequences, one HMM
If you'd like to know the theory of doing this, it's covered in Rabiner's great paper "A tutorial to Hidden Markov models and selected applications in speech recognition" (Proc of the IEEE, 1989, 77(2
Two sequences, one HMM If you'd like to know the theory of doing this, it's covered in Rabiner's great paper "A tutorial on hidden Markov models and selected applications in speech recognition" (Proceedings of the IEEE, 1989, 77(2), pp. 257-286; the full text is available on multiple websites online - just google the name). As for whether there is an implementation in Matlab (or any other environment), I unfortunately don't know.
Two sequences, one HMM If you'd like to know the theory of doing this, it's covered in Rabiner's great paper "A tutorial to Hidden Markov models and selected applications in speech recognition" (Proc of the IEEE, 1989, 77(2
49,495
Validation: Data splitting into training vs. test datasets
The split sample validation you proposed above has become less popular in many fields because of the issue Harrell mentions (unreliable out-of-bag estimates). I know Harrell has mentioned this in his textbook, but other references would be Steyerberg "Clinical Prediction Models" p. 301 and James et al "An Introduction to Statistical Learning" p. 175. In the biomedical field bootstrap resampling has thus become the standard. This is implemented in Harrell's rms package and so is fairly easy to apply. But you could really use any of the other resampling methods; the bootstrap has just become popular because of a Steyerberg article suggesting it is the most efficient of the resampling methods ("Internal validation of predictive models: efficiency of some procedures for logistic regression analysis"). It is worth mentioning that a benefit of the rms package is that it easily enables you to include some of the variable selection in the bootstrap (a built-in stepwise selection option). This can be awkward to achieve with most commercial packages. My own sense is that the differences have been overemphasized. I usually get pretty reliable/consistent results irrespective of the method used, and with large sample sizes the differences are really non-existent. Bootstrap validation - as well as the other resampling methods - can also easily be done wrong. Often only some of the model building stages are included in the bootstrap, giving inaccurate estimates. On the other hand, it is fairly hard to mess up split sample validation. Given the face validity of split sampling - I know you didn't muck it up - I prefer split sample unless it is a very small dataset. In many cases the model building process is also complicated enough that it can't really be included in a resampling method. If you want to publish in a biomedical journal, though, and you aren't using a Medicare-sized database, you will want to use a resampling method - likely bootstrapping. If the dataset is large, you can likely still get published with k-fold and save yourself some processing time.
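A rough sketch of bootstrap validation with the rms package (the formula, data frame and number of resamples are invented; see the package documentation for details):

library(rms)
fit <- lrm(outcome ~ age + sex + marker, data = dat, x = TRUE, y = TRUE)
validate(fit, method = "boot", B = 300)   # optimism-corrected performance indexes
# validate(fit, bw = TRUE, B = 300)       # repeats backward stepdown selection in each resample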
Validation: Data splitting into training vs. test datasets
The split sample validation you proposed above has become less popular in many fields because of the issue Harrell mentions (unreliable out of bag estimates). I know Harrell has mentioned this in his
Validation: Data splitting into training vs. test datasets The split sample validation you proposed above has become less popular in many fields because of the issue Harrell mentions (unreliable out-of-bag estimates). I know Harrell has mentioned this in his textbook, but other references would be Steyerberg "Clinical Prediction Models" p. 301 and James et al "An Introduction to Statistical Learning" p. 175. In the biomedical field bootstrap resampling has thus become the standard. This is implemented in Harrell's rms package and so is fairly easy to apply. But you could really use any of the other resampling methods; the bootstrap has just become popular because of a Steyerberg article suggesting it is the most efficient of the resampling methods ("Internal validation of predictive models: efficiency of some procedures for logistic regression analysis"). It is worth mentioning that a benefit of the rms package is that it easily enables you to include some of the variable selection in the bootstrap (a built-in stepwise selection option). This can be awkward to achieve with most commercial packages. My own sense is that the differences have been overemphasized. I usually get pretty reliable/consistent results irrespective of the method used, and with large sample sizes the differences are really non-existent. Bootstrap validation - as well as the other resampling methods - can also easily be done wrong. Often only some of the model building stages are included in the bootstrap, giving inaccurate estimates. On the other hand, it is fairly hard to mess up split sample validation. Given the face validity of split sampling - I know you didn't muck it up - I prefer split sample unless it is a very small dataset. In many cases the model building process is also complicated enough that it can't really be included in a resampling method. If you want to publish in a biomedical journal, though, and you aren't using a Medicare-sized database, you will want to use a resampling method - likely bootstrapping. If the dataset is large, you can likely still get published with k-fold and save yourself some processing time.
Validation: Data splitting into training vs. test datasets The split sample validation you proposed above has become less popular in many fields because of the issue Harrell mentions (unreliable out of bag estimates). I know Harrell has mentioned this in his
49,496
Cumulative counts or counts for Poisson regression
The approach generally taken is to regress the counts on the features that were present during the intervals during which the counts accumulated. The length of the interval is used as an offset after applying log() to the values, to match the default log link for a Poisson model. The data situation described does not justify anything very fancy; a glm model would suffice. The first offset would be log(T1), assuming T0 was 0, and the second offset would be log(T2-T1). Edit: with the better description/illustration of your data, I would say definitely go with the first of your alternatives. You are not interested in the cumulative value per period but in the incremental value per period. See comment below.
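A minimal sketch of that model in R (the column names are invented): count is the number of events in an interval, exposure is the interval length (T1, then T2 - T1, ...), and x1, x2 are the features present during that interval.

fit <- glm(count ~ x1 + x2 + offset(log(exposure)),
           family = poisson, data = dat)
summary(fit)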
Cumulative counts or counts for Poisson regression
The approach generally taken is to regress the counts on features that were present during the intervals during which the counts accumulated. The length of the interval is used as an offset after appl
Cumulative counts or counts for Poisson regression The approach generally taken is to regress the counts on the features that were present during the intervals during which the counts accumulated. The length of the interval is used as an offset after applying log() to the values, to match the default log link for a Poisson model. The data situation described does not justify anything very fancy; a glm model would suffice. The first offset would be log(T1), assuming T0 was 0, and the second offset would be log(T2-T1). Edit: with the better description/illustration of your data, I would say definitely go with the first of your alternatives. You are not interested in the cumulative value per period but in the incremental value per period. See comment below.
Cumulative counts or counts for Poisson regression The approach generally taken is to regress the counts on features that were present during the intervals during which the counts accumulated. The length of the interval is used as an offset after appl
49,497
Testing whether there is an increase between two regression slopes in a time series
I feel that after three years it does no harm to post my own answer to how I solved this problem. It could be the case that I have made an error or two, so please take it with a pinch of salt! Following whuber's advice in the comments: since my data is a time series of heights, we would expect continuity of the height at the change-point (1980). So I recoded the dates as the number of years from 1980, such that 1980 is 0, negative values are for dates prior to then and positive values post 1980. Then I created two new variables: $before$, which contains the recoded years for dates prior to 1980 and zero afterwards; and $after$, which contains the recoded years for dates after 1980 and zero beforehand. The intention is then to find the plane of best fit for height as a function of these two variables (the original answer showed a plot of the data in this form). Applying a multiple regression with these two variables as explanatory and height as response yielded the following: $estimated\ height = 4.1\cdot before + 1.3\cdot after+7066$. In the plot from the original answer, the black line is a regression on the whole data set whereas the red kinked line is the model above. So my original question now becomes whether this kink is significant, i.e., whether there is a difference between the slope of before and the slope of after. Assuming I can use the relevant statistics obtained from the multiple regression, I followed this response to compute the test statistic $Z=\dfrac{b_{after}-b_{before}}{\sqrt{SEb_{after}^2+SEb_{before}^2}}$, which with my values gives $Z=3.95$, and from there I can continue to calculate the $p$-value depending on the direction of the hypothesis. (In my case I hypothesised an increase, but that doesn't appear to be the case!)
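In R, the construction and the test could look roughly like this (the variable names are invented; it reads the slopes and standard errors off the summary() coefficient table):

dat$t      <- dat$year - 1980
dat$before <- ifelse(dat$t < 0, dat$t, 0)
dat$after  <- ifelse(dat$t > 0, dat$t, 0)
fit <- lm(height ~ before + after, data = dat)
cf <- summary(fit)$coefficients
z  <- (cf["after", "Estimate"] - cf["before", "Estimate"]) /
      sqrt(cf["after", "Std. Error"]^2 + cf["before", "Std. Error"]^2)
z   # refer to a normal (or t) distribution, one- or two-sided as appropriate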
Testing whether there is an increase between two regression slopes in a time series
I feel that after three years it does no harm to post my own answer to how I solved this problem. It could be the case I have made an error or two, so please use with a pinch of salt! Following whuber
Testing whether there is an increase between two regression slopes in a time series I feel that after three years it does no harm to post my own answer to how I solved this problem. It could be the case that I have made an error or two, so please take it with a pinch of salt! Following whuber's advice in the comments: since my data is a time series of heights, we would expect continuity of the height at the change-point (1980). So I recoded the dates as the number of years from 1980, such that 1980 is 0, negative values are for dates prior to then and positive values post 1980. Then I created two new variables: $before$, which contains the recoded years for dates prior to 1980 and zero afterwards; and $after$, which contains the recoded years for dates after 1980 and zero beforehand. The intention is then to find the plane of best fit for height as a function of these two variables (the original answer showed a plot of the data in this form). Applying a multiple regression with these two variables as explanatory and height as response yielded the following: $estimated\ height = 4.1\cdot before + 1.3\cdot after+7066$. In the plot from the original answer, the black line is a regression on the whole data set whereas the red kinked line is the model above. So my original question now becomes whether this kink is significant, i.e., whether there is a difference between the slope of before and the slope of after. Assuming I can use the relevant statistics obtained from the multiple regression, I followed this response to compute the test statistic $Z=\dfrac{b_{after}-b_{before}}{\sqrt{SEb_{after}^2+SEb_{before}^2}}$, which with my values gives $Z=3.95$, and from there I can continue to calculate the $p$-value depending on the direction of the hypothesis. (In my case I hypothesised an increase, but that doesn't appear to be the case!)
Testing whether there is an increase between two regression slopes in a time series I feel that after three years it does no harm to post my own answer to how I solved this problem. It could be the case I have made an error or two, so please use with a pinch of salt! Following whuber
49,498
Testing whether there is an increase between two regression slopes in a time series
The tests that have been suggested are limited to comparing the two slopes, BUT the slopes could be the same while the two intercepts are different. The Chow Test http://en.wikipedia.org/wiki/Chow_test tests the significance of both coefficients, not just the slope. Some software will actually find the breakpoint (if any) rather than force the user to guess at the breakpoint. AUTOBOX, a piece of software I am involved with, actually does that. You can get a 30-day version to use on your problem at no cost. Nothing is cheaper than free!
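A hand-rolled sketch of the Chow test at a known breakpoint (illustrative only; dat, y and x are invented, and the test assumes equal error variance in the two segments):

chow <- function(dat, breakpoint) {
  pooled <- lm(y ~ x, data = dat)                 # single fit to all the data
  fit1   <- lm(y ~ x, data = dat[dat$x <= breakpoint, ])
  fit2   <- lm(y ~ x, data = dat[dat$x >  breakpoint, ])
  rss_p  <- sum(resid(pooled)^2)
  rss_u  <- sum(resid(fit1)^2) + sum(resid(fit2)^2)
  p <- 2; n <- nrow(dat)                          # intercept and slope per segment
  F <- ((rss_p - rss_u) / p) / (rss_u / (n - 2 * p))
  pf(F, p, n - 2 * p, lower.tail = FALSE)         # tests intercept and slope jointly
}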
Testing whether there is an increase between two regression slopes in a time series
The tests that habe been suggested are limited to comparing the two slopes BUT the slopes could be the same and the two ontercepts could be different. The Chow Test http://en.wikipedia.org/wiki/Chow_t
Testing whether there is an increase between two regression slopes in a time series The tests that have been suggested are limited to comparing the two slopes, BUT the slopes could be the same while the two intercepts are different. The Chow Test http://en.wikipedia.org/wiki/Chow_test tests the significance of both coefficients, not just the slope. Some software will actually find the breakpoint (if any) rather than force the user to guess at the breakpoint. AUTOBOX, a piece of software I am involved with, actually does that. You can get a 30-day version to use on your problem at no cost. Nothing is cheaper than free!
Testing whether there is an increase between two regression slopes in a time series The tests that habe been suggested are limited to comparing the two slopes BUT the slopes could be the same and the two ontercepts could be different. The Chow Test http://en.wikipedia.org/wiki/Chow_t
49,499
Confusion related to generalized linear model example
My less-technical explanation: Actually, you are approaching their answer. You realize that the formula for a large beach doesn't work well for a small beach, so you make a second model. But neither of these models will work well for a medium beach, so you add a third. None of these works well for a huge beach, so you add a fourth. If you get picky, you end up with models for micro, medium-large and extra-large beaches, too. Why the proliferation of models? If you take each one and calculate the population increase from a 1-degree increase, and plot these from smallest to largest, you'll see that it's not linear. It's exponential and its slope increases as the size increases. (You can already see that with the $50x$ versus $1000x$ beaches.) What your quote is describing is a multiplicative model, which is based not on absolute values but on percentages. This is exactly what you were trying to do (the hard way): an $x$-percent increase at a small beach is a small number, while at a large beach it is a large number. You're just trying to imitate this percentage-based increase by using a bunch of absolute-increase models. EDIT: Think of the combinations of action and reaction you expect to see in the world: (1) "If temperature increases 1 degree, the number of people at this beach will increase by approximately 100 people"; (2) "If the temperature increases 1 percent, the number of people at this beach will increase by approximately 1 percent"; (3) "If the temperature increases by 1 degree, the number of people at this beach will increase by approximately 1 percent"; (4) "If the temperature increases 1 percent, the number of people at this beach will increase by approximately 100 people". You are proposing that you can use #1 in any situation, but the quote is describing a situation where reality more closely resembles #2.
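A tiny illustration of the multiplicative (percentage-based) idea with a log-link count model in R (column names are invented; the Poisson family is just one convenient choice):

fit <- glm(people ~ temp, family = poisson(link = "log"), data = beach_data)
exp(coef(fit)["temp"])   # multiplicative factor per 1-degree increase;
                         # the same factor applies to a 50x beach and a 1000x beach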
Confusion related to generalized linear model example
My less-technical explanation: Actually, you are approaching their answer. You realize that the formula for a large beach doesn't work well for a small beach, so you make a second model. But neither o
Confusion related to generalized linear model example My less-technical explanation: Actually, you are approaching their answer. You realize that the formula for a large beach doesn't work well for a small beach, so you make a second model. But neither of these models will work well for a medium beach, so you add a third. None of these works well for a huge beach, so you add a fourth. If you get picky, you end up with models for micro, medium-large and extra-large beaches, too. Why the proliferation of models? If you take each one and calculate the population increase from a 1-degree increase, and plot these from smallest to largest, you'll see that it's not linear. It's exponential and its slope increases as the size increases. (You can already see that with the $50x$ versus $1000x$ beaches.) What your quote is describing is a multiplicative model, which is based not on absolute values but on percentages. This is exactly what you were trying to do (the hard way): an $x$-percent increase at a small beach is a small number, while at a large beach it is a large number. You're just trying to imitate this percentage-based increase by using a bunch of absolute-increase models. EDIT: Think of the combinations of action and reaction you expect to see in the world: (1) "If temperature increases 1 degree, the number of people at this beach will increase by approximately 100 people"; (2) "If the temperature increases 1 percent, the number of people at this beach will increase by approximately 1 percent"; (3) "If the temperature increases by 1 degree, the number of people at this beach will increase by approximately 1 percent"; (4) "If the temperature increases 1 percent, the number of people at this beach will increase by approximately 100 people". You are proposing that you can use #1 in any situation, but the quote is describing a situation where reality more closely resembles #2.
Confusion related to generalized linear model example My less-technical explanation: Actually, you are approaching their answer. You realize that the formula for a large beach doesn't work well for a small beach, so you make a second model. But neither o
49,500
Confusion related to generalized linear model example
The 'geometric' change with constant input means that it will change by a constant proportion, or factor, rather than by a constant additive amount. For example, if the old value is $y_{old}$ and $x$ goes up by 1, there will be an $\exp(\beta)$-fold change in $y$. That is, the new $y_{new}=\exp(\beta)y_{old}$. On the other hand, if the change were additive, it would be $y_{new}=y_{old}+\beta$ instead. In your example, you're using the wrong value for b (presumably b1; note that you also need a b0). Try the following (in R code, let me know if it's not transparent): > b1 = 0.06931472 # this is the appropriate b1, not b1=1 > b0 = 1-b1 # here I make the constant > X = 1 > exp(b0 + b1*X) [1] 2.718282 # this calculation gives you your original y > X = 11 # now we've incremented X by 10, > exp(b0 + b1*X) [1] 5.436564 # and y has doubled > X = 21 # we can go up by another 10, > exp(b0 + b1*X) [1] 10.87313 # and y doubles again Here's a quick way to create your own examples: > y1 = 2.7183 # this is the number you want y to be when X=1 > foldIncrease = 2 # this is the multiple by which you want y to go up > y2 = y1*foldIncrease # now we've created y_new > xUnits = 10 # this is the number of units that X will go up to get to y_new > b1 = (log(y2)-log(y1)) / xUnits > b1 [1] 0.06931472 # now we've calculated the appropriate b1 > b0 = log(y1)-b1 > b0 [1] 0.930692 # this gives us the appropriate value for b0
Confusion related to generalized linear model example
The 'geometric' change with constant input means that it will change by a constant proportion, or factor, rather than by a constant additive amount. For example, if the old value is $y_{old}$ and $x$
Confusion related to generalized linear model example The 'geometric' change with constant input means that it will change by a constant proportion, or factor, rather than by a constant additive amount. For example, if the old value is $y_{old}$ and $x$ goes up by 1, there will be an $\exp(\beta)$-fold change in $y$. That is, the new $y_{new}=\exp(\beta)y_{old}$. On the other hand, if the change were additive, it would be $y_{new}=y_{old}+\beta$ instead. In your example, you're using the wrong value for b (presumably b1; note that you also need a b0). Try the following (in R code, let me know if it's not transparent): > b1 = 0.06931472 # this is the appropriate b1, not b1=1 > b0 = 1-b1 # here I make the constant > X = 1 > exp(b0 + b1*X) [1] 2.718282 # this calculation gives you your original y > X = 11 # now we've incremented X by 10, > exp(b0 + b1*X) [1] 5.436564 # and y has doubled > X = 21 # we can go up by another 10, > exp(b0 + b1*X) [1] 10.87313 # and y doubles again Here's a quick way to create your own examples: > y1 = 2.7183 # this is the number you want y to be when X=1 > foldIncrease = 2 # this is the multiple by which you want y to go up > y2 = y1*foldIncrease # now we've created y_new > xUnits = 10 # this is the number of units that X will go up to get to y_new > b1 = (log(y2)-log(y1)) / xUnits > b1 [1] 0.06931472 # now we've calculated the appropriate b1 > b0 = log(y1)-b1 > b0 [1] 0.930692 # this gives us the appropriate value for b0
Confusion related to generalized linear model example The 'geometric' change with constant input means that it will change by a constant proportion, or factor, rather than by a constant additive amount. For example, if the old value is $y_{old}$ and $x$