Columns:
idx: int64 (values 1 to 56k)
question: string (lengths 15 to 155)
answer: string (lengths 2 to 29.2k)
question_cut: string (lengths 15 to 100)
answer_cut: string (lengths 2 to 200)
conversation: string (lengths 47 to 29.3k)
conversation_cut: string (lengths 47 to 301)
48,801
How to set SMOTE parameters in R package DMwR?
I would be surprised if you did see improvement when you had 30% 'rare' data in the training set. 30% isn't really all that rare in the context of machine learning. What you could do is cross validate with various levels of synthetic data to determine what's giving you the best accuracy on your hold-out data (pretty standard approach to parameter tuning) and then go with that for your final model build. But I would be very surprised based upon personal experience if you saw significant gains in accuracy when you SMOTE past 20-25% positive class instances in your training set.
How to set SMOTE parameters in R package DMwR?
I would be surprised if you did see improvement when you had 30% 'rare' data in the training set. 30% isn't really all that rare in the context of machine learning. What you could do is cross validate
How to set SMOTE parameters in R package DMwR? I would be surprised if you did see improvement when you had 30% 'rare' data in the training set. 30% isn't really all that rare in the context of machine learning. What you could do is cross validate with various levels of synthetic data to determine what's giving you the best accuracy on your hold-out data (pretty standard approach to parameter tuning) and then go with that for your final model build. But I would be very surprised based upon personal experience if you saw significant gains in accuracy when you SMOTE past 20-25% positive class instances in your training set.
How to set SMOTE parameters in R package DMwR? I would be surprised if you did see improvement when you had 30% 'rare' data in the training set. 30% isn't really all that rare in the context of machine learning. What you could do is cross validate
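A minimal R sketch of the tuning idea above, i.e. trying several SMOTE levels and keeping the one that does best on untouched hold-out data. The data, the logistic-regression learner, and the perc.over grid are all made up for illustration; DMwR::SMOTE with its usual arguments is assumed.
library(DMwR)
set.seed(1)
dat <- data.frame(x1 = rnorm(400), x2 = rnorm(400))
dat$Class <- factor(ifelse(dat$x1 + rnorm(400, sd = 2) > 2, "rare", "common"))
idx   <- sample(nrow(dat), 300)
train <- dat[idx, ]
hold  <- dat[-idx, ]
for (po in c(100, 200, 300)) {                     # varying amounts of synthetic minority data
  tr   <- SMOTE(Class ~ ., train, perc.over = po, perc.under = 200)
  fit  <- glm(Class ~ x1 + x2, data = tr, family = binomial)
  pred <- ifelse(predict(fit, hold, type = "response") > 0.5, "rare", "common")
  cat("perc.over =", po, " hold-out accuracy =", round(mean(pred == hold$Class), 3), "\n")
}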
48,802
One step ahead forecast with new data collected sequentially
You don't need the loop here. The one-step forecasts are the same as fitted values in a time series model. So the following should do what you want:
library(forecast)
model <- auto.arima(y)
newfit <- Arima(c(y, new.data), model = model)
onestep.for <- fitted(newfit)[1001:1010]
One step ahead forecast with new data collected sequentially
You don't need the loop here. The one-step forecasts are the same as fitted values in a time series model. So the following should do what you want: library(forecast) model <- auto.arima(y) newfit <-
One step ahead forecast with new data collected sequentially You don't need the loop here. The one-step forecasts are the same as fitted values in a time series model. So the following should do what you want: library(forecast) model <- auto.arima(y) newfit <- Arima(c(y,new.data), model=model) onestep.for <- fitted(newfit)[1001:1010]
One step ahead forecast with new data collected sequentially You don't need the loop here. The one-step forecasts are the same as fitted values in a time series model. So the following should do what you want: library(forecast) model <- auto.arima(y) newfit <-
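A reproducible version of the same idea, with simulated series standing in for y and new.data (the 1001:1010 indices then pick out the one-step forecasts for the 10 new observations):
library(forecast)
set.seed(1)
y        <- arima.sim(list(ar = 0.6), n = 1000)
new.data <- arima.sim(list(ar = 0.6), n = 10)
model    <- auto.arima(y)
newfit   <- Arima(c(y, new.data), model = model)   # reuse the fitted model, no re-estimation
onestep.for <- fitted(newfit)[1001:1010]           # one-step-ahead forecasts for the new points
onestep.for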
48,803
critical value of a point mass at zero and a chi square distribution with one degree of freedom
The area to the right of any point above 0 is half that of a $\chi_1^2$. So to get a level $\alpha$ test, look up the $2\alpha$ point of a $\chi_1^2$, as long as $\alpha < 0.5$. Of course, p-values work similarly: look the value up as if it were a $\chi_1^2$ and halve the resulting p-value.
critical value of a point mass at zero and a chi square distribution with one degree of freedom
The area to the right of any point above 0 is half that of a $\chi_1^2$. So to get a level $\alpha$ test, look up the $2\alpha$ point of a $\chi_1^2$. .... as long as $\alpha < 0.5$. Of course, p-val
critical value of a point mass at zero and a chi square distribution with one degree of freedom The area to the right of any point above 0 is half that of a $\chi_1^2$. So to get a level $\alpha$ test, look up the $2\alpha$ point of a $\chi_1^2$. .... as long as $\alpha < 0.5$. Of course, p-values work similarly. Look the value up as if it were a $\chi_1^2$ and halve the resulting p-value.`
critical value of a point mass at zero and a chi square distribution with one degree of freedom The area to the right of any point above 0 is half that of a $\chi_1^2$. So to get a level $\alpha$ test, look up the $2\alpha$ point of a $\chi_1^2$. .... as long as $\alpha < 0.5$. Of course, p-val
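A quick numeric illustration of the halving rule in R, assuming (purely for illustration) $\alpha = 0.05$ and an observed statistic of 3.2:
alpha <- 0.05
qchisq(1 - 2 * alpha, df = 1)      # critical value for the 50:50 mixture, approx. 2.71
stat <- 3.2
0.5 * (1 - pchisq(stat, df = 1))   # p-value: look the statistic up in chi^2_1 and halve it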
48,804
critical value of a point mass at zero and a chi square distribution with one degree of freedom
Some R code if interested. Using the TcGSA package:
ss <- 0.3
sample_mixt <- TcGSA:::rchisqmix(n = 1e5, s = 0, q = 1)
TcGSA:::pval_simu(s = ss, sample_mixt)
Using base R:
1/2 * (1 - pchisq(ss, df = 1))
critical value of a point mass at zero and a chi square distribution with one degree of freedom
Some R code if interested: Using TcGSA package: ss <- 0.3 sample_mixt <- TcGSA:::rchisqmix(n=1e5, s=0, q=1) TcGSA:::pval_simu(s=ss, sample_mixt) Using base: 1/2*(1-pchisq(ss,df=1))
critical value of a point mass at zero and a chi square distribution with one degree of freedom Some R code if interested: Using TcGSA package: ss <- 0.3 sample_mixt <- TcGSA:::rchisqmix(n=1e5, s=0, q=1) TcGSA:::pval_simu(s=ss, sample_mixt) Using base: 1/2*(1-pchisq(ss,df=1))
critical value of a point mass at zero and a chi square distribution with one degree of freedom Some R code if interested: Using TcGSA package: ss <- 0.3 sample_mixt <- TcGSA:::rchisqmix(n=1e5, s=0, q=1) TcGSA:::pval_simu(s=ss, sample_mixt) Using base: 1/2*(1-pchisq(ss,df=1))
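A base-R simulation check of the same quantity (a 50:50 mixture of a point mass at zero and a $\chi_1^2$), which should agree closely with the closed form above:
set.seed(1)
ss  <- 0.3
mix <- ifelse(runif(1e6) < 0.5, 0, rchisq(1e6, df = 1))   # draw from the mixture
mean(mix > ss)                                            # simulated tail probability
0.5 * (1 - pchisq(ss, df = 1))                            # closed form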
48,805
Law of total expectation and how prove that two variables are independent
It's impossible to read minds, so to locate any error in thinking let's help you work through as simple an example as possible using the most basic possible definitions and axioms of probability. Consider a sample space $\Omega = \{au, bu, cu, av, bv, cv\}$ ("$au$" etc. are just names of six abstract things) where all subsets are considered measurable. Define a probability $\mathbb{P}$ on $\Omega$ in terms of its values on the atoms via $$\mathbb{P}(au) = \mathbb{P}(cu) = p,\ \mathbb{P}(bu) = r-2p;\quad \mathbb{P}(av) = \mathbb{P}(cv) = q,\ \mathbb{P}(bv) = 1-r-2q$$ where $p,q,r$ are any numbers for which all six probabilities are positive. For instance, we may take $p=1/6$, $q=1/8$, and $r=1/2$. Because the sum of the six given probabilities is unity, this defines a valid probability measure. Define the random variables $U$ and $X$ as $$U(\omega)=-1, 0, 1$$ depending on whether the initial letter in the name of $\omega$ is $a$, $b$, or $c$ respectively; and $$X(\omega) = 0,1$$ depending on whether the final letter in the name of $\omega$ is $u$ or $v$ respectively. This can be neatly summarized in a $3$ by $2$ table of probabilities, headed by values of $U$ and $X$, whose interpretation I trust is evident: $$\begin{array}{r|cc} & \text{X=0} & \text{X=1} \\ \hline \text{U=-1} & p & q \\ \text{U=0} & r-2 p & 1-r-2 q \\ \text{U=1} & p & q \end{array}$$ It is then easy to compute the following: $\mathbb{P}(X=0) = \mathbb{P}(\{au,bu,cu\}) = p + r-2p + p = r$ (sum the left column in the table). $\mathbb{P}(U=-1) = \mathbb{P}(\{au,av\}) = p+q$ (sum the top row in the table). Independence means nothing other than probabilities multiply. This would imply, among other things, that $$p = \mathbb{P}(au) = \mathbb{P}(X=0, U=-1) = \mathbb{P}(X=0)\mathbb{P}(U=-1) = r(p+q)$$ (investigate the top left entry in the table). But this is rarely the case; for instance, $1/6 \ne (1/2)(1/6 + 1/8)$. Therefore $X$ and $U$ are not independent (except for some special possible combinations of $p$, $q$, and $r$). However, $$\mathbb{E}(U) = \mathbb{P}(U=-1)(-1) + \mathbb{P}(U=0)(0) + \mathbb{P}(U=1)(1) = 0$$ That is, the expectation of $U$ is zero. (This should be obvious from the symmetry: the chance that $U=1$ balances the chance that $U=-1$ regardless of the value of $X$.) However, $$\mathbb{E}(U|X=0) = \mathbb{P}(U=-1, X=0)(-1) + \cdots + \mathbb{P}(U=1, X=0)(1) = 0,$$ and similarly $$\mathbb{E}(U|X=1) = 0.$$ That is, the conditional expectation of $U$--which is a function--has the constant value zero. In symbols, we have found that $$\mathbb{E}(U|X) = 0 = \mathbb{E}(U)$$ and we have a simple, explicit example showing why constant conditional expectation does not imply independence.
Law of total expectation and how prove that two variables are independent
It's impossible to read minds, so to locate any error in thinking let's help you work through as simple an example as possible using the most basic possible definitions and axioms of probability. Cons
Law of total expectation and how prove that two variables are independent It's impossible to read minds, so to locate any error in thinking let's help you work through as simple an example as possible using the most basic possible definitions and axioms of probability. Consider a sample space $\Omega = \{au, bu, cu, av, bv, cv\}$ ("$au$" etc. are just names of six abstract things) where all subsets are considered measurable. Define a probability $\mathbb{P}$ on $\Omega$ in terms of its values on the atoms via $$\mathbb{P}(au) = \mathbb{P}(cu) = p,\ \mathbb{P}(bu) = r-2p;\quad \mathbb{P}(av) = \mathbb{P}(cv) = q,\ \mathbb{P}(bv) = 1-r-2q$$ where $p,q,r$ are any numbers for which all six probabilities are positive. For instance, we may take $p=1/6$, $q=1/8$, and $r=1/2$. Because the sum of the six given probabilities is unity, this defines a valid probability measure. Define the random variables $U$ and $X$ as $$U(\omega)=-1, 0, 1$$ depending on whether the initial letter in the name of $\omega$ is $a$, $b$, or $c$ respectively; and $$X(\omega) = 0,1$$ depending on whether the final letter in the name of $\omega$ is $u$ or $v$ respectively. This can be neatly summarized in a $3$ by $2$ table of probabilities, headed by values of $U$ and $X$, whose interpretation I trust is evident: $$\begin{array}{r|cc} & \text{X=0} & \text{X=1} \\ \hline \text{U=-1} & p & q \\ \text{U=0} & r-2 p & 1-r-2 q \\ \text{U=1} & p & q \end{array}$$ It is then easy to compute the following: $\mathbb{P}(X=0) = \mathbb{P}(\{au,bu,cu\}) = p + r-2p + p = r$ (sum the left column in the table). $\mathbb{P}(U=-1) = \mathbb{P}(\{au,av\}) = p+q$ (sum the top row in the table). Independence means nothing other than probabilities multiply. This would imply, among other things, that $$p = \mathbb{P}(au) = \mathbb{P}(X=0, U=-1) = \mathbb{P}(X=0)\mathbb{P}(U=-1) = r(p+q)$$ (investigate the top left entry in the table). But this is rarely the case; for instance, $1/6 \ne (1/2)(1/6 + 1/8)$. Therefore $X$ and $U$ are not independent (except for some special possible combinations of $p$, $q$, and $r$). However, $$\mathbb{E}(U) = \mathbb{P}(U=-1)(-1) + \mathbb{P}(U=0)(0) + \mathbb{P}(U=1)(1) = 0$$ That is, the expectation of $U$ is zero. (This should be obvious from the symmetry: the chance that $U=1$ balances the chance that $U=-1$ regardless of the value of $X$.) However, $$\mathbb{E}(U|X=0) = \mathbb{P}(U=-1, X=0)(-1) + \cdots + \mathbb{P}(U=1, X=0)(1) = 0,$$ and similarly $$\mathbb{E}(U|X=1) = 0.$$ That is, the conditional expectation of $U$--which is a function--has the constant value zero. In symbols, we have found that $$\mathbb{E}(U|X) = 0 = \mathbb{E}(U)$$ and we have a simple, explicit example showing why constant conditional expectation does not imply independence.
Law of total expectation and how prove that two variables are independent It's impossible to read minds, so to locate any error in thinking let's help you work through as simple an example as possible using the most basic possible definitions and axioms of probability. Cons
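A small R check of the example, using the values $p = 1/6$, $q = 1/8$, $r = 1/2$ suggested in the answer:
p <- 1/6; q <- 1/8; r <- 1/2
tab <- matrix(c(p, r - 2*p, p, q, 1 - r - 2*q, q), ncol = 2,
              dimnames = list(U = c("-1", "0", "1"), X = c("0", "1")))
u <- c(-1, 0, 1)
sum(u * rowSums(tab))                  # E(U) = 0
sum(u * tab[, "0"]) / sum(tab[, "0"])  # E(U | X = 0) = 0
sum(u * tab[, "1"]) / sum(tab[, "1"])  # E(U | X = 1) = 0
c(joint = tab["-1", "0"],
  product = rowSums(tab)["-1"] * colSums(tab)["0"])  # 1/6 vs (1/2)(1/6 + 1/8): not independent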
48,806
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?
As I stated in the comments above, missing data can be handled by either the ltm or the mirt package when the data are MCAR. Here is an example of how to use both on a dataset with missing values:
> library(ltm)
> library(mirt)
> set.seed(1234)
> dat <- expand.table(LSAT7)
> dat[sample(1:(nrow(dat)*ncol(dat)), 150)] <- NA
> head(dat)
     Item.1 Item.2 Item.3 Item.4 Item.5
[1,]      0      0      0      0      0
[2,]      0      0      0      0      0
[3,]      0      0      0      0      0
[4,]      0      0      0      0      0
[5,]      0      0      0      0      0
[6,]      0      0      0      0     NA
> (ltmmod <- ltm(dat ~ z1))
Call:
ltm(formula = dat ~ z1)
Coefficients:
         Dffclt  Dscrmn
Item.1   -1.891   0.967
Item.2   -0.720   1.147
Item.3   -1.008   1.885
Item.4   -0.671   0.760
Item.5   -2.554   0.729
Log.Lik: -2572.402
> (mirtmod <- mirt(dat, 1))
Iteration: 22, Log-Lik: -2572.402, Max-Change: 0.00010
Call:
mirt(data = dat, model = 1)
Full-information item factor analysis with 1 factors
Converged in 22 iterations with 41 quadrature.
Log-likelihood = -2572.402
AIC = 5164.805; AICc = 5165.027
BIC = 5213.882; SABIC = 5182.122
> coef(mirtmod)
$Item.1
       a1     d  g  u
par 0.967 1.829  0  1
$Item.2
       a1     d  g  u
par 1.148 0.826  0  1
$Item.3
       a1     d  g  u
par 1.886 1.902  0  1
$Item.4
      a1    d  g  u
par 0.76 0.51  0  1
$Item.5
       a1     d  g  u
par 0.729 1.863  0  1
$GroupPars
    MEAN_1 COV_11
par      0      1
It's also possible to impute missing values given a good estimate of $\theta$ for obtaining things like model and item fit statistics (you should do this several times if the amount of missingness is non-trivial, and it's even better to jitter the $\hat{\theta}$ values as a function of the respective $SE_{\hat{\theta}}$ values for more reasonable imputation results).
> Theta <- fscores(mirtmod, full.scores = TRUE, scores.only = TRUE)
> fulldat <- imputeMissing(mirtmod, Theta)
> head(fulldat)
  Item.1 Item.2 Item.3 Item.4 Item.5
1      0      0      0      0      0
2      0      0      0      0      0
3      0      0      0      0      0
4      0      0      0      0      0
5      0      0      0      0      0
6      0      0      0      0      0
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?
As I stated in the comments above, missing data can be handled by either the ltm or mirt package when the data is MCAR. Here is an example of how to use both on a dataset with missing values: > libra
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's? As I stated in the comments above, missing data can be handled by either the ltm or mirt package when the data is MCAR. Here is an example of how to use both on a dataset with missing values: > library(ltm) > library(mirt > set.seed(1234) > dat <- expand.table(LSAT7) > dat[sample(1:(nrow(dat)*ncol(dat)), 150)] <- NA > head(dat) Item.1 Item.2 Item.3 Item.4 Item.5 [1,] 0 0 0 0 0 [2,] 0 0 0 0 0 [3,] 0 0 0 0 0 [4,] 0 0 0 0 0 [5,] 0 0 0 0 0 [6,] 0 0 0 0 NA > (ltmmod <- ltm(dat ~ z1)) Call: ltm(formula = dat ~ z1) Coefficients: Dffclt Dscrmn Item.1 -1.891 0.967 Item.2 -0.720 1.147 Item.3 -1.008 1.885 Item.4 -0.671 0.760 Item.5 -2.554 0.729 Log.Lik: -2572.402 > (mirtmod <- mirt(dat, 1)) Iteration: 22, Log-Lik: -2572.402, Max-Change: 0.00010 Call: mirt(data = dat, model = 1) Full-information item factor analysis with 1 factors Converged in 22 iterations with 41 quadrature. Log-likelihood = -2572.402 AIC = 5164.805; AICc = 5165.027 BIC = 5213.882; SABIC = 5182.122 > coef(mirtmod) $Item.1 a1 d g u par 0.967 1.829 0 1 $Item.2 a1 d g u par 1.148 0.826 0 1 $Item.3 a1 d g u par 1.886 1.902 0 1 $Item.4 a1 d g u par 0.76 0.51 0 1 $Item.5 a1 d g u par 0.729 1.863 0 1 $GroupPars MEAN_1 COV_11 par 0 1 It's also possible to impute missing values given a good estimate of $\theta$ for obtaining things like model and item fit statistics (should do this several times if the amount of missingness is non-trivial, and it's even better to jitter the $\hat{\theta}$ values as a function of the respective $SE_{\hat{\theta}}$ values for more reasonable imputation results). > Theta <- fscores(mirtmod, full.scores = TRUE, scores.only = TRUE) > fulldat <- imputeMissing(mirtmod, Theta) > head(fulldat) Item.1 Item.2 Item.3 Item.4 Item.5 1 0 0 0 0 0 2 0 0 0 0 0 3 0 0 0 0 0 4 0 0 0 0 0 5 0 0 0 0 0 6 0 0 0 0 0
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's? As I stated in the comments above, missing data can be handled by either the ltm or mirt package when the data is MCAR. Here is an example of how to use both on a dataset with missing values: > libra
48,807
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?
The package eRm (Mair & Hatzinger) also deals with missing values, but eRm only estimates unidimensional models: http://erm.r-forge.r-project.org/
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's?
The package eRm (Mair& Hatzinger) is also dealing with missing values. But eRm only estimates unidimensional models http://erm.r-forge.r-project.org/
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's? The package eRm (Mair& Hatzinger) is also dealing with missing values. But eRm only estimates unidimensional models http://erm.r-forge.r-project.org/
IRT in R: Does anyone know of an IRT item calibration function that can cope with NA's? The package eRm (Mair& Hatzinger) is also dealing with missing values. But eRm only estimates unidimensional models http://erm.r-forge.r-project.org/
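A minimal sketch of fitting a Rasch model with eRm on data containing NAs; the data here are randomly generated just to show the call (eRm::RM is assumed to accept NA entries, as the answer states):
library(eRm)
set.seed(1)
dat <- matrix(rbinom(100 * 6, 1, prob = 0.6), ncol = 6)
dat[sample(length(dat), 40)] <- NA     # punch some holes in the data
fit <- RM(dat)                         # Rasch model via conditional ML
summary(fit)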
48,808
Measuring k-means clustering quality on training and test sets
The problem, in particular with k-means applied to real-world, labeled data, is that clusters will usually not agree with your labels very well, unless you either generated the labels by using a similar clustering algorithm (self-fulfilling prophecy), or the data set is really simple. Have you tried computing the k-means statistics such as sums of squares etc. on the original data set? I would not at all be surprised if they are substantially worse than after running k-means. I figure it's just another case where the algorithm does not fit your problem. Evaluating clustering algorithms is really hard, because you actually want to find something you do not know yet. Even if the clustering reproduced the original labels, it would actually have failed, because it did not tell you anything new; you could just have used the labels instead. Maybe the most realistic evaluation for a clustering algorithm is the following: if you incorporate the result from the clustering algorithm into a classification algorithm, does it improve the classification accuracy significantly? I.e., treat clustering as a preprocessing/support step for an algorithm that you can evaluate reasonably.
Measuring k-means clustering quality on training and test sets
The problem, in particular with k-means applied to real world, labeled data is that clusters will usually not agree with your labels very well, unless you either generated the labels by using a simila
Measuring k-means clustering quality on training and test sets The problem, in particular with k-means applied to real world, labeled data is that clusters will usually not agree with your labels very well, unless you either generated the labels by using a similar clustering algorithm (self-fulfilling prophecy), or the data set is really simple. Have you tried computing the k-means-statistics such as sums of squares etc. on the original data set? I would not at all be surprised if they are substantially worse than after running k-means. I figure it's just another case of the algorithm does not fit to your problem. Evaluating clustering algorithms is really hard. Because you actually want to find something you do not know yet. Even if the clustering would reproduce the original labels it then actually failed, because it did not tell you something new, and then you could just have used the labels instead. Maybe the most realistic evaluation for a clustering algorithm is the following: if you incorporate the result from the clustering algorithm into a classification algorithm, does it improve the classification accuracy significantly? I.e. treat clustering as a preprocessing/support functionality for an algorithm that you can evaluate reasonably.
Measuring k-means clustering quality on training and test sets The problem, in particular with k-means applied to real world, labeled data is that clusters will usually not agree with your labels very well, unless you either generated the labels by using a simila
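A toy R sketch of that last evaluation idea: add the k-means cluster assignment as an extra feature and see whether a simple hold-out classifier improves. The iris data and the k = 5 nearest-neighbour classifier are arbitrary illustrative choices.
library(class)                       # for knn()
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]
km    <- kmeans(train[, 1:4], centers = 3, nstart = 25)
near  <- function(x) which.min(colSums((t(km$centers) - x)^2))   # nearest learned centroid
tr_cl <- km$cluster
te_cl <- apply(test[, 1:4], 1, near)
acc_base <- mean(knn(train[, 1:4], test[, 1:4], train$Species, k = 5) == test$Species)
acc_clus <- mean(knn(cbind(train[, 1:4], tr_cl), cbind(test[, 1:4], te_cl),
                     train$Species, k = 5) == test$Species)
c(baseline = acc_base, with_cluster_feature = acc_clus)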
48,809
A measure of overall variance from multivariate Gaussian
Just like the univariate variance is the average squared distance to the mean, $trace(\hat{\bf{\Sigma}})$ is the average squared distance to the centroid: With $\dot{\bf{X}}$ as the matrix of the centered variables, $\hat{\bf{\Sigma}} = \frac{1}{n} \dot{\bf{X}}' \dot{\bf{X}}$ where $\dot{\bf{X}}' \dot{\bf{X}}$ is the matrix of dot products of the columns of $\dot{\bf{X}}$. Its diagonal elements are $\dot{\bf{X}}_{\cdot i}' \dot{\bf{X}}_{\cdot i} = (\bf{X}_{\cdot i} - \overline{X}_{\cdot i})' (\bf{X}_{\cdot i} - \overline{X}_{\cdot i})$, i.e., the squared distance of variable $i$ to its mean. As such, $trace(\hat{\bf{\Sigma}})$ is a natural generalization of the univariate variance. A second generalization is $det(\hat{\bf{\Sigma}})$: This is a measure of the volume of the ellipsoid that characterizes the distribution. More precisely, $|det(\hat{\bf{\Sigma}})|$ is the factor by which the volume of the unit cube changes after applying the linear transformation $\hat{\bf{\Sigma}}$ (explanation). (The original post illustrates this with before/after plots of the transformation $\left(\begin{smallmatrix}1 & -.5\\ .5 & .5\end{smallmatrix}\right)$, which has determinant $0.75$.) I do not have a good answer for your second question. But it seems like the original scales of your variables should matter, as they define what variance is "small". It might also be worthwhile trying some thresholds and checking the stability of the resulting estimates with (bootstrap) cross-validation.
A measure of overall variance from multivariate Gaussian
Just like the univariate variance is the average squared distance to the mean, $trace(\hat{\bf{\Sigma}})$ is the average squared distance to the centroid: With $\dot{\bf{X}}$ as the matrix of the cent
A measure of overall variance from multivariate Gaussian Just like the univariate variance is the average squared distance to the mean, $trace(\hat{\bf{\Sigma}})$ is the average squared distance to the centroid: With $\dot{\bf{X}}$ as the matrix of the centered variables, $\hat{\bf{\Sigma}} = \frac{1}{n} \dot{\bf{X}}' \dot{\bf{X}}$ where $\dot{\bf{X}}' \dot{\bf{X}}$ is the matrix of dot products of the columns of $\dot{\bf{X}}$. Its diagonal elements are $\dot{\bf{X}}_{\cdot i}' \dot{\bf{X}}_{\cdot i} = (\bf{X}_{\cdot i} - \overline{X}_{\cdot i})' (\bf{X}_{\cdot i} - \overline{X}_{\cdot i})$, i.e., the squared distance of variable $i$ to its mean. As such, $trace(\hat{\bf{\Sigma}})$ is a natural generalization of the univariate variance. A second generalization is $det(\hat{\bf{\Sigma}})$: This is a measure for the volume of the ellipsoid that characterizes the distribution. More precisely, $|det(\hat{\bf{\Sigma}})|$ is the factor by which the volume of the unit cube changes after applying the linear transformation $\hat{\bf{\Sigma}}$. (explanation). Here is an illustration for the matrix $\left(\begin{smallmatrix}1 & -.5\\ .5 & .5\end{smallmatrix}\right)$ with determinant $0.75$ (left: before, right: after transformation): I do not have a good answer for your second question. But it seems like the original scales of your variables should matter as they define what variance is "small". It might also be worthwile trying some thresholds and check the stability of the resulting estimates with (bootstrap) crossvalidation.
A measure of overall variance from multivariate Gaussian Just like the univariate variance is the average squared distance to the mean, $trace(\hat{\bf{\Sigma}})$ is the average squared distance to the centroid: With $\dot{\bf{X}}$ as the matrix of the cent
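A small R check of both statements, using the 1/n covariance matrix as in the answer and simulated data:
set.seed(1)
X  <- matrix(rnorm(200 * 3), ncol = 3)
Xc <- scale(X, center = TRUE, scale = FALSE)   # centered variables
S  <- crossprod(Xc) / nrow(X)                  # (1/n) X'X
sum(diag(S))                                   # trace ...
mean(rowSums(Xc^2))                            # ... equals the average squared distance to the centroid
det(S)                                         # generalized variance (volume measure)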
48,810
Mutual Information really invariant to invertible transformations?
If you define $I(X; X)$ for continuous random variables at all, the proper value for it is infinite, not $I(X; X) = H(X)$. Essentially, the value of $X$ gives you an infinite amount of information about $X$. If $X$ is simply a uniformly random real number, for instance, it almost surely takes an infinite number of bits to describe it (there's no pattern like there is in, e.g., pi). OTOH, for different variables $X$ and $Y$ (no matter how similar), the value of $X$ always gives you only a finite amount of information about $Y$. If you zoom in sufficiently on some point in $p(x, y)$, it will look flat, so $X$ and $Y$ are practically independent inside that region. Nevertheless, describing where that region is takes a finite number of bits, and specifying the exact point in the region takes an infinite number of bits. The shared information about $X$ and $Y$ is in that finite number of bits, so the mutual information is finite. If, however, $X=Y$, then no matter how much you zoom, knowing $X$ will always tell you exactly where $Y$ is, giving you an infinite amount of information. That's why $I(X; X)$ is very different from $I(X; Y)$. If that's not convincing, you can just try some calculations. Example: the mutual information of $(x, y)$ for a bivariate Gaussian with $Var(x) = Var(y) = 1$ and $Cov(x, y) = r$ is $I(x; y) = -\frac{1}{2}\log(1-r^2)$, which goes to infinity as $r$ goes to $1$.
Mutual Information really invariant to invertible transformations?
If you define $I(X; X)$ for continuous random variables at all, the proper value for it is infinite, not $I(X; X) = H(X)$. Essentially, the value of $X$ gives you an infinite amount of information abo
Mutual Information really invariant to invertible transformations? If you define $I(X; X)$ for continuous random variables at all, the proper value for it is infinite, not $I(X; X) = H(X)$. Essentially, the value of $X$ gives you an infinite amount of information about $X$. If $X$ is simply a uniformly random real number for instance, it almost surely takes infinite number of bits to describe it (there's no pattern like in e.g. pi). OTOH, for different variables $X$ and $Y$ (no matter how similar), the value of $X$ always gives you only a finite amount of information about $Y$. If you zoom in sufficiently to some point in $p(x, y)$, it will look flat, so $X$ and $Y$ are practically independent inside that region. Nevertheless, describing where that region is takes a finite number of bits, and specifying the exact point in the region takes an infinite number of bits. The shared information about $X$ and $Y$ is in that finite number of bits, so the mutual information is finite. If, however, $X=Y$, then no matter how much you zoom, knowing $X$ will always tell you exactly where $Y$ is, giving you an infinity of information. That's why $I(X; X)$ is very different from $I(X, Y)$. If that's not convincing, you can just try some calculations. Example: the mutual information of $(x, y)$ for a bivariate Gaussian with $Var(x) = Var(y) = 1$ and $Cov(x, y) = r$ is $I(x; y) = -0.5log(1-r^2)$, which goes to infinity as $r$ goes to $1$.
Mutual Information really invariant to invertible transformations? If you define $I(X; X)$ for continuous random variables at all, the proper value for it is infinite, not $I(X; X) = H(X)$. Essentially, the value of $X$ gives you an infinite amount of information abo
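A one-line numeric illustration of that Gaussian example in R: $-\frac{1}{2}\log(1-r^2)$ grows without bound as $r$ approaches $1$.
r <- c(0.5, 0.9, 0.99, 0.999, 0.99999)
-0.5 * log(1 - r^2)    # mutual information (in nats), diverging as r -> 1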
48,811
Urn with non-uniform probabilities
The same model is used by poker players to estimate the probability of finishing in each place in a tournament given the stack sizes. It is called the Independent Chip Model or ICM. You can download my program ICM Explorer which can calculate the finishing probabilities for up to $10$ balls/players. Although there doesn't seem to be a simple expression for the probability of finishing in the $k$th place, it's actually quite easy to answer your question. The expected number of red balls you draw before the green ball is the sum of the probabilities that you draw each red ball before the green ball. That's the same as a "last longer" bet in a poker tournament. According to this model, you can ignore all of the other players: consider the first time that you draw the green ball or the $i$th red ball. The conditional probability that you draw the red ball of weight $r_i$ before the green ball of weight $g$ is $r_i/(r_i+g)$, so the expected number of red balls you draw before the green ball is $$\sum_i \frac{r_i}{r_i+g}$$ and the expected number of attempts necessary to find the green ball is $1$ more than this.
Urn with non-uniform probabilities
The same model is used by poker players to estimate the probability of finishing in each place in a tournament given the stack sizes. It is called the Independent Chip Model or ICM. You can download m
Urn with non-uniform probabilities The same model is used by poker players to estimate the probability of finishing in each place in a tournament given the stack sizes. It is called the Independent Chip Model or ICM. You can download my program ICM Explorer which can calculate the finishing probabilities for up to $10$ balls/players. Although there doesn't seem to be a simple expression for the probability of finishing in the $k$th place, it's actually quite easy to answer your question. The expected number of red balls you draw before the green ball is the sum of the probabilities that you draw each red ball before the green ball. That's the same as a "last longer" bet in a poker tournament. According to this model, you can ignore all of the other players: consider the first time that you draw the green ball or the $i$th red ball. The conditional probability that you draw the red ball of weight $r_i$ before the green ball of weight $g$ is $r_i/(r_i+g)$, so the expected number of red balls you draw before the green ball is $$\sum_i \frac{r_i}{r_i+g}$$ and the expected number of attempts necessary to find the green ball is $1$ more than this.
Urn with non-uniform probabilities The same model is used by poker players to estimate the probability of finishing in each place in a tournament given the stack sizes. It is called the Independent Chip Model or ICM. You can download m
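A quick Monte Carlo check of that formula in R, with made-up weights; sample() with prob and no replacement draws each remaining ball with probability proportional to its weight, matching the urn model:
set.seed(1)
r <- c(2, 5, 1, 3)                      # hypothetical red-ball weights
g <- 4                                  # hypothetical green-ball weight
w <- c(r, g)                            # the last ball is the green one
sim <- replicate(1e5, {
  ord <- sample(seq_along(w), length(w), prob = w)
  which(ord == length(w)) - 1           # number of reds drawn before the green
})
c(simulated = mean(sim), formula = sum(r / (r + g)))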
48,812
Multicollinearity between categorical and continuous variable
I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was: There is no silver bullet for decomposing variation in that situation. One thing you can do with two collinear predictors, $x_1,x_2$, is fit a model $x_1 \sim x_2$, take the residuals from that model, $η$, and replace $x_1$ with $η$ in the model $y \sim x_1+x_2$. This way, you will, definitionally, have uncorrelated predictors and the contribution of η is thought of as the variance explained by $x_1$ that is not subsumed by $x_2$. Of course, which variable is $x_1$ and which is $x_2$ is a judgment call (though the overall model fit will be identical). In response to the OP's comment: @Macro, this is a nice thing... maybe worth posting an answer, so we can discuss it with more detail? This is very interesting, because then $x_1=x_2+η$, and if you replace the x1 with η in the original model, you get $y \sim η+x_2=x_1$, which means you loose $x_2$ for the overall fit of the model! And this is strange, paradox! Please post your comment as an answer to discuss it in more detail. Be careful here, because $x_1 \sim x_2$ is R pseudo-code for the model $x_1 = \beta_0 + \beta_1 x_2 + \eta$, not $x_1 = x_2 + \eta$. So, by my back-of-the-envelope calculation, this means that the model $y \sim \eta + x_2$, which is short hand for $y = \alpha_0 + \alpha_1 \eta + \alpha_2 x_2 + \varepsilon$, can be written as $$ y = (\alpha_0 - \alpha_1 \beta_0) + \alpha_1 x_1 + (\alpha_2 - \alpha_1 \beta_1) x_2 + \varepsilon $$ So $x_2$ does not drop out of the model. Indeed the model $y \sim \eta + x_2$ can be seen to have identical degrees of freedom, fit statistics, etc. to the model $y \sim x_1 + x_2$, but the predictors are now uncorrelated.
Multicollinearity between categorical and continuous variable
I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was: There is no silver bullet for decomposing variation in tha
Multicollinearity between categorical and continuous variable I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was: There is no silver bullet for decomposing variation in that situation. One thing you can do with two collinear predictors, $x_1,x_2$, is fit a model $x_1 \sim x_2$, take the residuals from that model, $η$, and replace $x_1$ with $η$ in the model $y \sim x_1+x_2$. This way, you will, definitionally, have uncorrelated predictors and the contribution of η is thought of as the variance explained by $x_1$ that is not subsumed by $x_2$. Of course, which variable is $x_1$ and which is $x_2$ is a judgment call (though the overall model fit will be identical). In response to the OP's comment: @Macro, this is a nice thing... maybe worth posting an answer, so we can discuss it with more detail? This is very interesting, because then $x_1=x_2+η$, and if you replace the x1 with η in the original model, you get $y \sim η+x_2=x_1$, which means you loose $x_2$ for the overall fit of the model! And this is strange, paradox! Please post your comment as an answer to discuss it in more detail. Be careful here, because $x_1 \sim x_2$ is R pseudo-code for the model $x_1 = \beta_0 + \beta_1 x_2 + \eta$, not $x_1 = x_2 + \eta$. So, by my back-of-the-envelope calculation, this means that the model $y \sim \eta + x_2$, which is short hand for $y = \alpha_0 + \alpha_1 \eta + \alpha_2 x_2 + \varepsilon$, can be written as $$ y = (\alpha_0 - \alpha_1 \beta_0) + \alpha_1 x_1 + (\alpha_2 - \alpha_1 \beta_1) x_2 + \varepsilon $$ So $x_2$ does not drop out of the model. Indeed the model $y \sim \eta + x_2$ can be seen to have identical degrees of freedom, fit statistics, etc. to the model $y \sim x_1 + x_2$, but the predictors are now uncorrelated.
Multicollinearity between categorical and continuous variable I'm converting a previous comment to an answer, expanding a bit based on a follow-up comment from the OP. The original, unedited comment was: There is no silver bullet for decomposing variation in tha
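A short R illustration of the residualization trick with simulated collinear predictors; it shows that $\eta$ is uncorrelated with $x_2$ and that the overall fit is unchanged:
set.seed(1)
n  <- 200
x2 <- rnorm(n)
x1 <- 0.8 * x2 + rnorm(n)                 # collinear with x2
y  <- 1 + 2 * x1 - x2 + rnorm(n)
eta <- resid(lm(x1 ~ x2))                 # part of x1 not explained by x2
fit_orig  <- lm(y ~ x1 + x2)
fit_resid <- lm(y ~ eta + x2)
cor(eta, x2)                              # essentially zero
c(logLik(fit_orig), logLik(fit_resid))    # identical overall fit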
48,813
Predicting continuous variables from text features
A similar question has been asked on Stack Overflow: https://stackoverflow.com/questions/15087322/how-to-predict-a-continuous-value-time-from-text-documents One answer there was to use k-nearest-neighbor regression to predict a continuous value from text documents; see https://stackoverflow.com/a/15089788/179014.
Predicting continuous variables from text features
A similar question has been asked on stackoverflow: https://stackoverflow.com/questions/15087322/how-to-predict-a-continuous-value-time-from-text-documents One answer here was to use k-nearest-neigh
Predicting continuous variables from text features A similar question has been asked on stackoverflow: https://stackoverflow.com/questions/15087322/how-to-predict-a-continuous-value-time-from-text-documents One answer here was to use k-nearest-neighbor regression to predict a continuous value from text documents, see https://stackoverflow.com/a/15089788/179014.
Predicting continuous variables from text features A similar question has been asked on stackoverflow: https://stackoverflow.com/questions/15087322/how-to-predict-a-continuous-value-time-from-text-documents One answer here was to use k-nearest-neigh
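A minimal base-R sketch of k-nearest-neighbor regression on bag-of-words features; the document-term matrix, the continuous target, and k are all invented for illustration:
set.seed(1)
dtm <- matrix(rpois(10 * 6, 2), nrow = 10)     # 10 documents, 6 term counts
y   <- rnorm(10, mean = 50, sd = 10)           # continuous target, e.g. a completion time
newdoc <- rpois(6, 2)                          # term counts of a new document
k <- 3
d <- sqrt(colSums((t(dtm) - newdoc)^2))        # distance from the new document to each training document
mean(y[order(d)[1:k]])                         # k-NN regression prediction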
48,814
Predicting continuous variables from text features
I recommend gradient boosting with trees, as described in chapter 10, "Boosting and Additive Trees", of The Elements of Statistical Learning. This approach is suitable for bag-of-words data, can capture interactions between word features, and can be used for both regression and classification.
Predicting continuous variables from text features
I recommend Gradient Boosting with trees as described in chapter 10 "Boosting and Additive Trees" of The elements of statistical learning. These approach is suitable for bag-of-words-data, can catch i
Predicting continuous variables from text features I recommend Gradient Boosting with trees as described in chapter 10 "Boosting and Additive Trees" of The elements of statistical learning. These approach is suitable for bag-of-words-data, can catch interaction of word-features and can be used for both regression and classification.
Predicting continuous variables from text features I recommend Gradient Boosting with trees as described in chapter 10 "Boosting and Additive Trees" of The elements of statistical learning. These approach is suitable for bag-of-words-data, can catch i
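A hedged sketch of this approach using the gbm package (one common R implementation of boosted trees); the word-count features and the target are simulated for illustration:
library(gbm)
set.seed(1)
X   <- as.data.frame(matrix(rpois(200 * 20, 1), ncol = 20))   # toy bag-of-words counts
dat <- data.frame(y = rnorm(200, mean = rowSums(X[, 1:3])), X)
fit <- gbm(y ~ ., data = dat, distribution = "gaussian",
           n.trees = 500, interaction.depth = 3, shrinkage = 0.05, cv.folds = 5)
best <- gbm.perf(fit, method = "cv")          # number of trees chosen by cross-validation
head(predict(fit, dat, n.trees = best))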
48,815
Predicting continuous variables from text features
There is a type of Bayesian linear regression that handles the case of many features. It is called latent factor regression and you can find a good description in the paper Bayesian Factor Regression Models in the "Large p, Small n" Paradigm. If the number of latent factors is large, then it is equivalent to linear regression. Otherwise, it encourages the regression to follow the principal components of the features.
Predicting continuous variables from text features
There is a type of Bayesian linear regression that handles the case of many features. It is called latent factor regression and you can find a good description in the paper Bayesian Factor Regression
Predicting continuous variables from text features There is a type of Bayesian linear regression that handles the case of many features. It is called latent factor regression and you can find a good description in the paper Bayesian Factor Regression Models in the "Large p, Small n" Paradigm. If the number of latent factors is large, then it is equivalent to linear regression. Otherwise, it encourages the regression to follow the principal components of the features.
Predicting continuous variables from text features There is a type of Bayesian linear regression that handles the case of many features. It is called latent factor regression and you can find a good description in the paper Bayesian Factor Regression
48,816
Count explanatory variable, proportion dependent variable
You have a binary response. That is the important part of this. The count status of your explanatory variable doesn't matter. As a result, you should be doing some form of logistic regression. The part that makes this more difficult is that your data are clustered within just four participants. That means you need to either use a GLiMeM, or the GEE. This is a subtle decision, but I discuss it at some length here: Difference between generalized linear models & generalized linear mixed models in SPSS. Depending on the options that your software affords you, you may also have to un-group your data, so that you have a (very long) matrix where the response listed in each row is a 1 or a 0.
Count explanatory variable, proportion dependent variable
You have a binary response. That is the important part of this. The count status of your explanatory variable doesn't matter. As a result, you should be doing some form of logistic regression. The
Count explanatory variable, proportion dependent variable You have a binary response. That is the important part of this. The count status of your explanatory variable doesn't matter. As a result, you should be doing some form of logistic regression. The part that makes this more difficult is that your data are clustered within just four participants. That means you need to either use a GLiMeM, or the GEE. This is a subtle decision, but I discuss it at some length here: Difference between generalized linear models & generalized linear mixed models in SPSS. Depending on the options that your software affords you, you may also have to un-group your data, so that you have a (very long) matrix where the response listed in each row is a 1 or a 0.
Count explanatory variable, proportion dependent variable You have a binary response. That is the important part of this. The count status of your explanatory variable doesn't matter. As a result, you should be doing some form of logistic regression. The
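A minimal sketch of the GLMM route in R with lme4 (the GEE alternative would use, e.g., geepack); the long-format data with four participants are simulated purely for illustration:
library(lme4)
set.seed(1)
dat <- data.frame(participant = factor(rep(1:4, each = 100)),
                  count_pred  = rpois(400, 3))
b   <- rnorm(4, 0, 0.5)[dat$participant]                  # participant-level intercepts
dat$resp <- rbinom(400, 1, plogis(-0.5 + 0.2 * dat$count_pred + b))
fit <- glmer(resp ~ count_pred + (1 | participant), data = dat, family = binomial)
summary(fit)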
48,817
Bootstrapping unbalanced clustered data (non-parametric bootstrap)
With clustered data, you have 500 degrees of freedom, anyway. It does not matter that your nominal sample size may be 1005 or 1320 or whatever the number will be. The sampling variance of your estimates will generally improve only to the extent that you increase the number of clusters. So I would not see the random sample size as an issue. I have written cluster bootstrap code in Stata, see http://www.stata-journal.com/article.html?article=st0187.
Bootstrapping unbalanced clustered data (non-parametric bootstrap)
With clustered data, you have 500 degrees of freedom, anyway. It does not matter that your nominal sample size may be 1005 or 1320 or whatever the number will be. The sampling variance of your estimat
Bootstrapping unbalanced clustered data (non-parametric bootstrap) With clustered data, you have 500 degrees of freedom, anyway. It does not matter that your nominal sample size may be 1005 or 1320 or whatever the number will be. The sampling variance of your estimates will generally improve only to the extent that you increase the number of clusters. So I would not see the random sample size as an issue. I have written cluster bootstrap code in Stata, see http://www.stata-journal.com/article.html?article=st0187.
Bootstrapping unbalanced clustered data (non-parametric bootstrap) With clustered data, you have 500 degrees of freedom, anyway. It does not matter that your nominal sample size may be 1005 or 1320 or whatever the number will be. The sampling variance of your estimat
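The linked code is for Stata; here is a minimal R sketch of the same nonparametric cluster bootstrap idea (resample whole clusters with replacement), with invented unbalanced clusters and a toy regression:
set.seed(1)
sizes <- sample(1:5, 500, replace = TRUE)                 # unbalanced cluster sizes
dat   <- data.frame(cluster = rep(1:500, times = sizes))
dat$x <- rnorm(nrow(dat))
dat$y <- 1 + 0.5 * dat$x + rnorm(nrow(dat))
boot_slope <- replicate(500, {
  ids <- sample(unique(dat$cluster), replace = TRUE)      # resample clusters, not rows
  bs  <- do.call(rbind, lapply(ids, function(i) dat[dat$cluster == i, ]))
  coef(lm(y ~ x, data = bs))["x"]
})
sd(boot_slope)                                            # cluster-bootstrap standard error of the slope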
48,818
expectation of an exponential function [closed]
Wikipedia's page on the log-normal distribution has the more general result for distributions with non-zero location parameter $\mu$. It notes that, for the lognormal distribution defined as: $$X = e^{\mu + \sigma Z}$$ with $Z$ a standard normal variable, the expectation is: $$\mathbb{E}[X] = e^{\mu + \sigma^2/2}$$
expectation of an exponential function [closed]
Wikipedia's page on the log-normal distribution has the more general result for distributions with non-zero location parameter $\mu$. It notes that, for the lognormal distribution defined as: $$X = e^
expectation of an exponential function [closed] Wikipedia's page on the log-normal distribution has the more general result for distributions with non-zero location parameter $\mu$. It notes that, for the lognormal distribution defined as: $$X = e^{\mu + \sigma Z}$$ with $Z$ a standard normal variable, the expectation is: $$\mathbb{E}[X] = e^{\mu + \sigma^2/2}$$
expectation of an exponential function [closed] Wikipedia's page on the log-normal distribution has the more general result for distributions with non-zero location parameter $\mu$. It notes that, for the lognormal distribution defined as: $$X = e^
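A quick Monte Carlo check of that formula in R, with arbitrary $\mu$ and $\sigma$:
set.seed(1)
mu <- 1; sigma <- 0.5
mean(exp(mu + sigma * rnorm(1e6)))   # simulated E[X]
exp(mu + sigma^2 / 2)                # closed form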
48,819
Measure of variation around Median
Since the median is a statistic estimated from sample data, it has an associated standard error which can give confidence intervals and tests of location for that value. The variance of a normally distributed random variable can be used directly to compute exact confidence intervals for sample means of IID random normal variables. Practically all distributions give sampling distributions for sample means which are approximately normal for reasonably large $n$. The standard deviation of the sampling distribution of the sample mean is what's called the standard error, and it is closely related to the standard deviation of the data's underlying probability distribution. Anything you estimate from data, whether the minimum, maximum, median, etc., has a sampling distribution and hence an associated standard error. This means you have an associated standard error for sample medians. This value can be computed by using the inverse quantile function for that data; a practical example is given here.
Measure of variation around Median
Since the median is a statistic estimated from sample data, it has an associated sample standard error which can give confidence intervals and tests of location for that value. The variance of a norma
Measure of variation around Median Since the median is a statistic estimated from sample data, it has an associated sample standard error which can give confidence intervals and tests of location for that value. The variance of a normally distributed random variable can be used directly to compute exact confidence intervals for sample means of IID random normal variables. Practically all distributions give sampling distributions for sample means which are approximately normal for reasonably large $n$. The standard deviation of the sampling distribution of the sample mean is what's called the standard error. The relationship between the sample standard error and the standard deviation of the sample data's probability distributions are related. Anything you estimate from data, whether the minimum, maximum, median, (etc.) has a sampling distribution and hence an associated standard error. This means you have an associated standard error for sample medians. This value is computed by using the inverse quantile function for that data, practical example here.
Measure of variation around Median Since the median is a statistic estimated from sample data, it has an associated sample standard error which can give confidence intervals and tests of location for that value. The variance of a norma
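The linked example uses a density-based (inverse quantile) formula; a bootstrap is another common way to get a standard error and interval for a sample median, sketched here on made-up data:
set.seed(1)
x <- rexp(200)                                        # hypothetical sample
boot_med <- replicate(2000, median(sample(x, replace = TRUE)))
sd(boot_med)                                          # bootstrap standard error of the median
quantile(boot_med, c(0.025, 0.975))                   # simple percentile interval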
48,820
With a small sample from a normal distribution, do you simulate using a t distribution?
Using your assumptions that the points came from a normal distribution with unknown mean and variance, the t distribution is the correct distribution to sample from no matter how many data points you have, because it is the posterior predictive distribution of your model. You might want to check your formula, though, as it looks a bit simpler than what I've seen before. To answer your questions: (1) yes, (2) yes, and (3) no.
With a small sample from a normal distribution, do you simulate using a t distribution?
Using your assumptions that the points came from a normal distribution with unknown mean and variance, The T distribution is the correct distribution to sample from no matter how many data points you
With a small sample from a normal distribution, do you simulate using a t distribution? Using your assumptions that the points came from a normal distribution with unknown mean and variance, The T distribution is the correct distribution to sample from no matter how many data points you have because it is the posterior predictive distribution of your model. You might want to check your formula though as it looks a bit simpler than what I've seen before. To answer your questions, (1) yes, (2) yes, and (3) no.
With a small sample from a normal distribution, do you simulate using a t distribution? Using your assumptions that the points came from a normal distribution with unknown mean and variance, The T distribution is the correct distribution to sample from no matter how many data points you
48,821
With a small sample from a normal distribution, do you simulate using a t distribution?
You could generate a vector of means from a normal distribution (or t if you prefer) representing your uncertainty in the mean, then generate a vector of variances from a $\chi^2$ distribution representing your uncertainty in the variance, then generate the actual observations from a normal with your vector of means and the vector of variances as the parameters. This will take into account the extra levels of uncertainty that you mention. If you have some feel for where you think the mean and/or variance should be (but don't know exactly) then you may want to try a Bayesian approach where you can use that prior information.
With a small sample from a normal distribution, do you simulate using a t distribution?
You could generate a vector of means from a normal distribution (or t if you prefer) representing your uncertainty in the mean, then generate a vector of variances from a $\chi^2$ distribution represe
With a small sample from a normal distribution, do you simulate using a t distribution? You could generate a vector of means from a normal distribution (or t if you prefer) representing your uncertainty in the mean, then generate a vector of variances from a $\chi^2$ distribution representing your uncertainty in the variance, then generate the actual observations from a normal with your vector of means and the vector of variances as the parameters. This will take into account the extra levels of uncertainty that you mention. If you have some feel for where you think the mean and/or variance should be (but don't know exactly) then you may want to try a Bayesian approach where you can use that prior information.
With a small sample from a normal distribution, do you simulate using a t distribution? You could generate a vector of means from a normal distribution (or t if you prefer) representing your uncertainty in the mean, then generate a vector of variances from a $\chi^2$ distribution represe
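A minimal sketch of this simulate-parameters-then-data idea in R, using the standard noninformative-prior posterior for a normal sample (the 10 data values are made up):
set.seed(1)
x <- rnorm(10, mean = 5, sd = 2)
n <- length(x)
nsim <- 1e4
sig2 <- (n - 1) * var(x) / rchisq(nsim, df = n - 1)   # draws of the variance
mu   <- rnorm(nsim, mean(x), sqrt(sig2 / n))          # draws of the mean, given the variance
ynew <- rnorm(nsim, mu, sqrt(sig2))                   # simulated new observations
quantile(ynew, c(0.025, 0.975))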
48,822
With a small sample from a normal distribution, do you simulate using a t distribution?
I would elaborate on Neil G's and Greg Snow's answers as follows: run a noninformative Bayesian inference for your original $10$ data values, then use the posterior predictive distribution to generate new data. The posterior predictive distribution derived from a noninformative prior aims to provide exactly what you want: a distribution that generates data "consistent with the original data", taking into account the uncertainty about the model parameters. Now, what is the posterior predictive distribution derived from the noninformative prior? That depends on the choice of the noninformative prior, but there is a good "default" noninformative prior for the normal sample model. You can also "cheat" a little and use the "Bayesian-frequentist" predictive distribution (also sometimes called "the frequentist predictive distribution"). The principle of the frequentist predictive distribution is the following. The classical $100(1-\alpha)\%$ prediction interval for a new observation is $\bar{y} \pm \mathrm{t}^*_{n-1}(\alpha/2) \hat\sigma\sqrt{1+\frac{1}{n}}$. The Bayesian-frequentist predictive distribution is then taken to be the distribution of $\bar{y} + T \hat\sigma\sqrt{1+\frac{1}{n}}$, where $\bar{y}$ and $\hat\sigma$ are considered fixed and $T$ has the Student $\mathrm{t}_{n-1}$ distribution. Thus, the $100(1-\alpha)\%$ quantile of the frequentist predictive distribution equals the usual $100(1-\alpha)\%$ upper prediction bound. I do not exactly remember the Bayesian predictive distribution derived from the default noninformative prior, but it is very close to the frequentist predictive distribution (there are some slight differences, such as $\mathrm{t}^*_{n-\frac{1}{2}}$ instead of $\mathrm{t}^*_{n-1}$). I will update my answer when I find the formulas. Here I asked a question related to the performance of these predictive distributions. I said the frequentist predictive distribution is derived by "cheating a little" because it does not really have a theoretical foundation, but I'm sure it is possible to show that using this distribution performs well in a frequentist sense.
With a small sample from a normal distribution, do you simulate using a t distribution?
I would elaborate Neil G and Greg Snow's answers as follows : run a noninformative Bayesian inference for your original $10$ data values use the posterior predictive distribution to generate new data
With a small sample from a normal distribution, do you simulate using a t distribution? I would elaborate Neil G and Greg Snow's answers as follows : run a noninformative Bayesian inference for your original $10$ data values use the posterior predictive distribution to generate new data The posterior predictive distribution derived from a noninformative prior exactly aims to provide your desire: a distribution that generates data "consistent with the original data", taking into account the uncertainty about the model parameters. Now, what is the posterior predictive distribution derived from the noninformative prior ? That depends on the choice of the noninformative prior, but there is a good "default" noninformative prior for the normal sample model. You can also "cheat" a little and use the "Bayesian-frequentist" predictive distribution (also called sometimes "the frequentist predictive distribution"). The principle of the frequentist predictive distribution is the following one. The classical $100(1-\alpha)\%$-prediction interval for a new observation is $\bar{y} \pm \mathrm{t}^*_{n-1}(\alpha/2) \hat\sigma\sqrt{1+\frac{1}{n}}$. Then the Bayesian-frequentist predictive distribution is taken to be the distribution of $\bar{y} + T \hat\sigma\sqrt{1+\frac{1}{n}}$ where $\bar{y}$ and $\hat\sigma$ are considered as fixed and $T$ has the Student $\mathrm{t}_{n-1}$ distribution. Thus, the $100(1-\alpha)\%$-quantile of the frequentist predictive distribution equals the usual $100(1-\alpha)\%$-upper prediction bound. I do not exactly remember the Bayesian predictive distribution derived from the default noninformative prior but it is very close to the frequentist predictive distribution (there are some slight differences such as $\mathrm{t}^*_{n-\frac{1}{2}}$ instead of $\mathrm{t}^*_{n-1}$). I will update my answer when I will find the formulas. Here I asked a question related to the performance of these predictive distributions. I claimed that the frequentist predictive distribution is derived from "little cheating" because it does not really has a theoretical fundation. But I'm sure it is possible to show the performance of the use of this distribution in a frequentist sense.
With a small sample from a normal distribution, do you simulate using a t distribution? I would elaborate Neil G and Greg Snow's answers as follows : run a noninformative Bayesian inference for your original $10$ data values use the posterior predictive distribution to generate new data
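A short R sketch of sampling from this frequentist predictive distribution and checking its upper quantile against the classical prediction bound (data made up):
set.seed(1)
x <- rnorm(10, mean = 5, sd = 2)
n <- length(x)
s <- sd(x)
draws <- mean(x) + rt(1e5, df = n - 1) * s * sqrt(1 + 1/n)   # predictive draws
quantile(draws, 0.975)                                       # close to the classical bound below
mean(x) + qt(0.975, df = n - 1) * s * sqrt(1 + 1/n)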
48,823
What test should I use to determine if a policy change had a statistically significant impact on website registrations?
You are describing "intervention analysis" or "interrupted time series". It refers to estimating how much an intervention has changed a time series. (Intervention-analysis is even one of the tags here, so I am proposing an edit to add it to your question.) Among other ways, it can be done using an autoregressive integrated moving average (ARIMA) model. ARIMA should be done on a stationary time series but you can estimate a seasonal component and control for it if necessary. And NickAdams is right that you don't want to use raw numbers but rather use proportion of visitors who sign up.
What test should I use to determine if a policy change had a statistically significant impact on web
You are describing "intervention analysis" or "interrupted time series". It refers to estimating how much an intervention has changed a time series. (Intervention-analysis is even one of the tags her
What test should I use to determine if a policy change had a statistically significant impact on website registrations? You are describing "intervention analysis" or "interrupted time series". It refers to estimating how much an intervention has changed a time series. (Intervention-analysis is even one of the tags here, so I am proposing an edit to add it to your question.) Among other ways, it can be done using an autoregressive integrated moving average (ARIMA) model. ARIMA should be done on a stationary time series but you can estimate a seasonal component and control for it if necessary. And NickAdams is right that you don't want to use raw numbers but rather use proportion of visitors who sign up.
What test should I use to determine if a policy change had a statistically significant impact on web You are describing "intervention analysis" or "interrupted time series". It refers to estimating how much an intervention has changed a time series. (Intervention-analysis is even one of the tags her
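A minimal R sketch of an intervention (step-change) ARIMA fit with the forecast package; the series, the intervention time, and the effect size are all simulated for illustration:
library(forecast)
set.seed(1)
step <- c(rep(0, 80), rep(1, 40))                                # policy change at time 81
y    <- ts(0.10 + 0.02 * step + 0.01 * arima.sim(list(ar = 0.5), n = 120))
fit  <- auto.arima(y, xreg = step)
summary(fit)                                                     # the xreg coefficient estimates the jump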
48,824
What test should I use to determine if a policy change had a statistically significant impact on website registrations?
Yes, you can simply do a t-test, although you may very well have confounding variables that will affect how you want to go about this, and perhaps you may want to use an ANOVA with blocks. One confounding variable that you may want to watch out for is effects over time. Does the site have more sign-ups in certain parts of the year over others? Have there been more sign-ups this year than in other years? You may also want to control for traffic: is there more traffic now than in the past? Perhaps now, more people are seeing the sign-up sheet than before. A better metric may be (sign-ups)/(site visitor), and you could find this out with some preliminary ANOVA tables.
What test should I use to determine if a policy change had a statistically significant impact on web
Yes, you can simply do a t-test, although you may very well have confounding variables that will affect how you want to go about this, and perhaps you may want to use an ANOVA with blocks. One confoun
What test should I use to determine if a policy change had a statistically significant impact on website registrations? Yes, you can simply do a t-test, although you may very well have confounding variables that will affect how you want to go about this, and perhaps you may want to use an ANOVA with blocks. One confounding variable that you may want to watch out for is effects over time. Does the site have more sign-ups in certain parts of the year over others? Have there been more sign-ups this year than in other years? You may also want to control for traffic: is there more traffic now than in the past? Perhaps now, more people are seeing the sign-up sheet than before. A better metric may be (sign-ups)/(site visitor), and you could find this out with some preliminary ANOVA tables.
What test should I use to determine if a policy change had a statistically significant impact on web Yes, you can simply do a t-test, although you may very well have confounding variables that will affect how you want to go about this, and perhaps you may want to use an ANOVA with blocks. One confoun
48,825
Internal consistency reliability in item response theory models
You can compute test information curves from your IRT parameter estimates. These curves give you the precision of the test at each $\theta$ of the latent trait. The information $I$ can be transformed into the standard error of estimate $SEE$, which is a direct estimate of the reliability of the test at that $\theta$: $SEE = 1 / \sqrt{I}$. The metric of the test information can also be converted to a traditional reliability metric expressed by a correlation coefficient (Thissen, 2000): $Rel = 1 - (1/I)$. Here are the conversions from a set of TICs to correlational reliability estimates: # following Thissen, 2000: TIC <- seq(1, 12, by=1) round((rel <- data.frame(TIC, SEE=sqrt(1/TIC), REL=1-1/TIC)), 2) TIC SEE REL 1 1.00 0.00 2 0.71 0.50 3 0.58 0.67 4 0.50 0.75 5 0.45 0.80 6 0.41 0.83 7 0.38 0.86 8 0.35 0.88 9 0.33 0.89 10 0.32 0.90 11 0.30 0.91 12 0.29 0.92 For example, a TIC > 5 corresponds to a reliability > .80. Thissen, D. (2000). Reliability and measurement precision. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed., pp. 159–184). Lawrence Erlbaum Associates Publishers.
Internal consistency reliability in item response theory models
You can compute test information curves from your IRT parameter estimates. These curves give you the precision of the test at each $\theta$ of the latent trait. The information $I$ can be transformed
Internal consistency reliability in item response theory models You can compute test information curves from your IRT parameter estimates. These curves give you the precision of the test at each $\theta$ of the latent trait. The information $I$ can be transformed into the standard error of estimate $SEE$, which is a direct estimate of the reliability of the test at that $\theta$: $SEE = 1 / \sqrt{I}$. The metric of the test information can also be converted to a traditional reliability metric expressed by a correlation coefficient (Thissen, 2000): $Rel = 1 - (1/I)$. Here are the conversions from a set of TICs to correlational reliability estimates: # following Thissen, 2000: TIC <- seq(1, 12, by=1) round((rel <- data.frame(TIC, SEE=sqrt(1/TIC), REL=1-1/TIC)), 2) TIC SEE REL 1 1.00 0.00 2 0.71 0.50 3 0.58 0.67 4 0.50 0.75 5 0.45 0.80 6 0.41 0.83 7 0.38 0.86 8 0.35 0.88 9 0.33 0.89 10 0.32 0.90 11 0.30 0.91 12 0.29 0.92 For example, a TIC > 5 corresponds to a reliability > .80. Thissen, D. (2000). Reliability and measurement precision. In H. Wainer (Ed.), Computerized adaptive testing: A primer (2nd ed., pp. 159–184). Lawrence Erlbaum Associates Publishers.
Internal consistency reliability in item response theory models You can compute test information curves from your IRT parameter estimates. These curves give you the precision of the test at each $\theta$ of the latent trait. The information $I$ can be transformed
48,826
Appropriateness of K-S test and Kruskal-Wallis for assessing medical data set
1) Only assess normality for the cases where you assume it. (I don't think this is an issue in your case, but it's a common problem so it bears mentioning.) 2) When checking a normality assumption, a Q-Q plot is a better idea than a formal hypothesis test - hypothesis tests don't actually answer the relevant question. 3) The Kruskal-Wallis test is the nonparametric rank-based equivalent to a one-way ANOVA. The Kruskal-Wallis is to the Wilcoxon-Mann-Whitney two-sample test as one-way ANOVA is to a two-sample t-test. It is used when you want to test against the null that more than two groups have the same location. Hope this helps some. -- If you want to test specifically for a difference in means, neither the K-S test nor the W-M-W really does it (though with some additional assumptions the W-M-W is also a test for a difference in means). The best way to test for a difference in means is probably to do a permutation test, as long as the distributions would be the same under the null (so if your alternative is a location-shift, you're basically assuming identical shapes apart from location). The one-sample K-S test is a test for a fully specified distribution - it can test any continuous distribution you can give the pdf for -- you would NOT use that to test the normality of data because unless you know the population parameters, the distribution isn't fully specified... and if you knew the population parameters, you would not need to test the means! You could use a Smirnov test (a two sample K-S test) to test for any kind of difference between the two groups, but if your interest is a difference in means it's not a very powerful test. Your confusion about what is being tested with K-S may be because you're muddling the two (the one and two-sample tests) together - they're used for different things. Yes, a Kruskal-Wallis applied to two samples would give the same result as a Wilcoxon-Mann-Whitney for a two-tailed test (i.e. it doesn't allow a one-sided alternative, in the same way that a one-way ANOVA doesn't give you the one-sided alternative you can get with a two-sample t-test and a chi-square doesn't give the one-sided alternative of a two-sample proportions test).
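To make the suggestions concrete, a small base-R sketch (simulated data; the permutation test assumes identical shapes under the null, as noted above):
set.seed(1)
g1 <- rnorm(30, 5); g2 <- rnorm(30, 5.5)   # placeholder data for two groups
qqnorm(g1); qqline(g1)                     # Q-Q plot instead of a formal normality test
kruskal.test(list(g1, g2))                 # with two groups, equivalent to a two-sided Wilcoxon-Mann-Whitney
obs <- mean(g1) - mean(g2)                 # permutation test for a difference in means
pooled <- c(g1, g2)
perm <- replicate(10000, { i <- sample(length(pooled), length(g1)); mean(pooled[i]) - mean(pooled[-i]) })
mean(abs(perm) >= abs(obs))                # two-sided permutation p-value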
Appropriateness of K-S test and Kruskal-Wallis for assessing medical data set
1) Only assess normality for the cases where you assume it. (I don't think this is an issue in your case, but it's a common problem so it bears mentioning.) 2) When checking a normality assumption, a
Appropriateness of K-S test and Kruskal-Wallis for assessing medical data set 1) Only assess normality for the cases where you assume it. (I don't think this is an issue in your case, but it's a common problem so it bears mentioning.) 2) When checking a normality assumption, a Q-Q plot is a better idea than a formal hypothesis test - hypothesis tests don't actually answer the relevant question. 3) The Kruskal-Wallis test is the nonparametric rank-based equivalent to a one-way ANOVA. The Kruskal-Wallis is to the Wilcoxon-Mann-Whitney two-sample test as one-way ANOVA is to a two-sample t-test. It is used when you want to test against the null that more than two groups have the same location. Hope this helps some. -- If you want to test specifically for a difference in means, neither the K-S test nor the W-M-W really does it (though with some additional assumptions the W-M-W is also a test for a difference in means). The best way to test for a difference in means is probably to do a permutation test, as long as the distributions would be the same under the null (so if your alternative is a location-shift, you're basically assuming identical shapes apart from location). The one-sample K-S test is a test for a fully specified distribution - it can test any continuous distribution you can give the pdf for -- you would NOT use that to test the normality of data because unless you know the population parameters, the distribution isn't fully specified... and if you knew the population parameters, you would not need to test the means! You could use a Smirnov test (a two sample K-S test) to test for any kind of difference between the two groups, but if your interest is a difference in means it's not a very powerful test. Your confusion about what is being tested with K-S may be because you're muddling the two (the one and two-sample tests) together - they're used for different things. Yes, a Kruskal-Wallis applied to two samples would give the same result as a Wilcoxon-Mann-Whitney for a two-tailed test (i.e. it doesn't allow a one-sided alternative, in the same way that a one-way ANOVA doesn't give you the one-sided alternative you can get with a two-sample t-test and a chi-square doesn't give the one-sided alternative of a two-sample proportions test).
Appropriateness of K-S test and Kruskal-Wallis for assessing medical data set 1) Only assess normality for the cases where you assume it. (I don't think this is an issue in your case, but it's a common problem so it bears mentioning.) 2) When checking a normality assumption, a
48,827
Can I use a likelihood ratio test when the error distributions differ?
This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventually, though, if it is edited enough times by me or other people. Generally, the Neyman-Pearson Lemma doesn't apply except in special cases, so the base answer is 'well, no'. However, I am going to go on a ramble that basically says 'Clearly in general you won't have an asymptotic chi-square distribution, but maybe there's something to looking at the ratio of likelihoods'. To begin - I think the Likelihood Principle should apply, which might at least give us some hope that the ratio of likelihoods could be informative about the problem. I've seen it done (use a ratio of likelihoods to derive a statistic) for goodness of fit tests (testing a specific null against a specific alternative) - but just to get a form of statistic rather than its distribution. In a number of particular cases (e.g. some specific symmetric nulls vs specific symmetric alternatives), the likelihood ratio does often seem to lead to a very sensible test statistic, one that has excellent power properties. Alternatively efficient scores have been used as a way to get to test statistics that sometimes end up looking like (a monotonic function of) a likelihood ratio, again, suggesting that likelihood ratios may be informative in testing one distribution against another. Then again, I've also seen people claim that you can't do that kind of thing at all; that the ratio of likelihoods isn't meaningful. From a Bayesian point of view such comparisons seem to present no immediately obvious problem (unless I missed something, which certainly happens), as long as everything is appropriately normalized. With equal prior probability, we boil down to looking at the Bayes factor. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each model is used, then we get back to a likelihood ratio, as mentioned here. Alternatively, we might look at approximating the integrals using Laplace's method -- at least in some circumstances the likelihood ratio can come up as a term in it, though there's another factor there; such things are at least suggestive that the likelihood ratio is the appropriate way to make use of the likelihood principle, even if we don't have a distribution for the ratio. For Gaussian vs gamma, you can parameterize that (note that they're both special cases of the Tweedie distribution), so that may make a difference even if the general case isn't okay, though in the Tweedie family the Gaussian is a rather special case, since there's a "boundary" there (as there is for the Poisson), so again, it's not a standard situation - indeed, there's a 'gap' in the parameter space between the Gaussian and the gamma.
Can I use a likelihood ratio test when the error distributions differ?
This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventu
Can I use a likelihood ratio test when the error distributions differ? This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventually, though, if it is edited enough times by me or other people. Generally, the Neyman-Pearson Lemma doesn't apply except in special cases, so the base answer is 'well, no'. However, I am going to go on a ramble that basically says 'Clearly in general you won't have an asymptotic chi-square distribution, but maybe there's something to looking at the ratio of likelihoods'. To begin - I think the Likelihood Principle should apply, which might at least give us some hope that the ratio of likelihoods could be informative about the problem. I've seen it done (use a ratio of likelihoods to derive a statistic) for goodness of fit tests (testing a specific null against a specific alternative) - but just to get a form of statistic rather than its distribution. In a number of particular cases (e.g. some specific symmetric nulls vs specific symmetric alternatives), the likelihood ratio does often seem to lead to a very sensible test statistic, one that has excellent power properties. Alternatively efficient scores have been used as a way to get to test statistics that sometimes end up looking like (a monotonic function of) a likelihood ratio, again, suggesting that likelihood ratios may be informative in testing one distribution against another. Then again, I've also seen people claim that you can't do that kind of thing at all; that the ratio of likelihoods isn't meaningful. From a Bayesian point of view such comparisons seem to present no immediately obvious problem (unless I missed something, which certainly happens), as long as everything is appropriately normalized. With equal prior probability, we boil down to looking at the Bayes factor. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each model is used, then we get back to a likelihood ratio, as mentioned here. Alternatively, we might look at approximating the integrals using Laplace's method -- at least in some circumstances the likelihood ratio can come up as a term in it, though there's another factor there; such things are at least suggestive that the likelihood ratio is the appropriate way to make use of the likelihood principle, even if we don't have a distribution for the ratio. For Gaussian vs gamma, you can parameterize that (note that they're both special cases of the Tweedie distribution), so that may make a difference even if the general case isn't okay, though in the Tweedie family the Gaussian is a rather special case, since there's a "boundary" there (as there is for the Poisson), so again, it's not a standard situation - indeed, there's a 'gap' in the parameter space between the Gaussian and the gamma.
Can I use a likelihood ratio test when the error distributions differ? This is more a set of rambling comments and thoughts than an actual answer, but it's a bit long for a comment. It might end up as something approximating an answer of at least some slight value eventu
48,828
Can I use a likelihood ratio test when the error distributions differ?
I think that you understand that nested comparisons are well understood using the LRT. Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, $f_\theta(x)$, the other parameterized by $\eta$, $f_\eta(x)$, with $\theta$ and $\eta$ of equal length (this condition can be dropped). Take the differences in the log likelihoods: $$\begin{equation*} T(X) = \frac{1}{n}[l(\theta) - l(\eta)] = \frac{1}{n} \sum_{i=1}^n{\log\left(\frac{f_\theta(x_i)}{f_\eta(x_i)}\right)}. \end{equation*}$$ By the law of large numbers, this mean converges to its expected value. If the two distributions model the process equally well, the expected value is 0. By the central limit theorem, the distribution is normal. The only difficulty lies in finding the variance. I would estimate the variance by bootstrapping values of the test statistic $T(X)$. Actually, I'd just bootstrap the distribution of $T(X)$. This is known as the Vuong test. It can be extended to the partially nested case as well. As a broader comment, you can use any test statistic that you want to test a hypothesis. The only questions are: What is the distribution of the test statistic under the null? How powerful is this test statistic relative to alternatives that you care about?
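A rough R sketch of this idea for Gaussian versus gamma (not from the original answer; the data are simulated, the gamma is fitted by method of moments rather than full MLE for simplicity, and the bootstrap follows the suggestion above):
set.seed(1)
x <- rgamma(200, shape = 3, rate = 1)              # made-up positive data
ll.n <- dnorm(x, mean(x), sd(x), log = TRUE)       # pointwise log-likelihood under a fitted normal
m <- mean(x); v <- var(x)
ll.g <- dgamma(x, shape = m^2 / v, rate = m / v, log = TRUE)  # fitted gamma (method of moments)
d <- ll.g - ll.n                                   # pointwise log-likelihood differences
T.obs <- mean(d)
boot <- replicate(5000, mean(sample(d, replace = TRUE)))      # bootstrap the distribution of T(X)
c(T = T.obs, se = sd(boot), z = T.obs / sd(boot))  # compare z to a standard normal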
Can I use a likelihood ratio test when the error distributions differ?
I think that you understand that nested comparisons are well understood using the LRT. Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$,
Can I use a likelihood ratio test when the error distributions differ? I think that you understand that nested comparisons are well understood using the LRT. Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$, $f_\theta(x)$, the other parameterized by $\eta$, $f_\eta(x)$, with $\theta$ and $\eta$ of equal length (this condition can be dropped). Take the differences in the log likelihoods: $$\begin{equation*} T(X) = \frac{1}{n}[l(\theta) - l(\eta)] = \frac{1}{n} \sum_{i=1}^n{\log\left(\frac{f_\theta(x_i)}{f_\eta(x_i)}\right)}. \end{equation*}$$ By the law of large numbers, this mean converges to its expected value. If the two distributions model the process equally well, the expected value is 0. By the central limit theorem, the distribution is normal. The only difficulty lies in finding the variance. I would estimate the variance by bootstrapping values of the test statistic $T(X)$. Actually, I'd just bootstrap the distribution of $T(X)$. This is known as the Vuong test. It can be extended to the partially nested case as well. As a broader comment, you can use any test statistic that you want to test a hypothesis. The only questions are: What is the distribution of the test statistic under the null? How powerful is this test statistic relative to alternatives that you care about?
Can I use a likelihood ratio test when the error distributions differ? I think that you understand that nested comparisons are well understood using the LRT. Non-nested comparisons are also possible. Consider two alternative distributions, one parameterized by $\theta$,
48,829
Shifted intercepts in logistic regression
Your shifted, average score is: \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}} \end{align} Your question is basically "Is $M$ monotonically increasing in $\alpha$?" The answer is that it is not. Once you put the question this way, it is easy to see why it is not. At "most" values of $\alpha$, $M$ is going to be increasing. The numerator is increasing in $\alpha$ and the denominator is unaffected by $\alpha$. At a very few special values of $\alpha$ (the points for which $X_i\beta+\alpha=\delta$ for some $i$, $M$ will decrease in $\alpha$. Why? Because the numerator will increase by exactly $\delta$ at one of these points, and the denominator will also increase by exactly 1 at one of these points. Since the fraction is higher than $\delta$ all the time, this will drag it down at that point (in effect, you are suddenly including a new data point in your average which you know is less than the average was before you included it). The little down-ticks are where the data lie. To make the graph, I slightly modified your R program. I generate a single 100-observation dataset. Then I shift it using $\alpha=0$ and take the truncated mean of the scores. Then I shift it using $\alpha=0.001$ and use the truncated mean of the scores. And so on. Here is the R script: # This program written in response to a Cross Validated question # http://stats.stackexchange.com/questions/41267/shifted-intercepts-in-logistic-regression # The program graphs the expectation of a shifted logit score conditional on the score passing # some threshold. The conditional mean is not monotonic in the shift. library(faraway) library(plyr) set.seed(12344321) # simulation parameters vBeta <- rbind(0.1, 0.2, 0.3, 0.4) # vector of coefficients sDelta <- 0.16 # threshold for the scores # simulate the data mX <- matrix(rnorm(400, 4, 1), 100, 4) vY <- (0.4 + mX%*%vBeta + rt(n=100, df=7)>=5) data <- as.data.frame(cbind(vY,mX)) # logistic regression resLogitFit <- glm(V1~V2+V3+V4+V5, binomial(link = "logit"), data=data) raw.scores <- resLogitFit$fitted.values # mean of scores bigger than delta: mean(raw.scores[raw.scores>sDelta]) # Create mean greater than delta for a variety of alphas shift.logit.mean <- function(alpha,delta,scores){ alpha <- as.numeric(alpha) shifted <- ilogit(logit(scores) + alpha) return(mean(shifted[shifted>delta])) } results <- ddply(data.frame(alpha=seq(from=0,to=1000,by=1)/1000),.(alpha), shift.logit.mean,delta=sDelta,scores=raw.scores) names(results)[2]<-"shifted.scores" plot(results,type="l",main="Scores not monotonic in alpha") # Now. let's artificially pile up the data right near the delta cut point: raw.scores[1:10] <- sDelta - 1:10/1000 results <- ddply(data.frame(alpha=seq(from=0,to=1000,by=1)/1000),.(alpha), shift.logit.mean,delta=sDelta,scores=raw.scores) names(results)[2]<-"shifted.scores" plot(results,type="l",main="With scores piled up near delta") Now that we know that the graph is going to go down wherever there are data, it is easy to make it go down a lot. Just modify the data so that a whole bunch of scores are just a little less than $\delta$ (or, really, just grouped close together anywhere to the left of the original cut point). The R script above does that, and here is what you get: I got it to go down really fast for low values of $\alpha$ by piling up scores just to the left of $\delta$, the cut point. OK, so we know that, theoretically, there is not going to be any monotonicity result. 
What can we say? Not much, I think. Obviously, once $\alpha$ gets big enough that all the scores pass the cut point, the function is going to be monotonic and is going to asymptote at 1. That's about it, though. You can make the function go down, on average, locally by putting a lot of data points there. You can make the function go up locally by not putting any data points there. Now, suppose we have a really big dataset, so that it would be OK to approximate the sums by integrals ($f(X)$ is the multivariate density of $X$): \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\int_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha) f(X)}{\int_{p(X\beta+\alpha)>\delta}f(X)} \end{align} The derivative of this in $\alpha$ is kind of ugly. However, you get the same result. At $\alpha$s for which the denominator is increasing a lot (where $f(X)$ is high), you can get a negative derivative.
Shifted intercepts in logistic regression
Your shifted, average score is: \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}} \end{align} Your question is
Shifted intercepts in logistic regression Your shifted, average score is: \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}} \end{align} Your question is basically "Is $M$ monotonically increasing in $\alpha$?" The answer is that it is not. Once you put the question this way, it is easy to see why it is not. At "most" values of $\alpha$, $M$ is going to be increasing. The numerator is increasing in $\alpha$ and the denominator is unaffected by $\alpha$. At a very few special values of $\alpha$ (the points for which $X_i\beta+\alpha=\delta$ for some $i$, $M$ will decrease in $\alpha$. Why? Because the numerator will increase by exactly $\delta$ at one of these points, and the denominator will also increase by exactly 1 at one of these points. Since the fraction is higher than $\delta$ all the time, this will drag it down at that point (in effect, you are suddenly including a new data point in your average which you know is less than the average was before you included it). The little down-ticks are where the data lie. To make the graph, I slightly modified your R program. I generate a single 100-observation dataset. Then I shift it using $\alpha=0$ and take the truncated mean of the scores. Then I shift it using $\alpha=0.001$ and use the truncated mean of the scores. And so on. Here is the R script: # This program written in response to a Cross Validated question # http://stats.stackexchange.com/questions/41267/shifted-intercepts-in-logistic-regression # The program graphs the expectation of a shifted logit score conditional on the score passing # some threshold. The conditional mean is not monotonic in the shift. library(faraway) library(plyr) set.seed(12344321) # simulation parameters vBeta <- rbind(0.1, 0.2, 0.3, 0.4) # vector of coefficients sDelta <- 0.16 # threshold for the scores # simulate the data mX <- matrix(rnorm(400, 4, 1), 100, 4) vY <- (0.4 + mX%*%vBeta + rt(n=100, df=7)>=5) data <- as.data.frame(cbind(vY,mX)) # logistic regression resLogitFit <- glm(V1~V2+V3+V4+V5, binomial(link = "logit"), data=data) raw.scores <- resLogitFit$fitted.values # mean of scores bigger than delta: mean(raw.scores[raw.scores>sDelta]) # Create mean greater than delta for a variety of alphas shift.logit.mean <- function(alpha,delta,scores){ alpha <- as.numeric(alpha) shifted <- ilogit(logit(scores) + alpha) return(mean(shifted[shifted>delta])) } results <- ddply(data.frame(alpha=seq(from=0,to=1000,by=1)/1000),.(alpha), shift.logit.mean,delta=sDelta,scores=raw.scores) names(results)[2]<-"shifted.scores" plot(results,type="l",main="Scores not monotonic in alpha") # Now. let's artificially pile up the data right near the delta cut point: raw.scores[1:10] <- sDelta - 1:10/1000 results <- ddply(data.frame(alpha=seq(from=0,to=1000,by=1)/1000),.(alpha), shift.logit.mean,delta=sDelta,scores=raw.scores) names(results)[2]<-"shifted.scores" plot(results,type="l",main="With scores piled up near delta") Now that we know that the graph is going to go down wherever there are data, it is easy to make it go down a lot. Just modify the data so that a whole bunch of scores are just a little less than $\delta$ (or, really, just grouped close together anywhere to the left of the original cut point). The R script above does that, and here is what you get: I got it to go down really fast for low values of $\alpha$ by piling up scores just to the left of $\delta$, the cut point. 
OK, so we know that, theoretically, there is not going to be any monotonicity result. What can we say? Not much, I think. Obviously, once $\alpha$ gets big enough that all the scores pass the cut point, the function is going to be monotonic and is going to asymptote at 1. That's about it, though. You can make the function go down, on average, locally by putting a lot of data points there. You can make the function go up locally by not putting any data points there. Now, suppose we have a really big dataset, so that it would be OK to approximate the sums by integrals ($f(X)$ is the multivariate density of $X$): \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\int_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha) f(X)}{\int_{p(X\beta+\alpha)>\delta}f(X)} \end{align} The derivative of this in $\alpha$ is kind of ugly. However, you get the same result. At $\alpha$s for which the denominator is increasing a lot (where $f(X)$ is high), you can get a negative derivative.
Shifted intercepts in logistic regression Your shifted, average score is: \begin{align} M(X,\beta,\delta,\alpha) &= \frac{\sum_{p(X\beta+\alpha)>\delta} p(X\beta+\alpha)}{\sum \mathbf{1}_{p(X\beta+\alpha)>\delta}} \end{align} Your question is
48,830
Not standardizing outcome, standardizing predictors only
No, it's not really correct. The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable exception: The idea that standardizing independent variables makes it easier to compare the effects of one variable to another. This advantage is, in my opinion, somewhat illusory, since it depends on the range of data in your sample. Although it's a matter of some contention, I am generally against standardizing variables. Variables themselves are, in my view, easier to interpret than standard deviations of variables - we often have an intuitive sense about variables themselves. For example, if we were regressing weight on height, and left the units in pounds and inches (or kg and cm, if you're metric), then we have a sense of the meaning: "A height difference of 1 inch is related to a weight difference of 2 pounds" (or whatever). Further, inches and pounds stay the same from one sample to another; standard deviations do not.
Not standardizing outcome, standardizing predictors only
No, it's not really correct. The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable ex
Not standardizing outcome, standardizing predictors only No, it's not really correct. The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable exception: The idea that standardizing independent variables makes it easier to compare the effects of one variable to another. This advantage is, in my opinion, somewhat illusory, since it depends on the range of data in your sample. Although it's a matter of some contention, I am generally against standardizing variables. Variables themselves are, in my view, easier to interpret than standard deviations of variables - we often have an intuitive sense about variables themselves. For example, if we were regressing weight on height, and left the units in pounds and inches (or kg and cm, if you're metric), then we have a sense of the meaning: "A height difference of 1 inch is related to a weight difference of 2 pounds" (or whatever). Further, inches and pounds stay the same from one sample to another; standard deviations do not.
Not standardizing outcome, standardizing predictors only No, it's not really correct. The questions about (and advantages and disadvantages of) standardizing variables are very similar for dependent and independent variables, with one rather questionable ex
48,831
Do correlated and/or derived fields require special consideration when using Random Forest?
Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choices with the Gini index being the most commonly used one. In fact, I think it is beneficial to have highly correlated variables available for selection in building the model. This makes it possible to use good surrogate splits when certain variables used in the constructed tree are missing for a particular data point that you want to predict the outcome for in the case of regression or for the classification of a new case where some covariate is missing. Now Random Forest creates an ensemble of trees and if variables are highly correlated one may appear in one tree while a variable highly correlated with it may be absent in that particular tree but the situation might reverse for another tree. Since you are doing ensemble averaging and bootstrap bagging I think there is even less of an issue with highly correlated variables in Random Forest than there would be just using CART.
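A tiny illustration of that point with the randomForest package (simulated data; x1 and x2 are nearly collinear by construction):
library(randomForest)
set.seed(1)
x1 <- rnorm(500); x2 <- x1 + rnorm(500, sd = 0.1); x3 <- rnorm(500)
y  <- factor(x1 + x3 + rnorm(500) > 0)
dat <- data.frame(y, x1, x2, x3)
fit <- randomForest(y ~ ., data = dat, importance = TRUE)
importance(fit)   # the correlated pair tends to share importance across trees rather than degrade accuracy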
Do correlated and/or derived fields require special consideration when using Random Forest?
Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choi
Do correlated and/or derived fields require special consideration when using Random Forest? Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choices with the Gini index being the most commonly used one. In fact, I think it is beneficial to have highly correlated variables available for selection in building the model. This makes it possible to use good surrogate splits when certain variables used in the constructed tree are missing for a particular data point that you want to predict the outcome for in the case of regression or for the classification of a new case where some covariate is missing. Now Random Forest creates an ensemble of trees and if variables are highly correlated one may appear in one tree while a variable highly correlated with it may be absent in that particular tree but the situation might reverse for another tree. Since you are doing ensemble averaging and bootstrap bagging I think there is even less of an issue with highly correlated variables in Random Forest than there would be just using CART.
Do correlated and/or derived fields require special consideration when using Random Forest? Classification and regression trees do not have the same type of multicollinearity issues that you have in multiple linear regression. Splits are based on best-split criteria from which you have choi
48,832
How to test the hypothesis of dependency between price and demand
I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship between price and quantity demanded. It is a movement along the curve (the slope of which you care about), rather than a movement of the curve itself. A regression of price on quantity typically does not recover the slope or even its sign, because the demand curve moves over time for many reasons (competitor prices, for example). I would take a look at Eric Rasmusen's intro to demand estimation for an explanation. In short, for many products, you can try using marginal costs as an instrumental variable for price in the demand equation. The details really depend on what "manna" is and what the market structure looks like. There are also the NBER 2012 Summer Institute lectures and notes on demand estimation, which are more advanced.
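A hedged sketch of the instrumental-variables idea with simulated data (ivreg() is from the AER package; the marginal-cost instrument and the true elasticity of -1.5 are invented for illustration):
library(AER)                           # provides ivreg()
set.seed(1)
n  <- 200
mc <- runif(n, 1, 2)                   # marginal cost: shifts supply, assumed excluded from demand
u  <- rnorm(n)                         # unobserved demand shock
log_p <- 0.5 * mc + 0.5 * u + rnorm(n, sd = 0.1)
log_q <- 2 - 1.5 * log_p + u           # true price elasticity of demand is -1.5
dat <- data.frame(log_q, log_p, mc)
ols <- lm(log_q ~ log_p, data = dat)           # biased: price is correlated with the demand shock
iv  <- ivreg(log_q ~ log_p | mc, data = dat)   # instrument price with marginal cost
round(c(OLS = coef(ols)[["log_p"]], IV = coef(iv)[["log_p"]]), 2)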
How to test the hypothesis of dependency between price and demand
I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship betwe
How to test the hypothesis of dependency between price and demand I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship between price and quantity demanded. It is a movement along the curve (the slope of which you care about), rather than a movement of the curve itself. A regression of price on quantity typically does not recover the slope or even its sign, because the demand curve moves over time for many reasons (competitor prices, for example). I would take a look at Eric Rasmusen's intro to demand estimation for an explanation. In short, for many products, you can try using marginal costs as an instrumental variable for price in the demand equation. The details really depend on what "manna" is and what the market structure looks like. There are also the NBER 2012 Summer Institute lectures and notes on demand estimation, which are more advanced.
How to test the hypothesis of dependency between price and demand I think you need to be careful in distinguishing between demand and quantity demanded. Quantity demanded would rise when prices fall, not the demand itself, which is merely the relationship betwe
48,833
How to test the hypothesis of dependency between price and demand
Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you could look at this like analyzing an intervention. Take the set of paired differences between demand prior to the price drop and demand after the price drop and apply either a paired t test or a Wilcoxon signed rank test (with the choice depending on the appropriateness of the normality assumption on the paired difference).
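A minimal base-R sketch of that comparison (the before/after demand figures below are invented for illustration):
before <- c(120, 95, 150, 88, 132, 110, 101, 97, 143, 125)   # demand just before each price drop
after  <- c(138, 102, 161, 95, 151, 118, 99, 110, 160, 131)  # demand just after each price drop
t.test(after, before, paired = TRUE)        # paired t test on the differences
wilcox.test(after, before, paired = TRUE)   # Wilcoxon signed rank test if normality looks doubtful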
How to test the hypothesis of dependency between price and demand
Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you co
How to test the hypothesis of dependency between price and demand Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you could look at this like analyzing an intervention. Take the set of paired differences between demand prior to the price drop and demand after the price drop and apply either a paired t test or a Wilcoxon signed rank test (with the choice depending on the appropriateness of the normality assumption on the paired difference).
How to test the hypothesis of dependency between price and demand Although this is a time series that might best be handled through time series modeling, let us assume that the drops are independent and only (possibly depend on manna dropping slightly). Then you co
48,834
What are the rules / guidelines for downsampling?
If you keep all the positives from your data set you may find that you have skewed your results. A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so you have 100K +ve and 100K -ve the a priori +ve probability is now 1 in 2. Unless there is a large separation with little overlap between the two classes you will most likely create a strongly biased classifier. As a first step create a smaller stratified sub sample and see what performance you can achieve with that. Then you can investigate how your classifier behaves if you increase the percentage of +ves in the training set and use a test set with a much higher percentage of -ves. This should give you some idea of the sensitivity of your methods to class balance.
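A minimal sketch of building such a stratified subsample in base R (labels simulated at roughly 1% positives, echoing the question's class balance):
stratified_sample <- function(y, neg_per_pos = 4) {   # keep all positives, sample the negatives
  pos <- which(y == 1)
  neg <- sample(which(y == 0), neg_per_pos * length(pos))
  sort(c(pos, neg))
}
set.seed(1)
y   <- rbinom(1e5, 1, 0.01)                   # stand-in labels, ~1% positive
idx <- stratified_sample(y, neg_per_pos = 4)  # gives about 20% positives in the training subsample
mean(y[idx])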
What are the rules / guidelines for downsampling?
If you keep all the positives from your data set you may find that you have skewed your results. A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so yo
What are the rules / guidelines for downsampling? If you keep all the positives from your data set you may find that you have skewed your results. A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so you have 100K +ve and 100K -ve the a priori +ve probability is now 1 in 2. Unless there is a large separation with little overlap between the two classes you will most likely create a strongly biased classifier. As a first step create a smaller stratified sub sample and see what performance you can achieve with that. Then you can investigate how your classifier behaves if you increase the percentage of +ves in the training set and use a test set with a much higher percentage of -ves. This should give you some idea of the sensitivity of your methods to class balance.
What are the rules / guidelines for downsampling? If you keep all the positives from your data set you may find that you have skewed your results. A priori the probability of a positive in your data is about 1 in a hundred. If you down sample so yo
48,835
Presenting the error term in a quantile regression specification
These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit, but the fact that it is not shown in the equation should not be interpreted to mean that no error term is assumed. The author probably thinks that the error term is implicitly assumed.
Presenting the error term in a quantile regression specification
These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit, but the fact that it is not shown in the equation s
Presenting the error term in a quantile regression specification These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit, but the fact that it is not shown in the equation should not be interpreted to mean that no error term is assumed. The author probably thinks that the error term is implicitly assumed.
Presenting the error term in a quantile regression specification These are statistical models, so, of course, an error term is assumed. Most likely it is an additive error term. The book may not make it explicit, but the fact that it is not shown in the equation s
48,836
Presenting the error term in a quantile regression specification
When we state the model, the error is usually the thing which measures the accuracy of the model. I.e. if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there is no error there. Suppose your model $M$ is $f(X)$. Then $$y = f(X)+\varepsilon$$ For general $f$ this type of model statement generalizes very nicely to the model $$y = E(y|X) + \varepsilon,$$ since informally $E(y|X)$ is basically $f(X)$ for some $f$. Furthermore, the error $\varepsilon$ has the following nice property: $$E(\varepsilon|X)=0.$$ Now quantile regression specifies the model for the $\tau$-th quantile: $$Q_y(\tau|X)=g_\tau(X)$$ The conditional quantile function $Q_y(\tau|X)$ is defined as $$Q_y(\tau|X)=\inf\{v: P(y<v|X)>\tau\}=g_\tau(X)$$ Again, since we condition on $X$, in the general case we get that there must exist some $g_\tau$ which satisfies the equation. Note that in this case we model a different function of $y$, the conditional quantile, not the conditional expectation. This does not preclude us from defining $$\varepsilon = y - Q_y(\tau|X), $$ and writing $$y = Q_y(\tau|X) + \varepsilon, $$ but the error term now does not have nice properties. The conditional quantile function for $\varepsilon$ would be: $$Q_\varepsilon(\upsilon|X)=Q_y(\upsilon|X)-Q_y(\tau|X),$$ which ensures only that the $\tau$-th quantile of $\varepsilon$ is zero when the $\tau$-th quantile is used to model $y$. In both cases we can have a specification error, i.e. that, for example, the linear hypotheses $$f(X) = X\beta,$$ or $$g_\tau(X) = X\beta(\tau)$$ do not capture the true $f$ or $g_\tau$. But then we explicitly state that one model is the true one and another is an approximation, and the true model would not have the error in its definition.
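A small R sketch of the residual property described above, using the quantreg package (data simulated just for illustration):
library(quantreg)
set.seed(1)
x <- runif(200); y <- 1 + 2 * x + rt(200, df = 3)   # made-up data
fit <- rq(y ~ x, tau = 0.75)        # conditional 0.75-quantile regression
quantile(resid(fit), 0.75)          # approximately zero: the tau-th quantile of the residuals
mean(resid(fit) < 0)                # approximately tau: about 75% of residuals lie below zero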
Presenting the error term in a quantile regression specification
When we state the model, the error is usually the thing which measures the accuracy of the model. I.e. if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there
Presenting the error term in a quantile regression specification When we state the model, the error is usually the thing which measures the accuracy of the model. I.e. if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there is no error there. Suppose your model $M$ is $f(X)$. Then $$y = f(X)+\varepsilon$$ For general $f$ this type of model statement generalizes very nicely to the model $$y = E(y|X) + \varepsilon,$$ since informally $E(y|X)$ is basically $f(X)$ for some $f$. Furthermore, the error $\varepsilon$ has the following nice property: $$E(\varepsilon|X)=0.$$ Now quantile regression specifies the model for the $\tau$-th quantile: $$Q_y(\tau|X)=g_\tau(X)$$ The conditional quantile function $Q_y(\tau|X)$ is defined as $$Q_y(\tau|X)=\inf\{v: P(y<v|X)>\tau\}=g_\tau(X)$$ Again, since we condition on $X$, in the general case we get that there must exist some $g_\tau$ which satisfies the equation. Note that in this case we model a different function of $y$, the conditional quantile, not the conditional expectation. This does not preclude us from defining $$\varepsilon = y - Q_y(\tau|X), $$ and writing $$y = Q_y(\tau|X) + \varepsilon, $$ but the error term now does not have nice properties. The conditional quantile function for $\varepsilon$ would be: $$Q_\varepsilon(\upsilon|X)=Q_y(\upsilon|X)-Q_y(\tau|X),$$ which ensures only that the $\tau$-th quantile of $\varepsilon$ is zero when the $\tau$-th quantile is used to model $y$. In both cases we can have a specification error, i.e. that, for example, the linear hypotheses $$f(X) = X\beta,$$ or $$g_\tau(X) = X\beta(\tau)$$ do not capture the true $f$ or $g_\tau$. But then we explicitly state that one model is the true one and another is an approximation, and the true model would not have the error in its definition.
Presenting the error term in a quantile regression specification When we state the model, the error is usually the thing which measures the accuracy of the model. I.e. if we model variable $y$ with model $M$, the error is $y-M$. Hence when you state the model there
48,837
How to decide which decision tree classifier to use?
Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding. It is among the most used data mining algorithms. If J4.8 does not give you good enough solutions, try other algorithms. Random forests may give you a better solution, but they are not human readable and not as fast as J4.8 (due to training multiple trees in the process). I recommend the following strategy if you want to better understand how tree algorithms work. Read about J4.8 and how it is trained. Most tree algorithms use a variation of CART, ID3, C4.5, or C5.0. They are very similar conceptually. After that, read about boosting and ensemble methods. Read about random forests. They use ideas from the above methods. Read about other algorithms after these ones. For example, NBTree uses naive Bayes at the leaves. LMT uses "Classifier for building 'logistic model trees', which are classification trees with logistic regression functions at the leaves." There are also some practical issues to consider when choosing algorithms. I found that some algorithms are more memory hungry than others. I worked with a 4.8 million instance database, KDD99. I could train J4.8 with 4GB of RAM, but not random forests nor, for that matter, a lot of other algorithms (neural networks, SVM, etc.). Some other tree classifiers may not deal with your attributes or your class size; for example, ADTree only supports two-class problems. Some algorithms may not support date attributes, etc.
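For concreteness, a hedged sketch of comparing classifiers in R (assumes the RWeka package, which needs a working Java/Weka installation, and randomForest; iris is just a stand-in dataset):
library(RWeka)                                 # interface to Weka; J48 is Weka's J4.8 / C4.5
fit <- J48(Species ~ ., data = iris)
fit                                            # the printed tree is human readable
evaluate_Weka_classifier(fit, numFolds = 10)   # 10-fold cross-validated accuracy
library(randomForest)
rf <- randomForest(Species ~ ., data = iris)   # compare against a random forest on the same data
rf                                             # prints the out-of-bag error estimate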
How to decide which decision tree classifier to use?
Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding.
How to decide which decision tree classifier to use? Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding. It is among the most used data mining algorithms. If J4.8 does not give you good enough solutions, try other algorithms. Random forests may give you a better solution, but they are not human readable and not as fast as J4.8 (due to training multiple trees in the process). I recommend the following strategy if you want to better understand how tree algorithms work. Read about J4.8 and how it is trained. Most tree algorithms use a variation of CART, ID3, C4.5, or C5.0. They are very similar conceptually. After that, read about boosting and ensemble methods. Read about random forests. They use ideas from the above methods. Read about other algorithms after these ones. For example, NBTree uses naive Bayes at the leaves. LMT uses "Classifier for building 'logistic model trees', which are classification trees with logistic regression functions at the leaves." There are also some practical issues to consider when choosing algorithms. I found that some algorithms are more memory hungry than others. I worked with a 4.8 million instance database, KDD99. I could train J4.8 with 4GB of RAM, but not random forests nor, for that matter, a lot of other algorithms (neural networks, SVM, etc.). Some other tree classifiers may not deal with your attributes or your class size; for example, ADTree only supports two-class problems. Some algorithms may not support date attributes, etc.
How to decide which decision tree classifier to use? Start with J4.8 since it is fastest to train and generally gives good results. Also its output is human readable therefore you can see if it makes sense. It has tree visualizers to aid understanding.
48,838
Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution
There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformation and weights is really arbitrary, and a reasonable way to do it is to programme them into a spreadsheet and then alter the transformation and weights until you have something that fits your criteria for a good overall metric. My suggested approach is as follows: choose (what I will call) a high reference point for each metric. For instance, you could choose 10,000,000 for shares; 10,000 for comments, and so on. For shares, comments and LinkedIn, since you think they follow power laws, you could take logarithms of the metrics which will convert them to a more linear scale. Take a linear transformation of each metric so that it has a value that is 0 when the original value is 0, and 1 when the original value is the high reference point. For shares, this would be $log(shares + 1) / log(10000001)$. (You need to add 1 because log 0 is not defined). For Klout score it would be simply $klout / 100$. Multiply each transformed metric by a weighting and sum them. If you are able to define all your criteria for a good metric mathematically, then you might be able to determine appropriate weightings by solving a system of linear inequalities. But in most cases I would think trial and error using a spreadsheet (like this one) would be easier, as you will probably want to experiment and see what the effects are as you choose the criteria, rather than having rigid criteria in mind from the outset. In summary: $score = \sum_{i=1}^n w_i m_i/h_i + \sum_{i=n+1}^N w_i log(m_i + 1) / log(h_i + 1) $ where $m_1 ...m_n$ are the metrics you think are normally distributed and $m_{n+1} ... m_N$ are the metrics you think have a power distribution, $w$ are the weights and $h$ the high reference points. Some other possible approaches: instead of guessing a high reference point, you could guess the parameters of an exponential or normal distribution that would fit your metrics, then use a standard score (z-score) as your metric. It can be any value on the same order of magnitude as what you think is the maximum value for each metric. If you actually have a load of data already, then you could examine the actual distribution of each metric, and then use the mean and standard deviations of the existing data to calculate standard scores for both the existing and future data. (Your existing data would, in other words, be the reference point rather than picking an arbitrary high number such as 10,000,000). If you already have some data, you could explore applying a technique like weighted principal components analysis, which would look at how the different metrics vary together, and give you a linear combination of the metrics that would explain the maximum variation. But you would still have to transform the variables and choose weights for them beforehand.
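A small R sketch of the summed score above (the metric values, weights, and reference points are made-up examples, to be tuned by trial and error):
score <- function(m, w, h, power_law) {
  scaled <- ifelse(power_law, log(m + 1) / log(h + 1), m / h)  # log-transform the power-law metrics
  sum(w * scaled)
}
m <- c(shares = 5e5, comments = 120, klout = 35)    # one item's raw metrics
h <- c(1e7, 1e4, 100)                               # high reference points
w <- c(0.5, 0.3, 0.2)                               # weights
score(m, w, h, power_law = c(TRUE, TRUE, FALSE))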
Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution
There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformati
Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformation and weights is really arbitrary, and a reasonable way to do it is to programme them into a spreadsheet and then alter the transformation and weights until you have something that fits your criteria for a good overall metric. My suggested approach is as follows: choose (what I will call) a high reference point for each metric. For instance, you could choose 10,000,000 for shares; 10,000 for comments, and so on. For shares, comments and LinkedIn, since you think they follow power laws, you could take logarithms of the metrics which will convert them to a more linear scale. Take a linear transformation of each metric so that it has a value that is 0 when the original value is 0, and 1 when the original value is the high reference point. For shares, this would be $log(shares + 1) / log(10000001)$. (You need to add 1 because log 0 is not defined). For Klout score it would be simply $klout / 100$. Multiply each transformed metric by a weighting and sum them. If you are able to define all your criteria for a good metric mathematically, then you might be able to determine appropriate weightings by solving a system of linear inequalities. But in most cases I would think trial and error using a spreadsheet (like this one) would be easier, as you will probably want to experiment and see what the effects are as you choose the criteria, rather than having rigid criteria in mind from the outset. In summary: $score = \sum_{i=1}^n w_i m_i/h_i + \sum_{i=n+1}^N w_i log(m_i + 1) / log(h_i + 1) $ where $m_1 ...m_n$ are the metrics you think are normally distributed and $m_{n+1} ... m_N$ are the metrics you think have a power distribution, $w$ are the weights and $h$ the high reference points. Some other possible approaches: instead of guessing a high reference point, you could guess the parameters of an exponential or normal distribution that would fit your metrics, then use a standard score (z-score) as your metric. It can be any value on the same order of magnitude as what you think is the maximum value for each metric. If you actually have a load of data already, then you could examine the actual distribution of each metric, and then use the mean and standard deviations of the existing data to calculate standard scores for both the existing and future data. (Your existing data would, in other words, be the reference point rather than picking an arbitrary high number such as 10,000,000). If you already have some data, you could explore applying a technique like weighted principal components analysis, which would look at how the different metrics vary together, and give you a linear combination of the metrics that would explain the maximum variation. But you would still have to transform the variables and choose weights for them beforehand.
Designing a Simple (a.k.a 'bad') Ranking value from several values of unknown distribution There are a huge number of potential functions that could work. What you want is probably a weighted sum, but with some transformation of the variables prior to adding them. The choice of transformati
48,839
Friedman's test for binary data - possible or not?
When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q statistic) which is the extension of McNemar's test from 2 to several related samples. McNemar's uses exact binomial computation of p-value, while Cochran relies on normal approximation, although exact p is available too, via permutations approach.
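A quick numerical illustration in R (made-up 0/1 data; friedman.test is base R, and Cochran's Q is computed by hand for comparison, so the two statistics should coincide):
x <- matrix(c(1,1,0, 1,0,0, 1,1,1, 0,0,0, 1,0,1, 1,1,0, 1,0,0, 0,1,0),
            ncol = 3, byrow = TRUE)               # 8 subjects (rows) x 3 related conditions
friedman.test(x)                                  # Friedman applied to the dichotomous data
k <- ncol(x); G <- colSums(x); L <- rowSums(x)
Q <- k * (k - 1) * sum((G - mean(G))^2) / (k * sum(L) - sum(L^2))   # Cochran's Q
c(Q = Q, p.value = pchisq(Q, df = k - 1, lower.tail = FALSE))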
Friedman's test for binary data - possible or not?
When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q stati
Friedman's test for binary data - possible or not? When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q statistic) which is the extension of McNemar's test from 2 to several related samples. McNemar's uses exact binomial computation of p-value, while Cochran relies on normal approximation, although exact p is available too, via permutations approach.
Friedman's test for binary data - possible or not? When the repeated-measures or related-samples data are dichotomous Friedman nonparametric test degenerates into Cochran's Q test (Friedman's chi-square statistic becomes identical to Cochran's Q stati
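A minimal R sketch of the equivalence described in the answer above, on made-up binary repeated-measures data: the tie-corrected Friedman chi-square and Cochran's Q computed from its textbook formula should agree.
set.seed(1)
x <- matrix(rbinom(60, 1, 0.5), nrow = 20, ncol = 3)   # 20 subjects, 3 related conditions
friedman.test(x)                                       # Friedman test on the dichotomous matrix
k <- ncol(x); Ti <- rowSums(x); Cj <- colSums(x); N <- sum(x)
Q <- k * (k - 1) * sum((Cj - N / k)^2) / sum(Ti * (k - Ti))   # Cochran's Q
pchisq(Q, df = k - 1, lower.tail = FALSE)              # its chi-square p-value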
48,840
Conjugate prior for a binomial-like distribution
There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear function of the parameters in the log-likelihood makes it impossible for the data distribution to belong to an exponential family. Even though there's no conjugate prior, one possibility for a reasonable log-prior is $L_0({\bf p} ; {\bf j}, {\bf N} ) = \sum_i [j_i \log(p_i) + (N_i - j_i) \log (1 - p_i)]$ You can think of this log-prior as equivalent to a log-likelihood for a data set in which each person did a set of questions alone and correctly answered $j_i$ out of $N_i$. This interpretation allows you to set the prior parameters ${\bf j}$ and ${\bf N}$ in a reasonably intuitive way. I'd be somewhat surprised if even small values of $N_i$ (e.g., 2 to 4) did not provide good regularization. Note that $j_i$ and $N_i$ need not be integers. It seems to me that you're thinking of using the plug-in predictive distribution. May I suggest you go full Bayes and use the posterior predictive distribution instead? It would require MCMC, which may be more trouble than you're willing to go to. (If you're using Matlab I can recommend an MCMC routine that would shorten your coding time considerably.)
Conjugate prior for a binomial-like distribution
There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear functio
Conjugate prior for a binomial-like distribution There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear function of the parameters in the log-likelihood makes it impossible for the data distribution to belong to an exponential family. Even though there's no conjugate prior, one possibility for a reasonable log-prior is $L_0({\bf p} ; {\bf j}, {\bf N} ) = \sum_i [j_i \log(p_i) + (N_i - j_i) \log (1 - p_i)]$ You can think of this log-prior as equivalent to a log-likelihood for a data set in which each person did a set of questions alone and correctly answered $j_i$ out of $N_i$. This interpretation allows you to set the prior parameters ${\bf j}$ and ${\bf N}$ in a reasonably intuitive way. I'd be somewhat surprised if even small values of $N_i$ (e.g., 2 to 4) did not provide good regularization. Note that $j_i$ and $N_i$ need not be integers. It seems to me that you're thinking of using the plug-in predictive distribution. May I suggest you go full Bayes and use the posterior predictive distribution instead? It would require MCMC, which may be more trouble than you're willing to go to. (If you're using Matlab I can recommend an MCMC routine that would shorten your coding time considerably.)
Conjugate prior for a binomial-like distribution There's no conjugate prior for this likelihood. Likelihoods that admit conjugate distributions correspond to data distributions that are members of some exponential family. Having a non-linear functio
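A minimal R sketch of the suggested log-prior from the answer above; the numeric values of p, j and N are purely illustrative pseudo-data, not taken from the original question.
log_prior <- function(p, j, N) sum(j * log(p) + (N - j) * log(1 - p))   # one term per person
log_prior(p = c(0.6, 0.7, 0.8), j = c(1, 2, 2), N = c(3, 3, 4))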
48,841
How to tell how extreme an outlier is?
Actually, neither: compute how extreme that point is with respect to a robust estimator of location $l_x$, using a robust estimator of scale $s_x$. In essence: if your original point was an outlier, you will have essentially ignored it in the computation of $(l_x,s_x)$; if your original point was not an outlier, it will have had a negligible influence on $(l_x,s_x)$. Here is an article that will help you think clearly about this problem.
How to tell how extreme an outlier is?
Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, yo
How to tell how extreme an outlier is? Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, you will be essentially ignoring it in the computation of $(l_x,s_x)$. if your original point was not an outlier, it will have a negligible influence on $(l_x,s_x)$. Here is an article that will help you think clearly about this problem.
How to tell how extreme an outlier is? Actually, neither: compute how that point is extreme with respect to a robust estimator of location $l_x$ using a robust estimator of scale $s_x$. In essence, if your original point was an outlier, yo
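A small R sketch of the idea in the answer above, using the median and MAD as one possible choice of robust location and scale; the data are simulated.
set.seed(1)
x <- c(rnorm(50), 12)                  # 50 well-behaved points plus one gross outlier
robust_z <- (x - median(x)) / mad(x)   # robust analogue of a z-score
robust_z[51]                           # very large, so the last point is flagged as extreme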
48,842
How to tell how extreme an outlier is?
Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on its help page. It would be good to look through more discussion on the topic; one place to start is Wikipedia, which also covers some of the other methods of looking at outliers. Also think about what ammunition you are giving to critics of your results if you remove outliers.
How to tell how extreme an outlier is?
Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page. It would be go
How to tell how extreme an outlier is? Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page. It would be good to look through more discussion on the topic, one place to start is wikipedia. It also includes some of the other methods of looking at outliers. Also think about what ammunition you are giving to critics of your results if you remove outliers.
How to tell how extreme an outlier is? Before you get too comfortable with removing "outliers" you might want to look at the outliers dataset in the TeachingDemos package for R and work through the examples on the help page. It would be go
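If the TeachingDemos package is installed, the dataset and help page pointed to in the answer above can be pulled up like this (the dataset name is taken from that answer):
library(TeachingDemos)
data(outliers)   # the example dataset referred to above
?outliers        # help page with the worked examples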
48,843
How to tell how extreme an outlier is?
Here's some advice available on the web, from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site. Why don't simple outlier methods work? The argument against our competition. For a couple of reasons: It wasn't an outlier. It was a seasonal pulse. The observations outside the 2 or 3 sigma bounds could in fact be a newly formed seasonal pattern. For example, halfway through the time series the Junes become very high when they had been average. Simple approaches would just remove anything outside the bounds, which could be "throwing the baby out with the bathwater". Your 3 sigma calculation was skewed by the outlier itself. It is a chicken-and-egg dilemma: the outliers make the sigma wide, so that you miss outliers. The outlier was in fact a promotion. Using just the history of the series is not enough; you should include causals, as they can help explain what is perceived to be an outlier. Now let's consider the inlier. There can be unusual observations within the 3 sigma bounds, for example an observation near the mean. When could a value near the mean be unusual? When the observation should have been high and it just wasn't, for some reason. Simple methods force the user to specify the number of times the system should iterate to remove outliers. You are then asked how many times you want to iterate to find the interventions by the forecasting tool. Is this intelligence or a crutch? You are somehow supposed to provide some empirically based guidance? You don't know, as it would be just a guess. The reality is that simple methods/software use a process where they assume a "mean model" to determine the outliers. The correct way is to build a model and identify the outliers at the same time. Sounds simple, right? Does anyone have any other examples of bad outlier methodologies, or other software with their examples posted?
How to tell how extreme an outlier is?
Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site. Why don't
How to tell how extreme an outlier is? Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site. Why don't simple outlier methods work? The argument against our competition. For a couple of reasons: It wasn't an outlier. It was a seasonal pulse. The observations outside of the 2 or 3 sigma bounds could in fact be a newly formed seasonal pattern. For example, halfway through the time series June's become become very high when it had been average. Simple approaches would just remove anything outside the bounds which could be throwing the "baby out with the bathwater". Your 3 sigma calculation was skewed due to the outlier itself. It is a chicken and egg dilemma. The outliers make the sigma wide so that you miss outliers. The outlier was in fact a promotion. Using just the history of the series is not enough. You should include causals as they can help explain what is perceived to be an outlier. Now let's consider the inlier. There could be outliers that are within 3 sigma and let's say the observation is near the mean. When could the mean be unusual? When the observation should have been high and it just didn't for some reason. Simple methods force the user to specify the # of times the system should iterate to remove outliers. You are then asked how many times do you want to iterate to find the interventions by the forecasting tool? Is this intelligence or a crutch? So, you are somehow supposed to provide some empirically based guidance??? You don't know as it would be just a guess. The reality is that Simple methods/software use a process where they assume a "mean model" to determine the outliers. The correct way is to build a model and identify the outliers at the same time. Sounds simple, right? Does anyone have any other examples of bad outlier methodologies? or other software with their examples posted?
How to tell how extreme an outlier is? Here's some advice available on the web : from http://www.autobox.com/cms/index.php/blog , a software site that focuses on this subject. I am involved in software development for this site. Why don't
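A tiny R illustration of the "chicken and egg" point in the post above: in a short series, a single outlier inflates the sample sigma enough that the usual 3-sigma rule no longer flags it clearly (the data are simulated, so the numbers are only approximate).
set.seed(2)
x <- c(rnorm(9, mean = 100, sd = 2), 130)     # nine ordinary values plus one spike
(x[10] - mean(x)) / sd(x)                     # roughly 2-3: the spike partly masks itself
(x[10] - mean(x[-10])) / sd(x[-10])           # far larger (around 15): obvious against the clean data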
48,844
Details regarding the delete-a-group jackknife
PSU, or the primary sampling unit, is the object or group of objects that you sample in the first stage of a multi-stage sample. Typically, in large-scale national studies, this could be a county or a census tract. Then you go down to the level of city blocks (secondary sampling units), dwellings, households, and individuals. So when you sampled Autauga County, Alabama (one of 3K+ counties in the US, the first that comes up on the standard lists), you have to think of the 50,000 people that live in it as a single unit for variance estimation purposes. Of course, you would likely subsample this county, and end up interviewing maybe 10 people. However, most of the contribution to the variance comes from the first stage, especially when observations within the PSU are similar to one another. This is the standard formula for the variance of a clustered sample; common knowledge, if you like. There is no simple explanation for it short of deriving it from first principles. You would have to look at a standard survey statistics book, such as Lohr 2009, Korn & Graubard 1999 or Thompson 1997 (in increasing order of complexity and mathematical rigor). The first principles of finite population sampling are really orthogonal to anything you've learned in statistics (be it mainstream or Bayesian or machine learning). What you measure on the sample elements is considered fixed (someone's weight or height or color of their eyes; and that makes sense, except for some measurement error: your height tomorrow should not differ from your height today, so how can it be random?). What is random, however, are the indicators of the finite population elements being taken into the sample. In other words, if you talk about sampling 1000 people from the US population, you are talking about a 300-million-dimensional vector that has zeroes for most people who did not make it into the sample, and ones for the 1000 people who were sampled. Thus, the probability spaces that you would encounter in the world of sample surveys are discrete (although combinatorially huge), and so are the sampling distributions of the sample statistics, although the latter would sometimes be well approximated by normal distributions. The CLT-type justifications, however, are way more complicated in survey statistics, as appropriate CLTs have only been proven in limited contexts of specific sampling designs. You would need to get used to thinking in terms of totals (because they are the only linear statistics of the random elements); to the weighted mean being a biased estimator of the population mean (as it is a ratio estimator, i.e., a non-linear statistic); and to variance estimation being an order of magnitude more complex than point estimation. While Phil Kott is a very wise guy who writes quite well, I doubt that this paper is a good starting point on survey statistics. You must have been thrown into this quite harshly, I gather, to have to read this out of the blue.
Details regarding the delete-a-group jackknife
PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or
Details regarding the delete-a-group jackknife PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or a census tract. Then you go down to the level of city blocks (secondary sampling units), dwellings, households, and individuals. So when you sampled Autauga County, Alabama (one of 3K+ counties in the US, the first that comes on the standard lists), you have to think of the 50,000 people that live in it as a single unit for variance estimation purposes. Of course, you would likely subsample this county, and end up interviewing may be 10 people. However, most of the contribution to the variance comes from the first stage, especially when observations with the PSU are similar to one another. This is the standard formula for the variance of a clustered sample; a common knowledge, if you like. There is no simple explanation for it sans the derivation from the first principles. You would have to look at a standard survey statistics book, such as Lohr 2009, Korn & Graubard 1999 or Thompson 1997 (in an increasing order of complexity and mathematical rigor). The first principles of finite population sampling are really orthogonal to anything you've learned in statistics (be it mainstream or Bayesian or machine learning). What you measure on the sample elements is considered fixed (someone's weight or height or color of their eyes; and that make sense, except for some measurement error: your height tomorrow should not differ from your height today, so how can it be random?). What is random, however, are the indicators of the finite population elements being taken into the sample. In other words, if you talk about sampling 1000 people from US population, you are talking about a 300-million dimensional vector that has zeroes for most people who did not make it to the sample, and ones for the 1000 people who were sampled. Thus, the probability spaces that you would encounter in the world of sample survey are discrete (although combinatorially huge), and so are the sampling distributions of the sample statistics, although the latter would sometimes be well approximated by the normal distributions. The CLT-type justifications, however, are way more complicated in survey statistics, as appropriate CLTs have only been proven in limited contexts of specific sampling designs. You would need to get used to thinking in terms of the totals (because they are the only linear statistics of the random elements); to the weighted mean being a biased estimator of the population mean (as it is a ratio estimator, i.e., a non-linear statistic); and to variance estimation being an order of magnitude more complex than point estimation. While Phil Kott is a very wise guy who writes quite well, I doubt that this paper is a good starting point on survey statistics. You must have been thrown into this quite harshly, I gather, to have to read this out of blue sky.
Details regarding the delete-a-group jackknife PSU, or the primary sampling unit, is the object or a group of objects what you sample in the first stage of a multi-stage sample. Typically in large scale national studies, this could be a county or
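A brief R sketch of PSU-based variance estimation using the survey package and its bundled api data. This is not the delete-a-group jackknife from the Kott paper itself, only the closely related delete-one-PSU jackknife, shown purely to illustrate how the PSU structure enters the variance.
library(survey)
data(api)                                                 # example data shipped with the package
des <- svydesign(id = ~dnum, weights = ~pw, fpc = ~fpc, data = apiclus1)   # dnum identifies the PSUs
svymean(~api00, des)                                      # linearization SE accounting for clustering
jk <- as.svrepdesign(des, type = "JK1")                   # delete-one-PSU jackknife replicate weights
svymean(~api00, jk)                                       # jackknife SE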
48,845
What are U-type statistics?
From the comments and the answer I gather that "U-type statistics" is jargon for "U-statistics". Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. A U-statistic of degree or order $r$ is based on a permutation-symmetric kernel function $h$ of arity $r$, $$ h(x_1, ..., x_r): \mathbb{X}^r \rightarrow \mathbb{R}, $$ and is the average of that function taken over all possible subsets of observations from the sample. More formally, $$ U = \frac{1}{\binom{n}{r}} \sum_{\Pi_r(n)}h(x_{\pi_1}, ..., x_{\pi_r}), $$ where the sum is taken over $\Pi_r(n)$, the set of all unordered subsets of size $r$ chosen from $\{1, ..., n\}$. The interest of U-statistics is that they are asymptotically Gaussian (after centering and scaling) provided $E \{ h^2(X_1, ..., X_r) \} < \infty$ and the kernel is non-degenerate. Example 1: The sample mean is a first-order U-statistic with $h(x) = x$. Example 2: The signed rank statistic is a second-order U-statistic with $h(x_1, x_2) = 1_{\mathbb{R}^+}(x_1+x_2)$ (the function that is equal to $1$ if $x_1 + x_2 > 0$, and $0$ otherwise). $$ U = \frac{1}{\binom{n}{2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}(x_i+x_j) $$ is the proportion of pairs $(x_i, x_j)$ from the sample with positive sum $x_i+x_j > 0$ and can be used as a test statistic for investigating whether the distribution of the observations is located at 0. Example 3: The domain $\mathbb{X}$ of $h$ need not be real. Kendall's $\tau$ statistic is a second-order U-statistic with kernel $h((x_1, y_1), (x_2, y_2)) = 2 \, 1_{\mathbb{R}^+}((y_2-y_1)(x_2-x_1)) - 1$. $$ \tau = \frac{2}{\binom{n}{2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}((y_j-y_i)(x_j-x_i)) - 1 $$ is a measure of dependence between $X$ and $Y$, built from the number of concordant pairs $(x_i, y_i)$ and $(x_j, y_j)$ in the observations.
What are U-type statistics?
From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics". Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer.
What are U-type statistics? From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics". Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. A U-statistics of degree or order $r$ is based on a permutation symmetric kernel function $h$ of arity $r$ $$ h(x_1, ..., x_r): \mathbb{X}^r \rightarrow \mathbb{R}, $$ and is the average of that function taken over all possible subsets of observations from the sample. More formally $$ U = \frac{1}{\left( \array{n\\r} \right)} \sum_{\Pi_r(n)}h(x_{\pi_1}, ..., x_{\pi_r}), $$ where the sum is taken over $\Pi_r$, the set of all unordered subsets chosen from $\{1, ..., n\}$. The interest of U-statistics is that they are asymptotically Gaussian provided $E \{ h^2(X_1, ..., X_r) \} < \infty$. Example 1: The sample mean is a first order U-statistics with $h(x) = x$. Example 2: The signed rank statistic is a second order U-statistics with $h(x_1, x_2) = 1_{\mathbb{R}^+}(x_1+x_2)$ (the function that is equal to $1$ if $x_1 + x_2 > 0$, and $0$ otherwise). $$ U = \frac{1}{\left( \array{n\\2} \right)} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}(x_i+x_i) $$ is the sum of pairs $(x_i, x_j)$ from the sample with positive sum $x_i+x_j > 0$ and can be used as test statistic for investigating whether the distribution of the observations is located at 0. Example 3: The unit definition space $\mathbb{X}$ of $h$ need not be real. Kendall's $\tau$ statistics is a second order U-statistics with $\frac{1}{2} h((x_1, y_1), (x_2, y_2)) = 1_{\mathbb{R}^+}((y_2-y_1)(x_2-x_1)) - 1$. $$ \tau = \frac{2}{\left( \array{n\\2} \right)} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}((y_2-y_1)(x_2-x_1)) - 1 $$ is a measure of dependence between $X$ and $Y$ and counts the number of concordant pairs $(x_i, y_i)$ and $(x_j, y_j)$ in the observations.
What are U-type statistics? From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics". Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer.
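A short R sketch of Example 3 above: Kendall's tau computed directly from its U-statistic representation (the average of the kernel over all pairs), compared with R's built-in estimate; the data are simulated and tie-free, so the two should agree.
set.seed(42)
n <- 30; x <- rnorm(n); y <- 0.5 * x + rnorm(n)
ij <- combn(n, 2)                                               # all unordered pairs (i, j)
h  <- 2 * ((y[ij[2, ]] - y[ij[1, ]]) * (x[ij[2, ]] - x[ij[1, ]]) > 0) - 1   # kernel values
mean(h)                                                         # U-statistic with the kernel above
cor(x, y, method = "kendall")                                   # built-in Kendall's tau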
48,846
What are U-type statistics?
We have established that U-statistics are what the OP is looking for. I will address his second question, about the order of U-statistics. The theory of U-statistics can be found in many books on nonparametrics and, I am sure, also in the various statistical encyclopedias. Here is a nice article by Tom Ferguson that summarizes the theory; I think it is actually a class tutorial. Here is what he says about order; the rest you can find in the paper. 5. Degeneracy. When using U-statistics for testing hypotheses, it occasionally happens that at the null hypothesis, the asymptotic distribution has variance zero. This is a degenerate case, and we cannot use Theorem 2 to find approximate cutoff points. The general definition of degeneracy for a U-statistic of order $m$ and variances $\sigma_1^2 \leq \sigma_2^2 \leq ... \leq \sigma_m^2$ given by (19) is as follows. Definition 3. We say that a U-statistic has a degeneracy of order $k$ if $\sigma_1^2 = \cdots = \sigma_k^2 = 0$ and $\sigma^2_{k+1} > 0$. http://www.math.ucla.edu/~tom/Stat200C/Ustat.pdf
What are U-type statistics?
We have established that U-statistics are what the OP is looking for. I will address his second question about orer of U-statistics. The theory of U-statistics can be found in many books on nonparam
What are U-type statistics? We have established that U-statistics are what the OP is looking for. I will address his second question about orer of U-statistics. The theory of U-statistics can be found in many books on nonparametrics and I am sure also in the various statistical encyclopedias. Here is a nice article by Tom Ferguson that summarizes the theory. I think it is actually a class tutorial on it. Here is what he says about order. The rest you can find in the paper 5. Degeneracy. When using U-statistics for testing hypotheses, it occasionally happens that at the null hypothesis, the asymptotic distribution has variance zero. This is a degenerate case, and we cannot use Theorem 2 to find approximate cutoff points. The general definition of degeneracy for a U-statistic of order $m$ and variances, $\sigma_1^2 \leq \sigma_2^2 \leq ... \leq \sigma_m^2$ given by (19) is as follows. Definition 3. We say that a U-statistic has a degeneracy of order $k$ if $\sigma_1^2 = · · · = \sigma_k^2 = 0$ and $\sigma^2_{k+1} > 0$. http://www.math.ucla.edu/~tom/Stat200C/Ustat.pdf
What are U-type statistics? We have established that U-statistics are what the OP is looking for. I will address his second question about orer of U-statistics. The theory of U-statistics can be found in many books on nonparam
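A standard worked example of degeneracy (the classical textbook case, not taken from the Ferguson notes quoted above), written out for concreteness. Take i.i.d. $X_1, \dots, X_n$ with $E X = 0$, $0 < E X^2 = \sigma^2 < \infty$, and the order-2 kernel $h(x_1, x_2) = x_1 x_2$. Then $$ h_1(x_1) = E\,h(x_1, X_2) = x_1\,E X_2 = 0, \qquad \sigma_1^2 = \operatorname{Var}\big(h_1(X_1)\big) = 0, $$ while $\sigma_2^2 = \operatorname{Var}(X_1 X_2) = \sigma^4 > 0$, so by Definition 3 this U-statistic has a degeneracy of order 1. Instead of a $\sqrt{n}$-rate Gaussian limit, one can show that $nU$ converges in distribution to $\sigma^2(\chi^2_1 - 1)$.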
48,847
How to combine several time series into a useful average time series?
I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for. In particular, a VAR with a linear time trend or period-specific deterministic component would give you a nice summary statistic for the overall trend in a given period. See for example equation (11.4) here: http://faculty.washington.edu/ezivot/econ584/notes/varModels.pdf You could then consider your series d as analogous to the exogenous variable in impulse response modeling (or the X in the equation listed above) and see its joint effect on the vector of Y's (your series a-c). The F-test seems to be the standard way to do this. Example Here's an example (using the vars package, which provides VAR() and causality()) that I think shows what you are looking to test: library(vars) set.seed(1) n <- 10 dat <- data.frame( a = runif(n), b = runif(n), c = runif(n) ) dat$d <- apply( dat[ c(1,1:9), ], 1, function(x) x[1] + x[2]*3 + x[3]*4 + runif(1) ) mdl <- VAR( dat, p=1, type="const" ) causality(mdl,cause="d") $Granger Granger causality H0: d do not Granger-cause a b c data: VAR object mdl F-Test = 3.8527, df1 = 3, df2 = 16, p-value = 0.02994 $Instant H0: No instantaneous causality between: d and a b c data: VAR object mdl Chi-squared = 2.6609, df = 3, p-value = 0.4469 The null Granger hypothesis is rejected at p<.05. So a shock in d is useful in predicting future values of (a,b,c).
How to combine several time series into a useful average time series?
I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for. In particular, a VAR with a linear time trend or perio
How to combine several time series into a useful average time series? I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for. In particular, a VAR with a linear time trend or period-specific deterministic component would give you a nice summary statistic for the overall trend in a given period. See for example equation (11.4) here: http://faculty.washington.edu/ezivot/econ584/notes/varModels.pdf You could then consider your series d as analagous to the exogenous variable in impulse response modeling (or the X in the equation listed above) and see its joint effect on the vector of Y's (your series a-c). The F-test seems to be the standard way to do this. Example Here's an example that (I think) shows what you are looking to test: set.seed(1) n <- 10 dat <- data.frame( a = runif(n), b = runif(n), c = runif(n) ) dat$d <- apply( dat[ c(1,1:9), ], 1, function(x) x[1] + x[2]*3 + x[3]*4 + runif(1) ) mdl <- VAR( dat, p=1, type="const" ) causality(mdl,cause="d") $Granger Granger causality H0: d do not Granger-cause a b c data: VAR object mdl F-Test = 3.8527, df1 = 3, df2 = 16, p-value = 0.02994 $Instant H0: No instantaneous causality between: d and a b c data: VAR object mdl Chi-squared = 2.6609, df = 3, p-value = 0.4469 The null Granger hypothesis is rejected at p<.05. So a shock in d is useful in predicting future values of (a,b,c).
How to combine several time series into a useful average time series? I am not particularly familiar with them, but it seems to me that this would be a reasonable case to use a VAR (vector autoregression) model for. In particular, a VAR with a linear time trend or perio
48,848
Evidence on red-purple-blue graphs
To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above-cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utilizing ggplot2 graphics? It appears the default for scale_color_gradient is blue to red. It appears to me to be a default interpolation along LAB color space as opposed to RGB (so I'm not sure as to the exact transformation), but the result appears pretty similar. Below is an example in R for various mixings of red and blue while holding green at a constant 0. red <- rep(seq(15,255,15),16) blue <- rep(seq(15,255,15), each = 16) color <- rgb(red = red, green = 0, blue = blue, maxColorValue = 255) plot(x = red, y = blue, col = color, pch = 19, cex = 3) To elaborate on why this is a bad choice (as is written on the help(scale_color_gradient) page): for sequential color scales (i.e., from low to high) you typically want to keep hue constant and vary chroma and luminance (where chroma and luminance are defined in the Munsell color scale). Or, more straightforwardly, people don't typically interpret varying hues as either higher or lower value, but people can typically associate darker or lighter colors on an ordinal scale. A blue to red interpolation like this might be a defensible choice for a diverging color scheme, but typically we want more contrast between the shades. See the scale_gradient2 help page for some examples. So, in line with gestalt principles of visual perception, I would suggest rewriting the plot cited as below: (p + geom_point(aes(x = month, y = year, size = Value, colour = VIX),shape=16, alpha=0.80) + scale_colour_gradient(limits = c(10, 60), low="red", high="black", breaks= seq(10, 60, by = 10)) + scale_x_continuous(breaks = 1:12, labels=c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")) + scale_y_continuous(trans = "reverse") + theme_bw() + opts(panel.grid.minor=theme_blank(), panel.grid.major=theme_blank()) ) This is certainly a difficult visual task, as the small dots need some color to be able to distinguish between them and the background (I removed the gridlines and grey background to provide more contrast). Other graphical options may be to scale the points so the smallest points are slightly larger and utilize an outline so they are more obviously distinguished from the background. But, IMO, a more fruitful approach is not via the heatmap, but by sprucing up the line plot (see a similar discussion on birthdays by day of year by Andrew Gelman). (p + geom_line(aes(x = Date, y = Value), alpha = 0.2) + geom_point(aes(x=Date, y=Value, size=VIX), shape=1) ) For other references on utilizing color in plots I would highly suggest the work of the cartographer Cynthia Brewer. Her ColorBrewer scales are widely implemented and are becoming a de facto standard for generating color scales.
Evidence on red-purple-blue graphs
To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utiliz
Evidence on red-purple-blue graphs To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utilizing ggplot2 graphics? It appears the default for scale_color_gradient is blue to red. It appears to me to be a default interpolation along LAB color space as oppossed to RGB (so I'm not sure as to the exact transformation), but the result appears pretty similar. Below is an example in R for varyious mixing of Red and Blue while holding green at a constant 0. red <- rep(seq(15,255,15),16) blue <- rep(seq(15,255,15), each = 16) color <- rgb(red = red, green = 0, blue = blue, maxColorValue = 255) plot(x = red, y = blue, col = color, pch = 19, cex = 3) To elaborate on why this is a bad choice (as is written on the help(scale_color_gradient) page) for sequential color scales (i.e. from low to high) you typically want to keep hue constant and vary chroma and luminance (where chroma and luminance are defined in the Munsell color scale). Or, more straightforward, people don't typically interpret varying hues as either higher or lower value, but people can typically associate darker or lighter colors on an ordinal scale. A blue to red interpolation like this might be defensible choice for a diverging color scheme, but typically we want more contrast between the shades. See the scale_gradient2 help page for some examples. So, in line with gestalt principles of visual perception, I would suggest rewriting the plot cited as below; (p + geom_point(aes(x = month, y = year, size = Value, colour = VIX),shape=16, alpha=0.80) + scale_colour_gradient(limits = c(10, 60), low="red", high="black", breaks= seq(10, 60, by = 10)) + scale_x_continuous(breaks = 1:12, labels=c("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")) + scale_y_continuous(trans = "reverse") + theme_bw() + opts(panel.grid.minor=theme_blank(), panel.grid.major=theme_blank()) ) This is certainly a difficult visual task, as the small dots need some color to be able to distinguish them between and the background (I removed the gridlines and grey background to provide more contrast). Other graphical options may be to scale the points so the smallest points are slightly larger and utilize an outline so they are more obviously distinguished from the background. But, IMO, a more fruitful approach is not via the heatmap, but by sprucing up the line plot (see a similar discussion on birthdays by day of year by Andrew Gelman). (p + geom_line(aes(x = Date, y = Value), alpha = 0.2) + geom_point(aes(x=Date, y=Value, size=VIX), shape=1) ) For other references on utilizing color in plots I would highly suggest the work of the cartographer Cynthia Brewer. Her ColorBrewer scales are widely implemented and are becoming a defacto standard for generating color scales.
Evidence on red-purple-blue graphs To elaborate on my comment above, my suspicion was partly confirmed by the code posted to create the above cited graph at the knowledge discovery blog. Do you perhaps see many of these examples utiliz
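A small R sketch of the kind of sequential palette recommended above (roughly constant hue, varying lightness), via the RColorBrewer implementation of Cynthia Brewer's scales:
library(RColorBrewer)
display.brewer.pal(7, "Blues")   # plot the 7-class sequential "Blues" palette
brewer.pal(7, "Blues")           # the corresponding hex codes, usable as a colour vector in base or ggplot2 graphics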
48,849
Ordered logit with (too many?) categorical independent variables
Using ordinary least squares (OLS) does not solve the problem you are facing; it only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equally spaced. If you are comfortable with this assumption, push the OLS button and try to convince your audience. I would tackle the problem differently. You have already mentioned one of the solutions: indeed, it could make sense to recode the control variables and to reduce the number of categories. Sparsely populated categories could be merged with other categories. Use your topical knowledge to merge and redefine categories. You can also try to recode the dependent variable. Even on a 10-point scale, responses are usually clustered around some modalities. Again, guided by topical knowledge, you could redefine the dependent variable. This topic is not new on CrossValidated. Under the Likert tag you will find plenty of discussions that may be of interest to you.
Ordered logit with (too many?) categorical independent variables
Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equa
Ordered logit with (too many?) categorical independent variables Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equally spaced. If you are comfortable with this assumption, push the OLS button and try to convince your audience. I would tackle the problem differently. You have already mentioned of the solutions. Indeed, it could make sense to recode the control variables and to reduce the number of categories. Sparsely populated categories could be merged to other categories. Use your topical knowledge to merge and redefine categories. You can also try to recode the dependent variables. Even on a 10 point scale, responses are usually clustered around some modalities. Again, guided by topical knowledge, you could redefine the dependent variable. This topic is not new on CrossValidated. Under the Likert tag you will find plenty of discussions that may be of interest to you.
Ordered logit with (too many?) categorical independent variables Using ordinary least squares (OLS) does not solve the problem you are facing. It only assumes it away. If you are using OLS you are implicitly assuming that the different points on your scale are equa
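A minimal R sketch of the category-merging step suggested above; the factor and level names are made up for illustration.
region <- factor(c("N", "S", "E", "W", "NE", "NE", "N", "S", "N", "S"))
table(region)                                                       # spot the sparse levels
levels(region)[levels(region) %in% c("E", "W", "NE")] <- "Other"    # collapse them into one level
table(region)                                                       # recoded control variable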
48,850
Ordered logit with (too many?) categorical independent variables
After a lot of procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error. My issue wasn't categorical variables at all. The issue was that among my independent variables I had household income, which was not scaled properly. This meant that both polr and bayespolr fell over when working out the Hessian. So, when having issues like this, just remember first-year stats class: scale your variables, and take logs of things like income.
Ordered logit with (too many?) categorical independent variables
After a lot procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error. My issue wasn't categorical variables at all. The issue was t
Ordered logit with (too many?) categorical independent variables After a lot procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error. My issue wasn't categorical variables at all. The issue was that in my independent variables, I had household Income, which was not scaled properly. This meant that both polr and bayespolr fell over when working out the Hessian. So, when having issues like this, just remember first-year stats class: scale your variables. Take logs of things like income.
Ordered logit with (too many?) categorical independent variables After a lot procrastinating, I ended up solving this issue. I thought I'd post my solution so nobody else makes the same (stupid!) error. My issue wasn't categorical variables at all. The issue was t
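A runnable R sketch of the fix described above, on simulated data with illustrative variable names: a heavily skewed income variable is logged before fitting with MASS::polr so the Hessian behaves.
library(MASS)
set.seed(1)
n      <- 500
income <- exp(rnorm(n, 10, 1))                      # raw income: hugely skewed scale
latent <- as.numeric(scale(log(income))) + rnorm(n)
rating <- cut(latent, breaks = c(-Inf, -0.5, 0.5, Inf),
              labels = c("low", "mid", "high"), ordered_result = TRUE)
fit <- polr(rating ~ log(income), Hess = TRUE)      # stable once income is on the log scale
summary(fit)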
48,851
Refugee from SPSS having issues with fit.contrast in R
My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would recommend. But to answer your question directly, the issue here is that fit.contrast is using "type 3" tests of effects while summary is using "type 2" tests. This can be verified most easily using Anova from the car package, which lets you select the type of sums of squares that you desire: library(car) > Anova(model, type=2) Anova Table (Type II tests) Response: y Sum Sq Df F value Pr(>F) Genotype 1.09 1 1.0495 0.3059 Time 0.74 2 0.3526 0.7029 Genotype:Time 1.15 2 0.5524 0.5758 Residuals 1036.64 994 > Anova(model, type=3) Anova Table (Type III tests) Response: y Sum Sq Df F value Pr(>F) (Intercept) 0.10 1 0.0935 0.7599 Genotype 0.02 1 0.0236 0.8779 Time 1.87 2 0.8954 0.4088 Genotype:Time 1.15 2 0.5524 0.5758 Residuals 1036.64 994 A description of the difference between these two methods can be found here: https://stat.ethz.ch/pipermail/r-help/2006-August/111854.html
Refugee from SPSS having issues with fit.contrast in R
My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would rec
Refugee from SPSS having issues with fit.contrast in R My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would recommend. But to answer your question directly, the issue here is that fit.contrast is using "type 3" tests of effects while summary is using "type 2" tests. This can be verified most easily using Anova from the car package, which lets you select the type of sums of squares that you desire: library(car) > Anova(model, type=2) Anova Table (Type II tests) Response: y Sum Sq Df F value Pr(>F) Genotype 1.09 1 1.0495 0.3059 Time 0.74 2 0.3526 0.7029 Genotype:Time 1.15 2 0.5524 0.5758 Residuals 1036.64 994 > Anova(model, type=3) Anova Table (Type III tests) Response: y Sum Sq Df F value Pr(>F) (Intercept) 0.10 1 0.0935 0.7599 Genotype 0.02 1 0.0236 0.8779 Time 1.87 2 0.8954 0.4088 Genotype:Time 1.15 2 0.5524 0.5758 Residuals 1036.64 994 A description of the difference between these two methods can be found here: https://stat.ethz.ch/pipermail/r-help/2006-August/111854.html
Refugee from SPSS having issues with fit.contrast in R My initial remark is: why do you need these functions from the gmodels package to accomplish this? These are basic tasks that can be accomplished straightforwardly in base R, which is what I would rec
48,852
Refugee from SPSS having issues with fit.contrast in R
It is strongly recommended not to use reserved words as R object names. For the genotype contrast it is clear that the p-values from the ANOVA and from the gmodels contrast statement are identical, and F = t^2. library(gmodels) set.seed(03215) Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) Time <- factor(sample(1:3, 1000, replace=TRUE)) y <- rnorm(1000) dat <- data.frame(y, Genotype, Time) fit1 <- aov( y ~ Genotype + Time + Genotype:Time, data=dat) summary(fit1) Df Sum Sq Mean Sq F value Pr(>F) Genotype 1 1.2 1.1687 1.121 0.290 Time 2 0.7 0.3677 0.353 0.703 Genotype:Time 2 1.2 0.5760 0.552 0.576 Residuals 994 1036.6 1.0429 model.tables(fit1, "means") Tables of means Grand mean 0.01447773 Genotype KO WT -0.02154 0.04693 rep 474.00000 526.00000 Time 1 2 3 0.03267 0.03313 -0.02539 rep 350.00000 334.00000 316.00000 Genotype:Time Time Genotype 1 2 3 KO 0.02 0.02 -0.11 rep 160.00 155.00 159.00 WT 0.04 0.04 0.06 rep 190.00 179.00 157.00 As a check, (-1)(-0.02154)+(1)(0.04693) = 0.06847, matching the fit.contrast estimate up to rounding of the cell means: fit.contrast(fit1, "Genotype", rbind("KO vs WT"=c(1, -1)), conf=0.95, df=TRUE) Estimate Std. Error t value Pr(>|t|) DF lower CI upper CI GenotypeKO vs WT 0.06869178 0.06477589 1.060453 0.2891962 994 -0.0584214 0.195805
Refugee from SPSS having issues with fit.contrast in R
Strongly recommended to not use reserved words as R objects names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2. library(g
Refugee from SPSS having issues with fit.contrast in R Strongly recommended to not use reserved words as R objects names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2. library(gmodels) set.seed(03215) Genotype <- sample(c("WT","KO"), 1000, replace=TRUE) Time <- factor(sample(1:3, 1000, replace=TRUE)) y <- rnorm(1000) dat <- data.frame(y, Genotype, Time) fit1 <- aov( y ~ Genotype + Time + Genotype:Time, data=dat) summary(fit1) Df Sum Sq Mean Sq F value Pr(>F) Genotype 1 1.2 1.1687 1.121 0.290 Time 2 0.7 0.3677 0.353 0.703 Genotype:Time 2 1.2 0.5760 0.552 0.576 Residuals 994 1036.6 1.0429 model.tables(fit1, "means") Tables of means Grand mean 0.01447773 Genotype KO WT -0.02154 0.04693 rep 474.00000 526.00000 Time 1 2 3 0.03267 0.03313 -0.02539 rep 350.00000 334.00000 316.00000 Genotype:Time Time Genotype 1 2 3 KO 0.02 0.02 -0.11 rep 160.00 155.00 159.00 WT 0.04 0.04 0.06 rep 190.00 179.00 157.00 As (-1)(-0.02154)+(1)(0.04693) = 0.06847 fit.contrast(fit1, "Genotype", rbind("KO vs WT"=c(1, -1)), conf=0.95, df=TRUE) Estimate Std. Error t value Pr(>|t|) DF lower CI upper CI GenotypeKO vs WT 0.06869178 0.06477589 1.060453 0.2891962 994 -0.0584214 0.195805
Refugee from SPSS having issues with fit.contrast in R Strongly recommended to not use reserved words as R objects names. For genotype contrast it is clear that the p-values from ANOVA and contrast statement from gmodels are identical and F=t^2. library(g
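A follow-up sketch (my own addition, reusing the dat object simulated in the answer above): the type-III-style tests that the earlier answer attributes to fit.contrast can also be reproduced in base R by switching to sum-to-zero contrasts and using drop1().
op   <- options(contrasts = c("contr.sum", "contr.poly"))  # sum-to-zero coding
fit3 <- aov(y ~ Genotype * Time, data = dat)
drop1(fit3, . ~ ., test = "F")                             # cf. car::Anova(..., type = 3)
options(op)                                                # restore the default contrasts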
48,853
How to check whether a sample is representative across two dimensions simultaneously?
You can still do the chi-square test. Nothing says that the bins have to be one-dimensional. Divide the globe into longitude-by-latitude segments and count the number of cases in each bin for the two samples. The same chi-square test applies.
How to check whether a sample is representative across two dimensions simultaneously?
You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two
How to check whether a sample is representative across two dimensions simultaneously? You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two samples. The same chi square test applies.
How to check whether a sample is representative across two dimensions simultaneously? You can still do the chi-square test. Nothing says that the bins have to be 1 dimensional. Divide the globe into longitude by latitude segments and count the number of cases in each bin for the two
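A small R sketch of the two-dimensional binning suggested above, with simulated longitude/latitude values and, importantly, the same bin boundaries for both samples:
set.seed(1)
lon1 <- runif(300, -180, 180); lat1 <- runif(300, -90, 90)    # sample 1
lon2 <- runif(300, -180, 180); lat2 <- runif(300, -90, 90)    # sample 2
bin2d <- function(lon, lat)
  interaction(cut(lon, seq(-180, 180, by = 90)),
              cut(lat, seq(-90, 90, by = 60)))                # 4 x 3 = 12 joint bins
tab <- rbind(table(bin2d(lon1, lat1)), table(bin2d(lon2, lat2)))
chisq.test(tab)   # H0: both samples share the same joint bin distribution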
48,854
How to check whether a sample is representative across two dimensions simultaneously?
Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorov-Smirnov test, which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data, in Monthly Notices of the Royal Astronomical Society 225:155-170. The paper is freely available here.
How to check whether a sample is representative across two dimensions simultaneously?
Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorv-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of th
How to check whether a sample is representative across two dimensions simultaneously? Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorv-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of the Royal Astronomical Society 225:155-170. The paper is freely available here.
How to check whether a sample is representative across two dimensions simultaneously? Fasano and Franceschini suggested a multi-dimensional version of the Kolmogorv-Smirnov test which they show to be preferable to the $\chi^2$-test for 2- and 3-dimensional data in Monthly Notices of th
48,855
How to check whether a sample is representative across two dimensions simultaneously?
Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to use a minimum spanning tree, which is the smallest tree that connects the points of the cloud in $n$ dimensions, and to compute a statistic from it that is distributed as a Student's $t$. Unfortunately, I am not aware of any implementation of that test. All I can suggest is the trick that consists in normalizing your variables into the square $(0,1)\times(0,1)$, applying the inverse erf function (in practice, the standard normal quantile function) to get a bivariate Gaussian, then squaring and summing the coordinates, which should give you a sample distributed as a $\chi^2(2)$, which you can check with your favorite goodness-of-fit test. Update: There is a C library to test for uniformity in several dimensions, written by Ben Pfaff. In the section Uniformity testing library you can download the source code and the documentation. If I understood correctly, this is an implementation of the Smith & Jain test, which is a refinement of the Friedman & Rafsky test for the case where the boundaries of the domain are not defined. You can find more details on how to install and run the code at this question.
How to check whether a sample is representative across two dimensions simultaneously?
Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to us
How to check whether a sample is representative across two dimensions simultaneously? Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to use a minimum spanning tree, which is the smallest tree that connects the points of the cloud in $n$ dimensions, and compute a statistic from it that is distributed as a Student's $t$. Unfortunately, I am not aware of any implementation of that test. All I can suggest is the trick that consists in normalizing your variables in the square $(0,1)\times(0,1)$, applying the inverse erf function to get a bivariate gaussian, square them and sum them, wich should give you a sample distributed as a $\chi^2(2)$, which you can check with your favorite goodness of fit test. Update: There is C library to test for uniformity in several dimension written by Ben Pfaff. At the section Uniformity testing library you can download the source code and the documentation. If I understood well, this is an implementation of the Smith & Jain test which is a refinement of the Friedman & Rafsky test in case the boundaries of the domain are not defined. You can find more details on how to install and run the code at this question.
How to check whether a sample is representative across two dimensions simultaneously? Actually, I had the same question recently. By scanning rapidly through the published literature, I came to realize that a general test has been developed by Friedman & Rafsky. Their approach is to us
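A short R sketch of the transform-and-test trick described above, using qnorm as the inverse-CDF step and simulated points already rescaled to the unit square:
set.seed(1)
u <- cbind(runif(200), runif(200))          # coordinates rescaled into (0,1) x (0,1)
z <- qnorm(u)                               # bivariate standard normal under the null
ks.test(rowSums(z^2), "pchisq", df = 2)     # goodness of fit against chi-squared(2)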
48,856
Inference on a probabilistic graphical model with observed continuous variable
Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled. Note that some of the things inside your integral do not depend on $y$. You can re-write it as $$ P(d,l,x,f) = P(l) P(f) f_{X|L}(x|l) \int_y P(d|x,y) f_{Y|L,F}(y|l,f) dy $$ This is an integral against the conditional density of $Y|L,F$, which you can estimate using the procedure you described, which is known as Monte Carlo integration. The basic idea is that if you want to estimate an integral against a density, $p(x)$: $$ I = \int g(x) p(x) dx $$ then this can be written as the expected value of the random variable $g(X)$, where $X$ has distribution $p$. Therefore, if you (1) Simulate values of $X \sim p$ and (2) Calculate $\hat{I}$, the sample mean of $g(X)$, then $\hat{I}$ is a consistent estimator of $I$, by the Law of Large Numbers. In finite samples, the standard deviation of this approximation will be $\sqrt{{\rm var}( g(X) )/n}$, so you can see the larger the sample size you use, the more accurate your estimates will be, both in terms of the potential bias (i.e. the consistency of the estimator) and the precision (the reduction in variance). In your case, note that the integral in your problem can be written as an expectation against the conditional density of $Y|L,F$: $$ P(d,l,x,f) = P(l) P(f) f_{X|L}(x|l) \cdot E_{Y|L,F} \left( P(d|x,Y) \right) $$ Therefore, if you can generate from the conditional distribution of $Y|L,F$, then the method you described is feasible for estimating $E_{Y|L,F} \left( P(d|x,Y) \right)$; all other terms are constants in $y$, and so they can be pulled out of the integral. Above I've described the simplest possible form of Monte Carlo integration and if simulating from this distribution has high computational cost, you may consider a more sophisticated method (like importance sampling) that will require less simulation for a fixed level of Monte Carlo error you're willing to accept in your estimate. Edit: Another method for calculating the integral is numerical integration, which only requires you to be able to calculate the integrand. Numerical integration is typically difficult itself when the integrand is complicated and is not as intuitive as Monte Carlo integration, so you probably don't need to consider it unless the distribution of $Y|L,F$ is impossible to sample from or is computationally expensive to the point that you couldn't take enough samples to get a precise estimate of the integral.
Inference on a probabilistic graphical model with observed continuous variable
Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled. Note that some of the things inside your integral do not d
Inference on a probabilistic graphical model with observed continuous variable Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled. Note that some of the things inside your integral do not depend on $y$. You can re-write it as $$ P(d,l,x,f) = P(l) P(f) f_{X|L}(x|l) \int_y P(d|x,y) f_{Y|L,F}(y|l,f) dy $$ This is an integral against the conditional density of $Y|L,F$, which you can estimate using the procedure you described, which is known as Monte Carlo Integration. The basic idea is that if you want to estimate an integral against a density, $p(x)$: $$ I = \int g(x) p(x) dx $$ then this can be written as the expected value of the random variable $g(X)$, where $X$ has distribution $p$. Therefore, if you (1) Simulate values of $X \sim p$ and (2) Calculate $\hat{I}$, the sample mean of $g(X)$ then $\hat{I}$ is a consistent estimator of $I$, by the Law of Large Numbers. In finite samples, the standard deviation of this approximation will be $\sqrt{{\rm var}( g(X) )/n}$, so you can see the larger the sample size you use, the more accurate your estimates will be, both in terms of the potential bias (i.e. the consistency of the estimator) and the precision (the reduction in variance). In your case, note that the integral in your problem can be written as an expectation against the joint density of $Y|L,F$: $$ P(d,l,x,f) = P(l) P(f) f_{X|L}(x|l) \cdot E_{Y|L,F} \left( P(d|x,Y) \right) $$ Therefore, if you can generate from the conditional distribution of $Y|L,F$, then the method you described is feasible for estimating $E_{Y|L,F} \left( P(d,x,Y) \right)$; all other terms are constants in $y$, and so they can be pulled out the integral. Above I've described the simplest possible form of monte carlo integration and if simulating from this distribution has high computational cost, you may consider a more sophisticated method (like importance sampling) that will require less simulation for a fixed level of monte carlo error you're willing to accept in your estimate. Edit: Another method for calculating the integral is numerical integration, which only requires you to be able to calculate the integrand. Numerical integration is typically difficult itself when the objective function is complicated and is not as intuitive as monte carlo integration, so you probably don't need to consider it unless the distribution of $Y|L,F$ is impossible to sample from or is computationally expensive to the point that you couldn't take enough samples to get a precise estimate of the integral.
Inference on a probabilistic graphical model with observed continuous variable Your derivation looks correct and is consistent with how joint distributions that are a mix of discrete and continuous are normally handled. Note that some of the things inside your integral do not d
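A toy R sketch of the Monte Carlo integration step described above; plogis and the normal draws below are illustrative stand-ins for P(d|x,y) and f_{Y|L,F}, not part of the original model.
set.seed(1)
g    <- function(y) plogis(2 - y)             # stand-in for P(d | x, y) as a function of y
ysim <- rnorm(10000, mean = 1, sd = 0.5)      # stand-in draws from f_{Y|L,F}(y | l, f)
Ihat <- mean(g(ysim))                         # Monte Carlo estimate of the integral
se   <- sd(g(ysim)) / sqrt(length(ysim))      # Monte Carlo standard error, sqrt(var(g(Y))/n)
c(estimate = Ihat, mc_se = se)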
48,857
What to do when the standard error equals 0
How about a permutation (randomization) t-test? That would let you avoid computing the variance entirely. These tests make minimal assumptions about the data and are pretty easy to perform. If you're unfamiliar with them, the idea is actually pretty simple. Start by calculating the difference between the two groups' means. We'll call that the "observed difference." Under the null hypothesis, the treatments have the same/no effect, so the group labels are meaningless. To test this, randomize or shuffle the group labels and compute the difference between the means of these "fake" groups. Repeat this process many times to generate a distribution of "shuffled differences." Finally, assign a p-value by asking how often one sees a shuffled difference at least as extreme as the observed one: $$p=\frac{\#(\textrm{Shuffled} \geq \textrm{Observed})}{\#\,\textrm{Shuffled}+1}$$ You may need to take the absolute value for a two-tailed test. I think you should consider rolando2 and Michael Lew's questions carefully too. In particular: Is your design actually paired? To review, a paired t-test is usually used when the same subjects are examined in both conditions (e.g., give everyone drug A, measure, wait and give everyone drug B and measure again), while the unpaired t-test is typically used when there are two non-overlapping conditions (half the subjects get drug A, half get drug B). There's not much in your description that suggests that you've got "paired data." Perhaps you could elaborate a little bit on your design? Do you have sufficient power? If you're looking for relatively rare effects, they might not show up in your 10 sample/condition data set. If you've got a rough idea of the expected difference between the means, you could do a power analysis to see if you have enough samples.
What to do when the standard error equals 0
How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform. If you're unfamiliar w
What to do when the standard error equals 0 How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform. If you're unfamiliar with them, the idea is actually pretty simple. Start by calculating the difference between the two groups' means. We'll call that the "observed difference." Under the null hypothesis, the treatments have the same/no effect, so the group labels are meaningless. To test this, randomize or shuffle the group labels and compute the difference between the means of these "fake" groups. Repeat this process many times to generate a distribution of "shuffled differences." Finally, assign a p value by asking how often one sees a shuffled difference at least as extreme as the observed one: $$p=\frac{\textrm{#(Shuffled > Observed)}}{\textrm{# Shuffled}+1}$$ You may need to take the absolute value for a two-tailed test. I think you should consider rolando2 and Michael Lew's questions carefully too. In particular: Is your design actually paired? To review, a paired t-test is usually used when the same subjects are examined in both conditions (e.g., give everyone drug A, measure, wait and give everyone drug B and measure again), while the unpaired t-test is typically used when there are two non-overlapping conditions (half the subjects get drug A, half get drug B). There's not much in your description that suggests that you've got "paired data." Perhaps you could elaborate a little bit on your design? Do you have sufficient power? If you're looking for relatively rare effects, they might not show up in your 10 sample/condition data set. If you've got a rough idea of expected difference between the means, you could do a power analysis to see if you have enough samples.
What to do when the standard error equals 0 How about a permuted or randomized t-test? That would let you avoid computing the variance entirely. They make minimal assumptions about the data and are pretty easy to perform. If you're unfamiliar w
48,858
What to do when the standard error equals 0
Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that that is an accurate way to describe your situation.) You may have to use a much larger sample for each of your 2 groups if failures are going to be rare. And then a paired t-test would not be applicable: you'd probably want to use a test of the difference between dependent proportions, if your 2 groups are really paired. On the other hand, I could imagine a situation in which 10 trials per group is all you can realistically generate, in which case you'd report the descriptive results you obtained without running any significance test.
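If the design really is paired and the outcome is pass/fail, the "difference between dependent proportions" test mentioned above is McNemar's test, which is in base R. The counts below are made up purely to show the call.
# rows: outcome under condition A; columns: outcome under condition B (same units, paired)
tab <- matrix(c(8, 1, 4, 7), nrow = 2,
              dimnames = list(A = c("fail", "pass"), B = c("fail", "pass")))
mcnemar.test(tab)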
What to do when the standard error equals 0
Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that th
What to do when the standard error equals 0 Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that that is an accurate way to describe your situation.) You may have to use a much larger sample for each of your 2 groups if failures are going to be rare. And then a paired t-test would not be applicable: you'd probably want to use a test of the difference between dependent proportions, if your 2 groups are really paired. On the other hand, I could imagine a situation in which 10 trials per group is all you can realistically generate, in which case you'd report the descriptive results you obtained without running any significance test.
What to do when the standard error equals 0 Suppose you looked at it in quality control terms. Will you, or will your audience for this research, really be satisfied to learn that no failure has occurred after 10 trials? (I'm guessing that th
48,859
Using multinomial logistic regression for multiple related outcomes
As @Riaz Rizvi suggests, this may not be a good idea. Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you suspect, or at least wish to allow the possibility that the presence of A is informative of B, then you should be working with a bivariate probit. Working with two separate logistic models is not going to be able to represent this. The model is a regression with an explicit correlated bivariate latent variable generating the choice probabilities, as discussed briefly in the link and at greater length in good econometrics texts.
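For concreteness, here is a rough sketch of a bivariate probit likelihood, not the poster's actual model: it assumes both equations share one design matrix X and uses mvtnorm::pmvnorm for the bivariate normal CDF. In practice a dedicated package implementation would be preferable; this is only to show the correlated-latent-variable idea.
library(mvtnorm)
# negative log-likelihood; par = c(beta1, beta2, atanh(rho)); y1, y2 are 0/1 vectors
biprobit_nll <- function(par, y1, y2, X) {
  k   <- ncol(X)
  b1  <- par[1:k]; b2 <- par[(k + 1):(2 * k)]
  rho <- tanh(par[2 * k + 1])                      # keeps rho inside (-1, 1)
  q1  <- 2 * y1 - 1; q2 <- 2 * y2 - 1
  ll  <- vapply(seq_len(nrow(X)), function(i) {
    r <- q1[i] * q2[i] * rho
    log(as.numeric(pmvnorm(upper = c(q1[i] * sum(X[i, ] * b1),
                                     q2[i] * sum(X[i, ] * b2)),
                           corr  = matrix(c(1, r, r, 1), 2))))
  }, numeric(1))
  -sum(ll)
}
# fit <- optim(rep(0, 2 * ncol(X) + 1), biprobit_nll,
#              y1 = y1, y2 = y2, X = X, method = "BFGS")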
Using multinomial logistic regression for multiple related outcomes
As @Riaz Rizvi suggests, this may not be a good idea. Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you
Using multinomial logistic regression for multiple related outcomes As @Riaz Rizvi suggests, this may not be a good idea. Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you suspect, or at least wish to allow the possibility that the presence of A is informative of B, then you should be working with a bivariate probit. Working with two separate logistic models is not going to be able to represent this. The model is a regression with an explicit correlated bivariate latent variable generating the choice probabilities, as discussed briefly in the link and at greater length in good econometrics texts.
Using multinomial logistic regression for multiple related outcomes As @Riaz Rizvi suggests, this may not be a good idea. Your scheme enforces a particular (and rather unlikely) covariance structure on the problem by flattening to a multinomial this way. Since you
48,860
Using multinomial logistic regression for multiple related outcomes
A multinomial is perfectly fine in this situation, but it comes at two costs: An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ parameters instead of the original $n$.) The solution is harder to interpret if the original variables are actually independent. (If you would have had a simple relationship such that input variable $x$ implies dependent variable $y=1$, you would now have $x$ implies that the combined dependent variable takes on one of the many outcomes corresponding to $y=1$.) The major advantage is that your model can use the additional parameters to encode distributions not possible in the original model.
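A quick sketch of the two options in R, using nnet::multinom for the combined outcome; dat, A, B, x1 and x2 are placeholder names rather than anything from the question.
library(nnet)
dat$AB <- interaction(dat$A, dat$B)            # 4-level outcome: one level per (A, B) pair
m_joint <- multinom(AB ~ x1 + x2, data = dat)  # uses the extra parameters described above
# versus two separate logistic regressions, which implicitly treat A and B as
# conditionally independent given the covariates
m_A <- glm(A ~ x1 + x2, family = binomial, data = dat)
m_B <- glm(B ~ x1 + x2, family = binomial, data = dat)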
Using multinomial logistic regression for multiple related outcomes
A multinomial is perfectly fine in this situation, but it comes at two costs: An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ pa
Using multinomial logistic regression for multiple related outcomes A multinomial is perfectly fine in this situation, but it comes at two costs: An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ parameters instead of the original $n$.) The solution is harder to interpret if the original variables are actually independent. (If you would have had a simple relationship such that input variable $x$ implies dependent variable $y=1$, you would now have $x$ implies that the combined dependent variable takes on one of the many outcomes corresponding to $y=1$.) The major advantage is that your model can use the additional parameters to encode distributions not possible in the original model.
Using multinomial logistic regression for multiple related outcomes A multinomial is perfectly fine in this situation, but it comes at two costs: An explosion in the number of parameters. (If you were to combine $n$ binary variables like this, you would have $2^n$ pa
48,861
Using multinomial logistic regression for multiple related outcomes
I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here.
Using multinomial logistic regression for multiple related outcomes
I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here.
Using multinomial logistic regression for multiple related outcomes I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here.
Using multinomial logistic regression for multiple related outcomes I don't think so. The multinomial distribution is derived from n independent variables, but your situation has two dependent variables. A multinomial regression is not applicable here.
48,862
R: How to "control" for another variable in Linear Mixed Effects Regression model?
I don't think the issues here can be addressed in a simple answer posted online. I would add: the inclusion of age and time is problematic and should be thought through. It is unclear to me what the benefit is of having both variables in the model. It can be done, but not by avoiding the issue by making one of the variables a random effect. 1.5. If you want to include age, from what I understand, include age as age at the start of the experiment. This should not be collinear with the other data and should be informative. I would be very reluctant to include age and time as random effects in this model. An assumption of the random effects model is that clusters are exchangeable. 2.5. There is a tendency in the R code I've seen to include multiple random effects. I'm not sure why. Once you go beyond a single random effect, or a single random effect clustered within another, the model complexity is significant and often not warranted. I don't think the models as written make sense. The following models make sense to me and are defensible: lmer(FiringRate ~ Time + (1|Subject)) lmer(FiringRate ~ Time + (Time|Subject)) lmer(FiringRate ~ Time + age_atstart + (Time|Subject))
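One possible way to construct the age-at-start variable suggested above, assuming the data frame has columns named Age and Subject and that age increases over the experiment (adjust the names to your data; these are assumptions, not taken from the question).
# age at the first observation of each subject
df$age_atstart <- ave(df$Age, df$Subject, FUN = min)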
R: How to "control" for another variable in Linear Mixed Effects Regression model?
I don't think the issues here can be addressed in a simple answer posted online. I would add: the inclusion of age and time is problematic and should be thought through. It is unclear to me what the
R: How to "control" for another variable in Linear Mixed Effects Regression model? I don't think the issues here can be addressed in a simple answer posted online. I would add: the inclusion of age and time is problematic and should be thought through. It is unclear to me what the benefit is of having both variables in the model. It can be done. But not by avoiding the issue by making one of the variables a random effect. 1.5. if you want to include age, from what I understand, include age as age at start of experiment. This should not be collinear with other data and should be informative. I would be very reluctant to include Age and time as random effects in this model. An assumption of the random effects model is that clusters are exchangeable. 2.5. There is a tendency in the R code I've seen to include multiple random effects. I'm not sure why. Once you go beyond a single random effect, or simple single random effect clustered in another, the model complexity is significant and often not warranted. I don't think the models as written make sense. The following makes sense to me and are defensible: lmer(FiringRate~ Time + (1|Subject)) lmer(FiringRate~ Time + (Time|Subject)) lmer(FiringRate~ Time + age_atstart + (Time|Subject))
R: How to "control" for another variable in Linear Mixed Effects Regression model? I don't think the issues here can be addressed in a simple answer posted online. I would add: the inclusion of age and time is problematic and should be thought through. It is unclear to me what the
48,863
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges to the posterior distribution of the model parameters, or word–topic assignments. See http://en.wikipedia.org/wiki/Gibbs_sampling#Mathematical_background. But I guess you are seeking an intuitive answer on why the sampler tends to put similar words into the same topic? That's an interesting question. If you look at the equations for collapsed Gibbs sampling, there is a factor for words and another for documents. Probabilities are higher for assignments that "don't break document boundaries", that is, words appearing in the same document have slightly higher odds of ending up in the same topic. The same holds for document assignments: to a degree they follow "word boundaries". These effects mix up and spread over clusters of documents and words, eventually. By the way, LDA Gibbs samplers do not actually work properly, in the sense that they do not mix, or are not able to represent the posterior distribution well. If they did, the permutation symmetries of the model would make all solutions obtained by samplers useless, or at least non-interpretable. Instead the sampler sticks around a local mode (of the likelihood), and we get well-defined topics.
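To make the "factor for words, factor for documents" point concrete, the (unnormalized) collapsed Gibbs weight for assigning word w in document d to topic k has roughly the following form, where the count matrices exclude the token currently being resampled. This is just the standard textbook expression written as an R function, not code from the answer.
# n_dk: documents x topics counts, n_kw: topics x vocabulary counts,
# n_k: total tokens per topic, V: vocabulary size
topic_weight <- function(k, d, w, n_dk, n_kw, n_k, alpha, beta, V) {
  (n_dk[d, k] + alpha) *                        # "document factor"
    (n_kw[k, w] + beta) / (n_k[k] + V * beta)   # "word factor"
}
# the new topic is drawn with probabilities proportional to
# sapply(1:K, topic_weight, d = d, w = w, n_dk = n_dk, n_kw = n_kw,
#        n_k = n_k, alpha = alpha, beta = beta, V = V)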
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wik
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wikipedia.org/wiki/Gibbs_sampling#Mathematical_background. But I guess you are seeking an intuitive answer on why the sampler tends to put similar words into the same topic? That's an interesting question. If you look at the equations for collapsed Gibbs sampling, there is a factor for words, another for documents. Probabilities are higher for assignments that "don't break document boundaries", that is, words appearing in the same document have a slightly higher odds of ending up in the same topic. The same holds for document assignments, they to a degree follow "word boundaries". These effects mix up and spread over clusters of documents and words, eventually. By the way, LDA Gibbs samplers do not actually work properly, in the sense that they do not mix, or are not able to represent the posterior distribution well. If they did, the permutation symmetries of the model would make all solutions obtained by samplers useless, or at least non-interpretable. Instead the sampler sticks around a local mode (of the likelihood), and we get well-defined topics.
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? Technically LDA Gibbs sampling works because we intentionally set up a Markov chain that converges into the posterior distribution of the model parameters, or word–topic assignments. See http://en.wik
48,864
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I am not familiar with this and can only give a partial answer, but maybe it's better than nothing since I am a statistician and understand statistical terminology. Clusters of co-occurring words means words that appear frequently in sequence, such as "of the" being a common pair in English. Languages tend to have patterns like this which can help you characterize them. Mixture models in statistics usually means that the probability distribution in the model can be represented as a mixture of two or more distributions. Say f(x) and g(x) are two distributions. A mixture would pick an x from f with some probability p and from g with probability 1-p. These models are useful as a way to construct bimodal or multimodal distributions. It makes sense that they could mean this since they speak about words occurring in clusters. So I think they may be saying that if we condition on the word "of" occurring, the frequency with which the word "the" follows it is much higher than, say, "the" following a noun like "missile". So I suspect these models are used to better represent the frequency with which words occur in the English language. In statistics, word frequencies have been used in the past to identify authorship. For example, Mosteller and Wallace looked at samples of writing from the authors of the Federalist papers to try to attribute authorship to papers where the author was not identified. The Federalist papers were written by Hamilton, Jay and Madison, and there are many papers where each author is identified. So they constructed a classification rule based on the differences in word usage in their writing to identify the authors of the disputed papers. I recently discovered that Glen Fung published a paper in the Journal of ACM identifying disputed papers in the Federalist papers using support vector machines.
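A tiny R illustration of the two-component mixture idea described above; all of the numbers are made up.
set.seed(1)
n <- 1e4; p <- 0.3
z <- rbinom(n, 1, p)                           # which component each draw comes from
x <- ifelse(z == 1, rnorm(n, -2, 1), rnorm(n, 3, 1))
hist(x, breaks = 60)                           # bimodal: a mixture of two normals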
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I am not familiar with this and can only give a partial answer but maybe its better than nothing since I am a statistician and understand statistical terminology. Clusters of coocurring words means wo
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? I am not familiar with this and can only give a partial answer but maybe its better than nothing since I am a statistician and understand statistical terminology. Clusters of coocurring words means words that appear frequently in sequence such as "of the" being a common pair in english. Languages tend to have patterns like this which can help you characterize them. Mixture models in statistics usually means that the probability distribution in the model can be represented as a mixture of two or more distributions. In the case of two variables say f(x) and g(x) are two distributions. A mixture would pick an x from f with some probability p and with probability 1-p from g. These models are useful as a way to construct bimodal or multimodal distributions. It makes sense that they could mean this since they speak about words occurring in clusters. So I think they may be saying that if we condition on the word "of" occurring the frequency with which the word "the" follows it is much higher than say "the" following a noun like say "missile". So these models I suspect are used to better represent the frequency with which words occur in the english language. In statistics word frequencies have been used in the past to identify authorship. For example Mosteller and Wallace looked at samples of writing from the authors of the Federalist papers to try to attribute authorship to papers where the author was not identified. The Federalist papers were writtne by Hamilton,Jay and Madison and there are many papers where each author is identified. So they constructed a classification rule based on the differences in word usage in their writing to identify who the author of disputed papers are. I recently discover that Glen Fung published a paper in the Journal of ACM identifying disputed papers in the Federalist papers using support vector machines.
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? I am not familiar with this and can only give a partial answer but maybe its better than nothing since I am a statistician and understand statistical terminology. Clusters of coocurring words means wo
48,865
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to do LDA: our goal is to fit the best possible LDA model given the data and our initial parameter settings. How many topics (that is, how much clustering) LDA does depends on the choice of the Dirichlet concentration parameters. I'm not an expert on using LDA, but so far as I know these parameters are usually fixed beforehand or drawn from a fixed distribution which makes the Gibbs sampling algorithm more convenient to sample. I read the quoted paragraph as saying, roughly, that if we decide to penalize models with many topics, we will force LDA to perform more clustering: it will have to find the best way to describe all the documents using the fewer topics.
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to do LDA: our goal is to fit the best possible LDA model given the data and our initial parameter settings. How many topics (that is, how much clustering) LDA does depends on the choice of the Dirichlet concentration parameters. I'm not an expert on using LDA, but so far as I know these parameters are usually fixed beforehand or drawn from a fixed distribution which makes the Gibbs sampling algorithm more convenient to sample. I read the quoted paragraph as saying, roughly, that if we decide to penalize models with many topics, we will force LDA to perform more clustering: it will have to find the best way to describe all the documents using the fewer topics.
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? I understand your question as asking about LDA, rather than about the mechanisms of Gibbs sampling. Thus, my answer to the question of why the Gibbs sampling algorithm works is that it is designed to
48,866
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Seems you are asking for intuition. In a mixture, this is enough to find clusters of co-occurring words. This means that each topic is a distribution over the vocabulary that sums to one. So it is sensible that co-occurring words end up in the same topic, and having fewer words in a topic increases the probability of each word in that topic, since the probabilities must sum to one. In LDA, the Dirichlet on the topic proportions can encourage sparsity. Topics are sampled from an exchangeable Dirichlet, so a priori all topics are treated uniformly. But if alpha in the Dirichlet is less than 1, then from the definition of the Dirichlet density you can see that $\theta^{(\alpha - 1)}$ is a fraction raised to a negative power, which is large when $\theta$ is small, so the density peaks at the corners of the simplex, i.e. sparsity in the topic proportions sampled from that Dirichlet.
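A small R experiment showing the sparsity effect of alpha < 1; the Dirichlet sampler below just normalizes independent Gamma draws, which is the standard construction.
rdirichlet1 <- function(alpha) { g <- rgamma(length(alpha), shape = alpha); g / sum(g) }
set.seed(1)
round(rdirichlet1(rep(0.1, 10)), 3)   # most of the mass piles up on a few components
round(rdirichlet1(rep(10, 10)), 3)    # mass is spread almost evenly across components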
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)?
Seems you are asking for intuition. In a mixture, this is enough to find clusters of co-occurring words. This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensib
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? Seems you are asking for intuition. In a mixture, this is enough to find clusters of co-occurring words. This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensible that the co-occuring words come in same topic and fewer words in same topic increases the probability of words occuring in the topic as it should sum to one. In LDA, the Dirichlet on the topic proportions can encourage sparsity As topics are sampled from an exchangable dirichilet so all topics are sampled uniformly. But if alpha in dirichlet is less than 1 than by the definition of dirichilet distribution you can see that $\theta^{(\alpha - 1)}$ is $(fraction)^{(-ve)}$ which will be high so the peaks in the simplex will be high at the corners i.e. sparsity as the topics are sampled from that dirichilet.
Why LDA (Latent Dirichlet Allocation) works (i.e. why put co-occurring words together)? Seems you are asking for intuition. In a mixture, this is enough to find clusters of co-occurring words. This means that the distribution over vocabulary i.e. topics will sum to one. So it is sensib
48,867
Implementing an ordered probit model in pymc [closed]
This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi. Also, this runs pretty slowly, since it doesn't make good use of numpy's array manipulation. For data sets of reasonable size, some smart preprocessing and a rewrite of the cutpoints node function could probably fix this. Here's full source code: from scipy.stats import norm, pearsonr from pymc import Normal, Lambda, Uniform, Exponential, stochastic, deterministic, observed, MCMC, Matplot from numpy import mean, std, log import numpy as np #Set array dimensions (I, J, K, M, N) = (5, 20, 3, 4, 1000) #Set simulation parameters alpha_star = np.random.normal(0, 1, size=(I,K)) beta_star = np.random.normal(1, 1, size=(I,K)) z_star = np.random.normal(0, 1, size=(J,1)) w_star = np.array([0,1,3]) #Generate data coder = np.random.randint(I, size=(N)) doc = np.random.randint(J, size=(N)) item = np.random.randint(K, size=(N)) code = np.zeros(shape=(N)) for n in range(N): i, j, k = coder[n], doc[n], item[n] m = alpha_star[i,k] + beta_star[i,k] * z_star[j] + np.random.normal(0, 1) code[n] = 1+sum(m > w_star) # print "\t".join([str(x) for x in [i, j, k, m, code[n]]]) #Set GLM parameters alpha = Normal('alpha', mu=0.0, tau=0.01, value=np.zeros(I*K)) beta = Normal('beta', mu=1.0, tau=0.01, value=np.ones(I*K)) z = Normal('z', mu=0.0, tau=0.01, value=np.random.normal(0,1,J)) w = Exponential('w', .1, value=np.ones(M-3)) #Link functions mu = Lambda('mu', lambda alpha=alpha, beta=beta, z=z, i=coder, j=doc, k=item: alpha[i+I*k]+beta[i+I*k]*z[j]) @deterministic(plot=False) def cutpoints(w=w): w2 = [-np.inf, 0.0, 1.0] v = 1 for i in w: v += i w2.append(v) w2.append(np.inf) cp = np.array( w2 ) return cp @stochastic(dtype=int, observed=True) def y(value=code, mu=mu, cp=cutpoints): def logp(value, mu, cp): d = norm.cdf(cp[value]-mu)-norm.cdf(cp[value-1]-mu) lp = sum(log(d)) return lp #Run chain M = MCMC([alpha, beta, z, mu, w, cutpoints, y]) M.isample(10000, 5000, thin=5, verbose=0) #Summarize results Matplot.summary_plot([alpha], name="alpha", path="./graphs/") Matplot.summary_plot([beta], name="beta", path="./graphs/") Matplot.summary_plot([z], name="z", path="./graphs/") Matplot.summary_plot([w], name="w", path="./graphs/") print pearsonr( alpha_star.transpose().reshape((I*K,)), alpha.stats()['mean']) print pearsonr( beta_star.transpose().reshape((I*K,)), beta.stats()['mean']) print pearsonr( z_star.reshape((J,)), z.stats()['mean'])
Implementing an ordered probit model in pymc [closed]
This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi. Also, this runs pretty slowly, si
Implementing an ordered probit model in pymc [closed] This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi. Also, this runs pretty slowly, since it doesn't make good use of numpy's array manipulation. For data sets of reasonable size, some smart preprocessing and a rewrite of the cutpoints node function could probably fix this. Here's full source code: from scipy.stats import norm, pearsonr from pymc import Normal, Lambda, Uniform, Exponential, stochastic, deterministic, observed, MCMC, Matplot from numpy import mean, std, log import numpy as np #Set array dimensions (I, J, K, M, N) = (5, 20, 3, 4, 1000) #Set simulation parameters alpha_star = np.random.normal(0, 1, size=(I,K)) beta_star = np.random.normal(1, 1, size=(I,K)) z_star = np.random.normal(0, 1, size=(J,1)) w_star = np.array([0,1,3]) #Generate data coder = np.random.randint(I, size=(N)) doc = np.random.randint(J, size=(N)) item = np.random.randint(K, size=(N)) code = np.zeros(shape=(N)) for n in range(N): i, j, k = coder[n], doc[n], item[n] m = alpha_star[i,k] + beta_star[i,k] * z_star[j] + np.random.normal(0, 1) code[n] = 1+sum(m > w_star) # print "\t".join([str(x) for x in [i, j, k, m, code[n]]]) #Set GLM parameters alpha = Normal('alpha', mu=0.0, tau=0.01, value=np.zeros(I*K)) beta = Normal('beta', mu=1.0, tau=0.01, value=np.ones(I*K)) z = Normal('z', mu=0.0, tau=0.01, value=np.random.normal(0,1,J)) w = Exponential('w', .1, value=np.ones(M-3)) #Link functions mu = Lambda('mu', lambda alpha=alpha, beta=beta, z=z, i=coder, j=doc, k=item: alpha[i+I*k]+beta[i+I*k]*z[j]) @deterministic(plot=False) def cutpoints(w=w): w2 = [-np.inf, 0.0, 1.0] v = 1 for i in w: v += i w2.append(v) w2.append(np.inf) cp = np.array( w2 ) return cp @stochastic(dtype=int, observed=True) def y(value=code, mu=mu, cp=cutpoints): def logp(value, mu, cp): d = norm.cdf(cp[value]-mu)-norm.cdf(cp[value-1]-mu) lp = sum(log(d)) return lp #Run chain M = MCMC([alpha, beta, z, mu, w, cutpoints, y]) M.isample(10000, 5000, thin=5, verbose=0) #Summarize results Matplot.summary_plot([alpha], name="alpha", path="./graphs/") Matplot.summary_plot([beta], name="beta", path="./graphs/") Matplot.summary_plot([z], name="z", path="./graphs/") Matplot.summary_plot([w], name="w", path="./graphs/") print pearsonr( alpha_star.transpose().reshape((I*K,)), alpha.stats()['mean']) print pearsonr( beta_star.transpose().reshape((I*K,)), beta.stats()['mean']) print pearsonr( z_star.reshape((J,)), z.stats()['mean'])
Implementing an ordered probit model in pymc [closed] This took quite a bit of work, but I got it in the end. Note that I used the development version (pymc 2.2grad) from github, not the older version available on pypi. Also, this runs pretty slowly, si
48,868
Communicating Regression Model Results
In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression. In your example, if one is interested in the difference in mean houses prices, comparing houses whose NOx level differs by one unit but that have identical values of all the other covariates, then (given some assumptions of linearity) the regression in Table 1 is the right one to do, regardless of its $R^2$. In my experience, getting people to translate between "what quantity is of interest?" and "what regression do we do?" is much more challenging than quantifying predictive ability.
Communicating Regression Model Results
In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression. In your example, if one is interested in the difference in mean houses prices, c
Communicating Regression Model Results In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression. In your example, if one is interested in the difference in mean houses prices, comparing houses whose NOx level differs by one unit but that have identical values of all the other covariates, then (given some assumptions of linearity) the regression in Table 1 is the right one to do, regardless of its $R^2$. In my experience, getting people to translate between "what quantity is of interest?" and "what regression do we do?" is much more challenging than quantifying predictive ability.
Communicating Regression Model Results In addition to Michelle's answer, predictive ability (as measured by $R^2$) is not relevant to all uses of regression. In your example, if one is interested in the difference in mean houses prices, c
48,869
Communicating Regression Model Results
I disagree that another overall method evaluation is needed; what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example: Mallow's Cp, adjusted R^2, AIC, or BIC. I see no need for another statistic. The misinterpretation of p-values is, I think, a different problem. Update based on comment below: Your method appears to ignore how models account for variance. When you have the "missing variable" problem, some of the variance accounted for by the absent variable is assigned to included variables that are correlated with it, and the rest goes to error. This happens because of measurement error, use of proxy variables (particularly in social sciences), etc. So when you add a new variable to the model, you're not getting a pure adjustment to variance explained. The situation is even more complicated if you bring in interactions. Essentially your "honest R^2" is advocating a type of step-wise model (where the variable of interest is added in at the last step), and there are major issues with those models.
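All of these summaries are one line of R away. As an illustration (the Boston housing data from MASS is used here only because it roughly matches the housing/NOx example in the question, not because it appears in the original answer):
library(MASS)                                   # for the Boston housing data
m1 <- lm(medv ~ nox, data = Boston)
m2 <- lm(medv ~ nox + rm + lstat, data = Boston)
AIC(m1, m2)
BIC(m1, m2)
c(summary(m1)$adj.r.squared, summary(m2)$adj.r.squared)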
Communicating Regression Model Results
I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example: Mallo
Communicating Regression Model Results I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example: Mallow's Cp adjusted R^2 AIC BIC I see no need for another statistic. The misinterpretation of p-values is, I think, a different problem. Update based on comment below: Your method appears to ignore how models account for variance. When you have the "missing variable" problem, some of the variance accounted for by the absent variable is assigned to variables correlated with it that are included, and the rest is error. This happens because of measurement error, use of proxy variables (particularly in social sciences), etc. So when you add a new variable to the model, you're not getting a pure adjustment to variance explained. The situation is even more complicated if you bring in interactions. Essentially your "honest R^2" advocating a type of step-wise model (where the added variable is added in at the last step), and there are major issues with those models.
Communicating Regression Model Results I disagree that another overall method evaluation is needed, what perhaps would be more useful is if people reported model summaries using the statistics already available to them, for example: Mallo
48,870
Communicating Regression Model Results
The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we just switched to _________'. I have never yet found that convincing, and I'm afraid don't here either. I recognize that there are some merits to the approach you favor, but it doesn't address all the problems that can occur with the old methods, would almost certainly be misused just as readily as old approaches, and isn't necessarily better overall than current methods when they are used well by someone who knows what they are doing and cares about getting it right. I rambled on interminably about a similar issue recently here--it's not exactly the same, but it's similar enough to get the idea. My main point is that much of the poor data analysis that occurs can be best described as 'mechanical' (or 'rote', in Cleveland's terms). Switching one mechanical approach for another is unlikely to change that. I can think of two related issues, which I didn't discuss there: There are a lot of people who need to be able to analyze data in their work who have weak mathematical backgrounds and/or are math-phobic. We are free to dislike this aspect of reality and to grouse about it, but it isn't going away. Part of the reason for poor analyses in practice is that a lot of people don't really understand what is going on, and see statistical analyses as though they work somehow by magic. To reduce problematic analyses we need to find ways to get people to a basic (non-magic based) conceptual grasp of how statistical analyses work. (It should be noted that sites like CV are part of the answer to this.) There are a lot of smart, interested people who are absorbed by the content-matter that they deal with, but who have little interest in the methodology (statistical or otherwise) with which that subject-matter information is intertwined. We need to get people on board with the fact that methodological issues are very important as well. This is not easy to do. You can point out actual cases where things were done incorrectly and important conclusions were missed or gotten wrong, but this can easily come off as overbearing and turn people off. I don't have a good solution. Better data analyses will emerge all by themselves (without any new techniques), when these three problems are solved. That is, when people think the methodological issues are important, understand them reasonably well, and don't simply apply them mechanically. So long as those conditions continue, however, new techniques will not guarantee better analyses. You make the case that your approach will help with #2, and it very well might, but that's still not enough. Regarding the merits of your approach, I think it has some strengths and weaknesses. Others have pointed to some issues with which I agree and so won't repeat them. However, I do want to say that using predictive accuracy makes this approach more appropriate for predictive modeling and less applicable to explanatory modeling (see here for a discussion on CV, and see the links listed on that page). Lastly, on a stylistic note, I would recommend you change the signs in your tables (2 & 3) so that the numbers represent the change from the top line -Full Model- Honest.R2 if that term is dropped. This will be much clearer and more intuitive for people. I'm afraid this response comes off pretty negative; I don't mean to be. Your approach clearly has some good ideas. 
I simply do not think that it, or any other candidate solution of its type, will solve our problems.
Communicating Regression Model Results
The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we
Communicating Regression Model Results The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we just switched to _________'. I have never yet found that convincing, and I'm afraid don't here either. I recognize that there are some merits to the approach you favor, but it doesn't address all the problems that can occur with the old methods, would almost certainly be misused just as readily as old approaches, and isn't necessarily better overall than current methods when they are used well by someone who knows what they are doing and cares about getting it right. I rambled on interminably about a similar issue recently here--it's not exactly the same, but it's similar enough to get the idea. My main point is that much of the poor data analysis that occurs can be best described as 'mechanical' (or 'rote', in Cleveland's terms). Switching one mechanical approach for another is unlikely to change that. I can think of two related issues, which I didn't discuss there: There are a lot of people who need to be able to analyze data in their work who have weak mathematical backgrounds and/or are math-phobic. We are free to dislike this aspect of reality and to grouse about it, but it isn't going away. Part of the reason for poor analyses in practice is that a lot of people don't really understand what is going on, and see statistical analyses as though they work somehow by magic. To reduce problematic analyses we need to find ways to get people to a basic (non-magic based) conceptual grasp of how statistical analyses work. (It should be noted that sites like CV are part of the answer to this.) There are a lot of smart, interested people who are absorbed by the content-matter that they deal with, but who have little interest in the methodology (statistical or otherwise) with which that subject-matter information is intertwined. We need to get people on board with the fact that methodological issues are very important as well. This is not easy to do. You can point out actual cases where things were done incorrectly and important conclusions were missed or gotten wrong, but this can easily come off as overbearing and turn people off. I don't have a good solution. Better data analyses will emerge all by themselves (without any new techniques), when these three problems are solved. That is, when people think the methodological issues are important, understand them reasonably well, and don't simply apply them mechanically. So long as those conditions continue, however, new techniques will not guarantee better analyses. You make the case that your approach will help with #2, and it very well might, but that's still not enough. Regarding the merits of your approach, I think it has some strengths and weaknesses. Others have pointed to some issues with which I agree and so won't repeat them. However, I do want to say that using predictive accuracy makes this approach more appropriate for predictive modeling and less applicable to explanatory modeling (see here for a discussion on CV, and see the links listed on that page). Lastly, on a stylistic note, I would recommend you change the signs in your tables (2 & 3) so that the numbers represent the change from the top line -Full Model- Honest.R2 if that term is dropped. This will be much clearer and more intuitive for people. I'm afraid this response comes off pretty negative; I don't mean to be. 
Your approach clearly has some good ideas. I simply do not think that it, or any other candidate solution of its type, will solve our problems.
Communicating Regression Model Results The consensus thus far seems to be against, and I have to say that I agree. Every so often, I come across the argument 'people make so many mistakes with stats, all these problems would go away if we
48,871
Communicating Regression Model Results
What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variables and discloses what the R Square is for the best one variable model, best 2 variable model, etc... It keeps on adding variables until the adjusted R Square value does not rise much anymore. So, you can readily see how much incremental information each additional variable provides.
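For readers working in R rather than XLStat, leaps::regsubsets produces the same kind of "best model of each size" table described above (best-subsets rather than strictly stepwise, but the output is the R^2/adjusted R^2 by model size summary). The Boston data is just an example dataset, not from the question.
library(leaps)
rs <- regsubsets(medv ~ ., data = MASS::Boston, nvmax = 8)
s  <- summary(rs)
data.frame(size = 1:8, adj_r2 = round(s$adjr2, 3),
           cp = round(s$cp, 1), bic = round(s$bic, 1))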
Communicating Regression Model Results
What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variab
Communicating Regression Model Results What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variables and discloses what the R Square is for the best one variable model, best 2 variable model, etc... It keeps on adding variables until the adjusted R Square value does not rise much anymore. So, you can readily see how much incremental information each additional variable provides.
Communicating Regression Model Results What you described is pretty much already captured within Stepwise Regression methodology with Hold Out periods. At least as presented by a software such as XLStat, the latter selects the best variab
48,872
Are there problems with inference using linear regression on observational data with highly skewed distributions of predictor values?
Yes, there shouldn't be any problem given your description in comments of the skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why this should be a problem; at most it will mean you have a relatively high degree of uncertainty about your estimate of the coefficient for that variable, and this will just come up automatically in your results; no particular problem there. But in my world, even 1,000 observations of X=0 and 30,000 of X=1 is still lots of data... :-) The fishhook with skewed continuous data is that you may end up with a few extreme points for one of your continuous variables. These points have high degrees of leverage and potentially of influence, which can cause you problems. But this is not likely to be a problem in your case.
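A quick simulation mirroring the 1,000 vs 30,000 split mentioned above (the effect size and noise level are made up), showing that the imbalance just widens the standard error a little and nothing else goes wrong.
set.seed(42)
x <- rep(0:1, c(1000, 30000))                # heavily imbalanced dummy predictor
y <- 1 + 0.2 * x + rnorm(length(x))
summary(lm(y ~ x))$coefficients              # the x coefficient and its (still modest) SE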
Are there problems with inference using linear regression on observational data with highly skewed d
Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why t
Are there problems with inference using linear regression on observational data with highly skewed distributions of predictor values? Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why this should be a problem; at most it will mean you just have a relatively high degree of uncertainty about your estimates of the coefficient for that variable, and this will just come up automatically in your results no particular problem there. But in my world, even 1,000 observations of X=0 and 30,000 of X=1 is still lots of data... :-) The fishhook with skewed continuous data is that you may end up with a few extreme points for one of your continuous variables. These points have high degrees of leverage and potentially of influence which can cause you problems. But this is not likely to be a problem in your case.
Are there problems with inference using linear regression on observational data with highly skewed d Yes, there shouldn't be any problem given your description in comments of skewed predictor as actually meaning a 0/1 dummy variable that just has many more values of 1 than 0. There's no reason why t
48,873
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
You cannot share parameters between Q and R, as you have specified in the model. See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same character name are constrained to be equal (no sharing across parameter matrices, only within)." I don't know if this helps much, since you already discovered it didn't work for you, but at least you know there is no official support for this type of parameter sharing. I don't know the solution, or if there is a solution, but you might try asking the authors of the package. I have found them to be very gracious with their time and expertise.
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
You cannot share parameters between Q and R, as you have specified in the model. See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same charact
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] You cannot share parameters between Q and R, as you have specified in the model. See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same character name are constrained to be equal (no sharing across parameter matrices, only within)." I don't know if this helps much, since you already discovered it didn't work for you, but at least you know there is no official support for this type of parameter sharing. I don't know the solution, or if there is a solution, but you might try asking the authors of the package. I have found them to be very gracious with their time and expertise.
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] You cannot share parameters between Q and R, as you have specified in the model. See http://journal.r-project.org/archive/2012-1/RJournal_2012-1_Holmes~et~al.pdf pg 13 "Elements with the same charact
48,874
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an initial value for this, run with no restrictions on the variance terms to start with; alternatively use the filter package, which does HP filtering.) You can then optimise across sigma_epsilon values to minimise the AIC. This can be done using a grid-search approach or one of the optimisation packages. To get confidence intervals you can write a simple function to jack-knife them.
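A skeleton of the grid-search idea in R. The exact MARSS model list depends on the model being fitted, so treat base_model (the rest of your model specification), lambda (the ratio implied by the restriction) and the fixed numeric matrices below as assumptions to adapt, not working MARSS code; the point is only the outer profiling loop.
fit_given_sigma_eps <- function(sigma_eps, y, base_model, lambda) {
  m <- c(base_model,                               # your existing MARSS model list, minus Q and R
         list(R = matrix(sigma_eps^2),             # observation variance held fixed
              Q = matrix(sigma_eps^2 / lambda)))   # state variance set by the restriction
  MARSS(y, model = m, silent = TRUE)
}
grid <- seq(0.05, 2, by = 0.05)
fits <- lapply(grid, fit_given_sigma_eps, y = y, base_model = base_model, lambda = lambda)
aics <- vapply(fits, function(f) f$AIC, numeric(1))
best_fit <- fits[[which.min(aics)]]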
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed]
I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an ini
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an initial value for this run with no restrictions on the variance terms to start with; alternatively use the filter package, which does HP filtering). You can then optimise across sigma_epsilon values to minimise the AIC. This can be done by using a grid search approach or one of the optimisation packages. To get out confidenced intervals you can write a simple function to jack-knife the confidence intervals.
Estimating State Space Model in R with MARSS package and shared parameters between Q and R [closed] I think that there is a way around this. Write a function with MARSS inside it where sigma_epsilon is given (and not estimated), which in turn determines sigma_nu using the restriction. (To get an ini
48,875
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model?
Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome. # take a dataframe, and re-level it such that the levels of the factors are # assigned positive coefficients by lm() # NOTE: this currently only works for model-forms that don't include # interaction terms. auto_relevel <- function(df, model_form) { # get list of categorical variables in df catvar_indices <- get_catvar_indices(df) # loop over categorical variables df_colnames <- attr(df, 'names') model_form_zeroicept <- paste(model_form, "- 1") for (i in catvar_indices) { catvar_name = df_colnames[i] all_levels = attr(df[[i]], "levels") temp_model <- lm(model_form_zeroicept, data=df) # If at least one of the levels' coefficients is less than zero, then # choose the one w/min coeff to be the new base-level # put a space after catvar_name so that it doesn't match longer level-names catvar_name <- paste(catvar_name, " ", sep="") factors <- grep(catvar_name, names(coef(temp_model))) coeffs <- coef(temp_model)[factors] # remove NA's from coeffs coeffs2 <- coeffs[! is.na(coeffs)] if (any(coeffs2 < 0)) { # find out where this factor is in *all_levels* chosen_level_name <- names(coeffs2)[which(coeffs2==min(coeffs2))] stripped_level_name <- unlist(strsplit(chosen_level_name," "))[2] # strip factor name # add an initial space (to match all_levels) stripped_level_name <- paste(" ", stripped_level_name, sep="") min_level_index <- which(all_levels == stripped_level_name) df[[i]] <- relevel(df[[i]], ref=min_level_index) } } return(df) }
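The function above relies on get_catvar_indices(), which isn't shown in the post. A minimal version, assuming it is simply meant to return the positions of the factor columns of the data frame, could be:
# indices of the factor (categorical) columns of a data frame
get_catvar_indices <- function(df) which(vapply(df, is.factor, logical(1)))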
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i
Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome. # take a data
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model? Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome. # take a dataframe, and re-level it such that the levels of the factors are # assigned positive coefficients by lm() # NOTE: this currently only works for model-forms that don't include # interaction terms. auto_relevel <- function(df, model_form) { # get list of categorical variables in df catvar_indices <- get_catvar_indices(df) # loop over categorical variables df_colnames <- attr(df, 'names') model_form_zeroicept <- paste(model_form, "- 1") for (i in catvar_indices) { catvar_name = df_colnames[i] all_levels = attr(df[[i]], "levels") temp_model <- lm(model_form_zeroicept, data=df) # If at least one of the levels' coefficients is less than zero, then # choose the one w/min coeff to be the new base-level # put a space after catvar_name so that it doesn't match longer level-names catvar_name <- paste(catvar_name, " ", sep="") factors <- grep(catvar_name, names(coef(temp_model))) coeffs <- coef(temp_model)[factors] # remove NA's from coeffs coeffs2 <- coeffs[! is.na(coeffs)] if (any(coeffs2 < 0)) { # find out where this factor is in *all_levels* chosen_level_name <- names(coeffs2)[which(coeffs2==min(coeffs2))] stripped_level_name <- unlist(strsplit(chosen_level_name," "))[2] # strip factor name # add an initial space (to match all_levels) stripped_level_name <- paste(" ", stripped_level_name, sep="") min_level_index <- which(all_levels == stripped_level_name) df[[i]] <- relevel(df[[i]], ref=min_level_index) } } return(df) }
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i Thanks to whuber's comment, and Seb's answer, I have put together the following function which I believe does what I want. Hopefully it will be useful to someone. Comments are welcome. # take a data
48,876
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model?
Here is an attampt of doing what you wanted. # Setting up some sample data require(dummies) df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60)) flevels <- dummy(df$categorial) df$categorial <- factor(df$categorial) df$y=20 + df$x*3 + flevels%*%c(3,1,2) + rnorm(60)*2 I use a regression in order to obtain the minimum factor level and then reorder: # Now we start with trying to find the minimum cateogry. However, note that this does not work in every context! summary(helpreg <- lm(y~x+factor(categorial) - 1, data=df)) Coefficients: Estimate Std. Error t value Pr(>|t|) x 2.9944 0.2334 12.83 <2e-16 *** factor(categorial)1 22.9640 0.4472 51.35 <2e-16 *** factor(categorial)2 21.0720 0.4390 48.00 <2e-16 *** factor(categorial)3 22.1300 0.4364 50.71 <2e-16 *** Then I start to sort out the minimum: factors <- grep('categorial', names(coef(helpreg))) # --- replace categorial with your variable name minimumf <- which(coef(helpreg)[factors]==min(coef(helpreg)[factors])) This is then releveled df$categorial <- relevel(df$categorial, ref=minimumf) And in my case it works - probably it works for you as well.... summary(lm(y~x+factor(categorial), data=df)) Estimate Std. Error t value Pr(>|t|) (Intercept) 21.0720 0.4390 48.003 < 2e-16 *** x 2.9944 0.2334 12.828 < 2e-16 *** factor(categorial)1 1.8920 0.6341 2.984 0.00421 ** factor(categorial)3 1.0580 0.6193 1.708 0.09310 . Comments of course highly appreciated!
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i
Here is an attampt of doing what you wanted. # Setting up some sample data require(dummies) df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60)) flevels <- dummy(df$categorial) df$cat
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients in a regression model? Here is an attampt of doing what you wanted. # Setting up some sample data require(dummies) df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60)) flevels <- dummy(df$categorial) df$categorial <- factor(df$categorial) df$y=20 + df$x*3 + flevels%*%c(3,1,2) + rnorm(60)*2 I use a regression in order to obtain the minimum factor level and then reorder: # Now we start with trying to find the minimum cateogry. However, note that this does not work in every context! summary(helpreg <- lm(y~x+factor(categorial) - 1, data=df)) Coefficients: Estimate Std. Error t value Pr(>|t|) x 2.9944 0.2334 12.83 <2e-16 *** factor(categorial)1 22.9640 0.4472 51.35 <2e-16 *** factor(categorial)2 21.0720 0.4390 48.00 <2e-16 *** factor(categorial)3 22.1300 0.4364 50.71 <2e-16 *** Then I start to sort out the minimum: factors <- grep('categorial', names(coef(helpreg))) # --- replace categorial with your variable name minimumf <- which(coef(helpreg)[factors]==min(coef(helpreg)[factors])) This is then releveled df$categorial <- relevel(df$categorial, ref=minimumf) And in my case it works - probably it works for you as well.... summary(lm(y~x+factor(categorial), data=df)) Estimate Std. Error t value Pr(>|t|) (Intercept) 21.0720 0.4390 48.003 < 2e-16 *** x 2.9944 0.2334 12.828 < 2e-16 *** factor(categorial)1 1.8920 0.6341 2.984 0.00421 ** factor(categorial)3 1.0580 0.6193 1.708 0.09310 . Comments of course highly appreciated!
How can factor-levels be automatically chosen in R to maximize the number of positive coefficients i Here is an attampt of doing what you wanted. # Setting up some sample data require(dummies) df <- data.frame(categorial=rep(c(1,2,3), each=20), x=rnorm(60)) flevels <- dummy(df$categorial) df$cat
48,877
Permutational MANOVA and Mahalanobis distances in R
As one of the developers of vegan (though not the adonis() function) I am reasonably well-placed to comment; unfortunately, adonis() assumes vegdist() is to be used for computation of the dissimilarity matrix in the function. Changing adonis() wouldn't be too difficult to do so that it allows any function that returns an object of class "dist" to be used in place of vegdist() - in fact it would be trivial - but I'd need to know which function you intended to use to compute the mahalanobis distance so I could write a wrapper function and provide a modified adonis() here for your use. Changing the actual adonis() function in vegan is more involved... In the meantime, I'll take this up with the vegan developers; there are several functions in vegan that could benefit from being generalised to allow different dissimilarity functions and some already allow this. At this point in the package's development, we should be looking at making the variously-authored functions work more similarly.
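In the meantime, a minimal base-R sketch of how a Mahalanobis distance matrix of class "dist" could be computed on the user's side, ready for whatever generalised adonis() or wrapper eventually accepts it; comm and groups are placeholder names, not objects from the question. It relies on the fact that the Mahalanobis distance equals the Euclidean distance on whitened data.
comm <- matrix(rnorm(30 * 5), nrow = 30)    # placeholder data/community matrix
groups <- gl(3, 10)                         # placeholder grouping factor for the eventual model
S <- cov(comm)
W <- solve(chol(S))                         # chol() returns R with S = t(R) %*% R
mahal_d <- dist(comm %*% W)                 # Euclidean distance on whitened data
                                            # = Mahalanobis distance, stored as class "dist"
## sanity check against stats::mahalanobis() for the first pair of rows (should be ~0)
sqrt(mahalanobis(comm[1, ], center = comm[2, ], cov = S)) - as.matrix(mahal_d)[1, 2]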
Permutational MANOVA and Mahalanobis distances in R
As one of the developers of vegan (though not the adonis() function) I am reasonably well-placed to comment; unfortunately, adonis() assumes vegdist() is to be used for computation of the dissimilarit
Permutational MANOVA and Mahalanobis distances in R As one of the developers of vegan (though not the adonis() function) I am reasonably well-placed to comment; unfortunately, adonis() assumes vegdist() is to be used for computation of the dissimilarity matrix in the function. Changing adonis() wouldn't be too difficult to do so that it allows any function that returns an object of class "dist" to be used in place of vegdist() - in fact it would be trivial - but I'd need to know which function you intended to use to compute the mahalanobis distance so I could write a wrapper function and provide a modified adonis() here for your use. Changing the actual adonis() function in vegan is more involved... In the meantime, I'll take this up with the vegan developers; there are several functions in vegan that could benefit from being generalised to allow different dissimilarity functions and some already allow this. At this point in the package's development, we should be looking at making the variously-authored functions work more similarly.
Permutational MANOVA and Mahalanobis distances in R As one of the developers of vegan (though not the adonis() function) I am reasonably well-placed to comment; unfortunately, adonis() assumes vegdist() is to be used for computation of the dissimilarit
48,878
Test for non random-walk
You could try the Runs Test. Let $n_1$ be the number of +1 runs and $n_2$ be the number of -1 runs. You could then use the test as in the wiki page.
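A minimal base-R sketch of the Wald–Wolfowitz form of the test described on the linked wiki page, where $n_1$ and $n_2$ count the +1 and -1 observations and the statistic is based on the total number of runs; x is assumed to be a vector of +1/-1 values:
runs_test <- function(x) {
  n1 <- sum(x == 1); n2 <- sum(x == -1)
  r  <- 1 + sum(diff(x) != 0)                 # total number of runs
  mu <- 2 * n1 * n2 / (n1 + n2) + 1           # expected number of runs under randomness
  v  <- 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2)^2 * (n1 + n2 - 1))
  z  <- (r - mu) / sqrt(v)                    # approximately N(0,1) in large samples
  c(runs = r, expected = mu, z = z, p.value = 2 * pnorm(-abs(z)))
}
runs_test(sign(rnorm(200)))                   # example on a random +1/-1 sequence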
Test for non random-walk
You could try Runs Test. Let $n_1$ bet the number of +1 runs and $n_2$ bet the number of -1 runs. You could use the test as in the wiki page.
Test for non random-walk You could try Runs Test. Let $n_1$ bet the number of +1 runs and $n_2$ bet the number of -1 runs. You could use the test as in the wiki page.
Test for non random-walk You could try Runs Test. Let $n_1$ bet the number of +1 runs and $n_2$ bet the number of -1 runs. You could use the test as in the wiki page.
48,879
McNemar’s test or T-test for measuring statistical significance of matched-pre-post-test result
Yes, for research question 1 a t-test is appropriate, provided that the difference post-score minus pre-score is approximately normally distributed; if not, consider using the Wilcoxon signed-rank test. Yes, for research question 2 McNemar's test is the right choice. No. The t-test is unusual to apply to proportions. When people do apply it, they first apply the Fisher angular transformation to the proportions before doing the t-test; this is generally regarded as an out-of-date approach. Notice also that with those 2 proportions in hand you cannot do a paired test; it will be an independent-samples test.
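A short R sketch of the three tests mentioned, on made-up paired data (pre/post are the matched scores, pre_pass/post_pass the matched binary outcomes):
set.seed(1)
pre  <- rnorm(40, 60, 10)
post <- pre + rnorm(40, 3, 5)              # toy matched pre/post scores
t.test(post, pre, paired = TRUE)           # RQ1: paired t-test on post - pre
wilcox.test(post, pre, paired = TRUE)      # fallback if the differences are non-normal
pre_pass  <- rbinom(40, 1, 0.4)            # toy matched binary outcomes
post_pass <- rbinom(40, 1, 0.6)
mcnemar.test(table(pre_pass, post_pass))   # RQ2: McNemar's test on the paired 2x2 table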
McNemar’s test or T-test for measuring statistical significance of matched-pre-post-test result
Yes, for research question 1 t-test is appropriate, provided that the difference post-score minus pre-score is about normally distributed; if not, consider using Wilcoxon signed rank test. Yes, for re
McNemar’s test or T-test for measuring statistical significance of matched-pre-post-test result Yes, for research question 1 t-test is appropriate, provided that the difference post-score minus pre-score is about normally distributed; if not, consider using Wilcoxon signed rank test. Yes, for research question 2 McNemar is the right choice. No. T-test is unusual to apply for proportions. When they do it for all that, they apply Fisher angular transformation of proportions before doing t-test; this generally is regarded out-of-date approach. Notice also that with those 2 proportions in hands you cannot do a paired test, it will be independent-samples test.
McNemar’s test or T-test for measuring statistical significance of matched-pre-post-test result Yes, for research question 1 t-test is appropriate, provided that the difference post-score minus pre-score is about normally distributed; if not, consider using Wilcoxon signed rank test. Yes, for re
48,880
What are subjective interestingness measures?
Consider a classic example of the following rule: IF (patient is pregnant) THEN (patient is female). This rule is very accurate and comprehensible, but it is not interesting, since it represents the obvious. Another example, from a real-world data set: IF (used_seat_belt = ‘yes’) THEN (injury = ‘no’).......................................................(1) IF ((used_seat_belt = ‘yes’) Λ (passenger = child)) THEN (injury = ‘yes’)...............(2) Rule (1) is a general and an obvious rule. But rule (2) contradicts the knowledge represented by rule (1) and so the user's beliefs. This kind of knowledge is unexpected given users' preset beliefs, and it is always interesting to extract this interesting (or surprising) knowledge from data sets. “Unexpectedness” means knowledge which is unexpected given the beliefs of users, i.e. a decision rule is considered to be interesting (or surprising) if it represents knowledge that was not only previously unknown to the users but also contradicts the original beliefs of the users. I hope these examples help you to understand the concept more clearly. Edit Yes, firstly, discover the general rules and then discover exceptions to these general rules. For example, a general rule: If bird then fly. However, there are a few exceptional birds, like the emu and the penguin, that do not fly. It would definitely be valuable to discover such exceptions along with the rule, making the rule more accurate, comprehensible as well as interesting.
What are subjective interestingness measures?
Consider, a classic example of the following rule: IF (patient is pregnant) THEN (patient is female). This rule is very accurate and comprehensible, but it is not interesting, since it represents th
What are subjective interestingness measures? Consider, a classic example of the following rule: IF (patient is pregnant) THEN (patient is female). This rule is very accurate and comprehensible, but it is not interesting, since it represents the obvious. Another Example from real world data set, IF (used_seat_belt = ‘yes’) THEN (injury = ‘no’).......................................................(1) IF ((used_seat_belt = ‘yes’) Λ (passenger = child)) THEN (injury = ‘yes’)...............(2) Rule (1) is a general and an obvious rule. But rule (2) contradicts the knowledge represented by rule (1) and so the user's belief. This kind of knowledge is unexpected from users preset beliefs and it is always interesting to extract this interesting (or surprising) knowledge from data sets. “Unexpectedness” means knowledge which is unexpected from the beliefs of users i.e. A decision rule is considered to be interesting (or surprising) if it represents knowledge that was not only previously unknown to the users but also contradicts the original beliefs of the users. I hope, these examples may help you to understand the concept more clearly. Edit Yes, firstly, discover the general rules and then discover exceptions to these general rules. For example, A general rule : If bird then fly However, there are few exceptional birds like emu and penguin that do not fly. It would definitely be valuable to discover such exceptions along with the rule, making the rule more accurate, comprehensible as well as interesting.
What are subjective interestingness measures? Consider, a classic example of the following rule: IF (patient is pregnant) THEN (patient is female). This rule is very accurate and comprehensible, but it is not interesting, since it represents th
48,881
Determining how well given real-life data fits to a given probability distribution
There are statistical tests that allow you to check if your data are an inappropriately poor match to a given distribution, as @ChillPenguin has noted. However, I think graphical techniques are best for this task. Typically, the best approach is to use a qq-plot. A somewhat less-used, but similar approach is to use a pp-plot. Note that a qq-plot gives you better resolution in the tails of the distribution, while a pp-plot gives you better resolution in the middle of the distribution. As I said, people usually go with a qq-plot, because typically deviations in the tails are more important. These plots make it easy to see that your data differ from a theoretical distribution, but sometimes it is hard to interpret how they are deviating. If you have checked a qq-plot, and are concerned that your data don't fit, but want a clearer picture of how that manifests, one approach is to make a kernel density plot of your data, possibly overlaid with a theoretical distribution that has the same mean and SD. Note that none of these approaches necessarily tells you which distribution your data come from; they would only tell you that the fit is reasonable or poor. If they are poor, then you need to use your knowledge of your data and the range of distributions that exist to pick another contender to explore. For example, if you had a distribution of counts for, say, the number of auto accidents at different locations, and checked it against a normal, you would most likely find a poor fit. However, nothing there would tell you that you should be checking your data against a Poisson distribution instead; you would need to know about that yourself.
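A base-R sketch of these graphical checks against a normal reference (swap in the relevant q*/p*/d* functions for other candidate distributions); x here is simulated, deliberately non-normal data:
x <- rgamma(200, shape = 2, rate = 1)       # toy, right-skewed data
qqnorm(x); qqline(x)                        # qq-plot against a normal
plot(pnorm(sort(x), mean(x), sd(x)), ppoints(length(x)),
     xlab = "theoretical", ylab = "empirical"); abline(0, 1)   # crude pp-plot
plot(density(x), main = "Kernel density vs. normal with same mean and SD")
curve(dnorm(z, mean = mean(x), sd = sd(x)), xname = "z", add = TRUE, lty = 2)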
Determining how well given real-life data fits to a given probability distribution
There are statistical tests that allow you to check if your data are an inappropriately poor match to a given distribution as @ChillPenguin has noted. However, I think graphical techniques are best f
Determining how well given real-life data fits to a given probability distribution There are statistical tests that allow you to check if your data are an inappropriately poor match to a given distribution as @ChillPenguin has noted. However, I think graphical techniques are best for this task. Typically, the best approach is to use a qq-plot. A somewhat less-used, but similar approach is to use a pp-plot. Note that a qq-plot gives you better resolution in the tails of the distribution, while a pp-plot gives you better resolution in the middle of the distribution. As I said, people usually go with a qq-plot, because typically deviations in the tails are more important. These plots make it easy to see that your data differ from a theoretical distribution, but sometimes it is hard to interpret how they are deviating. If you have checked a qq-plot, and are concerned that your data don't fit, but want a clearer picture of how that manifests, one approach is to make a kernel demsity plot of your data, possibly overlaid with a theoretical distribution that has the same mean and SD. Note that none of these approaches necessarily tells you which distribution your data come from, they would only tell you that the fit is reasonable or poor. If they are poor, then you need to use your knowledge of your data and the range of distributions that exist to pick another contender to explore. For example, if you had a distribution of counts for, say, the number of auto accidents at different locations, and checked it against a normal, you would most likely find a poor fit. However, nothing there would tell you that you should be checking your data against a Poisson distribution instead; you would need to know about that yourself.
Determining how well given real-life data fits to a given probability distribution There are statistical tests that allow you to check if your data are an inappropriately poor match to a given distribution as @ChillPenguin has noted. However, I think graphical techniques are best f
48,882
Determining how well given real-life data fits to a given probability distribution
Given a model (that is, a parametric family of distributions, such as the family of normal distributions parametrized by mean and variance), the most straightforward thing to do is use maximum likelihood estimation to estimate the parameters, then use the probability density function to assess how typical the data are. If the model is conventional and parsimonious (rather than being tailored to the data, or something), and it makes the data look reasonably typical, you can argue that the model is good enough. Goodness-of-fit tests are often recommended for this sort of thing, but all they're good for is justifying a statement that the data doesn't come from a given distribution. Failure to reject the null hypothesis of a goodness-of-fit test isn't evidence that the data does in fact come from that distribution.
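A sketch of that workflow using MASS::fitdistr() for the maximum likelihood step (the gamma family here is only an illustration):
library(MASS)
x <- rgamma(300, shape = 2, rate = 0.5)     # toy data
fit <- fitdistr(x, "gamma")                 # maximum likelihood estimation
fit$estimate; fit$sd                        # MLEs and their standard errors
logLik(fit)                                 # maximised log-likelihood
hist(x, freq = FALSE, main = "Data vs. fitted gamma density")
curve(dgamma(z, shape = fit$estimate["shape"], rate = fit$estimate["rate"]),
      xname = "z", add = TRUE)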
Determining how well given real-life data fits to a given probability distribution
Given a model (that is, a parametric family of distributions, such the family of normal distributions parametrized by mean and variance), the most straightforward thing to do is use maximum likelihood
Determining how well given real-life data fits to a given probability distribution Given a model (that is, a parametric family of distributions, such the family of normal distributions parametrized by mean and variance), the most straightforward thing to do is use maximum likelihood estimation to estimate the parameters, then use the probability density function to assess how typical the data are. If the model is conventional and parsimonious (rather than being tailored to the data, or something), and it makes the data look reasonably typical, you can argue that the model is good enough. Goodness-of-fit tests are often recommended for this sort of thing, but all they're good for is justifying a statement that the data doesn't come from a given distribution. Failure to reject the null hypothesis of a goodness-of-fit test isn't evidence that the data does in fact come from that distribution.
Determining how well given real-life data fits to a given probability distribution Given a model (that is, a parametric family of distributions, such the family of normal distributions parametrized by mean and variance), the most straightforward thing to do is use maximum likelihood
48,883
Interpreting coefficients from a VECM (Vector Error Correction Model)
After much research, I found the following reference to be the most useful when trying to interpret the findings of a VECM: Helmut Lütkepohl, Markus Krätzig, Structural Vector Autoregressive Modeling and Impulse Responses, pp. 159-196. In: Applied Time Series Econometrics. A link to the chapter is given below: http://ebooks.cambridge.org/chapter.jsf?bid=CBO9780511606885&cid=CBO9780511606885A036
Interpreting coefficients from a VECM (Vector Error Correction Model)
After much researching I the following reference was the most useful to me when trying to interpret the findings of a vecm: Helmut Lütkepohl, Markus Krätzig Structural Vector Autoregressive Modeling a
Interpreting coefficients from a VECM (Vector Error Correction Model) After much researching I the following reference was the most useful to me when trying to interpret the findings of a vecm: Helmut Lütkepohl, Markus Krätzig Structural Vector Autoregressive Modeling and Impulse Responses pp. 159-196. In: Applied time-series economics. A link to the chapter is given below: http://ebooks.cambridge.org/chapter.jsf?bid=CBO9780511606885&cid=CBO9780511606885A036
Interpreting coefficients from a VECM (Vector Error Correction Model) After much researching I the following reference was the most useful to me when trying to interpret the findings of a vecm: Helmut Lütkepohl, Markus Krätzig Structural Vector Autoregressive Modeling a
48,884
Interpreting coefficients from a VECM (Vector Error Correction Model)
The ECT is considered good if it lies in the range 0 to 1 (in absolute value), and certainly not more than 2. The ECT should be negative; a positive value implies explosive behaviour and is not reasonable. For example, if the ECT(-1) estimated coefficient is -0.87, the estimate indicates that about 87 per cent of this disequilibrium is corrected within 1 year (if the data are annual). But if the ECT(-1) is, say, -1.07, the estimate would indicate that about 107 per cent of the disequilibrium is corrected within 1 year - and this does not make sense.
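A purely illustrative calculation (an assumption added here, not part of the answer above): in a simple single-equation error-correction model with no other dynamics, a deviation from equilibrium shrinks roughly by a factor (1 + alpha) each period, so the implied half-life of a disequilibrium is log(0.5)/log(1 + alpha) periods; this only makes sense for -1 < alpha < 0.
alpha <- -0.87
1 - (1 + alpha)               # share of the gap closed in one period: 0.87
log(0.5) / log(1 + alpha)     # implied half-life, roughly 0.34 of a period
alpha2 <- -1.07
1 + alpha2                    # negative: the gap overshoots and flips sign each period,
                              # which is why a coefficient below -1 is hard to interpret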
Interpreting coefficients from a VECM (Vector Error Correction Model)
ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated co
Interpreting coefficients from a VECM (Vector Error Correction Model) ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated coefficient is -0.87 (The estimated coefficient indicates that about 87 per cent of this disequilibrium is corrected between 1 year (if annually data)). But if the ECT(-1) are -1.07 as an example (The estimated coefficient indicates that about 107 per cent of this disequilibrium is corrected between 1 year - and this does not make sense).
Interpreting coefficients from a VECM (Vector Error Correction Model) ECT is consider good if the range between 0 ~ 1 but not more than 2. ECT should be in negative number and if positive value means explosive and not reasonable. For example, if the ECT(-1) estimated co
48,885
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test?
You can almost never go wrong with more information. With that in mind, reporting the median, interquartile range and range is a good idea. Also, reporting the bootstrapped 95% CI of the median is also a good idea. (See Haukoos JS, Lewis RJ. Advanced Statistics: Bootstrapping Confidence Intervals for Statistics with ‘‘Difficult’’ Distributions. Academic Emergency Medicine 2005;12:360-5 for more information.) SEM is rarely appropriate for graphs, as it speaks to the population, not to the sample. I think it is totally acceptable for the graphs to have SDs plotted. That being said, you'll get many different opinions about this, as there is no "best practice". Just be sure that the caption to your graph is completely descriptive of what is contained within the graph, and you'll be fine!
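A sketch of the bootstrapped 95% CI of the median with the boot package (the scores here are invented):
library(boot)
x <- c(12, 15, 9, 22, 17, 11, 14, 19, 13, 16, 25, 10)   # one condition's scores
b <- boot(x, function(d, i) median(d[i]), R = 2000)     # resample the median
boot.ci(b, type = "perc")                               # percentile 95% CI of the median
median(x); quantile(x, c(0.25, 0.75)); range(x)         # the descriptives to report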
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparamet
You can almost never go wrong with more information. With that in mind, reporting the median, interquartile range and range is a good idea. Also, reporting the bootstrapped 95% CI of the median is als
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test? You can almost never go wrong with more information. With that in mind, reporting the median, interquartile range and range is a good idea. Also, reporting the bootstrapped 95% CI of the median is also a good idea. (See Haukoos JS, Lewis RJ. Advanced Statistics: Bootstrapping Confidence Intervals for Statistics with ‘‘Difficult’’ Distributions. Academic Emergency Medicine 2005;12:360-5 for more information.) SEM is rarely appropriate for graphs, as it speaks to the population, not to the sample. I think it is totally acceptable for the graphs to have SDs plotted. That being said, you'll get many different opinions about this, as there is no "best practice". Just be sure that the caption to your graph is completely descriptive of what is contained within the graph, and you'll be fine!
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparamet You can almost never go wrong with more information. With that in mind, reporting the median, interquartile range and range is a good idea. Also, reporting the bootstrapped 95% CI of the median is als
48,886
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test?
This largely overlaps with what @propofol has said. LAERD statistics has a tutorial on reporting Friedman's test that emphasises reporting the median and interquartile range. If your data contains many tied ranks , then an interpolated median is typically more sensitive than a standard median. See this discussion, and the interp.median function in R. Example reports You might get additional inspiration by doing a search on Google Scholar specifically for "Friedman's test". However, I admit that when I had a quick look a fair few studies were not implementing best practice in reporting. One interesting example is Blana et al (2006, PDF HERE). They graphically represented the change using box plots for each time point (i.e., showing median, interquartile range, outliers, and so forth). I think this is a good option. Blana, A., Rogenhofer, S., Ganzer, R., Wild, P., Wieland, W., and Walter, B. (2006). Morbidity associated with repeated transrectal high-intensity focused ultrasound treatment of localized prostate cancer. World journal of urology, 24(5):585-590.
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparamet
This largely overlaps with what @propofol has said. LAERD statistics has a tutorial on reporting Friedman's test that emphasises reporting the median and interquartile range. If your data contains ma
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparametric test? This largely overlaps with what @propofol has said. LAERD statistics has a tutorial on reporting Friedman's test that emphasises reporting the median and interquartile range. If your data contains many tied ranks , then an interpolated median is typically more sensitive than a standard median. See this discussion, and the interp.median function in R. Example reports You might get additional inspiration by doing a search on Google Scholar specifically for "Friedman's test". However, I admit that when I had a quick look a fair few studies were not implementing best practice in reporting. One interesting example is Blana et al (2006, PDF HERE). They graphically represented the change using box plots for each time point (i.e., showing median, interquartile range, outliers, and so forth). I think this is a good option. Blana, A., Rogenhofer, S., Ganzer, R., Wild, P., Wieland, W., and Walter, B. (2006). Morbidity associated with repeated transrectal high-intensity focused ultrasound treatment of localized prostate cancer. World journal of urology, 24(5):585-590.
What descriptive statistics should be reported in tables and graphs when using Friedman's nonparamet This largely overlaps with what @propofol has said. LAERD statistics has a tutorial on reporting Friedman's test that emphasises reporting the median and interquartile range. If your data contains ma
48,887
Software for learning statistical quality control
The qcc package comes to mind. A quick search through the packages list at http://cran.r-project.org/ shows other packages that may be helpful: graphicsQC, IQCC, qualityTools, SixSigma, and two Rcmdr plugins.
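A minimal qcc sketch, assuming the measurements are arranged as a matrix with one row per rational subgroup; xbar and R charts are the usual starting point:
library(qcc)
set.seed(42)
x <- matrix(rnorm(25 * 5, mean = 10, sd = 0.2), nrow = 25)  # 25 subgroups of size 5
qcc(x, type = "xbar")     # control chart for the subgroup means
qcc(x, type = "R")        # control chart for the subgroup ranges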
Software for learning statistical quality control
The qcc package comes to mind. A quick search through the packages list at http://cran.r-project.org/ shows other packages that may be helpful: graphicsQC, IQCC, qualityTools, SixSigma, and two Rcmdr
Software for learning statistical quality control The qcc package comes to mind. A quick search through the packages list at http://cran.r-project.org/ shows other packages that may be helpful: graphicsQC, IQCC, qualityTools, SixSigma, and two Rcmdr plugins.
Software for learning statistical quality control The qcc package comes to mind. A quick search through the packages list at http://cran.r-project.org/ shows other packages that may be helpful: graphicsQC, IQCC, qualityTools, SixSigma, and two Rcmdr
48,888
Literature on generating "similar" synthetic time series from observed time series
There are several papers under the label of "surrogate data" in the nonlinear data-analysis literature which deal with the question of how to generate data that have "similar" properties to some reference data. These data are then used to run tests to see whether there is additional (nonlinear/chaotic) structure in the data that is not covered by the surrogate-creation technique. There are many different papers on this issue. Theiler and colleagues worked on it: http://link.aps.org/doi/10.1103/PhysRevLett.77.635 http://link.aps.org/doi/10.1103/PhysRevLett.73.951 http://www.sciencedirect.com/science/article/pii/S0167278903001362 and they do use spectral methods with Fourier and wavelet transforms...
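A base-R sketch of the simplest member of that family, a phase-randomised (Fourier) surrogate in the spirit of Theiler et al.: it keeps the original series' spectrum/autocovariance but scrambles the phases, so only the linear structure is preserved.
phase_surrogate <- function(x) {
  n    <- length(x)
  z    <- fft(x)
  half <- floor((n - 1) / 2)                     # indices 2:(half+1) are the positive frequencies
  phi  <- runif(half, 0, 2 * pi)
  z[2:(half + 1)] <- Mod(z[2:(half + 1)]) * exp(1i * phi)
  z[n:(n - half + 1)] <- Conj(z[2:(half + 1)])   # enforce conjugate symmetry so the result is real
  # DC term (and Nyquist term when n is even) are left untouched
  Re(fft(z, inverse = TRUE) / n)
}
set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.8), n = 512))   # toy series with linear structure
s <- phase_surrogate(x)
c(acf(x, plot = FALSE)$acf[2], acf(s, plot = FALSE)$acf[2])   # lag-1 autocorrelation is preserved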
Literature on generating "similar" synthetic time series from observed time series
There is are several papers under the label of "surrogate data" in the nonlinear data-analysis literature which deals with the question of how to generate data that have "similar" properties to some r
Literature on generating "similar" synthetic time series from observed time series There is are several papers under the label of "surrogate data" in the nonlinear data-analysis literature which deals with the question of how to generate data that have "similar" properties to some reference data. This data is then used to run tests to see whether there is additional (nonlinear/chaotic) structure in the data that is not covered by the surrogate-creation technique. There are many different papers on this issue. Theiler and colleagues worked on it: http://link.aps.org/doi/10.1103/PhysRevLett.77.635 http://link.aps.org/doi/10.1103/PhysRevLett.73.951 http://www.sciencedirect.com/science/article/pii/S0167278903001362 and they do use spectral methods with Fourier and Wavelet-transforms...
Literature on generating "similar" synthetic time series from observed time series There is are several papers under the label of "surrogate data" in the nonlinear data-analysis literature which deals with the question of how to generate data that have "similar" properties to some r
48,889
Literature on generating "similar" synthetic time series from observed time series
Maybe you can take a Fourier transform or a wavelet transform, and then flip the signs of the randomly selected components (or shift phases in Fourier space), and then re-assemble the series back. Of course there's also a certain amount of literature on how to bootstrap time series (block bootstrap, mostly), which may or may not be related to what you want to do.
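And a minimal sketch of the moving-block bootstrap mentioned at the end, hand-rolled here (the boot package's tsboot() wraps the same idea):
block_bootstrap <- function(x, block_len = 20) {
  n      <- length(x)
  starts <- sample(seq_len(n - block_len + 1),
                   size = ceiling(n / block_len), replace = TRUE)   # random block starts
  out    <- unlist(lapply(starts, function(s) x[s:(s + block_len - 1)]))
  out[seq_len(n)]                               # trim back to the original length
}
x  <- as.numeric(arima.sim(list(ar = 0.7), n = 300))
xb <- block_bootstrap(x)                        # one resampled series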
Literature on generating "similar" synthetic time series from observed time series
Maybe you can take a Fourier transform or a wavelet transform, and then flip the signs of the randomly selected components (or shift phases in Fourier space), and then re-assemble the series back. Of
Literature on generating "similar" synthetic time series from observed time series Maybe you can take a Fourier transform or a wavelet transform, and then flip the signs of the randomly selected components (or shift phases in Fourier space), and then re-assemble the series back. Of course there's also a certain amount of literature on how to bootstrap time series (block bootstrap, mostly), which may or may not be related to what you want to do.
Literature on generating "similar" synthetic time series from observed time series Maybe you can take a Fourier transform or a wavelet transform, and then flip the signs of the randomly selected components (or shift phases in Fourier space), and then re-assemble the series back. Of
48,890
Literature on generating "similar" synthetic time series from observed time series
Not sure about the Fourier transform approach, never heard of that. On the other hand, if you can make some distribution assumption (e.g. changes are multivariate normal) it is easy to simulate from a multivariate normal distribution by running a Cholesky decomposition on the sample covariance matrix of your data set. You simply take the triangular matrix you get and multiply by a vector of uncorrelated standard normal random variable samples, and you get a vector of samples respecting the covariance structure observed in your data set. For example, in finance, we typically model log-returns (the log of a day's price divided by the previous day's price) as normally distributed. So we create data sets of log-return data, calc the covariance matrix, do the Cholesky decomposition, and simulate paths by drawing standard normally distributed random variables, multiplying by the triangular matrix obtained in the decomposition.
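A compact sketch of that recipe; prices is assumed to be a T x k matrix of price levels (simulated here as a stand-in), and everything else is derived from it:
set.seed(7)
prices <- exp(apply(matrix(rnorm(500 * 3, 0, 0.01), 500), 2, cumsum)) * 100  # stand-in price data
R  <- apply(log(prices), 2, diff)        # log-returns
mu <- colMeans(R); S <- cov(R)
U  <- chol(S)                            # upper triangular, S = t(U) %*% U
n  <- 250                                # days to simulate
Z  <- matrix(rnorm(n * ncol(R)), n)      # uncorrelated standard normals
Rsim <- sweep(Z %*% U, 2, mu, "+")       # simulated returns with the sample covariance structure
Psim <- exp(sweep(apply(Rsim, 2, cumsum), 2, log(prices[nrow(prices), ]), "+"))
                                         # simulated price paths starting from the last observed prices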
Literature on generating "similar" synthetic time series from observed time series
Not sure about the Fourier transform approach, never heard of that. On the other hand, if you can make some distribution assumption (e.g. changes are multivariate normal) it is easy to simulate from a
Literature on generating "similar" synthetic time series from observed time series Not sure about the Fourier transform approach, never heard of that. On the other hand, if you can make some distribution assumption (e.g. changes are multivariate normal) it is easy to simulate from a multivariate normal distribution by running a Cholesky decomposition on the sample covariance matrix of your data set. You simply take the triangular matrix you get and multiply by a vector of uncorrelated standard normal random variable samples, and you get a vector of samples respecting the covariance structure observed in your data set. For example, in finance, we typically model log-returns (the log of a day's price divided by the previous day's price) as normally distributed. So we create data sets of log-return data, calc the covariance matrix, do the Cholesky decomposition, and simulate paths by drawing standard normally distributed random variables, multiplying by the triangular matrix obtained in the decomposition.
Literature on generating "similar" synthetic time series from observed time series Not sure about the Fourier transform approach, never heard of that. On the other hand, if you can make some distribution assumption (e.g. changes are multivariate normal) it is easy to simulate from a
48,891
How to choose the tolerance parameter for ABC?
One approach to choosing the cutoff value $\epsilon$ for ABC rejection sampling is the following (similar to Aniko's answer). Simulate several test data sets from known parameter values which are vaguely similar to your observed data (e.g. by performing ABC with a relatively large $\epsilon$). From the ABC output for a test data set, some criterion of performance compared to the true parameters can be calculated, such as mean squared error. Calculate this for all test data sets at many $\epsilon$ values, and choose $\epsilon$ to optimise the mean criterion (as this is a Monte Carlo estimate of its expectation). This requires many repetitions of the ABC algorithm, but can be done efficiently by using the same $N$ data simulations in every ABC algorithm (although this introduces some dependency between simulations). In general, there is not a lot of published work on the choice of $\epsilon$. I think the approach above has been used somewhere and I will edit if I remember the references. An alternative is in "Choosing the Summary Statistics and the Acceptance Rate in Approximate Bayesian Computation" by Michael Blum. Other methods that I'm aware of apply only to SMC or MCMC methods.
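A toy sketch of that tuning loop for a deliberately simple model (normal mean with known variance, prior theta ~ N(0, 5^2), summary statistic = sample mean); every name and number here is an assumption made for illustration:
set.seed(3)
abc_reject <- function(y_obs, eps, N = 20000) {
  theta <- rnorm(N, 0, 5)                             # draws from the prior
  s_sim <- rnorm(N, theta, 1 / sqrt(length(y_obs)))   # simulated sample means
  theta[abs(s_sim - mean(y_obs)) < eps]               # accepted parameter values
}
true_theta <- 2
test_sets  <- replicate(20, rnorm(30, true_theta, 1), simplify = FALSE)
eps_grid   <- c(0.05, 0.1, 0.2, 0.5, 1)
mse <- sapply(eps_grid, function(e)
  mean(sapply(test_sets, function(y) {
    post <- abc_reject(y, e)
    if (length(post) < 50) return(NA)                 # too few acceptances to trust
    (mean(post) - true_theta)^2                       # squared error of the posterior mean
  }), na.rm = TRUE))
rbind(eps = eps_grid, mse = mse)                      # pick the eps minimising the mse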
How to choose the tolerance parameter for ABC?
One approach to choosing the cutoff value $\epsilon$ for ABC rejection sampling is the following (similar to Aniko's answer). Simulate several test data sets from known parameter values which are vag
How to choose the tolerance parameter for ABC? One approach to choosing the cutoff value $\epsilon$ for ABC rejection sampling is the following (similar to Aniko's answer). Simulate several test data sets from known parameter values which are vaguely similar to your observed data (e.g. by performing ABC with a relatively large $\epsilon$). From the ABC output for a test data set, some criterion of performance compared to the true parameters can be calculated, such as mean squared error. Calculate this for all test data sets at many $\epsilon$ values, and choose $\epsilon$ to optimise the mean criterion (as this is a Monte Carlo estimate of its expectation). This requires many repetitions of the ABC algorithm, but can be done efficiently by using the same $N$ data simulations in every ABC algorithm (although this introduces some dependency between simulations). In general, there is not a lot of published work on the choice of $\epsilon$. I think the approach above has been used somewhere and I will edit if I remember the references. An alternative is in "Choosing the Summary Statistics and the Acceptance Rate in Approximate Bayesian Computation" by Michael Blum. Other methods that I'm aware of apply only to SMC or MCMC methods.
How to choose the tolerance parameter for ABC? One approach to choosing the cutoff value $\epsilon$ for ABC rejection sampling is the following (similar to Aniko's answer). Simulate several test data sets from known parameter values which are vag
48,892
How to choose the tolerance parameter for ABC?
Based on your edit, it appears that you are looking for guidance in selecting the tolerance parameter $\epsilon$ for ABC sampling. I don't know much about the topic, but $\epsilon$ should be small. A simple possibility is to select several different values and see whether the resulting posterior distributions look similar (based on new sets of samples). The largest value that still gives the same posterior can be used.
How to choose the tolerance parameter for ABC?
Based on your edit, it appears that you are looking for guidance in selecting the tolerance parameter $\epsilon$ for ABC sampling. I don't know much about the topic, but $\epsilon$ should be small. A
How to choose the tolerance parameter for ABC? Based on your edit, it appears that you are looking for guidance in selecting the tolerance parameter $\epsilon$ for ABC sampling. I don't know much about the topic, but $\epsilon$ should be small. A simple possibility is to select several different values and see whether the resulting posterior distributions look similar (based on new sets of samples). The largest value that still gives the same posterior can be used.
How to choose the tolerance parameter for ABC? Based on your edit, it appears that you are looking for guidance in selecting the tolerance parameter $\epsilon$ for ABC sampling. I don't know much about the topic, but $\epsilon$ should be small. A
48,893
How to draw a random sample from distribution of prediction?
Don't really agree with Macro. To me, it seems like what you're asking is how to perform a Bayesian analysis, in which you specify prior distributions over your $\beta_i$ and combine your observed data to obtain a posterior ("predictive") distribution, which you can sample from. This not only has benefits in terms of avoiding overfitting the regression, but also is a more natural way to handle uncertainty within your model (IMHO). In order to do this, I'd suggest reading up on the Gibbs Sampling and Metropolis-Hastings algorithms. The basic idea is that you formulate conditional distributions over each of your parameters in terms of the other parameters in your model, and take draws from each parameter in turn. You record every $k$th observation in the chain and the samples will be drawn from the posterior distribution (thanks to some beautiful mathematics). You can use this to estimate moments, quantiles, etc.
How to draw a random sample from distribution of prediction?
Don't really agree with Macro. To me, it seems like what you're asking is how to perform a Bayesian analysis, in which you specify prior distributions over your $\beta_i$ and combine your observed dat
How to draw a random sample from distribution of prediction? Don't really agree with Macro. To me, it seems like what you're asking is how to perform a Bayesian analysis, in which you specify prior distributions over your $\beta_i$ and combine your observed data to obtain a posterior ("predictive") distribution, which you can sample from. This not only has benefits in terms of avoiding overfitting the regression, but also is a more natural way to handle uncertainty within your model (IMHO). In order to do this, I'd suggest reading up on the Gibbs Sampling and Metropolis-Hastings algorithms. The basic idea is that you formulate conditional distributions over each of your parameters in terms of the other parameters in your model, and take draws from each parameter in turn. You record every $k$th observation in the chain and the samples will be drawn from the posterior distribution (thanks to some beautiful mathematics). You can use this to estimate moments, quantiles, etc.
How to draw a random sample from distribution of prediction? Don't really agree with Macro. To me, it seems like what you're asking is how to perform a Bayesian analysis, in which you specify prior distributions over your $\beta_i$ and combine your observed dat
48,894
How to draw a random sample from distribution of prediction?
I think you want to simulate from the predictor distribution values ${\rm age}_{i}$, ${\rm sex}_{i}$ and error terms $u_{i}$ and calculate $$ (y_{t})_{i} = \hat{\beta}_{0} + \hat{\beta}_{1}{\rm age}_{i} + \hat{\beta}_{2} {\rm sex}_{i} + \hat{\beta}_{3}(y_{t-1})_{i} + u_{i} $$ to generate a sample $y_{1}, ..., y_{t}$. The predictor values can either be resampled from your data set, or generated from something similar to the empirical distribution of your predictors. The errors should be generated from the parametric distribution assumed when you fit the model, with variance estimated by the model. How you choose $(y_{1})_{i}$ is largely arbitrary. This is essentially the same as the parametric bootstrap, except leaving off the final step where you then re-estimate the model to characterize the sampling distribution of $\hat{\beta}$, which leads me to say - I'm not completely sure why you want to do this process - if it's to see what kind of variation you can expect in the response values, I don't think this is useful for that, since I'm pretty sure the resulting variance will be about the same as the observed variance from your original data set.
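A literal sketch of that recipe, with placeholder numbers standing in for the fitted coefficients and residual SD:
set.seed(10)
b <- c(b0 = 1.0, b_age = 0.05, b_sex = 0.4, b_lag = 0.6)   # stand-ins for the estimated betas
sigma_hat <- 1.2                                           # residual SD from the fitted model
n_obs <- 200; T_len <- 50
age <- sample(18:80, n_obs, replace = TRUE)                # resampled/empirical predictor values
sex <- rbinom(n_obs, 1, 0.5)
y <- matrix(NA, n_obs, T_len)
y[, 1] <- rnorm(n_obs, 5, 2)                               # arbitrary starting values y_1
for (t in 2:T_len) {
  u <- rnorm(n_obs, 0, sigma_hat)                          # parametric errors
  y[, t] <- b["b0"] + b["b_age"] * age + b["b_sex"] * sex + b["b_lag"] * y[, t - 1] + u
}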
How to draw a random sample from distribution of prediction?
I think you want to simulate from the predictor distribution values ${\rm age}_{i}$, ${\rm sex}_{i}$ and error terms $u_{i}$ and calculate $$ (y_{t})_{i} = \hat{\beta}_{0} + \hat{\beta}_{1}{\rm age}_
How to draw a random sample from distribution of prediction? I think you want to simulate from the predictor distribution values ${\rm age}_{i}$, ${\rm sex}_{i}$ and error terms $u_{i}$ and calculate $$ (y_{t})_{i} = \hat{\beta}_{0} + \hat{\beta}_{1}{\rm age}_{i} + \hat{\beta}_{2} {\rm sex}_{i} + \hat{\beta}_{3}(y_{t-1})_{i} + u_{i} $$ to generate a sample $y_{1}, ..., y_{t}$. The predictor values can either be resampled from your data set, or generated from something similar to the empirical distribution of your predictors. The errors should be generated from the parametric distribution assumed when you fit the model, with variance estimated by the model. How you choose $(y_{1})_{i}$ is largely arbitrary. This is essentially the same as the parametric bootstrap, except leaving off the final step where you then re-estimate the model to characterize the sampling distribution of $\hat{\beta}$, which leads me to say - I'm not completely sure why you want to do this process - if it's to see what kind of variation you can expect in the response values, I don't think this is useful for that, since I'm pretty sure the resulting variance will be about the same as the observed variance from your original data set.
How to draw a random sample from distribution of prediction? I think you want to simulate from the predictor distribution values ${\rm age}_{i}$, ${\rm sex}_{i}$ and error terms $u_{i}$ and calculate $$ (y_{t})_{i} = \hat{\beta}_{0} + \hat{\beta}_{1}{\rm age}_
48,895
Choosing variables for Discriminant Analysis
You can get rid of some by looking for pairs that are very highly correlated and randomly deleting one of the pair. Then you can look at partial least squares, and pick variables that are important in the PLS solution. I did this with a similar problem and it worked pretty well (that is, the resulting discriminant function did pretty well)
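A rough sketch of both steps with invented objects (X is the 27-variable predictor matrix, g the two group labels coded 0/1); the PLS fit uses the pls package and treats the size of the fitted coefficients as a crude importance measure:
library(pls)
set.seed(5)
X <- matrix(rnorm(60 * 27), 60); colnames(X) <- paste0("v", 1:27)
X[, 2] <- X[, 1] + rnorm(60, 0, 0.05)                  # an artificially redundant pair
g <- rep(0:1, each = 30)
## step 1: drop one member of each very highly correlated pair
cc <- abs(cor(X)); cc[upper.tri(cc, diag = TRUE)] <- 0
drop <- colnames(X)[apply(cc > 0.95, 1, any)]          # crude: drops the later member of the pair
Xr <- X[, setdiff(colnames(X), drop)]
## step 2: PLS on the reduced set; large |coefficients| flag candidate variables
fit <- plsr(g ~ Xr, ncomp = 3, validation = "CV")
head(sort(abs(coef(fit, ncomp = 2)[, 1, 1]), decreasing = TRUE))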
Choosing variables for Discriminant Analysis
You can get rid of some by looking for pairs that are very highly correlated and randomly deleting one of the pair. Then you can look at partial least squares, and pick variables that are important in
Choosing variables for Discriminant Analysis You can get rid of some by looking for pairs that are very highly correlated and randomly deleting one of the pair. Then you can look at partial least squares, and pick variables that are important in the PLS solution. I did this with a similar problem and it worked pretty well (that is, the resulting discriminant function did pretty well)
Choosing variables for Discriminant Analysis You can get rid of some by looking for pairs that are very highly correlated and randomly deleting one of the pair. Then you can look at partial least squares, and pick variables that are important in
48,896
What is the best way of weighing cardinal scores and Likert scale scores to create a composite score?
Combining likert items with different numeric scalings Taking the sum or the mean of a set of items is standard practice in the behavioural and social sciences where each item is measured on the same response scale (e.g., a 1 to 5 likert scale). If you add or subtract a constant to the scaling of an item, this will not alter the scale from a correlational perspective. For example, if item 1 was 1,2,3,4,5 and item 2 was -2,-1,0,1,2, you could combine these two items to form a scale, and this version of the scale would be perfectly correlated with a version where you rescaled item 2 to have the same scaling as item 1. That said, there are good reasons to use a consistent numeric scaling for all items. In particular, if the composite is the mean of a set of items on a consistent response scale (e.g., 1 to 5), then the mean for a sample provides a sense of where the sample tends to lie on the underlying response scale (e.g., a mean of 4.5 on a 5 point job satisfaction scale suggests that the sample is highly satisfied). The sum and the mean will both be perfectly correlated. From an interpretation perspective, I prefer the mean; from the perspective of manually interpreting norm tables and avoiding decimal and rounding issues, the sum is sometimes preferable. All the above advice is predicated on the idea that the items should be combined in the first place. See my answer to your previous question for a discussion of the broader issue of validity, and how to assess whether it is appropriate to combine items. Combining count variables with likert items Count variables typically have no upper limit. Thus, if you were combining a count variable with a likert item, there is the risk that the count variable could have much greater variance and thus importance than the likert item, if, for example, the counts were sometimes large (e.g., 20 hospital visits in the last month). There are several options for how you could scale a count variable to enable you to combine it with likert items. In general, when mapping counts on to a psychological conception of frequency, I find that it is better to take the log of the counts (or log(counts + 1)) or some similar transformation that reduces the positive skew of the distribution. One simple way of scaling the count to be comparable to a 5 point likert scale would be to devise five categories (e.g., 1=never, 2=occasionally, ..., 5=very often; or some such) and ask subject matter experts to assign cut-offs for each category (e.g., 0 visits is 1, 1 visit is 2, 2 to 4 visits is 3, 5 to 6 visits is 4, and 7+ visits is 5). Given that you want a simple process, this might be appealing. You could apply factor analytic procedures that include both the count variable and the likert items to determine weights. If you do this, I'd use log(count + 1) or something similar instead of the raw count in your factor analysis. Potential issues with combining items on different scales From my observation, scale scores are typically derived from items that use the same response scales (e.g., agreement, frequency, importance, satisfaction, etc.). This can facilitate a clean interpretation of the scale scores. Mixing counts, agreement, satisfaction and items using other response formats can raise questions over whether the composite is meaningful or pure. Thus, if you are mixing response scales, there is an additional onus on you to justify why you are combining the variables that you are combining.
For example, what are you measuring when you combine a variable that measures frequency of going to hospital and satisfaction with the hospital? The variables sound like two very separate things.
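A small sketch of the two recoding options for the count item; the cut-offs mirror the example above, but all variable names and values are invented:
visits  <- c(0, 1, 3, 7, 2, 0, 5, 12, 1, 4)                 # count item (hospital visits)
likert1 <- c(4, 3, 5, 2, 4, 5, 3, 1, 4, 3)                  # 1-5 likert items
likert2 <- c(5, 3, 4, 2, 5, 4, 3, 2, 4, 3)
## option 1: compress the skew, then standardise before combining
visits_log <- as.numeric(scale(log(visits + 1)))
## option 2: expert-assigned cut-offs mapping counts onto a 1-5 scale
visits_5pt <- cut(visits, breaks = c(-Inf, 0, 1, 4, 6, Inf), labels = 1:5)
visits_5pt <- as.numeric(as.character(visits_5pt))
composite  <- rowMeans(cbind(visits_5pt, likert1, likert2)) # the mean keeps the 1-5 metric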
What is the best way of weighing cardinal scores and Likert scale scores to create a composite score
Combining likert items with different numeric scalings Taking the sum or the mean of a set of items is standard practice in the behavioural and social sciences where each item is measured on the same
What is the best way of weighing cardinal scores and Likert scale scores to create a composite score? Combining likert items with different numeric scalings Taking the sum or the mean of a set of items is standard practice in the behavioural and social sciences where each item is measured on the same response scale (e.g., a 1 to 5 likert scale). If you add or substract a constant to the scaling of an item, this will not alter the scale from a correlational perspective. For example, if item 1 was 1,2,3,4,5 and item 2 was -2,-1,0,1,2, you could combine these two items to form a scale, and this version of the scale would be perfectly correlated with a version where you rescaled item 2 to have the same scaling as item 1. That said, there are good reasons to use a consistent numeric scaling for all items. In particular, if the composite is the mean of a set of items on a consistent response scale (e.g., 1 to 5), then the mean for a sample provides a sense of where the sample tends to lie on the underlying response scale (e.g., a mean of 4.5 on a 5 point job satisfaction scale suggests that the sample is highly satisfied). The sum and the mean will both be perfectly correlated. From an interpretation perspective, I prefer the mean; from the perspective of manually interpreting norm tables and avoiding decimal and rounding issues, the sum is sometimes preferable. All the above advice is predicated on the idea that the items should be combined in the first place. See my answer to your previous question for a discussion of the broader issue of validity, and how to assess whether it is appropriate to combine items. Combining count variables with likert items Count variables typically have no upper limit. Thus, if you were combining a count variable with a likert item, there is the risk that the count variable could have much greater variance and thus importance than the likert item, if for example, the counts were sometimes large (e.g., 20 hospital visits in the last month). There are several options for how you could scale a count variable to enable you to combine it with likert items. In general, when mapping counts on to a psychological conception of frequency, I find that it is better to take the log of the counts (or log(counts + 1)) or some similar transformation that reduces the positive skew of the distribution. One simple way of scaling the count to be comparable to a 5 point likert scale would be to devise five categories (e.g., 1=never, 2=occasionally, ..., 5=very often; or some such) and ask subject matter expects to assign cut-offs for each category (e.g., 0 visits is 1, 1 visit is 2, 2 to 4 visits is 3, 5 to 6 visits is 4, and 7+ visits is 5). Given that you want a simple process, this might be appealing. You could apply factor analytic procedures that include both the count variable and the likert items to determine weights. If you do this, I'd use log(count + 1) or something similar instead of the raw count in your factor analysis. Potential issues with combining items on different scales From my observation, scale scores are typically derived from items that use the same response scales (e.g., agreement, frequency, importance, satisfaction, etc.). This can facilitate a clean interpretation of the scale scores. Mixing counts, agreement, satisfaction and items using other response formats, can raise questions over whether the composite is meaningful or pure. 
Thus, if you are mixing response scales, there is an additional onus on you to justify why you are combining the variables that you are combining. For example, what are you measuring when you combine a variable that measures frequency of going to hospital and satisfaction with the hospital? The variables sound like two very separate things.
What is the best way of weighing cardinal scores and Likert scale scores to create a composite score Combining likert items with different numeric scalings Taking the sum or the mean of a set of items is standard practice in the behavioural and social sciences where each item is measured on the same
48,897
Generating data with a pre-specified odds ratio
It appears you're asking how to generate bivariate binary data with a pre-specified odds ratio. Here I will describe how you can do this, as long as you can generate discrete random variables (as described here), for example. If you want to generate data with a particular odds ratio, you're talking about binary data that comes from a $2 \times 2$ table, so the normal distribution is not relevant. Let $X,Y$ be the two binary outcomes; the $2 \times 2$ table can be parameterized in terms of the cell probabilities $p_{ij} = P(Y = i, X = j)$. The parameters $p_{11}, p_{01}, p_{10}$ will suffice, since $p_{00} = 1 - p_{11} - p_{01} - p_{10}$. It can be shown that there is a 1-to-1 invertible mapping $\{ p_{11}, p_{01}, p_{10} \} \longrightarrow \{ M_{X}, M_{Y}, OR \}$ where $M_{X} = p_{11} + p_{01}, M_{Y} = p_{11} + p_{10}$ are the marginal probabilities and $OR$ is the odds ratio. That is, we can map back and forth at will between the $\{$cell probabilities $\}$ and $\{$ the marginal probabilities & Odds ratio$\}$. Using this fact, you can generate bivariate binary data with a pre-specified odds ratio. The rest of this answer will walk one through that process and supply some crude R code to carry it out. The '$\longrightarrow$' is simple enough; to generate data with a particular odds ratio you have to invert this mapping. For a fixed value of $M_{X}, M_{Y}$, we have \begin{equation} \log( OR ) = \log(p_{11}) + \log \left(1 - M_{Y} - M_{X} + p_{11}\right) - \log \left(M_{Y}-p_{11}\right) - \log \left(M_{X}-p_{11}\right). \end{equation} It is a fact that \begin{equation} {\rm max}\Big(0, M_X + M_Y-1\Big) \le p_{11}\le {\rm min}\Big(M_X, M_Y\Big). \end{equation} As $p_{11}$ moves through this range, $OR$ increases monotonically from 0 to $\infty$, thus there is a unique root of \begin{equation} \log(p_{11}) + \log \left(1 - M_{Y} - M_{X} + p_{11}\right) - \log \left(M_{Y}-p_{11}\right) - \log \left(M_{X}-p_{11}\right) - \log(OR) \end{equation} as a function of $p_{11}$. After solving for this root, $p_{10} = M_{Y} - p_{11}$ and $p_{01} = M_{X} - p_{11}$ and $p_{00} = 1 - p_{11} - p_{01} - p_{10}$, at which point we have the cell probabilities and the problem reduces to simply generating discrete random variables. The width of the confidence interval will be a function of the cell counts so more information is needed to precisely reproduce the results. Here is some crude R code to generate data as specified above. # return a 2x2 table of n outcomes with row marginal prob M1, column marginal prob # M2, and odds ratio OR f = function(n, M1, M2, OR) { # find p11 g = function(p) log(p) + log(1-M1-M2+p) - log(M1-p) - log(M2-p) - log(OR) br = c( max(0,M1+M2-1), min(M1,M2) ) p11 = uniroot(g, br)$root # fill in other cell probabilities p10 = M1 - p11 p01 = M2 - p11 p00 = 1-p11-p10-p01 # generate random numbers with those cell probabilities x = runif(n) n11 = sum(x < p11) n10 = sum(x < (p11+p10)) - n11 n01 = sum(x < (p11+p10+p01)) - n11 - n10 n00 = n - (n11+n10+n01) z = matrix(0,2,2) z[1,] = c(n11,n10) z[2,] = c(n01,n00) return(z) }
48,898
Analyzing treatment effect with possibly flawed control data
There is a growing econometric literature on the misclassification of treatment status. A standard difference-in-differences approach would be a natural starting point here - see e.g. http://www.nber.org/WNE/lect_10_diffindiffs.pdf (p. 17 mentions the Poisson case). The problem that misclassification causes for a general conditional mean function is described here: https://www2.bc.edu/~lewbel/mistreanote2.pdf. If that result applies to your setup, then you can be fairly confident in a finding of a significant effect, since the bias from misclassification is towards zero (it attenuates the estimated effect).
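As a rough illustration of what a difference-in-differences Poisson regression looks like in R, here is a sketch on simulated data; the variable names and the data-generating values are hypothetical placeholders, not taken from your study.

    # sketch: difference-in-differences Poisson regression on simulated data
    set.seed(42)
    n  <- 500
    df <- data.frame(treated = rbinom(n, 1, 0.5),   # 1 = treatment group
                     post    = rbinom(n, 1, 0.5))   # 1 = post-treatment period
    # simulate counts with a true multiplicative treatment effect of 1.5
    df$count <- rpois(n, lambda = exp(1 + 0.2 * df$treated + 0.1 * df$post +
                                      log(1.5) * df$treated * df$post))
    fit <- glm(count ~ treated * post, family = poisson(link = "log"), data = df)
    summary(fit)
    exp(coef(fit)["treated:post"])   # estimated rate ratio; should be near 1.5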
48,899
Analyzing treatment effect with possibly flawed control data
I'd suggest looking into multiple imputation or other missing-data approaches for dealing with your control data. You can build a vast array of different possible combinations of whether a given control was on or off treatment and see how they affect your results. When it comes down to it, yes, you can combine the two data sets, and something like multiple imputation will allow you to handle the missing-data problem, though of course you'll likely end up with wider confidence intervals, and the approach is somewhat less elegant and thus harder to explain.
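If you go the multiple-imputation route, a bare-bones sketch using the mice package might look like the following; the data frame and variable names are hypothetical stand-ins, and the Poisson outcome model is only one possibility.

    # sketch: multiply impute an uncertain treatment indicator with mice
    library(mice)
    # df is assumed to contain: outcome (a count) and treat (a two-level factor,
    # NA wherever a control's true on/off-treatment status is uncertain)
    imp  <- mice(df, m = 20, seed = 123)  # mice imputes binary factors by logistic regression by default
    fits <- with(imp, glm(outcome ~ treat, family = poisson))
    summary(pool(fits))                   # pooled estimates across the 20 imputations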
48,900
Statistically comparing classifiers using only confusion matrix (or average accuracies)
You want to test whether $p_A - p_B > 0$, where $p_A, p_B$ are the accuracies of the two classifiers. To test this, you need an estimate of $p_A - p_B$ and of ${\rm Var}(p_A - p_B) = {\rm Var}(p_A) + {\rm Var}(p_B) - 2\,{\rm Cov}(p_A, p_B)$. Assuming both classifiers are evaluated on the same test samples, their accuracy estimates are correlated; without knowing which individual samples each classifier gets right or wrong, you cannot estimate the covariance term, and thus you can't statistically compare the classifiers from the confusion matrices (or average accuracies) alone.
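To make this concrete, the paired information you would need is the per-sample correct/incorrect indicator for each classifier; with that in hand, one standard option is McNemar's test on the 2x2 agreement table. The vectors below are simulated placeholders purely to show the mechanics.

    # illustration with simulated per-sample results (placeholder data):
    # correctA[i], correctB[i] record whether classifier A / B got sample i right
    set.seed(7)
    n <- 200
    correctA <- rbinom(n, 1, 0.85)
    correctB <- rbinom(n, 1, 0.80)
    # paired 2x2 table of agreement/disagreement between the two classifiers;
    # this is exactly what the two separate confusion matrices do not give you
    tab <- table(A = correctA, B = correctB)
    mcnemar.test(tab)   # paired test of equal marginal accuracies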